boto-2.20.1/.gitignore

*.pyc
.*.swp
*.log
*~
boto.egg-info
build/
dist/
MANIFEST
.DS_Store
.idea
.tox
.coverage
*flymake.py
venv
venv-2.5
env-2.5

boto-2.20.1/.travis.yml

language: python
python:
  - "2.6"
  - "2.7"
before_install:
  - sudo apt-get install swig
install: pip install --use-mirrors -r requirements.txt
script: python tests/test.py unit

boto-2.20.1/CONTRIBUTING

============
Contributing
============

For more information, please see the official contribution docs at
http://docs.pythonboto.org/en/latest/contributing.html.


Contributing Code
=================

* A good patch:

  * is clear.
  * works across all supported versions of Python.
  * follows the existing style of the code base (PEP-8).
  * has comments included as needed.

* A test case that demonstrates the previous flaw that now passes
  with the included patch.
* If it adds/changes a public API, it must also include documentation
  for those changes.
* Must be appropriately licensed (New BSD).


Reporting An Issue/Feature
==========================

* Check to see if there's an existing issue/pull request for the
  bug/feature. All issues are at https://github.com/boto/boto/issues
  and pull reqs are at https://github.com/boto/boto/pulls.
* If there isn't an existing issue there, please file an issue. The
  ideal report includes:

  * A description of the problem/suggestion.
  * How to recreate the bug.
  * If relevant, including the versions of your:

    * Python interpreter
    * boto
    * Optionally of the other dependencies involved

  * If possible, create a pull request with a (failing) test case
    demonstrating what's wrong. This makes the process for fixing
    bugs quicker & gets issues resolved sooner.

boto-2.20.1/LICENSE

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish, dis-
tribute, sublicense, and/or sell copies of the Software, and to permit
persons to whom the Software is furnished to do so, subject to the fol-
lowing conditions:

The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
boto-2.20.1/MANIFEST.in

include boto/cacerts/cacerts.txt
include README.rst
include boto/file/README
include .gitignore
include pylintrc
include boto/pyami/copybot.cfg
include boto/services/sonofmmm.cfg
include boto/mturk/test/*.doctest
include boto/mturk/test/.gitignore
recursive-include tests *.py *.txt
recursive-include docs *

boto-2.20.1/README.rst

####
boto
####
boto 2.20.1

Released: 13-December-2013

.. image:: https://travis-ci.org/boto/boto.png?branch=develop
        :target: https://travis-ci.org/boto/boto

.. image:: https://pypip.in/d/boto/badge.png
        :target: https://crate.io/packages/boto/

************
Introduction
************

Boto is a Python package that provides interfaces to Amazon Web Services.
At the moment, boto supports:

* Compute

  * Amazon Elastic Compute Cloud (EC2)
  * Amazon Elastic Map Reduce (EMR)
  * AutoScaling
  * Amazon Kinesis

* Content Delivery

  * Amazon CloudFront

* Database

  * Amazon Relational Data Service (RDS)
  * Amazon DynamoDB
  * Amazon SimpleDB
  * Amazon ElastiCache
  * Amazon Redshift

* Deployment and Management

  * AWS Elastic Beanstalk
  * AWS CloudFormation
  * AWS Data Pipeline
  * AWS Opsworks
  * AWS CloudTrail

* Identity & Access

  * AWS Identity and Access Management (IAM)

* Application Services

  * Amazon CloudSearch
  * Amazon Elastic Transcoder
  * Amazon Simple Workflow Service (SWF)
  * Amazon Simple Queue Service (SQS)
  * Amazon Simple Notification Service (SNS)
  * Amazon Simple Email Service (SES)

* Monitoring

  * Amazon CloudWatch

* Networking

  * Amazon Route53
  * Amazon Virtual Private Cloud (VPC)
  * Elastic Load Balancing (ELB)
  * AWS Direct Connect

* Payments and Billing

  * Amazon Flexible Payment Service (FPS)

* Storage

  * Amazon Simple Storage Service (S3)
  * Amazon Glacier
  * Amazon Elastic Block Store (EBS)
  * Google Cloud Storage

* Workforce

  * Amazon Mechanical Turk

* Other

  * Marketplace Web Services
  * AWS Support

The goal of boto is to support the full breadth and depth of Amazon
Web Services. In addition, boto provides support for other public
services such as Google Storage in addition to private cloud systems
like Eucalyptus, OpenStack and Open Nebula.

Boto is developed mainly using Python 2.6.6 and Python 2.7.3 on Mac OSX
and Ubuntu Maverick. It is known to work on other Linux distributions
and on Windows. Most of Boto requires no additional libraries or packages
other than those that are distributed with Python. Efforts are made to
keep boto compatible with Python 2.5.x but no guarantees are made.

************
Installation
************

Install via `pip`_:

::

    $ pip install boto

Install from source:

::

    $ git clone git://github.com/boto/boto.git
    $ cd boto
    $ python setup.py install

**********
ChangeLogs
**********

To see what has changed over time in boto, you can check out the
release notes at `http://docs.pythonboto.org/en/latest/#release-notes`

***************************
Finding Out More About Boto
***************************

The main source code repository for boto can be found on `github.com`_.
The boto project uses the `gitflow`_ model for branching.

`Online documentation`_ is also available. The online documentation
includes full API documentation as well as Getting Started Guides for
many of the boto modules.

Boto releases can be found on the `Python Cheese Shop`_.

Join our IRC channel `#boto` on FreeNode.
Webchat IRC channel: http://webchat.freenode.net/?channels=boto

Join the `boto-users Google Group`_.
*************************
Getting Started with Boto
*************************

Your credentials can be passed into the methods that create
connections. Alternatively, boto will check for the existence of the
following environment variables to ascertain your credentials:

**AWS_ACCESS_KEY_ID** - Your AWS Access Key ID

**AWS_SECRET_ACCESS_KEY** - Your AWS Secret Access Key

Credentials and other boto-related settings can also be stored in a
boto config file. See `this`_ for details.

.. _pip: http://www.pip-installer.org/
.. _release notes: https://github.com/boto/boto/wiki
.. _github.com: http://github.com/boto/boto
.. _Online documentation: http://docs.pythonboto.org
.. _Python Cheese Shop: http://pypi.python.org/pypi/boto
.. _this: http://code.google.com/p/boto/wiki/BotoConfig
.. _gitflow: http://nvie.com/posts/a-successful-git-branching-model/
.. _neo: https://github.com/boto/boto/tree/neo
.. _boto-users Google Group: https://groups.google.com/forum/?fromgroups#!forum/boto-users
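As an illustration of the Getting Started section above, a minimal
sketch of both approaches (the credential values and bucket name are
placeholders, not real ones)::

    import boto

    # Pass credentials explicitly to a connection factory...
    s3 = boto.connect_s3(
        aws_access_key_id='<access-key-id>',
        aws_secret_access_key='<secret-access-key>')

    # ...or let boto pick them up from AWS_ACCESS_KEY_ID /
    # AWS_SECRET_ACCESS_KEY or the boto config file.
    s3 = boto.connect_s3()
    print s3.get_bucket('my-example-bucket').name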
boto-2.20.1/bin/asadmin

#!/usr/bin/env python
# Copyright (c) 2011 Joel Barciauskas http://joel.barciausk.as/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Auto Scaling Groups Tool
#
VERSION = "0.1"
usage = """%prog [options] [command]
Commands:
    list|ls                        List all Auto Scaling Groups
    list-lc|ls-lc                  List all Launch Configurations
    delete <name>                  Delete ASG <name>
    delete-lc <name>               Delete Launch Configuration <name>
    get <name>                     Get details of ASG <name>
    create <name>                  Create an ASG
    create-lc <name>               Create a Launch Configuration
    update <name> <prop> <value>   Update a property of an ASG
    update-image <name> <lc-name>  Update image ID for ASG by creating a
                                   new LC
    migrate-instances <name>       Shut down current instances one by one
                                   and wait for ASG to start up a new
                                   instance with the current AMI (useful
                                   in conjunction with update-image)

Examples:

    1) Create launch configuration
        bin/asadmin create-lc my-lc-1 -i ami-1234abcd -t c1.xlarge -k my-key -s web-group -m

    2) Create auto scaling group in us-east-1a and us-east-1c with a load
       balancer and min size of 2 and max size of 6
        bin/asadmin create my-asg -z us-east-1a -z us-east-1c -l my-lc-1 -b my-lb -H ELB -p 180 -x 2 -X 6
"""


def get_group(autoscale, name):
    g = autoscale.get_all_groups(names=[name])
    if len(g) < 1:
        print "No auto scaling groups by the name of %s found" % name
        sys.exit(1)
    return g[0]


def get_lc(autoscale, name):
    l = autoscale.get_all_launch_configurations(names=[name])
    if len(l) < 1:
        print "No launch configurations by the name of %s found" % name
        sys.exit(1)
    return l[0]


def list(autoscale):
    """List all ASGs"""
    print "%-20s %s" % ("Name", "LC Name")
    print "-" * 80
    groups = autoscale.get_all_groups()
    for g in groups:
        print "%-20s %s" % (g.name, g.launch_config_name)


def list_lc(autoscale):
    """List all LCs"""
    print "%-30s %-20s %s" % ("Name", "Image ID", "Instance Type")
    print "-" * 80
    for l in autoscale.get_all_launch_configurations():
        print "%-30s %-20s %s" % (l.name, l.image_id, l.instance_type)


def get(autoscale, name):
    """Get details about ASG <name>"""
    g = get_group(autoscale, name)
    print "=" * 80
    print "%-30s %s" % ('Name:', g.name)
    print "%-30s %s" % ('Launch configuration:', g.launch_config_name)
    print "%-30s %s" % ('Minimum size:', g.min_size)
    print "%-30s %s" % ('Maximum size:', g.max_size)
    print "%-30s %s" % ('Desired capacity:', g.desired_capacity)
    print "%-30s %s" % ('Load balancers:', ','.join(g.load_balancers))

    print

    print "Instances"
    print "---------"
    print "%-20s %-20s %-20s %s" % ("ID", "Status", "Health", "AZ")
    for i in g.instances:
        print "%-20s %-20s %-20s %s" % (i.instance_id, i.lifecycle_state,
                                        i.health_status, i.availability_zone)

    print


def create(autoscale, name, zones, lc_name, load_balancers, hc_type,
           hc_period, min_size, max_size, cooldown, capacity):
    """Create an ASG named <name>"""
    g = AutoScalingGroup(name=name, launch_config=lc_name,
                         availability_zones=zones,
                         load_balancers=load_balancers,
                         default_cooldown=cooldown,
                         health_check_type=hc_type,
                         health_check_period=hc_period,
                         desired_capacity=capacity,
                         min_size=min_size, max_size=max_size)
    g = autoscale.create_auto_scaling_group(g)
    return list(autoscale)


def create_lc(autoscale, name, image_id, instance_type, key_name,
              security_groups, instance_monitoring):
    l = LaunchConfiguration(name=name, image_id=image_id,
                            instance_type=instance_type, key_name=key_name,
                            security_groups=security_groups,
                            instance_monitoring=instance_monitoring)
    l = autoscale.create_launch_configuration(l)
    return list_lc(autoscale)


def update(autoscale, name, prop, value):
    g = get_group(autoscale, name)
    setattr(g, prop, value)
    g.update()
    return get(autoscale, name)


def delete(autoscale, name, force_delete=False):
    """Delete this ASG"""
    g = get_group(autoscale, name)
    autoscale.delete_auto_scaling_group(g.name, force_delete)
    print "Auto scaling group %s deleted" % name
    return list(autoscale)


def delete_lc(autoscale, name):
    """Delete this LC"""
    l = get_lc(autoscale, name)
    autoscale.delete_launch_configuration(name)
    print "Launch configuration %s deleted" % name
    return list_lc(autoscale)


def update_image(autoscale, name, lc_name, image_id, is_migrate_instances=False):
    """
    Get the current launch config,
    Update its name and image id
    Re-create it as a new launch config
    Update the ASG with the new LC
    Delete the old LC
    """
    g = get_group(autoscale, name)
    l = get_lc(autoscale, g.launch_config_name)

    old_lc_name = l.name
    l.name = lc_name
    l.image_id = image_id
    autoscale.create_launch_configuration(l)
    g.launch_config_name = l.name
    g.update()

    if(is_migrate_instances):
        migrate_instances(autoscale, name)
    else:
        return get(autoscale, name)


def migrate_instances(autoscale, name):
    """
    Shut down instances of the old image type one by one
    and let the ASG start up instances with the new image
    """
    g = get_group(autoscale, name)
    old_instances = g.instances
    ec2 = boto.connect_ec2()
    for old_instance in old_instances:
        print "Terminating instance " + old_instance.instance_id
        ec2.terminate_instances([old_instance.instance_id])
        while True:
            g = get_group(autoscale, name)
            new_instances = g.instances
            # Initialize the flags before scanning so they are defined
            # even when the group briefly reports no instances.
            hasOldInstance = False
            instancesReady = True
            for new_instance in new_instances:
                if(old_instance.instance_id == new_instance.instance_id):
                    hasOldInstance = True
                    print "Waiting for old instance to shut down..."
                    break
                elif(new_instance.lifecycle_state != 'InService'):
                    instancesReady = False
                    print "Waiting for instances to be ready...."
                    break
            if(not hasOldInstance and instancesReady):
                break
            else:
                time.sleep(20)
    return get(autoscale, name)


if __name__ == "__main__":
    try:
        import readline
    except ImportError:
        pass
    import boto
    import sys
    import time
    from optparse import OptionParser
    from boto.mashups.iobject import IObject
    from boto.ec2.autoscale import AutoScalingGroup
    from boto.ec2.autoscale import LaunchConfiguration

    parser = OptionParser(version=VERSION, usage=usage)

    """ Create launch config options """
    parser.add_option("-i", "--image-id",
                      help="Image (AMI) ID", action="store",
                      type="string", default=None, dest="image_id")
    parser.add_option("-t", "--instance-type",
                      help="EC2 Instance Type (e.g., m1.large, c1.xlarge), default is m1.large",
                      action="store", type="string", default="m1.large",
                      dest="instance_type")
    parser.add_option("-k", "--key-name",
                      help="EC2 Key Name", action="store",
                      type="string", dest="key_name")
    parser.add_option("-s", "--security-group",
                      help="EC2 Security Group", action="append",
                      default=[], dest="security_groups")
    parser.add_option("-m", "--monitoring",
                      help="Enable instance monitoring", action="store_true",
                      default=False, dest="instance_monitoring")

    """ Create auto scaling group options """
    parser.add_option("-z", "--zone",
                      help="Add availability zone", action="append",
                      default=[], dest="zones")
    parser.add_option("-l", "--lc-name",
                      help="Launch configuration name", action="store",
                      default=None, type="string", dest="lc_name")
    parser.add_option("-b", "--load-balancer",
                      help="Load balancer name", action="append",
                      default=[], dest="load_balancers")
    parser.add_option("-H", "--health-check-type",
                      help="Health check type (EC2 or ELB)", action="store",
                      default="EC2", type="string", dest="hc_type")
    parser.add_option("-p", "--health-check-period",
                      help="Health check period in seconds (default 300s)",
                      action="store", default=300, type="int",
                      dest="hc_period")
    parser.add_option("-X", "--max-size",
                      help="Max size of ASG (default 10)", action="store",
                      default=10, type="int", dest="max_size")
    parser.add_option("-x", "--min-size",
                      help="Min size of ASG (default 2)", action="store",
                      default=2, type="int", dest="min_size")
    parser.add_option("-c", "--cooldown",
                      help="Cooldown time after a scaling activity in seconds (default 300s)",
                      action="store", default=300, type="int",
                      dest="cooldown")
    parser.add_option("-C", "--desired-capacity",
                      help="Desired capacity of the ASG", action="store",
                      default=None, type="int", dest="capacity")
    parser.add_option("-f", "--force",
                      help="Force delete ASG", action="store_true",
                      default=False, dest="force")
    parser.add_option("-y", "--migrate-instances",
                      help="Automatically migrate instances to new image when running update-image",
                      action="store_true", default=False,
                      dest="migrate_instances")

    (options, args) = parser.parse_args()

    if len(args) < 1:
        parser.print_help()
        sys.exit(1)

    autoscale = boto.connect_autoscale()

    print "%s" % (autoscale.region.endpoint)

    command = args[0].lower()
    if command in ("ls", "list"):
        list(autoscale)
    elif command in ("ls-lc", "list-lc"):
        list_lc(autoscale)
    elif command == "get":
        get(autoscale, args[1])
    elif command == "create":
        create(autoscale, args[1], options.zones, options.lc_name,
               options.load_balancers, options.hc_type, options.hc_period,
               options.min_size, options.max_size, options.cooldown,
               options.capacity)
    elif command == "create-lc":
        create_lc(autoscale, args[1], options.image_id,
                  options.instance_type, options.key_name,
                  options.security_groups, options.instance_monitoring)
    elif command == "update":
        update(autoscale, args[1], args[2], args[3])
    elif command == "delete":
        delete(autoscale, args[1], options.force)
    elif command == "delete-lc":
        delete_lc(autoscale, args[1])
    elif command == "update-image":
        update_image(autoscale, args[1], args[2], options.image_id,
                     options.migrate_instances)
    elif command == "migrate-instances":
        migrate_instances(autoscale, args[1])
boto-2.20.1/bin/bundle_image

#!/usr/bin/env python
from boto.manage.server import Server

if __name__ == "__main__":
    from optparse import OptionParser
    parser = OptionParser(version="%prog 1.0",
                          usage="Usage: %prog [options] instance-id [instance-id-2]")

    # Commands
    parser.add_option("-b", "--bucket", help="Destination Bucket",
                      dest="bucket", default=None)
    parser.add_option("-p", "--prefix", help="AMI Prefix",
                      dest="prefix", default=None)
    parser.add_option("-k", "--key", help="Private Key File",
                      dest="key_file", default=None)
    parser.add_option("-c", "--cert", help="Public Certificate File",
                      dest="cert_file", default=None)
    parser.add_option("-s", "--size", help="AMI Size",
                      dest="size", default=None)
    parser.add_option("-i", "--ssh-key", help="SSH Keyfile",
                      dest="ssh_key", default=None)
    parser.add_option("-u", "--user-name", help="SSH Username",
                      dest="uname", default="root")
    parser.add_option("-n", "--name", help="Name of Image", dest="name")
    (options, args) = parser.parse_args()

    for instance_id in args:
        try:
            s = Server.find(instance_id=instance_id).next()
            print "Found old server object"
        except StopIteration:
            print "New Server Object Created"
            s = Server.create_from_instance_id(instance_id, options.name)
        assert(s.hostname is not None)
        b = s.get_bundler(uname=options.uname)
        b.bundle(bucket=options.bucket, prefix=options.prefix,
                 key_file=options.key_file, cert_file=options.cert_file,
                 size=int(options.size), ssh_key=options.ssh_key)

boto-2.20.1/bin/cfadmin

#!/usr/bin/env python
# Author: Chris Moyer
#
# cfadmin is similar to sdbadmin for CloudFront, it's a simple
# console utility to perform the most frequent tasks with CloudFront
#

def _print_distributions(dists):
    """Internal function to print out all the distributions provided"""
    print "%-12s %-50s %s" % ("Status", "Domain Name", "Origin")
    print "-" * 80
    for d in dists:
        print "%-12s %-50s %-30s" % (d.status, d.domain_name, d.origin)
        for cname in d.cnames:
            print " " * 12, "CNAME => %s" % cname
    print ""

def help(cf, fnc=None):
    """Print help message, optionally about a specific function"""
    import inspect
    self = sys.modules['__main__']
    if fnc:
        try:
            cmd = getattr(self, fnc)
        except:
            cmd = None
        if not inspect.isfunction(cmd):
            print "No function named: %s found" % fnc
            sys.exit(2)
        (args, varargs, varkw, defaults) = inspect.getargspec(cmd)
        print cmd.__doc__
        print "Usage: %s %s" % (fnc, " ".join(["[%s]" % a for a in args[1:]]))
    else:
        print "Usage: cfadmin [command]"
        for cname in dir(self):
            if not cname.startswith("_"):
                cmd = getattr(self, cname)
                if inspect.isfunction(cmd):
                    doc = cmd.__doc__
                    print "\t%s - %s" % (cname, doc)
    sys.exit(1)

def ls(cf):
    """List all distributions and streaming distributions"""
    print "Standard Distributions"
    _print_distributions(cf.get_all_distributions())
    print "Streaming Distributions"
    _print_distributions(cf.get_all_streaming_distributions())

def invalidate(cf, origin_or_id, *paths):
    """Create a cloudfront invalidation request"""
    # Allow paths to be passed using stdin
    if not paths:
        paths = []
        for path in sys.stdin.readlines():
            path = path.strip()
            if path:
                paths.append(path)
    dist = None
    for d in cf.get_all_distributions():
        if d.id == origin_or_id or d.origin.dns_name == origin_or_id:
            dist = d
            break
    if not dist:
        print "Distribution not found: %s" % origin_or_id
        sys.exit(1)
    cf.create_invalidation_request(dist.id, paths)

def listinvalidations(cf, origin_or_id):
    """List invalidation requests for a given origin"""
    dist = None
    for d in cf.get_all_distributions():
        if d.id == origin_or_id or d.origin.dns_name == origin_or_id:
            dist = d
            break
    if not dist:
        print "Distribution not found: %s" % origin_or_id
        sys.exit(1)
    results = cf.get_invalidation_requests(dist.id)
    if results:
        for result in results:
            if result.status == "InProgress":
                result = result.get_invalidation_request()
                print result.id, result.status, result.paths
            else:
                print result.id, result.status

if __name__ == "__main__":
    import boto
    import sys
    cf = boto.connect_cloudfront()
    self = sys.modules['__main__']
    if len(sys.argv) >= 2:
        try:
            cmd = getattr(self, sys.argv[1])
        except:
            cmd = None
        args = sys.argv[2:]
    else:
        cmd = help
        args = []
    if not cmd:
        cmd = help
    try:
        cmd(cf, *args)
    except TypeError, e:
        print e
        help(cf, cmd.__name__)
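The invalidation flow above, reduced to direct boto calls (a sketch;
the distribution ID and paths are made-up examples -- `cfadmin` first
resolves an origin or distribution ID to a distribution object):

    import boto

    cf = boto.connect_cloudfront()
    cf.create_invalidation_request('EDFDVBD6EXAMPLE',
                                   ['/index.html', '/images/logo.png'])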
boto-2.20.1/bin/cq

#!/usr/bin/env python
# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import getopt, sys
import boto.sqs

from boto.sqs.connection import SQSConnection
from boto.exception import SQSError


def usage():
    print 'cq [-c] [-q queue_name] [-o output_file] [-t timeout] [-r region]'


def main():
    try:
        opts, args = getopt.getopt(sys.argv[1:], 'hcq:o:t:r:',
                                   ['help', 'clear', 'queue=',
                                    'output=', 'timeout=', 'region='])
    except:
        usage()
        sys.exit(2)
    queue_name = ''
    output_file = ''
    timeout = 30
    region = ''
    clear = False
    for o, a in opts:
        if o in ('-h', '--help'):
            usage()
            sys.exit()
        if o in ('-q', '--queue'):
            queue_name = a
        if o in ('-o', '--output'):
            output_file = a
        if o in ('-c', '--clear'):
            clear = True
        if o in ('-t', '--timeout'):
            timeout = int(a)
        if o in ('-r', '--region'):
            region = a
    if region:
        c = boto.sqs.connect_to_region(region)
    else:
        c = SQSConnection()
    if queue_name:
        try:
            rs = [c.create_queue(queue_name)]
        except SQSError, e:
            print 'An Error Occurred:'
            print '%s: %s' % (e.status, e.reason)
            print e.body
            sys.exit()
    else:
        try:
            rs = c.get_all_queues()
        except SQSError, e:
            print 'An Error Occurred:'
            print '%s: %s' % (e.status, e.reason)
            print e.body
            sys.exit()
    for q in rs:
        if clear:
            n = q.clear()
            print 'clearing %d messages from %s' % (n, q.id)
        elif output_file:
            q.dump(output_file)
        else:
            print q.id, q.count(vtimeout=timeout)

if __name__ == "__main__":
    main()
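Checking a queue's approximate depth the way `cq` does, as a standalone
sketch (the queue name and region are examples):

    import boto.sqs

    c = boto.sqs.connect_to_region('us-east-1')
    q = c.create_queue('my-example-queue')  # returns the queue if it exists
    print q.id, q.count(vtimeout=30)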
boto-2.20.1/bin/cwutil

#!/usr/bin/env python
# Author: Chris Moyer <cmoyer@newstex.com>
# Description: CloudWatch Utility
# For listing stats, creating alarms, and managing
# other CloudWatch aspects

import boto
cw = boto.connect_cloudwatch()

from datetime import datetime, timedelta


def _parse_time(time_string):
    """Internal function to parse a time string"""
    # The body here is an assumption: accept ISO-8601 style timestamps,
    # e.g. 2013-12-13T00:00:00.
    return datetime.strptime(time_string, '%Y-%m-%dT%H:%M:%S')


def _parse_dict(d_string):
    result = {}
    if d_string:
        for d in d_string.split(","):
            d = d.split(":")
            result[d[0]] = d[1]
    return result


def ls(namespace=None):
    """
    List metrics, optionally filtering by a specific namespace
        namespace: Optional Namespace to filter on
    """
    print "%-10s %-50s %s" % ("Namespace", "Metric Name", "Dimensions")
    print "-" * 80
    for m in cw.list_metrics():
        if namespace is None or namespace.upper() in m.namespace:
            print "%-10s %-50s %s" % (m.namespace, m.name, m.dimensions)


def stats(namespace, metric_name, dimensions=None, statistics="Average",
          start_time=None, end_time=None, period=60, unit=None):
    """
    Lists the statistics for a specific metric
        namespace: The namespace to use, usually "AWS/EC2", "AWS/SQS", etc.
        metric_name: The name of the metric to track, pulled from `ls`
        dimensions: The dimensions to use, formatted as Name:Value
                    (such as QueueName:myQueue)
        statistics: The statistics to measure, defaults to "Average"
                    'Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount'
        start_time: Start time, default to now - 1 day
        end_time: End time, default to now
        period: Period/interval for counts, default to 60 minutes
        unit: Unit to track, default depends on what metric is being tracked
    """
    # Parse the dimensions
    dimensions = _parse_dict(dimensions)

    # Parse the times
    if end_time:
        end_time = _parse_time(end_time)
    else:
        end_time = datetime.utcnow()
    if start_time:
        start_time = _parse_time(start_time)
    else:
        start_time = datetime.utcnow() - timedelta(days=1)

    print "%-30s %s" % ('Timestamp', statistics)
    print "-" * 50
    data = {}
    for m in cw.get_metric_statistics(int(period), start_time, end_time,
                                      metric_name, namespace, statistics,
                                      dimensions, unit):
        data[m['Timestamp']] = m[statistics]
    keys = data.keys()
    keys.sort()
    for k in keys:
        print "%-30s %s" % (k, data[k])


def put(namespace, metric_name, dimensions=None, value=None, unit=None,
        statistics=None, timestamp=None):
    """
    Publish custom metrics
        namespace: The namespace to use; values starting with "AWS/" are
                   reserved
        metric_name: The name of the metric to update
        dimensions: The dimensions to use, formatted as Name:Value
                    (such as QueueName:myQueue)
        value: The value to store, mutually exclusive with `statistics`
        statistics: The statistics to store, mutually exclusive with `value`
                    (must specify all of "Minimum", "Maximum", "Sum",
                    "SampleCount")
        timestamp: The timestamp of this measurement, default is current
                   server time
        unit: Unit to track, default depends on what metric is being tracked
    """

    def simplify(lst):
        return lst[0] if len(lst) == 1 else lst

    print cw.put_metric_data(namespace, simplify(metric_name.split(';')),
        dimensions=simplify(map(_parse_dict, dimensions.split(';'))) if dimensions else None,
        value=simplify(value.split(';')) if value else None,
        statistics=simplify(map(_parse_dict, statistics.split(';'))) if statistics else None,
        timestamp=simplify(timestamp.split(';')) if timestamp else None,
        unit=simplify(unit.split(';')) if unit else None)


def help(fnc=None):
    """
    Print help message, optionally about a specific function
    """
    import inspect
    self = sys.modules['__main__']
    if fnc:
        try:
            cmd = getattr(self, fnc)
        except:
            cmd = None
        if not inspect.isfunction(cmd):
            print "No function named: %s found" % fnc
            sys.exit(2)
        (args, varargs, varkw, defaults) = inspect.getargspec(cmd)
        print cmd.__doc__
        print "Usage: %s %s" % (fnc, " ".join(["[%s]" % a for a in args]))
    else:
        print "Usage: cwutil [command]"
        for cname in dir(self):
            if not cname.startswith("_") and not cname == "cmd":
                cmd = getattr(self, cname)
                if inspect.isfunction(cmd):
                    doc = cmd.__doc__
                    print "\t%s - %s" % (cname, doc)
    sys.exit(1)


if __name__ == "__main__":
    import sys
    self = sys.modules['__main__']
    if len(sys.argv) >= 2:
        try:
            cmd = getattr(self, sys.argv[1])
        except:
            cmd = None
        args = sys.argv[2:]
    else:
        cmd = help
        args = []
    if not cmd:
        cmd = help
    try:
        cmd(*args)
    except TypeError, e:
        print e
        help(cmd.__name__)
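Publishing and reading back a custom metric, mirroring `cwutil put` and
`cwutil stats` (a sketch; the namespace and metric name are examples):

    from datetime import datetime, timedelta

    import boto

    cw = boto.connect_cloudwatch()
    cw.put_metric_data('MyApp', 'RequestLatency', value=42.0,
                       unit='Milliseconds')
    points = cw.get_metric_statistics(
        60, datetime.utcnow() - timedelta(days=1), datetime.utcnow(),
        'RequestLatency', 'MyApp', 'Average')
    for p in points:
        print p['Timestamp'], p['Average']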
boto-2.20.1/bin/dynamodb_dump

#!/usr/bin/env python
import argparse
import errno
import os

import boto
from boto.compat import json


DESCRIPTION = """Dump the contents of one or more DynamoDB tables to the local filesystem.

Each table is dumped into two files:
  - {table_name}.metadata stores the table's name, schema and provisioned
    throughput.
  - {table_name}.data stores the table's actual contents.

Both files are created in the current directory. To write them somewhere
else, use the --out-dir parameter (the target directory will be created if
needed).
"""


def dump_table(table, out_dir):
    metadata_file = os.path.join(out_dir, "%s.metadata" % table.name)
    data_file = os.path.join(out_dir, "%s.data" % table.name)

    with open(metadata_file, "w") as metadata_fd:
        json.dump(
            {
                "name": table.name,
                "schema": table.schema.dict,
                "read_units": table.read_units,
                "write_units": table.write_units,
            },
            metadata_fd
        )

    with open(data_file, "w") as data_fd:
        for item in table.scan():
            # JSON can't serialize sets -- convert those to lists.
            data = {}
            for k, v in item.iteritems():
                if isinstance(v, (set, frozenset)):
                    data[k] = list(v)
                else:
                    data[k] = v

            data_fd.write(json.dumps(data))
            data_fd.write("\n")


def dynamodb_dump(tables, out_dir):
    try:
        os.makedirs(out_dir)
    except OSError as e:
        # We don't care if the dir already exists.
        if e.errno != errno.EEXIST:
            raise

    conn = boto.connect_dynamodb()
    for t in tables:
        dump_table(conn.get_table(t), out_dir)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        prog="dynamodb_dump",
        description=DESCRIPTION
    )
    parser.add_argument("--out-dir", default=".")
    parser.add_argument("tables", metavar="TABLES", nargs="+")

    namespace = parser.parse_args()

    dynamodb_dump(namespace.tables, namespace.out_dir)
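The same dump can be produced programmatically using the `dump_table`
helper defined above (a sketch; the table name is an example):

    import boto

    conn = boto.connect_dynamodb()
    dump_table(conn.get_table('my-table'), out_dir='.')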
boto-2.20.1/bin/dynamodb_load

#!/usr/bin/env python
import argparse
import os

import boto
from boto.compat import json
from boto.dynamodb.schema import Schema


DESCRIPTION = """Load data into one or more DynamoDB tables.

For each table, data is read from two files:
  - {table_name}.metadata for the table's name, schema and provisioned
    throughput (only required if creating the table).
  - {table_name}.data for the table's actual contents.

Both files are searched for in the current directory. To read them from
somewhere else, use the --in-dir parameter.

This program does not wipe the tables prior to loading data. However, any
items present in the data files will overwrite the table's contents.
"""


def _json_iterload(fd):
    """Lazily load newline-separated JSON objects from a file-like object."""
    buffer = ""
    eof = False
    while not eof:
        try:
            # Add a line to the buffer
            buffer += fd.next()
        except StopIteration:
            # We can't let that exception bubble up, otherwise the last
            # object in the file will never be decoded.
            eof = True
        try:
            # Try to decode a JSON object.
            json_object = json.loads(buffer.strip())

            # Success: clear the buffer (everything was decoded).
            buffer = ""
        except ValueError:
            if eof and buffer.strip():
                # No more lines to load and the buffer contains something
                # other than whitespace: the file is, in fact, malformed.
                raise
            # We couldn't decode a complete JSON object: load more lines.
            continue

        yield json_object


def create_table(metadata_fd):
    """Create a table from a metadata file-like object."""


def load_table(table, in_fd):
    """Load items into a table from a file-like object."""
    for i in _json_iterload(in_fd):
        # Convert lists back to sets.
        data = {}
        for k, v in i.iteritems():
            if isinstance(v, list):
                data[k] = set(v)
            else:
                data[k] = v
        table.new_item(attrs=data).put()


def dynamodb_load(tables, in_dir, create_tables):
    conn = boto.connect_dynamodb()
    for t in tables:
        metadata_file = os.path.join(in_dir, "%s.metadata" % t)
        data_file = os.path.join(in_dir, "%s.data" % t)
        if create_tables:
            with open(metadata_file) as meta_fd:
                metadata = json.load(meta_fd)
            table = conn.create_table(
                name=t,
                schema=Schema(metadata["schema"]),
                read_units=metadata["read_units"],
                write_units=metadata["write_units"],
            )
            table.refresh(wait_for_active=True)
        else:
            table = conn.get_table(t)

        with open(data_file) as in_fd:
            load_table(table, in_fd)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        prog="dynamodb_load",
        description=DESCRIPTION
    )
    parser.add_argument(
        "--create-tables",
        action="store_true",
        help="Create the tables if they don't exist already (without this flag, attempts to load data into non-existing tables fail)."
    )
    parser.add_argument("--in-dir", default=".")
    parser.add_argument("tables", metavar="TABLES", nargs="+")

    namespace = parser.parse_args()

    dynamodb_load(namespace.tables, namespace.in_dir,
                  namespace.create_tables)
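Restoring those files is symmetric, using the `dynamodb_load` function
defined above (a sketch; the table name is an example):

    # Load my-table.metadata and my-table.data from the current
    # directory, creating the table first.
    dynamodb_load(['my-table'], in_dir='.', create_tables=True)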
boto-2.20.1/bin/elbadmin

#!/usr/bin/env python
# Copyright (c) 2009 Chris Moyer http://coredumped.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Elastic Load Balancer Tool
#
VERSION = "0.2"
usage = """%prog [options] [command]
Commands:
    list|ls                         List all Elastic Load Balancers
    delete <name>                   Delete ELB <name>
    get <name>                      Get all instances associated with <name>
    create <name>                   Create an ELB; -z and -l are required
    add <name> <instance>           Add <instance> in ELB <name>
    remove|rm <name> <instance>     Remove <instance> from ELB <name>
    reap <name>                     Remove terminated instances from ELB <name>
    enable|en <name> <zone>         Enable Zone <zone> for ELB <name>
    disable <name> <zone>           Disable Zone <zone> for ELB <name>
    addl <name>                     Add listeners (specified by -l) to the
                                    ELB <name>
    rml <name> <port>               Remove Listener(s) specified by the port
                                    on the ELB <name>
"""


def find_elb(elb, name):
    try:
        elbs = elb.get_all_load_balancers(name)
    except boto.exception.BotoServerError as se:
        if se.code == 'LoadBalancerNotFound':
            elbs = []
        else:
            raise

    if len(elbs) < 1:
        print "No load balancer by the name of %s found" % name
        return None
    elif len(elbs) > 1:
        print "More than one elb matches %s?" % name
        return None

    # Should not happen
    if name not in elbs[0].name:
        print "No load balancer by the name of %s found" % name
        return None

    return elbs[0]


def list(elb):
    """List all ELBs"""
    print "%-20s %s" % ("Name", "DNS Name")
    print "-" * 80
    for b in elb.get_all_load_balancers():
        print "%-20s %s" % (b.name, b.dns_name)


def get(elb, name):
    """Get details about ELB <name>"""
    b = find_elb(elb, name)
    if b:
        print "=" * 80
        print "Name: %s" % b.name
        print "DNS Name: %s" % b.dns_name
        if b.canonical_hosted_zone_name:
            chzn = b.canonical_hosted_zone_name
            print "Canonical hosted zone name: %s" % chzn
        if b.canonical_hosted_zone_name_id:
            chznid = b.canonical_hosted_zone_name_id
            print "Canonical hosted zone name id: %s" % chznid
        print

        print "Health Check: %s" % b.health_check
        print

        print "Listeners"
        print "---------"
        print "%-8s %-8s %s" % ("IN", "OUT", "PROTO")
        for l in b.listeners:
            print "%-8s %-8s %s" % (l[0], l[1], l[2])

        print

        print "  Zones  "
        print "---------"
        for z in b.availability_zones:
            print z

        print

        # Make map of all instance Id's to Name tags
        if not options.region:
            ec2 = boto.connect_ec2()
        else:
            import boto.ec2.elb
            ec2 = boto.ec2.connect_to_region(options.region)

        instance_health = b.get_instance_health()
        instances = [state.instance_id for state in instance_health]

        names = {}
        for i in ec2.get_only_instances(instances):
            names[i.id] = i.tags.get('Name', '')

        name_column_width = max([4] + [len(v) for k, v in names.iteritems()]) + 2

        print "Instances"
        print "---------"
        print "%-12s %-15s %-*s %s" % ("ID", "STATE", name_column_width,
                                       "NAME", "DESCRIPTION")
        for state in instance_health:
            print "%-12s %-15s %-*s %s" % (state.instance_id, state.state,
                                           name_column_width,
                                           names[state.instance_id],
                                           state.description)

        print


def create(elb, name, zones, listeners):
    """Create an ELB named <name>"""
    l_list = []
    for l in listeners:
        l = l.split(",")
        if l[2] == 'HTTPS':
            l_list.append((int(l[0]), int(l[1]), l[2], l[3]))
        else:
            l_list.append((int(l[0]), int(l[1]), l[2]))

    b = elb.create_load_balancer(name, zones, l_list)
    return get(elb, name)


def delete(elb, name):
    """Delete this ELB"""
    b = find_elb(elb, name)
    if b:
        b.delete()
        print "Load Balancer %s deleted" % name


def add_instances(elb, name, instances):
    """Add <instance> to ELB <name>"""
    b = find_elb(elb, name)
    if b:
        b.register_instances(instances)
        return get(elb, name)


def remove_instances(elb, name, instances):
    """Remove instance from elb <name>"""
    b = find_elb(elb, name)
    if b:
        b.deregister_instances(instances)
        return get(elb, name)


def reap_instances(elb, name):
    """Remove terminated instances from elb <name>"""
    b = find_elb(elb, name)
    if b:
        for state in b.get_instance_health():
            if (state.state == 'OutOfService' and
                    state.description == 'Instance is in terminated state.'):
                b.deregister_instances([state.instance_id])
        return get(elb, name)


def enable_zone(elb, name, zone):
    """Enable <zone> for elb"""
    b = find_elb(elb, name)
    if b:
        b.enable_zones([zone])
        return get(elb, name)


def disable_zone(elb, name, zone):
    """Disable <zone> for elb"""
    b = find_elb(elb, name)
    if b:
        b.disable_zones([zone])
        return get(elb, name)


def add_listener(elb, name, listeners):
    """Add listeners to a given load balancer"""
    l_list = []
    for l in listeners:
        l = l.split(",")
        l_list.append((int(l[0]), int(l[1]), l[2]))
    b = find_elb(elb, name)
    if b:
        b.create_listeners(l_list)
        return get(elb, name)


def rm_listener(elb, name, ports):
    """Remove listeners from a given load balancer"""
    b = find_elb(elb, name)
    if b:
        b.delete_listeners(ports)
        return get(elb, name)


if __name__ == "__main__":
    try:
        import readline
    except ImportError:
        pass
    import boto
    import sys
    from optparse import OptionParser
    from boto.mashups.iobject import IObject
    parser = OptionParser(version=VERSION, usage=usage)
    parser.add_option("-z", "--zone",
                      help="Operate on zone",
                      action="append", default=[], dest="zones")
    parser.add_option("-l", "--listener",
                      help="Specify Listener in,out,proto",
                      action="append", default=[], dest="listeners")
    parser.add_option("-r", "--region",
                      help="Region to connect to",
                      action="store", dest="region")

    (options, args) = parser.parse_args()

    if len(args) < 1:
        parser.print_help()
        sys.exit(1)

    if not options.region:
        elb = boto.connect_elb()
    else:
        import boto.ec2.elb
        elb = boto.ec2.elb.connect_to_region(options.region)

    print "%s" % (elb.region.endpoint)

    command = args[0].lower()
    if command in ("ls", "list"):
        list(elb)
    elif command == "get":
        get(elb, args[1])
    elif command == "create":
        if not options.listeners:
            print "-l option required for command create"
            sys.exit(1)
        if not options.zones:
            print "-z option required for command create"
            sys.exit(1)
        create(elb, args[1], options.zones, options.listeners)
    elif command == "delete":
        delete(elb, args[1])
    elif command in ("add", "put"):
        add_instances(elb, args[1], args[2:])
    elif command in ("rm", "remove"):
        remove_instances(elb, args[1], args[2:])
    elif command == "reap":
        reap_instances(elb, args[1])
    elif command in ("en", "enable"):
        enable_zone(elb, args[1], args[2])
    elif command == "disable":
        disable_zone(elb, args[1], args[2])
    elif command == "addl":
        if not options.listeners:
            print "-l option required for command addl"
            sys.exit(1)
        add_listener(elb, args[1], options.listeners)
    elif command == "rml":
        if not args[2:]:
            print "port required"
            sys.exit(2)
        rm_listener(elb, args[1], args[2:])
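Registering an instance by hand, as `elbadmin add` does under the hood
(a sketch; the balancer name and instance ID are examples):

    import boto

    elb = boto.connect_elb()
    b = elb.get_all_load_balancers(['my-lb'])[0]
    b.register_instances(['i-12345678'])
    for state in b.get_instance_health():
        print state.instance_id, state.state, state.description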
boto-2.20.1/bin/fetch_file

#!/usr/bin/env python
# Copyright (c) 2009 Chris Moyer http://coredumped.org
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
if __name__ == "__main__":
    from optparse import OptionParser
    usage = """%prog [options] URI
Fetch a URI using the boto library and (by default) pipe contents to STDOUT
The URI can be either an HTTP URL, or "s3://bucket_name/key_name"
"""
    parser = OptionParser(version="0.1", usage=usage)
    parser.add_option("-o", "--out-file",
                      help="File to receive output instead of STDOUT",
                      dest="outfile")

    (options, args) = parser.parse_args()
    if len(args) < 1:
        parser.print_help()
        exit(1)
    from boto.utils import fetch_file
    f = fetch_file(args[0])
    if options.outfile:
        open(options.outfile, "w").write(f.read())
    else:
        print f.read()
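`fetch_file` accepts s3:// URIs as well as plain HTTP URLs, so the same
helper covers both cases (the bucket and key below are examples):

    from boto.utils import fetch_file

    f = fetch_file('s3://my-example-bucket/path/to/key')
    print f.read()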
boto-2.20.1/bin/glacier

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2012 Miguel Olivares http://moliware.com/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
"""
    glacier
    ~~~~~~~

    Amazon Glacier tool built on top of boto. Look at the usage method to see
    how to use it.

    Author: Miguel Olivares
"""
import sys

from boto.glacier import connect_to_region
from getopt import getopt, GetoptError
from os.path import isfile


COMMANDS = ('vaults', 'jobs', 'upload')


def usage():
    print """
glacier <command> [args]

    Commands
        vaults - Operations with vaults
        jobs   - Operations with jobs
        upload - Upload files to a vault. If the vault doesn't exist, it is
                 created

    Common args:
        --access_key - Your AWS Access Key ID.  If not supplied, boto will
                       use the value of the environment variable
                       AWS_ACCESS_KEY_ID
        --secret_key - Your AWS Secret Access Key.  If not supplied, boto
                       will use the value of the environment variable
                       AWS_SECRET_ACCESS_KEY
        --region     - AWS region to use. Possible values: us-east-1,
                       us-west-1, us-west-2, ap-northeast-1, eu-west-1.
                       Default: us-east-1

    Vaults operations:

        List vaults:
            glacier vaults

    Jobs operations:

        List jobs:
            glacier jobs <vault name>

    Uploading files:

        glacier upload <vault name> <files>

        Examples:
            glacier upload pics *.jpg
            glacier upload pics a.jpg b.jpg
"""
    sys.exit()


def connect(region, debug_level=0, access_key=None, secret_key=None):
    """ Connect to a specific region """
    return connect_to_region(region,
                             aws_access_key_id=access_key,
                             aws_secret_access_key=secret_key,
                             debug=debug_level)


def list_vaults(region, access_key=None, secret_key=None):
    layer2 = connect(region, access_key=access_key, secret_key=secret_key)
    for vault in layer2.list_vaults():
        print vault.arn


def list_jobs(vault_name, region, access_key=None, secret_key=None):
    layer2 = connect(region, access_key=access_key, secret_key=secret_key)
    print layer2.layer1.list_jobs(vault_name)


def upload_files(vault_name, filenames, region, access_key=None,
                 secret_key=None):
    layer2 = connect(region, access_key=access_key, secret_key=secret_key)
    layer2.create_vault(vault_name)
    glacier_vault = layer2.get_vault(vault_name)
    for filename in filenames:
        if isfile(filename):
            print 'Uploading %s to %s' % (filename, vault_name)
            glacier_vault.upload_archive(filename)


def main():
    if len(sys.argv) < 2:
        usage()

    command = sys.argv[1]
    if command not in COMMANDS:
        usage()

    argv = sys.argv[2:]
    options = 'a:s:r:'
    long_options = ['access_key=', 'secret_key=', 'region=']
    try:
        opts, args = getopt(argv, options, long_options)
    except GetoptError, e:
        usage()

    # Parse arguments
    access_key = secret_key = None
    region = 'us-east-1'
    for option, value in opts:
        if option in ('-a', '--access_key'):
            access_key = value
        elif option in ('-s', '--secret_key'):
            secret_key = value
        elif option in ('-r', '--region'):
            region = value
    # handle each command
    if command == 'vaults':
        list_vaults(region, access_key, secret_key)
    elif command == 'jobs':
        if len(args) != 1:
            usage()
        list_jobs(args[0], region, access_key, secret_key)
    elif command == 'upload':
        if len(args) < 2:
            usage()
        upload_files(args[0], args[1:], region, access_key, secret_key)


if __name__ == '__main__':
    main()
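The upload path above in isolation (a sketch; the vault and file names
reuse the examples from the usage text):

    from boto.glacier import connect_to_region

    layer2 = connect_to_region('us-east-1')
    layer2.create_vault('pics')  # safe if the vault already exists
    vault = layer2.get_vault('pics')
    print vault.upload_archive('a.jpg')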
boto-2.20.1/bin/instance_events

#!/usr/bin/env python
# Copyright (c) 2011 Jim Browne http://www.42lines.net
# Borrows heavily from boto/bin/list_instances which has no attribution
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

VERSION = "0.1"
usage = """%prog [options]
Options:
  -h, --help            show help message (including options list) and exit
"""

from operator import itemgetter

HEADERS = {
    'ID': {'get': itemgetter('id'), 'length': 14},
    'Zone': {'get': itemgetter('zone'), 'length': 14},
    'Hostname': {'get': itemgetter('dns'), 'length': 20},
    'Code': {'get': itemgetter('code'), 'length': 18},
    'Description': {'get': itemgetter('description'), 'length': 30},
    'NotBefore': {'get': itemgetter('not_before'), 'length': 25},
    'NotAfter': {'get': itemgetter('not_after'), 'length': 25},
    'T:': {'length': 30},
}


def get_column(name, event=None):
    if name.startswith('T:'):
        return event[name]
    return HEADERS[name]['get'](event)


def list(region, headers, order, completed):
    """List status events for all instances in a given region"""

    import re

    ec2 = boto.connect_ec2(region=region)

    reservations = ec2.get_all_reservations()

    instanceinfo = {}
    events = {}

    displaytags = [x for x in headers if x.startswith('T:')]

    # Collect the tag for every possible instance
    for res in reservations:
        for instance in res.instances:
            iid = instance.id
            instanceinfo[iid] = {}
            for tagname in displaytags:
                _, tag = tagname.split(':', 1)
                instanceinfo[iid][tagname] = instance.tags.get(tag, '')
            instanceinfo[iid]['dns'] = instance.public_dns_name

    stats = ec2.get_all_instance_status()

    for stat in stats:
        if stat.events:
            for event in stat.events:
                events[stat.id] = {}
                events[stat.id]['id'] = stat.id
                events[stat.id]['dns'] = instanceinfo[stat.id]['dns']
                events[stat.id]['zone'] = stat.zone
                for tag in displaytags:
                    events[stat.id][tag] = instanceinfo[stat.id][tag]
                events[stat.id]['code'] = event.code
                events[stat.id]['description'] = event.description
                events[stat.id]['not_before'] = event.not_before
                events[stat.id]['not_after'] = event.not_after
                if completed and re.match('^\[Completed\]',
                                          event.description):
                    events[stat.id]['not_before'] = 'Completed'
                    events[stat.id]['not_after'] = 'Completed'

    # Create format string
    format_string = ""
    for h in headers:
        if h.startswith('T:'):
            format_string += "%%-%ds" % HEADERS['T:']['length']
        else:
            format_string += "%%-%ds" % HEADERS[h]['length']

    print format_string % headers
    print "-" * len(format_string % headers)

    for instance in sorted(events,
                           key=lambda ev: get_column(order, events[ev])):
        e = events[instance]
        print format_string % tuple(get_column(h, e) for h in headers)


if __name__ == "__main__":
    import boto
    import sys
    from optparse import OptionParser
    from boto.ec2 import regions

    parser = OptionParser(version=VERSION, usage=usage)
    parser.add_option("-a", "--all", help="check all regions", dest="all",
                      default=False, action="store_true")
    parser.add_option("-r", "--region",
                      help="region to check (default us-east-1)",
                      dest="region", default="us-east-1")
    parser.add_option("-H", "--headers",
                      help="Set headers (use 'T:tagname' for including tags)",
                      default=None, action="store", dest="headers",
                      metavar="ID,Zone,Hostname,Code,Description,NotBefore,NotAfter,T:Name")
    parser.add_option("-S", "--sort", help="Header for sort order",
                      default=None, action="store", dest="order",
                      metavar="HeaderName")
    parser.add_option("-c", "--completed",
                      help="List time fields as \"Completed\" for completed events (Default: false)",
                      default=False, action="store_true", dest="completed")

    (options, args) = parser.parse_args()

    if options.headers:
        headers = tuple(options.headers.split(','))
    else:
        headers = ('ID', 'Zone', 'Hostname', 'Code', 'NotBefore', 'NotAfter')

    if options.order:
        order = options.order
    else:
        order = 'ID'

    if options.all:
        for r in regions():
            print "Region %s" % r.name
            list(r, headers, order, options.completed)
    else:
        # Connect the region
        for r in regions():
            if r.name == options.region:
                region = r
                break
        else:
            print "Region %s not found." % options.region
            sys.exit(1)
        list(region, headers, order, options.completed)

boto-2.20.1/bin/kill_instance

#!/usr/bin/env python
import sys
from optparse import OptionParser

import boto
from boto.ec2 import regions


def kill_instance(region, ids):
    """Kill instances given their instance IDs"""
    # Connect the region
    ec2 = boto.connect_ec2(region=region)
    for instance_id in ids:
        print "Stopping instance: %s" % instance_id
        ec2.terminate_instances([instance_id])


if __name__ == "__main__":
    parser = OptionParser(usage="kill_instance [-r] id [id ...]")
    parser.add_option("-r", "--region", help="Region (default us-east-1)",
                      dest="region", default="us-east-1")
    (options, args) = parser.parse_args()
    if not args:
        parser.print_help()
        sys.exit(1)
    for r in regions():
        if r.name == options.region:
            region = r
            break
    else:
        print "Region %s not found." % options.region
        sys.exit(1)

    kill_instance(region, args)
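Fetching scheduled events directly, as `instance_events` above does
(a sketch against the default region):

    import boto

    ec2 = boto.connect_ec2()
    for status in ec2.get_all_instance_status():
        for event in status.events or []:
            print status.id, event.code, event.not_before, event.not_after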
To have a config file import another config file, simply use "#import " where is either a relative path or a full URL to another config """ def __init__(self): ConfigParser.SafeConfigParser.__init__(self, {'working_dir' : '/mnt/pyami', 'debug' : '0'}) def add_config(self, file_url): """Add a config file to this configuration :param file_url: URL for the file to add, or a local path :type file_url: str """ if not re.match("^([a-zA-Z0-9]*:\/\/)(.*)", file_url): if not file_url.startswith("/"): file_url = os.path.join(os.getcwd(), file_url) file_url = "file://%s" % file_url (base_url, file_name) = file_url.rsplit("/", 1) base_config = boto.utils.fetch_file(file_url) base_config.seek(0) for line in base_config.readlines(): match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line) if match: self.add_config("%s/%s" % (base_url, match.group(1))) base_config.seek(0) self.readfp(base_config) def add_creds(self, ec2): """Add the credentials to this config if they don't already exist""" if not self.has_section('Credentials'): self.add_section('Credentials') self.set('Credentials', 'aws_access_key_id', ec2.aws_access_key_id) self.set('Credentials', 'aws_secret_access_key', ec2.aws_secret_access_key) def __str__(self): """Get config as string""" from StringIO import StringIO s = StringIO() self.write(s) return s.getvalue() SCRIPTS = [] def scripts_callback(option, opt, value, parser): arg = value.split(',') if len(arg) == 1: SCRIPTS.append(arg[0]) else: SCRIPTS.extend(arg) setattr(parser.values, option.dest, SCRIPTS) def add_script(scr_url): """Read a script and any scripts that are added using #import""" base_url = '/'.join(scr_url.split('/')[:-1]) + '/' script_raw = boto.utils.fetch_file(scr_url) script_content = '' for line in script_raw.readlines(): match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line) #if there is an import if match: #Read the other script and put it in that spot script_content += add_script("%s/%s" % (base_url, match.group(1))) else: #Otherwise, add the line and move on script_content += line return script_content if __name__ == "__main__": try: import readline except ImportError: pass import sys import time import boto from boto.ec2 import regions from optparse import OptionParser from boto.mashups.iobject import IObject parser = OptionParser(version=VERSION, usage="%prog [options] config_url") parser.add_option("-c", "--max-count", help="Maximum number of this type of instance to launch", dest="max_count", default="1") parser.add_option("--min-count", help="Minimum number of this type of instance to launch", dest="min_count", default="1") parser.add_option("--cloud-init", help="Indicates that this is an instance that uses 'CloudInit', Ubuntu's cloud bootstrap process. 
parser.add_option("-g", "--groups", help="Security Groups to add this instance to", action="append", dest="groups") parser.add_option("-a", "--ami", help="AMI to launch", dest="ami_id") parser.add_option("-t", "--type", help="Type of Instance (default m1.small)", dest="type", default="m1.small") parser.add_option("-k", "--key", help="Keypair", dest="key_name") parser.add_option("-z", "--zone", help="Zone (default us-east-1a)", dest="zone", default="us-east-1a") parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1") parser.add_option("-i", "--ip", help="Elastic IP", dest="elastic_ip") parser.add_option("-n", "--no-add-cred", help="Don't add a credentials section", default=False, action="store_true", dest="nocred") parser.add_option("--save-ebs", help="Save the EBS volume on shutdown, instead of deleting it", default=False, action="store_true", dest="save_ebs") parser.add_option("-w", "--wait", help="Wait until instance is running", default=False, action="store_true", dest="wait") parser.add_option("-d", "--dns", help="Returns public and private DNS (implies --wait)", default=False, action="store_true", dest="dns") parser.add_option("-T", "--tag", help="Set tag", default=None, action="append", dest="tags", metavar="key:value") parser.add_option("-s", "--scripts", help="Pass in a script or a folder containing scripts to be run when the instance starts up, assumes cloud-init. Specify multiple scripts as a comma-separated list. If multiple scripts are specified, they are run in lexical order (a good way to ensure they run in the correct order is to prefix filenames with numbers)", type='string', action="callback", callback=scripts_callback) parser.add_option("--role", help="IAM Role to use, this implies --no-add-cred", dest="role") (options, args) = parser.parse_args() if len(args) < 1: parser.print_help() sys.exit(1) file_url = os.path.expanduser(args[0]) cfg = Config() cfg.add_config(file_url) for r in regions(): if r.name == options.region: region = r break else: print "Region %s not found."
% options.region sys.exit(1) ec2 = boto.connect_ec2(region=region) if not options.nocred and not options.role: cfg.add_creds(ec2) iobj = IObject() if options.ami_id: ami = ec2.get_image(options.ami_id) else: ami_id = options.ami_id l = [(a, a.id, a.location) for a in ec2.get_all_images()] ami = iobj.choose_from_list(l, prompt='Choose AMI') if options.key_name: key_name = options.key_name else: l = [(k, k.name, '') for k in ec2.get_all_key_pairs()] key_name = iobj.choose_from_list(l, prompt='Choose Keypair').name if options.groups: groups = options.groups else: groups = [] l = [(g, g.name, g.description) for g in ec2.get_all_security_groups()] g = iobj.choose_from_list(l, prompt='Choose Primary Security Group') while g != None: groups.append(g) l.remove((g, g.name, g.description)) g = iobj.choose_from_list(l, prompt='Choose Additional Security Group (0 to quit)') user_data = str(cfg) # If it's a cloud init AMI, # then we need to wrap the config in our # little wrapper shell script if options.cloud_init: user_data = CLOUD_INIT_SCRIPT % user_data scriptuples = [] if options.scripts: scripts = options.scripts scriptuples.append(('user_data', user_data)) for scr in scripts: scr_url = scr if not re.match("^([a-zA-Z0-9]*:\/\/)(.*)", scr_url): if not scr_url.startswith("/"): scr_url = os.path.join(os.getcwd(), scr_url) try: newfiles = os.listdir(scr_url) for f in newfiles: #put the scripts in the folder in the array such that they run in the correct order scripts.insert(scripts.index(scr) + 1, scr.split("/")[-1] + "/" + f) except OSError: scr_url = "file://%s" % scr_url try: scriptuples.append((scr, add_script(scr_url))) except Exception, e: pass user_data = boto.utils.write_mime_multipart(scriptuples, compress=True) shutdown_proc = "terminate" if options.save_ebs: shutdown_proc = "save" instance_profile_name = None if options.role: instance_profile_name = options.role r = ami.run(min_count=int(options.min_count), max_count=int(options.max_count), key_name=key_name, user_data=user_data, security_groups=groups, instance_type=options.type, placement=options.zone, instance_initiated_shutdown_behavior=shutdown_proc, instance_profile_name=instance_profile_name) instance = r.instances[0] if options.tags: for tag_pair in options.tags: name = tag_pair value = '' if ':' in tag_pair: name, value = tag_pair.split(':', 1) instance.add_tag(name, value) if options.dns: options.wait = True if not options.wait: sys.exit(0) while True: instance.update() if instance.state == 'running': break time.sleep(3) if options.dns: print "Public DNS name: %s" % instance.public_dns_name print "Private DNS name: %s" % instance.private_dns_name boto-2.20.1/bin/list_instances000077500000000000000000000060261225267101000162570ustar00rootroot00000000000000#!/usr/bin/env python import sys from operator import attrgetter from optparse import OptionParser import boto from boto.ec2 import regions HEADERS = { 'ID': {'get': attrgetter('id'), 'length':15}, 'Zone': {'get': attrgetter('placement'), 'length':15}, 'Groups': {'get': attrgetter('groups'), 'length':30}, 'Hostname': {'get': attrgetter('public_dns_name'), 'length':50}, 'PrivateHostname': {'get': attrgetter('private_dns_name'), 'length':50}, 'State': {'get': attrgetter('state'), 'length':15}, 'Image': {'get': attrgetter('image_id'), 'length':15}, 'Type': {'get': attrgetter('instance_type'), 'length':15}, 'IP': {'get': attrgetter('ip_address'), 'length':16}, 'PrivateIP': {'get': attrgetter('private_ip_address'), 'length':16}, 'Key': {'get': attrgetter('key_name'), 'length':25}, 
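    # 'T:' is a width-only placeholder entry: headers of the form 'T:tagname'
    # are resolved per-instance by get_column() below. For example, the
    # (hypothetical) invocation "list_instances -H ID,Zone,T:Name" adds a
    # column showing each instance's "Name" tag.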
'T:': {'length': 30}, } def get_column(name, instance=None): if name.startswith('T:'): _, tag = name.split(':', 1) return instance.tags.get(tag, '') return HEADERS[name]['get'](instance) def main(): parser = OptionParser() parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1") parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Groups,Hostname,State,T:Name") parser.add_option("-t", "--tab", help="Tab delimited, skip header - useful in shell scripts", action="store_true", default=False) parser.add_option("-f", "--filter", help="Filter option sent to DescribeInstances API call, format is key1=value1,key2=value2,...", default=None) (options, args) = parser.parse_args() # Connect the region for r in regions(): if r.name == options.region: region = r break else: print "Region %s not found." % options.region sys.exit(1) ec2 = boto.connect_ec2(region=region) # Read headers if options.headers: headers = tuple(options.headers.split(',')) else: headers = ("ID", 'Zone', "Groups", "Hostname") # Create format string format_string = "" for h in headers: if h.startswith('T:'): format_string += "%%-%ds" % HEADERS['T:']['length'] else: format_string += "%%-%ds" % HEADERS[h]['length'] # Parse filters (if any) if options.filter: filters = dict([entry.split('=') for entry in options.filter.split(',')]) else: filters = {} # List and print if not options.tab: print format_string % headers print "-" * len(format_string % headers) for r in ec2.get_all_reservations(filters=filters): groups = [g.name for g in r.groups] for i in r.instances: i.groups = ','.join(groups) if options.tab: print "\t".join(tuple(get_column(h, i) for h in headers)) else: print format_string % tuple(get_column(h, i) for h in headers) if __name__ == "__main__": main() boto-2.20.1/bin/lss3000077500000000000000000000050721225267101000141210ustar00rootroot00000000000000#!/usr/bin/env python import boto from boto.s3.connection import OrdinaryCallingFormat def sizeof_fmt(num): for x in ['b ','KB','MB','GB','TB', 'XB']: if num < 1024.0: return "%3.1f %s" % (num, x) num /= 1024.0 return "%3.1f %s" % (num, x) def list_bucket(b, prefix=None, marker=None): """List everything in a bucket""" from boto.s3.prefix import Prefix from boto.s3.key import Key total = 0 if prefix: if not prefix.endswith("/"): prefix = prefix + "/" query = b.list(prefix=prefix, delimiter="/", marker=marker) print "%s" % prefix else: query = b.list(delimiter="/", marker=marker) num = 0 for k in query: num += 1 mode = "-rwx---" if isinstance(k, Prefix): mode = "drwxr--" size = 0 else: size = k.size for g in k.get_acl().acl.grants: if g.id == None: if g.permission == "READ": mode = "-rwxr--" elif g.permission == "FULL_CONTROL": mode = "-rwxrwx" if isinstance(k, Key): print "%s\t%s\t%010s\t%s" % (mode, k.last_modified, sizeof_fmt(size), k.name) else: #If it's not a Key object, it doesn't have a last_modified time, so #print nothing instead print "%s\t%s\t%010s\t%s" % (mode, ' '*24, sizeof_fmt(size), k.name) total += size print "="*80 print "\t\tTOTAL: \t%010s \t%i Files" % (sizeof_fmt(total), num) def list_buckets(s3): """List all the buckets""" for b in s3.get_all_buckets(): print b.name if __name__ == "__main__": import optparse import sys if len(sys.argv) < 2: list_buckets(boto.connect_s3()) sys.exit(0) parser = optparse.OptionParser() parser.add_option('-m', '--marker', help='The S3 key where the listing starts after it.') 
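    # A hedged usage sketch of this script (bucket and key names are
    # hypothetical):
    #     lss3                                    # list all buckets
    #     lss3 mybucket mybucket/photos/          # list a bucket, then a prefix within it
    #     lss3 -m photos/2013/06.jpg mybucket/photos/   # resume listing after a key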
options, buckets = parser.parse_args() marker = options.marker pairs = [] mixedCase = False for name in buckets: if "/" in name: pairs.append(name.split("/",1)) else: pairs.append([name, None]) if pairs[-1][0].lower() != pairs[-1][0]: mixedCase = True if mixedCase: s3 = boto.connect_s3(calling_format=OrdinaryCallingFormat()) else: s3 = boto.connect_s3() for name, prefix in pairs: list_bucket(s3.get_bucket(name), prefix, marker=marker) boto-2.20.1/bin/mturk000077500000000000000000000414671225267101000144070ustar00rootroot00000000000000#!/usr/bin/env python # Copyright 2012 Kodi Arfer # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS import argparse # Hence, Python 2.7 is required. import sys import os.path import string import inspect import datetime, calendar import boto.mturk.connection, boto.mturk.price, boto.mturk.question, boto.mturk.qualification from boto.compat import json # -------------------------------------------------- # Globals # ------------------------------------------------- interactive = False con = None mturk_website = None default_nicknames_path = os.path.expanduser('~/.boto_mturkcli_hit_nicknames') nicknames = {} nickname_pool = set(string.ascii_lowercase) time_units = dict( s = 1, min = 60, h = 60 * 60, d = 24 * 60 * 60) qual_requirements = dict( Adult = '00000000000000000060', Locale = '00000000000000000071', NumberHITsApproved = '00000000000000000040', PercentAssignmentsSubmitted = '00000000000000000000', PercentAssignmentsAbandoned = '00000000000000000070', PercentAssignmentsReturned = '000000000000000000E0', PercentAssignmentsApproved = '000000000000000000L0', PercentAssignmentsRejected = '000000000000000000S0') qual_comparators = {v : k for k, v in dict( LessThan = '<', LessThanOrEqualTo = '<=', GreaterThan = '>', GreaterThanOrEqualTo = '>=', EqualTo = '==', NotEqualTo = '!=', Exists = 'exists').items()} example_config_file = '''Example configuration file: { "title": "Pick your favorite color", "description": "In this task, you are asked to pick your favorite color.", "reward": 0.50, "assignments": 10, "duration": "20 min", "keywords": ["color", "favorites", "survey"], "lifetime": "7 d", "approval_delay": "14 d", "qualifications": [ "PercentAssignmentsApproved > 90", "Locale == US", "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists" ], "question_url": "http://example.com/myhit", "question_frame_height": 450 }''' # -------------------------------------------------- # Subroutines # -------------------------------------------------- def unjson(path): with open(path) as o: return json.load(o) def 
add_argparse_arguments(parser): parser.add_argument('-P', '--production', dest = 'sandbox', action = 'store_false', default = True, help = 'use the production site (default: use the sandbox)') parser.add_argument('--nicknames', dest = 'nicknames_path', metavar = 'PATH', default = default_nicknames_path, help = 'where to store HIT nicknames (default: {})'.format( default_nicknames_path)) def init_by_args(args): init(args.sandbox, args.nicknames_path) def init(sandbox = False, nicknames_path = default_nicknames_path): global con, mturk_website, nicknames, original_nicknames mturk_website = 'workersandbox.mturk.com' if sandbox else 'www.mturk.com' con = boto.mturk.connection.MTurkConnection( host = 'mechanicalturk.sandbox.amazonaws.com' if sandbox else 'mechanicalturk.amazonaws.com') try: nicknames = unjson(nicknames_path) except IOError: nicknames = {} original_nicknames = nicknames.copy() def save_nicknames(nicknames_path = default_nicknames_path): if nicknames != original_nicknames: with open(nicknames_path, 'w') as o: json.dump(nicknames, o, sort_keys = True, indent = 4) print >>o def parse_duration(s): '''Parses durations like "2 d", "48 h", "2880 min", "172800 s", or "172800".''' x = s.split() return int(x[0]) * time_units['s' if len(x) == 1 else x[1]] def display_duration(n): for unit, m in sorted(time_units.items(), key = lambda x: -x[1]): if n % m == 0: return '{} {}'.format(n / m, unit) def parse_qualification(inp): '''Parses qualifications like "PercentAssignmentsApproved > 90", "Locale == US", and "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists".''' inp = inp.split() name, comparator, value = inp.pop(0), inp.pop(0), (inp[0] if len(inp) else None) qtid = qual_requirements.get(name) if qtid is None: # Treat "name" as a Qualification Type ID. qtid = name if qtid == qual_requirements['Locale']: return boto.mturk.qualification.LocaleRequirement( qual_comparators[comparator], value, required_to_preview = False) return boto.mturk.qualification.Requirement( qtid, qual_comparators[comparator], value, required_to_preview = qtid == qual_requirements['Adult']) # Thus required_to_preview is true only for the # Worker_Adult requirement. def preview_url(hit): return 'https://{}/mturk/preview?groupId={}'.format( mturk_website, hit.HITTypeId) def parse_timestamp(s): '''Takes a timestamp like "2012-11-24T16:34:41Z". 
Returns a datetime object in the local time zone.''' return datetime.datetime.fromtimestamp( calendar.timegm( datetime.datetime.strptime(s, '%Y-%m-%dT%H:%M:%SZ').timetuple())) def get_hitid(nickname_or_hitid): return nicknames.get(nickname_or_hitid) or nickname_or_hitid def get_nickname(hitid): for k, v in nicknames.items(): if v == hitid: return k return None def display_datetime(dt): return dt.strftime('%e %b %Y, %l:%M %P') def display_hit(hit, verbose = False): et = parse_timestamp(hit.Expiration) return '\n'.join([ '{} - {} ({}, {}, {})'.format( get_nickname(hit.HITId), hit.Title, hit.FormattedPrice, display_duration(int(hit.AssignmentDurationInSeconds)), hit.HITStatus), 'HIT ID: ' + hit.HITId, 'Type ID: ' + hit.HITTypeId, 'Group ID: ' + hit.HITGroupId, 'Preview: ' + preview_url(hit), 'Created {} {}'.format( display_datetime(parse_timestamp(hit.CreationTime)), 'Expired' if et <= datetime.datetime.now() else 'Expires ' + display_datetime(et)), 'Assignments: {} -- {} avail, {} pending, {} reviewable, {} reviewed'.format( hit.MaxAssignments, hit.NumberOfAssignmentsAvailable, hit.NumberOfAssignmentsPending, int(hit.MaxAssignments) - (int(hit.NumberOfAssignmentsAvailable) + int(hit.NumberOfAssignmentsPending) + int(hit.NumberOfAssignmentsCompleted)), hit.NumberOfAssignmentsCompleted) if hasattr(hit, 'NumberOfAssignmentsAvailable') else 'Assignments: {} total'.format(hit.MaxAssignments), # For some reason, SearchHITs includes the # NumberOfAssignmentsFoobar fields but GetHIT doesn't. ] + ([] if not verbose else [ '\nDescription: ' + hit.Description, '\nKeywords: ' + hit.Keywords ])) + '\n' def digest_assignment(a): return dict( answers = {str(x.qid): str(x.fields[0]) for x in a.answers[0]}, **{k: str(getattr(a, k)) for k in ( 'AcceptTime', 'SubmitTime', 'HITId', 'AssignmentId', 'WorkerId', 'AssignmentStatus')}) # -------------------------------------------------- # Commands # -------------------------------------------------- def get_balance(): return con.get_account_balance() def show_hit(hit): return display_hit(con.get_hit(hit)[0], verbose = True) def list_hits(): 'Lists your 10 most recently created HITs, with the most recent last.' return '\n'.join(reversed(map(display_hit, con.search_hits( sort_by = 'CreationTime', sort_direction = 'Descending', page_size = 10)))) def make_hit(title, description, keywords, reward, question_url, question_frame_height, duration, assignments, approval_delay, lifetime, qualifications = []): r = con.create_hit( title = title, description = description, keywords = con.get_keywords_as_string(keywords), reward = con.get_price_as_price(reward), question = boto.mturk.question.ExternalQuestion( question_url, question_frame_height), duration = parse_duration(duration), qualifications = boto.mturk.qualification.Qualifications( map(parse_qualification, qualifications)), max_assignments = assignments, approval_delay = parse_duration(approval_delay), lifetime = parse_duration(lifetime)) nick = None available_nicks = nickname_pool - set(nicknames.keys()) if available_nicks: nick = min(available_nicks) nicknames[nick] = r[0].HITId if interactive: print 'Nickname:', nick print 'HIT ID:', r[0].HITId print 'Preview:', preview_url(r[0]) else: return r[0] def extend_hit(hit, assignments_increment = None, expiration_increment = None): con.extend_hit(hit, assignments_increment, expiration_increment) def expire_hit(hit): con.expire_hit(hit) def delete_hit(hit): '''Deletes a HIT using DisableHIT. Unreviewed assignments get automatically approved. 
Unsubmitted assignments get automatically approved upon submission. The API docs say DisableHIT doesn't work with Reviewable HITs, but apparently, it does.''' con.disable_hit(hit) global nicknames nicknames = {k: v for k, v in nicknames.items() if v != hit} def list_assignments(hit, only_reviewable = False): assignments = map(digest_assignment, con.get_assignments( hit_id = hit, page_size = 100, status = 'Submitted' if only_reviewable else None)) if interactive: print json.dumps(assignments, sort_keys = True, indent = 4) print ' '.join([a['AssignmentId'] for a in assignments]) print ' '.join([a['WorkerId'] + ',' + a['AssignmentId'] for a in assignments]) else: return assignments def grant_bonus(message, amount, pairs): for worker, assignment in pairs: con.grant_bonus(worker, assignment, con.get_price_as_price(amount), message) if interactive: print 'Bonused', worker def approve_assignments(message, assignments): for a in assignments: con.approve_assignment(a, message) if interactive: print 'Approved', a def reject_assignments(message, assignments): for a in assignments: con.reject_assignment(a, message) if interactive: print 'Rejected', a def unreject_assignments(message, assignments): for a in assignments: con.approve_rejected_assignment(a, message) if interactive: print 'Unrejected', a def notify_workers(subject, text, workers): con.notify_workers(workers, subject, text) # -------------------------------------------------- # Mainline code # -------------------------------------------------- if __name__ == '__main__': interactive = True parser = argparse.ArgumentParser() add_argparse_arguments(parser) subs = parser.add_subparsers() sub = subs.add_parser('bal', help = 'display your prepaid balance') sub.set_defaults(f = get_balance, a = lambda: []) sub = subs.add_parser('hit', help = 'get information about a HIT') sub.add_argument('hit', help = 'nickname or ID of the HIT to show') sub.set_defaults(f = show_hit, a = lambda: [get_hitid(args.hit)]) sub = subs.add_parser('hits', help = 'list all your HITs') sub.set_defaults(f = list_hits, a = lambda: []) sub = subs.add_parser('new', help = 'create a new HIT (external questions only)', epilog = example_config_file, formatter_class = argparse.RawDescriptionHelpFormatter) sub.add_argument('json_path', help = 'path to JSON configuration file for the HIT') sub.add_argument('-u', '--question-url', dest = 'question_url', metavar = 'URL', help = 'URL for the external question') sub.add_argument('-a', '--assignments', dest = 'assignments', type = int, metavar = 'N', help = 'number of assignments') sub.add_argument('-r', '--reward', dest = 'reward', type = float, metavar = 'PRICE', help = 'reward amount, in USD') sub.set_defaults(f = make_hit, a = lambda: dict( unjson(args.json_path).items() + [(k, getattr(args, k)) for k in ('question_url', 'assignments', 'reward') if getattr(args, k) is not None])) sub = subs.add_parser('extend', help = 'add assignments or time to a HIT') sub.add_argument('hit', help = 'nickname or ID of the HIT to extend') sub.add_argument('-a', '--assignments', dest = 'assignments', metavar = 'N', type = int, help = 'number of assignments to add') sub.add_argument('-t', '--time', dest = 'time', metavar = 'T', help = 'amount of time to add to the expiration date') sub.set_defaults(f = extend_hit, a = lambda: [get_hitid(args.hit), args.assignments, args.time and parse_duration(args.time)]) sub = subs.add_parser('expire', help = 'force a HIT to expire without deleting it') sub.add_argument('hit', help = 'nickname or ID of the HIT to 
expire') sub.set_defaults(f = expire_hit, a = lambda: [get_hitid(args.hit)]) sub = subs.add_parser('rm', help = 'delete a HIT') sub.add_argument('hit', help = 'nickname or ID of the HIT to delete') sub.set_defaults(f = delete_hit, a = lambda: [get_hitid(args.hit)]) sub = subs.add_parser('as', help = "list a HIT's submitted assignments") sub.add_argument('hit', help = 'nickname or ID of the HIT to get assignments for') sub.add_argument('-r', '--reviewable', dest = 'only_reviewable', action = 'store_true', help = 'show only unreviewed assignments') sub.set_defaults(f = list_assignments, a = lambda: [get_hitid(args.hit), args.only_reviewable]) for command, fun, helpmsg in [ ('approve', approve_assignments, 'approve assignments'), ('reject', reject_assignments, 'reject assignments'), ('unreject', unreject_assignments, 'approve previously rejected assignments')]: sub = subs.add_parser(command, help = helpmsg) sub.add_argument('assignment', nargs = '+', help = 'ID of an assignment') sub.add_argument('-m', '--message', dest = 'message', metavar = 'TEXT', help = 'feedback message shown to workers') sub.set_defaults(f = fun, a = lambda: [args.message, args.assignment]) sub = subs.add_parser('bonus', help = 'give some workers a bonus') sub.add_argument('amount', type = float, help = 'bonus amount, in USD') sub.add_argument('message', help = 'the reason for the bonus (shown to workers in an email sent by MTurk)') sub.add_argument('widaid', nargs = '+', help = 'a WORKER_ID,ASSIGNMENT_ID pair') sub.set_defaults(f = grant_bonus, a = lambda: [args.message, args.amount, [p.split(',') for p in args.widaid]]) sub = subs.add_parser('notify', help = 'send a message to some workers') sub.add_argument('subject', help = 'subject of the message') sub.add_argument('message', help = 'text of the message') sub.add_argument('worker', nargs = '+', help = 'ID of a worker') sub.set_defaults(f = notify_workers, a = lambda: [args.subject, args.message, args.worker]) args = parser.parse_args() init_by_args(args) f = args.f a = args.a() if isinstance(a, dict): # We do some introspective gymnastics so we can produce a # less incomprehensible error message if some arguments # are missing. spec = inspect.getargspec(f) missing = set(spec.args[: len(spec.args) - len(spec.defaults)]) - set(a.keys()) if missing: raise ValueError('Missing arguments: ' + ', '.join(missing)) doit = lambda: f(**a) else: doit = lambda: f(*a) try: x = doit() except boto.mturk.connection.MTurkRequestError as e: print 'MTurk error:', e.error_message sys.exit(1) if x is not None: print x save_nicknames() boto-2.20.1/bin/pyami_sendmail000077500000000000000000000050721225267101000162300ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # # Send Mail from a PYAMI instance, or anything that has a boto.cfg # properly set up # VERSION="0.1" usage = """%prog [options] Sends whatever is on stdin to the recipient specified by your boto.cfg or whoever you specify in the options here. """ if __name__ == "__main__": from boto.utils import notify import sys from optparse import OptionParser parser = OptionParser(version=VERSION, usage=usage) parser.add_option("-t", "--to", help='Optional "To" address to send to (default from your boto.cfg)', action="store", default=None, dest="to") parser.add_option("-s", "--subject", help="Optional Subject to send this report as", action="store", default="Report", dest="subject") parser.add_option("-f", "--file", help="Optionally, read from a file instead of STDIN", action="store", default=None, dest="file") parser.add_option("--html", help="HTML Format the email", action="store_true", default=False, dest="html") parser.add_option("--no-instance-id", help="If set, don't append the instance id", action="store_false", default=True, dest="append_instance_id") (options, args) = parser.parse_args() if options.file: body = open(options.file, 'r').read() else: body = sys.stdin.read() if options.html: notify(options.subject, html_body=body, to_string=options.to, append_instance_id=options.append_instance_id) else: notify(options.subject, body=body, to_string=options.to, append_instance_id=options.append_instance_id) boto-2.20.1/bin/route53000077500000000000000000000215121225267101000145400ustar00rootroot00000000000000#!/usr/bin/env python # Author: Chris Moyer # # route53 is similar to sdbadmin, but for Route53; it's a simple # console utility to perform the most frequent tasks with Route53 # # Example usage. Use route53 get after each command to see how the # zone changes. # # Add a non-weighted record, change its value, then delete. Default TTL: # # route53 add_record ZPO9LGHZ43QB9 rr.example.com A 4.3.2.1 # route53 change_record ZPO9LGHZ43QB9 rr.example.com A 9.8.7.6 # route53 del_record ZPO9LGHZ43QB9 rr.example.com A 9.8.7.6 # # Add a weighted record with two different weights. Note that the TTL # must be specified as route53 uses positional parameters rather than # option flags: # # route53 add_record ZPO9LGHZ43QB9 wrr.example.com A 1.2.3.4 600 foo9 10 # route53 add_record ZPO9LGHZ43QB9 wrr.example.com A 4.3.2.1 600 foo8 10 # # route53 change_record ZPO9LGHZ43QB9 wrr.example.com A 9.9.9.9 600 foo8 10 # # route53 del_record ZPO9LGHZ43QB9 wrr.example.com A 1.2.3.4 600 foo9 10 # route53 del_record ZPO9LGHZ43QB9 wrr.example.com A 9.9.9.9 600 foo8 10 # # Add a non-weighted alias, change its value, then delete. Aliases inherit # their TTLs from the backing ELB: # # route53 add_alias ZPO9LGHZ43QB9 alias.example.com A Z3DZXE0Q79N41H lb-1218761514.us-east-1.elb.amazonaws.com. # route53 change_alias ZPO9LGHZ43QB9 alias.example.com. A Z3DZXE0Q79N41H lb2-1218761514.us-east-1.elb.amazonaws.com. # route53 del_alias ZPO9LGHZ43QB9 alias.example.com. A Z3DZXE0Q79N41H lb2-1218761514.us-east-1.elb.amazonaws.com.
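# A minimal programmatic sketch of what the add_record command below does
# under the hood (the zone ID and values are hypothetical, reusing the
# examples above):
#
#     import boto
#     from boto.route53.record import ResourceRecordSets
#     conn = boto.connect_route53()
#     changes = ResourceRecordSets(conn, 'ZPO9LGHZ43QB9')
#     change = changes.add_change('CREATE', 'rr.example.com', 'A', 600)
#     change.add_value('4.3.2.1')
#     changes.commit()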
def _print_zone_info(zoneinfo): print "="*80 print "| ID: %s" % zoneinfo['Id'].split("/")[-1] print "| Name: %s" % zoneinfo['Name'] print "| Ref: %s" % zoneinfo['CallerReference'] print "="*80 print zoneinfo['Config'] print def create(conn, hostname, caller_reference=None, comment=''): """Create a hosted zone, returning the nameservers""" response = conn.create_hosted_zone(hostname, caller_reference, comment) print "Pending, please add the following Name Servers:" for ns in response.NameServers: print "\t", ns def delete_zone(conn, hosted_zone_id): """Delete a hosted zone by ID""" response = conn.delete_hosted_zone(hosted_zone_id) print response def ls(conn): """List all hosted zones""" response = conn.get_all_hosted_zones() for zoneinfo in response['ListHostedZonesResponse']['HostedZones']: _print_zone_info(zoneinfo) def get(conn, hosted_zone_id, type=None, name=None, maxitems=None): """Get all the records for a single zone""" response = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=maxitems) # If a maximum number of items was set, we limit to that number # by turning the response into an actual list (copying it) # instead of allowing it to page if maxitems: response = response[:] print '%-40s %-5s %-20s %s' % ("Name", "Type", "TTL", "Value(s)") for record in response: print '%-40s %-5s %-20s %s' % (record.name, record.type, record.ttl, record.to_print()) def _add_del(conn, hosted_zone_id, change, name, type, identifier, weight, values, ttl, comment): from boto.route53.record import ResourceRecordSets changes = ResourceRecordSets(conn, hosted_zone_id, comment) change = changes.add_change(change, name, type, ttl, identifier=identifier, weight=weight) for value in values.split(','): change.add_value(value) print changes.commit() def _add_del_alias(conn, hosted_zone_id, change, name, type, identifier, weight, alias_hosted_zone_id, alias_dns_name, comment): from boto.route53.record import ResourceRecordSets changes = ResourceRecordSets(conn, hosted_zone_id, comment) change = changes.add_change(change, name, type, identifier=identifier, weight=weight) change.set_alias(alias_hosted_zone_id, alias_dns_name) print changes.commit() def add_record(conn, hosted_zone_id, name, type, values, ttl=600, identifier=None, weight=None, comment=""): """Add a new record to a zone. identifier and weight are optional.""" _add_del(conn, hosted_zone_id, "CREATE", name, type, identifier, weight, values, ttl, comment) def del_record(conn, hosted_zone_id, name, type, values, ttl=600, identifier=None, weight=None, comment=""): """Delete a record from a zone: name, type, ttl, identifier, and weight must match.""" _add_del(conn, hosted_zone_id, "DELETE", name, type, identifier, weight, values, ttl, comment) def add_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, identifier=None, weight=None, comment=""): """Add a new alias to a zone. 
identifier and weight are optional.""" _add_del_alias(conn, hosted_zone_id, "CREATE", name, type, identifier, weight, alias_hosted_zone_id, alias_dns_name, comment) def del_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, identifier=None, weight=None, comment=""): """Delete an alias from a zone: name, type, alias_hosted_zone_id, alias_dns_name, weight and identifier must match.""" _add_del_alias(conn, hosted_zone_id, "DELETE", name, type, identifier, weight, alias_hosted_zone_id, alias_dns_name, comment) def change_record(conn, hosted_zone_id, name, type, newvalues, ttl=600, identifier=None, weight=None, comment=""): """Delete and then add a record to a zone. identifier and weight are optional.""" from boto.route53.record import ResourceRecordSets changes = ResourceRecordSets(conn, hosted_zone_id, comment) # Assume there are not more than 10 WRRs for a given (name, type) responses = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=10) for response in responses: if response.name != name or response.type != type: continue if response.identifier != identifier or response.weight != weight: continue change1 = changes.add_change("DELETE", name, type, response.ttl, identifier=response.identifier, weight=response.weight) for old_value in response.resource_records: change1.add_value(old_value) change2 = changes.add_change("CREATE", name, type, ttl, identifier=identifier, weight=weight) for new_value in newvalues.split(','): change2.add_value(new_value) print changes.commit() def change_alias(conn, hosted_zone_id, name, type, new_alias_hosted_zone_id, new_alias_dns_name, identifier=None, weight=None, comment=""): """Delete and then add an alias to a zone. identifier and weight are optional.""" from boto.route53.record import ResourceRecordSets changes = ResourceRecordSets(conn, hosted_zone_id, comment) # Assume there are not more than 10 WRRs for a given (name, type) responses = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=10) for response in responses: if response.name != name or response.type != type: continue if response.identifier != identifier or response.weight != weight: continue change1 = changes.add_change("DELETE", name, type, identifier=response.identifier, weight=response.weight) change1.set_alias(response.alias_hosted_zone_id, response.alias_dns_name) change2 = changes.add_change("CREATE", name, type, identifier=identifier, weight=weight) change2.set_alias(new_alias_hosted_zone_id, new_alias_dns_name) print changes.commit() def help(conn, fnc=None): """Prints this help message""" import inspect self = sys.modules['__main__'] if fnc: try: cmd = getattr(self, fnc) except: cmd = None if not inspect.isfunction(cmd): print "No function named: %s found" % fnc sys.exit(2) (args, varargs, varkw, defaults) = inspect.getargspec(cmd) print cmd.__doc__ print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args[1:]])) else: print "Usage: route53 [command]" for cname in dir(self): if not cname.startswith("_"): cmd = getattr(self, cname) if inspect.isfunction(cmd): doc = cmd.__doc__ print "\t%-20s %s" % (cname, doc) sys.exit(1) if __name__ == "__main__": import boto import sys conn = boto.connect_route53() self = sys.modules['__main__'] if len(sys.argv) >= 2: try: cmd = getattr(self, sys.argv[1]) except: cmd = None args = sys.argv[2:] else: cmd = help args = [] if not cmd: cmd = help try: cmd(conn, *args) except TypeError, e: print e help(conn, cmd.__name__) 
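# Command dispatch note: positional arguments are passed straight through to
# the functions above, so (reusing the hypothetical zone ID from the header
# examples) "route53 get ZPO9LGHZ43QB9 A www.example.com 10" ends up calling
# get(conn, 'ZPO9LGHZ43QB9', 'A', 'www.example.com', '10').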
boto-2.20.1/bin/s3put000077500000000000000000000401531225267101000143120ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import getopt import sys import os import boto try: # multipart portions copyright Fabian Topfstedt # https://gist.github.com/924094 import math import mimetypes from multiprocessing import Pool from boto.s3.connection import S3Connection from filechunkio import FileChunkIO multipart_capable = True usage_flag_multipart_capable = """ [--multipart]""" usage_string_multipart_capable = """ multipart - Upload files as multiple parts. This needs filechunkio. Requires ListBucket, ListMultipartUploadParts, ListBucketMultipartUploads and PutObject permissions.""" except ImportError as err: multipart_capable = False usage_flag_multipart_capable = "" usage_string_multipart_capable = '\n\n "' + \ err.message[len('No module named '):] + \ '" is missing for multipart support ' DEFAULT_REGION = 'us-east-1' usage_string = """ SYNOPSIS s3put [-a/--access_key ] [-s/--secret_key ] -b/--bucket [-c/--callback ] [-d/--debug ] [-i/--ignore ] [-n/--no_op] [-p/--prefix ] [-k/--key_prefix ] [-q/--quiet] [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] [--header] [--region ] [--host ]""" + \ usage_flag_multipart_capable + """ path [path...] Where access_key - Your AWS Access Key ID. If not supplied, boto will use the value of the environment variable AWS_ACCESS_KEY_ID secret_key - Your AWS Secret Access Key. If not supplied, boto will use the value of the environment variable AWS_SECRET_ACCESS_KEY bucket_name - The name of the S3 bucket the file(s) should be copied to. path - A path to a directory or file that represents the items to be uploaded. If the path points to an individual file, that file will be uploaded to the specified bucket. If the path points to a directory, it will recursively traverse the directory and upload all files to the specified bucket. debug_level - 0 means no debug output (default), 1 means normal debug output from boto, and 2 means boto debug output plus request/response output from httplib ignore_dirs - a comma-separated list of directory names that will be ignored and not uploaded to S3. num_cb - The number of progress callbacks to display. The default is zero which means no callbacks. If you supplied a value of "-c 10" for example, the progress callback would be called 10 times for each file transferred. 
prefix - A file path prefix that will be stripped from the full path of the file when determining the key name in S3. For example, if the full path of a file is: /home/foo/bar/fie.baz and the prefix is specified as "-p /home/foo/" the resulting key name in S3 will be: /bar/fie.baz The prefix must end in a trailing separator and if it does not then one will be added. key_prefix - A prefix to be added to the S3 key name, after any stripping of the file path is done based on the "-p/--prefix" option. reduced - Use Reduced Redundancy storage grant - A canned ACL policy that will be granted on each file transferred to S3. The value provided must be one of the "canned" ACL policies supported by S3: private|public-read|public-read-write|authenticated-read no_overwrite - No files will be overwritten on S3; if the file/key exists on s3 it will be kept. This is useful for resuming interrupted transfers. Note this is not a sync: even if the file has been updated locally, if the key exists on s3 the file on s3 will not be updated. header - key=value pairs of extra header(s) to pass along in the request region - Manually set a region for buckets that are not in the US classic region. Normally the region is autodetected, but setting this yourself is more efficient. host - Hostname override, for using an endpoint other than AWS S3 """ + usage_string_multipart_capable + """ If the -n option is provided, no files will be transferred to S3 but informational messages will be printed about what would happen. """ def usage(status=1): print usage_string sys.exit(status) def submit_cb(bytes_so_far, total_bytes): print '%d bytes transferred / %d bytes total' % (bytes_so_far, total_bytes) def get_key_name(fullpath, prefix, key_prefix): if fullpath.startswith(prefix): key_name = fullpath[len(prefix):] else: key_name = fullpath l = key_name.split(os.sep) return key_prefix + '/'.join(l) def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num, source_path, offset, bytes, debug, cb, num_cb, amount_of_retries=10): """ Uploads a part with retries. """ if debug == 1: print "_upload_part(%s, %s, %s)" % (source_path, offset, bytes) def _upload(retries_left=amount_of_retries): try: if debug == 1: print 'Start uploading part #%d ...' % part_num conn = S3Connection(aws_key, aws_secret) conn.debug = debug bucket = conn.get_bucket(bucketname) for mp in bucket.get_all_multipart_uploads(): if mp.id == multipart_id: with FileChunkIO(source_path, 'r', offset=offset, bytes=bytes) as fp: mp.upload_part_from_file(fp=fp, part_num=part_num, cb=cb, num_cb=num_cb) break except Exception, exc: if retries_left: _upload(retries_left=retries_left - 1) else: print 'Failed uploading part #%d' % part_num raise exc else: if debug == 1: print '... Uploaded part #%d' % part_num _upload() def multipart_upload(bucketname, aws_key, aws_secret, source_path, keyname, reduced, debug, cb, num_cb, acl='private', headers={}, guess_mimetype=True, parallel_processes=4, region=DEFAULT_REGION): """ Parallel multipart upload.
""" conn = boto.s3.connect_to_region(region, aws_access_key_id=aws_key, aws_secret_access_key=aws_secret) conn.debug = debug bucket = conn.get_bucket(bucketname) if guess_mimetype: mtype = mimetypes.guess_type(keyname)[0] or 'application/octet-stream' headers.update({'Content-Type': mtype}) mp = bucket.initiate_multipart_upload(keyname, headers=headers, reduced_redundancy=reduced) source_size = os.stat(source_path).st_size bytes_per_chunk = max(int(math.sqrt(5242880) * math.sqrt(source_size)), 5242880) chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk))) pool = Pool(processes=parallel_processes) for i in range(chunk_amount): offset = i * bytes_per_chunk remaining_bytes = source_size - offset bytes = min([bytes_per_chunk, remaining_bytes]) part_num = i + 1 pool.apply_async(_upload_part, [bucketname, aws_key, aws_secret, mp.id, part_num, source_path, offset, bytes, debug, cb, num_cb]) pool.close() pool.join() if len(mp.get_all_parts()) == chunk_amount: mp.complete_upload() key = bucket.get_key(keyname) key.set_acl(acl) else: mp.cancel_upload() def singlepart_upload(bucket, key_name, fullpath, *kargs, **kwargs): """ Single upload. """ k = bucket.new_key(key_name) k.set_contents_from_filename(fullpath, *kargs, **kwargs) def expand_path(path): path = os.path.expanduser(path) path = os.path.expandvars(path) return os.path.abspath(path) def main(): # default values aws_access_key_id = None aws_secret_access_key = None bucket_name = '' ignore_dirs = [] debug = 0 cb = None num_cb = 0 quiet = False no_op = False prefix = '/' key_prefix = '' grant = None no_overwrite = False reduced = False headers = {} host = None multipart_requested = False region = None try: opts, args = getopt.getopt( sys.argv[1:], 'a:b:c::d:g:hi:k:np:qs:wr', ['access_key=', 'bucket=', 'callback=', 'debug=', 'help', 'grant=', 'ignore=', 'key_prefix=', 'no_op', 'prefix=', 'quiet', 'secret_key=', 'no_overwrite', 'reduced', 'header=', 'multipart', 'host=', 'region=']) except: usage(1) # parse opts for o, a in opts: if o in ('-h', '--help'): usage(0) if o in ('-a', '--access_key'): aws_access_key_id = a if o in ('-b', '--bucket'): bucket_name = a if o in ('-c', '--callback'): num_cb = int(a) cb = submit_cb if o in ('-d', '--debug'): debug = int(a) if o in ('-g', '--grant'): grant = a if o in ('-i', '--ignore'): ignore_dirs = a.split(',') if o in ('-n', '--no_op'): no_op = True if o in ('-w', '--no_overwrite'): no_overwrite = True if o in ('-p', '--prefix'): prefix = a if prefix[-1] != os.sep: prefix = prefix + os.sep prefix = expand_path(prefix) if o in ('-k', '--key_prefix'): key_prefix = a if o in ('-q', '--quiet'): quiet = True if o in ('-s', '--secret_key'): aws_secret_access_key = a if o in ('-r', '--reduced'): reduced = True if o in ('--header'): (k, v) = a.split("=", 1) headers[k] = v if o in ('--host'): host = a if o in ('--multipart'): if multipart_capable: multipart_requested = True else: print "multipart upload requested but not capable" sys.exit(4) if o in ('--region'): regions = boto.s3.regions() for region_info in regions: if region_info.name == a: region = a break else: raise ValueError('Invalid region %s specified' % a) if len(args) < 1: usage(2) if not bucket_name: print "bucket name is required!" 
usage(3) connect_args = { 'aws_access_key_id': aws_access_key_id, 'aws_secret_access_key': aws_secret_access_key } if host: connect_args['host'] = host c = boto.s3.connect_to_region(region or DEFAULT_REGION, **connect_args) c.debug = debug b = c.get_bucket(bucket_name, validate=False) # Attempt to determine location and warn if no --host or --region # arguments were passed. Then try to automagically figure out # what should have been passed and fix it. if host is None and region is None: try: location = b.get_location() # Classic region will be '', any other will have a name if location: print 'Bucket exists in %s but no host or region given!' % location # Override for EU, which is really Ireland according to the docs if location == 'EU': location = 'eu-west-1' print 'Automatically setting region to %s' % location # Here we create a new connection, and then take the existing # bucket and set it to use the new connection c = boto.s3.connect_to_region(location, **connect_args) c.debug = debug b.connection = c except Exception, e: if debug > 0: print e print 'Could not get bucket region info, skipping...' existing_keys_to_check_against = [] files_to_check_for_upload = [] for path in args: path = expand_path(path) # upload a directory of files recursively if os.path.isdir(path): if no_overwrite: if not quiet: print 'Getting list of existing keys to check against' for key in b.list(get_key_name(path, prefix, key_prefix)): existing_keys_to_check_against.append(key.name) for root, dirs, files in os.walk(path): for ignore in ignore_dirs: if ignore in dirs: dirs.remove(ignore) for path in files: if path.startswith("."): continue files_to_check_for_upload.append(os.path.join(root, path)) # upload a single file elif os.path.isfile(path): fullpath = os.path.abspath(path) key_name = get_key_name(fullpath, prefix, key_prefix) files_to_check_for_upload.append(fullpath) existing_keys_to_check_against.append(key_name) # we are trying to upload something unknown else: print "I don't know what %s is, so i can't upload it" % path for fullpath in files_to_check_for_upload: key_name = get_key_name(fullpath, prefix, key_prefix) if no_overwrite and key_name in existing_keys_to_check_against: if b.get_key(key_name): if not quiet: print 'Skipping %s as it exists in s3' % fullpath continue if not quiet: print 'Copying %s to %s/%s' % (fullpath, bucket_name, key_name) if not no_op: # 0-byte files don't work and also don't need multipart upload if os.stat(fullpath).st_size != 0 and multipart_capable and \ multipart_requested: multipart_upload(bucket_name, aws_access_key_id, aws_secret_access_key, fullpath, key_name, reduced, debug, cb, num_cb, grant or 'private', headers, region=region or DEFAULT_REGION) else: singlepart_upload(b, key_name, fullpath, cb=cb, num_cb=num_cb, policy=grant, reduced_redundancy=reduced, headers=headers) if __name__ == "__main__": main() boto-2.20.1/bin/sdbadmin000077500000000000000000000151751225267101000150230ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2009 Chris Moyer http://kopertop.blogspot.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright 
notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # # Tools to dump and recover an SDB domain # VERSION = "%prog version 1.0" import boto import time from boto import sdb from boto.compat import json def choice_input(options, default=None, title=None): """ Choice input """ if title == None: title = "Please choose" print title objects = [] for n, obj in enumerate(options): print "%s: %s" % (n, obj) objects.append(obj) choice = int(raw_input(">>> ")) try: choice = objects[choice] except: choice = default return choice def confirm(message="Are you sure?"): choice = raw_input("%s [yN] " % message) return choice and len(choice) > 0 and choice[0].lower() == "y" def dump_db(domain, file_name, use_json=False, sort_attributes=False): """ Dump SDB domain to file """ f = open(file_name, "w") if use_json: for item in domain: data = {"name": item.name, "attributes": item} print >> f, json.dumps(data, sort_keys=sort_attributes) else: doc = domain.to_xml(f) def empty_db(domain): """ Remove all entries from domain """ for item in domain: item.delete() def load_db(domain, file, use_json=False): """ Load a domain from a file. This doesn't overwrite any existing data in the domain, so if you want to do a full recovery and restore you need to call empty_db before calling this. :param domain: The SDB Domain object to load to :param file: The File to load the DB from """ if use_json: for line in file.readlines(): if line: data = json.loads(line) item = domain.new_item(data['name']) item.update(data['attributes']) item.save() else: domain.from_xml(file) def create_db(domain_name, region_name): """Create a new DB :param domain_name: Name of the domain to create :type domain_name: str :param region_name: Name of the region to create the domain in :type region_name: str """ sdb = boto.sdb.connect_to_region(region_name) return sdb.create_domain(domain_name) if __name__ == "__main__": from optparse import OptionParser parser = OptionParser(version=VERSION, usage="Usage: %prog [--dump|--load|--empty|--list|-l] [options]") # Commands parser.add_option("--dump", help="Dump domain to file", dest="dump", default=False, action="store_true") parser.add_option("--load", help="Load domain contents from file", dest="load", default=False, action="store_true") parser.add_option("--empty", help="Empty all contents of domain", dest="empty", default=False, action="store_true") parser.add_option("-l", "--list", help="List All domains", dest="list", default=False, action="store_true") parser.add_option("-c", "--create", help="Create domain", dest="create", default=False, action="store_true") parser.add_option("-a", "--all-domains", help="Operate on all domains", action="store_true", default=False, dest="all_domains") if json: parser.add_option("-j", "--use-json", help="Load/Store as JSON instead of XML", action="store_true", default=False, dest="json") parser.add_option("-s", "--sort-attributes", help="Sort the element attributes", action="store_true", default=False, dest="sort_attributes") parser.add_option("-d", "--domain", help="Do functions on domain (may be more than one)", action="append", dest="domains") parser.add_option("-f", "--file",
help="Input/Output file we're operating on", dest="file_name") parser.add_option("-r", "--region", help="Region (e.g. us-east-1[default] or eu-west-1)", default="us-east-1", dest="region_name") (options, args) = parser.parse_args() if options.create: for domain_name in options.domains: create_db(domain_name, options.region_name) exit() sdb = boto.sdb.connect_to_region(options.region_name) if options.list: for db in sdb.get_all_domains(): print db exit() if not options.dump and not options.load and not options.empty: parser.print_help() exit() # # Setup # if options.domains: domains = [] for domain_name in options.domains: domains.append(sdb.get_domain(domain_name)) elif options.all_domains: domains = sdb.get_all_domains() else: domains = [choice_input(options=sdb.get_all_domains(), title="No domain specified, please choose one")] # # Execute the commands # stime = time.time() if options.empty: if confirm("WARNING!!! Are you sure you want to empty the following domains?: %s" % domains): stime = time.time() for domain in domains: print "--------> Emptying %s <--------" % domain.name empty_db(domain) else: print "Canceling operations" exit() if options.dump: for domain in domains: print "--------> Dumping %s <---------" % domain.name if options.file_name: file_name = options.file_name else: file_name = "%s.db" % domain.name dump_db(domain, file_name, options.json, options.sort_attributes) if options.load: for domain in domains: print "---------> Loading %s <----------" % domain.name if options.file_name: file_name = options.file_name else: file_name = "%s.db" % domain.name load_db(domain, open(file_name, "rb"), options.json) total_time = round(time.time() - stime, 2) print "--------> Finished in %s <--------" % total_time boto-2.20.1/bin/taskadmin000077500000000000000000000072721225267101000152140ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2009 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # # Task/Job Administration utility # VERSION="0.1" __version__ = VERSION usage = """%prog [options] [command] Commands: list|ls List all Tasks in SDB delete <id> Delete Task with <id> get <name> Get Task <name> create|mk <name> <hour> <command> Create a new Task with <command> running every <hour> """ def list(): """List all Tasks in SDB""" from boto.manage.task import Task print "%-8s %-40s %s" % ("Hour", "Name", "Command") print "-"*100 for t in Task.all(): print "%-8s %-40s %s" % (t.hour, t.name, t.command) def get(name): """Get a task :param name: The name of the task to fetch :type name: str """ from boto.manage.task import Task q = Task.find() q.filter("name like", "%s%%" % name) for t in q: print "="*80 print "| ", t.id print "|%s" % ("-"*79) print "| Name: ", t.name print "| Hour: ", t.hour print "| Command: ", t.command if t.last_executed: print "| Last Run: ", t.last_executed.ctime() print "| Last Status: ", t.last_status print "| Last Run Log: ", t.last_output print "="*80 def delete(id): from boto.manage.task import Task t = Task.get_by_id(id) print "Deleting task: %s" % t.name if raw_input("Are you sure? ").lower() in ["y", "yes"]: t.delete() print "Deleted" else: print "Canceled" def create(name, hour, command): """Create a new task :param name: Name of the task to create :type name: str :param hour: What hour to run it at, "*" for every hour :type hour: str :param command: The command to execute :type command: str """ from boto.manage.task import Task t = Task() t.name = name t.hour = hour t.command = command t.put() print "Created task: %s" % t.id if __name__ == "__main__": try: import readline except ImportError: pass import boto import sys from optparse import OptionParser from boto.mashups.iobject import IObject parser = OptionParser(version=__version__, usage=usage) (options, args) = parser.parse_args() if len(args) < 1: parser.print_help() sys.exit(1) command = args[0].lower() if command in ("ls", "list"): list() elif command == "get": get(args[1]) elif command == "create": create(args[1], args[2], args[3]) elif command == "delete": delete(args[1])
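# A hedged CLI sketch of the commands above (task name, hour, and command
# are hypothetical):
#     taskadmin create nightly-backup "*" "/usr/local/bin/backup.sh"
#     taskadmin list
#     taskadmin get nightly-backup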
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.pyami.config import Config, BotoConfigLocations from boto.storage_uri import BucketStorageUri, FileStorageUri import boto.plugin import os import platform import re import sys import logging import logging.config import urlparse from boto.exception import InvalidUriError __version__ = '2.20.1' Version = __version__ # for backware compatibility UserAgent = 'Boto/%s Python/%s %s/%s' % ( __version__, platform.python_version(), platform.system(), platform.release() ) config = Config() # Regex to disallow buckets violating charset or not [3..255] chars total. BUCKET_NAME_RE = re.compile(r'^[a-zA-Z0-9][a-zA-Z0-9\._-]{1,253}[a-zA-Z0-9]$') # Regex to disallow buckets with individual DNS labels longer than 63. TOO_LONG_DNS_NAME_COMP = re.compile(r'[-_a-z0-9]{64}') GENERATION_RE = re.compile(r'(?P.+)' r'#(?P[0-9]+)$') VERSION_RE = re.compile('(?P.+)#(?P.+)$') def init_logging(): for file in BotoConfigLocations: try: logging.config.fileConfig(os.path.expanduser(file)) except: pass class NullHandler(logging.Handler): def emit(self, record): pass log = logging.getLogger('boto') perflog = logging.getLogger('boto.perf') log.addHandler(NullHandler()) perflog.addHandler(NullHandler()) init_logging() # convenience function to set logging to a particular file def set_file_logger(name, filepath, level=logging.INFO, format_string=None): global log if not format_string: format_string = "%(asctime)s %(name)s [%(levelname)s]:%(message)s" logger = logging.getLogger(name) logger.setLevel(level) fh = logging.FileHandler(filepath) fh.setLevel(level) formatter = logging.Formatter(format_string) fh.setFormatter(formatter) logger.addHandler(fh) log = logger def set_stream_logger(name, level=logging.DEBUG, format_string=None): global log if not format_string: format_string = "%(asctime)s %(name)s [%(levelname)s]:%(message)s" logger = logging.getLogger(name) logger.setLevel(level) fh = logging.StreamHandler() fh.setLevel(level) formatter = logging.Formatter(format_string) fh.setFormatter(formatter) logger.addHandler(fh) log = logger def connect_sqs(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.sqs.connection.SQSConnection` :return: A connection to Amazon's SQS """ from boto.sqs.connection import SQSConnection return SQSConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_s3(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.s3.connection.S3Connection` :return: A connection to Amazon's S3 """ from boto.s3.connection import S3Connection return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_gs(gs_access_key_id=None, gs_secret_access_key=None, **kwargs): """ @type gs_access_key_id: string @param gs_access_key_id: Your Google Cloud Storage Access Key ID @type gs_secret_access_key: string @param gs_secret_access_key: Your Google Cloud Storage Secret Access Key @rtype: L{GSConnection} @return: A connection 
to Google's Storage service """ from boto.gs.connection import GSConnection return GSConnection(gs_access_key_id, gs_secret_access_key, **kwargs) def connect_ec2(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.connection.EC2Connection` :return: A connection to Amazon's EC2 """ from boto.ec2.connection import EC2Connection return EC2Connection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_elb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.elb.ELBConnection` :return: A connection to Amazon's Load Balancing Service """ from boto.ec2.elb import ELBConnection return ELBConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_autoscale(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.autoscale.AutoScaleConnection` :return: A connection to Amazon's Auto Scaling Service """ from boto.ec2.autoscale import AutoScaleConnection return AutoScaleConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_cloudwatch(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.cloudwatch.CloudWatchConnection` :return: A connection to Amazon's EC2 Monitoring service """ from boto.ec2.cloudwatch import CloudWatchConnection return CloudWatchConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_sdb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.sdb.connection.SDBConnection` :return: A connection to Amazon's SDB """ from boto.sdb.connection import SDBConnection return SDBConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_fps(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.fps.connection.FPSConnection` :return: A connection to FPS """ from boto.fps.connection import FPSConnection return FPSConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_mturk(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.mturk.connection.MTurkConnection` :return: A connection to MTurk """ from boto.mturk.connection import MTurkConnection return MTurkConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def 
connect_cloudfront(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.cloudfront.CloudFrontConnection` :return: A connection to Amazon's CloudFront service """ from boto.cloudfront import CloudFrontConnection return CloudFrontConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_vpc(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.vpc.VPCConnection` :return: A connection to VPC """ from boto.vpc import VPCConnection return VPCConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_rds(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.rds.RDSConnection` :return: A connection to RDS """ from boto.rds import RDSConnection return RDSConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_emr(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.emr.EmrConnection` :return: A connection to Elastic MapReduce """ from boto.emr import EmrConnection return EmrConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_sns(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.sns.SNSConnection` :return: A connection to Amazon's SNS """ from boto.sns import SNSConnection return SNSConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_iam(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.iam.IAMConnection` :return: A connection to Amazon's IAM """ from boto.iam import IAMConnection return IAMConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_route53(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.route53.Route53Connection` :return: A connection to Amazon's Route53 DNS Service """ from boto.route53 import Route53Connection return Route53Connection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_cloudformation(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.cloudformation.CloudFormationConnection` :return: A connection to Amazon's
CloudFormation Service """ from boto.cloudformation import CloudFormationConnection return CloudFormationConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_euca(host=None, aws_access_key_id=None, aws_secret_access_key=None, port=8773, path='/services/Eucalyptus', is_secure=False, **kwargs): """ Connect to a Eucalyptus service. :type host: string :param host: the host name or ip address of the Eucalyptus server :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.connection.EC2Connection` :return: A connection to Eucalyptus server """ from boto.ec2 import EC2Connection from boto.ec2.regioninfo import RegionInfo # Check for values in boto config, if not supplied as args if not aws_access_key_id: aws_access_key_id = config.get('Credentials', 'euca_access_key_id', None) if not aws_secret_access_key: aws_secret_access_key = config.get('Credentials', 'euca_secret_access_key', None) if not host: host = config.get('Boto', 'eucalyptus_host', None) reg = RegionInfo(name='eucalyptus', endpoint=host) return EC2Connection(aws_access_key_id, aws_secret_access_key, region=reg, port=port, path=path, is_secure=is_secure, **kwargs) def connect_glacier(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.glacier.layer2.Layer2` :return: A connection to Amazon's Glacier Service """ from boto.glacier.layer2 import Layer2 return Layer2(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_ec2_endpoint(url, aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ Connect to an EC2 Api endpoint. Additional arguments are passed through to connect_ec2. :type url: string :param url: A url for the ec2 api endpoint to connect to :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.connection.EC2Connection` :return: A connection to Eucalyptus server """ from boto.ec2.regioninfo import RegionInfo purl = urlparse.urlparse(url) kwargs['port'] = purl.port kwargs['host'] = purl.hostname kwargs['path'] = purl.path if not 'is_secure' in kwargs: kwargs['is_secure'] = (purl.scheme == "https") kwargs['region'] = RegionInfo(name=purl.hostname, endpoint=purl.hostname) kwargs['aws_access_key_id'] = aws_access_key_id kwargs['aws_secret_access_key'] = aws_secret_access_key return(connect_ec2(**kwargs)) def connect_walrus(host=None, aws_access_key_id=None, aws_secret_access_key=None, port=8773, path='/services/Walrus', is_secure=False, **kwargs): """ Connect to a Walrus service. 
:type host: string :param host: the host name or ip address of the Walrus server :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.s3.connection.S3Connection` :return: A connection to Walrus """ from boto.s3.connection import S3Connection from boto.s3.connection import OrdinaryCallingFormat # Check for values in boto config, if not supplied as args if not aws_access_key_id: aws_access_key_id = config.get('Credentials', 'euca_access_key_id', None) if not aws_secret_access_key: aws_secret_access_key = config.get('Credentials', 'euca_secret_access_key', None) if not host: host = config.get('Boto', 'walrus_host', None) return S3Connection(aws_access_key_id, aws_secret_access_key, host=host, port=port, path=path, calling_format=OrdinaryCallingFormat(), is_secure=is_secure, **kwargs) def connect_ses(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ses.SESConnection` :return: A connection to Amazon's SES """ from boto.ses import SESConnection return SESConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_sts(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.sts.STSConnection` :return: A connection to Amazon's STS """ from boto.sts import STSConnection return STSConnection(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_ia(ia_access_key_id=None, ia_secret_access_key=None, is_secure=False, **kwargs): """ Connect to the Internet Archive via their S3-like API. :type ia_access_key_id: string :param ia_access_key_id: Your IA Access Key ID. This will also look in your boto config file for an entry in the Credentials section called "ia_access_key_id" :type ia_secret_access_key: string :param ia_secret_access_key: Your IA Secret Access Key. This will also look in your boto config file for an entry in the Credentials section called "ia_secret_access_key" :rtype: :class:`boto.s3.connection.S3Connection` :return: A connection to the Internet Archive """ from boto.s3.connection import S3Connection from boto.s3.connection import OrdinaryCallingFormat access_key = config.get('Credentials', 'ia_access_key_id', ia_access_key_id) secret_key = config.get('Credentials', 'ia_secret_access_key', ia_secret_access_key) return S3Connection(access_key, secret_key, host='s3.us.archive.org', calling_format=OrdinaryCallingFormat(), is_secure=is_secure, **kwargs) def connect_dynamodb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.dynamodb.layer2.Layer2` :return: A connection to the Layer2 interface for DynamoDB. 
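A minimal usage sketch (assumes credentials are available via the
usual boto config locations; the table name 'mytable' and the key
value are illustrative):

    >>> import boto
    >>> conn = boto.connect_dynamodb()
    >>> table = conn.get_table('mytable')
    >>> item = table.get_item(hash_key='some-key')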
""" from boto.dynamodb.layer2 import Layer2 return Layer2(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_swf(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.swf.layer1.Layer1` :return: A connection to the Layer1 interface for SWF. """ from boto.swf.layer1 import Layer1 return Layer1(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_cloudsearch(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ec2.autoscale.CloudSearchConnection` :return: A connection to Amazon's CloudSearch service """ from boto.cloudsearch.layer2 import Layer2 return Layer2(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_beanstalk(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.beanstalk.layer1.Layer1` :return: A connection to Amazon's Elastic Beanstalk service """ from boto.beanstalk.layer1 import Layer1 return Layer1(aws_access_key_id, aws_secret_access_key, **kwargs) def connect_elastictranscoder(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.ets.layer1.ElasticTranscoderConnection` :return: A connection to Amazon's Elastic Transcoder service """ from boto.elastictranscoder.layer1 import ElasticTranscoderConnection return ElasticTranscoderConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs) def connect_opsworks(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): from boto.opsworks.layer1 import OpsWorksConnection return OpsWorksConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs) def connect_redshift(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.redshift.layer1.RedshiftConnection` :return: A connection to Amazon's Redshift service """ from boto.redshift.layer1 import RedshiftConnection return RedshiftConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs ) def connect_support(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.support.layer1.SupportConnection` :return: A connection to Amazon's Support service """ from boto.support.layer1 import SupportConnection return SupportConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs ) def connect_cloudtrail(aws_access_key_id=None, aws_secret_access_key=None, 
**kwargs): """ Connect to AWS CloudTrail :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.cloudtrail.layer1.CloudtrailConnection` :return: A connection to the AWS Cloudtrail service """ from boto.cloudtrail.layer1 import CloudTrailConnection return CloudTrailConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs ) def connect_directconnect(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ Connect to AWS DirectConnect :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key :rtype: :class:`boto.directconnect.layer1.DirectConnectConnection` :return: A connection to the AWS DirectConnect service """ from boto.directconnect.layer1 import DirectConnectConnection return DirectConnectConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs ) def connect_kinesis(aws_access_key_id=None, aws_secret_access_key=None, **kwargs): """ Connect to Amazon Kinesis :type aws_access_key_id: string :param aws_access_key_id: Your AWS Access Key ID :type aws_secret_access_key: string :param aws_secret_access_key: Your AWS Secret Access Key rtype: :class:`boto.kinesis.layer1.KinesisConnection` :return: A connection to the Amazon Kinesis service """ from boto.kinesis.layer1 import KinesisConnection return KinesisConnection( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, **kwargs ) def storage_uri(uri_str, default_scheme='file', debug=0, validate=True, bucket_storage_uri_class=BucketStorageUri, suppress_consec_slashes=True, is_latest=False): """ Instantiate a StorageUri from a URI string. :type uri_str: string :param uri_str: URI naming bucket + optional object. :type default_scheme: string :param default_scheme: default scheme for scheme-less URIs. :type debug: int :param debug: debug level to pass in to boto connection (range 0..2). :type validate: bool :param validate: whether to check for bucket name validity. :type bucket_storage_uri_class: BucketStorageUri interface. :param bucket_storage_uri_class: Allows mocking for unit tests. :param suppress_consec_slashes: If provided, controls whether consecutive slashes will be suppressed in key paths. :type is_latest: bool :param is_latest: whether this versioned object represents the current version. We allow validate to be disabled to allow caller to implement bucket-level wildcarding (outside the boto library; see gsutil). :rtype: :class:`boto.StorageUri` subclass :return: StorageUri subclass for given URI. ``uri_str`` must be one of the following formats: * gs://bucket/name * gs://bucket/name#ver * s3://bucket/name * gs://bucket * s3://bucket * filename (which could be a Unix path like /a/b/c or a Windows path like C:\a\b\c) The last example uses the default scheme ('file', unless overridden). """ version_id = None generation = None # Manually parse URI components instead of using urlparse.urlparse because # what we're calling URIs don't really fit the standard syntax for URIs # (the latter includes an optional host/net location part). 
end_scheme_idx = uri_str.find('://') if end_scheme_idx == -1: scheme = default_scheme.lower() path = uri_str else: scheme = uri_str[0:end_scheme_idx].lower() path = uri_str[end_scheme_idx + 3:] if scheme not in ['file', 's3', 'gs']: raise InvalidUriError('Unrecognized scheme "%s"' % scheme) if scheme == 'file': # For file URIs we have no bucket name, and use the complete path # (minus 'file://') as the object name. is_stream = False if path == '-': is_stream = True return FileStorageUri(path, debug, is_stream) else: path_parts = path.split('/', 1) bucket_name = path_parts[0] object_name = '' # If validate enabled, ensure the bucket name is valid, to avoid # possibly confusing other parts of the code. (For example if we didn't # catch bucket names containing ':', when a user tried to connect to # the server with that name they might get a confusing error about # non-integer port numbers.) if (validate and bucket_name and (not BUCKET_NAME_RE.match(bucket_name) or TOO_LONG_DNS_NAME_COMP.search(bucket_name))): raise InvalidUriError('Invalid bucket name in URI "%s"' % uri_str) if scheme == 'gs': match = GENERATION_RE.search(path) if match: md = match.groupdict() versionless_uri_str = md['versionless_uri_str'] path_parts = versionless_uri_str.split('/', 1) generation = int(md['generation']) elif scheme == 's3': match = VERSION_RE.search(path) if match: md = match.groupdict() versionless_uri_str = md['versionless_uri_str'] path_parts = versionless_uri_str.split('/', 1) version_id = md['version_id'] else: raise InvalidUriError('Unrecognized scheme "%s"' % scheme) if len(path_parts) > 1: object_name = path_parts[1] return bucket_storage_uri_class( scheme, bucket_name, object_name, debug, suppress_consec_slashes=suppress_consec_slashes, version_id=version_id, generation=generation, is_latest=is_latest) def storage_uri_for_key(key): """Returns a StorageUri for the given key. :type key: :class:`boto.s3.key.Key` or subclass :param key: URI naming bucket + optional object. """ if not isinstance(key, boto.s3.key.Key): raise InvalidUriError('Requested key (%s) is not a subclass of ' 'boto.s3.key.Key' % str(type(key))) prov_name = key.bucket.connection.provider.get_provider_name() uri_str = '%s://%s/%s' % (prov_name, key.bucket.name, key.name) return storage_uri(uri_str) boto.plugin.load_plugins(config) boto-2.20.1/boto/auth.py000066400000000000000000000674171225267101000150300ustar00rootroot00000000000000# Copyright 2010 Google Inc. # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Handles authentication required to AWS and GS """ import base64 import boto import boto.auth_handler import boto.exception import boto.plugin import boto.utils import copy import datetime from email.utils import formatdate import hmac import sys import time import urllib import posixpath from boto.auth_handler import AuthHandler from boto.exception import BotoClientError # # the following is necessary because of the incompatibilities # between Python 2.4, 2.5, and 2.6 as well as the fact that some # people running 2.4 have installed hashlib as a separate module # this fix was provided by boto user mccormix. # see: http://code.google.com/p/boto/issues/detail?id=172 # for more details. # try: from hashlib import sha1 as sha from hashlib import sha256 as sha256 if sys.version[:3] == "2.4": # we are using an hmac that expects a .new() method. class Faker: def __init__(self, which): self.which = which self.digest_size = self.which().digest_size def new(self, *args, **kwargs): return self.which(*args, **kwargs) sha = Faker(sha) sha256 = Faker(sha256) except ImportError: import sha sha256 = None class HmacKeys(object): """Key based Auth handler helper.""" def __init__(self, host, config, provider): if provider.access_key is None or provider.secret_key is None: raise boto.auth_handler.NotReadyToAuthenticate() self.host = host self.update_provider(provider) def update_provider(self, provider): self._provider = provider self._hmac = hmac.new(self._provider.secret_key, digestmod=sha) if sha256: self._hmac_256 = hmac.new(self._provider.secret_key, digestmod=sha256) else: self._hmac_256 = None def algorithm(self): if self._hmac_256: return 'HmacSHA256' else: return 'HmacSHA1' def _get_hmac(self): if self._hmac_256: digestmod = sha256 else: digestmod = sha return hmac.new(self._provider.secret_key, digestmod=digestmod) def sign_string(self, string_to_sign): new_hmac = self._get_hmac() new_hmac.update(string_to_sign) return base64.encodestring(new_hmac.digest()).strip() def __getstate__(self): pickled_dict = copy.copy(self.__dict__) del pickled_dict['_hmac'] del pickled_dict['_hmac_256'] return pickled_dict def __setstate__(self, dct): self.__dict__ = dct self.update_provider(self._provider) class AnonAuthHandler(AuthHandler, HmacKeys): """ Implements Anonymous requests. 
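For example, publicly readable S3 data can be fetched with no
credentials by connecting anonymously (a sketch; the bucket name is
illustrative):

    >>> from boto.s3.connection import S3Connection
    >>> conn = S3Connection(anon=True)
    >>> bucket = conn.get_bucket('some-public-bucket')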
""" capability = ['anon'] def __init__(self, host, config, provider): AuthHandler.__init__(self, host, config, provider) def add_auth(self, http_request, **kwargs): pass class HmacAuthV1Handler(AuthHandler, HmacKeys): """ Implements the HMAC request signing used by S3 and GS.""" capability = ['hmac-v1', 's3'] def __init__(self, host, config, provider): AuthHandler.__init__(self, host, config, provider) HmacKeys.__init__(self, host, config, provider) self._hmac_256 = None def update_provider(self, provider): super(HmacAuthV1Handler, self).update_provider(provider) self._hmac_256 = None def add_auth(self, http_request, **kwargs): headers = http_request.headers method = http_request.method auth_path = http_request.auth_path if 'Date' not in headers: headers['Date'] = formatdate(usegmt=True) if self._provider.security_token: key = self._provider.security_token_header headers[key] = self._provider.security_token string_to_sign = boto.utils.canonical_string(method, auth_path, headers, None, self._provider) boto.log.debug('StringToSign:\n%s' % string_to_sign) b64_hmac = self.sign_string(string_to_sign) auth_hdr = self._provider.auth_header auth = ("%s %s:%s" % (auth_hdr, self._provider.access_key, b64_hmac)) boto.log.debug('Signature:\n%s' % auth) headers['Authorization'] = auth class HmacAuthV2Handler(AuthHandler, HmacKeys): """ Implements the simplified HMAC authorization used by CloudFront. """ capability = ['hmac-v2', 'cloudfront'] def __init__(self, host, config, provider): AuthHandler.__init__(self, host, config, provider) HmacKeys.__init__(self, host, config, provider) self._hmac_256 = None def update_provider(self, provider): super(HmacAuthV2Handler, self).update_provider(provider) self._hmac_256 = None def add_auth(self, http_request, **kwargs): headers = http_request.headers if 'Date' not in headers: headers['Date'] = formatdate(usegmt=True) if self._provider.security_token: key = self._provider.security_token_header headers[key] = self._provider.security_token b64_hmac = self.sign_string(headers['Date']) auth_hdr = self._provider.auth_header headers['Authorization'] = ("%s %s:%s" % (auth_hdr, self._provider.access_key, b64_hmac)) class HmacAuthV3Handler(AuthHandler, HmacKeys): """Implements the new Version 3 HMAC authorization used by Route53.""" capability = ['hmac-v3', 'route53', 'ses'] def __init__(self, host, config, provider): AuthHandler.__init__(self, host, config, provider) HmacKeys.__init__(self, host, config, provider) def add_auth(self, http_request, **kwargs): headers = http_request.headers if 'Date' not in headers: headers['Date'] = formatdate(usegmt=True) if self._provider.security_token: key = self._provider.security_token_header headers[key] = self._provider.security_token b64_hmac = self.sign_string(headers['Date']) s = "AWS3-HTTPS AWSAccessKeyId=%s," % self._provider.access_key s += "Algorithm=%s,Signature=%s" % (self.algorithm(), b64_hmac) headers['X-Amzn-Authorization'] = s class HmacAuthV3HTTPHandler(AuthHandler, HmacKeys): """ Implements the new Version 3 HMAC authorization used by DynamoDB. """ capability = ['hmac-v3-http'] def __init__(self, host, config, provider): AuthHandler.__init__(self, host, config, provider) HmacKeys.__init__(self, host, config, provider) def headers_to_sign(self, http_request): """ Select the headers from the request that need to be included in the StringToSign. 
""" headers_to_sign = {} headers_to_sign = {'Host': self.host} for name, value in http_request.headers.items(): lname = name.lower() if lname.startswith('x-amz'): headers_to_sign[name] = value return headers_to_sign def canonical_headers(self, headers_to_sign): """ Return the headers that need to be included in the StringToSign in their canonical form by converting all header keys to lower case, sorting them in alphabetical order and then joining them into a string, separated by newlines. """ l = sorted(['%s:%s' % (n.lower().strip(), headers_to_sign[n].strip()) for n in headers_to_sign]) return '\n'.join(l) def string_to_sign(self, http_request): """ Return the canonical StringToSign as well as a dict containing the original version of all headers that were included in the StringToSign. """ headers_to_sign = self.headers_to_sign(http_request) canonical_headers = self.canonical_headers(headers_to_sign) string_to_sign = '\n'.join([http_request.method, http_request.auth_path, '', canonical_headers, '', http_request.body]) return string_to_sign, headers_to_sign def add_auth(self, req, **kwargs): """ Add AWS3 authentication to a request. :type req: :class`boto.connection.HTTPRequest` :param req: The HTTPRequest object. """ # This could be a retry. Make sure the previous # authorization header is removed first. if 'X-Amzn-Authorization' in req.headers: del req.headers['X-Amzn-Authorization'] req.headers['X-Amz-Date'] = formatdate(usegmt=True) if self._provider.security_token: req.headers['X-Amz-Security-Token'] = self._provider.security_token string_to_sign, headers_to_sign = self.string_to_sign(req) boto.log.debug('StringToSign:\n%s' % string_to_sign) hash_value = sha256(string_to_sign).digest() b64_hmac = self.sign_string(hash_value) s = "AWS3 AWSAccessKeyId=%s," % self._provider.access_key s += "Algorithm=%s," % self.algorithm() s += "SignedHeaders=%s," % ';'.join(headers_to_sign) s += "Signature=%s" % b64_hmac req.headers['X-Amzn-Authorization'] = s class HmacAuthV4Handler(AuthHandler, HmacKeys): """ Implements the new Version 4 HMAC authorization. """ capability = ['hmac-v4'] def __init__(self, host, config, provider, service_name=None, region_name=None): AuthHandler.__init__(self, host, config, provider) HmacKeys.__init__(self, host, config, provider) # You can set the service_name and region_name to override the # values which would otherwise come from the endpoint, e.g. # ..amazonaws.com. self.service_name = service_name self.region_name = region_name def _sign(self, key, msg, hex=False): if hex: sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest() else: sig = hmac.new(key, msg.encode('utf-8'), sha256).digest() return sig def headers_to_sign(self, http_request): """ Select the headers from the request that need to be included in the StringToSign. 
""" host_header_value = self.host_header(self.host, http_request) headers_to_sign = {} headers_to_sign = {'Host': host_header_value} for name, value in http_request.headers.items(): lname = name.lower() if lname.startswith('x-amz'): headers_to_sign[name] = value return headers_to_sign def host_header(self, host, http_request): port = http_request.port secure = http_request.protocol == 'https' if ((port == 80 and not secure) or (port == 443 and secure)): return host return '%s:%s' % (host, port) def query_string(self, http_request): parameter_names = sorted(http_request.params.keys()) pairs = [] for pname in parameter_names: pval = str(http_request.params[pname]).encode('utf-8') pairs.append(urllib.quote(pname, safe='') + '=' + urllib.quote(pval, safe='-_~')) return '&'.join(pairs) def canonical_query_string(self, http_request): # POST requests pass parameters in through the # http_request.body field. if http_request.method == 'POST': return "" l = [] for param in sorted(http_request.params): value = str(http_request.params[param]) l.append('%s=%s' % (urllib.quote(param, safe='-_.~'), urllib.quote(value, safe='-_.~'))) return '&'.join(l) def canonical_headers(self, headers_to_sign): """ Return the headers that need to be included in the StringToSign in their canonical form by converting all header keys to lower case, sorting them in alphabetical order and then joining them into a string, separated by newlines. """ l = sorted(['%s:%s' % (n.lower().strip(), ' '.join(headers_to_sign[n].strip().split())) for n in headers_to_sign]) return '\n'.join(l) def signed_headers(self, headers_to_sign): l = ['%s' % n.lower().strip() for n in headers_to_sign] l = sorted(l) return ';'.join(l) def canonical_uri(self, http_request): path = http_request.auth_path # Normalize the path # in windows normpath('/') will be '\\' so we chane it back to '/' normalized = posixpath.normpath(path).replace('\\','/') # Then urlencode whatever's left. encoded = urllib.quote(normalized) if len(path) > 1 and path.endswith('/'): encoded += '/' return encoded def payload(self, http_request): body = http_request.body # If the body is a file like object, we can use # boto.utils.compute_hash, which will avoid reading # the entire body into memory. if hasattr(body, 'seek') and hasattr(body, 'read'): return boto.utils.compute_hash(body, hash_algorithm=sha256)[0] return sha256(http_request.body).hexdigest() def canonical_request(self, http_request): cr = [http_request.method.upper()] cr.append(self.canonical_uri(http_request)) cr.append(self.canonical_query_string(http_request)) headers_to_sign = self.headers_to_sign(http_request) cr.append(self.canonical_headers(headers_to_sign) + '\n') cr.append(self.signed_headers(headers_to_sign)) cr.append(self.payload(http_request)) return '\n'.join(cr) def scope(self, http_request): scope = [self._provider.access_key] scope.append(http_request.timestamp) scope.append(http_request.region_name) scope.append(http_request.service_name) scope.append('aws4_request') return '/'.join(scope) def credential_scope(self, http_request): scope = [] http_request.timestamp = http_request.headers['X-Amz-Date'][0:8] scope.append(http_request.timestamp) # The service_name and region_name either come from: # * The service_name/region_name attrs or (if these values are None) # * parsed from the endpoint ..amazonaws.com. 
parts = http_request.host.split('.') if self.region_name is not None: region_name = self.region_name elif len(parts) > 1: if parts[1] == 'us-gov': region_name = 'us-gov-west-1' else: if len(parts) == 3: region_name = 'us-east-1' else: region_name = parts[1] else: region_name = parts[0] if self.service_name is not None: service_name = self.service_name else: service_name = parts[0] http_request.service_name = service_name http_request.region_name = region_name scope.append(http_request.region_name) scope.append(http_request.service_name) scope.append('aws4_request') return '/'.join(scope) def string_to_sign(self, http_request, canonical_request): """ Return the canonical StringToSign as well as a dict containing the original version of all headers that were included in the StringToSign. """ sts = ['AWS4-HMAC-SHA256'] sts.append(http_request.headers['X-Amz-Date']) sts.append(self.credential_scope(http_request)) sts.append(sha256(canonical_request).hexdigest()) return '\n'.join(sts) def signature(self, http_request, string_to_sign): key = self._provider.secret_key k_date = self._sign(('AWS4' + key).encode('utf-8'), http_request.timestamp) k_region = self._sign(k_date, http_request.region_name) k_service = self._sign(k_region, http_request.service_name) k_signing = self._sign(k_service, 'aws4_request') return self._sign(k_signing, string_to_sign, hex=True) def add_auth(self, req, **kwargs): """ Add AWS4 authentication to a request. :type req: :class`boto.connection.HTTPRequest` :param req: The HTTPRequest object. """ # This could be a retry. Make sure the previous # authorization header is removed first. if 'X-Amzn-Authorization' in req.headers: del req.headers['X-Amzn-Authorization'] now = datetime.datetime.utcnow() req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ') if self._provider.security_token: req.headers['X-Amz-Security-Token'] = self._provider.security_token qs = self.query_string(req) if qs and req.method == 'POST': # Stash request parameters into post body # before we generate the signature. req.body = qs req.headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8' req.headers['Content-Length'] = str(len(req.body)) else: # Safe to modify req.path here since # the signature will use req.auth_path. req.path = req.path.split('?')[0] req.path = req.path + '?' + qs canonical_request = self.canonical_request(req) boto.log.debug('CanonicalRequest:\n%s' % canonical_request) string_to_sign = self.string_to_sign(req, canonical_request) boto.log.debug('StringToSign:\n%s' % string_to_sign) signature = self.signature(req, string_to_sign) boto.log.debug('Signature:\n%s' % signature) headers_to_sign = self.headers_to_sign(req) l = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(req)] l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign)) l.append('Signature=%s' % signature) req.headers['Authorization'] = ','.join(l) class QueryAuthHandler(AuthHandler): """ Provides pure query construction (no actual signing). Mostly useful for STS' ``assume_role_with_web_identity``. Does **NOT** escape query string values! """ capability = ['pure-query'] def _escape_value(self, value): # Would normally be ``return urllib.quote(value)``. 
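# For example (a sketch; the role ARN is illustrative), params such as
# {'Action': 'AssumeRoleWithWebIdentity',
#  'RoleArn': 'arn:aws:iam::123456789012:role/demo'}
# are appended to the path as
# '?Action=AssumeRoleWithWebIdentity&RoleArn=arn:aws:iam::123456789012:role/demo'
# with the colons and slashes deliberately left unescaped.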
return value def _build_query_string(self, params): keys = params.keys() keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower())) pairs = [] for key in keys: val = boto.utils.get_utf8_value(params[key]) pairs.append(key + '=' + self._escape_value(val)) return '&'.join(pairs) def add_auth(self, http_request, **kwargs): headers = http_request.headers params = http_request.params qs = self._build_query_string( http_request.params ) boto.log.debug('query_string: %s' % qs) headers['Content-Type'] = 'application/json; charset=UTF-8' http_request.body = '' # if this is a retried request, the qs from the previous try will # already be there, we need to get rid of that and rebuild it http_request.path = http_request.path.split('?')[0] http_request.path = http_request.path + '?' + qs class QuerySignatureHelper(HmacKeys): """ Helper for query-signature-based auth handlers. Concrete subclasses need to implement the _calc_signature method. """ def add_auth(self, http_request, **kwargs): headers = http_request.headers params = http_request.params params['AWSAccessKeyId'] = self._provider.access_key params['SignatureVersion'] = self.SignatureVersion params['Timestamp'] = boto.utils.get_ts() qs, signature = self._calc_signature( http_request.params, http_request.method, http_request.auth_path, http_request.host) boto.log.debug('query_string: %s Signature: %s' % (qs, signature)) if http_request.method == 'POST': headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8' http_request.body = qs + '&Signature=' + urllib.quote_plus(signature) http_request.headers['Content-Length'] = str(len(http_request.body)) else: http_request.body = '' # if this is a retried request, the qs from the previous try will # already be there, we need to get rid of that and rebuild it http_request.path = http_request.path.split('?')[0] http_request.path = (http_request.path + '?' + qs + '&Signature=' + urllib.quote_plus(signature)) class QuerySignatureV0AuthHandler(QuerySignatureHelper, AuthHandler): """Provides Signature V0 Signing""" SignatureVersion = 0 capability = ['sign-v0'] def _calc_signature(self, params, *args): boto.log.debug('using _calc_signature_0') hmac = self._get_hmac() s = params['Action'] + params['Timestamp'] hmac.update(s) keys = params.keys() keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower())) pairs = [] for key in keys: val = boto.utils.get_utf8_value(params[key]) pairs.append(key + '=' + urllib.quote(val)) qs = '&'.join(pairs) return (qs, base64.b64encode(hmac.digest())) class QuerySignatureV1AuthHandler(QuerySignatureHelper, AuthHandler): """ Provides Query Signature V1 Authentication.
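For instance (a sketch), for params {'Action': 'ListDomains',
'Timestamp': '2013-12-13T12:00:00Z'} the signed HMAC input is the
case-insensitively sorted key/value concatenation
'ActionListDomainsTimestamp2013-12-13T12:00:00Z'.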
""" SignatureVersion = 1 capability = ['sign-v1', 'mturk'] def __init__(self, *args, **kw): QuerySignatureHelper.__init__(self, *args, **kw) AuthHandler.__init__(self, *args, **kw) self._hmac_256 = None def _calc_signature(self, params, *args): boto.log.debug('using _calc_signature_1') hmac = self._get_hmac() keys = params.keys() keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower())) pairs = [] for key in keys: hmac.update(key) val = boto.utils.get_utf8_value(params[key]) hmac.update(val) pairs.append(key + '=' + urllib.quote(val)) qs = '&'.join(pairs) return (qs, base64.b64encode(hmac.digest())) class QuerySignatureV2AuthHandler(QuerySignatureHelper, AuthHandler): """Provides Query Signature V2 Authentication.""" SignatureVersion = 2 capability = ['sign-v2', 'ec2', 'ec2', 'emr', 'fps', 'ecs', 'sdb', 'iam', 'rds', 'sns', 'sqs', 'cloudformation'] def _calc_signature(self, params, verb, path, server_name): boto.log.debug('using _calc_signature_2') string_to_sign = '%s\n%s\n%s\n' % (verb, server_name.lower(), path) hmac = self._get_hmac() params['SignatureMethod'] = self.algorithm() if self._provider.security_token: params['SecurityToken'] = self._provider.security_token keys = sorted(params.keys()) pairs = [] for key in keys: val = boto.utils.get_utf8_value(params[key]) pairs.append(urllib.quote(key, safe='') + '=' + urllib.quote(val, safe='-_~')) qs = '&'.join(pairs) boto.log.debug('query string: %s' % qs) string_to_sign += qs boto.log.debug('string_to_sign: %s' % string_to_sign) hmac.update(string_to_sign) b64 = base64.b64encode(hmac.digest()) boto.log.debug('len(b64)=%d' % len(b64)) boto.log.debug('base64 encoded digest: %s' % b64) return (qs, b64) class POSTPathQSV2AuthHandler(QuerySignatureV2AuthHandler, AuthHandler): """ Query Signature V2 Authentication relocating signed query into the path and allowing POST requests with Content-Types. """ capability = ['mws'] def add_auth(self, req, **kwargs): req.params['AWSAccessKeyId'] = self._provider.access_key req.params['SignatureVersion'] = self.SignatureVersion req.params['Timestamp'] = boto.utils.get_ts() qs, signature = self._calc_signature(req.params, req.method, req.auth_path, req.host) boto.log.debug('query_string: %s Signature: %s' % (qs, signature)) if req.method == 'POST': req.headers['Content-Length'] = str(len(req.body)) req.headers['Content-Type'] = req.headers.get('Content-Type', 'text/plain') else: req.body = '' # if this is a retried req, the qs from the previous try will # already be there, we need to get rid of that and rebuild it req.path = req.path.split('?')[0] req.path = (req.path + '?' + qs + '&Signature=' + urllib.quote_plus(signature)) def get_auth_handler(host, config, provider, requested_capability=None): """Finds an AuthHandler that is ready to authenticate. Lists through all the registered AuthHandlers to find one that is willing to handle for the requested capabilities, config and provider. :type host: string :param host: The name of the host :type config: :param config: :type provider: :param provider: Returns: An implementation of AuthHandler. 
Raises: boto.exception.NoAuthHandlerFound """ ready_handlers = [] auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability) total_handlers = len(auth_handlers) for handler in auth_handlers: try: ready_handlers.append(handler(host, config, provider)) except boto.auth_handler.NotReadyToAuthenticate: pass if not ready_handlers: checked_handlers = auth_handlers names = [handler.__name__ for handler in checked_handlers] raise boto.exception.NoAuthHandlerFound( 'No handler was ready to authenticate. %d handlers were checked.' ' %s ' 'Check your credentials' % (len(names), str(names))) # We select the last ready auth handler that was loaded, to allow users to # customize how auth works in environments where there are shared boto # config files (e.g., /etc/boto.cfg and ~/.boto): The more general, # system-wide shared configs should be loaded first, and the user's # customizations loaded last. That way, for example, the system-wide # config might include a plugin_directory that includes a service account # auth plugin shared by all users of a Google Compute Engine instance # (allowing sharing of non-user data between various services), and the # user could override this with a .boto config that includes user-specific # credentials (for access to user data). return ready_handlers[-1] boto-2.20.1/boto/auth_handler.py000066400000000000000000000040131225267101000165040ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Defines an interface which all Auth handlers need to implement. """ from plugin import Plugin class NotReadyToAuthenticate(Exception): pass class AuthHandler(Plugin): capability = [] def __init__(self, host, config, provider): """Constructs the handlers. :type host: string :param host: The host to which the request is being sent. :type config: boto.pyami.Config :param config: Boto configuration. :type provider: boto.provider.Provider :param provider: Provider details. Raises: NotReadyToAuthenticate: if this handler is not willing to authenticate for the given provider and config. """ pass def add_auth(self, http_request): """Invoked to add authentication details to request. :type http_request: boto.connection.HTTPRequest :param http_request: HTTP request that needs to be authenticated. 
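A minimal custom handler might look like this (a sketch; the
capability name and header are illustrative, and real handlers are
discovered through boto's plugin mechanism and selected by
capability):

    from boto.auth_handler import AuthHandler

    class TokenAuthHandler(AuthHandler):
        capability = ['token-auth']

        def add_auth(self, http_request):
            http_request.headers['X-Auth-Token'] = 'my-token'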
""" pass boto-2.20.1/boto/beanstalk/000077500000000000000000000000001225267101000154425ustar00rootroot00000000000000boto-2.20.1/boto/beanstalk/__init__.py000066400000000000000000000060211225267101000175520ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the AWS Elastic Beanstalk service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ import boto.beanstalk.layer1 return [RegionInfo(name='us-east-1', endpoint='elasticbeanstalk.us-east-1.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='us-west-1', endpoint='elasticbeanstalk.us-west-1.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='us-west-2', endpoint='elasticbeanstalk.us-west-2.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='ap-northeast-1', endpoint='elasticbeanstalk.ap-northeast-1.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='ap-southeast-1', endpoint='elasticbeanstalk.ap-southeast-1.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='ap-southeast-2', endpoint='elasticbeanstalk.ap-southeast-2.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='eu-west-1', endpoint='elasticbeanstalk.eu-west-1.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), RegionInfo(name='sa-east-1', endpoint='elasticbeanstalk.sa-east-1.amazonaws.com', connection_cls=boto.beanstalk.layer1.Layer1), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/beanstalk/exception.py000066400000000000000000000043161225267101000200160ustar00rootroot00000000000000import sys from boto.compat import json from boto.exception import BotoServerError def simple(e): err = json.loads(e.error_message) code = err['Error']['Code'] try: # Dynamically get the error class. simple_e = getattr(sys.modules[__name__], code)(e, err) except AttributeError: # Return original exception on failure. 
return e return simple_e class SimpleException(BotoServerError): def __init__(self, e, err): super(SimpleException, self).__init__(e.status, e.reason, e.body) self.body = e.error_message self.request_id = err['RequestId'] self.error_code = err['Error']['Code'] self.error_message = err['Error']['Message'] def __repr__(self): return self.__class__.__name__ + ': ' + self.error_message def __str__(self): return self.__class__.__name__ + ': ' + self.error_message class ValidationError(SimpleException): pass # Common beanstalk exceptions. class IncompleteSignature(SimpleException): pass class InternalFailure(SimpleException): pass class InvalidAction(SimpleException): pass class InvalidClientTokenId(SimpleException): pass class InvalidParameterCombination(SimpleException): pass class InvalidParameterValue(SimpleException): pass class InvalidQueryParameter(SimpleException): pass class MalformedQueryString(SimpleException): pass class MissingAction(SimpleException): pass class MissingAuthenticationToken(SimpleException): pass class MissingParameter(SimpleException): pass class OptInRequired(SimpleException): pass class RequestExpired(SimpleException): pass class ServiceUnavailable(SimpleException): pass class Throttling(SimpleException): pass # Action specific exceptions. class TooManyApplications(SimpleException): pass class InsufficientPrivileges(SimpleException): pass class S3LocationNotInServiceRegion(SimpleException): pass class TooManyApplicationVersions(SimpleException): pass class TooManyConfigurationTemplates(SimpleException): pass class TooManyEnvironments(SimpleException): pass class S3SubscriptionRequired(SimpleException): pass class TooManyBuckets(SimpleException): pass class OperationInProgress(SimpleException): pass class SourceBundleDeletion(SimpleException): pass boto-2.20.1/boto/beanstalk/layer1.py000066400000000000000000001514171225267101000172220ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# import boto import boto.jsonresponse from boto.compat import json from boto.regioninfo import RegionInfo from boto.connection import AWSQueryConnection class Layer1(AWSQueryConnection): APIVersion = '2010-12-01' DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'elasticbeanstalk.us-east-1.amazonaws.com' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', api_version=None, security_token=None): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token) def _required_auth_capability(self): return ['hmac-v4'] def _encode_bool(self, v): v = bool(v) return {True: "true", False: "false"}[v] def _get_response(self, action, params, path='/', verb='GET'): params['ContentType'] = 'JSON' response = self.make_request(action, params, path, verb) body = response.read() boto.log.debug(body) if response.status == 200: return json.loads(body) else: raise self.ResponseError(response.status, response.reason, body) def check_dns_availability(self, cname_prefix): """Checks if the specified CNAME is available. :type cname_prefix: string :param cname_prefix: The prefix used when this CNAME is reserved. """ params = {'CNAMEPrefix': cname_prefix} return self._get_response('CheckDNSAvailability', params) def create_application(self, application_name, description=None): """ Creates an application that has one configuration template named default and no application versions. :type application_name: string :param application_name: The name of the application. Constraint: This name must be unique within your account. If the specified name already exists, the action returns an InvalidParameterValue error. :type description: string :param description: Describes the application. :raises: TooManyApplicationsException """ params = {'ApplicationName': application_name} if description: params['Description'] = description return self._get_response('CreateApplication', params) def create_application_version(self, application_name, version_label, description=None, s3_bucket=None, s3_key=None, auto_create_application=None): """Creates an application version for the specified application. :type application_name: string :param application_name: The name of the application. If no application is found with this name, and AutoCreateApplication is false, returns an InvalidParameterValue error. :type version_label: string :param version_label: A label identifying this version. Constraint: Must be unique per application. If an application version already exists with this label for the specified application, AWS Elastic Beanstalk returns an InvalidParameterValue error. :type description: string :param description: Describes this version. :type s3_bucket: string :param s3_bucket: The Amazon S3 bucket where the data is located. :type s3_key: string :param s3_key: The Amazon S3 key where the data is located. Both s3_bucket and s3_key must be specified in order to use a specific source bundle. If both of these values are not specified the sample application will be used. 
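        For example, a version whose bundle is already in S3, where ``conn``
        is a :class:`Layer1` connection and the bucket/key names are only
        illustrative::

            conn.create_application_version(
                'myapp', 'v1.0.0',
                s3_bucket='my-deploy-bucket',
                s3_key='myapp/v1.0.0.zip')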
:type auto_create_application: boolean :param auto_create_application: Determines how the system behaves if the specified application for this version does not already exist: true: Automatically creates the specified application for this version if it does not already exist. false: Returns an InvalidParameterValue if the specified application for this version does not already exist. Default: false Valid Values: true | false :raises: TooManyApplicationsException, TooManyApplicationVersionsException, InsufficientPrivilegesException, S3LocationNotInServiceRegionException """ params = {'ApplicationName': application_name, 'VersionLabel': version_label} if description: params['Description'] = description if s3_bucket and s3_key: params['SourceBundle.S3Bucket'] = s3_bucket params['SourceBundle.S3Key'] = s3_key if auto_create_application: params['AutoCreateApplication'] = self._encode_bool( auto_create_application) return self._get_response('CreateApplicationVersion', params) def create_configuration_template(self, application_name, template_name, solution_stack_name=None, source_configuration_application_name=None, source_configuration_template_name=None, environment_id=None, description=None, option_settings=None): """Creates a configuration template. Templates are associated with a specific application and are used to deploy different versions of the application with the same configuration settings. :type application_name: string :param application_name: The name of the application to associate with this configuration template. If no application is found with this name, AWS Elastic Beanstalk returns an InvalidParameterValue error. :type template_name: string :param template_name: The name of the configuration template. Constraint: This name must be unique per application. Default: If a configuration template already exists with this name, AWS Elastic Beanstalk returns an InvalidParameterValue error. :type solution_stack_name: string :param solution_stack_name: The name of the solution stack used by this configuration. The solution stack specifies the operating system, architecture, and application server for a configuration template. It determines the set of configuration options as well as the possible and default values. Use ListAvailableSolutionStacks to obtain a list of available solution stacks. Default: If the SolutionStackName is not specified and the source configuration parameter is blank, AWS Elastic Beanstalk uses the default solution stack. If not specified and the source configuration parameter is specified, AWS Elastic Beanstalk uses the same solution stack as the source configuration template. :type source_configuration_application_name: string :param source_configuration_application_name: The name of the application associated with the configuration. :type source_configuration_template_name: string :param source_configuration_template_name: The name of the configuration template. :type environment_id: string :param environment_id: The ID of the environment used with this configuration template. :type description: string :param description: Describes this configuration. :type option_settings: list :param option_settings: If specified, AWS Elastic Beanstalk sets the specified configuration option to the requested value. The new value overrides the value obtained from the solution stack or the source configuration template. 
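        Each element of option_settings is a (Namespace, OptionName, Value)
        tuple; the option shown below is only illustrative::

            [('aws:autoscaling:launchconfiguration',
              'InstanceType', 't1.micro')]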
:raises: InsufficientPrivilegesException, TooManyConfigurationTemplatesException """ params = {'ApplicationName': application_name, 'TemplateName': template_name} if solution_stack_name: params['SolutionStackName'] = solution_stack_name if source_configuration_application_name: params['SourceConfiguration.ApplicationName'] = source_configuration_application_name if source_configuration_template_name: params['SourceConfiguration.TemplateName'] = source_configuration_template_name if environment_id: params['EnvironmentId'] = environment_id if description: params['Description'] = description if option_settings: self._build_list_params(params, option_settings, 'OptionSettings.member', ('Namespace', 'OptionName', 'Value')) return self._get_response('CreateConfigurationTemplate', params) def create_environment(self, application_name, environment_name, version_label=None, template_name=None, solution_stack_name=None, cname_prefix=None, description=None, option_settings=None, options_to_remove=None): """Launches an environment for the application using a configuration. :type application_name: string :param application_name: The name of the application that contains the version to be deployed. If no application is found with this name, CreateEnvironment returns an InvalidParameterValue error. :type environment_name: string :param environment_name: A unique name for the deployment environment. Used in the application URL. Constraint: Must be from 4 to 23 characters in length. The name can contain only letters, numbers, and hyphens. It cannot start or end with a hyphen. This name must be unique in your account. If the specified name already exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Default: If the CNAME parameter is not specified, the environment name becomes part of the CNAME, and therefore part of the visible URL for your application. :type version_label: string :param version_label: The name of the application version to deploy. If the specified application has no associated application versions, AWS Elastic Beanstalk UpdateEnvironment returns an InvalidParameterValue error. Default: If not specified, AWS Elastic Beanstalk attempts to launch the most recently created application version. :type template_name: string :param template_name: The name of the configuration template to use in deployment. If no configuration template is found with this name, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this parameter or a SolutionStackName, but not both. If you specify both, AWS Elastic Beanstalk returns an InvalidParameterCombination error. If you do not specify either, AWS Elastic Beanstalk returns a MissingRequiredParameter error. :type solution_stack_name: string :param solution_stack_name: This is an alternative to specifying a configuration name. If specified, AWS Elastic Beanstalk sets the configuration values to the default values associated with the specified solution stack. Condition: You must specify either this or a TemplateName, but not both. If you specify both, AWS Elastic Beanstalk returns an InvalidParameterCombination error. If you do not specify either, AWS Elastic Beanstalk returns a MissingRequiredParameter error. :type cname_prefix: string :param cname_prefix: If specified, the environment attempts to use this value as the prefix for the CNAME. If not specified, the environment uses the environment name. :type description: string :param description: Describes this environment. 
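        A minimal sketch of a call, assuming the application and the named
        version already exist (all names are illustrative)::

            conn.create_environment(
                'myapp', 'myapp-env',
                version_label='v1.0.0',
                solution_stack_name='32bit Amazon Linux running Python')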
:type option_settings: list :param option_settings: If specified, AWS Elastic Beanstalk sets the specified configuration options to the requested value in the configuration set for the new environment. These override the values obtained from the solution stack or the configuration template. Each element in the list is a tuple of (Namespace, OptionName, Value), for example:: [('aws:autoscaling:launchconfiguration', 'Ec2KeyName', 'mykeypair')] :type options_to_remove: list :param options_to_remove: A list of custom user-defined configuration options to remove from the configuration set for this new environment. :raises: TooManyEnvironmentsException, InsufficientPrivilegesException """ params = {'ApplicationName': application_name, 'EnvironmentName': environment_name} if version_label: params['VersionLabel'] = version_label if template_name: params['TemplateName'] = template_name if solution_stack_name: params['SolutionStackName'] = solution_stack_name if cname_prefix: params['CNAMEPrefix'] = cname_prefix if description: params['Description'] = description if option_settings: self._build_list_params(params, option_settings, 'OptionSettings.member', ('Namespace', 'OptionName', 'Value')) if options_to_remove: self.build_list_params(params, options_to_remove, 'OptionsToRemove.member') return self._get_response('CreateEnvironment', params) def create_storage_location(self): """ Creates the Amazon S3 storage location for the account. This location is used to store user log files. :raises: TooManyBucketsException, S3SubscriptionRequiredException, InsufficientPrivilegesException """ return self._get_response('CreateStorageLocation', params={}) def delete_application(self, application_name, terminate_env_by_force=None): """ Deletes the specified application along with all associated versions and configurations. The application versions will not be deleted from your Amazon S3 bucket. :type application_name: string :param application_name: The name of the application to delete. :type terminate_env_by_force: boolean :param terminate_env_by_force: When set to true, running environments will be terminated before deleting the application. :raises: OperationInProgressException """ params = {'ApplicationName': application_name} if terminate_env_by_force: params['TerminateEnvByForce'] = self._encode_bool( terminate_env_by_force) return self._get_response('DeleteApplication', params) def delete_application_version(self, application_name, version_label, delete_source_bundle=None): """Deletes the specified version from the specified application. :type application_name: string :param application_name: The name of the application to delete releases from. :type version_label: string :param version_label: The label of the version to delete. :type delete_source_bundle: boolean :param delete_source_bundle: Indicates whether to delete the associated source bundle from Amazon S3. Valid Values: true | false :raises: SourceBundleDeletionException, InsufficientPrivilegesException, OperationInProgressException, S3LocationNotInServiceRegionException """ params = {'ApplicationName': application_name, 'VersionLabel': version_label} if delete_source_bundle: params['DeleteSourceBundle'] = self._encode_bool( delete_source_bundle) return self._get_response('DeleteApplicationVersion', params) def delete_configuration_template(self, application_name, template_name): """Deletes the specified configuration template. 
:type application_name: string :param application_name: The name of the application to delete the configuration template from. :type template_name: string :param template_name: The name of the configuration template to delete. :raises: OperationInProgressException """ params = {'ApplicationName': application_name, 'TemplateName': template_name} return self._get_response('DeleteConfigurationTemplate', params) def delete_environment_configuration(self, application_name, environment_name): """ Deletes the draft configuration associated with the running environment. Updating a running environment with any configuration changes creates a draft configuration set. You can get the draft configuration using DescribeConfigurationSettings while the update is in progress or if the update fails. The DeploymentStatus for the draft configuration indicates whether the deployment is in process or has failed. The draft configuration remains in existence until it is deleted with this action. :type application_name: string :param application_name: The name of the application the environment is associated with. :type environment_name: string :param environment_name: The name of the environment to delete the draft configuration from. """ params = {'ApplicationName': application_name, 'EnvironmentName': environment_name} return self._get_response('DeleteEnvironmentConfiguration', params) def describe_application_versions(self, application_name=None, version_labels=None): """Returns descriptions for existing application versions. :type application_name: string :param application_name: If specified, AWS Elastic Beanstalk restricts the returned descriptions to only include ones that are associated with the specified application. :type version_labels: list :param version_labels: If specified, restricts the returned descriptions to only include ones that have the specified version labels. """ params = {} if application_name: params['ApplicationName'] = application_name if version_labels: self.build_list_params(params, version_labels, 'VersionLabels.member') return self._get_response('DescribeApplicationVersions', params) def describe_applications(self, application_names=None): """Returns the descriptions of existing applications. :type application_names: list :param application_names: If specified, AWS Elastic Beanstalk restricts the returned descriptions to only include those with the specified names. """ params = {} if application_names: self.build_list_params(params, application_names, 'ApplicationNames.member') return self._get_response('DescribeApplications', params) def describe_configuration_options(self, application_name=None, template_name=None, environment_name=None, solution_stack_name=None, options=None): """Describes configuration options used in a template or environment. Describes the configuration options that are used in a particular configuration template or environment, or that a specified solution stack defines. The description includes the values of the options, their default values, and an indication of the required action on a running environment if an option value is changed. :type application_name: string :param application_name: The name of the application associated with the configuration template or environment. Only needed if you want to describe the configuration options associated with either the configuration template or environment. :type template_name: string :param template_name: The name of the configuration template whose configuration options you want to describe.
:type environment_name: string :param environment_name: The name of the environment whose configuration options you want to describe. :type solution_stack_name: string :param solution_stack_name: The name of the solution stack whose configuration options you want to describe. :type options: list :param options: If specified, restricts the descriptions to only the specified options. """ params = {} if application_name: params['ApplicationName'] = application_name if template_name: params['TemplateName'] = template_name if environment_name: params['EnvironmentName'] = environment_name if solution_stack_name: params['SolutionStackName'] = solution_stack_name if options: self.build_list_params(params, options, 'Options.member') return self._get_response('DescribeConfigurationOptions', params) def describe_configuration_settings(self, application_name, template_name=None, environment_name=None): """ Returns a description of the settings for the specified configuration set, that is, either a configuration template or the configuration set associated with a running environment. When describing the settings for the configuration set associated with a running environment, it is possible to receive two sets of setting descriptions. One is the deployed configuration set, and the other is a draft configuration of an environment that is either in the process of deployment or that failed to deploy. :type application_name: string :param application_name: The application for the environment or configuration template. :type template_name: string :param template_name: The name of the configuration template to describe. Conditional: You must specify either this parameter or an EnvironmentName, but not both. If you specify both, AWS Elastic Beanstalk returns an InvalidParameterCombination error. If you do not specify either, AWS Elastic Beanstalk returns a MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment to describe. Condition: You must specify either this or a TemplateName, but not both. If you specify both, AWS Elastic Beanstalk returns an InvalidParameterCombination error. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. """ params = {'ApplicationName': application_name} if template_name: params['TemplateName'] = template_name if environment_name: params['EnvironmentName'] = environment_name return self._get_response('DescribeConfigurationSettings', params) def describe_environment_resources(self, environment_id=None, environment_name=None): """Returns AWS resources for this environment. :type environment_id: string :param environment_id: The ID of the environment to retrieve AWS resource usage data. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment to retrieve AWS resource usage data. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. 
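        For example, either identifier selects the environment; the name used
        here is illustrative::

            conn.describe_environment_resources(environment_name='myapp-env')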
:raises: InsufficientPrivilegesException """ params = {} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name return self._get_response('DescribeEnvironmentResources', params) def describe_environments(self, application_name=None, version_label=None, environment_ids=None, environment_names=None, include_deleted=None, included_deleted_back_to=None): """Returns descriptions for existing environments. :type application_name: string :param application_name: If specified, AWS Elastic Beanstalk restricts the returned descriptions to include only those that are associated with this application. :type version_label: string :param version_label: If specified, AWS Elastic Beanstalk restricts the returned descriptions to include only those that are associated with this application version. :type environment_ids: list :param environment_ids: If specified, AWS Elastic Beanstalk restricts the returned descriptions to include only those that have the specified IDs. :type environment_names: list :param environment_names: If specified, AWS Elastic Beanstalk restricts the returned descriptions to include only those that have the specified names. :type include_deleted: boolean :param include_deleted: Indicates whether to include deleted environments: true: Environments that have been deleted after IncludedDeletedBackTo are displayed. false: Do not include deleted environments. :type included_deleted_back_to: timestamp :param included_deleted_back_to: If specified when IncludeDeleted is set to true, then environments deleted after this date are displayed. """ params = {} if application_name: params['ApplicationName'] = application_name if version_label: params['VersionLabel'] = version_label if environment_ids: self.build_list_params(params, environment_ids, 'EnvironmentIds.member') if environment_names: self.build_list_params(params, environment_names, 'EnvironmentNames.member') if include_deleted: params['IncludeDeleted'] = self._encode_bool(include_deleted) if included_deleted_back_to: params['IncludedDeletedBackTo'] = included_deleted_back_to return self._get_response('DescribeEnvironments', params) def describe_events(self, application_name=None, version_label=None, template_name=None, environment_id=None, environment_name=None, request_id=None, severity=None, start_time=None, end_time=None, max_records=None, next_token=None): """Returns event descriptions matching criteria up to the last 6 weeks. :type application_name: string :param application_name: If specified, AWS Elastic Beanstalk restricts the returned descriptions to include only those associated with this application. :type version_label: string :param version_label: If specified, AWS Elastic Beanstalk restricts the returned descriptions to those associated with this application version. :type template_name: string :param template_name: If specified, AWS Elastic Beanstalk restricts the returned descriptions to those that are associated with this environment configuration. :type environment_id: string :param environment_id: If specified, AWS Elastic Beanstalk restricts the returned descriptions to those associated with this environment. :type environment_name: string :param environment_name: If specified, AWS Elastic Beanstalk restricts the returned descriptions to those associated with this environment. :type request_id: string :param request_id: If specified, AWS Elastic Beanstalk restricts the described events to include only those associated with this request ID. 
:type severity: string :param severity: If specified, limits the events returned from this call to include only those with the specified severity or higher. :type start_time: timestamp :param start_time: If specified, AWS Elastic Beanstalk restricts the returned descriptions to those that occur on or after this time. :type end_time: timestamp :param end_time: If specified, AWS Elastic Beanstalk restricts the returned descriptions to those that occur up to, but not including, the EndTime. :type max_records: integer :param max_records: Specifies the maximum number of events that can be returned, beginning with the most recent event. :type next_token: string :param next_token: Pagination token. If specified, the events return the next batch of results. """ params = {} if application_name: params['ApplicationName'] = application_name if version_label: params['VersionLabel'] = version_label if template_name: params['TemplateName'] = template_name if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name if request_id: params['RequestId'] = request_id if severity: params['Severity'] = severity if start_time: params['StartTime'] = start_time if end_time: params['EndTime'] = end_time if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token return self._get_response('DescribeEvents', params) def list_available_solution_stacks(self): """Returns a list of the available solution stack names.""" return self._get_response('ListAvailableSolutionStacks', params={}) def rebuild_environment(self, environment_id=None, environment_name=None): """ Deletes and recreates all of the AWS resources (for example: the Auto Scaling group, load balancer, etc.) for a specified environment and forces a restart. :type environment_id: string :param environment_id: The ID of the environment to rebuild. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment to rebuild. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :raises: InsufficientPrivilegesException """ params = {} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name return self._get_response('RebuildEnvironment', params) def request_environment_info(self, info_type='tail', environment_id=None, environment_name=None): """ Initiates a request to compile the specified type of information of the deployed environment. Setting the InfoType to tail compiles the last lines from the application server log files of every Amazon EC2 instance in your environment. Use RetrieveEnvironmentInfo to access the compiled information. :type info_type: string :param info_type: The type of information to request. :type environment_id: string :param environment_id: The ID of the environment of the requested data. If no such environment is found, RequestEnvironmentInfo returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment of the requested data. 
If no such environment is found, RequestEnvironmentInfo returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. """ params = {'InfoType': info_type} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name return self._get_response('RequestEnvironmentInfo', params) def restart_app_server(self, environment_id=None, environment_name=None): """ Causes the environment to restart the application container server running on each Amazon EC2 instance. :type environment_id: string :param environment_id: The ID of the environment to restart the server for. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment to restart the server for. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. """ params = {} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name return self._get_response('RestartAppServer', params) def retrieve_environment_info(self, info_type='tail', environment_id=None, environment_name=None): """ Retrieves the compiled information from a RequestEnvironmentInfo request. :type info_type: string :param info_type: The type of information to retrieve. :type environment_id: string :param environment_id: The ID of the data's environment. If no such environment is found, returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the data's environment. If no such environment is found, returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. """ params = {'InfoType': info_type} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name return self._get_response('RetrieveEnvironmentInfo', params) def swap_environment_cnames(self, source_environment_id=None, source_environment_name=None, destination_environment_id=None, destination_environment_name=None): """Swaps the CNAMEs of two environments. :type source_environment_id: string :param source_environment_id: The ID of the source environment. Condition: You must specify at least the SourceEnvironmentID or the SourceEnvironmentName. You may also specify both. If you specify the SourceEnvironmentId, you must specify the DestinationEnvironmentId. :type source_environment_name: string :param source_environment_name: The name of the source environment. Condition: You must specify at least the SourceEnvironmentID or the SourceEnvironmentName. You may also specify both. If you specify the SourceEnvironmentName, you must specify the DestinationEnvironmentName. :type destination_environment_id: string :param destination_environment_id: The ID of the destination environment. 
Condition: You must specify at least the DestinationEnvironmentID or the DestinationEnvironmentName. You may also specify both. You must specify the SourceEnvironmentId with the DestinationEnvironmentId. :type destination_environment_name: string :param destination_environment_name: The name of the destination environment. Condition: You must specify at least the DestinationEnvironmentID or the DestinationEnvironmentName. You may also specify both. You must specify the SourceEnvironmentName with the DestinationEnvironmentName. """ params = {} if source_environment_id: params['SourceEnvironmentId'] = source_environment_id if source_environment_name: params['SourceEnvironmentName'] = source_environment_name if destination_environment_id: params['DestinationEnvironmentId'] = destination_environment_id if destination_environment_name: params['DestinationEnvironmentName'] = destination_environment_name return self._get_response('SwapEnvironmentCNAMEs', params) def terminate_environment(self, environment_id=None, environment_name=None, terminate_resources=None): """Terminates the specified environment. :type environment_id: string :param environment_id: The ID of the environment to terminate. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment to terminate. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type terminate_resources: boolean :param terminate_resources: Indicates whether the associated AWS resources should shut down when the environment is terminated: true: (default) The user AWS resources (for example, the Auto Scaling group, LoadBalancer, etc.) are terminated along with the environment. false: The environment is removed from the AWS Elastic Beanstalk but the AWS resources continue to operate. For more information, see the AWS Elastic Beanstalk User Guide. Default: true Valid Values: true | false :raises: InsufficientPrivilegesException """ params = {} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name if terminate_resources: params['TerminateResources'] = self._encode_bool( terminate_resources) return self._get_response('TerminateEnvironment', params) def update_application(self, application_name, description=None): """ Updates the specified application to have the specified properties. :type application_name: string :param application_name: The name of the application to update. If no such application is found, UpdateApplication returns an InvalidParameterValue error. :type description: string :param description: A new description for the application. Default: If not specified, AWS Elastic Beanstalk does not update the description. """ params = {'ApplicationName': application_name} if description: params['Description'] = description return self._get_response('UpdateApplication', params) def update_application_version(self, application_name, version_label, description=None): """Updates the application version to have the properties. :type application_name: string :param application_name: The name of the application associated with this version. If no application is found with this name, UpdateApplication returns an InvalidParameterValue error. 
:type version_label: string :param version_label: The name of the version to update. If no application version is found with this label, UpdateApplication returns an InvalidParameterValue error. :type description: string :param description: A new description for this release. """ params = {'ApplicationName': application_name, 'VersionLabel': version_label} if description: params['Description'] = description return self._get_response('UpdateApplicationVersion', params) def update_configuration_template(self, application_name, template_name, description=None, option_settings=None, options_to_remove=None): """ Updates the specified configuration template to have the specified properties or configuration option values. :type application_name: string :param application_name: The name of the application associated with the configuration template to update. If no application is found with this name, UpdateConfigurationTemplate returns an InvalidParameterValue error. :type template_name: string :param template_name: The name of the configuration template to update. If no configuration template is found with this name, UpdateConfigurationTemplate returns an InvalidParameterValue error. :type description: string :param description: A new description for the configuration. :type option_settings: list :param option_settings: A list of configuration option settings to update with the new specified option value. :type options_to_remove: list :param options_to_remove: A list of configuration options to remove from the configuration set. Constraint: You can remove only UserDefined configuration options. :raises: InsufficientPrivilegesException """ params = {'ApplicationName': application_name, 'TemplateName': template_name} if description: params['Description'] = description if option_settings: self._build_list_params(params, option_settings, 'OptionSettings.member', ('Namespace', 'OptionName', 'Value')) if options_to_remove: self.build_list_params(params, options_to_remove, 'OptionsToRemove.member') return self._get_response('UpdateConfigurationTemplate', params) def update_environment(self, environment_id=None, environment_name=None, version_label=None, template_name=None, description=None, option_settings=None, options_to_remove=None): """ Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment. Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error. When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values. :type environment_id: string :param environment_id: The ID of the environment to update. If no environment with this ID exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type environment_name: string :param environment_name: The name of the environment to update. If no environment with this name exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. 
If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type version_label: string :param version_label: If this parameter is specified, AWS Elastic Beanstalk deploys the named application version to the environment. If no such application version is found, returns an InvalidParameterValue error. :type template_name: string :param template_name: If this parameter is specified, AWS Elastic Beanstalk deploys this configuration template to the environment. If no such configuration template is found, AWS Elastic Beanstalk returns an InvalidParameterValue error. :type description: string :param description: If this parameter is specified, AWS Elastic Beanstalk updates the description of this environment. :type option_settings: list :param option_settings: If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value. :type options_to_remove: list :param options_to_remove: A list of custom user-defined configuration options to remove from the configuration set for this environment. :raises: InsufficientPrivilegesException """ params = {} if environment_id: params['EnvironmentId'] = environment_id if environment_name: params['EnvironmentName'] = environment_name if version_label: params['VersionLabel'] = version_label if template_name: params['TemplateName'] = template_name if description: params['Description'] = description if option_settings: self._build_list_params(params, option_settings, 'OptionSettings.member', ('Namespace', 'OptionName', 'Value')) if options_to_remove: self.build_list_params(params, options_to_remove, 'OptionsToRemove.member') return self._get_response('UpdateEnvironment', params) def validate_configuration_settings(self, application_name, option_settings, template_name=None, environment_name=None): """ Takes a set of configuration settings and either a configuration template or environment, and determines whether those values are valid. This action returns a list of messages indicating any errors or warnings associated with the selection of option values. :type application_name: string :param application_name: The name of the application that the configuration template or environment belongs to. :type template_name: string :param template_name: The name of the configuration template to validate the settings against. Condition: You cannot specify both this and an environment name. :type environment_name: string :param environment_name: The name of the environment to validate the settings against. Condition: You cannot specify both this and a configuration template name. :type option_settings: list :param option_settings: A list of the options and desired values to evaluate. :raises: InsufficientPrivilegesException """ params = {'ApplicationName': application_name} self._build_list_params(params, option_settings, 'OptionSettings.member', ('Namespace', 'OptionName', 'Value')) if template_name: params['TemplateName'] = template_name if environment_name: params['EnvironmentName'] = environment_name return self._get_response('ValidateConfigurationSettings', params) def _build_list_params(self, params, user_values, prefix, tuple_names): # For params such as the ConfigurationOptionSettings, # they can specify a list of tuples where each tuple maps to a specific # arg. 
For example: # user_values = [('foo', 'bar', 'baz')] # prefix=MyOption.member # tuple_names=('One', 'Two', 'Three') # would result in: # MyOption.member.1.One = foo # MyOption.member.1.Two = bar # MyOption.member.1.Three = baz for i, user_value in enumerate(user_values, 1): current_prefix = '%s.%s' % (prefix, i) for key, value in zip(tuple_names, user_value): full_key = '%s.%s' % (current_prefix, key) params[full_key] = value boto-2.20.1/boto/beanstalk/response.py000066400000000000000000000665641225267101000176630ustar00rootroot00000000000000"""Classify responses from layer1 and strictly type values.""" from datetime import datetime class BaseObject(object): def __repr__(self): result = self.__class__.__name__ + '{ ' counter = 0 for key, value in self.__dict__.iteritems(): # first iteration no comma counter += 1 if counter > 1: result += ', ' result += key + ': ' result += self._repr_by_type(value) result += ' }' return result def _repr_by_type(self, value): # Everything is either a 'Response', 'list', or 'None/str/int/bool'. result = '' if isinstance(value, Response): result += value.__repr__() elif isinstance(value, list): result += self._repr_list(value) else: result += str(value) return result def _repr_list(self, array): result = '[' for value in array: result += ' ' + self._repr_by_type(value) + ',' # Check for trailing comma with a space. if len(result) > 1: result = result[:-1] + ' ' result += ']' return result class Response(BaseObject): def __init__(self, response): super(Response, self).__init__() if response['ResponseMetadata']: self.response_metadata = ResponseMetadata(response['ResponseMetadata']) else: self.response_metadata = None class ResponseMetadata(BaseObject): def __init__(self, response): super(ResponseMetadata, self).__init__() self.request_id = str(response['RequestId']) class ApplicationDescription(BaseObject): def __init__(self, response): super(ApplicationDescription, self).__init__() self.application_name = str(response['ApplicationName']) self.configuration_templates = [] if response['ConfigurationTemplates']: for member in response['ConfigurationTemplates']: configuration_template = str(member) self.configuration_templates.append(configuration_template) self.date_created = datetime.fromtimestamp(response['DateCreated']) self.date_updated = datetime.fromtimestamp(response['DateUpdated']) self.description = str(response['Description']) self.versions = [] if response['Versions']: for member in response['Versions']: version = str(member) self.versions.append(version) class ApplicationVersionDescription(BaseObject): def __init__(self, response): super(ApplicationVersionDescription, self).__init__() self.application_name = str(response['ApplicationName']) self.date_created = datetime.fromtimestamp(response['DateCreated']) self.date_updated = datetime.fromtimestamp(response['DateUpdated']) self.description = str(response['Description']) if response['SourceBundle']: self.source_bundle = S3Location(response['SourceBundle']) else: self.source_bundle = None self.version_label = str(response['VersionLabel']) class AutoScalingGroup(BaseObject): def __init__(self, response): super(AutoScalingGroup, self).__init__() self.name = str(response['Name']) class ConfigurationOptionDescription(BaseObject): def __init__(self, response): super(ConfigurationOptionDescription, self).__init__() self.change_severity = str(response['ChangeSeverity']) self.default_value = str(response['DefaultValue']) self.max_length = int(response['MaxLength']) if response['MaxLength'] else None
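        # MaxValue and MinValue below follow the same pattern as MaxLength
        # above: these fields may be absent from the response, so they are
        # coerced to int only when present and otherwise left as None.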
self.max_value = int(response['MaxValue']) if response['MaxValue'] else None self.min_value = int(response['MinValue']) if response['MinValue'] else None self.name = str(response['Name']) self.namespace = str(response['Namespace']) if response['Regex']: self.regex = OptionRestrictionRegex(response['Regex']) else: self.regex = None self.user_defined = str(response['UserDefined']) self.value_options = [] if response['ValueOptions']: for member in response['ValueOptions']: value_option = str(member) self.value_options.append(value_option) self.value_type = str(response['ValueType']) class ConfigurationOptionSetting(BaseObject): def __init__(self, response): super(ConfigurationOptionSetting, self).__init__() self.namespace = str(response['Namespace']) self.option_name = str(response['OptionName']) self.value = str(response['Value']) class ConfigurationSettingsDescription(BaseObject): def __init__(self, response): super(ConfigurationSettingsDescription, self).__init__() self.application_name = str(response['ApplicationName']) self.date_created = datetime.fromtimestamp(response['DateCreated']) self.date_updated = datetime.fromtimestamp(response['DateUpdated']) self.deployment_status = str(response['DeploymentStatus']) self.description = str(response['Description']) self.environment_name = str(response['EnvironmentName']) self.option_settings = [] if response['OptionSettings']: for member in response['OptionSettings']: option_setting = ConfigurationOptionSetting(member) self.option_settings.append(option_setting) self.solution_stack_name = str(response['SolutionStackName']) self.template_name = str(response['TemplateName']) class EnvironmentDescription(BaseObject): def __init__(self, response): super(EnvironmentDescription, self).__init__() self.application_name = str(response['ApplicationName']) self.cname = str(response['CNAME']) self.date_created = datetime.fromtimestamp(response['DateCreated']) self.date_updated = datetime.fromtimestamp(response['DateUpdated']) self.description = str(response['Description']) self.endpoint_url = str(response['EndpointURL']) self.environment_id = str(response['EnvironmentId']) self.environment_name = str(response['EnvironmentName']) self.health = str(response['Health']) if response['Resources']: self.resources = EnvironmentResourcesDescription(response['Resources']) else: self.resources = None self.solution_stack_name = str(response['SolutionStackName']) self.status = str(response['Status']) self.template_name = str(response['TemplateName']) self.version_label = str(response['VersionLabel']) class EnvironmentInfoDescription(BaseObject): def __init__(self, response): super(EnvironmentInfoDescription, self).__init__() self.ec2_instance_id = str(response['Ec2InstanceId']) self.info_type = str(response['InfoType']) self.message = str(response['Message']) self.sample_timestamp = datetime.fromtimestamp(response['SampleTimestamp']) class EnvironmentResourceDescription(BaseObject): def __init__(self, response): super(EnvironmentResourceDescription, self).__init__() self.auto_scaling_groups = [] if response['AutoScalingGroups']: for member in response['AutoScalingGroups']: auto_scaling_group = AutoScalingGroup(member) self.auto_scaling_groups.append(auto_scaling_group) self.environment_name = str(response['EnvironmentName']) self.instances = [] if response['Instances']: for member in response['Instances']: instance = Instance(member) self.instances.append(instance) self.launch_configurations = [] if response['LaunchConfigurations']: for member in 
response['LaunchConfigurations']: launch_configuration = LaunchConfiguration(member) self.launch_configurations.append(launch_configuration) self.load_balancers = [] if response['LoadBalancers']: for member in response['LoadBalancers']: load_balancer = LoadBalancer(member) self.load_balancers.append(load_balancer) self.triggers = [] if response['Triggers']: for member in response['Triggers']: trigger = Trigger(member) self.triggers.append(trigger) class EnvironmentResourcesDescription(BaseObject): def __init__(self, response): super(EnvironmentResourcesDescription, self).__init__() if response['LoadBalancer']: self.load_balancer = LoadBalancerDescription(response['LoadBalancer']) else: self.load_balancer = None class EventDescription(BaseObject): def __init__(self, response): super(EventDescription, self).__init__() self.application_name = str(response['ApplicationName']) self.environment_name = str(response['EnvironmentName']) self.event_date = datetime.fromtimestamp(response['EventDate']) self.message = str(response['Message']) self.request_id = str(response['RequestId']) self.severity = str(response['Severity']) self.template_name = str(response['TemplateName']) self.version_label = str(response['VersionLabel']) class Instance(BaseObject): def __init__(self, response): super(Instance, self).__init__() self.id = str(response['Id']) class LaunchConfiguration(BaseObject): def __init__(self, response): super(LaunchConfiguration, self).__init__() self.name = str(response['Name']) class Listener(BaseObject): def __init__(self, response): super(Listener, self).__init__() self.port = int(response['Port']) if response['Port'] else None self.protocol = str(response['Protocol']) class LoadBalancer(BaseObject): def __init__(self, response): super(LoadBalancer, self).__init__() self.name = str(response['Name']) class LoadBalancerDescription(BaseObject): def __init__(self, response): super(LoadBalancerDescription, self).__init__() self.domain = str(response['Domain']) self.listeners = [] if response['Listeners']: for member in response['Listeners']: listener = Listener(member) self.listeners.append(listener) self.load_balancer_name = str(response['LoadBalancerName']) class OptionRestrictionRegex(BaseObject): def __init__(self, response): super(OptionRestrictionRegex, self).__init__() self.label = response['Label'] self.pattern = response['Pattern'] class SolutionStackDescription(BaseObject): def __init__(self, response): super(SolutionStackDescription, self).__init__() self.permitted_file_types = [] if response['PermittedFileTypes']: for member in response['PermittedFileTypes']: permitted_file_type = str(member) self.permitted_file_types.append(permitted_file_type) self.solution_stack_name = str(response['SolutionStackName']) class S3Location(BaseObject): def __init__(self, response): super(S3Location, self).__init__() self.s3_bucket = str(response['S3Bucket']) self.s3_key = str(response['S3Key']) class Trigger(BaseObject): def __init__(self, response): super(Trigger, self).__init__() self.name = str(response['Name']) class ValidationMessage(BaseObject): def __init__(self, response): super(ValidationMessage, self).__init__() self.message = str(response['Message']) self.namespace = str(response['Namespace']) self.option_name = str(response['OptionName']) self.severity = str(response['Severity']) # These are the response objects layer2 uses, one for each layer1 api call. 
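# A rough sketch of how these wrappers are meant to be used (the payload is
# produced by the corresponding Layer1 call; names below are illustrative):
#
#     raw = layer1.check_dns_availability('myapp')
#     result = CheckDNSAvailabilityResponse(raw)
#     result.available                 # bool
#     result.fully_qualified_cname     # str
#     result.response_metadata.request_id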
class CheckDNSAvailabilityResponse(Response): def __init__(self, response): response = response['CheckDNSAvailabilityResponse'] super(CheckDNSAvailabilityResponse, self).__init__(response) response = response['CheckDNSAvailabilityResult'] self.fully_qualified_cname = str(response['FullyQualifiedCNAME']) self.available = bool(response['Available']) # Our naming convention produces this class name but the api names it with more # capitals. class CheckDnsAvailabilityResponse(CheckDNSAvailabilityResponse): pass class CreateApplicationResponse(Response): def __init__(self, response): response = response['CreateApplicationResponse'] super(CreateApplicationResponse, self).__init__(response) response = response['CreateApplicationResult'] if response['Application']: self.application = ApplicationDescription(response['Application']) else: self.application = None class CreateApplicationVersionResponse(Response): def __init__(self, response): response = response['CreateApplicationVersionResponse'] super(CreateApplicationVersionResponse, self).__init__(response) response = response['CreateApplicationVersionResult'] if response['ApplicationVersion']: self.application_version = ApplicationVersionDescription(response['ApplicationVersion']) else: self.application_version = None class CreateConfigurationTemplateResponse(Response): def __init__(self, response): response = response['CreateConfigurationTemplateResponse'] super(CreateConfigurationTemplateResponse, self).__init__(response) response = response['CreateConfigurationTemplateResult'] self.application_name = str(response['ApplicationName']) self.date_created = datetime.fromtimestamp(response['DateCreated']) self.date_updated = datetime.fromtimestamp(response['DateUpdated']) self.deployment_status = str(response['DeploymentStatus']) self.description = str(response['Description']) self.environment_name = str(response['EnvironmentName']) self.option_settings = [] if response['OptionSettings']: for member in response['OptionSettings']: option_setting = ConfigurationOptionSetting(member) self.option_settings.append(option_setting) self.solution_stack_name = str(response['SolutionStackName']) self.template_name = str(response['TemplateName']) class CreateEnvironmentResponse(Response): def __init__(self, response): response = response['CreateEnvironmentResponse'] super(CreateEnvironmentResponse, self).__init__(response) response = response['CreateEnvironmentResult'] self.application_name = str(response['ApplicationName']) self.cname = str(response['CNAME']) self.date_created = datetime.fromtimestamp(response['DateCreated']) self.date_updated = datetime.fromtimestamp(response['DateUpdated']) self.description = str(response['Description']) self.endpoint_url = str(response['EndpointURL']) self.environment_id = str(response['EnvironmentId']) self.environment_name = str(response['EnvironmentName']) self.health = str(response['Health']) if response['Resources']: self.resources = EnvironmentResourcesDescription(response['Resources']) else: self.resources = None self.solution_stack_name = str(response['SolutionStackName']) self.status = str(response['Status']) self.template_name = str(response['TemplateName']) self.version_label = str(response['VersionLabel']) class CreateStorageLocationResponse(Response): def __init__(self, response): response = response['CreateStorageLocationResponse'] super(CreateStorageLocationResponse, self).__init__(response) response = response['CreateStorageLocationResult'] self.s3_bucket = str(response['S3Bucket']) class
DeleteApplicationResponse(Response): def __init__(self, response): response = response['DeleteApplicationResponse'] super(DeleteApplicationResponse, self).__init__(response) class DeleteApplicationVersionResponse(Response): def __init__(self, response): response = response['DeleteApplicationVersionResponse'] super(DeleteApplicationVersionResponse, self).__init__(response) class DeleteConfigurationTemplateResponse(Response): def __init__(self, response): response = response['DeleteConfigurationTemplateResponse'] super(DeleteConfigurationTemplateResponse, self).__init__(response) class DeleteEnvironmentConfigurationResponse(Response): def __init__(self, response): response = response['DeleteEnvironmentConfigurationResponse'] super(DeleteEnvironmentConfigurationResponse, self).__init__(response) class DescribeApplicationVersionsResponse(Response): def __init__(self, response): response = response['DescribeApplicationVersionsResponse'] super(DescribeApplicationVersionsResponse, self).__init__(response) response = response['DescribeApplicationVersionsResult'] self.application_versions = [] if response['ApplicationVersions']: for member in response['ApplicationVersions']: application_version = ApplicationVersionDescription(member) self.application_versions.append(application_version) class DescribeApplicationsResponse(Response): def __init__(self, response): response = response['DescribeApplicationsResponse'] super(DescribeApplicationsResponse, self).__init__(response) response = response['DescribeApplicationsResult'] self.applications = [] if response['Applications']: for member in response['Applications']: application = ApplicationDescription(member) self.applications.append(application) class DescribeConfigurationOptionsResponse(Response): def __init__(self, response): response = response['DescribeConfigurationOptionsResponse'] super(DescribeConfigurationOptionsResponse, self).__init__(response) response = response['DescribeConfigurationOptionsResult'] self.options = [] if response['Options']: for member in response['Options']: option = ConfigurationOptionDescription(member) self.options.append(option) self.solution_stack_name = str(response['SolutionStackName']) class DescribeConfigurationSettingsResponse(Response): def __init__(self, response): response = response['DescribeConfigurationSettingsResponse'] super(DescribeConfigurationSettingsResponse, self).__init__(response) response = response['DescribeConfigurationSettingsResult'] self.configuration_settings = [] if response['ConfigurationSettings']: for member in response['ConfigurationSettings']: configuration_setting = ConfigurationSettingsDescription(member) self.configuration_settings.append(configuration_setting) class DescribeEnvironmentResourcesResponse(Response): def __init__(self, response): response = response['DescribeEnvironmentResourcesResponse'] super(DescribeEnvironmentResourcesResponse, self).__init__(response) response = response['DescribeEnvironmentResourcesResult'] if response['EnvironmentResources']: self.environment_resources = EnvironmentResourceDescription(response['EnvironmentResources']) else: self.environment_resources = None class DescribeEnvironmentsResponse(Response): def __init__(self, response): response = response['DescribeEnvironmentsResponse'] super(DescribeEnvironmentsResponse, self).__init__(response) response = response['DescribeEnvironmentsResult'] self.environments = [] if response['Environments']: for member in response['Environments']: environment = EnvironmentDescription(member) 
class DescribeEventsResponse(Response):
    def __init__(self, response):
        response = response['DescribeEventsResponse']
        super(DescribeEventsResponse, self).__init__(response)
        response = response['DescribeEventsResult']
        self.events = []
        if response['Events']:
            for member in response['Events']:
                event = EventDescription(member)
                self.events.append(event)
        self.next_token = str(response['NextToken'])


class ListAvailableSolutionStacksResponse(Response):
    def __init__(self, response):
        response = response['ListAvailableSolutionStacksResponse']
        super(ListAvailableSolutionStacksResponse, self).__init__(response)
        response = response['ListAvailableSolutionStacksResult']
        self.solution_stack_details = []
        if response['SolutionStackDetails']:
            for member in response['SolutionStackDetails']:
                solution_stack_detail = SolutionStackDescription(member)
                self.solution_stack_details.append(solution_stack_detail)
        self.solution_stacks = []
        if response['SolutionStacks']:
            for member in response['SolutionStacks']:
                solution_stack = str(member)
                self.solution_stacks.append(solution_stack)


class RebuildEnvironmentResponse(Response):
    def __init__(self, response):
        response = response['RebuildEnvironmentResponse']
        super(RebuildEnvironmentResponse, self).__init__(response)


class RequestEnvironmentInfoResponse(Response):
    def __init__(self, response):
        response = response['RequestEnvironmentInfoResponse']
        super(RequestEnvironmentInfoResponse, self).__init__(response)


class RestartAppServerResponse(Response):
    def __init__(self, response):
        response = response['RestartAppServerResponse']
        super(RestartAppServerResponse, self).__init__(response)


class RetrieveEnvironmentInfoResponse(Response):
    def __init__(self, response):
        response = response['RetrieveEnvironmentInfoResponse']
        super(RetrieveEnvironmentInfoResponse, self).__init__(response)
        response = response['RetrieveEnvironmentInfoResult']
        self.environment_info = []
        if response['EnvironmentInfo']:
            for member in response['EnvironmentInfo']:
                environment_info = EnvironmentInfoDescription(member)
                self.environment_info.append(environment_info)


class SwapEnvironmentCNAMEsResponse(Response):
    def __init__(self, response):
        response = response['SwapEnvironmentCNAMEsResponse']
        super(SwapEnvironmentCNAMEsResponse, self).__init__(response)


class SwapEnvironmentCnamesResponse(SwapEnvironmentCNAMEsResponse):
    pass
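
# wrapper.py (below) derives a response class name by capitalizing each
# piece of the snake_case method name, for example:
#     'swap_environment_cnames' -> 'SwapEnvironmentCnamesResponse'
# That produces 'Cnames' rather than the API's 'CNAMEs' spelling, which is
# why this alias (and CheckDnsAvailabilityResponse above) exists.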
class TerminateEnvironmentResponse(Response):
    def __init__(self, response):
        response = response['TerminateEnvironmentResponse']
        super(TerminateEnvironmentResponse, self).__init__(response)
        response = response['TerminateEnvironmentResult']
        self.application_name = str(response['ApplicationName'])
        self.cname = str(response['CNAME'])
        self.date_created = datetime.fromtimestamp(response['DateCreated'])
        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
        self.description = str(response['Description'])
        self.endpoint_url = str(response['EndpointURL'])
        self.environment_id = str(response['EnvironmentId'])
        self.environment_name = str(response['EnvironmentName'])
        self.health = str(response['Health'])
        if response['Resources']:
            self.resources = EnvironmentResourcesDescription(response['Resources'])
        else:
            self.resources = None
        self.solution_stack_name = str(response['SolutionStackName'])
        self.status = str(response['Status'])
        self.template_name = str(response['TemplateName'])
        self.version_label = str(response['VersionLabel'])


class UpdateApplicationResponse(Response):
    def __init__(self, response):
        response = response['UpdateApplicationResponse']
        super(UpdateApplicationResponse, self).__init__(response)
        response = response['UpdateApplicationResult']
        if response['Application']:
            self.application = ApplicationDescription(response['Application'])
        else:
            self.application = None


class UpdateApplicationVersionResponse(Response):
    def __init__(self, response):
        response = response['UpdateApplicationVersionResponse']
        super(UpdateApplicationVersionResponse, self).__init__(response)
        response = response['UpdateApplicationVersionResult']
        if response['ApplicationVersion']:
            self.application_version = ApplicationVersionDescription(response['ApplicationVersion'])
        else:
            self.application_version = None


class UpdateConfigurationTemplateResponse(Response):
    def __init__(self, response):
        response = response['UpdateConfigurationTemplateResponse']
        super(UpdateConfigurationTemplateResponse, self).__init__(response)
        response = response['UpdateConfigurationTemplateResult']
        self.application_name = str(response['ApplicationName'])
        self.date_created = datetime.fromtimestamp(response['DateCreated'])
        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
        self.deployment_status = str(response['DeploymentStatus'])
        self.description = str(response['Description'])
        self.environment_name = str(response['EnvironmentName'])
        self.option_settings = []
        if response['OptionSettings']:
            for member in response['OptionSettings']:
                option_setting = ConfigurationOptionSetting(member)
                self.option_settings.append(option_setting)
        self.solution_stack_name = str(response['SolutionStackName'])
        self.template_name = str(response['TemplateName'])


class UpdateEnvironmentResponse(Response):
    def __init__(self, response):
        response = response['UpdateEnvironmentResponse']
        super(UpdateEnvironmentResponse, self).__init__(response)
        response = response['UpdateEnvironmentResult']
        self.application_name = str(response['ApplicationName'])
        self.cname = str(response['CNAME'])
        self.date_created = datetime.fromtimestamp(response['DateCreated'])
        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
        self.description = str(response['Description'])
        self.endpoint_url = str(response['EndpointURL'])
        self.environment_id = str(response['EnvironmentId'])
        self.environment_name = str(response['EnvironmentName'])
        self.health = str(response['Health'])
        if response['Resources']:
            self.resources = EnvironmentResourcesDescription(response['Resources'])
        else:
            self.resources = None
        self.solution_stack_name = str(response['SolutionStackName'])
        self.status = str(response['Status'])
        self.template_name = str(response['TemplateName'])
        self.version_label = str(response['VersionLabel'])


class ValidateConfigurationSettingsResponse(Response):
    def __init__(self, response):
        response = response['ValidateConfigurationSettingsResponse']
        super(ValidateConfigurationSettingsResponse, self).__init__(response)
        response = response['ValidateConfigurationSettingsResult']
        self.messages = []
        if response['Messages']:
            for member in response['Messages']:
                message = ValidationMessage(member)
                self.messages.append(message)
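

# Usage sketch (an editor's illustration, not part of the original source;
# kept as a comment so this module does not import wrapper.py back, which
# would be circular).  It assumes the Response base class earlier in this
# file consumes the 'ResponseMetadata' entry, and the live call at the end
# assumes configured AWS credentials.  Layer1Wrapper comes from wrapper.py,
# which follows.
#
#     import boto.beanstalk.response
#     from boto.beanstalk.wrapper import Layer1Wrapper
#
#     # Offline: build a response object from a hand-written layer1-style
#     # dict whose key layout mirrors the lookups in the classes above.
#     sample = {
#         'DescribeApplicationsResponse': {
#             'ResponseMetadata': {'RequestId': 'example-request-id'},
#             'DescribeApplicationsResult': {
#                 # Each member would become an ApplicationDescription.
#                 'Applications': [],
#             },
#         },
#     }
#     result = boto.beanstalk.response.DescribeApplicationsResponse(sample)
#     assert result.applications == []
#
#     # Online (hypothetical call): the wrapper performs the same
#     # dict-to-object conversion for every Layer1 method automatically.
#     beanstalk = Layer1Wrapper()
#     apps = beanstalk.describe_applications()  # DescribeApplicationsResponse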
boto-2.20.1/boto/beanstalk/wrapper.py000066400000000000000000000020641225267101000174760ustar00rootroot00000000000000"""Wraps layer1 api methods and converts layer1 dict responses to objects."""
from boto.beanstalk.layer1 import Layer1
import boto.beanstalk.response
from boto.exception import BotoServerError
import boto.beanstalk.exception as exception


def beanstalk_wrapper(func, name):
    def _wrapped_low_level_api(*args, **kwargs):
        try:
            response = func(*args, **kwargs)
        except BotoServerError, e:
            raise exception.simple(e)
        # Turn 'this_is_a_function_name' into 'ThisIsAFunctionNameResponse'.
        cls_name = ''.join([part.capitalize() for part in name.split('_')]) + 'Response'
        cls = getattr(boto.beanstalk.response, cls_name)
        return cls(response)
    return _wrapped_low_level_api


class Layer1Wrapper(object):
    def __init__(self, *args, **kwargs):
        self.api = Layer1(*args, **kwargs)

    def __getattr__(self, name):
        try:
            return beanstalk_wrapper(getattr(self.api, name), name)
        except AttributeError:
            raise AttributeError("%s has no attribute %r" % (self, name))
boto-2.20.1/boto/cacerts/000077500000000000000000000000001225267101000151225ustar00rootroot00000000000000boto-2.20.1/boto/cacerts/__init__.py000066400000000000000000000021111225267101000172300ustar00rootroot00000000000000# Copyright 2010 Google Inc.
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
boto-2.20.1/boto/cacerts/cacerts.txt000066400000000000000000007537761225267101000173240ustar00rootroot00000000000000##
## boto/cacerts/cacerts.txt -- Bundle of CA Root Certificates
##
## Certificate data from Mozilla as of: Sat Dec 29 20:03:40 2012
##
## This is a bundle of X.509 certificates of public Certificate Authorities
## (CA). These were automatically extracted from Mozilla's root certificates
## file (certdata.txt). This file can be found in the mozilla source tree:
## http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt?raw=1
##
## It contains the certificates in PEM format and therefore
## can be directly used with curl / libcurl / php_curl, or with
## an Apache+mod_ssl webserver for SSL client authentication.
## Just configure this file as the SSLCACertificateFile.
## # @(#) $RCSfile: certdata.txt,v $ $Revision: 1.87 $ $Date: 2012/12/29 16:32:45 $ GTE CyberTrust Global Root ========================== -----BEGIN CERTIFICATE----- MIICWjCCAcMCAgGlMA0GCSqGSIb3DQEBBAUAMHUxCzAJBgNVBAYTAlVTMRgwFgYD VQQKEw9HVEUgQ29ycG9yYXRpb24xJzAlBgNVBAsTHkdURSBDeWJlclRydXN0IFNv bHV0aW9ucywgSW5jLjEjMCEGA1UEAxMaR1RFIEN5YmVyVHJ1c3QgR2xvYmFsIFJv b3QwHhcNOTgwODEzMDAyOTAwWhcNMTgwODEzMjM1OTAwWjB1MQswCQYDVQQGEwJV UzEYMBYGA1UEChMPR1RFIENvcnBvcmF0aW9uMScwJQYDVQQLEx5HVEUgQ3liZXJU cnVzdCBTb2x1dGlvbnMsIEluYy4xIzAhBgNVBAMTGkdURSBDeWJlclRydXN0IEds b2JhbCBSb290MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCVD6C28FCc6HrH iM3dFw4usJTQGz0O9pTAipTHBsiQl8i4ZBp6fmw8U+E3KHNgf7KXUwefU/ltWJTS r41tiGeA5u2ylc9yMcqlHHK6XALnZELn+aks1joNrI1CqiQBOeacPwGFVw1Yh0X4 04Wqk2kmhXBIgD8SFcd5tB8FLztimQIDAQABMA0GCSqGSIb3DQEBBAUAA4GBAG3r GwnpXtlR22ciYaQqPEh346B8pt5zohQDhT37qw4wxYMWM4ETCJ57NE7fQMh017l9 3PR2VX2bY1QY6fDq81yx2YtCHrnAlU66+tXifPVoYb+O7AWXX1uw16OFNMQkpw0P lZPvy5TYnh+dXIVtx6quTx8itc2VrbqnzPmrC3p/ -----END CERTIFICATE----- Thawte Server CA ================ -----BEGIN CERTIFICATE----- MIIDEzCCAnygAwIBAgIBATANBgkqhkiG9w0BAQQFADCBxDELMAkGA1UEBhMCWkEx FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYD VQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlv biBTZXJ2aWNlcyBEaXZpc2lvbjEZMBcGA1UEAxMQVGhhd3RlIFNlcnZlciBDQTEm MCQGCSqGSIb3DQEJARYXc2VydmVyLWNlcnRzQHRoYXd0ZS5jb20wHhcNOTYwODAx MDAwMDAwWhcNMjAxMjMxMjM1OTU5WjCBxDELMAkGA1UEBhMCWkExFTATBgNVBAgT DFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYDVQQKExRUaGF3 dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNl cyBEaXZpc2lvbjEZMBcGA1UEAxMQVGhhd3RlIFNlcnZlciBDQTEmMCQGCSqGSIb3 DQEJARYXc2VydmVyLWNlcnRzQHRoYXd0ZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQAD gY0AMIGJAoGBANOkUG7I/1Zr5s9dtuoMaHVHoqrC2oQl/Kj0R1HahbUgdJSGHg91 yekIYfUGbTBuFRkC6VLAYttNmZ7iagxEOM3+vuNkCXDF/rFrKbYvScg71CcEJRCX L+eQbcAoQpnXTEPew/UhbVSfXcNY4cDk2VuwuNy0e982OsK1ZiIS1ocNAgMBAAGj EzARMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEEBQADgYEAB/pMaVz7lcxG 7oWDTSEwjsrZqG9JGubaUeNgcGyEYRGhGshIPllDfU+VPaGLtwtimHp1it2ITk6e QNuozDJ0uW8NxuOzRAvZim+aKZuZGCg70eNAKJpaPNW15yAbi8qkq43pUdniTCxZ qdq5snUb9kLy78fyGPmJvKP/iiMucEc= -----END CERTIFICATE----- Thawte Premium Server CA ======================== -----BEGIN CERTIFICATE----- MIIDJzCCApCgAwIBAgIBATANBgkqhkiG9w0BAQQFADCBzjELMAkGA1UEBhMCWkEx FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYD VQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlv biBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhhd3RlIFByZW1pdW0gU2Vy dmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNlcnZlckB0aGF3dGUuY29t MB4XDTk2MDgwMTAwMDAwMFoXDTIwMTIzMTIzNTk1OVowgc4xCzAJBgNVBAYTAlpB MRUwEwYDVQQIEwxXZXN0ZXJuIENhcGUxEjAQBgNVBAcTCUNhcGUgVG93bjEdMBsG A1UEChMUVGhhd3RlIENvbnN1bHRpbmcgY2MxKDAmBgNVBAsTH0NlcnRpZmljYXRp b24gU2VydmljZXMgRGl2aXNpb24xITAfBgNVBAMTGFRoYXd0ZSBQcmVtaXVtIFNl cnZlciBDQTEoMCYGCSqGSIb3DQEJARYZcHJlbWl1bS1zZXJ2ZXJAdGhhd3RlLmNv bTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA0jY2aovXwlue2oFBYo847kkE VdbQ7xwblRZH7xhINTpS9CtqBo87L+pW46+GjZ4X9560ZXUCTe/LCaIhUdib0GfQ ug2SBhRz1JPLlyoAnFxODLz6FVL88kRu2hFKbgifLy3j+ao6hnO2RlNYyIkFvYMR uHM/qgeN9EJN50CdHDcCAwEAAaMTMBEwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG 9w0BAQQFAAOBgQAmSCwWwlj66BZ0DKqqX1Q/8tfJeGBeXm43YyJ3Nn6yF8Q0ufUI hfzJATj/Tb7yFkJD57taRvvBxhEf8UqwKEbJw8RCfbz6q1lu1bdRiBHjpIUZa4JM pAwSremkrj/xw0llmozFyD4lt5SZu5IycQfwhl7tUCemDaYj+bvLpgcUQg== -----END CERTIFICATE----- Equifax Secure CA ================= -----BEGIN CERTIFICATE----- MIIDIDCCAomgAwIBAgIENd70zzANBgkqhkiG9w0BAQUFADBOMQswCQYDVQQGEwJV 
UzEQMA4GA1UEChMHRXF1aWZheDEtMCsGA1UECxMkRXF1aWZheCBTZWN1cmUgQ2Vy dGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyMjE2NDE1MVoXDTE4MDgyMjE2NDE1 MVowTjELMAkGA1UEBhMCVVMxEDAOBgNVBAoTB0VxdWlmYXgxLTArBgNVBAsTJEVx dWlmYXggU2VjdXJlIENlcnRpZmljYXRlIEF1dGhvcml0eTCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAwV2xWGcIYu6gmi0fCG2RFGiYCh7+2gRvE4RiIcPRfM6f BeC4AfBONOziipUEZKzxa1NfBbPLZ4C/QgKO/t0BCezhABRP/PvwDN1Dulsr4R+A cJkVV5MW8Q+XarfCaCMczE1ZMKxRHjuvK9buY0V7xdlfUNLjUA86iOe/FP3gx7kC AwEAAaOCAQkwggEFMHAGA1UdHwRpMGcwZaBjoGGkXzBdMQswCQYDVQQGEwJVUzEQ MA4GA1UEChMHRXF1aWZheDEtMCsGA1UECxMkRXF1aWZheCBTZWN1cmUgQ2VydGlm aWNhdGUgQXV0aG9yaXR5MQ0wCwYDVQQDEwRDUkwxMBoGA1UdEAQTMBGBDzIwMTgw ODIyMTY0MTUxWjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAUSOZo+SvSspXXR9gj IBBPM5iQn9QwHQYDVR0OBBYEFEjmaPkr0rKV10fYIyAQTzOYkJ/UMAwGA1UdEwQF MAMBAf8wGgYJKoZIhvZ9B0EABA0wCxsFVjMuMGMDAgbAMA0GCSqGSIb3DQEBBQUA A4GBAFjOKer89961zgK5F7WF0bnj4JXMJTENAKaSbn+2kmOeUJXRmm/kEd5jhW6Y 7qj/WsjTVbJmcVfewCHrPSqnI0kBBIZCe/zuf6IWUrVnZ9NA2zsmWLIodz2uFHdh 1voqZiegDfqnc1zqcPGUIWVEX/r87yloqaKHee9570+sB3c4 -----END CERTIFICATE----- Digital Signature Trust Co. Global CA 1 ======================================= -----BEGIN CERTIFICATE----- MIIDKTCCApKgAwIBAgIENnAVljANBgkqhkiG9w0BAQUFADBGMQswCQYDVQQGEwJV UzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMREwDwYDVQQL EwhEU1RDQSBFMTAeFw05ODEyMTAxODEwMjNaFw0xODEyMTAxODQwMjNaMEYxCzAJ BgNVBAYTAlVTMSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4x ETAPBgNVBAsTCERTVENBIEUxMIGdMA0GCSqGSIb3DQEBAQUAA4GLADCBhwKBgQCg bIGpzzQeJN3+hijM3oMv+V7UQtLodGBmE5gGHKlREmlvMVW5SXIACH7TpWJENySZ j9mDSI+ZbZUTu0M7LklOiDfBu1h//uG9+LthzfNHwJmm8fOR6Hh8AMthyUQncWlV Sn5JTe2io74CTADKAqjuAQIxZA9SLRN0dja1erQtcQIBA6OCASQwggEgMBEGCWCG SAGG+EIBAQQEAwIABzBoBgNVHR8EYTBfMF2gW6BZpFcwVTELMAkGA1UEBhMCVVMx JDAiBgNVBAoTG0RpZ2l0YWwgU2lnbmF0dXJlIFRydXN0IENvLjERMA8GA1UECxMI RFNUQ0EgRTExDTALBgNVBAMTBENSTDEwKwYDVR0QBCQwIoAPMTk5ODEyMTAxODEw MjNagQ8yMDE4MTIxMDE4MTAyM1owCwYDVR0PBAQDAgEGMB8GA1UdIwQYMBaAFGp5 fpFpRhgTCgJ3pVlbYJglDqL4MB0GA1UdDgQWBBRqeX6RaUYYEwoCd6VZW2CYJQ6i +DAMBgNVHRMEBTADAQH/MBkGCSqGSIb2fQdBAAQMMAobBFY0LjADAgSQMA0GCSqG SIb3DQEBBQUAA4GBACIS2Hod3IEGtgllsofIH160L+nEHvI8wbsEkBFKg05+k7lN QseSJqBcNJo4cvj9axY+IO6CizEqkzaFI4iKPANo08kJD038bKTaKHKTDomAsH3+ gG9lbRgzl4vCa4nuYD3Im+9/KzJic5PLPON74nZ4RbyhkwS7hp86W0N6w4pl -----END CERTIFICATE----- Digital Signature Trust Co. 
Global CA 3 ======================================= -----BEGIN CERTIFICATE----- MIIDKTCCApKgAwIBAgIENm7TzjANBgkqhkiG9w0BAQUFADBGMQswCQYDVQQGEwJV UzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMREwDwYDVQQL EwhEU1RDQSBFMjAeFw05ODEyMDkxOTE3MjZaFw0xODEyMDkxOTQ3MjZaMEYxCzAJ BgNVBAYTAlVTMSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4x ETAPBgNVBAsTCERTVENBIEUyMIGdMA0GCSqGSIb3DQEBAQUAA4GLADCBhwKBgQC/ k48Xku8zExjrEH9OFr//Bo8qhbxe+SSmJIi2A7fBw18DW9Fvrn5C6mYjuGODVvso LeE4i7TuqAHhzhy2iCoiRoX7n6dwqUcUP87eZfCocfdPJmyMvMa1795JJ/9IKn3o TQPMx7JSxhcxEzu1TdvIxPbDDyQq2gyd55FbgM2UnQIBA6OCASQwggEgMBEGCWCG SAGG+EIBAQQEAwIABzBoBgNVHR8EYTBfMF2gW6BZpFcwVTELMAkGA1UEBhMCVVMx JDAiBgNVBAoTG0RpZ2l0YWwgU2lnbmF0dXJlIFRydXN0IENvLjERMA8GA1UECxMI RFNUQ0EgRTIxDTALBgNVBAMTBENSTDEwKwYDVR0QBCQwIoAPMTk5ODEyMDkxOTE3 MjZagQ8yMDE4MTIwOTE5MTcyNlowCwYDVR0PBAQDAgEGMB8GA1UdIwQYMBaAFB6C TShlgDzJQW6sNS5ay97u+DlbMB0GA1UdDgQWBBQegk0oZYA8yUFurDUuWsve7vg5 WzAMBgNVHRMEBTADAQH/MBkGCSqGSIb2fQdBAAQMMAobBFY0LjADAgSQMA0GCSqG SIb3DQEBBQUAA4GBAEeNg61i8tuwnkUiBbmi1gMOOHLnnvx75pO2mqWilMg0HZHR xdf0CiUPPXiBng+xZ8SQTGPdXqfiup/1902lMXucKS1M/mQ+7LZT/uqb7YLbdHVL B3luHtgZg3Pe9T7Qtd7nS2h9Qy4qIOF+oHhEngj1mPnHfxsb1gYgAlihw6ID -----END CERTIFICATE----- Verisign Class 3 Public Primary Certification Authority ======================================================= -----BEGIN CERTIFICATE----- MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G CSqGSIb3DQEBAgUAA4GBALtMEivPLCYATxQT3ab7/AoRhIzzKBxnki98tsX63/Do lbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59AhWM1pF+NEHJwZRDmJXNyc AA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2OmufTqj/ZA1k -----END CERTIFICATE----- Verisign Class 1 Public Primary Certification Authority - G2 ============================================================ -----BEGIN CERTIFICATE----- MIIDAjCCAmsCEEzH6qqYPnHTkxD4PTqJkZIwDQYJKoZIhvcNAQEFBQAwgcExCzAJ BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xh c3MgMSBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcy MTowOAYDVQQLEzEoYykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3Jp emVkIHVzZSBvbmx5MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMB4X DTk4MDUxODAwMDAwMFoXDTI4MDgwMTIzNTk1OVowgcExCzAJBgNVBAYTAlVTMRcw FQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xhc3MgMSBQdWJsaWMg UHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcyMTowOAYDVQQLEzEo YykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5 MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMIGfMA0GCSqGSIb3DQEB AQUAA4GNADCBiQKBgQCq0Lq+Fi24g9TK0g+8djHKlNgdk4xWArzZbxpvUjZudVYK VdPfQ4chEWWKfo+9Id5rMj8bhDSVBZ1BNeuS65bdqlk/AVNtmU/t5eIqWpDBucSm Fc/IReumXY6cPvBkJHalzasab7bYe1FhbqZ/h8jit+U03EGI6glAvnOSPWvndQID AQABMA0GCSqGSIb3DQEBBQUAA4GBAKlPww3HZ74sy9mozS11534Vnjty637rXC0J h9ZrbWB85a7FkCMMXErQr7Fd88e2CtvgFZMN3QO8x3aKtd1Pw5sTdbgBwObJW2ul uIncrKTdcu1OofdPvAbT6shkdHvClUGcZXNY8ZCaPGqxmMnEh7zPRW1F4m4iP/68 DzFc6PLZ -----END CERTIFICATE----- Verisign Class 2 Public Primary Certification Authority - G2 ============================================================ -----BEGIN CERTIFICATE----- 
MIIDAzCCAmwCEQC5L2DMiJ+hekYJuFtwbIqvMA0GCSqGSIb3DQEBBQUAMIHBMQsw CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xPDA6BgNVBAsTM0Ns YXNzIDIgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBH MjE6MDgGA1UECxMxKGMpIDE5OTggVmVyaVNpZ24sIEluYy4gLSBGb3IgYXV0aG9y aXplZCB1c2Ugb25seTEfMB0GA1UECxMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazAe Fw05ODA1MTgwMDAwMDBaFw0yODA4MDEyMzU5NTlaMIHBMQswCQYDVQQGEwJVUzEX MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xPDA6BgNVBAsTM0NsYXNzIDIgUHVibGlj IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBHMjE6MDgGA1UECxMx KGMpIDE5OTggVmVyaVNpZ24sIEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25s eTEfMB0GA1UECxMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAp4gBIXQs5xoD8JjhlzwPIQjxnNuX6Zr8wgQGE75fUsjM HiwSViy4AWkszJkfrbCWrnkE8hM5wXuYuggs6MKEEyyqaekJ9MepAqRCwiNPStjw DqL7MWzJ5m+ZJwf15vRMeJ5t60aG+rmGyVTyssSv1EYcWskVMP8NbPUtDm3Of3cC AwEAATANBgkqhkiG9w0BAQUFAAOBgQByLvl/0fFx+8Se9sVeUYpAmLho+Jscg9ji nb3/7aHmZuovCfTK1+qlK5X2JGCGTUQug6XELaDTrnhpb3LabK4I8GOSN+a7xDAX rXfMSTWqz9iP0b63GJZHc2pUIjRkLbYWm1lbtFFZOrMLFPQS32eg9K0yZF6xRnIn jBJ7xUS0rg== -----END CERTIFICATE----- Verisign Class 3 Public Primary Certification Authority - G2 ============================================================ -----BEGIN CERTIFICATE----- MIIDAjCCAmsCEH3Z/gfPqB63EHln+6eJNMYwDQYJKoZIhvcNAQEFBQAwgcExCzAJ BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xh c3MgMyBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcy MTowOAYDVQQLEzEoYykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3Jp emVkIHVzZSBvbmx5MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMB4X DTk4MDUxODAwMDAwMFoXDTI4MDgwMTIzNTk1OVowgcExCzAJBgNVBAYTAlVTMRcw FQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xhc3MgMyBQdWJsaWMg UHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcyMTowOAYDVQQLEzEo YykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5 MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMIGfMA0GCSqGSIb3DQEB AQUAA4GNADCBiQKBgQDMXtERXVxp0KvTuWpMmR9ZmDCOFoUgRm1HP9SFIIThbbP4 pO0M8RcPO/mn+SXXwc+EY/J8Y8+iR/LGWzOOZEAEaMGAuWQcRXfH2G71lSk8UOg0 13gfqLptQ5GVj0VXXn7F+8qkBOvqlzdUMG+7AUcyM83cV5tkaWH4mx0ciU9cZwID AQABMA0GCSqGSIb3DQEBBQUAA4GBAFFNzb5cy5gZnBWyATl4Lk0PZ3BwmcYQWpSk U01UbSuvDV1Ai2TT1+7eVmGSX6bEHRBhNtMsJzzoKQm5EWR0zLVznxxIqbxhAe7i F6YM40AIOw7n60RzKprxaZLvcRTDOaxxp5EJb+RxBrO6WVcmeQD2+A2iMzAo1KpY oJ2daZH9 -----END CERTIFICATE----- GlobalSign Root CA ================== -----BEGIN CERTIFICATE----- MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp 1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8 9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE 38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME 
HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A== -----END CERTIFICATE----- GlobalSign Root CA - R2 ======================= -----BEGIN CERTIFICATE----- MIIDujCCAqKgAwIBAgILBAAAAAABD4Ym5g0wDQYJKoZIhvcNAQEFBQAwTDEgMB4G A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjIxEzARBgNVBAoTCkdsb2JhbFNp Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDYxMjE1MDgwMDAwWhcNMjExMjE1 MDgwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEG A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKbPJA6+Lm8omUVCxKs+IVSbC9N/hHD6ErPL v4dfxn+G07IwXNb9rfF73OX4YJYJkhD10FPe+3t+c4isUoh7SqbKSaZeqKeMWhG8 eoLrvozps6yWJQeXSpkqBy+0Hne/ig+1AnwblrjFuTosvNYSuetZfeLQBoZfXklq tTleiDTsvHgMCJiEbKjNS7SgfQx5TfC4LcshytVsW33hoCmEofnTlEnLJGKRILzd C9XZzPnqJworc5HGnRusyMvo4KD0L5CLTfuwNhv2GXqF4G3yYROIXJ/gkwpRl4pa zq+r1feqCapgvdzZX99yqWATXgAByUr6P6TqBwMhAo6CygPCm48CAwEAAaOBnDCB mTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUm+IH V2ccHsBqBt5ZtJot39wZhi4wNgYDVR0fBC8wLTAroCmgJ4YlaHR0cDovL2NybC5n bG9iYWxzaWduLm5ldC9yb290LXIyLmNybDAfBgNVHSMEGDAWgBSb4gdXZxwewGoG 3lm0mi3f3BmGLjANBgkqhkiG9w0BAQUFAAOCAQEAmYFThxxol4aR7OBKuEQLq4Gs J0/WwbgcQ3izDJr86iw8bmEbTUsp9Z8FHSbBuOmDAGJFtqkIk7mpM0sYmsL4h4hO 291xNBrBVNpGP+DTKqttVCL1OmLNIG+6KYnX3ZHu01yiPqFbQfXf5WRDLenVOavS ot+3i9DAgBkcRcAtjOj4LaR0VknFBbVPFd5uRHg5h6h+u/N5GJG79G+dwfCMNYxd AfvDbbnvRG15RjF+Cv6pgsH/76tuIMRQyV+dTZsXjAzlAcmgQWpzU/qlULRuJQ/7 TBj0/VLZjmmx6BEP3ojY+x1J96relc8geMJgEtslQIxq/H5COEBkEveegeGTLg== -----END CERTIFICATE----- ValiCert Class 1 VA =================== -----BEGIN CERTIFICATE----- MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0 IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDEgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG 9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNTIyMjM0OFoXDTE5MDYy NTIyMjM0OFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs YXNzIDEgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDYWYJ6ibiWuqYvaG9Y LqdUHAZu9OqNSLwxlBfw8068srg1knaw0KWlAdcAAxIiGQj4/xEjm84H9b9pGib+ TunRf50sQB1ZaG6m+FiwnRqP0z/x3BkGgagO4DrdyFNFCQbmD3DD+kCmDuJWBQ8Y TfwggtFzVXSNdnKgHZ0dwN0/cQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAFBoPUn0 LBwGlN+VYH+Wexf+T3GtZMjdd9LvWVXoP+iOBSoh8gfStadS/pyxtuJbdxdA6nLW I8sogTLDAHkY7FkXicnGah5xyf23dKUlRWnFSKsZ4UWKJWsZ7uW7EvV/96aNUcPw nXS3qT6gpf+2SQMT2iLM7XGCK5nPOrf1LXLI -----END CERTIFICATE----- ValiCert Class 2 VA =================== -----BEGIN CERTIFICATE----- MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0 IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG 9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMTk1NFoXDTE5MDYy NjAwMTk1NFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs YXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOOnHK5avIWZJV16vY dA757tn2VUdZZUcOBVXc65g2PFxTXdMwzzjsvUGJ7SVCCSRrCl6zfN1SLUzm1NZ9 WlmpZdRJEy0kTRxQb7XBhVQ7/nHk01xC+YDgkRoKWzk2Z/M/VXwbP7RfZHM047QS v4dk+NoS/zcnwbNDu+97bi5p9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBADt/UG9v UJSZSWI4OB9L+KXIPqeCgfYrx+jFzug6EILLGACOTb2oWH+heQC1u+mNr0HZDzTu 
IYEZoDJJKPTEjlbVUjP9UNV+mWwD5MlM/Mtsq2azSiGM5bUMMj4QssxsodyamEwC W/POuZ6lcg5Ktz885hZo+L7tdEy8W9ViH0Pd -----END CERTIFICATE----- RSA Root Certificate 1 ====================== -----BEGIN CERTIFICATE----- MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0 IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDMgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG 9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMjIzM1oXDTE5MDYy NjAwMjIzM1owgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs YXNzIDMgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDjmFGWHOjVsQaBalfD cnWTq8+epvzzFlLWLU2fNUSoLgRNB0mKOCn1dzfnt6td3zZxFJmP3MKS8edgkpfs 2Ejcv8ECIMYkpChMMFp2bbFc893enhBxoYjHW5tBbcqwuI4V7q0zK89HBFx1cQqY JJgpp0lZpd34t0NiYfPT4tBVPwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAFa7AliE Zwgs3x/be0kz9dNnnfS0ChCzycUs4pJqcXgn8nCDQtM+z6lU9PHYkhaM0QTLS6vJ n0WuPIqpsHEzXcjFV9+vqDWzf4mH6eglkrh/hXqu1rweN1gqZ8mRzyqBPu3GOd/A PhmcGcwTTYJBtYze4D1gCCAPRX5ron+jjBXu -----END CERTIFICATE----- Verisign Class 1 Public Primary Certification Authority - G3 ============================================================ -----BEGIN CERTIFICATE----- MIIEGjCCAwICEQCLW3VWhFSFCwDPrzhIzrGkMA0GCSqGSIb3DQEBBQUAMIHKMQsw CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT aWduIENsYXNzIDEgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu IENsYXNzIDEgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN2E1Lm0+afY8wR4 nN493GwTFtl63SRRZsDHJlkNrAYIwpTRMx/wgzUfbhvI3qpuFU5UJ+/EbRrsC+MO 8ESlV8dAWB6jRx9x7GD2bZTIGDnt/kIYVt/kTEkQeE4BdjVjEjbdZrwBBDajVWjV ojYJrKshJlQGrT/KFOCsyq0GHZXi+J3x4GD/wn91K0zM2v6HmSHquv4+VNfSWXjb PG7PoBMAGrgnoeS+Z5bKoMWznN3JdZ7rMJpfo83ZrngZPyPpXNspva1VyBtUjGP2 6KbqxzcSXKMpHgLZ2x87tNcPVkeBFQRKr4Mn0cVYiMHd9qqnoxjaaKptEVHhv2Vr n5Z20T0CAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAq2aN17O6x5q25lXQBfGfMY1a qtmqRiYPce2lrVNWYgFHKkTp/j90CxObufRNG7LRX7K20ohcs5/Ny9Sn2WCVhDr4 wTcdYcrnsMXlkdpUpqwxga6X3s0IrLjAl4B/bnKk52kTlWUfxJM8/XmPBNQ+T+r3 ns7NZ3xPZQL/kYVUc8f/NveGLezQXk//EZ9yBta4GvFMDSZl4kSAHsef493oCtrs pSCAaWihT37ha88HQfqDjrw43bAuEbFrskLMmrz5SCJ5ShkPshw+IHTZasO+8ih4 E1Z5T21Q6huwtVexN2ZYI/PcD98Kh8TvhgXVOBRgmaNL3gaWcSzy27YfpO8/7g== -----END CERTIFICATE----- Verisign Class 2 Public Primary Certification Authority - G3 ============================================================ -----BEGIN CERTIFICATE----- MIIEGTCCAwECEGFwy0mMX5hFKeewptlQW3owDQYJKoZIhvcNAQEFBQAwgcoxCzAJ BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjEfMB0GA1UECxMWVmVy aVNpZ24gVHJ1c3QgTmV0d29yazE6MDgGA1UECxMxKGMpIDE5OTkgVmVyaVNpZ24s IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTFFMEMGA1UEAxM8VmVyaVNp Z24gQ2xhc3MgMiBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0 eSAtIEczMB4XDTk5MTAwMTAwMDAwMFoXDTM2MDcxNjIzNTk1OVowgcoxCzAJBgNV BAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjEfMB0GA1UECxMWVmVyaVNp Z24gVHJ1c3QgTmV0d29yazE6MDgGA1UECxMxKGMpIDE5OTkgVmVyaVNpZ24sIElu 
Yy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTFFMEMGA1UEAxM8VmVyaVNpZ24g Q2xhc3MgMiBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAt IEczMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArwoNwtUs22e5LeWU J92lvuCwTY+zYVY81nzD9M0+hsuiiOLh2KRpxbXiv8GmR1BeRjmL1Za6tW8UvxDO JxOeBUebMXoT2B/Z0wI3i60sR/COgQanDTAM6/c8DyAd3HJG7qUCyFvDyVZpTMUY wZF7C9UTAJu878NIPkZgIIUq1ZC2zYugzDLdt/1AVbJQHFauzI13TccgTacxdu9o koqQHgiBVrKtaaNS0MscxCM9H5n+TOgWY47GCI72MfbS+uV23bUckqNJzc0BzWjN qWm6o+sdDZykIKbBoMXRRkwXbdKsZj+WjOCE1Db/IlnF+RFgqF8EffIa9iVCYQ/E Srg+iQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQA0JhU8wI1NQ0kdvekhktdmnLfe xbjQ5F1fdiLAJvmEOjr5jLX77GDx6M4EsMjdpwOPMPOY36TmpDHf0xwLRtxyID+u 7gU8pDM/CzmscHhzS5kr3zDCVLCoO1Wh/hYozUK9dG6A2ydEp85EXdQbkJgNHkKU sQAsBNB0owIFImNjzYO1+8FtYmtpdf1dcEG59b98377BMnMiIYtYgXsVkXq642RI sH/7NiXaldDxJBQX3RiAa0YjOVT1jmIJBB2UkKab5iXiQkWquJCtvgiPqQtCGJTP cjnhsUPgKM+351psE2tJs//jGHyJizNdrDPXp/naOlXJWBD5qu9ats9LS98q -----END CERTIFICATE----- Verisign Class 3 Public Primary Certification Authority - G3 ============================================================ -----BEGIN CERTIFICATE----- MIIEGjCCAwICEQCbfgZJoz5iudXukEhxKe9XMA0GCSqGSIb3DQEBBQUAMIHKMQsw CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT aWduIENsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu IENsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMu6nFL8eB8aHm8b N3O9+MlrlBIwT/A2R/XQkQr1F8ilYcEWQE37imGQ5XYgwREGfassbqb1EUGO+i2t KmFZpGcmTNDovFJbcCAEWNF6yaRpvIMXZK0Fi7zQWM6NjPXr8EJJC52XJ2cybuGu kxUccLwgTS8Y3pKI6GyFVxEa6X7jJhFUokWWVYPKMIno3Nij7SqAP395ZVc+FSBm CC+Vk7+qRy+oRpfwEuL+wgorUeZ25rdGt+INpsyow0xZVYnm6FNcHOqd8GIWC6fJ Xwzw3sJ2zq/3avL6QaaiMxTJ5Xpj055iN9WFZZ4O5lMkdBteHRJTW8cs54NJOxWu imi5V5cCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAERSWwauSCPc/L8my/uRan2Te 2yFPhpk0djZX3dAVL8WtfxUfN2JzPtTnX84XA9s1+ivbrmAJXx5fj267Cz3qWhMe DGBvtcC1IyIuBwvLqXTLR7sdwdela8wv0kL9Sd2nic9TutoAWii/gt/4uhMdUIaC /Y4wjylGsB49Ndo4YhYYSq3mtlFs3q9i6wHQHiT+eo8SGhJouPtmmRQURVyu565p F4ErWjfJXir0xuKhXFSbplQAz/DxwceYMBo7Nhbbo27q/a2ywtrvAkcTisDxszGt TxzhT5yvDwyd93gN2PQ1VoDat20Xj50egWTh/sVFuq1ruQp6Tk9LhO5L8X3dEQ== -----END CERTIFICATE----- Verisign Class 4 Public Primary Certification Authority - G3 ============================================================ -----BEGIN CERTIFICATE----- MIIEGjCCAwICEQDsoKeLbnVqAc/EfMwvlF7XMA0GCSqGSIb3DQEBBQUAMIHKMQsw CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT aWduIENsYXNzIDQgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu IENsYXNzIDQgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK3LpRFpxlmr8Y+1 GQ9Wzsy1HyDkniYlS+BzZYlZ3tCD5PUPtbut8XzoIfzk6AzufEUiGXaStBO3IFsJ +mGuqPKljYXCKtbeZjbSmwL0qJJgfJxptI8kHtCGUvYynEFYHiK9zUVilQhu0Gbd 
U6LM8BDcVHOLBKFGMzNcF0C5nk3T875Vg+ixiY5afJqWIpA7iCXy0lOIAgwLePLm NxdLMEYH5IBtptiWLugs+BGzOA1mppvqySNb247i8xOOGlktqgLw7KSHZtzBP/XY ufTsgsbSPZUd5cBPhMnZo0QoBmrXRazwa2rvTl/4EYIeOGM0ZlDUPpNz+jDDZq3/ ky2X7wMCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAj/ola09b5KROJ1WrIhVZPMq1 CtRK26vdoV9TxaBXOcLORyu+OshWv8LZJxA6sQU8wHcxuzrTBXttmhwwjIDLk5Mq g6sFUYICABFna/OIYUdfA5PVWw3g8dShMjWFsjrbsIKr0csKvE+MW8VLADsfKoKm fjaF3H48ZwC15DtS4KjrXRX5xm3wrR0OhbepmnMUWluPQSjA1egtTaRezarZ7c7c 2NU8Qh0XwRJdRTjDOPP8hS6DRkiy1yBfkjaP53kPmF6Z6PDQpLv1U70qzlmwr25/ bLvSHgCwIe34QWKCudiyxLtGUPMxxY8BqHTr9Xgn2uf3ZkPznoM+IKrDNWCRzg== -----END CERTIFICATE----- Entrust.net Secure Server CA ============================ -----BEGIN CERTIFICATE----- MIIE2DCCBEGgAwIBAgIEN0rSQzANBgkqhkiG9w0BAQUFADCBwzELMAkGA1UEBhMC VVMxFDASBgNVBAoTC0VudHJ1c3QubmV0MTswOQYDVQQLEzJ3d3cuZW50cnVzdC5u ZXQvQ1BTIGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxpYWIuKTElMCMGA1UECxMc KGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDE6MDgGA1UEAxMxRW50cnVzdC5u ZXQgU2VjdXJlIFNlcnZlciBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw05OTA1 MjUxNjA5NDBaFw0xOTA1MjUxNjM5NDBaMIHDMQswCQYDVQQGEwJVUzEUMBIGA1UE ChMLRW50cnVzdC5uZXQxOzA5BgNVBAsTMnd3dy5lbnRydXN0Lm5ldC9DUFMgaW5j b3JwLiBieSByZWYuIChsaW1pdHMgbGlhYi4pMSUwIwYDVQQLExwoYykgMTk5OSBF bnRydXN0Lm5ldCBMaW1pdGVkMTowOAYDVQQDEzFFbnRydXN0Lm5ldCBTZWN1cmUg U2VydmVyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGdMA0GCSqGSIb3DQEBAQUA A4GLADCBhwKBgQDNKIM0VBuJ8w+vN5Ex/68xYMmo6LIQaO2f55M28Qpku0f1BBc/ I0dNxScZgSYMVHINiC3ZH5oSn7yzcdOAGT9HZnuMNSjSuQrfJNqc1lB5gXpa0zf3 wkrYKZImZNHkmGw6AIr1NJtl+O3jEP/9uElY3KDegjlrgbEWGWG5VLbmQwIBA6OC AdcwggHTMBEGCWCGSAGG+EIBAQQEAwIABzCCARkGA1UdHwSCARAwggEMMIHeoIHb oIHYpIHVMIHSMQswCQYDVQQGEwJVUzEUMBIGA1UEChMLRW50cnVzdC5uZXQxOzA5 BgNVBAsTMnd3dy5lbnRydXN0Lm5ldC9DUFMgaW5jb3JwLiBieSByZWYuIChsaW1p dHMgbGlhYi4pMSUwIwYDVQQLExwoYykgMTk5OSBFbnRydXN0Lm5ldCBMaW1pdGVk MTowOAYDVQQDEzFFbnRydXN0Lm5ldCBTZWN1cmUgU2VydmVyIENlcnRpZmljYXRp b24gQXV0aG9yaXR5MQ0wCwYDVQQDEwRDUkwxMCmgJ6AlhiNodHRwOi8vd3d3LmVu dHJ1c3QubmV0L0NSTC9uZXQxLmNybDArBgNVHRAEJDAigA8xOTk5MDUyNTE2MDk0 MFqBDzIwMTkwNTI1MTYwOTQwWjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAU8Bdi E1U9s/8KAGv7UISX8+1i0BowHQYDVR0OBBYEFPAXYhNVPbP/CgBr+1CEl/PtYtAa MAwGA1UdEwQFMAMBAf8wGQYJKoZIhvZ9B0EABAwwChsEVjQuMAMCBJAwDQYJKoZI hvcNAQEFBQADgYEAkNwwAvpkdMKnCqV8IY00F6j7Rw7/JXyNEwr75Ji174z4xRAN 95K+8cPV1ZVqBLssziY2ZcgxxufuP+NXdYR6Ee9GTxj005i7qIcyunL2POI9n9cd 2cNgQ4xYDiKWL2KjLB+6rQXvqzJ4h6BUcxm1XAX5Uj5tLUUL9wqT6u0G+bI= -----END CERTIFICATE----- Entrust.net Premium 2048 Secure Server CA ========================================= -----BEGIN CERTIFICATE----- MIIEXDCCA0SgAwIBAgIEOGO5ZjANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5 IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0xOTEy MjQxODIwNTFaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3 LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/ HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH 
4QIDAQABo3QwcjARBglghkgBhvhCAQEEBAMCAAcwHwYDVR0jBBgwFoAUVeSB0RGA vtiJuQijMfmhJAkWuXAwHQYDVR0OBBYEFFXkgdERgL7YibkIozH5oSQJFrlwMB0G CSqGSIb2fQdBAAQQMA4bCFY1LjA6NC4wAwIEkDANBgkqhkiG9w0BAQUFAAOCAQEA WUesIYSKF8mciVMeuoCFGsY8Tj6xnLZ8xpJdGGQC49MGCBFhfGPjK50xA3B20qMo oPS7mmNz7W3lKtvtFKkrxjYR0CvrB4ul2p5cGZ1WEvVUKcgF7bISKo30Axv/55IQ h7A6tcOdBTcSo8f0FbnVpDkWm1M6I5HxqIKiaohowXkCIryqptau37AUX7iH0N18 f3v/rxzP5tsHrV7bhZ3QKw0z2wTR5klAEyt2+z7pnIkPFc4YsIV4IU9rTw76NmfN B/L/CNDi3tm/Kq+4h4YhPATKt5Rof8886ZjXOP/swNlQ8C5LWK5Gb9Auw2DaclVy vUxFnmG6v4SBkgPR0ml8xQ== -----END CERTIFICATE----- Baltimore CyberTrust Root ========================= -----BEGIN CERTIFICATE----- MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1 BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3 DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92 9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0 Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp -----END CERTIFICATE----- Equifax Secure Global eBusiness CA ================================== -----BEGIN CERTIFICATE----- MIICkDCCAfmgAwIBAgIBATANBgkqhkiG9w0BAQQFADBaMQswCQYDVQQGEwJVUzEc MBoGA1UEChMTRXF1aWZheCBTZWN1cmUgSW5jLjEtMCsGA1UEAxMkRXF1aWZheCBT ZWN1cmUgR2xvYmFsIGVCdXNpbmVzcyBDQS0xMB4XDTk5MDYyMTA0MDAwMFoXDTIw MDYyMTA0MDAwMFowWjELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0VxdWlmYXggU2Vj dXJlIEluYy4xLTArBgNVBAMTJEVxdWlmYXggU2VjdXJlIEdsb2JhbCBlQnVzaW5l c3MgQ0EtMTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAuucXkAJlsTRVPEnC UdXfp9E3j9HngXNBUmCbnaEXJnitx7HoJpQytd4zjTov2/KaelpzmKNc6fuKcxtc 58O/gGzNqfTWK8D3+ZmqY6KxRwIP1ORROhI8bIpaVIRw28HFkM9yRcuoWcDNM50/ o5brhTMhHD4ePmBudpxnhcXIw2ECAwEAAaNmMGQwEQYJYIZIAYb4QgEBBAQDAgAH MA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUvqigdHJQa0S3ySPY+6j/s1dr aGwwHQYDVR0OBBYEFL6ooHRyUGtEt8kj2Puo/7NXa2hsMA0GCSqGSIb3DQEBBAUA A4GBADDiAVGqx+pf2rnQZQ8w1j7aDRRJbpGTJxQx78T3LUX47Me/okENI7SS+RkA Z70Br83gcfxaz2TE4JaY0KNA4gGK7ycH8WUBikQtBmV1UsCGECAhX2xrD2yuCRyv 8qIYNMR1pHMc8Y3c7635s3a0kr/clRAevsvIO1qEYBlWlKlV -----END CERTIFICATE----- Equifax Secure eBusiness CA 1 ============================= -----BEGIN CERTIFICATE----- MIICgjCCAeugAwIBAgIBBDANBgkqhkiG9w0BAQQFADBTMQswCQYDVQQGEwJVUzEc MBoGA1UEChMTRXF1aWZheCBTZWN1cmUgSW5jLjEmMCQGA1UEAxMdRXF1aWZheCBT ZWN1cmUgZUJ1c2luZXNzIENBLTEwHhcNOTkwNjIxMDQwMDAwWhcNMjAwNjIxMDQw MDAwWjBTMQswCQYDVQQGEwJVUzEcMBoGA1UEChMTRXF1aWZheCBTZWN1cmUgSW5j LjEmMCQGA1UEAxMdRXF1aWZheCBTZWN1cmUgZUJ1c2luZXNzIENBLTEwgZ8wDQYJ KoZIhvcNAQEBBQADgY0AMIGJAoGBAM4vGbwXt3fek6lfWg0XTzQaDJj0ItlZ1MRo RvC0NcWFAyDGr0WlIVFFQesWWDYyb+JQYmT5/VGcqiTZ9J2DKocKIdMSODRsjQBu WqDZQu4aIZX5UkxVWsUPOE9G+m34LjXWHXzr4vCwdYDIqROsvojvOm6rXyo4YgKw 
Env+j6YDAgMBAAGjZjBkMBEGCWCGSAGG+EIBAQQEAwIABzAPBgNVHRMBAf8EBTAD AQH/MB8GA1UdIwQYMBaAFEp4MlIR21kWNl7fwRQ2QGpHfEyhMB0GA1UdDgQWBBRK eDJSEdtZFjZe38EUNkBqR3xMoTANBgkqhkiG9w0BAQQFAAOBgQB1W6ibAxHm6VZM zfmpTMANmvPMZWnmJXbMWbfWVMMdzZmsGd20hdXgPfxiIKeES1hl8eL5lSE/9dR+ WB5Hh1Q+WKG1tfgq73HnvMP2sUlG4tega+VWeponmHxGYhTnyfxuAxJ5gDgdSIKN /Bf+KpYrtWKmpj29f5JZzVoqgrI3eQ== -----END CERTIFICATE----- Equifax Secure eBusiness CA 2 ============================= -----BEGIN CERTIFICATE----- MIIDIDCCAomgAwIBAgIEN3DPtTANBgkqhkiG9w0BAQUFADBOMQswCQYDVQQGEwJV UzEXMBUGA1UEChMORXF1aWZheCBTZWN1cmUxJjAkBgNVBAsTHUVxdWlmYXggU2Vj dXJlIGVCdXNpbmVzcyBDQS0yMB4XDTk5MDYyMzEyMTQ0NVoXDTE5MDYyMzEyMTQ0 NVowTjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDkVxdWlmYXggU2VjdXJlMSYwJAYD VQQLEx1FcXVpZmF4IFNlY3VyZSBlQnVzaW5lc3MgQ0EtMjCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEA5Dk5kx5SBhsoNviyoynF7Y6yEb3+6+e0dMKP/wXn2Z0G vxLIPw7y1tEkshHe0XMJitSxLJgJDR5QRrKDpkWNYmi7hRsgcDKqQM2mll/EcTc/ BPO3QSQ5BxoeLmFYoBIL5aXfxavqN3HMHMg3OrmXUqesxWoklE6ce8/AatbfIb0C AwEAAaOCAQkwggEFMHAGA1UdHwRpMGcwZaBjoGGkXzBdMQswCQYDVQQGEwJVUzEX MBUGA1UEChMORXF1aWZheCBTZWN1cmUxJjAkBgNVBAsTHUVxdWlmYXggU2VjdXJl IGVCdXNpbmVzcyBDQS0yMQ0wCwYDVQQDEwRDUkwxMBoGA1UdEAQTMBGBDzIwMTkw NjIzMTIxNDQ1WjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAUUJ4L6q9euSBIplBq y/3YIHqngnYwHQYDVR0OBBYEFFCeC+qvXrkgSKZQasv92CB6p4J2MAwGA1UdEwQF MAMBAf8wGgYJKoZIhvZ9B0EABA0wCxsFVjMuMGMDAgbAMA0GCSqGSIb3DQEBBQUA A4GBAAyGgq3oThr1jokn4jVYPSm0B482UJW/bsGe68SQsoWou7dC4A8HOd/7npCy 0cE+U58DRLB+S/Rv5Hwf5+Kx5Lia78O9zt4LMjTZ3ijtM2vE1Nc9ElirfQkty3D1 E4qUoSek1nDFbZS1yX2doNLGCEnZZpum0/QL3MUmV+GRMOrN -----END CERTIFICATE----- AddTrust Low-Value Services Root ================================ -----BEGIN CERTIFICATE----- MIIEGDCCAwCgAwIBAgIBATANBgkqhkiG9w0BAQUFADBlMQswCQYDVQQGEwJTRTEU MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3 b3JrMSEwHwYDVQQDExhBZGRUcnVzdCBDbGFzcyAxIENBIFJvb3QwHhcNMDAwNTMw MTAzODMxWhcNMjAwNTMwMTAzODMxWjBlMQswCQYDVQQGEwJTRTEUMBIGA1UEChML QWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3b3JrMSEwHwYD VQQDExhBZGRUcnVzdCBDbGFzcyAxIENBIFJvb3QwggEiMA0GCSqGSIb3DQEBAQUA A4IBDwAwggEKAoIBAQCWltQhSWDia+hBBwzexODcEyPNwTXH+9ZOEQpnXvUGW2ul CDtbKRY654eyNAbFvAWlA3yCyykQruGIgb3WntP+LVbBFc7jJp0VLhD7Bo8wBN6n tGO0/7Gcrjyvd7ZWxbWroulpOj0OM3kyP3CCkplhbY0wCI9xP6ZIVxn4JdxLZlyl dI+Yrsj5wAYi56xz36Uu+1LcsRVlIPo1Zmne3yzxbrww2ywkEtvrNTVokMsAsJch PXQhI2U0K7t4WaPW4XY5mqRJjox0r26kmqPZm9I4XJuiGMx1I4S+6+JNM3GOGvDC +Mcdoq0Dlyz4zyXG9rgkMbFjXZJ/Y/AlyVMuH79NAgMBAAGjgdIwgc8wHQYDVR0O BBYEFJWxtPCUtr3H2tERCSG+wa9J/RB7MAsGA1UdDwQEAwIBBjAPBgNVHRMBAf8E BTADAQH/MIGPBgNVHSMEgYcwgYSAFJWxtPCUtr3H2tERCSG+wa9J/RB7oWmkZzBl MQswCQYDVQQGEwJTRTEUMBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFk ZFRydXN0IFRUUCBOZXR3b3JrMSEwHwYDVQQDExhBZGRUcnVzdCBDbGFzcyAxIENB IFJvb3SCAQEwDQYJKoZIhvcNAQEFBQADggEBACxtZBsfzQ3duQH6lmM0MkhHma6X 7f1yFqZzR1r0693p9db7RcwpiURdv0Y5PejuvE1Uhh4dbOMXJ0PhiVYrqW9yTkkz 43J8KiOavD7/KCrto/8cI7pDVwlnTUtiBi34/2ydYB7YHEt9tTEv2dB8Xfjea4MY eDdXL+gzB2ffHsdrKpV2ro9Xo/D0UrSpUwjP4E/TelOL/bscVjby/rK25Xa71SJl pz/+0WatC7xrmYbvP33zGDLKe8bjq2RGlfgmadlVg3sslgf/WSxEo8bl6ancoWOA WiFeIc9TVPC6b4nbqKqVz4vjccweGyBECMB6tkD9xOQ14R0WHNC8K47Wcdk= -----END CERTIFICATE----- AddTrust External Root ====================== -----BEGIN CERTIFICATE----- MIIENjCCAx6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBvMQswCQYDVQQGEwJTRTEU MBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFkZFRydXN0IEV4dGVybmFs IFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBFeHRlcm5hbCBDQSBSb290 MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFowbzELMAkGA1UEBhMCU0Ux FDASBgNVBAoTC0FkZFRydXN0IEFCMSYwJAYDVQQLEx1BZGRUcnVzdCBFeHRlcm5h 
bCBUVFAgTmV0d29yazEiMCAGA1UEAxMZQWRkVHJ1c3QgRXh0ZXJuYWwgQ0EgUm9v dDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALf3GjPm8gAELTngTlvt H7xsD821+iO2zt6bETOXpClMfZOfvUq8k+0DGuOPz+VtUFrWlymUWoCwSXrbLpX9 uMq/NzgtHj6RQa1wVsfwTz/oMp50ysiQVOnGXw94nZpAPA6sYapeFI+eh6FqUNzX mk6vBbOmcZSccbNQYArHE504B4YCqOmoaSYYkKtMsE8jqzpPhNjfzp/haW+710LX a0Tkx63ubUFfclpxCDezeWWkWaCUN/cALw3CknLa0Dhy2xSoRcRdKn23tNbE7qzN E0S3ySvdQwAl+mG5aWpYIxG3pzOPVnVZ9c0p10a3CitlttNCbxWyuHv77+ldU9U0 WicCAwEAAaOB3DCB2TAdBgNVHQ4EFgQUrb2YejS0Jvf6xCZU7wO94CTLVBowCwYD VR0PBAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wgZkGA1UdIwSBkTCBjoAUrb2YejS0 Jvf6xCZU7wO94CTLVBqhc6RxMG8xCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtBZGRU cnVzdCBBQjEmMCQGA1UECxMdQWRkVHJ1c3QgRXh0ZXJuYWwgVFRQIE5ldHdvcmsx IjAgBgNVBAMTGUFkZFRydXN0IEV4dGVybmFsIENBIFJvb3SCAQEwDQYJKoZIhvcN AQEFBQADggEBALCb4IUlwtYj4g+WBpKdQZic2YR5gdkeWxQHIzZlj7DYd7usQWxH YINRsPkyPef89iYTx4AWpb9a/IfPeHmJIZriTAcKhjW88t5RxNKWt9x+Tu5w/Rw5 6wwCURQtjr0W4MHfRnXnJK3s9EK0hZNwEGe6nQY1ShjTK3rMUUKhemPR5ruhxSvC Nr4TDea9Y355e6cJDUCrat2PisP29owaQgVR1EX1n6diIWgVIEM8med8vSTYqZEX c4g/VhsxOBi0cQ+azcgOno4uG+GMmIPLHzHxREzGBHNJdmAPx/i9F4BrLunMTA5a mnkPIAou1Z5jJh5VkpTYghdae9C8x49OhgQ= -----END CERTIFICATE----- AddTrust Public Services Root ============================= -----BEGIN CERTIFICATE----- MIIEFTCCAv2gAwIBAgIBATANBgkqhkiG9w0BAQUFADBkMQswCQYDVQQGEwJTRTEU MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3 b3JrMSAwHgYDVQQDExdBZGRUcnVzdCBQdWJsaWMgQ0EgUm9vdDAeFw0wMDA1MzAx MDQxNTBaFw0yMDA1MzAxMDQxNTBaMGQxCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtB ZGRUcnVzdCBBQjEdMBsGA1UECxMUQWRkVHJ1c3QgVFRQIE5ldHdvcmsxIDAeBgNV BAMTF0FkZFRydXN0IFB1YmxpYyBDQSBSb290MIIBIjANBgkqhkiG9w0BAQEFAAOC AQ8AMIIBCgKCAQEA6Rowj4OIFMEg2Dybjxt+A3S72mnTRqX4jsIMEZBRpS9mVEBV 6tsfSlbunyNu9DnLoblv8n75XYcmYZ4c+OLspoH4IcUkzBEMP9smcnrHAZcHF/nX GCwwfQ56HmIexkvA/X1id9NEHif2P0tEs7c42TkfYNVRknMDtABp4/MUTu7R3AnP dzRGULD4EfL+OHn3Bzn+UZKXC1sIXzSGAa2Il+tmzV7R/9x98oTaunet3IAIx6eH 1lWfl2royBFkuucZKT8Rs3iQhCBSWxHveNCD9tVIkNAwHM+A+WD+eeSI8t0A65RF 62WUaUC6wNW0uLp9BBGo6zEFlpROWCGOn9Bg/QIDAQABo4HRMIHOMB0GA1UdDgQW BBSBPjfYkrAfd59ctKtzquf2NGAv+jALBgNVHQ8EBAMCAQYwDwYDVR0TAQH/BAUw AwEB/zCBjgYDVR0jBIGGMIGDgBSBPjfYkrAfd59ctKtzquf2NGAv+qFopGYwZDEL MAkGA1UEBhMCU0UxFDASBgNVBAoTC0FkZFRydXN0IEFCMR0wGwYDVQQLExRBZGRU cnVzdCBUVFAgTmV0d29yazEgMB4GA1UEAxMXQWRkVHJ1c3QgUHVibGljIENBIFJv b3SCAQEwDQYJKoZIhvcNAQEFBQADggEBAAP3FUr4JNojVhaTdt02KLmuG7jD8WS6 IBh4lSknVwW8fCr0uVFV2ocC3g8WFzH4qnkuCRO7r7IgGRLlk/lL+YPoRNWyQSW/ iHVv/xD8SlTQX/D67zZzfRs2RcYhbbQVuE7PnFylPVoAjgbjPGsye/Kf8Lb93/Ao GEjwxrzQvzSAlsJKsW2Ox5BF3i9nrEUEo3rcVZLJR2bYGozH7ZxOmuASu7VqTITh 4SINhwBk/ox9Yjllpu9CtoAlEmEBqCQTcAARJl/6NVDFSMwGR+gn2HCNX2TmoUQm XiLsks3/QppEIW1cxeMiHV9HEufOX1362KqxMy3ZdvJOOjMMK7MtkAY= -----END CERTIFICATE----- AddTrust Qualified Certificates Root ==================================== -----BEGIN CERTIFICATE----- MIIEHjCCAwagAwIBAgIBATANBgkqhkiG9w0BAQUFADBnMQswCQYDVQQGEwJTRTEU MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3 b3JrMSMwIQYDVQQDExpBZGRUcnVzdCBRdWFsaWZpZWQgQ0EgUm9vdDAeFw0wMDA1 MzAxMDQ0NTBaFw0yMDA1MzAxMDQ0NTBaMGcxCzAJBgNVBAYTAlNFMRQwEgYDVQQK EwtBZGRUcnVzdCBBQjEdMBsGA1UECxMUQWRkVHJ1c3QgVFRQIE5ldHdvcmsxIzAh BgNVBAMTGkFkZFRydXN0IFF1YWxpZmllZCBDQSBSb290MIIBIjANBgkqhkiG9w0B AQEFAAOCAQ8AMIIBCgKCAQEA5B6a/twJWoekn0e+EV+vhDTbYjx5eLfpMLXsDBwq xBb/4Oxx64r1EW7tTw2R0hIYLUkVAcKkIhPHEWT/IhKauY5cLwjPcWqzZwFZ8V1G 87B4pfYOQnrjfxvM0PC3KP0q6p6zsLkEqv32x7SxuCqg+1jxGaBvcCV+PmlKfw8i 2O+tCBGaKZnhqkRFmhJePp1tUvznoD1oL/BLcHwTOK28FSXx1s6rosAx1i+f4P8U WfyEk9mHfExUE+uf0S0R+Bg6Ot4l2ffTQO2kBhLEO+GRwVY18BTcZTYJbqukB8c1 
0cIDMzZbdSZtQvESa0NvS3GU+jQd7RNuyoB/mC9suWXY6QIDAQABo4HUMIHRMB0G A1UdDgQWBBQ5lYtii1zJ1IC6WA+XPxUIQ8yYpzALBgNVHQ8EBAMCAQYwDwYDVR0T AQH/BAUwAwEB/zCBkQYDVR0jBIGJMIGGgBQ5lYtii1zJ1IC6WA+XPxUIQ8yYp6Fr pGkwZzELMAkGA1UEBhMCU0UxFDASBgNVBAoTC0FkZFRydXN0IEFCMR0wGwYDVQQL ExRBZGRUcnVzdCBUVFAgTmV0d29yazEjMCEGA1UEAxMaQWRkVHJ1c3QgUXVhbGlm aWVkIENBIFJvb3SCAQEwDQYJKoZIhvcNAQEFBQADggEBABmrder4i2VhlRO6aQTv hsoToMeqT2QbPxj2qC0sVY8FtzDqQmodwCVRLae/DLPt7wh/bDxGGuoYQ992zPlm hpwsaPXpF/gxsxjE1kh9I0xowX67ARRvxdlu3rsEQmr49lx95dr6h+sNNVJn0J6X dgWTP5XHAeZpVTh/EGGZyeNfpso+gmNIquIISD6q8rKFYqa0p9m9N5xotS1WfbC3 P6CxB9bpT9zeRXEwMn8bLgn5v1Kh7sKAPgZcLlVAwRv1cEWw3F369nJad9Jjzc9Y iQBCYz95OdBEsIJuQRno3eDBiFrRHnGTHyQwdOUeqN48Jzd/g66ed8/wMLH/S5no xqE= -----END CERTIFICATE----- Entrust Root Certification Authority ==================================== -----BEGIN CERTIFICATE----- MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0 Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4 4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9 KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi 94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP 9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/ eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m 0vdXcDazv/wor3ElhVsT/h5/WrQ8 -----END CERTIFICATE----- RSA Security 2048 v3 ==================== -----BEGIN CERTIFICATE----- MIIDYTCCAkmgAwIBAgIQCgEBAQAAAnwAAAAKAAAAAjANBgkqhkiG9w0BAQUFADA6 MRkwFwYDVQQKExBSU0EgU2VjdXJpdHkgSW5jMR0wGwYDVQQLExRSU0EgU2VjdXJp dHkgMjA0OCBWMzAeFw0wMTAyMjIyMDM5MjNaFw0yNjAyMjIyMDM5MjNaMDoxGTAX BgNVBAoTEFJTQSBTZWN1cml0eSBJbmMxHTAbBgNVBAsTFFJTQSBTZWN1cml0eSAy MDQ4IFYzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAt49VcdKA3Xtp eafwGFAyPGJn9gqVB93mG/Oe2dJBVGutn3y+Gc37RqtBaB4Y6lXIL5F4iSj7Jylg /9+PjDvJSZu1pJTOAeo+tWN7fyb9Gd3AIb2E0S1PRsNO3Ng3OTsor8udGuorryGl wSMiuLgbWhOHV4PR8CDn6E8jQrAApX2J6elhc5SYcSa8LWrg903w8bYqODGBDSnh AMFRD0xS+ARaqn1y07iHKrtjEAMqs6FPDVpeRrc9DvV07Jmf+T0kgYim3WBU6JU2 PcYJk5qjEoAAVZkZR73QpXzDuvsf9/UP+Ky5tfQ3mBMY3oVbtwyCO4dvlTlYMNpu AWgXIszACwIDAQABo2MwYTAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB BjAfBgNVHSMEGDAWgBQHw1EwpKrpRa41JPr/JCwz0LGdjDAdBgNVHQ4EFgQUB8NR MKSq6UWuNST6/yQsM9CxnYwwDQYJKoZIhvcNAQEFBQADggEBAF8+hnZuuDU8TjYc HnmYv/3VEhF5Ug7uMYm83X/50cYVIeiKAVQNOvtUudZj1LGqlk2iQk3UUx+LEN5/ Zb5gEydxiKRz44Rj0aRV4VCT5hsOedBnvEbIvz8XDZXmxpBp3ue0L96VfdASPz0+ 
f00/FGj1EVDVwfSQpQgdMWD/YIwjVAqv/qFuxdF6Kmh4zx6CCiC0H63lhbJqaHVO rSU3lIW+vaHU6rcMSzyd6BIA8F+sDeGscGNz9395nzIlQnQFgCi/vcEkllgVsRch 6YlL2weIZ/QVrXA+L02FO8K32/6YaCOJ4XQP3vTFhGMpG8zLB8kApKnXwiJPZ9d3 7CAFYd4= -----END CERTIFICATE----- GeoTrust Global CA ================== -----BEGIN CERTIFICATE----- MIIDVDCCAjygAwIBAgIDAjRWMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i YWwgQ0EwHhcNMDIwNTIxMDQwMDAwWhcNMjIwNTIxMDQwMDAwWjBCMQswCQYDVQQG EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UEAxMSR2VvVHJ1c3Qg R2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2swYYzD9 9BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9mOSm9BXiLnTjoBbdq fnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIuT8rxh0PBFpVXLVDv iS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6cJmTM386DGXHKTubU 1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmRCw7+OC7RHQWa9k0+ bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5aszPeE4uwc2hGKceeoW MPRfwCvocWvk+QIDAQABo1MwUTAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTA ephojYn7qwVkDBF9qn1luMrMTjAfBgNVHSMEGDAWgBTAephojYn7qwVkDBF9qn1l uMrMTjANBgkqhkiG9w0BAQUFAAOCAQEANeMpauUvXVSOKVCUn5kaFOSPeCpilKIn Z57QzxpeR+nBsqTP3UEaBU6bS+5Kb1VSsyShNwrrZHYqLizz/Tt1kL/6cdjHPTfS tQWVYrmm3ok9Nns4d0iXrKYgjy6myQzCsplFAMfOEVEiIuCl6rYVSAlk6l5PdPcF PseKUgzbFbS9bZvlxrFUaKnjaZC2mqUPuLk/IH2uSrW4nOQdtqvmlKXBx4Ot2/Un hw4EbNX/3aBd7YdStysVAq45pmp06drE57xNNB6pXE0zX5IJL4hmXXeXxx12E6nV 5fEWCRE11azbJHFwLJhWC9kXtNHjUStedejV0NxPNO3CBWaAocvmMw== -----END CERTIFICATE----- GeoTrust Global CA 2 ==================== -----BEGIN CERTIFICATE----- MIIDZjCCAk6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBEMQswCQYDVQQGEwJVUzEW MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEdMBsGA1UEAxMUR2VvVHJ1c3QgR2xvYmFs IENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMTkwMzA0MDUwMDAwWjBEMQswCQYDVQQG EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEdMBsGA1UEAxMUR2VvVHJ1c3Qg R2xvYmFsIENBIDIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDvPE1A PRDfO1MA4Wf+lGAVPoWI8YkNkMgoI5kF6CsgncbzYEbYwbLVjDHZ3CB5JIG/NTL8 Y2nbsSpr7iFY8gjpeMtvy/wWUsiRxP89c96xPqfCfWbB9X5SJBri1WeR0IIQ13hL TytCOb1kLUCgsBDTOEhGiKEMuzozKmKY+wCdE1l/bztyqu6mD4b5BWHqZ38MN5aL 5mkWRxHCJ1kDs6ZgwiFAVvqgx306E+PsV8ez1q6diYD3Aecs9pYrEw15LNnA5IZ7 S4wMcoKK+xfNAGw6EzywhIdLFnopsk/bHdQL82Y3vdj2V7teJHq4PIu5+pIaGoSe 2HSPqht/XvT+RSIhAgMBAAGjYzBhMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE FHE4NvICMVNHK266ZUapEBVYIAUJMB8GA1UdIwQYMBaAFHE4NvICMVNHK266ZUap EBVYIAUJMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQUFAAOCAQEAA/e1K6td EPx7srJerJsOflN4WT5CBP51o62sgU7XAotexC3IUnbHLB/8gTKY0UvGkpMzNTEv /NgdRN3ggX+d6YvhZJFiCzkIjKx0nVnZellSlxG5FntvRdOW2TF9AjYPnDtuzywN A0ZF66D0f0hExghAzN4bcLUprbqLOzRldRtxIR0sFAqwlpW41uryZfspuk/qkZN0 abby/+Ea0AzRdoXLiiW9l14sbxWZJue2Kf8i7MkCx1YAzUm5s2x7UwQa4qjJqhIF I8LO57sEAszAR6LkxCkvW0VXiVHuPOtSCP8HNR6fNWpHSlaY0VqFH4z1Ir+rzoPz 4iIprn2DQKi6bA== -----END CERTIFICATE----- GeoTrust Universal CA ===================== -----BEGIN CERTIFICATE----- MIIFaDCCA1CgAwIBAgIBATANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJVUzEW MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEeMBwGA1UEAxMVR2VvVHJ1c3QgVW5pdmVy c2FsIENBMB4XDTA0MDMwNDA1MDAwMFoXDTI5MDMwNDA1MDAwMFowRTELMAkGA1UE BhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xHjAcBgNVBAMTFUdlb1RydXN0 IFVuaXZlcnNhbCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKYV VaCjxuAfjJ0hUNfBvitbtaSeodlyWL0AG0y/YckUHUWCq8YdgNY96xCcOq9tJPi8 cQGeBvV8Xx7BDlXKg5pZMK4ZyzBIle0iN430SppyZj6tlcDgFgDgEB8rMQ7XlFTT QjOgNB0eRXbdT8oYN+yFFXoZCPzVx5zw8qkuEKmS5j1YPakWaDwvdSEYfyh3peFh F7em6fgemdtzbvQKoiFs7tqqhZJmr/Z6a4LauiIINQ/PQvE1+mrufislzDoR5G2v c7J2Ha3QsnhnGqQ5HFELZ1aD/ThdDc7d8Lsrlh/eezJS/R27tQahsiFepdaVaH/w mZ7cRQg+59IJDTWU3YBOU5fXtQlEIGQWFwMCTFMNaN7VqnJNk22CDtucvc+081xd 
VHppCZbW2xHBjXWotM85yM48vCR85mLK4b19p71XZQvk/iXttmkQ3CgaRr0BHdCX teGYO8A3ZNY9lO4L4fUorgtWv3GLIylBjobFS1J72HGrH4oVpjuDWtdYAVHGTEHZ f9hBZ3KiKN9gg6meyHv8U3NyWfWTehd2Ds735VzZC1U0oqpbtWpU5xPKV+yXbfRe Bi9Fi1jUIxaS5BZuKGNZMN9QAZxjiRqf2xeUgnA3wySemkfWWspOqGmJch+RbNt+ nhutxx9z3SxPGWX9f5NAEC7S8O08ni4oPmkmM8V7AgMBAAGjYzBhMA8GA1UdEwEB /wQFMAMBAf8wHQYDVR0OBBYEFNq7LqqwDLiIJlF0XG0D08DYj3rWMB8GA1UdIwQY MBaAFNq7LqqwDLiIJlF0XG0D08DYj3rWMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG 9w0BAQUFAAOCAgEAMXjmx7XfuJRAyXHEqDXsRh3ChfMoWIawC/yOsjmPRFWrZIRc aanQmjg8+uUfNeVE44B5lGiku8SfPeE0zTBGi1QrlaXv9z+ZhP015s8xxtxqv6fX IwjhmF7DWgh2qaavdy+3YL1ERmrvl/9zlcGO6JP7/TG37FcREUWbMPEaiDnBTzyn ANXH/KttgCJwpQzgXQQpAvvLoJHRfNbDflDVnVi+QTjruXU8FdmbyUqDWcDaU/0z uzYYm4UPFd3uLax2k7nZAY1IEKj79TiG8dsKxr2EoyNB3tZ3b4XUhRxQ4K5RirqN Pnbiucon8l+f725ZDQbYKxek0nxru18UGkiPGkzns0ccjkxFKyDuSN/n3QmOGKja QI2SJhFTYXNd673nxE0pN2HrrDktZy4W1vUAg4WhzH92xH3kt0tm7wNFYGm2DFKW koRepqO1pD4r2czYG0eq8kTaT/kD6PAUyz/zg97QwVTjt+gKN02LIFkDMBmhLMi9 ER/frslKxfMnZmaGrGiR/9nmUxwPi1xpZQomyB40w11Re9epnAahNt3ViZS82eQt DF4JbAiXfKM9fJP/P6EUp8+1Xevb2xzEdt+Iub1FBZUbrvxGakyvSOPOrg/Sfuvm bJxPgWp6ZKy7PtXny3YuxadIwVyQD8vIP/rmMuGNG2+k5o7Y+SlIis5z/iw= -----END CERTIFICATE----- GeoTrust Universal CA 2 ======================= -----BEGIN CERTIFICATE----- MIIFbDCCA1SgAwIBAgIBATANBgkqhkiG9w0BAQUFADBHMQswCQYDVQQGEwJVUzEW MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1c3QgVW5pdmVy c2FsIENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMjkwMzA0MDUwMDAwWjBHMQswCQYD VQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1 c3QgVW5pdmVyc2FsIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC AQCzVFLByT7y2dyxUxpZKeexw0Uo5dfR7cXFS6GqdHtXr0om/Nj1XqduGdt0DE81 WzILAePb63p3NeqqWuDW6KFXlPCQo3RWlEQwAx5cTiuFJnSCegx2oG9NzkEtoBUG FF+3Qs17j1hhNNwqCPkuwwGmIkQcTAeC5lvO0Ep8BNMZcyfwqph/Lq9O64ceJHdq XbboW0W63MOhBW9Wjo8QJqVJwy7XQYci4E+GymC16qFjwAGXEHm9ADwSbSsVsaxL se4YuU6W3Nx2/zu+z18DwPw76L5GG//aQMJS9/7jOvdqdzXQ2o3rXhhqMcceujwb KNZrVMaqW9eiLBsZzKIC9ptZvTdrhrVtgrrY6slWvKk2WP0+GfPtDCapkzj4T8Fd IgbQl+rhrcZV4IErKIM6+vR7IVEAvlI4zs1meaj0gVbi0IMJR1FbUGrP20gaXT73 y/Zl92zxlfgCOzJWgjl6W70viRu/obTo/3+NjN8D8WBOWBFM66M/ECuDmgFz2ZRt hAAnZqzwcEAJQpKtT5MNYQlRJNiS1QuUYbKHsu3/mjX/hVTK7URDrBs8FmtISgoc QIgfksILAAX/8sgCSqSqqcyZlpwvWOB94b67B9xfBHJcMTTD7F8t4D1kkCLm0ey4 Lt1ZrtmhN79UNdxzMk+MBB4zsslG8dhcyFVQyWi9qLo2CQIDAQABo2MwYTAPBgNV HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAfBgNV HSMEGDAWgBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAOBgNVHQ8BAf8EBAMCAYYwDQYJ KoZIhvcNAQEFBQADggIBAGbBxiPz2eAubl/oz66wsCVNK/g7WJtAJDday6sWSf+z dXkzoS9tcBc0kf5nfo/sm+VegqlVHy/c1FEHEv6sFj4sNcZj/NwQ6w2jqtB8zNHQ L1EuxBRa3ugZ4T7GzKQp5y6EqgYweHZUcyiYWTjgAA1i00J9IZ+uPTqM1fp3DRgr Fg5fNuH8KrUwJM/gYwx7WBr+mbpCErGR9Hxo4sjoryzqyX6uuyo9DRXcNJW2GHSo ag/HtPQTxORb7QrSpJdMKu0vbBKJPfEncKpqA1Ihn0CoZ1Dy81of398j9tx4TuaY T1U6U+Pv8vSfx3zYWK8pIpe44L2RLrB27FcRz+8pRPPphXpgY+RdM4kX2TGq2tbz GDVyz4crL2MjhF2EjD9XoIj8mZEoJmmZ1I+XRL6O1UixpCgp8RW04eWe3fiPpm8m 1wk8OhwRDqZsN/etRIcsKMfYdIKz0G9KV7s1KSegi+ghp4dkNl3M2Basx7InQJJV OCiNUW7dFGdTbHFcJoRNdVq2fmBWqU2t+5sel/MN2dKXVHfaPRK34B7vCAas+YWH 6aLcr34YEoP9VhdBLtUpgn2Z9DH2canPLAEnpQW5qrJITirvn5NSUZU8UnOOVkwX QMAJKOSLakhT2+zNVVXxxvjpoixMptEmX36vWkzaH6byHCx+rgIW0lbQL1dTR+iS -----END CERTIFICATE----- UTN-USER First-Network Applications =================================== -----BEGIN CERTIFICATE----- MIIEZDCCA0ygAwIBAgIQRL4Mi1AAJLQR0zYwS8AzdzANBgkqhkiG9w0BAQUFADCB ozELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xKzApBgNVBAMTIlVUTi1VU0VSRmlyc3Qt 
TmV0d29yayBBcHBsaWNhdGlvbnMwHhcNOTkwNzA5MTg0ODM5WhcNMTkwNzA5MTg1 NzQ5WjCBozELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0 IExha2UgQ2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYD VQQLExhodHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xKzApBgNVBAMTIlVUTi1VU0VS Rmlyc3QtTmV0d29yayBBcHBsaWNhdGlvbnMwggEiMA0GCSqGSIb3DQEBAQUAA4IB DwAwggEKAoIBAQCz+5Gh5DZVhawGNFugmliy+LUPBXeDrjKxdpJo7CNKyXY/45y2 N3kDuatpjQclthln5LAbGHNhSuh+zdMvZOOmfAz6F4CjDUeJT1FxL+78P/m4FoCH iZMlIJpDgmkkdihZNaEdwH+DBmQWICzTSaSFtMBhf1EI+GgVkYDLpdXuOzr0hARe YFmnjDRy7rh4xdE7EkpvfmUnuaRVxblvQ6TFHSyZwFKkeEwVs0CYCGtDxgGwenv1 axwiP8vv/6jQOkt2FZ7S0cYu49tXGzKiuG/ohqY/cKvlcJKrRB5AUPuco2LkbG6g yN7igEL66S/ozjIEj3yNtxyjNTwV3Z7DrpelAgMBAAGjgZEwgY4wCwYDVR0PBAQD AgHGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFPqGydvguul49Uuo1hXf8NPh ahQ8ME8GA1UdHwRIMEYwRKBCoECGPmh0dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9V VE4tVVNFUkZpcnN0LU5ldHdvcmtBcHBsaWNhdGlvbnMuY3JsMA0GCSqGSIb3DQEB BQUAA4IBAQCk8yXM0dSRgyLQzDKrm5ZONJFUICU0YV8qAhXhi6r/fWRRzwr/vH3Y IWp4yy9Rb/hCHTO967V7lMPDqaAt39EpHx3+jz+7qEUqf9FuVSTiuwL7MT++6Lzs QCv4AdRWOOTKRIK1YSAhZ2X28AvnNPilwpyjXEAfhZOVBt5P1CeptqX8Fs1zMT+4 ZSfP1FMa8Kxun08FDAOBp4QpxFq9ZFdyrTvPNximmMatBrTcCKME1SmklpoSZ0qM YEWd8SOasACcaLWYUNPvji6SZbFIPiG+FTAqDbUMo2s/rn9X9R+WfN9v3YIwLGUb QErNaLly7HF27FSOH4UMAWr6pjisH8SE -----END CERTIFICATE----- America Online Root Certification Authority 1 ============================================= -----BEGIN CERTIFICATE----- MIIDpDCCAoygAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEc MBoGA1UEChMTQW1lcmljYSBPbmxpbmUgSW5jLjE2MDQGA1UEAxMtQW1lcmljYSBP bmxpbmUgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAxMB4XDTAyMDUyODA2 MDAwMFoXDTM3MTExOTIwNDMwMFowYzELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0Ft ZXJpY2EgT25saW5lIEluYy4xNjA0BgNVBAMTLUFtZXJpY2EgT25saW5lIFJvb3Qg Q2VydGlmaWNhdGlvbiBBdXRob3JpdHkgMTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAKgv6KRpBgNHw+kqmP8ZonCaxlCyfqXfaE0bfA+2l2h9LaaLl+lk hsmj76CGv2BlnEtUiMJIxUo5vxTjWVXlGbR0yLQFOVwWpeKVBeASrlmLojNoWBym 1BW32J/X3HGrfpq/m44zDyL9Hy7nBzbvYjnF3cu6JRQj3gzGPTzOggjmZj7aUTsW OqMFf6Dch9Wc/HKpoH145LcxVR5lu9RhsCFg7RAycsWSJR74kEoYeEfffjA3PlAb 2xzTa5qGUwew76wGePiEmf4hjUyAtgyC9mZweRrTT6PP8c9GsEsPPt2IYriMqQko O3rHl+Ee5fSfwMCuJKDIodkP1nsmgmkyPacCAwEAAaNjMGEwDwYDVR0TAQH/BAUw AwEB/zAdBgNVHQ4EFgQUAK3Zo/Z59m50qX8zPYEX10zPM94wHwYDVR0jBBgwFoAU AK3Zo/Z59m50qX8zPYEX10zPM94wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEB BQUAA4IBAQB8itEfGDeC4Liwo+1WlchiYZwFos3CYiZhzRAW18y0ZTTQEYqtqKkF Zu90821fnZmv9ov761KyBZiibyrFVL0lvV+uyIbqRizBs73B6UlwGBaXCBOMIOAb LjpHyx7kADCVW/RFo8AasAFOq73AI25jP4BKxQft3OJvx8Fi8eNy1gTIdGcL+oir oQHIb/AUr9KZzVGTfu0uOMe9zkZQPXLjeSWdm4grECDdpbgyn43gKd8hdIaC2y+C MMbHNYaz+ZZfRtsMRf3zUMNvxsNIrUam4SdHCh0Om7bCd39j8uB9Gr784N/Xx6ds sPmuujz9dLQR6FgNgLzTqIA6me11zEZ7 -----END CERTIFICATE----- America Online Root Certification Authority 2 ============================================= -----BEGIN CERTIFICATE----- MIIFpDCCA4ygAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEc MBoGA1UEChMTQW1lcmljYSBPbmxpbmUgSW5jLjE2MDQGA1UEAxMtQW1lcmljYSBP bmxpbmUgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAyMB4XDTAyMDUyODA2 MDAwMFoXDTM3MDkyOTE0MDgwMFowYzELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0Ft ZXJpY2EgT25saW5lIEluYy4xNjA0BgNVBAMTLUFtZXJpY2EgT25saW5lIFJvb3Qg Q2VydGlmaWNhdGlvbiBBdXRob3JpdHkgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIP ADCCAgoCggIBAMxBRR3pPU0Q9oyxQcngXssNt79Hc9PwVU3dxgz6sWYFas14tNwC 206B89enfHG8dWOgXeMHDEjsJcQDIPT/DjsS/5uN4cbVG7RtIuOx238hZK+GvFci KtZHgVdEglZTvYYUAQv8f3SkWq7xuhG1m1hagLQ3eAkzfDJHA1zEpYNI9FdWboE2 JxhP7JsowtS013wMPgwr38oE18aO6lhOqKSlGBxsRZijQdEt0sdtjRnxrXm3gT+9 BoInLRBYBbV4Bbkv2wxrkJB+FFk4u5QkE+XRnRTf04JNRvCAOVIyD+OEsnpD8l7e 
Xz8d3eOyG6ChKiMDbi4BFYdcpnV1x5dhvt6G3NRI270qv0pV2uh9UPu0gBe4lL8B PeraunzgWGcXuVjgiIZGZ2ydEEdYMtA1fHkqkKJaEBEjNa0vzORKW6fIJ/KD3l67 Xnfn6KVuY8INXWHQjNJsWiEOyiijzirplcdIz5ZvHZIlyMbGwcEMBawmxNJ10uEq Z8A9W6Wa6897GqidFEXlD6CaZd4vKL3Ob5Rmg0gp2OpljK+T2WSfVVcmv2/LNzGZ o2C7HK2JNDJiuEMhBnIMoVxtRsX6Kc8w3onccVvdtjc+31D1uAclJuW8tf48ArO3 +L5DwYcRlJ4jbBeKuIonDFRH8KmzwICMoCfrHRnjB453cMor9H124HhnAgMBAAGj YzBhMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFE1FwWg4u3OpaaEg5+31IqEj FNeeMB8GA1UdIwQYMBaAFE1FwWg4u3OpaaEg5+31IqEjFNeeMA4GA1UdDwEB/wQE AwIBhjANBgkqhkiG9w0BAQUFAAOCAgEAZ2sGuV9FOypLM7PmG2tZTiLMubekJcmn xPBUlgtk87FYT15R/LKXeydlwuXK5w0MJXti4/qftIe3RUavg6WXSIylvfEWK5t2 LHo1YGwRgJfMqZJS5ivmae2p+DYtLHe/YUjRYwu5W1LtGLBDQiKmsXeu3mnFzccc obGlHBD7GL4acN3Bkku+KVqdPzW+5X1R+FXgJXUjhx5c3LqdsKyzadsXg8n33gy8 CNyRnqjQ1xU3c6U1uPx+xURABsPr+CKAXEfOAuMRn0T//ZoyzH1kUQ7rVyZ2OuMe IjzCpjbdGe+n/BLzJsBZMYVMnNjP36TMzCmT/5RtdlwTCJfy7aULTd3oyWgOZtMA DjMSW7yV5TKQqLPGbIOtd+6Lfn6xqavT4fG2wLHqiMDn05DpKJKUe2h7lyoKZy2F AjgQ5ANh1NolNscIWC2hp1GvMApJ9aZphwctREZ2jirlmjvXGKL8nDgQzMY70rUX Om/9riW99XJZZLF0KjhfGEzfz3EEWjbUvy+ZnOjZurGV5gJLIaFb1cFPj65pbVPb AZO1XB4Y3WRayhgoPmMEEf0cjQAPuDffZ4qdZqkCapH/E8ovXYO8h5Ns3CRRFgQl Zvqz2cK6Kb6aSDiCmfS/O0oxGfm/jiEzFMpPVF/7zvuPcX/9XhmgD0uRuMRUvAaw RY8mkaKO/qk= -----END CERTIFICATE----- Visa eCommerce Root =================== -----BEGIN CERTIFICATE----- MIIDojCCAoqgAwIBAgIQE4Y1TR0/BvLB+WUF1ZAcYjANBgkqhkiG9w0BAQUFADBr MQswCQYDVQQGEwJVUzENMAsGA1UEChMEVklTQTEvMC0GA1UECxMmVmlzYSBJbnRl cm5hdGlvbmFsIFNlcnZpY2UgQXNzb2NpYXRpb24xHDAaBgNVBAMTE1Zpc2EgZUNv bW1lcmNlIFJvb3QwHhcNMDIwNjI2MDIxODM2WhcNMjIwNjI0MDAxNjEyWjBrMQsw CQYDVQQGEwJVUzENMAsGA1UEChMEVklTQTEvMC0GA1UECxMmVmlzYSBJbnRlcm5h dGlvbmFsIFNlcnZpY2UgQXNzb2NpYXRpb24xHDAaBgNVBAMTE1Zpc2EgZUNvbW1l cmNlIFJvb3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvV95WHm6h 2mCxlCfLF9sHP4CFT8icttD0b0/Pmdjh28JIXDqsOTPHH2qLJj0rNfVIsZHBAk4E lpF7sDPwsRROEW+1QK8bRaVK7362rPKgH1g/EkZgPI2h4H3PVz4zHvtH8aoVlwdV ZqW1LS7YgFmypw23RuwhY/81q6UCzyr0TP579ZRdhE2o8mCP2w4lPJ9zcc+U30rq 299yOIzzlr3xF7zSujtFWsan9sYXiwGd/BmoKoMWuDpI/k4+oKsGGelT84ATB+0t vz8KPFUgOSwsAGl0lUq8ILKpeeUYiZGo3BxN77t+Nwtd/jmliFKMAGzsGHxBvfaL dXe6YJ2E5/4tAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD AgEGMB0GA1UdDgQWBBQVOIMPPyw/cDMezUb+B4wg4NfDtzANBgkqhkiG9w0BAQUF AAOCAQEAX/FBfXxcCLkr4NWSR/pnXKUTwwMhmytMiUbPWU3J/qVAtmPN3XEolWcR zCSs00Rsca4BIGsDoo8Ytyk6feUWYFN4PMCvFYP3j1IzJL1kk5fui/fbGKhtcbP3 LBfQdCVp9/5rPJS+TUtBjE7ic9DjkCJzQ83z7+pzzkWKsKZJ/0x9nXGIxHYdkFsd 7v3M9+79YKWxehZx0RbQfBI8bGmX265fOZpwLwU8GUYEmSA20GBuYQa7FkKMcPcw ++DbZqMAAb3mLNqRX6BGi01qnD093QVG/na/oAo85ADmJ7f/hC3euiInlhBx6yLt 398znM/jra6O1I7mT1GvFpLgXPYHDw== -----END CERTIFICATE----- Certum Root CA ============== -----BEGIN CERTIFICATE----- MIIDDDCCAfSgAwIBAgIDAQAgMA0GCSqGSIb3DQEBBQUAMD4xCzAJBgNVBAYTAlBM MRswGQYDVQQKExJVbml6ZXRvIFNwLiB6IG8uby4xEjAQBgNVBAMTCUNlcnR1bSBD QTAeFw0wMjA2MTExMDQ2MzlaFw0yNzA2MTExMDQ2MzlaMD4xCzAJBgNVBAYTAlBM MRswGQYDVQQKExJVbml6ZXRvIFNwLiB6IG8uby4xEjAQBgNVBAMTCUNlcnR1bSBD QTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM6xwS7TT3zNJc4YPk/E jG+AanPIW1H4m9LcuwBcsaD8dQPugfCI7iNS6eYVM42sLQnFdvkrOYCJ5JdLkKWo ePhzQ3ukYbDYWMzhbGZ+nPMJXlVjhNWo7/OxLjBos8Q82KxujZlakE403Daaj4GI ULdtlkIJ89eVgw1BS7Bqa/j8D35in2fE7SZfECYPCE/wpFcozo+47UX2bu4lXapu Ob7kky/ZR6By6/qmW6/KUz/iDsaWVhFu9+lmqSbYf5VT7QqFiLpPKaVCjF62/IUg AKpoC6EahQGcxEZjgoi2IrHu/qpGWX7PNSzVttpd90gzFFS269lvzs2I1qsb2pY7 HVkCAwEAAaMTMBEwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEA uI3O7+cUus/usESSbLQ5PqKEbq24IXfS1HeCh+YgQYHu4vgRt2PRFze+GXYkHAQa TOs9qmdvLdTN/mUxcMUbpgIKumB7bVjCmkn+YzILa+M6wKyrO7Do0wlRjBCDxjTg 
xSvgGrZgFCdsMneMvLJymM/NzD+5yCRCFNZX/OYmQ6kd5YCQzgNUKD73P9P4Te1q CjqTE5s7FCMTY5w/0YcneeVMUeMBrYVdGjux1XMQpNPyvG5k9VpWkKjHDkx0Dy5x O/fIR/RpbxXyEV6DHpx8Uq79AtoSqFlnGNu8cN2bsWntgM6JQEhqDjXKKWYVIZQs 6GAqm4VKQPNriiTsBhYscw== -----END CERTIFICATE----- Comodo AAA Services root ======================== -----BEGIN CERTIFICATE----- MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe 3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4 YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2 G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3 smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg== -----END CERTIFICATE----- Comodo Secure Services root =========================== -----BEGIN CERTIFICATE----- MIIEPzCCAyegAwIBAgIBATANBgkqhkiG9w0BAQUFADB+MQswCQYDVQQGEwJHQjEb MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEkMCIGA1UEAwwbU2VjdXJlIENlcnRp ZmljYXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVow fjELMAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G A1UEBwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxJDAiBgNV BAMMG1NlY3VyZSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBAMBxM4KK0HDrc4eCQNUd5MvJDkKQ+d40uaG6EfQlhfPM cm3ye5drswfxdySRXyWP9nQ95IDC+DwN879A6vfIUtFyb+/Iq0G4bi4XKpVpDM3S HpR7LZQdqnXXs5jLrLxkU0C8j6ysNstcrbvd4JQX7NFc0L/vpZXJkMWwrPsbQ996 CF23uPJAGysnnlDOXmWCiIxe004MeuoIkbY2qitC++rCoznl2yY4rYsK7hljxxwk 3wN42ubqwUcaCwtGCd0C/N7Lh1/XMGNooa7cMqG6vv5Eq2i2pRcV/b3Vp6ea5EQz 6YiO/O1R65NxTq0B50SOqy3LqP4BSUjwwN3HaNiS/j0CAwEAAaOBxzCBxDAdBgNV HQ4EFgQUPNiTiMLAggnMAZkGkyDpnnAJY08wDgYDVR0PAQH/BAQDAgEGMA8GA1Ud EwEB/wQFMAMBAf8wgYEGA1UdHwR6MHgwO6A5oDeGNWh0dHA6Ly9jcmwuY29tb2Rv Y2EuY29tL1NlY3VyZUNlcnRpZmljYXRlU2VydmljZXMuY3JsMDmgN6A1hjNodHRw Oi8vY3JsLmNvbW9kby5uZXQvU2VjdXJlQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmww DQYJKoZIhvcNAQEFBQADggEBAIcBbSMdflsXfcFhMs+P5/OKlFlm4J4oqF7Tt/Q0 5qo5spcWxYJvMqTpjOev/e/C6LlLqqP05tqNZSH7uoDrJiiFGv45jN5bBAS0VPmj Z55B+glSzAVIqMk/IQQezkhr/IXownuvf7fM+F86/TXGDe+X3EyrEeFryzHRbPtI gKvcnDe4IRRLDXE97IMzbtFuMhbsmMcWi1mmNKsFVy2T96oTy9IT4rcuO81rUBcJ aD61JlfutuC23bkpgHl9j6PwpCikFcSF9CfUa7/lXORlAnZUtOM3ZiTTGWHIUhDl izeauan5Hb/qmZJhlv8BzaFfDbxxvA6sCx1HRR3B7Hzs/Sk= -----END CERTIFICATE----- Comodo Trusted Services root ============================ -----BEGIN CERTIFICATE----- MIIEQzCCAyugAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJHQjEb 
MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDElMCMGA1UEAwwcVHJ1c3RlZCBDZXJ0 aWZpY2F0ZSBTZXJ2aWNlczAeFw0wNDAxMDEwMDAwMDBaFw0yODEyMzEyMzU5NTla MH8xCzAJBgNVBAYTAkdCMRswGQYDVQQIDBJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO BgNVBAcMB1NhbGZvcmQxGjAYBgNVBAoMEUNvbW9kbyBDQSBMaW1pdGVkMSUwIwYD VQQDDBxUcnVzdGVkIENlcnRpZmljYXRlIFNlcnZpY2VzMIIBIjANBgkqhkiG9w0B AQEFAAOCAQ8AMIIBCgKCAQEA33FvNlhTWvI2VFeAxHQIIO0Yfyod5jWaHiWsnOWW fnJSoBVC21ndZHoa0Lh73TkVvFVIxO06AOoxEbrycXQaZ7jPM8yoMa+j49d/vzMt TGo87IvDktJTdyR0nAducPy9C1t2ul/y/9c3S0pgePfw+spwtOpZqqPOSC+pw7IL fhdyFgymBwwbOM/JYrc/oJOlh0Hyt3BAd9i+FHzjqMB6juljatEPmsbS9Is6FARW 1O24zG71++IsWL1/T2sr92AkWCTOJu80kTrV44HQsvAEAtdbtz6SrGsSivnkBbA7 kUlcsutT6vifR4buv5XAwAaf0lteERv0xwQ1KdJVXOTt6wIDAQABo4HJMIHGMB0G A1UdDgQWBBTFe1i97doladL3WRaoszLAeydb9DAOBgNVHQ8BAf8EBAMCAQYwDwYD VR0TAQH/BAUwAwEB/zCBgwYDVR0fBHwwejA8oDqgOIY2aHR0cDovL2NybC5jb21v ZG9jYS5jb20vVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMuY3JsMDqgOKA2hjRo dHRwOi8vY3JsLmNvbW9kby5uZXQvVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMu Y3JsMA0GCSqGSIb3DQEBBQUAA4IBAQDIk4E7ibSvuIQSTI3S8NtwuleGFTQQuS9/ HrCoiWChisJ3DFBKmwCL2Iv0QeLQg4pKHBQGsKNoBXAxMKdTmw7pSqBYaWcOrp32 pSxBvzwGa+RZzG0Q8ZZvH9/0BAKkn0U+yNj6NkZEUD+Cl5EfKNsYEYwq5GWDVxIS jBc/lDb+XbDABHcTuPQV1T84zJQ6VdCsmPW6AF/ghhmBeC8owH7TzEIK9a5QoNE+ xqFx7D+gIIxmOom0jtTYsU0lR+4viMi14QVFwL4Ucd56/Y57fU0IlqUSc/Atyjcn dBInTMu2l+nZrghtWjlA3QVHdWpaIbOjGM9O9y5Xt5hwXsjEeLBi -----END CERTIFICATE----- QuoVadis Root CA ================ -----BEGIN CERTIFICATE----- MIIF0DCCBLigAwIBAgIEOrZQizANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJC TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDElMCMGA1UECxMcUm9vdCBDZXJ0 aWZpY2F0aW9uIEF1dGhvcml0eTEuMCwGA1UEAxMlUXVvVmFkaXMgUm9vdCBDZXJ0 aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wMTAzMTkxODMzMzNaFw0yMTAzMTcxODMz MzNaMH8xCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMSUw IwYDVQQLExxSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MS4wLAYDVQQDEyVR dW9WYWRpcyBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG 9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv2G1lVO6V/z68mcLOhrfEYBklbTRvM16z/Yp li4kVEAkOPcahdxYTMukJ0KX0J+DisPkBgNbAKVRHnAEdOLB1Dqr1607BxgFjv2D rOpm2RgbaIr1VxqYuvXtdj182d6UajtLF8HVj71lODqV0D1VNk7feVcxKh7YWWVJ WCCYfqtffp/p1k3sg3Spx2zY7ilKhSoGFPlU5tPaZQeLYzcS19Dsw3sgQUSj7cug F+FxZc4dZjH3dgEZyH0DWLaVSR2mEiboxgx24ONmy+pdpibu5cxfvWenAScOospU xbF6lR1xHkopigPcakXBpBlebzbNw6Kwt/5cOOJSvPhEQ+aQuwIDAQABo4ICUjCC Ak4wPQYIKwYBBQUHAQEEMTAvMC0GCCsGAQUFBzABhiFodHRwczovL29jc3AucXVv dmFkaXNvZmZzaG9yZS5jb20wDwYDVR0TAQH/BAUwAwEB/zCCARoGA1UdIASCAREw ggENMIIBCQYJKwYBBAG+WAABMIH7MIHUBggrBgEFBQcCAjCBxxqBxFJlbGlhbmNl IG9uIHRoZSBRdW9WYWRpcyBSb290IENlcnRpZmljYXRlIGJ5IGFueSBwYXJ0eSBh c3N1bWVzIGFjY2VwdGFuY2Ugb2YgdGhlIHRoZW4gYXBwbGljYWJsZSBzdGFuZGFy ZCB0ZXJtcyBhbmQgY29uZGl0aW9ucyBvZiB1c2UsIGNlcnRpZmljYXRpb24gcHJh Y3RpY2VzLCBhbmQgdGhlIFF1b1ZhZGlzIENlcnRpZmljYXRlIFBvbGljeS4wIgYI KwYBBQUHAgEWFmh0dHA6Ly93d3cucXVvdmFkaXMuYm0wHQYDVR0OBBYEFItLbe3T KbkGGew5Oanwl4Rqy+/fMIGuBgNVHSMEgaYwgaOAFItLbe3TKbkGGew5Oanwl4Rq y+/foYGEpIGBMH8xCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1p dGVkMSUwIwYDVQQLExxSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MS4wLAYD VQQDEyVRdW9WYWRpcyBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggQ6tlCL MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQUFAAOCAQEAitQUtf70mpKnGdSk fnIYj9lofFIk3WdvOXrEql494liwTXCYhGHoG+NpGA7O+0dQoE7/8CQfvbLO9Sf8 7C9TqnN7Az10buYWnuulLsS/VidQK2K6vkscPFVcQR0kvoIgR13VRH56FmjffU1R cHhXHTMe/QKZnAzNCgVPx7uOpHX6Sm2xgI4JVrmcGmD+XcHXetwReNDWXcG31a0y mQM6isxUJTkxgXsTIlG6Rmyhu576BGxJJnSP0nPrzDCi5upZIof4l/UO/erMkqQW xFIY6iHOsfHmhIHluqmGKPJDWl0Snawe2ajlCmqnf6CHKc/yiU3U7MXi5nrQNiOK SnQ2+Q== -----END 
CERTIFICATE----- QuoVadis Root CA 2 ================== -----BEGIN CERTIFICATE----- MIIFtzCCA5+gAwIBAgICBQkwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv b3QgQ0EgMjAeFw0wNjExMjQxODI3MDBaFw0zMTExMjQxODIzMzNaMEUxCzAJBgNV BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W YWRpcyBSb290IENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCa GMpLlA0ALa8DKYrwD4HIrkwZhR0In6spRIXzL4GtMh6QRr+jhiYaHv5+HBg6XJxg Fyo6dIMzMH1hVBHL7avg5tKifvVrbxi3Cgst/ek+7wrGsxDp3MJGF/hd/aTa/55J WpzmM+Yklvc/ulsrHHo1wtZn/qtmUIttKGAr79dgw8eTvI02kfN/+NsRE8Scd3bB rrcCaoF6qUWD4gXmuVbBlDePSHFjIuwXZQeVikvfj8ZaCuWw419eaxGrDPmF60Tp +ARz8un+XJiM9XOva7R+zdRcAitMOeGylZUtQofX1bOQQ7dsE/He3fbE+Ik/0XX1 ksOR1YqI0JDs3G3eicJlcZaLDQP9nL9bFqyS2+r+eXyt66/3FsvbzSUr5R/7mp/i Ucw6UwxI5g69ybR2BlLmEROFcmMDBOAENisgGQLodKcftslWZvB1JdxnwQ5hYIiz PtGo/KPaHbDRsSNU30R2be1B2MGyIrZTHN81Hdyhdyox5C315eXbyOD/5YDXC2Og /zOhD7osFRXql7PSorW+8oyWHhqPHWykYTe5hnMz15eWniN9gqRMgeKh0bpnX5UH oycR7hYQe7xFSkyyBNKr79X9DFHOUGoIMfmR2gyPZFwDwzqLID9ujWc9Otb+fVuI yV77zGHcizN300QyNQliBJIWENieJ0f7OyHj+OsdWwIDAQABo4GwMIGtMA8GA1Ud EwEB/wQFMAMBAf8wCwYDVR0PBAQDAgEGMB0GA1UdDgQWBBQahGK8SEwzJQTU7tD2 A8QZRtGUazBuBgNVHSMEZzBlgBQahGK8SEwzJQTU7tD2A8QZRtGUa6FJpEcwRTEL MAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMT ElF1b1ZhZGlzIFJvb3QgQ0EgMoICBQkwDQYJKoZIhvcNAQEFBQADggIBAD4KFk2f BluornFdLwUvZ+YTRYPENvbzwCYMDbVHZF34tHLJRqUDGCdViXh9duqWNIAXINzn g/iN/Ae42l9NLmeyhP3ZRPx3UIHmfLTJDQtyU/h2BwdBR5YM++CCJpNVjP4iH2Bl fF/nJrP3MpCYUNQ3cVX2kiF495V5+vgtJodmVjB3pjd4M1IQWK4/YY7yarHvGH5K WWPKjaJW1acvvFYfzznB4vsKqBUsfU16Y8Zsl0Q80m/DShcK+JDSV6IZUaUtl0Ha B0+pUNqQjZRG4T7wlP0QADj1O+hA4bRuVhogzG9Yje0uRY/W6ZM/57Es3zrWIozc hLsib9D45MY56QSIPMO661V6bYCZJPVsAfv4l7CUW+v90m/xd2gNNWQjrLhVoQPR TUIZ3Ph1WVaj+ahJefivDrkRoHy3au000LYmYjgahwz46P0u05B/B5EqHdZ+XIWD mbA4CD/pXvk1B+TJYm5Xf6dQlfe6yJvmjqIBxdZmv3lh8zwc4bmCXF2gw+nYSL0Z ohEUGW6yhhtoPkg3Goi3XZZenMfvJ2II4pEZXNLxId26F0KCl3GBUzGpn/Z9Yr9y 4aOTHcyKJloJONDO1w2AFrR4pTqHTI2KpdVGl/IsELm8VCLAAVBpQ570su9t+Oza 8eOx79+Rj1QqCyXBJhnEUhAFZdWCEOrCMc0u -----END CERTIFICATE----- QuoVadis Root CA 3 ================== -----BEGIN CERTIFICATE----- MIIGnTCCBIWgAwIBAgICBcYwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv b3QgQ0EgMzAeFw0wNjExMjQxOTExMjNaFw0zMTExMjQxOTA2NDRaMEUxCzAJBgNV BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W YWRpcyBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDM V0IWVJzmmNPTTe7+7cefQzlKZbPoFog02w1ZkXTPkrgEQK0CSzGrvI2RaNggDhoB 4hp7Thdd4oq3P5kazethq8Jlph+3t723j/z9cI8LoGe+AaJZz3HmDyl2/7FWeUUr H556VOijKTVopAFPD6QuN+8bv+OPEKhyq1hX51SGyMnzW9os2l2ObjyjPtr7guXd 8lyyBTNvijbO0BNO/79KDDRMpsMhvVAEVeuxu537RR5kFd5VAYwCdrXLoT9Cabwv vWhDFlaJKjdhkf2mrk7AyxRllDdLkgbvBNDInIjbC3uBr7E9KsRlOni27tyAsdLT mZw67mtaa7ONt9XOnMK+pUsvFrGeaDsGb659n/je7Mwpp5ijJUMv7/FfJuGITfhe btfZFG4ZM2mnO4SJk8RTVROhUXhA+LjJou57ulJCg54U7QVSWllWp5f8nT8KKdjc T5EOE7zelaTfi5m+rJsziO+1ga8bxiJTyPbH7pcUsMV8eFLI8M5ud2CEpukqdiDt WAEXMJPpGovgc2PZapKUSU60rUqFxKMiMPwJ7Wgic6aIDFUhWMXhOp8q3crhkODZ c6tsgLjoC2SToJyMGf+z0gzskSaHirOi4XCPLArlzW1oUevaPwV/izLmE1xr/l9A 4iLItLRkT9a6fUg+qGkM17uGcclzuD87nSVL2v9A6wIDAQABo4IBlTCCAZEwDwYD VR0TAQH/BAUwAwEB/zCB4QYDVR0gBIHZMIHWMIHTBgkrBgEEAb5YAAMwgcUwgZMG CCsGAQUFBwICMIGGGoGDQW55IHVzZSBvZiB0aGlzIENlcnRpZmljYXRlIGNvbnN0 aXR1dGVzIGFjY2VwdGFuY2Ugb2YgdGhlIFF1b1ZhZGlzIFJvb3QgQ0EgMyBDZXJ0 aWZpY2F0ZSBQb2xpY3kgLyBDZXJ0aWZpY2F0aW9uIFByYWN0aWNlIFN0YXRlbWVu dC4wLQYIKwYBBQUHAgEWIWh0dHA6Ly93d3cucXVvdmFkaXNnbG9iYWwuY29tL2Nw 
czALBgNVHQ8EBAMCAQYwHQYDVR0OBBYEFPLAE+CCQz777i9nMpY1XNu4ywLQMG4G A1UdIwRnMGWAFPLAE+CCQz777i9nMpY1XNu4ywLQoUmkRzBFMQswCQYDVQQGEwJC TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDEbMBkGA1UEAxMSUXVvVmFkaXMg Um9vdCBDQSAzggIFxjANBgkqhkiG9w0BAQUFAAOCAgEAT62gLEz6wPJv92ZVqyM0 7ucp2sNbtrCD2dDQ4iH782CnO11gUyeim/YIIirnv6By5ZwkajGxkHon24QRiSem d1o417+shvzuXYO8BsbRd2sPbSQvS3pspweWyuOEn62Iix2rFo1bZhfZFvSLgNLd +LJ2w/w4E6oM3kJpK27zPOuAJ9v1pkQNn1pVWQvVDVJIxa6f8i+AxeoyUDUSly7B 4f/xI4hROJ/yZlZ25w9Rl6VSDE1JUZU2Pb+iSwwQHYaZTKrzchGT5Or2m9qoXadN t54CrnMAyNojA+j56hl0YgCUyyIgvpSnWbWCar6ZeXqp8kokUvd0/bpO5qgdAm6x DYBEwa7TIzdfu4V8K5Iu6H6li92Z4b8nby1dqnuH/grdS/yO9SbkbnBCbjPsMZ57 k8HkyWkaPcBrTiJt7qtYTcbQQcEr6k8Sh17rRdhs9ZgC06DYVYoGmRmioHfRMJ6s zHXug/WwYjnPbFfiTNKRCw51KBuav/0aQ/HKd/s7j2G4aSgWQgRecCocIdiP4b0j Wy10QJLZYxkNc91pvGJHvOB0K7Lrfb5BG7XARsWhIstfTsEokt4YutUqKLsRixeT mJlglFwjz1onl14LBQaTNx47aTbrqZ5hHY8y2o4M1nQ+ewkk2gF3R8Q7zTSMmfXK 4SVhM7JZG+Ju1zdXtg2pEto= -----END CERTIFICATE----- Security Communication Root CA ============================== -----BEGIN CERTIFICATE----- MIIDWjCCAkKgAwIBAgIBADANBgkqhkiG9w0BAQUFADBQMQswCQYDVQQGEwJKUDEY MBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYDVQQLEx5TZWN1cml0eSBDb21t dW5pY2F0aW9uIFJvb3RDQTEwHhcNMDMwOTMwMDQyMDQ5WhcNMjMwOTMwMDQyMDQ5 WjBQMQswCQYDVQQGEwJKUDEYMBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYD VQQLEx5TZWN1cml0eSBDb21tdW5pY2F0aW9uIFJvb3RDQTEwggEiMA0GCSqGSIb3 DQEBAQUAA4IBDwAwggEKAoIBAQCzs/5/022x7xZ8V6UMbXaKL0u/ZPtM7orw8yl8 9f/uKuDp6bpbZCKamm8sOiZpUQWZJtzVHGpxxpp9Hp3dfGzGjGdnSj74cbAZJ6kJ DKaVv0uMDPpVmDvY6CKhS3E4eayXkmmziX7qIWgGmBSWh9JhNrxtJ1aeV+7AwFb9 Ms+k2Y7CI9eNqPPYJayX5HA49LY6tJ07lyZDo6G8SVlyTCMwhwFY9k6+HGhWZq/N QV3Is00qVUarH9oe4kA92819uZKAnDfdDJZkndwi92SL32HeFZRSFaB9UslLqCHJ xrHty8OVYNEP8Ktw+N/LTX7s1vqr2b1/VPKl6Xn62dZ2JChzAgMBAAGjPzA9MB0G A1UdDgQWBBSgc0mZaNyFW2XjmygvV5+9M7wHSDALBgNVHQ8EBAMCAQYwDwYDVR0T AQH/BAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAaECpqLvkT115swW1F7NgE+vG kl3g0dNq/vu+m22/xwVtWSDEHPC32oRYAmP6SBbvT6UL90qY8j+eG61Ha2POCEfr Uj94nK9NrvjVT8+amCoQQTlSxN3Zmw7vkwGusi7KaEIkQmywszo+zenaSMQVy+n5 Bw+SUEmK3TGXX8npN6o7WWWXlDLJs58+OmJYxUmtYg5xpTKqL8aJdkNAExNnPaJU JRDL8Try2frbSVa7pv6nQTXD4IhhyYjH3zYQIphZ6rBK+1YWc26sTfcioU+tHXot RSflMMFe8toTyyVCUZVHA4xsIcx0Qu1T/zOLjw9XARYvz6buyXAiFL39vmwLAw== -----END CERTIFICATE----- Sonera Class 1 Root CA ====================== -----BEGIN CERTIFICATE----- MIIDIDCCAgigAwIBAgIBJDANBgkqhkiG9w0BAQUFADA5MQswCQYDVQQGEwJGSTEP MA0GA1UEChMGU29uZXJhMRkwFwYDVQQDExBTb25lcmEgQ2xhc3MxIENBMB4XDTAx MDQwNjEwNDkxM1oXDTIxMDQwNjEwNDkxM1owOTELMAkGA1UEBhMCRkkxDzANBgNV BAoTBlNvbmVyYTEZMBcGA1UEAxMQU29uZXJhIENsYXNzMSBDQTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBALWJHytPZwp5/8Ue+H887dF+2rDNbS82rDTG 29lkFwhjMDMiikzujrsPDUJVyZ0upe/3p4zDq7mXy47vPxVnqIJyY1MPQYx9EJUk oVqlBvqSV536pQHydekfvFYmUk54GWVYVQNYwBSujHxVX3BbdyMGNpfzJLWaRpXk 3w0LBUXl0fIdgrvGE+D+qnr9aTCU89JFhfzyMlsy3uhsXR/LpCJ0sICOXZT3BgBL qdReLjVQCfOAl/QMF6452F/NM8EcyonCIvdFEu1eEpOdY6uCLrnrQkFEy0oaAIIN nvmLVz5MxxftLItyM19yejhW1ebZrgUaHXVFsculJRwSVzb9IjcCAwEAAaMzMDEw DwYDVR0TAQH/BAUwAwEB/zARBgNVHQ4ECgQIR+IMi/ZTiFIwCwYDVR0PBAQDAgEG MA0GCSqGSIb3DQEBBQUAA4IBAQCLGrLJXWG04bkruVPRsoWdd44W7hE928Jj2VuX ZfsSZ9gqXLar5V7DtxYvyOirHYr9qxp81V9jz9yw3Xe5qObSIjiHBxTZ/75Wtf0H DjxVyhbMp6Z3N/vbXB9OWQaHowND9Rart4S9Tu+fMTfwRvFAttEMpWT4Y14h21VO TzF2nBBhjrZTOqMRvq9tfB69ri3iDGnHhVNoomG6xT60eVR4ngrHAr5i0RGCS2Uv kVrCqIexVmiUefkl98HVrhq4uz2PqYo4Ffdz0Fpg0YCw8NzVUM1O7pJIae2yIx4w zMiUyLb1O4Z/P6Yun/Y+LLWSlj7fLJOK/4GMDw9ZIRlXvVWa -----END CERTIFICATE----- Sonera Class 2 Root CA ====================== -----BEGIN CERTIFICATE----- 
MIIDIDCCAgigAwIBAgIBHTANBgkqhkiG9w0BAQUFADA5MQswCQYDVQQGEwJGSTEP MA0GA1UEChMGU29uZXJhMRkwFwYDVQQDExBTb25lcmEgQ2xhc3MyIENBMB4XDTAx MDQwNjA3Mjk0MFoXDTIxMDQwNjA3Mjk0MFowOTELMAkGA1UEBhMCRkkxDzANBgNV BAoTBlNvbmVyYTEZMBcGA1UEAxMQU29uZXJhIENsYXNzMiBDQTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAJAXSjWdyvANlsdE+hY3/Ei9vX+ALTU74W+o Z6m/AxxNjG8yR9VBaKQTBME1DJqEQ/xcHf+Js+gXGM2RX/uJ4+q/Tl18GybTdXnt 5oTjV+WtKcT0OijnpXuENmmz/V52vaMtmdOQTiMofRhj8VQ7Jp12W5dCsv+u8E7s 3TmVToMGf+dJQMjFAbJUWmYdPfz56TwKnoG4cPABi+QjVHzIrviQHgCWctRUz2Ej vOr7nQKV0ba5cTppCD8PtOFCx4j1P5iop7oc4HFx71hXgVB6XGt0Rg6DA5jDjqhu 8nYybieDwnPz3BjotJPqdURrBGAgcVeHnfO+oJAjPYok4doh28MCAwEAAaMzMDEw DwYDVR0TAQH/BAUwAwEB/zARBgNVHQ4ECgQISqCqWITTXjwwCwYDVR0PBAQDAgEG MA0GCSqGSIb3DQEBBQUAA4IBAQBazof5FnIVV0sd2ZvnoiYw7JNn39Yt0jSv9zil zqsWuasvfDXLrNAPtEwr/IDva4yRXzZ299uzGxnq9LIR/WFxRL8oszodv7ND6J+/ 3DEIcbCdjdY0RzKQxmUk96BKfARzjzlvF4xytb1LyHr4e4PDKE6cCepnP7JnBBvD FNr450kkkdAdavphOe9r5yF1BgfYErQhIHBCcYHaPJo2vqZbDWpsmh+Re/n570K6 Tk6ezAyNlNzZRZxe7EJQY670XcSxEtzKO6gunRRaBXW37Ndj4ro1tgQIkejanZz2 ZrUYrAqmVCY0M9IbwdR/GjqOC6oybtv8TyWf2TLHllpwrN9M -----END CERTIFICATE----- Staat der Nederlanden Root CA ============================= -----BEGIN CERTIFICATE----- MIIDujCCAqKgAwIBAgIEAJiWijANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJO TDEeMBwGA1UEChMVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSYwJAYDVQQDEx1TdGFh dCBkZXIgTmVkZXJsYW5kZW4gUm9vdCBDQTAeFw0wMjEyMTcwOTIzNDlaFw0xNTEy MTYwOTE1MzhaMFUxCzAJBgNVBAYTAk5MMR4wHAYDVQQKExVTdGFhdCBkZXIgTmVk ZXJsYW5kZW4xJjAkBgNVBAMTHVN0YWF0IGRlciBOZWRlcmxhbmRlbiBSb290IENB MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmNK1URF6gaYUmHFtvszn ExvWJw56s2oYHLZhWtVhCb/ekBPHZ+7d89rFDBKeNVU+LCeIQGv33N0iYfXCxw71 9tV2U02PjLwYdjeFnejKScfST5gTCaI+Ioicf9byEGW07l8Y1Rfj+MX94p2i71MO hXeiD+EwR+4A5zN9RGcaC1Hoi6CeUJhoNFIfLm0B8mBF8jHrqTFoKbt6QZ7GGX+U tFE5A3+y3qcym7RHjm+0Sq7lr7HcsBthvJly3uSJt3omXdozSVtSnA71iq3DuD3o BmrC1SoLbHuEvVYFy4ZlkuxEK7COudxwC0barbxjiDn622r+I/q85Ej0ZytqERAh SQIDAQABo4GRMIGOMAwGA1UdEwQFMAMBAf8wTwYDVR0gBEgwRjBEBgRVHSAAMDww OgYIKwYBBQUHAgEWLmh0dHA6Ly93d3cucGtpb3ZlcmhlaWQubmwvcG9saWNpZXMv cm9vdC1wb2xpY3kwDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSofeu8Y6R0E3QA 7Jbg0zTBLL9s+DANBgkqhkiG9w0BAQUFAAOCAQEABYSHVXQ2YcG70dTGFagTtJ+k /rvuFbQvBgwp8qiSpGEN/KtcCFtREytNwiphyPgJWPwtArI5fZlmgb9uXJVFIGzm eafR2Bwp/MIgJ1HI8XxdNGdphREwxgDS1/PTfLbwMVcoEoJz6TMvplW0C5GUR5z6 u3pCMuiufi3IvKwUv9kP2Vv8wfl6leF9fpb8cbDCTMjfRTTJzg3ynGQI0DvDKcWy 7ZAEwbEpkcUwb8GpcjPM/l0WFywRaed+/sWDCN+83CI6LiBpIzlWYGeQiy52OfsR iJf2fL1LuCAWZwWN4jvBcj+UlTfHXbme2JOhF4//DGYVwSR8MnwDHTuhWEUykw== -----END CERTIFICATE----- TDC Internet Root CA ==================== -----BEGIN CERTIFICATE----- MIIEKzCCAxOgAwIBAgIEOsylTDANBgkqhkiG9w0BAQUFADBDMQswCQYDVQQGEwJE SzEVMBMGA1UEChMMVERDIEludGVybmV0MR0wGwYDVQQLExRUREMgSW50ZXJuZXQg Um9vdCBDQTAeFw0wMTA0MDUxNjMzMTdaFw0yMTA0MDUxNzAzMTdaMEMxCzAJBgNV BAYTAkRLMRUwEwYDVQQKEwxUREMgSW50ZXJuZXQxHTAbBgNVBAsTFFREQyBJbnRl cm5ldCBSb290IENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxLhA vJHVYx/XmaCLDEAedLdInUaMArLgJF/wGROnN4NrXceO+YQwzho7+vvOi20jxsNu Zp+Jpd/gQlBn+h9sHvTQBda/ytZO5GhgbEaqHF1j4QeGDmUApy6mcca8uYGoOn0a 0vnRrEvLznWv3Hv6gXPU/Lq9QYjUdLP5Xjg6PEOo0pVOd20TDJ2PeAG3WiAfAzc1 4izbSysseLlJ28TQx5yc5IogCSEWVmb/Bexb4/DPqyQkXsN/cHoSxNK1EKC2IeGN eGlVRGn1ypYcNIUXJXfi9i8nmHj9eQY6otZaQ8H/7AQ77hPv01ha/5Lr7K7a8jcD R0G2l8ktCkEiu7vmpwIDAQABo4IBJTCCASEwEQYJYIZIAYb4QgEBBAQDAgAHMGUG A1UdHwReMFwwWqBYoFakVDBSMQswCQYDVQQGEwJESzEVMBMGA1UEChMMVERDIElu dGVybmV0MR0wGwYDVQQLExRUREMgSW50ZXJuZXQgUm9vdCBDQTENMAsGA1UEAxME Q1JMMTArBgNVHRAEJDAigA8yMDAxMDQwNTE2MzMxN1qBDzIwMjEwNDA1MTcwMzE3 
WjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAUbGQBx/2FbazI2p5QCIUItTxWqFAw HQYDVR0OBBYEFGxkAcf9hW2syNqeUAiFCLU8VqhQMAwGA1UdEwQFMAMBAf8wHQYJ KoZIhvZ9B0EABBAwDhsIVjUuMDo0LjADAgSQMA0GCSqGSIb3DQEBBQUAA4IBAQBO Q8zR3R0QGwZ/t6T609lN+yOfI1Rb5osvBCiLtSdtiaHsmGnc540mgwV5dOy0uaOX wTUA/RXaOYE6lTGQ3pfphqiZdwzlWqCE/xIWrG64jcN7ksKsLtB9KOy282A4aW8+ 2ARVPp7MVdK6/rtHBNcK2RYKNCn1WBPVT8+PVkuzHu7TmHnaCB4Mb7j4Fifvwm89 9qNLPg7kbWzbO0ESm70NRyN/PErQr8Cv9u8btRXE64PECV90i9kR+8JWsTz4cMo0 jUNAE4z9mQNUecYu6oah9jrUCbz0vGbMPVjQV0kK7iXiQe4T+Zs4NNEA9X7nlB38 aQNiuJkFBT1reBK9sG9l -----END CERTIFICATE----- TDC OCES Root CA ================ -----BEGIN CERTIFICATE----- MIIFGTCCBAGgAwIBAgIEPki9xDANBgkqhkiG9w0BAQUFADAxMQswCQYDVQQGEwJE SzEMMAoGA1UEChMDVERDMRQwEgYDVQQDEwtUREMgT0NFUyBDQTAeFw0wMzAyMTEw ODM5MzBaFw0zNzAyMTEwOTA5MzBaMDExCzAJBgNVBAYTAkRLMQwwCgYDVQQKEwNU REMxFDASBgNVBAMTC1REQyBPQ0VTIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A MIIBCgKCAQEArGL2YSCyz8DGhdfjeebM7fI5kqSXLmSjhFuHnEz9pPPEXyG9VhDr 2y5h7JNp46PMvZnDBfwGuMo2HP6QjklMxFaaL1a8z3sM8W9Hpg1DTeLpHTk0zY0s 2RKY+ePhwUp8hjjEqcRhiNJerxomTdXkoCJHhNlktxmW/OwZ5LKXJk5KTMuPJItU GBxIYXvViGjaXbXqzRowwYCDdlCqT9HU3Tjw7xb04QxQBr/q+3pJoSgrHPb8FTKj dGqPqcNiKXEx5TukYBdedObaE+3pHx8b0bJoc8YQNHVGEBDjkAB2QMuLt0MJIf+r TpPGWOmlgtt3xDqZsXKVSQTwtyv6e1mO3QIDAQABo4ICNzCCAjMwDwYDVR0TAQH/ BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwgewGA1UdIASB5DCB4TCB3gYIKoFQgSkB AQEwgdEwLwYIKwYBBQUHAgEWI2h0dHA6Ly93d3cuY2VydGlmaWthdC5kay9yZXBv c2l0b3J5MIGdBggrBgEFBQcCAjCBkDAKFgNUREMwAwIBARqBgUNlcnRpZmlrYXRl ciBmcmEgZGVubmUgQ0EgdWRzdGVkZXMgdW5kZXIgT0lEIDEuMi4yMDguMTY5LjEu MS4xLiBDZXJ0aWZpY2F0ZXMgZnJvbSB0aGlzIENBIGFyZSBpc3N1ZWQgdW5kZXIg T0lEIDEuMi4yMDguMTY5LjEuMS4xLjARBglghkgBhvhCAQEEBAMCAAcwgYEGA1Ud HwR6MHgwSKBGoESkQjBAMQswCQYDVQQGEwJESzEMMAoGA1UEChMDVERDMRQwEgYD VQQDEwtUREMgT0NFUyBDQTENMAsGA1UEAxMEQ1JMMTAsoCqgKIYmaHR0cDovL2Ny bC5vY2VzLmNlcnRpZmlrYXQuZGsvb2Nlcy5jcmwwKwYDVR0QBCQwIoAPMjAwMzAy MTEwODM5MzBagQ8yMDM3MDIxMTA5MDkzMFowHwYDVR0jBBgwFoAUYLWF7FZkfhIZ J2cdUBVLc647+RIwHQYDVR0OBBYEFGC1hexWZH4SGSdnHVAVS3OuO/kSMB0GCSqG SIb2fQdBAAQQMA4bCFY2LjA6NC4wAwIEkDANBgkqhkiG9w0BAQUFAAOCAQEACrom JkbTc6gJ82sLMJn9iuFXehHTuJTXCRBuo7E4A9G28kNBKWKnctj7fAXmMXAnVBhO inxO5dHKjHiIzxvTkIvmI/gLDjNDfZziChmPyQE+dF10yYscA+UYyAFMP8uXBV2Y caaYb7Z8vTd/vuGTJW1v8AqtFxjhA7wHKcitJuj4YfD9IQl+mo6paH1IYnK9AOoB mbgGglGBTvH1tJFUuSN6AJqfXY3gPGS5GhKSKseCRHI53OI8xthV9RVOyAUO28bQ YqbsFbS1AoLbrIyigfCbmTH1ICCoiGEKB5+U/NDXG8wuF/MEJ3Zn61SD/aSQfgY9 BKNDLdr8C2LqL19iUw== -----END CERTIFICATE----- UTN DATACorp SGC Root CA ======================== -----BEGIN CERTIFICATE----- MIIEXjCCA0agAwIBAgIQRL4Mi1AAIbQR0ypoBqmtaTANBgkqhkiG9w0BAQUFADCB kzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xGzAZBgNVBAMTElVUTiAtIERBVEFDb3Jw IFNHQzAeFw05OTA2MjQxODU3MjFaFw0xOTA2MjQxOTA2MzBaMIGTMQswCQYDVQQG EwJVUzELMAkGA1UECBMCVVQxFzAVBgNVBAcTDlNhbHQgTGFrZSBDaXR5MR4wHAYD VQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxITAfBgNVBAsTGGh0dHA6Ly93d3cu dXNlcnRydXN0LmNvbTEbMBkGA1UEAxMSVVROIC0gREFUQUNvcnAgU0dDMIIBIjAN BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3+5YEKIrblXEjr8uRgnn4AgPLit6 E5Qbvfa2gI5lBZMAHryv4g+OGQ0SR+ysraP6LnD43m77VkIVni5c7yPeIbkFdicZ D0/Ww5y0vpQZY/KmEQrrU0icvvIpOxboGqBMpsn0GFlowHDyUwDAXlCCpVZvNvlK 4ESGoE1O1kduSUrLZ9emxAW5jh70/P/N5zbgnAVssjMiFdC04MwXwLLA9P4yPykq lXvY8qdOD1R8oQ2AswkDwf9c3V6aPryuvEeKaq5xyh+xKrhfQgUL7EYw0XILyulW bfXv33i+Ybqypa4ETLyorGkVl73v67SMvzX41MPRKA5cOp9wGDMgd8SirwIDAQAB o4GrMIGoMAsGA1UdDwQEAwIBxjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRT MtGzz3/64PGgXYVOktKeRR20TzA9BgNVHR8ENjA0MDKgMKAuhixodHRwOi8vY3Js 
LnVzZXJ0cnVzdC5jb20vVVROLURBVEFDb3JwU0dDLmNybDAqBgNVHSUEIzAhBggr BgEFBQcDAQYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBMA0GCSqGSIb3DQEBBQUAA4IB AQAnNZcAiosovcYzMB4p/OL31ZjUQLtgyr+rFywJNn9Q+kHcrpY6CiM+iVnJowft Gzet/Hy+UUla3joKVAgWRcKZsYfNjGjgaQPpxE6YsjuMFrMOoAyYUJuTqXAJyCyj j98C5OBxOvG0I3KgqgHf35g+FFCgMSa9KOlaMCZ1+XtgHI3zzVAmbQQnmt/VDUVH KWss5nbZqSl9Mt3JNjy9rjXxEZ4du5A/EkdOjtd+D2JzHVImOBwYSf0wdJrE5SIv 2MCN7ZF6TACPcn9d2t0bi0Vr591pl6jFVkwPDPafepE39peC4N1xaf92P2BNPM/3 mfnGV/TJVTl4uix5yaaIK/QI -----END CERTIFICATE----- UTN USERFirst Email Root CA =========================== -----BEGIN CERTIFICATE----- MIIEojCCA4qgAwIBAgIQRL4Mi1AAJLQR0zYlJWfJiTANBgkqhkiG9w0BAQUFADCB rjELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xNjA0BgNVBAMTLVVUTi1VU0VSRmlyc3Qt Q2xpZW50IEF1dGhlbnRpY2F0aW9uIGFuZCBFbWFpbDAeFw05OTA3MDkxNzI4NTBa Fw0xOTA3MDkxNzM2NThaMIGuMQswCQYDVQQGEwJVUzELMAkGA1UECBMCVVQxFzAV BgNVBAcTDlNhbHQgTGFrZSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5l dHdvcmsxITAfBgNVBAsTGGh0dHA6Ly93d3cudXNlcnRydXN0LmNvbTE2MDQGA1UE AxMtVVROLVVTRVJGaXJzdC1DbGllbnQgQXV0aGVudGljYXRpb24gYW5kIEVtYWls MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsjmFpPJ9q0E7YkY3rs3B YHW8OWX5ShpHornMSMxqmNVNNRm5pELlzkniii8efNIxB8dOtINknS4p1aJkxIW9 hVE1eaROaJB7HHqkkqgX8pgV8pPMyaQylbsMTzC9mKALi+VuG6JG+ni8om+rWV6l L8/K2m2qL+usobNqqrcuZzWLeeEeaYji5kbNoKXqvgvOdjp6Dpvq/NonWz1zHyLm SGHGTPNpsaguG7bUMSAsvIKKjqQOpdeJQ/wWWq8dcdcRWdq6hw2v+vPhwvCkxWeM 1tZUOt4KpLoDd7NlyP0e03RiqhjKaJMeoYV+9Udly/hNVyh00jT/MLbu9mIwFIws 6wIDAQABo4G5MIG2MAsGA1UdDwQEAwIBxjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud DgQWBBSJgmd9xJ0mcABLtFBIfN49rgRufTBYBgNVHR8EUTBPME2gS6BJhkdodHRw Oi8vY3JsLnVzZXJ0cnVzdC5jb20vVVROLVVTRVJGaXJzdC1DbGllbnRBdXRoZW50 aWNhdGlvbmFuZEVtYWlsLmNybDAdBgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUH AwQwDQYJKoZIhvcNAQEFBQADggEBALFtYV2mGn98q0rkMPxTbyUkxsrt4jFcKw7u 7mFVbwQ+zznexRtJlOTrIEy05p5QLnLZjfWqo7NK2lYcYJeA3IKirUq9iiv/Cwm0 xtcgBEXkzYABurorbs6q15L+5K/r9CYdFip/bDCVNy8zEqx/3cfREYxRmLLQo5HQ rfafnoOTHh1CuEava2bwm3/q4wMC5QJRwarVNZ1yQAOJujEdxRBoUp7fooXFXAim eOZTT7Hot9MUnpOmw2TjrH5xzbyf6QMbzPvprDHBr3wVdAKZw7JHpsIyYdfHb0gk USeh1YdV8nuPmD0Wnu51tvjQjvLzxq4oW6fw8zYX/MMF08oDSlQ= -----END CERTIFICATE----- UTN USERFirst Hardware Root CA ============================== -----BEGIN CERTIFICATE----- MIIEdDCCA1ygAwIBAgIQRL4Mi1AAJLQR0zYq/mUK/TANBgkqhkiG9w0BAQUFADCB lzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xHzAdBgNVBAMTFlVUTi1VU0VSRmlyc3Qt SGFyZHdhcmUwHhcNOTkwNzA5MTgxMDQyWhcNMTkwNzA5MTgxOTIyWjCBlzELMAkG A1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2UgQ2l0eTEe MBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExhodHRwOi8v d3d3LnVzZXJ0cnVzdC5jb20xHzAdBgNVBAMTFlVUTi1VU0VSRmlyc3QtSGFyZHdh cmUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCx98M4P7Sof885glFn 0G2f0v9Y8+efK+wNiVSZuTiZFvfgIXlIwrthdBKWHTxqctU8EGc6Oe0rE81m65UJ M6Rsl7HoxuzBdXmcRl6Nq9Bq/bkqVRcQVLMZ8Jr28bFdtqdt++BxF2uiiPsA3/4a MXcMmgF6sTLjKwEHOG7DpV4jvEWbe1DByTCP2+UretNb+zNAHqDVmBe8i4fDidNd oI6yqqr2jmmIBsX6iSHzCJ1pLgkzmykNRg+MzEk0sGlRvfkGzWitZky8PqxhvQqI DsjfPe58BEydCl5rkdbux+0ojatNh4lz0G6k0B4WixThdkQDf2Os5M1JnMWS9Ksy oUhbAgMBAAGjgbkwgbYwCwYDVR0PBAQDAgHGMA8GA1UdEwEB/wQFMAMBAf8wHQYD VR0OBBYEFKFyXyYbKJhDlV0HN9WFlp1L0sNFMEQGA1UdHwQ9MDswOaA3oDWGM2h0 dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9VVE4tVVNFUkZpcnN0LUhhcmR3YXJlLmNy bDAxBgNVHSUEKjAoBggrBgEFBQcDAQYIKwYBBQUHAwUGCCsGAQUFBwMGBggrBgEF BQcDBzANBgkqhkiG9w0BAQUFAAOCAQEARxkP3nTGmZev/K0oXnWO6y1n7k57K9cM 
//bey1WiCuFMVGWTYGufEpytXoMs61quwOQt9ABjHbjAbPLPSbtNk28Gpgoiskli CE7/yMgUsogWXecB5BKV5UU0s4tpvc+0hY91UZ59Ojg6FEgSxvunOxqNDYJAB+gE CJChicsZUN/KHAG8HQQZexB2lzvukJDKxA4fFm517zP4029bHpbj4HR3dHuKom4t 3XbWOTCC8KucUvIqx69JXn7HaOWCgchqJ/kniCrVWFCVH/A7HFe7fRQ5YiuayZSS KqMiDP+JJn1fIytH1xUdqWqeUQ0qUZ6B+dQ7XnASfxAynB67nfhmqA== -----END CERTIFICATE----- UTN USERFirst Object Root CA ============================ -----BEGIN CERTIFICATE----- MIIEZjCCA06gAwIBAgIQRL4Mi1AAJLQR0zYt4LNfGzANBgkqhkiG9w0BAQUFADCB lTELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xHTAbBgNVBAMTFFVUTi1VU0VSRmlyc3Qt T2JqZWN0MB4XDTk5MDcwOTE4MzEyMFoXDTE5MDcwOTE4NDAzNlowgZUxCzAJBgNV BAYTAlVTMQswCQYDVQQIEwJVVDEXMBUGA1UEBxMOU2FsdCBMYWtlIENpdHkxHjAc BgNVBAoTFVRoZSBVU0VSVFJVU1QgTmV0d29yazEhMB8GA1UECxMYaHR0cDovL3d3 dy51c2VydHJ1c3QuY29tMR0wGwYDVQQDExRVVE4tVVNFUkZpcnN0LU9iamVjdDCC ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM6qgT+jo2F4qjEAVZURnicP HxzfOpuCaDDASmEd8S8O+r5596Uj71VRloTN2+O5bj4x2AogZ8f02b+U60cEPgLO KqJdhwQJ9jCdGIqXsqoc/EHSoTbL+z2RuufZcDX65OeQw5ujm9M89RKZd7G3CeBo 5hy485RjiGpq/gt2yb70IuRnuasaXnfBhQfdDWy/7gbHd2pBnqcP1/vulBe3/IW+ pKvEHDHd17bR5PDv3xaPslKT16HUiaEHLr/hARJCHhrh2JU022R5KP+6LhHC5ehb kkj7RwvCbNqtMoNB86XlQXD9ZZBt+vpRxPm9lisZBCzTbafc8H9vg2XiaquHhnUC AwEAAaOBrzCBrDALBgNVHQ8EBAMCAcYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E FgQU2u1kdBScFDyr3ZmpvVsoTYs8ydgwQgYDVR0fBDswOTA3oDWgM4YxaHR0cDov L2NybC51c2VydHJ1c3QuY29tL1VUTi1VU0VSRmlyc3QtT2JqZWN0LmNybDApBgNV HSUEIjAgBggrBgEFBQcDAwYIKwYBBQUHAwgGCisGAQQBgjcKAwQwDQYJKoZIhvcN AQEFBQADggEBAAgfUrE3RHjb/c652pWWmKpVZIC1WkDdIaXFwfNfLEzIR1pp6ujw NTX00CXzyKakh0q9G7FzCL3Uw8q2NbtZhncxzaeAFK4T7/yxSPlrJSUtUbYsbUXB mMiKVl0+7kNOPmsnjtA6S4ULX9Ptaqd1y9Fahy85dRNacrACgZ++8A+EVCBibGnU 4U3GDZlDAQ0Slox4nb9QorFEqmrPF3rPbw/U+CRVX/A0FklmPlBGyWNxODFiuGK5 81OtbLUrohKqGU8J2l7nk8aOFAj+8DCAGKCGhU3IfdeLA/5u1fedFqySLKAj5ZyR Uh+U3xeUc8OzwcFxBSAAeL0TUh2oPs0AH8g= -----END CERTIFICATE----- Camerfirma Chambers of Commerce Root ==================================== -----BEGIN CERTIFICATE----- MIIEvTCCA6WgAwIBAgIBADANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJFVTEn MCUGA1UEChMeQUMgQ2FtZXJmaXJtYSBTQSBDSUYgQTgyNzQzMjg3MSMwIQYDVQQL ExpodHRwOi8vd3d3LmNoYW1iZXJzaWduLm9yZzEiMCAGA1UEAxMZQ2hhbWJlcnMg b2YgQ29tbWVyY2UgUm9vdDAeFw0wMzA5MzAxNjEzNDNaFw0zNzA5MzAxNjEzNDRa MH8xCzAJBgNVBAYTAkVVMScwJQYDVQQKEx5BQyBDYW1lcmZpcm1hIFNBIENJRiBB ODI3NDMyODcxIzAhBgNVBAsTGmh0dHA6Ly93d3cuY2hhbWJlcnNpZ24ub3JnMSIw IAYDVQQDExlDaGFtYmVycyBvZiBDb21tZXJjZSBSb290MIIBIDANBgkqhkiG9w0B AQEFAAOCAQ0AMIIBCAKCAQEAtzZV5aVdGDDg2olUkfzIx1L4L1DZ77F1c2VHfRtb unXF/KGIJPov7coISjlUxFF6tdpg6jg8gbLL8bvZkSM/SAFwdakFKq0fcfPJVD0d BmpAPrMMhe5cG3nCYsS4No41XQEMIwRHNaqbYE6gZj3LJgqcQKH0XZi/caulAGgq 7YN6D6IUtdQis4CwPAxaUWktWBiP7Zme8a7ileb2R6jWDA+wWFjbw2Y3npuRVDM3 0pQcakjJyfKl2qUMI/cjDpwyVV5xnIQFUZot/eZOKjRa3spAN2cMVCFVd9oKDMyX roDclDZK9D7ONhMeU+SsTjoF7Nuucpw4i9A5O4kKPnf+dQIBA6OCAUQwggFAMBIG A1UdEwEB/wQIMAYBAf8CAQwwPAYDVR0fBDUwMzAxoC+gLYYraHR0cDovL2NybC5j aGFtYmVyc2lnbi5vcmcvY2hhbWJlcnNyb290LmNybDAdBgNVHQ4EFgQU45T1sU3p 26EpW1eLTXYGduHRooowDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIA BzAnBgNVHREEIDAegRxjaGFtYmVyc3Jvb3RAY2hhbWJlcnNpZ24ub3JnMCcGA1Ud EgQgMB6BHGNoYW1iZXJzcm9vdEBjaGFtYmVyc2lnbi5vcmcwWAYDVR0gBFEwTzBN BgsrBgEEAYGHLgoDATA+MDwGCCsGAQUFBwIBFjBodHRwOi8vY3BzLmNoYW1iZXJz aWduLm9yZy9jcHMvY2hhbWJlcnNyb290Lmh0bWwwDQYJKoZIhvcNAQEFBQADggEB AAxBl8IahsAifJ/7kPMa0QOx7xP5IV8EnNrJpY0nbJaHkb5BkAFyk+cefV/2icZd p0AJPaxJRUXcLo0waLIJuvvDL8y6C98/d3tGfToSJI6WjzwFCm/SlCgdbQzALogi 
1djPHRPH8EjX1wWnz8dHnjs8NMiAT9QUu/wNUPf6s+xCX6ndbcj0dc97wXImsQEc XCz9ek60AcUFV7nnPKoF2YjpB0ZBzu9Bga5Y34OirsrXdx/nADydb47kMgkdTXg0 eDQ8lJsm7U9xxhl6vSAiSFr+S30Dt+dYvsYyTnQeaN2oaFuzPu5ifdmA6Ap1erfu tGWaIZDgqtCYvDi1czyL+Nw= -----END CERTIFICATE----- Camerfirma Global Chambersign Root ================================== -----BEGIN CERTIFICATE----- MIIExTCCA62gAwIBAgIBADANBgkqhkiG9w0BAQUFADB9MQswCQYDVQQGEwJFVTEn MCUGA1UEChMeQUMgQ2FtZXJmaXJtYSBTQSBDSUYgQTgyNzQzMjg3MSMwIQYDVQQL ExpodHRwOi8vd3d3LmNoYW1iZXJzaWduLm9yZzEgMB4GA1UEAxMXR2xvYmFsIENo YW1iZXJzaWduIFJvb3QwHhcNMDMwOTMwMTYxNDE4WhcNMzcwOTMwMTYxNDE4WjB9 MQswCQYDVQQGEwJFVTEnMCUGA1UEChMeQUMgQ2FtZXJmaXJtYSBTQSBDSUYgQTgy NzQzMjg3MSMwIQYDVQQLExpodHRwOi8vd3d3LmNoYW1iZXJzaWduLm9yZzEgMB4G A1UEAxMXR2xvYmFsIENoYW1iZXJzaWduIFJvb3QwggEgMA0GCSqGSIb3DQEBAQUA A4IBDQAwggEIAoIBAQCicKLQn0KuWxfH2H3PFIP8T8mhtxOviteePgQKkotgVvq0 Mi+ITaFgCPS3CU6gSS9J1tPfnZdan5QEcOw/Wdm3zGaLmFIoCQLfxS+EjXqXd7/s QJ0lcqu1PzKY+7e3/HKE5TWH+VX6ox8Oby4o3Wmg2UIQxvi1RMLQQ3/bvOSiPGpV eAp3qdjqGTK3L/5cPxvusZjsyq16aUXjlg9V9ubtdepl6DJWk0aJqCWKZQbua795 B9Dxt6/tLE2Su8CoX6dnfQTyFQhwrJLWfQTSM/tMtgsL+xrJxI0DqX5c8lCrEqWh z0hQpe/SyBoT+rB/sYIcd2oPX9wLlY/vQ37mRQklAgEDo4IBUDCCAUwwEgYDVR0T AQH/BAgwBgEB/wIBDDA/BgNVHR8EODA2MDSgMqAwhi5odHRwOi8vY3JsLmNoYW1i ZXJzaWduLm9yZy9jaGFtYmVyc2lnbnJvb3QuY3JsMB0GA1UdDgQWBBRDnDafsJ4w TcbOX60Qq+UDpfqpFDAOBgNVHQ8BAf8EBAMCAQYwEQYJYIZIAYb4QgEBBAQDAgAH MCoGA1UdEQQjMCGBH2NoYW1iZXJzaWducm9vdEBjaGFtYmVyc2lnbi5vcmcwKgYD VR0SBCMwIYEfY2hhbWJlcnNpZ25yb290QGNoYW1iZXJzaWduLm9yZzBbBgNVHSAE VDBSMFAGCysGAQQBgYcuCgEBMEEwPwYIKwYBBQUHAgEWM2h0dHA6Ly9jcHMuY2hh bWJlcnNpZ24ub3JnL2Nwcy9jaGFtYmVyc2lnbnJvb3QuaHRtbDANBgkqhkiG9w0B AQUFAAOCAQEAPDtwkfkEVCeR4e3t/mh/YV3lQWVPMvEYBZRqHN4fcNs+ezICNLUM bKGKfKX0j//U2K0X1S0E0T9YgOKBWYi+wONGkyT+kL0mojAt6JcmVzWJdJYY9hXi ryQZVgICsroPFOrGimbBhkVVi76SvpykBMdJPJ7oKXqJ1/6v/2j1pReQvayZzKWG VwlnRtvWFsJG8eSpUPWP0ZIV018+xgBJOm5YstHRJw0lyDL4IBHNfTIzSJRUTN3c ecQwn+uOuFW114hcxWokPbLTBQNRxgfvzBRydD1ucs4YKIxKoHflCStFREest2d/ AYoFWpO+ocH/+OcOZ6RHSXZddZAa9SaP8A== -----END CERTIFICATE----- NetLock Qualified (Class QA) Root ================================= -----BEGIN CERTIFICATE----- MIIG0TCCBbmgAwIBAgIBezANBgkqhkiG9w0BAQUFADCByTELMAkGA1UEBhMCSFUx ETAPBgNVBAcTCEJ1ZGFwZXN0MScwJQYDVQQKEx5OZXRMb2NrIEhhbG96YXRiaXp0 b25zYWdpIEtmdC4xGjAYBgNVBAsTEVRhbnVzaXR2YW55a2lhZG9rMUIwQAYDVQQD EzlOZXRMb2NrIE1pbm9zaXRldHQgS296amVneXpvaSAoQ2xhc3MgUUEpIFRhbnVz aXR2YW55a2lhZG8xHjAcBgkqhkiG9w0BCQEWD2luZm9AbmV0bG9jay5odTAeFw0w MzAzMzAwMTQ3MTFaFw0yMjEyMTUwMTQ3MTFaMIHJMQswCQYDVQQGEwJIVTERMA8G A1UEBxMIQnVkYXBlc3QxJzAlBgNVBAoTHk5ldExvY2sgSGFsb3phdGJpenRvbnNh Z2kgS2Z0LjEaMBgGA1UECxMRVGFudXNpdHZhbnlraWFkb2sxQjBABgNVBAMTOU5l dExvY2sgTWlub3NpdGV0dCBLb3pqZWd5em9pIChDbGFzcyBRQSkgVGFudXNpdHZh bnlraWFkbzEeMBwGCSqGSIb3DQEJARYPaW5mb0BuZXRsb2NrLmh1MIIBIjANBgkq hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx1Ilstg91IRVCacbvWy5FPSKAtt2/Goq eKvld/Bu4IwjZ9ulZJm53QE+b+8tmjwi8F3JV6BVQX/yQ15YglMxZc4e8ia6AFQe r7C8HORSjKAyr7c3sVNnaHRnUPYtLmTeriZ539+Zhqurf4XsoPuAzPS4DB6TRWO5 3Lhbm+1bOdRfYrCnjnxmOCyqsQhjF2d9zL2z8cM/z1A57dEZgxXbhxInlrfa6uWd vLrqOU+L73Sa58XQ0uqGURzk/mQIKAR5BevKxXEOC++r6uwSEaEYBTJp0QwsGj0l mT+1fMptsK6ZmfoIYOcZwvK9UdPM0wKswREMgM6r3JSda6M5UzrWhQIDAMV9o4IC wDCCArwwEgYDVR0TAQH/BAgwBgEB/wIBBDAOBgNVHQ8BAf8EBAMCAQYwggJ1Bglg hkgBhvhCAQ0EggJmFoICYkZJR1lFTEVNISBFemVuIHRhbnVzaXR2YW55IGEgTmV0 TG9jayBLZnQuIE1pbm9zaXRldHQgU3pvbGdhbHRhdGFzaSBTemFiYWx5emF0YWJh biBsZWlydCBlbGphcmFzb2sgYWxhcGphbiBrZXN6dWx0LiBBIG1pbm9zaXRldHQg ZWxla3Ryb25pa3VzIGFsYWlyYXMgam9naGF0YXMgZXJ2ZW55ZXN1bGVzZW5laywg 
dmFsYW1pbnQgZWxmb2dhZGFzYW5hayBmZWx0ZXRlbGUgYSBNaW5vc2l0ZXR0IFN6 b2xnYWx0YXRhc2kgU3phYmFseXphdGJhbiwgYXogQWx0YWxhbm9zIFN6ZXJ6b2Rl c2kgRmVsdGV0ZWxla2JlbiBlbG9pcnQgZWxsZW5vcnplc2kgZWxqYXJhcyBtZWd0 ZXRlbGUuIEEgZG9rdW1lbnR1bW9rIG1lZ3RhbGFsaGF0b2sgYSBodHRwczovL3d3 dy5uZXRsb2NrLmh1L2RvY3MvIGNpbWVuIHZhZ3kga2VyaGV0b2sgYXogaW5mb0Bu ZXRsb2NrLm5ldCBlLW1haWwgY2ltZW4uIFdBUk5JTkchIFRoZSBpc3N1YW5jZSBh bmQgdGhlIHVzZSBvZiB0aGlzIGNlcnRpZmljYXRlIGFyZSBzdWJqZWN0IHRvIHRo ZSBOZXRMb2NrIFF1YWxpZmllZCBDUFMgYXZhaWxhYmxlIGF0IGh0dHBzOi8vd3d3 Lm5ldGxvY2suaHUvZG9jcy8gb3IgYnkgZS1tYWlsIGF0IGluZm9AbmV0bG9jay5u ZXQwHQYDVR0OBBYEFAlqYhaSsFq7VQ7LdTI6MuWyIckoMA0GCSqGSIb3DQEBBQUA A4IBAQCRalCc23iBmz+LQuM7/KbD7kPgz/PigDVJRXYC4uMvBcXxKufAQTPGtpvQ MznNwNuhrWw3AkxYQTvyl5LGSKjN5Yo5iWH5Upfpvfb5lHTocQ68d4bDBsxafEp+ NFAwLvt/MpqNPfMgW/hqyobzMUwsWYACff44yTB1HLdV47yfuqhthCgFdbOLDcCR VCHnpgu0mfVRQdzNo0ci2ccBgcTcR08m6h/t280NmPSjnLRzMkqWmf68f8glWPhY 83ZmiVSkpj7EUFy6iRiCdUgh0k8T6GB+B3bbELVR5qq5aKrN9p2QdRLqOBrKROi3 macqaJVmlaut74nLYKkGEsaUR+ko -----END CERTIFICATE----- NetLock Notary (Class A) Root ============================= -----BEGIN CERTIFICATE----- MIIGfTCCBWWgAwIBAgICAQMwDQYJKoZIhvcNAQEEBQAwga8xCzAJBgNVBAYTAkhV MRAwDgYDVQQIEwdIdW5nYXJ5MREwDwYDVQQHEwhCdWRhcGVzdDEnMCUGA1UEChMe TmV0TG9jayBIYWxvemF0Yml6dG9uc2FnaSBLZnQuMRowGAYDVQQLExFUYW51c2l0 dmFueWtpYWRvazE2MDQGA1UEAxMtTmV0TG9jayBLb3pqZWd5em9pIChDbGFzcyBB KSBUYW51c2l0dmFueWtpYWRvMB4XDTk5MDIyNDIzMTQ0N1oXDTE5MDIxOTIzMTQ0 N1owga8xCzAJBgNVBAYTAkhVMRAwDgYDVQQIEwdIdW5nYXJ5MREwDwYDVQQHEwhC dWRhcGVzdDEnMCUGA1UEChMeTmV0TG9jayBIYWxvemF0Yml6dG9uc2FnaSBLZnQu MRowGAYDVQQLExFUYW51c2l0dmFueWtpYWRvazE2MDQGA1UEAxMtTmV0TG9jayBL b3pqZWd5em9pIChDbGFzcyBBKSBUYW51c2l0dmFueWtpYWRvMIIBIjANBgkqhkiG 9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvHSMD7tM9DceqQWC2ObhbHDqeLVu0ThEDaiD zl3S1tWBxdRL51uUcCbbO51qTGL3cfNk1mE7PetzozfZz+qMkjvN9wfcZnSX9EUi 3fRc4L9t875lM+QVOr/bmJBVOMTtplVjC7B4BPTjbsE/jvxReB+SnoPC/tmwqcm8 WgD/qaiYdPv2LD4VOQ22BFWoDpggQrOxJa1+mm9dU7GrDPzr4PN6s6iz/0b2Y6LY Oph7tqyF/7AlT3Rj5xMHpQqPBffAZG9+pyeAlt7ULoZgx2srXnN7F+eRP2QM2Esi NCubMvJIH5+hCoR64sKtlz2O1cH5VqNQ6ca0+pii7pXmKgOM3wIDAQABo4ICnzCC ApswDgYDVR0PAQH/BAQDAgAGMBIGA1UdEwEB/wQIMAYBAf8CAQQwEQYJYIZIAYb4 QgEBBAQDAgAHMIICYAYJYIZIAYb4QgENBIICURaCAk1GSUdZRUxFTSEgRXplbiB0 YW51c2l0dmFueSBhIE5ldExvY2sgS2Z0LiBBbHRhbGFub3MgU3pvbGdhbHRhdGFz aSBGZWx0ZXRlbGVpYmVuIGxlaXJ0IGVsamFyYXNvayBhbGFwamFuIGtlc3p1bHQu IEEgaGl0ZWxlc2l0ZXMgZm9seWFtYXRhdCBhIE5ldExvY2sgS2Z0LiB0ZXJtZWtm ZWxlbG9zc2VnLWJpenRvc2l0YXNhIHZlZGkuIEEgZGlnaXRhbGlzIGFsYWlyYXMg ZWxmb2dhZGFzYW5hayBmZWx0ZXRlbGUgYXogZWxvaXJ0IGVsbGVub3J6ZXNpIGVs amFyYXMgbWVndGV0ZWxlLiBBeiBlbGphcmFzIGxlaXJhc2EgbWVndGFsYWxoYXRv IGEgTmV0TG9jayBLZnQuIEludGVybmV0IGhvbmxhcGphbiBhIGh0dHBzOi8vd3d3 Lm5ldGxvY2submV0L2RvY3MgY2ltZW4gdmFneSBrZXJoZXRvIGF6IGVsbGVub3J6 ZXNAbmV0bG9jay5uZXQgZS1tYWlsIGNpbWVuLiBJTVBPUlRBTlQhIFRoZSBpc3N1 YW5jZSBhbmQgdGhlIHVzZSBvZiB0aGlzIGNlcnRpZmljYXRlIGlzIHN1YmplY3Qg dG8gdGhlIE5ldExvY2sgQ1BTIGF2YWlsYWJsZSBhdCBodHRwczovL3d3dy5uZXRs b2NrLm5ldC9kb2NzIG9yIGJ5IGUtbWFpbCBhdCBjcHNAbmV0bG9jay5uZXQuMA0G CSqGSIb3DQEBBAUAA4IBAQBIJEb3ulZv+sgoA0BO5TE5ayZrU3/b39/zcT0mwBQO xmd7I6gMc90Bu8bKbjc5VdXHjFYgDigKDtIqpLBJUsY4B/6+CgmM0ZjPytoUMaFP 0jn8DxEsQ8Pdq5PHVT5HfBgaANzze9jyf1JsIPQLX2lS9O74silg6+NJMSEN1rUQ QeJBCWziGppWS3cC9qCbmieH6FUpccKQn0V4GuEVZD3QDtigdp+uxdAu6tYPVuxk f1qbFFgBJ34TUMdrKuZoPL9coAob4Q566eKAw+np9v1sEZ7Q5SgnK1QyQhSCdeZK 8CtmdWOMovsEPoMOmzbwGOQmIMOM8CgHrTwXZoi1/baI -----END CERTIFICATE----- NetLock Business (Class B) Root =============================== -----BEGIN CERTIFICATE----- 
MIIFSzCCBLSgAwIBAgIBaTANBgkqhkiG9w0BAQQFADCBmTELMAkGA1UEBhMCSFUx ETAPBgNVBAcTCEJ1ZGFwZXN0MScwJQYDVQQKEx5OZXRMb2NrIEhhbG96YXRiaXp0 b25zYWdpIEtmdC4xGjAYBgNVBAsTEVRhbnVzaXR2YW55a2lhZG9rMTIwMAYDVQQD EylOZXRMb2NrIFV6bGV0aSAoQ2xhc3MgQikgVGFudXNpdHZhbnlraWFkbzAeFw05 OTAyMjUxNDEwMjJaFw0xOTAyMjAxNDEwMjJaMIGZMQswCQYDVQQGEwJIVTERMA8G A1UEBxMIQnVkYXBlc3QxJzAlBgNVBAoTHk5ldExvY2sgSGFsb3phdGJpenRvbnNh Z2kgS2Z0LjEaMBgGA1UECxMRVGFudXNpdHZhbnlraWFkb2sxMjAwBgNVBAMTKU5l dExvY2sgVXpsZXRpIChDbGFzcyBCKSBUYW51c2l0dmFueWtpYWRvMIGfMA0GCSqG SIb3DQEBAQUAA4GNADCBiQKBgQCx6gTsIKAjwo84YM/HRrPVG/77uZmeBNwcf4xK gZjupNTKihe5In+DCnVMm8Bp2GQ5o+2So/1bXHQawEfKOml2mrriRBf8TKPV/riX iK+IA4kfpPIEPsgHC+b5sy96YhQJRhTKZPWLgLViqNhr1nGTLbO/CVRY7QbrqHvc Q7GhaQIDAQABo4ICnzCCApswEgYDVR0TAQH/BAgwBgEB/wIBBDAOBgNVHQ8BAf8E BAMCAAYwEQYJYIZIAYb4QgEBBAQDAgAHMIICYAYJYIZIAYb4QgENBIICURaCAk1G SUdZRUxFTSEgRXplbiB0YW51c2l0dmFueSBhIE5ldExvY2sgS2Z0LiBBbHRhbGFu b3MgU3pvbGdhbHRhdGFzaSBGZWx0ZXRlbGVpYmVuIGxlaXJ0IGVsamFyYXNvayBh bGFwamFuIGtlc3p1bHQuIEEgaGl0ZWxlc2l0ZXMgZm9seWFtYXRhdCBhIE5ldExv Y2sgS2Z0LiB0ZXJtZWtmZWxlbG9zc2VnLWJpenRvc2l0YXNhIHZlZGkuIEEgZGln aXRhbGlzIGFsYWlyYXMgZWxmb2dhZGFzYW5hayBmZWx0ZXRlbGUgYXogZWxvaXJ0 IGVsbGVub3J6ZXNpIGVsamFyYXMgbWVndGV0ZWxlLiBBeiBlbGphcmFzIGxlaXJh c2EgbWVndGFsYWxoYXRvIGEgTmV0TG9jayBLZnQuIEludGVybmV0IGhvbmxhcGph biBhIGh0dHBzOi8vd3d3Lm5ldGxvY2submV0L2RvY3MgY2ltZW4gdmFneSBrZXJo ZXRvIGF6IGVsbGVub3J6ZXNAbmV0bG9jay5uZXQgZS1tYWlsIGNpbWVuLiBJTVBP UlRBTlQhIFRoZSBpc3N1YW5jZSBhbmQgdGhlIHVzZSBvZiB0aGlzIGNlcnRpZmlj YXRlIGlzIHN1YmplY3QgdG8gdGhlIE5ldExvY2sgQ1BTIGF2YWlsYWJsZSBhdCBo dHRwczovL3d3dy5uZXRsb2NrLm5ldC9kb2NzIG9yIGJ5IGUtbWFpbCBhdCBjcHNA bmV0bG9jay5uZXQuMA0GCSqGSIb3DQEBBAUAA4GBAATbrowXr/gOkDFOzT4JwG06 sPgzTEdM43WIEJessDgVkcYplswhwG08pXTP2IKlOcNl40JwuyKQ433bNXbhoLXa n3BukxowOR0w2y7jfLKRstE3Kfq51hdcR0/jHTjrn9V7lagonhVK0dHQKwCXoOKS NitjrFgBazMpUIaD8QFI -----END CERTIFICATE----- NetLock Express (Class C) Root ============================== -----BEGIN CERTIFICATE----- MIIFTzCCBLigAwIBAgIBaDANBgkqhkiG9w0BAQQFADCBmzELMAkGA1UEBhMCSFUx ETAPBgNVBAcTCEJ1ZGFwZXN0MScwJQYDVQQKEx5OZXRMb2NrIEhhbG96YXRiaXp0 b25zYWdpIEtmdC4xGjAYBgNVBAsTEVRhbnVzaXR2YW55a2lhZG9rMTQwMgYDVQQD EytOZXRMb2NrIEV4cHJlc3N6IChDbGFzcyBDKSBUYW51c2l0dmFueWtpYWRvMB4X DTk5MDIyNTE0MDgxMVoXDTE5MDIyMDE0MDgxMVowgZsxCzAJBgNVBAYTAkhVMREw DwYDVQQHEwhCdWRhcGVzdDEnMCUGA1UEChMeTmV0TG9jayBIYWxvemF0Yml6dG9u c2FnaSBLZnQuMRowGAYDVQQLExFUYW51c2l0dmFueWtpYWRvazE0MDIGA1UEAxMr TmV0TG9jayBFeHByZXNzeiAoQ2xhc3MgQykgVGFudXNpdHZhbnlraWFkbzCBnzAN BgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA6+ywbGGKIyWvYCDj2Z/8kwvbXY2wobNA OoLO/XXgeDIDhlqGlZHtU/qdQPzm6N3ZW3oDvV3zOwzDUXmbrVWg6dADEK8KuhRC 2VImESLH0iDMgqSaqf64gXadarfSNnU+sYYJ9m5tfk63euyucYT2BDMIJTLrdKwW RMbkQJMdf60CAwEAAaOCAp8wggKbMBIGA1UdEwEB/wQIMAYBAf8CAQQwDgYDVR0P AQH/BAQDAgAGMBEGCWCGSAGG+EIBAQQEAwIABzCCAmAGCWCGSAGG+EIBDQSCAlEW ggJNRklHWUVMRU0hIEV6ZW4gdGFudXNpdHZhbnkgYSBOZXRMb2NrIEtmdC4gQWx0 YWxhbm9zIFN6b2xnYWx0YXRhc2kgRmVsdGV0ZWxlaWJlbiBsZWlydCBlbGphcmFz b2sgYWxhcGphbiBrZXN6dWx0LiBBIGhpdGVsZXNpdGVzIGZvbHlhbWF0YXQgYSBO ZXRMb2NrIEtmdC4gdGVybWVrZmVsZWxvc3NlZy1iaXp0b3NpdGFzYSB2ZWRpLiBB IGRpZ2l0YWxpcyBhbGFpcmFzIGVsZm9nYWRhc2FuYWsgZmVsdGV0ZWxlIGF6IGVs b2lydCBlbGxlbm9yemVzaSBlbGphcmFzIG1lZ3RldGVsZS4gQXogZWxqYXJhcyBs ZWlyYXNhIG1lZ3RhbGFsaGF0byBhIE5ldExvY2sgS2Z0LiBJbnRlcm5ldCBob25s YXBqYW4gYSBodHRwczovL3d3dy5uZXRsb2NrLm5ldC9kb2NzIGNpbWVuIHZhZ3kg a2VyaGV0byBheiBlbGxlbm9yemVzQG5ldGxvY2submV0IGUtbWFpbCBjaW1lbi4g SU1QT1JUQU5UISBUaGUgaXNzdWFuY2UgYW5kIHRoZSB1c2Ugb2YgdGhpcyBjZXJ0 aWZpY2F0ZSBpcyBzdWJqZWN0IHRvIHRoZSBOZXRMb2NrIENQUyBhdmFpbGFibGUg 
YXQgaHR0cHM6Ly93d3cubmV0bG9jay5uZXQvZG9jcyBvciBieSBlLW1haWwgYXQg Y3BzQG5ldGxvY2submV0LjANBgkqhkiG9w0BAQQFAAOBgQAQrX/XDDKACtiG8XmY ta3UzbM2xJZIwVzNmtkFLp++UOv0JhQQLdRmF/iewSf98e3ke0ugbLWrmldwpu2g pO0u9f38vf5NNwgMvOOWgyL1SRt/Syu0VMGAfJlOHdCM7tCs5ZL6dVb+ZKATj7i4 Fp1hBWeAyNDYpQcCNJgEjTME1A== -----END CERTIFICATE----- XRamp Global CA Root ==================== -----BEGIN CERTIFICATE----- MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3 dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6 38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4 qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0 eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ O+7ETPTsJ3xCwnR8gooJybQDJbw= -----END CERTIFICATE----- Go Daddy Class 2 CA =================== -----BEGIN CERTIFICATE----- MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3 MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+ YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h /t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5 IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf ReYNnyicsbkqWletNw+vHX/bvZ8= -----END CERTIFICATE----- Starfield Class 2 CA ==================== -----BEGIN CERTIFICATE----- MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl 
MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3 DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf 8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN +lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0 X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA 1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0 YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3 L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q= -----END CERTIFICATE----- StartCom Certification Authority ================================ -----BEGIN CERTIFICATE----- MIIHyTCCBbGgAwIBAgIBATANBgkqhkiG9w0BAQUFADB9MQswCQYDVQQGEwJJTDEW MBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwg Q2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3RhcnRDb20gQ2VydGlmaWNh dGlvbiBBdXRob3JpdHkwHhcNMDYwOTE3MTk0NjM2WhcNMzYwOTE3MTk0NjM2WjB9 MQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMi U2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3Rh cnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUA A4ICDwAwggIKAoICAQDBiNsJvGxGfHiflXu1M5DycmLWwTYgIiRezul38kMKogZk pMyONvg45iPwbm2xPN1yo4UcodM9tDMr0y+v/uqwQVlntsQGfQqedIXWeUyAN3rf OQVSWff0G0ZDpNKFhdLDcfN1YjS6LIp/Ho/u7TTQEceWzVI9ujPW3U3eCztKS5/C Ji/6tRYccjV3yjxd5srhJosaNnZcAdt0FCX+7bWgiA/deMotHweXMAEtcnn6RtYT Kqi5pquDSR3l8u/d5AGOGAqPY1MWhWKpDhk6zLVmpsJrdAfkK+F2PrRt2PZE4XNi HzvEvqBTViVsUQn3qqvKv3b9bZvzndu/PWa8DFaqr5hIlTpL36dYUNk4dalb6kMM Av+Z6+hsTXBbKWWc3apdzK8BMewM69KN6Oqce+Zu9ydmDBpI125C4z/eIT574Q1w +2OqqGwaVLRcJXrJosmLFqa7LH4XXgVNWG4SHQHuEhANxjJ/GP/89PrNbpHoNkm+ Gkhpi8KWTRoSsmkXwQqQ1vp5Iki/untp+HDH+no32NgN0nZPV/+Qt+OR0t3vwmC3 Zzrd/qqc8NSLf3Iizsafl7b4r4qgEKjZ+xjGtrVcUjyJthkqcwEKDwOzEmDyei+B 26Nu/yYwl/WL3YlXtq09s68rxbd2AvCl1iuahhQqcvbjM4xdCUsT37uMdBNSSwID AQABo4ICUjCCAk4wDAYDVR0TBAUwAwEB/zALBgNVHQ8EBAMCAa4wHQYDVR0OBBYE FE4L7xqkQFulF2mHMMo0aEPQQa7yMGQGA1UdHwRdMFswLKAqoCiGJmh0dHA6Ly9j ZXJ0LnN0YXJ0Y29tLm9yZy9zZnNjYS1jcmwuY3JsMCugKaAnhiVodHRwOi8vY3Js LnN0YXJ0Y29tLm9yZy9zZnNjYS1jcmwuY3JsMIIBXQYDVR0gBIIBVDCCAVAwggFM BgsrBgEEAYG1NwEBATCCATswLwYIKwYBBQUHAgEWI2h0dHA6Ly9jZXJ0LnN0YXJ0 Y29tLm9yZy9wb2xpY3kucGRmMDUGCCsGAQUFBwIBFilodHRwOi8vY2VydC5zdGFy dGNvbS5vcmcvaW50ZXJtZWRpYXRlLnBkZjCB0AYIKwYBBQUHAgIwgcMwJxYgU3Rh cnQgQ29tbWVyY2lhbCAoU3RhcnRDb20pIEx0ZC4wAwIBARqBl0xpbWl0ZWQgTGlh YmlsaXR5LCByZWFkIHRoZSBzZWN0aW9uICpMZWdhbCBMaW1pdGF0aW9ucyogb2Yg dGhlIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFBvbGljeSBhdmFp bGFibGUgYXQgaHR0cDovL2NlcnQuc3RhcnRjb20ub3JnL3BvbGljeS5wZGYwEQYJ YIZIAYb4QgEBBAQDAgAHMDgGCWCGSAGG+EIBDQQrFilTdGFydENvbSBGcmVlIFNT TCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTANBgkqhkiG9w0BAQUFAAOCAgEAFmyZ 9GYMNPXQhV59CuzaEE44HF7fpiUFS5Eyweg78T3dRAlbB0mKKctmArexmvclmAk8 
jhvh3TaHK0u7aNM5Zj2gJsfyOZEdUauCe37Vzlrk4gNXcGmXCPleWKYK34wGmkUW FjgKXlf2Ysd6AgXmvB618p70qSmD+LIU424oh0TDkBreOKk8rENNZEXO3SipXPJz ewT4F+irsfMuXGRuczE6Eri8sxHkfY+BUZo7jYn0TZNmezwD7dOaHZrzZVD1oNB1 ny+v8OqCQ5j4aZyJecRDjkZy42Q2Eq/3JR44iZB3fsNrarnDy0RLrHiQi+fHLB5L EUTINFInzQpdn4XBidUaePKVEFMy3YCEZnXZtWgo+2EuvoSoOMCZEoalHmdkrQYu L6lwhceWD3yJZfWOQ1QOq92lgDmUYMA0yZZwLKMS9R9Ie70cfmu3nZD0Ijuu+Pwq yvqCUqDvr0tVk+vBtfAii6w0TiYiBKGHLHVKt+V9E9e4DGTANtLJL4YSjCMJwRuC O3NJo2pXh5Tl1njFmUNj403gdy3hZZlyaQQaRwnmDwFWJPsfvw55qVguucQJAX6V um0ABj6y6koQOdjQK/W/7HW/lwLFCRsI3FU34oH7N4RDYiDK51ZLZer+bMEkkySh NOsF/5oirpt9P/FlUQqmMGqz9IgcgA38corog14= -----END CERTIFICATE----- Taiwan GRCA =========== -----BEGIN CERTIFICATE----- MIIFcjCCA1qgAwIBAgIQH51ZWtcvwgZEpYAIaeNe9jANBgkqhkiG9w0BAQUFADA/ MQswCQYDVQQGEwJUVzEwMC4GA1UECgwnR292ZXJubWVudCBSb290IENlcnRpZmlj YXRpb24gQXV0aG9yaXR5MB4XDTAyMTIwNTEzMjMzM1oXDTMyMTIwNTEzMjMzM1ow PzELMAkGA1UEBhMCVFcxMDAuBgNVBAoMJ0dvdmVybm1lbnQgUm9vdCBDZXJ0aWZp Y2F0aW9uIEF1dGhvcml0eTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB AJoluOzMonWoe/fOW1mKydGGEghU7Jzy50b2iPN86aXfTEc2pBsBHH8eV4qNw8XR IePaJD9IK/ufLqGU5ywck9G/GwGHU5nOp/UKIXZ3/6m3xnOUT0b3EEk3+qhZSV1q gQdW8or5BtD3cCJNtLdBuTK4sfCxw5w/cP1T3YGq2GN49thTbqGsaoQkclSGxtKy yhwOeYHWtXBiCAEuTk8O1RGvqa/lmr/czIdtJuTJV6L7lvnM4T9TjGxMfptTCAts F/tnyMKtsc2AtJfcdgEWFelq16TheEfOhtX7MfP6Mb40qij7cEwdScevLJ1tZqa2 jWR+tSBqnTuBto9AAGdLiYa4zGX+FVPpBMHWXx1E1wovJ5pGfaENda1UhhXcSTvx ls4Pm6Dso3pdvtUqdULle96ltqqvKKyskKw4t9VoNSZ63Pc78/1Fm9G7Q3hub/FC VGqY8A2tl+lSXunVanLeavcbYBT0peS2cWeqH+riTcFCQP5nRhc4L0c/cZyu5SHK YS1tB6iEfC3uUSXxY5Ce/eFXiGvviiNtsea9P63RPZYLhY3Naye7twWb7LuRqQoH EgKXTiCQ8P8NHuJBO9NAOueNXdpm5AKwB1KYXA6OM5zCppX7VRluTI6uSw+9wThN Xo+EHWbNxWCWtFJaBYmOlXqYwZE8lSOyDvR5tMl8wUohAgMBAAGjajBoMB0GA1Ud DgQWBBTMzO/MKWCkO7GStjz6MmKPrCUVOzAMBgNVHRMEBTADAQH/MDkGBGcqBwAE MTAvMC0CAQAwCQYFKw4DAhoFADAHBgVnKgMAAAQUA5vwIhP/lSg209yewDL7MTqK UWUwDQYJKoZIhvcNAQEFBQADggIBAECASvomyc5eMN1PhnR2WPWus4MzeKR6dBcZ TulStbngCnRiqmjKeKBMmo4sIy7VahIkv9Ro04rQ2JyftB8M3jh+Vzj8jeJPXgyf qzvS/3WXy6TjZwj/5cAWtUgBfen5Cv8b5Wppv3ghqMKnI6mGq3ZW6A4M9hPdKmaK ZEk9GhiHkASfQlK3T8v+R0F2Ne//AHY2RTKbxkaFXeIksB7jSJaYV0eUVXoPQbFE JPPB/hprv4j9wabak2BegUqZIJxIZhm1AHlUD7gsL0u8qV1bYH+Mh6XgUmMqvtg7 hUAV/h62ZT/FS9p+tXo1KaMuephgIqP0fSdOLeq0dDzpD6QzDxARvBMB1uUO07+1 EqLhRSPAzAhuYbeJq4PjJB7mXQfnHyA+z2fI56wwbSdLaG5LKlwCCDTb+HbkZ6Mm nD+iMsJKxYEYMRBWqoTvLQr/uB930r+lWKBi5NdLkXWNiYCYfm3LU05er/ayl4WX udpVBrkk7tfGOB5jGxI7leFYrPLfhNVfmS8NVVvmONsuP3LpSIXLuykTjx44Vbnz ssQwmSNOXfJIoRIM3BKQCZBUkQM8R+XVyWXgt0t97EfTsws+rZ7QdAAO671RrcDe LMDDav7v3Aun+kbfYNucpllQdSNpc5Oy+fwC00fmcc4QAu4njIT/rEUNE1yDMuAl pYYsfPQS -----END CERTIFICATE----- Firmaprofesional Root CA ======================== -----BEGIN CERTIFICATE----- MIIEVzCCAz+gAwIBAgIBATANBgkqhkiG9w0BAQUFADCBnTELMAkGA1UEBhMCRVMx IjAgBgNVBAcTGUMvIE11bnRhbmVyIDI0NCBCYXJjZWxvbmExQjBABgNVBAMTOUF1 dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1hcHJvZmVzaW9uYWwgQ0lGIEE2 MjYzNDA2ODEmMCQGCSqGSIb3DQEJARYXY2FAZmlybWFwcm9mZXNpb25hbC5jb20w HhcNMDExMDI0MjIwMDAwWhcNMTMxMDI0MjIwMDAwWjCBnTELMAkGA1UEBhMCRVMx IjAgBgNVBAcTGUMvIE11bnRhbmVyIDI0NCBCYXJjZWxvbmExQjBABgNVBAMTOUF1 dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1hcHJvZmVzaW9uYWwgQ0lGIEE2 MjYzNDA2ODEmMCQGCSqGSIb3DQEJARYXY2FAZmlybWFwcm9mZXNpb25hbC5jb20w ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDnIwNvbyOlXnjOlSztlB5u Cp4Bx+ow0Syd3Tfom5h5VtP8c9/Qit5Vj1H5WuretXDE7aTt/6MNbg9kUDGvASdY rv5sp0ovFy3Tc9UTHI9ZpTQsHVQERc1ouKDAA6XPhUJHlShbz++AbOCQl4oBPB3z hxAwJkh91/zpnZFx/0GaqUC1N5wpIE8fUuOgfRNtVLcK3ulqTgesrBlf3H5idPay BQC6haD9HThuy1q7hryUZzM1gywfI834yJFxzJeL764P3CkDG8A563DtwW4O2GcL 
iam8NeTvtjS0pbbELaW+0MOUJEjb35bTALVmGotmBQ/dPz/LP6pemkr4tErvlTcb AgMBAAGjgZ8wgZwwKgYDVR0RBCMwIYYfaHR0cDovL3d3dy5maXJtYXByb2Zlc2lv bmFsLmNvbTASBgNVHRMBAf8ECDAGAQH/AgEBMCsGA1UdEAQkMCKADzIwMDExMDI0 MjIwMDAwWoEPMjAxMzEwMjQyMjAwMDBaMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4E FgQUMwugZtHq2s7eYpMEKFK1FH84aLcwDQYJKoZIhvcNAQEFBQADggEBAEdz/o0n VPD11HecJ3lXV7cVVuzH2Fi3AQL0M+2TUIiefEaxvT8Ub/GzR0iLjJcG1+p+o1wq u00vR+L4OQbJnC4xGgN49Lw4xiKLMzHwFgQEffl25EvXwOaD7FnMP97/T2u3Z36m hoEyIwOdyPdfwUpgpZKpsaSgYMN4h7Mi8yrrW6ntBas3D7Hi05V2Y1Z0jFhyGzfl ZKG+TQyTmAyX9odtsz/ny4Cm7YjHX1BiAuiZdBbQ5rQ58SfLyEDW44YQqSMSkuBp QWOnryULwMWSyx6Yo1q6xTMPoJcB3X/ge9YGVM+h4k0460tQtcsm9MracEpqoeJ5 quGnM/b9Sh/22WA= -----END CERTIFICATE----- Wells Fargo Root CA =================== -----BEGIN CERTIFICATE----- MIID5TCCAs2gAwIBAgIEOeSXnjANBgkqhkiG9w0BAQUFADCBgjELMAkGA1UEBhMC VVMxFDASBgNVBAoTC1dlbGxzIEZhcmdvMSwwKgYDVQQLEyNXZWxscyBGYXJnbyBD ZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEvMC0GA1UEAxMmV2VsbHMgRmFyZ28gUm9v dCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDAxMDExMTY0MTI4WhcNMjEwMTE0 MTY0MTI4WjCBgjELMAkGA1UEBhMCVVMxFDASBgNVBAoTC1dlbGxzIEZhcmdvMSww KgYDVQQLEyNXZWxscyBGYXJnbyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEvMC0G A1UEAxMmV2VsbHMgRmFyZ28gUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEi MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDVqDM7Jvk0/82bfuUER84A4n13 5zHCLielTWi5MbqNQ1mXx3Oqfz1cQJ4F5aHiidlMuD+b+Qy0yGIZLEWukR5zcUHE SxP9cMIlrCL1dQu3U+SlK93OvRw6esP3E48mVJwWa2uv+9iWsWCaSOAlIiR5NM4O JgALTqv9i86C1y8IcGjBqAr5dE8Hq6T54oN+J3N0Prj5OEL8pahbSCOz6+MlsoCu ltQKnMJ4msZoGK43YjdeUXWoWGPAUe5AeH6orxqg4bB4nVCMe+ez/I4jsNtlAHCE AQgAFG5Uhpq6zPk3EPbg3oQtnaSFN9OH4xXQwReQfhkhahKpdv0SAulPIV4XAgMB AAGjYTBfMA8GA1UdEwEB/wQFMAMBAf8wTAYDVR0gBEUwQzBBBgtghkgBhvt7hwcB CzAyMDAGCCsGAQUFBwIBFiRodHRwOi8vd3d3LndlbGxzZmFyZ28uY29tL2NlcnRw b2xpY3kwDQYJKoZIhvcNAQEFBQADggEBANIn3ZwKdyu7IvICtUpKkfnRLb7kuxpo 7w6kAOnu5+/u9vnldKTC2FJYxHT7zmu1Oyl5GFrvm+0fazbuSCUlFLZWohDo7qd/ 0D+j0MNdJu4HzMPBJCGHHt8qElNvQRbn7a6U+oxy+hNH8Dx+rn0ROhPs7fpvcmR7 nX1/Jv16+yWt6j4pf0zjAFcysLPp7VMX2YuyFA4w6OXVE8Zkr8QA1dhYJPz1j+zx x32l2w8n0cbyQIjmH/ZhqPRCyLk306m+LFZ4wnKbWV01QIroTmMatukgalHizqSQ 33ZwmVxwQ023tqcZZE6St8WRPH9IFmV7Fv3L/PvZ1dZPIWU7Sn9Ho/s= -----END CERTIFICATE----- Swisscom Root CA 1 ================== -----BEGIN CERTIFICATE----- MIIF2TCCA8GgAwIBAgIQXAuFXAvnWUHfV8w/f52oNjANBgkqhkiG9w0BAQUFADBk MQswCQYDVQQGEwJjaDERMA8GA1UEChMIU3dpc3Njb20xJTAjBgNVBAsTHERpZ2l0 YWwgQ2VydGlmaWNhdGUgU2VydmljZXMxGzAZBgNVBAMTElN3aXNzY29tIFJvb3Qg Q0EgMTAeFw0wNTA4MTgxMjA2MjBaFw0yNTA4MTgyMjA2MjBaMGQxCzAJBgNVBAYT AmNoMREwDwYDVQQKEwhTd2lzc2NvbTElMCMGA1UECxMcRGlnaXRhbCBDZXJ0aWZp Y2F0ZSBTZXJ2aWNlczEbMBkGA1UEAxMSU3dpc3Njb20gUm9vdCBDQSAxMIICIjAN BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA0LmwqAzZuz8h+BvVM5OAFmUgdbI9 m2BtRsiMMW8Xw/qabFbtPMWRV8PNq5ZJkCoZSx6jbVfd8StiKHVFXqrWW/oLJdih FvkcxC7mlSpnzNApbjyFNDhhSbEAn9Y6cV9Nbc5fuankiX9qUvrKm/LcqfmdmUc/ TilftKaNXXsLmREDA/7n29uj/x2lzZAeAR81sH8A25Bvxn570e56eqeqDFdvpG3F EzuwpdntMhy0XmeLVNxzh+XTF3xmUHJd1BpYwdnP2IkCb6dJtDZd0KTeByy2dbco kdaXvij1mB7qWybJvbCXc9qukSbraMH5ORXWZ0sKbU/Lz7DkQnGMU3nn7uHbHaBu HYwadzVcFh4rUx80i9Fs/PJnB3r1re3WmquhsUvhzDdf/X/NTa64H5xD+SpYVUNF vJbNcA78yeNmuk6NO4HLFWR7uZToXTNShXEuT46iBhFRyePLoW4xCGQMwtI89Tbo 19AOeCMgkckkKmUpWyL3Ic6DXqTz3kvTaI9GdVyDCW4pa8RwjPWd1yAv/0bSKzjC L3UcPX7ape8eYIVpQtPM+GP+HkM5haa2Y0EQs3MevNP6yn0WR+Kn1dCjigoIlmJW bjTb2QK5MHXjBNLnj8KwEUAKrNVxAmKLMb7dxiNYMUJDLXT5xp6mig/p/r+D5kNX JLrvRjSq1xIBOO0CAwEAAaOBhjCBgzAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0hBBYw FDASBgdghXQBUwABBgdghXQBUwABMBIGA1UdEwEB/wQIMAYBAf8CAQcwHwYDVR0j BBgwFoAUAyUv3m+CATpcLNwroWm1Z9SM0/0wHQYDVR0OBBYEFAMlL95vggE6XCzc 
K6FptWfUjNP9MA0GCSqGSIb3DQEBBQUAA4ICAQA1EMvspgQNDQ/NwNurqPKIlwzf ky9NfEBWMXrrpA9gzXrzvsMnjgM+pN0S734edAY8PzHyHHuRMSG08NBsl9Tpl7Ik Vh5WwzW9iAUPWxAaZOHHgjD5Mq2eUCzneAXQMbFamIp1TpBcahQq4FJHgmDmHtqB sfsUC1rxn9KVuj7QG9YVHaO+htXbD8BJZLsuUBlL0iT43R4HVtA4oJVwIHaM190e 3p9xxCPvgxNcoyQVTSlAPGrEqdi3pkSlDfTgnXceQHAm/NrZNuR55LU/vJtlvrsR ls/bxig5OgjOR1tTWsWZ/l2p3e9M1MalrQLmjAcSHm8D0W+go/MpvRLHUKKwf4ip mXeascClOS5cfGniLLDqN2qk4Vrh9VDlg++luyqI54zb/W1elxmofmZ1a3Hqv7HH b6D0jqTsNFFbjCYDcKF31QESVwA12yPeDooomf2xEG9L/zgtYE4snOtnta1J7ksf rK/7DZBaZmBwXarNeNQk7shBoJMBkpxqnvy5JMWzFYJ+vq6VK+uxwNrjAWALXmms hFZhvnEX/h0TD/7Gh0Xp/jKgGg0TpJRVcaUWi7rKibCyx/yP2FS1k2Kdzs9Z+z0Y zirLNRWCXf9UIltxUvu3yf5gmwBBZPCqKuy2QkPOiWaByIufOVQDJdMWNY6E0F/6 MBr1mmz0DlP5OlvRHA== -----END CERTIFICATE----- DigiCert Assured ID Root CA =========================== -----BEGIN CERTIFICATE----- MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+ wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4 VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/ AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe +o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g== -----END CERTIFICATE----- DigiCert Global Root CA ======================= -----BEGIN CERTIFICATE----- MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG 9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97 nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt 43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4 gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg 06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4= -----END CERTIFICATE----- 
DigiCert High Assurance EV Root CA ================================== -----BEGIN CERTIFICATE----- MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3 LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm +9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3 hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2 Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep +OkuE6N36B9K -----END CERTIFICATE----- Certplus Class 2 Primary CA =========================== -----BEGIN CERTIFICATE----- MIIDkjCCAnqgAwIBAgIRAIW9S/PY2uNp9pTXX8OlRCMwDQYJKoZIhvcNAQEFBQAw PTELMAkGA1UEBhMCRlIxETAPBgNVBAoTCENlcnRwbHVzMRswGQYDVQQDExJDbGFz cyAyIFByaW1hcnkgQ0EwHhcNOTkwNzA3MTcwNTAwWhcNMTkwNzA2MjM1OTU5WjA9 MQswCQYDVQQGEwJGUjERMA8GA1UEChMIQ2VydHBsdXMxGzAZBgNVBAMTEkNsYXNz IDIgUHJpbWFyeSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANxQ ltAS+DXSCHh6tlJw/W/uz7kRy1134ezpfgSN1sxvc0NXYKwzCkTsA18cgCSR5aiR VhKC9+Ar9NuuYS6JEI1rbLqzAr3VNsVINyPi8Fo3UjMXEuLRYE2+L0ER4/YXJQyL kcAbmXuZVg2v7tK8R1fjeUl7NIknJITesezpWE7+Tt9avkGtrAjFGA7v0lPubNCd EgETjdyAYveVqUSISnFOYFWe2yMZeVYHDD9jC1yw4r5+FfyUM1hBOHTE4Y+L3yas H7WLO7dDWWuwJKZtkIvEcupdM5i3y95ee++U8Rs+yskhwcWYAqqi9lt3m/V+llU0 HGdpwPFC40es/CgcZlUCAwEAAaOBjDCBiTAPBgNVHRMECDAGAQH/AgEKMAsGA1Ud DwQEAwIBBjAdBgNVHQ4EFgQU43Mt38sOKAze3bOkynm4jrvoMIkwEQYJYIZIAYb4 QgEBBAQDAgEGMDcGA1UdHwQwMC4wLKAqoCiGJmh0dHA6Ly93d3cuY2VydHBsdXMu Y29tL0NSTC9jbGFzczIuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQCnVM+IRBnL39R/ AN9WM2K191EBkOvDP9GIROkkXe/nFL0gt5o8AP5tn9uQ3Nf0YtaLcF3n5QRIqWh8 yfFC82x/xXp8HVGIutIKPidd3i1RTtMTZGnkLuPT55sJmabglZvOGtd/vjzOUrMR FcEPF80Du5wlFbqidon8BvEY0JNLDnyCt6X09l/+7UCmnYR0ObncHoUW2ikbhiMA ybuJfm6AiB4vFLQDJKgybwOaRywwvlbGp0ICcBvqQNi6BQNwB6SW//1IMwrh3KWB kJtN3X3n57LNXMhqlfil9o3EXXgIvnsG1knPGTZQIy4I5p4FTUcY1Rbpsda2ENW7 l7+ijrRU -----END CERTIFICATE----- DST Root CA X3 ============== -----BEGIN CERTIFICATE----- MIIDSjCCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/ MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT DkRTVCBSb290IENBIFgzMB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow PzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMRcwFQYDVQQD Ew5EU1QgUm9vdCBDQSBYMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB AN+v6ZdQCINXtMxiZfaQguzH0yxrMMpb7NnDfcdAwRgUi+DoM3ZJKuM/IUmTrE4O rz5Iy2Xu/NMhD2XSKtkyj4zl93ewEnu1lcCJo6m67XMuegwGMoOifooUMM0RoOEq OLl5CjH9UL2AZd+3UWODyOKIYepLYYHsUmu5ouJLGiifSKOeDNoJjj4XLh7dIN9b xiqKqy69cK3FCxolkHRyxXtqqzTWMIn/5WgTe1QLyNau7Fqckh49ZLOMxt+/yUFw 7BZy1SbsOFU5Q9D8/RhcQPGX69Wam40dutolucbY38EVAjqr2m7xPi71XAicPNaD 
aeQQmxkqtilX4+U9m5/wAl0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNV HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFMSnsaR7LHH62+FLkHX/xBVghYkQMA0GCSqG SIb3DQEBBQUAA4IBAQCjGiybFwBcqR7uKGY3Or+Dxz9LwwmglSBd49lZRNI+DT69 ikugdB/OEIKcdBodfpga3csTS7MgROSR6cz8faXbauX+5v3gTt23ADq1cEmv8uXr AvHRAosZy5Q6XkjEGB5YGV8eAlrwDPGxrancWYaLbumR9YbK+rlmM6pZW87ipxZz R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5 JDGFoqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo Ob8VZRzI9neWagqNdwvYkQsEjgfbKbYK7p2CNTUQ -----END CERTIFICATE----- DST ACES CA X6 ============== -----BEGIN CERTIFICATE----- MIIECTCCAvGgAwIBAgIQDV6ZCtadt3js2AdWO4YV2TANBgkqhkiG9w0BAQUFADBb MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3Qx ETAPBgNVBAsTCERTVCBBQ0VTMRcwFQYDVQQDEw5EU1QgQUNFUyBDQSBYNjAeFw0w MzExMjAyMTE5NThaFw0xNzExMjAyMTE5NThaMFsxCzAJBgNVBAYTAlVTMSAwHgYD VQQKExdEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdDERMA8GA1UECxMIRFNUIEFDRVMx FzAVBgNVBAMTDkRTVCBBQ0VTIENBIFg2MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A MIIBCgKCAQEAuT31LMmU3HWKlV1j6IR3dma5WZFcRt2SPp/5DgO0PWGSvSMmtWPu ktKe1jzIDZBfZIGxqAgNTNj50wUoUrQBJcWVHAx+PhCEdc/BGZFjz+iokYi5Q1K7 gLFViYsx+tC3dr5BPTCapCIlF3PoHuLTrCq9Wzgh1SpL11V94zpVvddtawJXa+ZH fAjIgrrep4c9oW24MFbCswKBXy314powGCi4ZtPLAZZv6opFVdbgnf9nKxcCpk4a ahELfrd755jWjHZvwTvbUJN+5dCOHze4vbrGn2zpfDPyMjwmR/onJALJfh1biEIT ajV8fTXpLmaRcpPVMibEdPVTo7NdmvYJywIDAQABo4HIMIHFMA8GA1UdEwEB/wQF MAMBAf8wDgYDVR0PAQH/BAQDAgHGMB8GA1UdEQQYMBaBFHBraS1vcHNAdHJ1c3Rk c3QuY29tMGIGA1UdIARbMFkwVwYKYIZIAWUDAgEBATBJMEcGCCsGAQUFBwIBFjto dHRwOi8vd3d3LnRydXN0ZHN0LmNvbS9jZXJ0aWZpY2F0ZXMvcG9saWN5L0FDRVMt aW5kZXguaHRtbDAdBgNVHQ4EFgQUCXIGThhDD+XWzMNqizF7eI+og7gwDQYJKoZI hvcNAQEFBQADggEBAKPYjtay284F5zLNAdMEA+V25FYrnJmQ6AgwbN99Pe7lv7Uk QIRJ4dEorsTCOlMwiPH1d25Ryvr/ma8kXxug/fKshMrfqfBfBC6tFr8hlxCBPeP/ h40y3JTlR4peahPJlJU90u7INJXQgNStMgiAVDzgvVJT11J8smk/f3rPanTK+gQq nExaBqXpIK1FZg9p8d2/6eMyi/rgwYZNcjwu2JN4Cir42NInPRmJX1p7ijvMDNpR rscL9yuwNwXsvFcj4jjSm2jzVhKIT0J8uDHEtdvkyCE06UgRNe76x5JXxZ805Mf2 9w4LTJxoeHtxMcfrHuBnQfO3oKfN5XozNmr6mis= -----END CERTIFICATE----- TURKTRUST Certificate Services Provider Root 1 ============================================== -----BEGIN CERTIFICATE----- MIID+zCCAuOgAwIBAgIBATANBgkqhkiG9w0BAQUFADCBtzE/MD0GA1UEAww2VMOc UktUUlVTVCBFbGVrdHJvbmlrIFNlcnRpZmlrYSBIaXptZXQgU2HEn2xhecSxY8Sx c8SxMQswCQYDVQQGDAJUUjEPMA0GA1UEBwwGQU5LQVJBMVYwVAYDVQQKDE0oYykg MjAwNSBUw5xSS1RSVVNUIEJpbGdpIMSwbGV0acWfaW0gdmUgQmlsacWfaW0gR8O8 dmVubGnEn2kgSGl6bWV0bGVyaSBBLsWeLjAeFw0wNTA1MTMxMDI3MTdaFw0xNTAz MjIxMDI3MTdaMIG3MT8wPQYDVQQDDDZUw5xSS1RSVVNUIEVsZWt0cm9uaWsgU2Vy dGlmaWthIEhpem1ldCBTYcSfbGF5xLFjxLFzxLExCzAJBgNVBAYMAlRSMQ8wDQYD VQQHDAZBTktBUkExVjBUBgNVBAoMTShjKSAyMDA1IFTDnFJLVFJVU1QgQmlsZ2kg xLBsZXRpxZ9pbSB2ZSBCaWxpxZ9pbSBHw7x2ZW5sacSfaSBIaXptZXRsZXJpIEEu xZ4uMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAylIF1mMD2Bxf3dJ7 XfIMYGFbazt0K3gNfUW9InTojAPBxhEqPZW8qZSwu5GXyGl8hMW0kWxsE2qkVa2k heiVfrMArwDCBRj1cJ02i67L5BuBf5OI+2pVu32Fks66WJ/bMsW9Xe8iSi9BB35J YbOG7E6mQW6EvAPs9TscyB/C7qju6hJKjRTP8wrgUDn5CDX4EVmt5yLqS8oUBt5C urKZ8y1UiBAG6uEaPj1nH/vO+3yC6BFdSsG5FOpU2WabfIl9BJpiyelSPJ6c79L1 JuTm5Rh8i27fbMx4W09ysstcP4wFjdFMjK2Sx+F4f2VsSQZQLJ4ywtdKxnWKWU51 b0dewQIDAQABoxAwDjAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQAV 9VX/N5aAWSGk/KEVTCD21F/aAyT8z5Aa9CEKmu46sWrv7/hg0Uw2ZkUd82YCdAR7 kjCo3gp2D++Vbr3JN+YaDayJSFvMgzbC9UZcWYJWtNX+I7TYVBxEq8Sn5RTOPEFh fEPmzcSBCYsk+1Ql1haolgxnB2+zUEfjHCQo3SqYpGH+2+oSN7wBGjSFvW5P55Fy B0SFHljKVETd96y5y4khctuPwGkplyqjrhgjlxxBKot8KsF8kOipKMDTkcatKIdA aLX/7KfS0zgYnNN9aV3wxqUeJBujR/xpB2jn5Jq07Q+hh4cCzofSSE7hvP/L8XKS RGQDJereW26fyfJOrN3H -----END CERTIFICATE----- 
TURKTRUST Certificate Services Provider Root 2 ============================================== -----BEGIN CERTIFICATE----- MIIEPDCCAySgAwIBAgIBATANBgkqhkiG9w0BAQUFADCBvjE/MD0GA1UEAww2VMOc UktUUlVTVCBFbGVrdHJvbmlrIFNlcnRpZmlrYSBIaXptZXQgU2HEn2xhecSxY8Sx c8SxMQswCQYDVQQGEwJUUjEPMA0GA1UEBwwGQW5rYXJhMV0wWwYDVQQKDFRUw5xS S1RSVVNUIEJpbGdpIMSwbGV0acWfaW0gdmUgQmlsacWfaW0gR8O8dmVubGnEn2kg SGl6bWV0bGVyaSBBLsWeLiAoYykgS2FzxLFtIDIwMDUwHhcNMDUxMTA3MTAwNzU3 WhcNMTUwOTE2MTAwNzU3WjCBvjE/MD0GA1UEAww2VMOcUktUUlVTVCBFbGVrdHJv bmlrIFNlcnRpZmlrYSBIaXptZXQgU2HEn2xhecSxY8Sxc8SxMQswCQYDVQQGEwJU UjEPMA0GA1UEBwwGQW5rYXJhMV0wWwYDVQQKDFRUw5xSS1RSVVNUIEJpbGdpIMSw bGV0acWfaW0gdmUgQmlsacWfaW0gR8O8dmVubGnEn2kgSGl6bWV0bGVyaSBBLsWe LiAoYykgS2FzxLFtIDIwMDUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB AQCpNn7DkUNMwxmYCMjHWHtPFoylzkkBH3MOrHUTpvqeLCDe2JAOCtFp0if7qnef J1Il4std2NiDUBd9irWCPwSOtNXwSadktx4uXyCcUHVPr+G1QRT0mJKIx+XlZEdh R3n9wFHxwZnn3M5q+6+1ATDcRhzviuyV79z/rxAc653YsKpqhRgNF8k+v/Gb0AmJ Qv2gQrSdiVFVKc8bcLyEVK3BEx+Y9C52YItdP5qtygy/p1Zbj3e41Z55SZI/4PGX JHpsmxcPbe9TmJEr5A++WXkHeLuXlfSfadRYhwqp48y2WBmfJiGxxFmNskF1wK1p zpwACPI2/z7woQ8arBT9pmAPAgMBAAGjQzBBMB0GA1UdDgQWBBTZN7NOBf3Zz58S Fq62iS/rJTqIHDAPBgNVHQ8BAf8EBQMDBwYAMA8GA1UdEwEB/wQFMAMBAf8wDQYJ KoZIhvcNAQEFBQADggEBAHJglrfJ3NgpXiOFX7KzLXb7iNcX/nttRbj2hWyfIvwq ECLsqrkw9qtY1jkQMZkpAL2JZkH7dN6RwRgLn7Vhy506vvWolKMiVW4XSf/SKfE4 Jl3vpao6+XF75tpYHdN0wgH6PmlYX63LaL4ULptswLbcoCb6dxriJNoaN+BnrdFz gw2lGh1uEpJ+hGIAF728JRhX8tepb1mIvDS3LoV4nZbcFMMsilKbloxSZj2GFotH uFEJjOp9zYhys2AzsfAKRO8P9Qk3iCQOLGsgOqL6EfJANZxEaGM7rDNvY7wsu/LS y3Z9fYjYHcgFHW68lKlmjHdxx/qR+i9Rnuk5UrbnBEI= -----END CERTIFICATE----- SwissSign Platinum CA - G2 ========================== -----BEGIN CERTIFICATE----- MIIFwTCCA6mgAwIBAgIITrIAZwwDXU8wDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UE BhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEjMCEGA1UEAxMaU3dpc3NTaWdu IFBsYXRpbnVtIENBIC0gRzIwHhcNMDYxMDI1MDgzNjAwWhcNMzYxMDI1MDgzNjAw WjBJMQswCQYDVQQGEwJDSDEVMBMGA1UEChMMU3dpc3NTaWduIEFHMSMwIQYDVQQD ExpTd2lzc1NpZ24gUGxhdGludW0gQ0EgLSBHMjCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMrfogLi2vj8Bxax3mCq3pZcZB/HL37PZ/pEQtZ2Y5Wu669y IIpFR4ZieIbWIDkm9K6j/SPnpZy1IiEZtzeTIsBQnIJ71NUERFzLtMKfkr4k2Htn IuJpX+UFeNSH2XFwMyVTtIc7KZAoNppVRDBopIOXfw0enHb/FZ1glwCNioUD7IC+ 6ixuEFGSzH7VozPY1kneWCqv9hbrS3uQMpe5up1Y8fhXSQQeol0GcN1x2/ndi5ob jM89o03Oy3z2u5yg+gnOI2Ky6Q0f4nIoj5+saCB9bzuohTEJfwvH6GXp43gOCWcw izSC+13gzJ2BbWLuCB4ELE6b7P6pT1/9aXjvCR+htL/68++QHkwFix7qepF6w9fl +zC8bBsQWJj3Gl/QKTIDE0ZNYWqFTFJ0LwYfexHihJfGmfNtf9dng34TaNhxKFrY zt3oEBSa/m0jh26OWnA81Y0JAKeqvLAxN23IhBQeW71FYyBrS3SMvds6DsHPWhaP pZjydomyExI7C3d3rLvlPClKknLKYRorXkzig3R3+jVIeoVNjZpTxN94ypeRSCtF KwH3HBqi7Ri6Cr2D+m+8jVeTO9TUps4e8aCxzqv9KyiaTxvXw3LbpMS/XUz13XuW ae5ogObnmLo2t/5u7Su9IPhlGdpVCX4l3P5hYnL5fhgC72O00Puv5TtjjGePAgMB AAGjgawwgakwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O BBYEFFCvzAeHFUdvOMW0ZdHelarp35zMMB8GA1UdIwQYMBaAFFCvzAeHFUdvOMW0 ZdHelarp35zMMEYGA1UdIAQ/MD0wOwYJYIV0AVkBAQEBMC4wLAYIKwYBBQUHAgEW IGh0dHA6Ly9yZXBvc2l0b3J5LnN3aXNzc2lnbi5jb20vMA0GCSqGSIb3DQEBBQUA A4ICAQAIhab1Fgz8RBrBY+D5VUYI/HAcQiiWjrfFwUF1TglxeeVtlspLpYhg0DB0 uMoI3LQwnkAHFmtllXcBrqS3NQuB2nEVqXQXOHtYyvkv+8Bldo1bAbl93oI9ZLi+ FHSjClTTLJUYFzX1UWs/j6KWYTl4a0vlpqD4U99REJNi54Av4tHgvI42Rncz7Lj7 jposiU0xEQ8mngS7twSNC/K5/FqdOxa3L8iYq/6KUFkuozv8KV2LwUvJ4ooTHbG/ u0IdUt1O2BReEMYxB+9xJ/cbOQncguqLs5WGXv312l0xpuAxtpTmREl0xRbl9x8D YSjFyMsSoEJL+WuICI20MhjzdZ/EfwBPBZWcoxcCw7NTm6ogOSkrZvqdr16zktK1 puEa+S1BaYEUtLS17Yk9zvupnTVCRLEcFHOBzyoBNZox1S2PbYTfgE1X4z/FhHXa icYwu+uPyyIIoK6q8QNsOktNCaUOcsZWayFCTiMlFGiudgp8DAdwZPmaL/YFOSbG 
DI8Zf0NebvRbFS/bYV3mZy8/CJT5YLSYMdp08YSTcU1f+2BY0fvEwW2JorsgH51x kcsymxM9Pn2SUjWskpSi0xjCfMfqr3YFFt1nJ8J+HAciIfNAChs0B0QTwoRqjt8Z Wr9/6x3iGjjRXK9HkmuAtTClyY3YqzGBH9/CZjfTk6mFhnll0g== -----END CERTIFICATE----- SwissSign Gold CA - G2 ====================== -----BEGIN CERTIFICATE----- MIIFujCCA6KgAwIBAgIJALtAHEP1Xk+wMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV BAYTAkNIMRUwEwYDVQQKEwxTd2lzc1NpZ24gQUcxHzAdBgNVBAMTFlN3aXNzU2ln biBHb2xkIENBIC0gRzIwHhcNMDYxMDI1MDgzMDM1WhcNMzYxMDI1MDgzMDM1WjBF MQswCQYDVQQGEwJDSDEVMBMGA1UEChMMU3dpc3NTaWduIEFHMR8wHQYDVQQDExZT d2lzc1NpZ24gR29sZCBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAr+TufoskDhJuqVAtFkQ7kpJcyrhdhJJCEyq8ZVeCQD5XJM1QiyUqt2/8 76LQwB8CJEoTlo8jE+YoWACjR8cGp4QjK7u9lit/VcyLwVcfDmJlD909Vopz2q5+ bbqBHH5CjCA12UNNhPqE21Is8w4ndwtrvxEvcnifLtg+5hg3Wipy+dpikJKVyh+c 6bM8K8vzARO/Ws/BtQpgvd21mWRTuKCWs2/iJneRjOBiEAKfNA+k1ZIzUd6+jbqE emA8atufK+ze3gE/bk3lUIbLtK/tREDFylqM2tIrfKjuvqblCqoOpd8FUrdVxyJd MmqXl2MT28nbeTZ7hTpKxVKJ+STnnXepgv9VHKVxaSvRAiTysybUa9oEVeXBCsdt MDeQKuSeFDNeFhdVxVu1yzSJkvGdJo+hB9TGsnhQ2wwMC3wLjEHXuendjIj3o02y MszYF9rNt85mndT9Xv+9lz4pded+p2JYryU0pUHHPbwNUMoDAw8IWh+Vc3hiv69y FGkOpeUDDniOJihC8AcLYiAQZzlG+qkDzAQ4embvIIO1jEpWjpEA/I5cgt6IoMPi aG59je883WX0XaxR7ySArqpWl2/5rX3aYT+YdzylkbYcjCbaZaIJbcHiVOO5ykxM gI93e2CaHt+28kgeDrpOVG2Y4OGiGqJ3UM/EY5LsRxmd6+ZrzsECAwEAAaOBrDCB qTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUWyV7 lqRlUX64OfPAeGZe6Drn8O4wHwYDVR0jBBgwFoAUWyV7lqRlUX64OfPAeGZe6Drn 8O4wRgYDVR0gBD8wPTA7BglghXQBWQECAQEwLjAsBggrBgEFBQcCARYgaHR0cDov L3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIBACe6 45R88a7A3hfm5djV9VSwg/S7zV4Fe0+fdWavPOhWfvxyeDgD2StiGwC5+OlgzczO UYrHUDFu4Up+GC9pWbY9ZIEr44OE5iKHjn3g7gKZYbge9LgriBIWhMIxkziWMaa5 O1M/wySTVltpkuzFwbs4AOPsF6m43Md8AYOfMke6UiI0HTJ6CVanfCU2qT1L2sCC bwq7EsiHSycR+R4tx5M/nttfJmtS2S6K8RTGRI0Vqbe/vd6mGu6uLftIdxf+u+yv GPUqUfA5hJeVbG4bwyvEdGB5JbAKJ9/fXtI5z0V9QkvfsywexcZdylU6oJxpmo/a 77KwPJ+HbBIrZXAVUjEaJM9vMSNQH4xPjyPDdEFjHFWoFN0+4FFQz/EbMFYOkrCC hdiDyyJkvC24JdVUorgG6q2SpCSgwYa1ShNqR88uC1aVVMvOmttqtKay20EIhid3 92qgQmwLOM7XdVAyksLfKzAiSNDVQTglXaTpXZ/GlHXQRf0wl0OPkKsKx4ZzYEpp Ld6leNcG2mqeSz53OiATIgHQv2ieY2BrNU0LbbqhPcCT4H8js1WtciVORvnSFu+w ZMEBnunKoGqYDs/YYPIvSbjkQuE4NRb0yG5P94FW6LqjviOvrv1vA+ACOzB2+htt Qc8Bsem4yWb02ybzOqR08kkkW8mw0FfB+j564ZfJ -----END CERTIFICATE----- SwissSign Silver CA - G2 ======================== -----BEGIN CERTIFICATE----- MIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE BhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu IFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow RzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMY U3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A MIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv Fz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gREpzn5/ASLHvGiTSf5YXu6t+WiE7br YT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF nbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH 6ATK72oxh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt eJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/ c8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0mRmrVn5bi4Y5FZGkECwJ MoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH HTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf jNFusB3hB48IHpmccelM2KX3RxIfdNFRnobzwqIjQAtz20um53MGjMGg6cFZrEb6 5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB rDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU F6DNweRBtjpbO8tFnb0cwpj6hlgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c 
wpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0 cDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIB AHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp WJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9 xCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/L3XWgwF15kIwb4FDm3jH+mHtwX6WQ 2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ IseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8 aRl5xB9+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X em1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR dAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0IavdeQXRuwxF+B6wpYJE/ OMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+ hAhm0sQ2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy tGMU0gYqZ4yD9c7qB9iaah7s5Aq7KkzrCWA5zspi2C5u -----END CERTIFICATE----- GeoTrust Primary Certification Authority ======================================== -----BEGIN CERTIFICATE----- MIIDfDCCAmSgAwIBAgIQGKy1av1pthU6Y2yv2vrEoTANBgkqhkiG9w0BAQUFADBY MQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjExMC8GA1UEAxMo R2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEx MjcwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMFgxCzAJBgNVBAYTAlVTMRYwFAYDVQQK Ew1HZW9UcnVzdCBJbmMuMTEwLwYDVQQDEyhHZW9UcnVzdCBQcmltYXJ5IENlcnRp ZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC AQEAvrgVe//UfH1nrYNke8hCUy3f9oQIIGHWAVlqnEQRr+92/ZV+zmEwu3qDXwK9 AWbK7hWNb6EwnL2hhZ6UOvNWiAAxz9juapYC2e0DjPt1befquFUWBRaa9OBesYjA ZIVcFU2Ix7e64HXprQU9nceJSOC7KMgD4TCTZF5SwFlwIjVXiIrxlQqD17wxcwE0 7e9GceBrAqg1cmuXm2bgyxx5X9gaBGgeRwLmnWDiNpcB3841kt++Z8dtd1k7j53W kBWUvEI0EME5+bEnPn7WinXFsq+W06Lem+SYvn3h6YGttm/81w7a4DSwDRp35+MI mO9Y+pyEtzavwt+s0vQQBnBxNQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4G A1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQULNVQQZcVi/CPNmFbSvtr2ZnJM5IwDQYJ KoZIhvcNAQEFBQADggEBAFpwfyzdtzRP9YZRqSa+S7iq8XEN3GHHoOo0Hnp3DwQ1 6CePbJC/kRYkRj5KTs4rFtULUh38H2eiAkUxT87z+gOneZ1TatnaYzr4gNfTmeGl 4b7UVXGYNTq+k+qurUKykG/g/CFNNWMziUnWm07Kx+dOCQD32sfvmWKZd7aVIl6K oKv0uHiYyjgZmclynnjNS6yvGaBzEi38wkG6gZHaFloxt/m0cYASSJlyc1pZU8Fj UjPtp8nSOQJw+uCxQmYpqptR7TBUIhRf2asdweSU8Pj1K/fqynhG1riR/aYNKxoU AT6A8EKglQdebc3MS6RFjasS6LPeWuWgfOgPIh1a6Vk= -----END CERTIFICATE----- thawte Primary Root CA ====================== -----BEGIN CERTIFICATE----- MIIEIDCCAwigAwIBAgIQNE7VVyDV7exJ9C/ON9srbTANBgkqhkiG9w0BAQUFADCB qTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw MDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxHzAdBgNV BAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwHhcNMDYxMTE3MDAwMDAwWhcNMzYw NzE2MjM1OTU5WjCBqTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5j LjEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYG A1UECxMvKGMpIDIwMDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNl IG9ubHkxHzAdBgNVBAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsoPD7gFnUnMekz52hWXMJEEUMDSxuaPFs W0hoSVk3/AszGcJ3f8wQLZU0HObrTQmnHNK4yZc2AreJ1CRfBsDMRJSUjQJib+ta 3RGNKJpchJAQeg29dGYvajig4tVUROsdB58Hum/u6f1OCyn1PoSgAfGcq/gcfomk 6KHYcWUNo1F77rzSImANuVud37r8UVsLr5iy6S7pBOhih94ryNdOwUxkHt3Ph1i6 Sk/KaAcdHJ1KxtUvkcx8cXIcxcBn6zL9yZJclNqFwJu/U30rCfSMnZEfl2pSy94J NqR32HuHUETVPm4pafs5SSYeCaWAe0At6+gnhcn+Yf1+5nyXHdWdAgMBAAGjQjBA MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBR7W0XP r87Lev0xkhpqtvNG61dIUDANBgkqhkiG9w0BAQUFAAOCAQEAeRHAS7ORtvzw6WfU DW5FvlXok9LOAz/t2iWwHVfLHjp2oEzsUHboZHIMpKnxuIvW1oeEuzLlQRHAd9mz YJ3rG9XRbkREqaYB7FViHXe4XI5ISXycO1cRrK1zN44veFyQaEfZYGDm/Ac9IiAX 
xPcW6cTYcvnIc3zfFi8VqT79aie2oetaupgf1eNNZAqdE8hhuvU5HIe6uL17In/2 /qxAeeWsEG89jxt5dovEN7MhGITlNgDrYyCZuen+MwS7QcjBAvlEYyCegc5C09Y/ LHbTY5xZ3Y+m4Q6gLkH3LpVHz7z9M/P2C2F+fpErgUfCJzDupxBdN49cOSvkBPB7 jVaMaA== -----END CERTIFICATE----- VeriSign Class 3 Public Primary Certification Authority - G5 ============================================================ -----BEGIN CERTIFICATE----- MIIE0zCCA7ugAwIBAgIQGNrRniZ96LtKIVjNzGs7SjANBgkqhkiG9w0BAQUFADCB yjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJp U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxW ZXJpU2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0 aG9yaXR5IC0gRzUwHhcNMDYxMTA4MDAwMDAwWhcNMzYwNzE2MjM1OTU5WjCByjEL MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2ln biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y aXR5IC0gRzUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvJAgIKXo1 nmAMqudLO07cfLw8RRy7K+D+KQL5VwijZIUVJ/XxrcgxiV0i6CqqpkKzj/i5Vbex t0uz/o9+B1fs70PbZmIVYc9gDaTY3vjgw2IIPVQT60nKWVSFJuUrjxuf6/WhkcIz SdhDY2pSS9KP6HBRTdGJaXvHcPaz3BJ023tdS1bTlr8Vd6Gw9KIl8q8ckmcY5fQG BO+QueQA5N06tRn/Arr0PO7gi+s3i+z016zy9vA9r911kTMZHRxAy3QkGSGT2RT+ rCpSx4/VBEnkjWNHiDxpg8v+R70rfk/Fla4OndTRQ8Bnc+MUCH7lP59zuDMKz10/ NIeWiu5T6CUVAgMBAAGjgbIwga8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8E BAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJaW1hZ2UvZ2lmMCEwHzAH BgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYjaHR0cDovL2xvZ28udmVy aXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFH/TZafC3ey78DAJ80M5+gKv MzEzMA0GCSqGSIb3DQEBBQUAA4IBAQCTJEowX2LP2BqYLz3q3JktvXf2pXkiOOzE p6B4Eq1iDkVwZMXnl2YtmAl+X6/WzChl8gGqCBpH3vn5fJJaCGkgDdk+bW48DW7Y 5gaRQBi5+MHt39tBquCWIMnNZBU4gcmU7qKEKQsTb47bDN0lAtukixlE0kF6BWlK WE9gyn6CagsCqiUXObXbf+eEZSqVir2G3l6BFoMtEMze/aiCKm0oHw0LxOXnGiYZ 4fQRbxC1lfznQgUy286dUV4otp6F01vvpX1FQHKOtw5rDgb7MzVIcbidJ4vEZV8N hnacRHr2lVz2XTIIM6RUthg/aFzyQkqFOFSDX9HoLPKsEdao7WNq -----END CERTIFICATE----- SecureTrust CA ============== -----BEGIN CERTIFICATE----- MIIDuDCCAqCgAwIBAgIQDPCOXAgWpa1Cf/DrJxhZ0DANBgkqhkiG9w0BAQUFADBI MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x FzAVBgNVBAMTDlNlY3VyZVRydXN0IENBMB4XDTA2MTEwNzE5MzExOFoXDTI5MTIz MTE5NDA1NVowSDELMAkGA1UEBhMCVVMxIDAeBgNVBAoTF1NlY3VyZVRydXN0IENv cnBvcmF0aW9uMRcwFQYDVQQDEw5TZWN1cmVUcnVzdCBDQTCCASIwDQYJKoZIhvcN AQEBBQADggEPADCCAQoCggEBAKukgeWVzfX2FI7CT8rU4niVWJxB4Q2ZQCQXOZEz Zum+4YOvYlyJ0fwkW2Gz4BERQRwdbvC4u/jep4G6pkjGnx29vo6pQT64lO0pGtSO 0gMdA+9tDWccV9cGrcrI9f4Or2YlSASWC12juhbDCE/RRvgUXPLIXgGZbf2IzIao wW8xQmxSPmjL8xk037uHGFaAJsTQ3MBv396gwpEWoGQRS0S8Hvbn+mPeZqx2pHGj 7DaUaHp3pLHnDi+BeuK1cobvomuL8A/b01k/unK8RCSc43Oz969XL0Imnal0ugBS 8kvNU3xHCzaFDmapCJcWNFfBZveA4+1wVMeT4C4oFVmHursCAwEAAaOBnTCBmjAT BgkrBgEEAYI3FAIEBh4EAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB /zAdBgNVHQ4EFgQUQjK2FvoE/f5dS3rD/fdMQB1aQ68wNAYDVR0fBC0wKzApoCeg JYYjaHR0cDovL2NybC5zZWN1cmV0cnVzdC5jb20vU1RDQS5jcmwwEAYJKwYBBAGC NxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBADDtT0rhWDpSclu1pqNlGKa7UTt3 6Z3q059c4EVlew3KW+JwULKUBRSuSceNQQcSc5R+DCMh/bwQf2AQWnL1mA6s7Ll/ 3XpvXdMc9P+IBWlCqQVxyLesJugutIxq/3HcuLHfmbx8IVQr5Fiiu1cprp6poxkm D5kuCLDv/WnPmRoJjeOnnyvJNjR7JLN4TJUXpAYmHrZkUjZfYGfZnMUFdAvnZyPS CPyI6a6Lf+Ew9Dd+/cYy2i2eRDAwbO4H3tI0/NL/QPZL9GZGBlSm8jIKYyYwa5vR 3ItHuuG51WLQoqD0ZwV4KWMabwTW+MZMo5qxN7SN5ShLHZ4swrhovO0C7jE= -----END CERTIFICATE----- Secure Global CA ================ -----BEGIN CERTIFICATE----- 
MIIDvDCCAqSgAwIBAgIQB1YipOjUiolN9BPI8PjqpTANBgkqhkiG9w0BAQUFADBK MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x GTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwHhcNMDYxMTA3MTk0MjI4WhcNMjkx MjMxMTk1MjA2WjBKMQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3Qg Q29ycG9yYXRpb24xGTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvNS7YrGxVaQZx5RNoJLNP2MwhR/jxYDiJ iQPpvepeRlMJ3Fz1Wuj3RSoC6zFh1ykzTM7HfAo3fg+6MpjhHZevj8fcyTiW89sa /FHtaMbQbqR8JNGuQsiWUGMu4P51/pinX0kuleM5M2SOHqRfkNJnPLLZ/kG5VacJ jnIFHovdRIWCQtBJwB1g8NEXLJXr9qXBkqPFwqcIYA1gBBCWeZ4WNOaptvolRTnI HmX5k/Wq8VLcmZg9pYYaDDUz+kulBAYVHDGA76oYa8J719rO+TMg1fW9ajMtgQT7 sFzUnKPiXB3jqUJ1XnvUd+85VLrJChgbEplJL4hL/VBi0XPnj3pDAgMBAAGjgZ0w gZowEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQF MAMBAf8wHQYDVR0OBBYEFK9EBMJBfkiD2045AuzshHrmzsmkMDQGA1UdHwQtMCsw KaAnoCWGI2h0dHA6Ly9jcmwuc2VjdXJldHJ1c3QuY29tL1NHQ0EuY3JsMBAGCSsG AQQBgjcVAQQDAgEAMA0GCSqGSIb3DQEBBQUAA4IBAQBjGghAfaReUw132HquHw0L URYD7xh8yOOvaliTFGCRsoTciE6+OYo68+aCiV0BN7OrJKQVDpI1WkpEXk5X+nXO H0jOZvQ8QCaSmGwb7iRGDBezUqXbpZGRzzfTb+cnCDpOGR86p1hcF895P4vkp9Mm I50mD1hp/Ed+stCNi5O/KU9DaXR2Z0vPB4zmAve14bRDtUstFJ/53CYNv6ZHdAbY iNE6KTCEztI5gGIbqMdXSbxqVVFnFUq+NQfk1XWYN3kwFNspnWzFacxHVaIw98xc f8LDmBxrThaA63p4ZUWiABqvDA1VZDRIuJK58bRQKfJPIx/abKwfROHdI3hRW8cW -----END CERTIFICATE----- COMODO Certification Authority ============================== -----BEGIN CERTIFICATE----- MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0 aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3 UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI 2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8 Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp +2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+ DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW /zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/ RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4 zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB ZQ== -----END CERTIFICATE----- Network Solutions Certificate Authority ======================================= -----BEGIN CERTIFICATE----- MIID5jCCAs6gAwIBAgIQV8szb8JcFuZHFhfjkDFo4DANBgkqhkiG9w0BAQUFADBi MQswCQYDVQQGEwJVUzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMu MTAwLgYDVQQDEydOZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3Jp dHkwHhcNMDYxMjAxMDAwMDAwWhcNMjkxMjMxMjM1OTU5WjBiMQswCQYDVQQGEwJV UzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMuMTAwLgYDVQQDEydO ZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDkvH6SMG3G2I4rC7xGzuAnlt7e+foS0zwz c7MEL7xxjOWftiJgPl9dzgn/ggwbmlFQGiaJ3dVhXRncEg8tCqJDXRfQNJIg6nPP 
OCwGJgl6cvf6UDL4wpPTaaIjzkGxzOTVHzbRijr4jGPiFFlp7Q3Tf2vouAPlT2rl mGNpSAW+Lv8ztumXWWn4Zxmuk2GWRBXTcrA/vGp97Eh/jcOrqnErU2lBUzS1sLnF BgrEsEX1QV1uiUV7PTsmjHTC5dLRfbIR1PtYMiKagMnc/Qzpf14Dl847ABSHJ3A4 qY5usyd2mFHgBeMhqxrVhSI8KbWaFsWAqPS7azCPL0YCorEMIuDTAgMBAAGjgZcw gZQwHQYDVR0OBBYEFCEwyfsA106Y2oeqKtCnLrFAMadMMA4GA1UdDwEB/wQEAwIB BjAPBgNVHRMBAf8EBTADAQH/MFIGA1UdHwRLMEkwR6BFoEOGQWh0dHA6Ly9jcmwu bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zQ2VydGlmaWNhdGVBdXRob3Jp dHkuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQC7rkvnt1frf6ott3NHhWrB5KUd5Oc8 6fRZZXe1eltajSU24HqXLjjAV2CDmAaDn7l2em5Q4LqILPxFzBiwmZVRDuwduIj/ h1AcgsLj4DKAv6ALR8jDMe+ZZzKATxcheQxpXN5eNK4CtSbqUN9/GGUsyfJj4akH /nxxH2szJGoeBfcFaMBqEssuXmHLrijTfsK0ZpEmXzwuJF/LWA/rKOyvEZbz3Htv wKeI8lN3s2Berq4o2jUsbzRF0ybh3uxbTydrFny9RAQYgrOJeRcQcT16ohZO9QHN pGxlaKFJdlxDydi8NmdspZS11My5vWo1ViHe2MPr+8ukYEywVaCge1ey -----END CERTIFICATE----- WellsSecure Public Root Certificate Authority ============================================= -----BEGIN CERTIFICATE----- MIIEvTCCA6WgAwIBAgIBATANBgkqhkiG9w0BAQUFADCBhTELMAkGA1UEBhMCVVMx IDAeBgNVBAoMF1dlbGxzIEZhcmdvIFdlbGxzU2VjdXJlMRwwGgYDVQQLDBNXZWxs cyBGYXJnbyBCYW5rIE5BMTYwNAYDVQQDDC1XZWxsc1NlY3VyZSBQdWJsaWMgUm9v dCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDcxMjEzMTcwNzU0WhcNMjIxMjE0 MDAwNzU0WjCBhTELMAkGA1UEBhMCVVMxIDAeBgNVBAoMF1dlbGxzIEZhcmdvIFdl bGxzU2VjdXJlMRwwGgYDVQQLDBNXZWxscyBGYXJnbyBCYW5rIE5BMTYwNAYDVQQD DC1XZWxsc1NlY3VyZSBQdWJsaWMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkw ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDub7S9eeKPCCGeOARBJe+r WxxTkqxtnt3CxC5FlAM1iGd0V+PfjLindo8796jE2yljDpFoNoqXjopxaAkH5OjU Dk/41itMpBb570OYj7OeUt9tkTmPOL13i0Nj67eT/DBMHAGTthP796EfvyXhdDcs HqRePGj4S78NuR4uNuip5Kf4D8uCdXw1LSLWwr8L87T8bJVhHlfXBIEyg1J55oNj z7fLY4sR4r1e6/aN7ZVyKLSsEmLpSjPmgzKuBXWVvYSV2ypcm44uDLiBK0HmOFaf SZtsdvqKXfcBeYF8wYNABf5x/Qw/zE5gCQ5lRxAvAcAFP4/4s0HvWkJ+We/Slwxl AgMBAAGjggE0MIIBMDAPBgNVHRMBAf8EBTADAQH/MDkGA1UdHwQyMDAwLqAsoCqG KGh0dHA6Ly9jcmwucGtpLndlbGxzZmFyZ28uY29tL3dzcHJjYS5jcmwwDgYDVR0P AQH/BAQDAgHGMB0GA1UdDgQWBBQmlRkQ2eihl5H/3BnZtQQ+0nMKajCBsgYDVR0j BIGqMIGngBQmlRkQ2eihl5H/3BnZtQQ+0nMKaqGBi6SBiDCBhTELMAkGA1UEBhMC VVMxIDAeBgNVBAoMF1dlbGxzIEZhcmdvIFdlbGxzU2VjdXJlMRwwGgYDVQQLDBNX ZWxscyBGYXJnbyBCYW5rIE5BMTYwNAYDVQQDDC1XZWxsc1NlY3VyZSBQdWJsaWMg Um9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHmCAQEwDQYJKoZIhvcNAQEFBQADggEB ALkVsUSRzCPIK0134/iaeycNzXK7mQDKfGYZUMbVmO2rvwNa5U3lHshPcZeG1eMd /ZDJPHV3V3p9+N701NX3leZ0bh08rnyd2wIDBSxxSyU+B+NemvVmFymIGjifz6pB A4SXa5M4esowRBskRDPQ5NHcKDj0E0M1NSljqHyita04pO2t/caaH/+Xc/77szWn k4bGdpEA5qxRFsQnMlzbc9qlk1eOPm01JghZ1edE13YgY+esE2fDbbFwRnzVlhE9 iW9dqKHrjQrawx0zbKPqZxmamX9LPYNRKh3KL4YMon4QLSvUFpULB6ouFJJJtylv 2G0xffX8oRAHh84vWdw+WNs= -----END CERTIFICATE----- COMODO ECC Certification Authority ================================== -----BEGIN CERTIFICATE----- MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/ BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm 
fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY= -----END CERTIFICATE----- IGC/A ===== -----BEGIN CERTIFICATE----- MIIEAjCCAuqgAwIBAgIFORFFEJQwDQYJKoZIhvcNAQEFBQAwgYUxCzAJBgNVBAYT AkZSMQ8wDQYDVQQIEwZGcmFuY2UxDjAMBgNVBAcTBVBhcmlzMRAwDgYDVQQKEwdQ TS9TR0ROMQ4wDAYDVQQLEwVEQ1NTSTEOMAwGA1UEAxMFSUdDL0ExIzAhBgkqhkiG 9w0BCQEWFGlnY2FAc2dkbi5wbS5nb3V2LmZyMB4XDTAyMTIxMzE0MjkyM1oXDTIw MTAxNzE0MjkyMlowgYUxCzAJBgNVBAYTAkZSMQ8wDQYDVQQIEwZGcmFuY2UxDjAM BgNVBAcTBVBhcmlzMRAwDgYDVQQKEwdQTS9TR0ROMQ4wDAYDVQQLEwVEQ1NTSTEO MAwGA1UEAxMFSUdDL0ExIzAhBgkqhkiG9w0BCQEWFGlnY2FAc2dkbi5wbS5nb3V2 LmZyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsh/R0GLFMzvABIaI s9z4iPf930Pfeo2aSVz2TqrMHLmh6yeJ8kbpO0px1R2OLc/mratjUMdUC24SyZA2 xtgv2pGqaMVy/hcKshd+ebUyiHDKcMCWSo7kVc0dJ5S/znIq7Fz5cyD+vfcuiWe4 u0dzEvfRNWk68gq5rv9GQkaiv6GFGvm/5P9JhfejcIYyHF2fYPepraX/z9E0+X1b F8bc1g4oa8Ld8fUzaJ1O/Id8NhLWo4DoQw1VYZTqZDdH6nfK0LJYBcNdfrGoRpAx Vs5wKpayMLh35nnAvSk7/ZR3TL0gzUEl4C7HG7vupARB0l2tEmqKm0f7yd1GQOGd PDPQtQIDAQABo3cwdTAPBgNVHRMBAf8EBTADAQH/MAsGA1UdDwQEAwIBRjAVBgNV HSAEDjAMMAoGCCqBegF5AQEBMB0GA1UdDgQWBBSjBS8YYFDCiQrdKyFP/45OqDAx NjAfBgNVHSMEGDAWgBSjBS8YYFDCiQrdKyFP/45OqDAxNjANBgkqhkiG9w0BAQUF AAOCAQEABdwm2Pp3FURo/C9mOnTgXeQp/wYHE4RKq89toB9RlPhJy3Q2FLwV3duJ L92PoF189RLrn544pEfMs5bZvpwlqwN+Mw+VgQ39FuCIvjfwbF3QMZsyK10XZZOY YLxuj7GoPB7ZHPOpJkL5ZB3C55L29B5aqhlSXa/oovdgoPaN8In1buAKBQGVyYsg Crpa/JosPL3Dt8ldeCUFP1YUmwza+zpI/pdpXsoQhvdOlgQITeywvl3cO45Pwf2a NjSaTFR+FwNIlQgRHAdvhQh+XU3Endv7rs6y0bO4g2wdsrN58dhwmX7wEwLOXt1R 0982gaEbeC9xs/FZTEYYKKuF0mBWWg== -----END CERTIFICATE----- Security Communication EV RootCA1 ================================= -----BEGIN CERTIFICATE----- MIIDfTCCAmWgAwIBAgIBADANBgkqhkiG9w0BAQUFADBgMQswCQYDVQQGEwJKUDEl MCMGA1UEChMcU0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEqMCgGA1UECxMh U2VjdXJpdHkgQ29tbXVuaWNhdGlvbiBFViBSb290Q0ExMB4XDTA3MDYwNjAyMTIz MloXDTM3MDYwNjAyMTIzMlowYDELMAkGA1UEBhMCSlAxJTAjBgNVBAoTHFNFQ09N IFRydXN0IFN5c3RlbXMgQ08uLExURC4xKjAoBgNVBAsTIVNlY3VyaXR5IENvbW11 bmljYXRpb24gRVYgUm9vdENBMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC ggEBALx/7FebJOD+nLpCeamIivqA4PUHKUPqjgo0No0c+qe1OXj/l3X3L+SqawSE RMqm4miO/VVQYg+kcQ7OBzgtQoVQrTyWb4vVog7P3kmJPdZkLjjlHmy1V4qe70gO zXppFodEtZDkBp2uoQSXWHnvIEqCa4wiv+wfD+mEce3xDuS4GBPMVjZd0ZoeUWs5 bmB2iDQL87PRsJ3KYeJkHcFGB7hj3R4zZbOOCVVSPbW9/wfrrWFVGCypaZhKqkDF MxRldAD5kd6vA0jFQFTcD4SQaCDFkpbcLuUCRarAX1T4bepJz11sS6/vmsJWXMY1 VkJqMF/Cq/biPT+zyRGPMUzXn0kCAwEAAaNCMEAwHQYDVR0OBBYEFDVK9U2vP9eC OKyrcWUXdYydVZPmMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0G CSqGSIb3DQEBBQUAA4IBAQCoh+ns+EBnXcPBZsdAS5f8hxOQWsTvoMpfi7ent/HW tWS3irO4G8za+6xmiEHO6Pzk2x6Ipu0nUBsCMCRGef4Eh3CXQHPRwMFXGZpppSeZ q51ihPZRwSzJIxXYKLerJRO1RuGGAv8mjMSIkh1W/hln8lXkgKNrnKt34VFxDSDb EJrbvXZ5B3eZKK2aXtqxT0QsNY6llsf9g/BYxnnWmHyojf6GPgcWkuF75x3sM3Z+ Qi5KhfmRiWiEA4Glm5q+4zfFVKtWOxgtQaQM+ELbmaDgcm+7XeEWT1MKZPlO9L9O VL14bIjqv5wTJMJwaaJ/D8g8rQjJsJhAoyrniIPtd490 -----END CERTIFICATE----- OISTE WISeKey Global Root GA CA =============================== -----BEGIN CERTIFICATE----- MIID8TCCAtmgAwIBAgIQQT1yx/RrH4FDffHSKFTfmjANBgkqhkiG9w0BAQUFADCB ijELMAkGA1UEBhMCQ0gxEDAOBgNVBAoTB1dJU2VLZXkxGzAZBgNVBAsTEkNvcHly aWdodCAoYykgMjAwNTEiMCAGA1UECxMZT0lTVEUgRm91bmRhdGlvbiBFbmRvcnNl ZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9iYWwgUm9vdCBHQSBDQTAeFw0w NTEyMTExNjAzNDRaFw0zNzEyMTExNjA5NTFaMIGKMQswCQYDVQQGEwJDSDEQMA4G A1UEChMHV0lTZUtleTEbMBkGA1UECxMSQ29weXJpZ2h0IChjKSAyMDA1MSIwIAYD VQQLExlPSVNURSBGb3VuZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBX SVNlS2V5IEdsb2JhbCBSb290IEdBIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A 
MIIBCgKCAQEAy0+zAJs9Nt350UlqaxBJH+zYK7LG+DKBKUOVTJoZIyEVRd7jyBxR VVuuk+g3/ytr6dTqvirdqFEr12bDYVxgAsj1znJ7O7jyTmUIms2kahnBAbtzptf2 w93NvKSLtZlhuAGio9RN1AU9ka34tAhxZK9w8RxrfvbDd50kc3vkDIzh2TbhmYsF mQvtRTEJysIA2/dyoJaqlYfQjse2YXMNdmaM3Bu0Y6Kff5MTMPGhJ9vZ/yxViJGg 4E8HsChWjBgbl0SOid3gF27nKu+POQoxhILYQBRJLnpB5Kf+42TMwVlxSywhp1t9 4B3RLoGbw9ho972WG6xwsRYUC9tguSYBBQIDAQABo1EwTzALBgNVHQ8EBAMCAYYw DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUswN+rja8sHnR3JQmthG+IbJphpQw EAYJKwYBBAGCNxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBAEuh/wuHbrP5wUOx SPMowB0uyQlB+pQAHKSkq0lPjz0e701vvbyk9vImMMkQyh2I+3QZH4VFvbBsUfk2 ftv1TDI6QU9bR8/oCy22xBmddMVHxjtqD6wU2zz0c5ypBd8A3HR4+vg1YFkCExh8 vPtNsCBtQ7tgMHpnM1zFmdH4LTlSc/uMqpclXHLZCB6rTjzjgTGfA6b7wP4piFXa hNVQA7bihKOmNqoROgHhGEvWRGizPflTdISzRpFGlgC3gCy24eMQ4tui5yiPAZZi Fj4A4xylNoEYokxSdsARo27mHbrjWr42U8U+dY+GaSlYU7Wcu2+fXMUY7N0v4ZjJ /L7fCg0= -----END CERTIFICATE----- S-TRUST Authentication and Encryption Root CA 2005 PN ===================================================== -----BEGIN CERTIFICATE----- MIIEezCCA2OgAwIBAgIQNxkY5lNUfBq1uMtZWts1tzANBgkqhkiG9w0BAQUFADCB rjELMAkGA1UEBhMCREUxIDAeBgNVBAgTF0JhZGVuLVd1ZXJ0dGVtYmVyZyAoQlcp MRIwEAYDVQQHEwlTdHV0dGdhcnQxKTAnBgNVBAoTIERldXRzY2hlciBTcGFya2Fz c2VuIFZlcmxhZyBHbWJIMT4wPAYDVQQDEzVTLVRSVVNUIEF1dGhlbnRpY2F0aW9u IGFuZCBFbmNyeXB0aW9uIFJvb3QgQ0EgMjAwNTpQTjAeFw0wNTA2MjIwMDAwMDBa Fw0zMDA2MjEyMzU5NTlaMIGuMQswCQYDVQQGEwJERTEgMB4GA1UECBMXQmFkZW4t V3VlcnR0ZW1iZXJnIChCVykxEjAQBgNVBAcTCVN0dXR0Z2FydDEpMCcGA1UEChMg RGV1dHNjaGVyIFNwYXJrYXNzZW4gVmVybGFnIEdtYkgxPjA8BgNVBAMTNVMtVFJV U1QgQXV0aGVudGljYXRpb24gYW5kIEVuY3J5cHRpb24gUm9vdCBDQSAyMDA1OlBO MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2bVKwdMz6tNGs9HiTNL1 toPQb9UY6ZOvJ44TzbUlNlA0EmQpoVXhOmCTnijJ4/Ob4QSwI7+Vio5bG0F/WsPo TUzVJBY+h0jUJ67m91MduwwA7z5hca2/OnpYH5Q9XIHV1W/fuJvS9eXLg3KSwlOy ggLrra1fFi2SU3bxibYs9cEv4KdKb6AwajLrmnQDaHgTncovmwsdvs91DSaXm8f1 XgqfeN+zvOyauu9VjxuapgdjKRdZYgkqeQd3peDRF2npW932kKvimAoA0SVtnteF hy+S8dF2g08LOlk3KC8zpxdQ1iALCvQm+Z845y2kuJuJja2tyWp9iRe79n+Ag3rm 7QIDAQABo4GSMIGPMBIGA1UdEwEB/wQIMAYBAf8CAQAwDgYDVR0PAQH/BAQDAgEG MCkGA1UdEQQiMCCkHjAcMRowGAYDVQQDExFTVFJvbmxpbmUxLTIwNDgtNTAdBgNV HQ4EFgQUD8oeXHngovMpttKFswtKtWXsa1IwHwYDVR0jBBgwFoAUD8oeXHngovMp ttKFswtKtWXsa1IwDQYJKoZIhvcNAQEFBQADggEBAK8B8O0ZPCjoTVy7pWMciDMD pwCHpB8gq9Yc4wYfl35UvbfRssnV2oDsF9eK9XvCAPbpEW+EoFolMeKJ+aQAPzFo LtU96G7m1R08P7K9n3frndOMusDXtk3sU5wPBG7qNWdX4wple5A64U8+wwCSersF iXOMy6ZNwPv2AtawB6MDwidAnwzkhYItr5pCHdDHjfhA7p0GVxzZotiAFP7hYy0y h9WUUpY6RsZxlj33mA6ykaqP2vROJAA5VeitF7nTNCtKqUDMFypVZUF0Qn71wK/I k63yGFs9iQzbRzkk+OBM8h+wPQrKBU6JIRrjKpms/H+h8Q8bHz2eBIPdltkdOpQ= -----END CERTIFICATE----- Microsec e-Szigno Root CA ========================= -----BEGIN CERTIFICATE----- MIIHqDCCBpCgAwIBAgIRAMy4579OKRr9otxmpRwsDxEwDQYJKoZIhvcNAQEFBQAw cjELMAkGA1UEBhMCSFUxETAPBgNVBAcTCEJ1ZGFwZXN0MRYwFAYDVQQKEw1NaWNy b3NlYyBMdGQuMRQwEgYDVQQLEwtlLVN6aWdubyBDQTEiMCAGA1UEAxMZTWljcm9z ZWMgZS1Temlnbm8gUm9vdCBDQTAeFw0wNTA0MDYxMjI4NDRaFw0xNzA0MDYxMjI4 NDRaMHIxCzAJBgNVBAYTAkhVMREwDwYDVQQHEwhCdWRhcGVzdDEWMBQGA1UEChMN TWljcm9zZWMgTHRkLjEUMBIGA1UECxMLZS1Temlnbm8gQ0ExIjAgBgNVBAMTGU1p Y3Jvc2VjIGUtU3ppZ25vIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQDtyADVgXvNOABHzNuEwSFpLHSQDCHZU4ftPkNEU6+r+ICbPHiN1I2u uO/TEdyB5s87lozWbxXGd36hL+BfkrYn13aaHUM86tnsL+4582pnS4uCzyL4ZVX+ LMsvfUh6PXX5qqAnu3jCBspRwn5mS6/NoqdNAoI/gqyFxuEPkEeZlApxcpMqyabA vjxWTHOSJ/FrtfX9/DAFYJLG65Z+AZHCabEeHXtTRbjcQR/Ji3HWVBTji1R4P770 Yjtb9aPs1ZJ04nQw7wHb4dSrmZsqa/i9phyGI0Jf7Enemotb9HI6QMVJPqW+jqpx 62z69Rrkav17fVVA71hu5tnVvCSrwe+3AgMBAAGjggQ3MIIEMzBnBggrBgEFBQcB 
AQRbMFkwKAYIKwYBBQUHMAGGHGh0dHBzOi8vcmNhLmUtc3ppZ25vLmh1L29jc3Aw LQYIKwYBBQUHMAKGIWh0dHA6Ly93d3cuZS1zemlnbm8uaHUvUm9vdENBLmNydDAP BgNVHRMBAf8EBTADAQH/MIIBcwYDVR0gBIIBajCCAWYwggFiBgwrBgEEAYGoGAIB AQEwggFQMCgGCCsGAQUFBwIBFhxodHRwOi8vd3d3LmUtc3ppZ25vLmh1L1NaU1ov MIIBIgYIKwYBBQUHAgIwggEUHoIBEABBACAAdABhAG4A+gBzAO0AdAB2AOEAbgB5 ACAA6QByAHQAZQBsAG0AZQB6AOkAcwDpAGgAZQB6ACAA6QBzACAAZQBsAGYAbwBn AGEAZADhAHMA4QBoAG8AegAgAGEAIABTAHoAbwBsAGcA4QBsAHQAYQB0APMAIABT AHoAbwBsAGcA4QBsAHQAYQB0AOEAcwBpACAAUwB6AGEAYgDhAGwAeQB6AGEAdABh ACAAcwB6AGUAcgBpAG4AdAAgAGsAZQBsAGwAIABlAGwAagDhAHIAbgBpADoAIABo AHQAdABwADoALwAvAHcAdwB3AC4AZQAtAHMAegBpAGcAbgBvAC4AaAB1AC8AUwBa AFMAWgAvMIHIBgNVHR8EgcAwgb0wgbqggbeggbSGIWh0dHA6Ly93d3cuZS1zemln bm8uaHUvUm9vdENBLmNybIaBjmxkYXA6Ly9sZGFwLmUtc3ppZ25vLmh1L0NOPU1p Y3Jvc2VjJTIwZS1Temlnbm8lMjBSb290JTIwQ0EsT1U9ZS1Temlnbm8lMjBDQSxP PU1pY3Jvc2VjJTIwTHRkLixMPUJ1ZGFwZXN0LEM9SFU/Y2VydGlmaWNhdGVSZXZv Y2F0aW9uTGlzdDtiaW5hcnkwDgYDVR0PAQH/BAQDAgEGMIGWBgNVHREEgY4wgYuB EGluZm9AZS1zemlnbm8uaHWkdzB1MSMwIQYDVQQDDBpNaWNyb3NlYyBlLVN6aWdu w7MgUm9vdCBDQTEWMBQGA1UECwwNZS1TemlnbsOzIEhTWjEWMBQGA1UEChMNTWlj cm9zZWMgS2Z0LjERMA8GA1UEBxMIQnVkYXBlc3QxCzAJBgNVBAYTAkhVMIGsBgNV HSMEgaQwgaGAFMegSXUWYYTbMUuE0vE3QJDvTtz3oXakdDByMQswCQYDVQQGEwJI VTERMA8GA1UEBxMIQnVkYXBlc3QxFjAUBgNVBAoTDU1pY3Jvc2VjIEx0ZC4xFDAS BgNVBAsTC2UtU3ppZ25vIENBMSIwIAYDVQQDExlNaWNyb3NlYyBlLVN6aWdubyBS b290IENBghEAzLjnv04pGv2i3GalHCwPETAdBgNVHQ4EFgQUx6BJdRZhhNsxS4TS 8TdAkO9O3PcwDQYJKoZIhvcNAQEFBQADggEBANMTnGZjWS7KXHAM/IO8VbH0jgds ZifOwTsgqRy7RlRw7lrMoHfqaEQn6/Ip3Xep1fvj1KcExJW4C+FEaGAHQzAxQmHl 7tnlJNUb3+FKG6qfx1/4ehHqE5MAyopYse7tDk2016g2JnzgOsHVV4Lxdbb9iV/a 86g4nzUGCM4ilb7N1fy+W955a9x6qWVmvrElWl/tftOsRm1M9DKHtCAE4Gx4sHfR hUZLphK3dehKyVZs15KrnfVJONJPU+NVkBHbmJbGSfI+9J8b4PeI3CVimUTYc78/ MPMMNz7UwiiAc7EBt51alhQBS6kRnSlqLtBdgcDPsiBDxwPgN05dCtxZICU= -----END CERTIFICATE----- Certigna ======== -----BEGIN CERTIFICATE----- MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV BAYTAkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hMB4X DTA3MDYyOTE1MTMwNVoXDTI3MDYyOTE1MTMwNVowNDELMAkGA1UEBhMCRlIxEjAQ BgNVBAoMCURoaW15b3RpczERMA8GA1UEAwwIQ2VydGlnbmEwggEiMA0GCSqGSIb3 DQEBAQUAA4IBDwAwggEKAoIBAQDIaPHJ1tazNHUmgh7stL7qXOEm7RFHYeGifBZ4 QCHkYJ5ayGPhxLGWkv8YbWkj4Sti993iNi+RB7lIzw7sebYs5zRLcAglozyHGxny gQcPOJAZ0xH+hrTy0V4eHpbNgGzOOzGTtvKg0KmVEn2lmsxryIRWijOp5yIVUxbw zBfsV1/pogqYCd7jX5xv3EjjhQsVWqa6n6xI4wmy9/Qy3l40vhx4XUJbzg4ij02Q 130yGLMLLGq/jj8UEYkgDncUtT2UCIf3JR7VsmAA7G8qKCVuKj4YYxclPz5EIBb2 JsglrgVKtOdjLPOMFlN+XPsRGgjBRmKfIrjxwo1p3Po6WAbfAgMBAAGjgbwwgbkw DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUGu3+QTmQtCRZvgHyUtVF9lo53BEw ZAYDVR0jBF0wW4AUGu3+QTmQtCRZvgHyUtVF9lo53BGhOKQ2MDQxCzAJBgNVBAYT AkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hggkA/tzj AQ/JSP8wDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzANBgkqhkiG 9w0BAQUFAAOCAQEAhQMeknH2Qq/ho2Ge6/PAD/Kl1NqV5ta+aDY9fm4fTIrv0Q8h bV6lUmPOEvjvKtpv6zf+EwLHyzs+ImvaYS5/1HI93TDhHkxAGYwP15zRgzB7mFnc fca5DClMoTOi62c6ZYTTluLtdkVwj7Ur3vkj1kluPBS1xp81HlDQwY9qcEQCYsuu HWhBp6pX6FOqB9IG9tUUBguRA3UsbHK1YZWaDYu5Def131TN3ubY1gkIl2PlwS6w t0QmwCbAr1UwnjvVNioZBPRcHv/PLLf/0P2HQBHVESO7SMAhqaQoLf0V+LBOK/Qw WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg== -----END CERTIFICATE----- AC Ra\xC3\xADz Certic\xC3\xA1mara S.A. 
====================================== -----BEGIN CERTIFICATE----- MIIGZjCCBE6gAwIBAgIPB35Sk3vgFeNX8GmMy+wMMA0GCSqGSIb3DQEBBQUAMHsx CzAJBgNVBAYTAkNPMUcwRQYDVQQKDD5Tb2NpZWRhZCBDYW1lcmFsIGRlIENlcnRp ZmljYWNpw7NuIERpZ2l0YWwgLSBDZXJ0aWPDoW1hcmEgUy5BLjEjMCEGA1UEAwwa QUMgUmHDrXogQ2VydGljw6FtYXJhIFMuQS4wHhcNMDYxMTI3MjA0NjI5WhcNMzAw NDAyMjE0MjAyWjB7MQswCQYDVQQGEwJDTzFHMEUGA1UECgw+U29jaWVkYWQgQ2Ft ZXJhbCBkZSBDZXJ0aWZpY2FjacOzbiBEaWdpdGFsIC0gQ2VydGljw6FtYXJhIFMu QS4xIzAhBgNVBAMMGkFDIFJhw616IENlcnRpY8OhbWFyYSBTLkEuMIICIjANBgkq hkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq2uJo1PMSCMI+8PPUZYILrgIem08kBeG qentLhM0R7LQcNzJPNCNyu5LF6vQhbCnIwTLqKL85XXbQMpiiY9QngE9JlsYhBzL fDe3fezTf3MZsGqy2IiKLUV0qPezuMDU2s0iiXRNWhU5cxh0T7XrmafBHoi0wpOQ Y5fzp6cSsgkiBzPZkc0OnB8OIMfuuzONj8LSWKdf/WU34ojC2I+GdV75LaeHM/J4 Ny+LvB2GNzmxlPLYvEqcgxhaBvzz1NS6jBUJJfD5to0EfhcSM2tXSExP2yYe68yQ 54v5aHxwD6Mq0Do43zeX4lvegGHTgNiRg0JaTASJaBE8rF9ogEHMYELODVoqDA+b MMCm8Ibbq0nXl21Ii/kDwFJnmxL3wvIumGVC2daa49AZMQyth9VXAnow6IYm+48j ilSH5L887uvDdUhfHjlvgWJsxS3EF1QZtzeNnDeRyPYL1epjb4OsOMLzP96a++Ej YfDIJss2yKHzMI+ko6Kh3VOz3vCaMh+DkXkwwakfU5tTohVTP92dsxA7SH2JD/zt A/X7JWR1DhcZDY8AFmd5ekD8LVkH2ZD6mq093ICK5lw1omdMEWux+IBkAC1vImHF rEsm5VoQgpukg3s0956JkSCXjrdCx2bD0Omk1vUgjcTDlaxECp1bczwmPS9KvqfJ pxAe+59QafMCAwEAAaOB5jCB4zAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQE AwIBBjAdBgNVHQ4EFgQU0QnQ6dfOeXRU+Tows/RtLAMDG2gwgaAGA1UdIASBmDCB lTCBkgYEVR0gADCBiTArBggrBgEFBQcCARYfaHR0cDovL3d3dy5jZXJ0aWNhbWFy YS5jb20vZHBjLzBaBggrBgEFBQcCAjBOGkxMaW1pdGFjaW9uZXMgZGUgZ2FyYW50 7WFzIGRlIGVzdGUgY2VydGlmaWNhZG8gc2UgcHVlZGVuIGVuY29udHJhciBlbiBs YSBEUEMuMA0GCSqGSIb3DQEBBQUAA4ICAQBclLW4RZFNjmEfAygPU3zmpFmps4p6 xbD/CHwso3EcIRNnoZUSQDWDg4902zNc8El2CoFS3UnUmjIz75uny3XlesuXEpBc unvFm9+7OSPI/5jOCk0iAUgHforA1SBClETvv3eiiWdIG0ADBaGJ7M9i4z0ldma/ Jre7Ir5v/zlXdLp6yQGVwZVR6Kss+LGGIOk/yzVb0hfpKv6DExdA7ohiZVvVO2Dp ezy4ydV/NgIlqmjCMRW3MGXrfx1IebHPOeJCgBbT9ZMj/EyXyVo3bHwi2ErN0o42 gzmRkBDI8ck1fj+404HGIGQatlDCIaR43NAvO2STdPCWkPHv+wlaNECW8DYSwaN0 jJN+Qd53i+yG2dIPPy3RzECiiWZIHiCznCNZc6lEc7wkeZBWN7PGKX6jD/EpOe9+ XCgycDWs2rjIdWb8m0w5R44bb5tNAlQiM+9hup4phO9OSzNHdpdqy35f/RWmnkJD W2ZaiogN9xa5P1FlK2Zqi9E4UqLWRhH6/JocdJ6PlwsCT2TG9WjTSy3/pDceiz+/ RL5hRqGEPQgnTIEgd4kI6mdAXmwIUV80WoyWaM3X94nCHNMyAK9Sy9NgWyo6R35r MDOhYil/SrnhLecUIw4OGEfhefwVVdCx/CVxY3UzHCMrr1zZ7Ud3YA47Dx7SwNxk BYn8eNZcLCZDqQ== -----END CERTIFICATE----- TC TrustCenter Class 2 CA II ============================ -----BEGIN CERTIFICATE----- MIIEqjCCA5KgAwIBAgIOLmoAAQACH9dSISwRXDswDQYJKoZIhvcNAQEFBQAwdjEL MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxIjAgBgNV BAsTGVRDIFRydXN0Q2VudGVyIENsYXNzIDIgQ0ExJTAjBgNVBAMTHFRDIFRydXN0 Q2VudGVyIENsYXNzIDIgQ0EgSUkwHhcNMDYwMTEyMTQzODQzWhcNMjUxMjMxMjI1 OTU5WjB2MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIgR21i SDEiMCAGA1UECxMZVEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMiBDQTElMCMGA1UEAxMc VEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMiBDQSBJSTCCASIwDQYJKoZIhvcNAQEBBQAD ggEPADCCAQoCggEBAKuAh5uO8MN8h9foJIIRszzdQ2Lu+MNF2ujhoF/RKrLqk2jf tMjWQ+nEdVl//OEd+DFwIxuInie5e/060smp6RQvkL4DUsFJzfb95AhmC1eKokKg uNV/aVyQMrKXDcpK3EY+AlWJU+MaWss2xgdW94zPEfRMuzBwBJWl9jmM/XOBCH2J XjIeIqkiRUuwZi4wzJ9l/fzLganx4Duvo4bRierERXlQXa7pIXSSTYtZgo+U4+lK 8edJsBTj9WLL1XK9H7nSn6DNqPoByNkN39r8R52zyFTfSUrxIan+GE7uSNQZu+99 5OKdy1u2bv/jzVrndIIFuoAlOMvkaZ6vQaoahPUCAwEAAaOCATQwggEwMA8GA1Ud EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTjq1RMgKHbVkO3 kUrL84J6E1wIqzCB7QYDVR0fBIHlMIHiMIHfoIHcoIHZhjVodHRwOi8vd3d3LnRy dXN0Y2VudGVyLmRlL2NybC92Mi90Y19jbGFzc18yX2NhX0lJLmNybIaBn2xkYXA6 Ly93d3cudHJ1c3RjZW50ZXIuZGUvQ049VEMlMjBUcnVzdENlbnRlciUyMENsYXNz 
JTIwMiUyMENBJTIwSUksTz1UQyUyMFRydXN0Q2VudGVyJTIwR21iSCxPVT1yb290 Y2VydHMsREM9dHJ1c3RjZW50ZXIsREM9ZGU/Y2VydGlmaWNhdGVSZXZvY2F0aW9u TGlzdD9iYXNlPzANBgkqhkiG9w0BAQUFAAOCAQEAjNfffu4bgBCzg/XbEeprS6iS GNn3Bzn1LL4GdXpoUxUc6krtXvwjshOg0wn/9vYua0Fxec3ibf2uWWuFHbhOIprt ZjluS5TmVfwLG4t3wVMTZonZKNaL80VKY7f9ewthXbhtvsPcW3nS7Yblok2+XnR8 au0WOB9/WIFaGusyiC2y8zl3gK9etmF1KdsjTYjKUCjLhdLTEKJZbtOTVAB6okaV hgWcqRmY5TFyDADiZ9lA4CQze28suVyrZZ0srHbqNZn1l7kPJOzHdiEoZa5X6AeI dUpWoNIFOqTmjZKILPPy4cHGYdtBxceb9w4aUUXCYWvcZCcXjFq32nQozZfkvQ== -----END CERTIFICATE----- TC TrustCenter Class 3 CA II ============================ -----BEGIN CERTIFICATE----- MIIEqjCCA5KgAwIBAgIOSkcAAQAC5aBd1j8AUb8wDQYJKoZIhvcNAQEFBQAwdjEL MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxIjAgBgNV BAsTGVRDIFRydXN0Q2VudGVyIENsYXNzIDMgQ0ExJTAjBgNVBAMTHFRDIFRydXN0 Q2VudGVyIENsYXNzIDMgQ0EgSUkwHhcNMDYwMTEyMTQ0MTU3WhcNMjUxMjMxMjI1 OTU5WjB2MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIgR21i SDEiMCAGA1UECxMZVEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMyBDQTElMCMGA1UEAxMc VEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMyBDQSBJSTCCASIwDQYJKoZIhvcNAQEBBQAD ggEPADCCAQoCggEBALTgu1G7OVyLBMVMeRwjhjEQY0NVJz/GRcekPewJDRoeIMJW Ht4bNwcwIi9v8Qbxq63WyKthoy9DxLCyLfzDlml7forkzMA5EpBCYMnMNWju2l+Q Vl/NHE1bWEnrDgFPZPosPIlY2C8u4rBo6SI7dYnWRBpl8huXJh0obazovVkdKyT2 1oQDZogkAHhg8fir/gKya/si+zXmFtGt9i4S5Po1auUZuV3bOx4a+9P/FRQI2Alq ukWdFHlgfa9Aigdzs5OW03Q0jTo3Kd5c7PXuLjHCINy+8U9/I1LZW+Jk2ZyqBwi1 Rb3R0DHBq1SfqdLDYmAD8bs5SpJKPQq5ncWg/jcCAwEAAaOCATQwggEwMA8GA1Ud EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTUovyfs8PYA9NX XAek0CSnwPIA1DCB7QYDVR0fBIHlMIHiMIHfoIHcoIHZhjVodHRwOi8vd3d3LnRy dXN0Y2VudGVyLmRlL2NybC92Mi90Y19jbGFzc18zX2NhX0lJLmNybIaBn2xkYXA6 Ly93d3cudHJ1c3RjZW50ZXIuZGUvQ049VEMlMjBUcnVzdENlbnRlciUyMENsYXNz JTIwMyUyMENBJTIwSUksTz1UQyUyMFRydXN0Q2VudGVyJTIwR21iSCxPVT1yb290 Y2VydHMsREM9dHJ1c3RjZW50ZXIsREM9ZGU/Y2VydGlmaWNhdGVSZXZvY2F0aW9u TGlzdD9iYXNlPzANBgkqhkiG9w0BAQUFAAOCAQEANmDkcPcGIEPZIxpC8vijsrlN irTzwppVMXzEO2eatN9NDoqTSheLG43KieHPOh6sHfGcMrSOWXaiQYUlN6AT0PV8 TtXqluJucsG7Kv5sbviRmEb8yRtXW+rIGjs/sFGYPAfaLFkB2otE6OF0/ado3VS6 g0bsyEa1+K+XwDsJHI/OcpY9M1ZwvJbL2NV9IJqDnxrcOfHFcqMRA/07QlIp2+gB 95tejNaNhk4Z+rwcvsUhpYeeeC422wlxo3I0+GzjBgnyXlal092Y+tTmBvTwtiBj S+opvaqCZh77gaqnN60TGOaSw4HBM7uIHqHn4rS9MWwOUT1v+5ZWgOI2F9Hc5A== -----END CERTIFICATE----- TC TrustCenter Universal CA I ============================= -----BEGIN CERTIFICATE----- MIID3TCCAsWgAwIBAgIOHaIAAQAC7LdggHiNtgYwDQYJKoZIhvcNAQEFBQAweTEL MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxJDAiBgNV BAsTG1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQTEmMCQGA1UEAxMdVEMgVHJ1 c3RDZW50ZXIgVW5pdmVyc2FsIENBIEkwHhcNMDYwMzIyMTU1NDI4WhcNMjUxMjMx MjI1OTU5WjB5MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIg R21iSDEkMCIGA1UECxMbVEMgVHJ1c3RDZW50ZXIgVW5pdmVyc2FsIENBMSYwJAYD VQQDEx1UQyBUcnVzdENlbnRlciBVbml2ZXJzYWwgQ0EgSTCCASIwDQYJKoZIhvcN AQEBBQADggEPADCCAQoCggEBAKR3I5ZEr5D0MacQ9CaHnPM42Q9e3s9B6DGtxnSR JJZ4Hgmgm5qVSkr1YnwCqMqs+1oEdjneX/H5s7/zA1hV0qq34wQi0fiU2iIIAI3T fCZdzHd55yx4Oagmcw6iXSVphU9VDprvxrlE4Vc93x9UIuVvZaozhDrzznq+VZeu jRIPFDPiUHDDSYcTvFHe15gSWu86gzOSBnWLknwSaHtwag+1m7Z3W0hZneTvWq3z wZ7U10VOylY0Ibw+F1tvdwxIAUMpsN0/lm7mlaoMwCC2/T42J5zjXM9OgdwZu5GQ fezmlwQek8wiSdeXhrYTCjxDI3d+8NzmzSQfO4ObNDqDNOMCAwEAAaNjMGEwHwYD VR0jBBgwFoAUkqR1LKSevoFE63n8isWVpesQdXMwDwYDVR0TAQH/BAUwAwEB/zAO BgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFJKkdSyknr6BROt5/IrFlaXrEHVzMA0G CSqGSIb3DQEBBQUAA4IBAQAo0uCG1eb4e/CX3CJrO5UUVg8RMKWaTzqwOuAGy2X1 7caXJ/4l8lfmXpWMPmRgFVp/Lw0BxbFg/UU1z/CyvwbZ71q+s2IhtNerNXxTPqYn 8aEt2hojnczd7Dwtnic0XQ/CNnm8yUpiLe1r2X1BQ3y2qsrtYbE3ghUJGooWMNjs 
ydZHcnhLEEYUjl8Or+zHL6sQ17bxbuyGssLoDZJz3KL0Dzq/YSMQiZxIQG5wALPT ujdEWBF6AmqI8Dc08BnprNRlc/ZpjGSUOnmFKbAWKwyCPwacx/0QK54PLLae4xW/ 2TYcuiUaUj0a7CIMHOCkoj3w6DnPgcB77V0fb8XQC9eY -----END CERTIFICATE----- Deutsche Telekom Root CA 2 ========================== -----BEGIN CERTIFICATE----- MIIDnzCCAoegAwIBAgIBJjANBgkqhkiG9w0BAQUFADBxMQswCQYDVQQGEwJERTEc MBoGA1UEChMTRGV1dHNjaGUgVGVsZWtvbSBBRzEfMB0GA1UECxMWVC1UZWxlU2Vj IFRydXN0IENlbnRlcjEjMCEGA1UEAxMaRGV1dHNjaGUgVGVsZWtvbSBSb290IENB IDIwHhcNOTkwNzA5MTIxMTAwWhcNMTkwNzA5MjM1OTAwWjBxMQswCQYDVQQGEwJE RTEcMBoGA1UEChMTRGV1dHNjaGUgVGVsZWtvbSBBRzEfMB0GA1UECxMWVC1UZWxl U2VjIFRydXN0IENlbnRlcjEjMCEGA1UEAxMaRGV1dHNjaGUgVGVsZWtvbSBSb290 IENBIDIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCrC6M14IspFLEU ha88EOQ5bzVdSq7d6mGNlUn0b2SjGmBmpKlAIoTZ1KXleJMOaAGtuU1cOs7TuKhC QN/Po7qCWWqSG6wcmtoIKyUn+WkjR/Hg6yx6m/UTAtB+NHzCnjwAWav12gz1Mjwr rFDa1sPeg5TKqAyZMg4ISFZbavva4VhYAUlfckE8FQYBjl2tqriTtM2e66foai1S NNs671x1Udrb8zH57nGYMsRUFUQM+ZtV7a3fGAigo4aKSe5TBY8ZTNXeWHmb0moc QqvF1afPaA+W5OFhmHZhyJF81j4A4pFQh+GdCuatl9Idxjp9y7zaAzTVjlsB9WoH txa2bkp/AgMBAAGjQjBAMB0GA1UdDgQWBBQxw3kbuvVT1xfgiXotF2wKsyudMzAP BgNVHRMECDAGAQH/AgEFMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQUFAAOC AQEAlGRZrTlk5ynrE/5aw4sTV8gEJPB0d8Bg42f76Ymmg7+Wgnxu1MM9756Abrsp tJh6sTtU6zkXR34ajgv8HzFZMQSyzhfzLMdiNlXiItiJVbSYSKpk+tYcNthEeFpa IzpXl/V6ME+un2pMSyuOoAPjPuCp1NJ70rOo4nI8rZ7/gFnkm0W09juwzTkZmDLl 6iFhkOQxIY40sfcvNUqFENrnijchvllj4PKFiDFT1FQUhXB59C4Gdyd1Lx+4ivn+ xbrYNuSD7Odlt79jWvNGr4GUN9RBjNYj1h7P9WgbRGOiWrqnNVmh5XAFmw4jV5mU Cm26OWMohpLzGITY+9HPBVZkVw== -----END CERTIFICATE----- ComSign CA ========== -----BEGIN CERTIFICATE----- MIIDkzCCAnugAwIBAgIQFBOWgxRVjOp7Y+X8NId3RDANBgkqhkiG9w0BAQUFADA0 MRMwEQYDVQQDEwpDb21TaWduIENBMRAwDgYDVQQKEwdDb21TaWduMQswCQYDVQQG EwJJTDAeFw0wNDAzMjQxMTMyMThaFw0yOTAzMTkxNTAyMThaMDQxEzARBgNVBAMT CkNvbVNpZ24gQ0ExEDAOBgNVBAoTB0NvbVNpZ24xCzAJBgNVBAYTAklMMIIBIjAN BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA8ORUaSvTx49qROR+WCf4C9DklBKK 8Rs4OC8fMZwG1Cyn3gsqrhqg455qv588x26i+YtkbDqthVVRVKU4VbirgwTyP2Q2 98CNQ0NqZtH3FyrV7zb6MBBC11PN+fozc0yz6YQgitZBJzXkOPqUm7h65HkfM/sb 2CEJKHxNGGleZIp6GZPKfuzzcuc3B1hZKKxC+cX/zT/npfo4sdAMx9lSGlPWgcxC ejVb7Us6eva1jsz/D3zkYDaHL63woSV9/9JLEYhwVKZBqGdTUkJe5DSe5L6j7Kpi Xd3DTKaCQeQzC6zJMw9kglcq/QytNuEMrkvF7zuZ2SOzW120V+x0cAwqTwIDAQAB o4GgMIGdMAwGA1UdEwQFMAMBAf8wPQYDVR0fBDYwNDAyoDCgLoYsaHR0cDovL2Zl ZGlyLmNvbXNpZ24uY28uaWwvY3JsL0NvbVNpZ25DQS5jcmwwDgYDVR0PAQH/BAQD AgGGMB8GA1UdIwQYMBaAFEsBmz5WGmU2dst7l6qSBe4y5ygxMB0GA1UdDgQWBBRL AZs+VhplNnbLe5eqkgXuMucoMTANBgkqhkiG9w0BAQUFAAOCAQEA0Nmlfv4pYEWd foPPbrxHbvUanlR2QnG0PFg/LUAlQvaBnPGJEMgOqnhPOAlXsDzACPw1jvFIUY0M cXS6hMTXcpuEfDhOZAYnKuGntewImbQKDdSFc8gS4TXt8QUxHXOZDOuWyt3T5oWq 8Ir7dcHyCTxlZWTzTNity4hp8+SDtwy9F1qWF8pb/627HOkthIDYIb6FUtnUdLlp hbpN7Sgy6/lhSuTENh4Z3G+EER+V9YMoGKgzkkMn3V0TBEVPh9VGzT2ouvDzuFYk Res3x+F2T3I5GN9+dHLHcy056mDmrRGiVod7w2ia/viMcKjfZTL0pECMocJEAw6U AGegcQCCSA== -----END CERTIFICATE----- ComSign Secured CA ================== -----BEGIN CERTIFICATE----- MIIDqzCCApOgAwIBAgIRAMcoRwmzuGxFjB36JPU2TukwDQYJKoZIhvcNAQEFBQAw PDEbMBkGA1UEAxMSQ29tU2lnbiBTZWN1cmVkIENBMRAwDgYDVQQKEwdDb21TaWdu MQswCQYDVQQGEwJJTDAeFw0wNDAzMjQxMTM3MjBaFw0yOTAzMTYxNTA0NTZaMDwx GzAZBgNVBAMTEkNvbVNpZ24gU2VjdXJlZCBDQTEQMA4GA1UEChMHQ29tU2lnbjEL MAkGA1UEBhMCSUwwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDGtWhf HZQVw6QIVS3joFd67+l0Kru5fFdJGhFeTymHDEjWaueP1H5XJLkGieQcPOqs49oh gHMhCu95mGwfCP+hUH3ymBvJVG8+pSjsIQQPRbsHPaHA+iqYHU4Gk/v1iDurX8sW v+bznkqH7Rnqwp9D5PGBpX8QTz7RSmKtUxvLg/8HZaWSLWapW7ha9B20IZFKF3ue Mv5WJDmyVIRD9YTC2LxBkMyd1mja6YJQqTtoz7VdApRgFrFD2UNd3V2Hbuq7s8lr 
9gOUCXDeFhF6K+h2j0kQmHe5Y1yLM5d19guMsqtb3nQgJT/j8xH5h2iGNXHDHYwt 6+UarA9z1YJZQIDTAgMBAAGjgacwgaQwDAYDVR0TBAUwAwEB/zBEBgNVHR8EPTA7 MDmgN6A1hjNodHRwOi8vZmVkaXIuY29tc2lnbi5jby5pbC9jcmwvQ29tU2lnblNl Y3VyZWRDQS5jcmwwDgYDVR0PAQH/BAQDAgGGMB8GA1UdIwQYMBaAFMFL7XC29z58 ADsAj8c+DkWfHl3sMB0GA1UdDgQWBBTBS+1wtvc+fAA7AI/HPg5Fnx5d7DANBgkq hkiG9w0BAQUFAAOCAQEAFs/ukhNQq3sUnjO2QiBq1BW9Cav8cujvR3qQrFHBZE7p iL1DRYHjZiM/EoZNGeQFsOY3wo3aBijJD4mkU6l1P7CW+6tMM1X5eCZGbxs2mPtC dsGCuY7e+0X5YxtiOzkGynd6qDwJz2w2PQ8KRUtpFhpFfTMDZflScZAmlaxMDPWL kz/MdXSFmLr/YnpNH4n+rr2UAJm/EaXc4HnFFgt9AmEd6oX5AhVP51qJThRv4zdL hfXBPGHg/QVBspJ/wx2g0K5SZGBrGMYmnNj1ZOQ2GmKfig8+/21OGVZOIJFsnzQz OjRXUDpvgV4GxvU+fE6OK85lBi5d0ipTdF7Tbieejw== -----END CERTIFICATE----- Cybertrust Global Root ====================== -----BEGIN CERTIFICATE----- MIIDoTCCAomgAwIBAgILBAAAAAABD4WqLUgwDQYJKoZIhvcNAQEFBQAwOzEYMBYG A1UEChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2Jh bCBSb290MB4XDTA2MTIxNTA4MDAwMFoXDTIxMTIxNTA4MDAwMFowOzEYMBYGA1UE ChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2JhbCBS b290MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA+Mi8vRRQZhP/8NN5 7CPytxrHjoXxEnOmGaoQ25yiZXRadz5RfVb23CO21O1fWLE3TdVJDm71aofW0ozS J8bi/zafmGWgE07GKmSb1ZASzxQG9Dvj1Ci+6A74q05IlG2OlTEQXO2iLb3VOm2y HLtgwEZLAfVJrn5GitB0jaEMAs7u/OePuGtm839EAL9mJRQr3RAwHQeWP032a7iP t3sMpTjr3kfb1V05/Iin89cqdPHoWqI7n1C6poxFNcJQZZXcY4Lv3b93TZxiyWNz FtApD0mpSPCzqrdsxacwOUBdrsTiXSZT8M4cIwhhqJQZugRiQOwfOHB3EgZxpzAY XSUnpQIDAQABo4GlMIGiMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/ MB0GA1UdDgQWBBS2CHsNesysIEyGVjJez6tuhS1wVzA/BgNVHR8EODA2MDSgMqAw hi5odHRwOi8vd3d3Mi5wdWJsaWMtdHJ1c3QuY29tL2NybC9jdC9jdHJvb3QuY3Js MB8GA1UdIwQYMBaAFLYIew16zKwgTIZWMl7Pq26FLXBXMA0GCSqGSIb3DQEBBQUA A4IBAQBW7wojoFROlZfJ+InaRcHUowAl9B8Tq7ejhVhpwjCt2BWKLePJzYFa+HMj Wqd8BfP9IjsO0QbE2zZMcwSO5bAi5MXzLqXZI+O4Tkogp24CJJ8iYGd7ix1yCcUx XOl5n4BHPa2hCwcUPUf/A2kaDAtE52Mlp3+yybh2hO0j9n0Hq0V+09+zv+mKts2o omcrUtW3ZfA5TGOgkXmTUg9U3YO7n9GPp1Nzw8v/MOx8BLjYRB+TX3EJIrduPuoc A06dGiBh+4E37F78CkWr1+cXVdCg6mCbpvbjjFspwgZgFJ0tl0ypkxWdYcQBX0jW WL1WMRJOEcgh4LMRkWXbtKaIOM5V -----END CERTIFICATE----- ePKI Root Certification Authority ================================= -----BEGIN CERTIFICATE----- MIIFsDCCA5igAwIBAgIQFci9ZUdcr7iXAF7kBtK8nTANBgkqhkiG9w0BAQUFADBe MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0 ZC4xKjAoBgNVBAsMIWVQS0kgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe Fw0wNDEyMjAwMjMxMjdaFw0zNDEyMjAwMjMxMjdaMF4xCzAJBgNVBAYTAlRXMSMw IQYDVQQKDBpDaHVuZ2h3YSBUZWxlY29tIENvLiwgTHRkLjEqMCgGA1UECwwhZVBL SSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjANBgkqhkiG9w0BAQEF AAOCAg8AMIICCgKCAgEA4SUP7o3biDN1Z82tH306Tm2d0y8U82N0ywEhajfqhFAH SyZbCUNsIZ5qyNUD9WBpj8zwIuQf5/dqIjG3LBXy4P4AakP/h2XGtRrBp0xtInAh ijHyl3SJCRImHJ7K2RKilTza6We/CKBk49ZCt0Xvl/T29de1ShUCWH2YWEtgvM3X DZoTM1PRYfl61dd4s5oz9wCGzh1NlDivqOx4UXCKXBCDUSH3ET00hl7lSM2XgYI1 TBnsZfZrxQWh7kcT1rMhJ5QQCtkkO7q+RBNGMD+XPNjX12ruOzjjK9SXDrkb5wdJ fzcq+Xd4z1TtW0ado4AOkUPB1ltfFLqfpo0kR0BZv3I4sjZsN/+Z0V0OWQqraffA sgRFelQArr5T9rXn4fg8ozHSqf4hUmTFpmfwdQcGlBSBVcYn5AGPF8Fqcde+S/uU WH1+ETOxQvdibBjWzwloPn9s9h6PYq2lY9sJpx8iQkEeb5mKPtf5P0B6ebClAZLS nT0IFaUQAS2zMnaolQ2zepr7BxB4EW/hj8e6DyUadCrlHJhBmd8hh+iVBmoKs2pH dmX2Os+PYhcZewoozRrSgx4hxyy/vv9haLdnG7t4TY3OZ+XkwY63I2binZB1NJip NiuKmpS5nezMirH4JYlcWrYvjB9teSSnUmjDhDXiZo1jDiVN1Rmy5nk3pyKdVDEC AwEAAaNqMGgwHQYDVR0OBBYEFB4M97Zn8uGSJglFwFU5Lnc/QkqiMAwGA1UdEwQF MAMBAf8wOQYEZyoHAAQxMC8wLQIBADAJBgUrDgMCGgUAMAcGBWcqAwAABBRFsMLH ClZ87lt4DJX5GFPBphzYEDANBgkqhkiG9w0BAQUFAAOCAgEACbODU1kBPpVJufGB uvl2ICO1J2B01GqZNF5sAFPZn/KmsSQHRGoqxqWOeBLoR9lYGxMqXnmbnwoqZ6Yl 
PwZpVnPDimZI+ymBV3QGypzqKOg4ZyYr8dW1P2WT+DZdjo2NQCCHGervJ8A9tDkP JXtoUHRVnAxZfVo9QZQlUgjgRywVMRnVvwdVxrsStZf0X4OFunHB2WyBEXYKCrC/ gpf36j36+uwtqSiUO1bd0lEursC9CBWMd1I0ltabrNMdjmEPNXubrjlpC2JgQCA2 j6/7Nu4tCEoduL+bXPjqpRugc6bY+G7gMwRfaKonh+3ZwZCc7b3jajWvY9+rGNm6 5ulK6lCKD2GTHuItGeIwlDWSXQ62B68ZgI9HkFFLLk3dheLSClIKF5r8GrBQAuUB o2M3IUxExJtRmREOc5wGj1QupyheRDmHVi03vYVElOEMSyycw5KFNGHLD7ibSkNS /jQ6fbjpKdx2qcgw+BRxgMYeNkh0IkFch4LoGHGLQYlE535YW6i4jRPpp2zDR+2z Gp1iro2C6pSe3VkQw63d4k3jMdXH7OjysP6SHhYKGvzZ8/gntsm+HbRsZJB/9OTE W9c3rkIO3aQab3yIVMUWbuF6aC74Or8NpDyJO3inTmODBCEIZ43ygknQW/2xzQ+D hNQ+IIX3Sj0rnP0qCglN6oH4EZw= -----END CERTIFICATE----- T\xc3\x9c\x42\xC4\xB0TAK UEKAE K\xC3\xB6k Sertifika Hizmet Sa\xC4\x9Flay\xc4\xb1\x63\xc4\xb1s\xc4\xb1 - S\xC3\xBCr\xC3\xBCm 3 ============================================================================================================================= -----BEGIN CERTIFICATE----- MIIFFzCCA/+gAwIBAgIBETANBgkqhkiG9w0BAQUFADCCASsxCzAJBgNVBAYTAlRS MRgwFgYDVQQHDA9HZWJ6ZSAtIEtvY2FlbGkxRzBFBgNVBAoMPlTDvHJraXllIEJp bGltc2VsIHZlIFRla25vbG9qaWsgQXJhxZ90xLFybWEgS3VydW11IC0gVMOcQsSw VEFLMUgwRgYDVQQLDD9VbHVzYWwgRWxla3Ryb25payB2ZSBLcmlwdG9sb2ppIEFy YcWfdMSxcm1hIEVuc3RpdMO8c8O8IC0gVUVLQUUxIzAhBgNVBAsMGkthbXUgU2Vy dGlmaWthc3lvbiBNZXJrZXppMUowSAYDVQQDDEFUw5xCxLBUQUsgVUVLQUUgS8O2 ayBTZXJ0aWZpa2EgSGl6bWV0IFNhxJ9sYXnEsWPEsXPEsSAtIFPDvHLDvG0gMzAe Fw0wNzA4MjQxMTM3MDdaFw0xNzA4MjExMTM3MDdaMIIBKzELMAkGA1UEBhMCVFIx GDAWBgNVBAcMD0dlYnplIC0gS29jYWVsaTFHMEUGA1UECgw+VMO8cmtpeWUgQmls aW1zZWwgdmUgVGVrbm9sb2ppayBBcmHFn3TEsXJtYSBLdXJ1bXUgLSBUw5xCxLBU QUsxSDBGBgNVBAsMP1VsdXNhbCBFbGVrdHJvbmlrIHZlIEtyaXB0b2xvamkgQXJh xZ90xLFybWEgRW5zdGl0w7xzw7wgLSBVRUtBRTEjMCEGA1UECwwaS2FtdSBTZXJ0 aWZpa2FzeW9uIE1lcmtlemkxSjBIBgNVBAMMQVTDnELEsFRBSyBVRUtBRSBLw7Zr IFNlcnRpZmlrYSBIaXptZXQgU2HEn2xhecSxY8Sxc8SxIC0gU8O8csO8bSAzMIIB IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAim1L/xCIOsP2fpTo6iBkcK4h gb46ezzb8R1Sf1n68yJMlaCQvEhOEav7t7WNeoMojCZG2E6VQIdhn8WebYGHV2yK O7Rm6sxA/OOqbLLLAdsyv9Lrhc+hDVXDWzhXcLh1xnnRFDDtG1hba+818qEhTsXO fJlfbLm4IpNQp81McGq+agV/E5wrHur+R84EpW+sky58K5+eeROR6Oqeyjh1jmKw lZMq5d/pXpduIF9fhHpEORlAHLpVK/swsoHvhOPc7Jg4OQOFCKlUAwUp8MmPi+oL hmUZEdPpCSPeaJMDyTYcIW7OjGbxmTDY17PDHfiBLqi9ggtm/oLL4eAagsNAgQID AQABo0IwQDAdBgNVHQ4EFgQUvYiHyY/2pAoLquvF/pEjnatKijIwDgYDVR0PAQH/ BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEBAB18+kmP NOm3JpIWmgV050vQbTlswyb2zrgxvMTfvCr4N5EY3ATIZJkrGG2AA1nJrvhY0D7t wyOfaTyGOBye79oneNGEN3GKPEs5z35FBtYt2IpNeBLWrcLTy9LQQfMmNkqblWwM 7uXRQydmwYj3erMgbOqwaSvHIOgMA8RBBZniP+Rr+KCGgceExh/VS4ESshYhLBOh gLJeDEoTniDYYkCrkOpkSi+sDQESeUWoL4cZaMjihccwsnX5OD+ywJO0a+IDRM5n oN+J1q2MdqMTw5RhK2vZbMEHCiIHhWyFJEapvj+LeISCfiQMnf2BN+MlqO02TpUs yZyQ2uypQjyttgI= -----END CERTIFICATE----- Buypass Class 2 CA 1 ==================== -----BEGIN CERTIFICATE----- MIIDUzCCAjugAwIBAgIBATANBgkqhkiG9w0BAQUFADBLMQswCQYDVQQGEwJOTzEd MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxHTAbBgNVBAMMFEJ1eXBhc3Mg Q2xhc3MgMiBDQSAxMB4XDTA2MTAxMzEwMjUwOVoXDTE2MTAxMzEwMjUwOVowSzEL MAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MR0wGwYD VQQDDBRCdXlwYXNzIENsYXNzIDIgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAIs8B0XY9t/mx8q6jUPFR42wWsE425KEHK8T1A9vNkYgxC7McXA0 ojTTNy7Y3Tp3L8DrKehc0rWpkTSHIln+zNvnma+WwajHQN2lFYxuyHyXA8vmIPLX l18xoS830r7uvqmtqEyeIWZDO6i88wmjONVZJMHCR3axiFyCO7srpgTXjAePzdVB HfCuuCkslFJgNJQ72uA40Z0zPhX0kzLFANq1KWYOOngPIVJfAuWSeyXTkh4vFZ2B 5J2O6O+JzhRMVB0cgRJNcKi+EAUXfh/RuFdV7c27UsKwHnjCTTZoy1YmwVLBvXb3 WNVyfh9EdrsAiR0WnVE1703CVu9r4Iw7DekCAwEAAaNCMEAwDwYDVR0TAQH/BAUw 
AwEB/zAdBgNVHQ4EFgQUP42aWYv8e3uco684sDntkHGA1sgwDgYDVR0PAQH/BAQD AgEGMA0GCSqGSIb3DQEBBQUAA4IBAQAVGn4TirnoB6NLJzKyQJHyIdFkhb5jatLP gcIV1Xp+DCmsNx4cfHZSldq1fyOhKXdlyTKdqC5Wq2B2zha0jX94wNWZUYN/Xtm+ DKhQ7SLHrQVMdvvt7h5HZPb3J31cKA9FxVxiXqaakZG3Uxcu3K1gnZZkOb1naLKu BctN518fV4bVIJwo+28TOPX2EZL2fZleHwzoq0QkKXJAPTZSr4xYkHPB7GEseaHs h7U/2k3ZIQAw3pDaDtMaSKk+hQsUi4y8QZ5q9w5wwDX3OaJdZtB7WZ+oRxKaJyOk LY4ng5IgodcVf/EuGO70SH8vf/GhGLWhC5SgYiAynB321O+/TIho -----END CERTIFICATE----- Buypass Class 3 CA 1 ==================== -----BEGIN CERTIFICATE----- MIIDUzCCAjugAwIBAgIBAjANBgkqhkiG9w0BAQUFADBLMQswCQYDVQQGEwJOTzEd MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxHTAbBgNVBAMMFEJ1eXBhc3Mg Q2xhc3MgMyBDQSAxMB4XDTA1MDUwOTE0MTMwM1oXDTE1MDUwOTE0MTMwM1owSzEL MAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MR0wGwYD VQQDDBRCdXlwYXNzIENsYXNzIDMgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAKSO13TZKWTeXx+HgJHqTjnmGcZEC4DVC69TB4sSveZn8AKxifZg isRbsELRwCGoy+Gb72RRtqfPFfV0gGgEkKBYouZ0plNTVUhjP5JW3SROjvi6K//z NIqeKNc0n6wv1g/xpC+9UrJJhW05NfBEMJNGJPO251P7vGGvqaMU+8IXF4Rs4HyI +MkcVyzwPX6UvCWThOiaAJpFBUJXgPROztmuOfbIUxAMZTpHe2DC1vqRycZxbL2R hzyRhkmr8w+gbCZ2Xhysm3HljbybIR6c1jh+JIAVMYKWsUnTYjdbiAwKYjT+p0h+ mbEwi5A3lRyoH6UsjfRVyNvdWQrCrXig9IsCAwEAAaNCMEAwDwYDVR0TAQH/BAUw AwEB/zAdBgNVHQ4EFgQUOBTmyPCppAP0Tj4io1vy1uCtQHQwDgYDVR0PAQH/BAQD AgEGMA0GCSqGSIb3DQEBBQUAA4IBAQABZ6OMySU9E2NdFm/soT4JXJEVKirZgCFP Bdy7pYmrEzMqnji3jG8CcmPHc3ceCQa6Oyh7pEfJYWsICCD8igWKH7y6xsL+z27s EzNxZy5p+qksP2bAEllNC1QCkoS72xLvg3BweMhT+t/Gxv/ciC8HwEmdMldg0/L2 mSlf56oBzKwzqBwKu5HEA6BvtjT5htOzdlSY9EqBs1OdTUDs5XcTRa9bqh/YL0yC e/4qxFi7T/ye/QNlGioOw6UgFpRreaaiErS7GqQjel/wroQk5PMr+4okoyeYZdow dXb8GZHo2+ubPzK/QJcHJrrM85SFSnonk8+QQtS4Wxam58tAA915 -----END CERTIFICATE----- EBG Elektronik Sertifika Hizmet Sa\xC4\x9Flay\xc4\xb1\x63\xc4\xb1s\xc4\xb1 ========================================================================== -----BEGIN CERTIFICATE----- MIIF5zCCA8+gAwIBAgIITK9zQhyOdAIwDQYJKoZIhvcNAQEFBQAwgYAxODA2BgNV BAMML0VCRyBFbGVrdHJvbmlrIFNlcnRpZmlrYSBIaXptZXQgU2HEn2xhecSxY8Sx c8SxMTcwNQYDVQQKDC5FQkcgQmlsacWfaW0gVGVrbm9sb2ppbGVyaSB2ZSBIaXpt ZXRsZXJpIEEuxZ4uMQswCQYDVQQGEwJUUjAeFw0wNjA4MTcwMDIxMDlaFw0xNjA4 MTQwMDMxMDlaMIGAMTgwNgYDVQQDDC9FQkcgRWxla3Ryb25payBTZXJ0aWZpa2Eg SGl6bWV0IFNhxJ9sYXnEsWPEsXPEsTE3MDUGA1UECgwuRUJHIEJpbGnFn2ltIFRl a25vbG9qaWxlcmkgdmUgSGl6bWV0bGVyaSBBLsWeLjELMAkGA1UEBhMCVFIwggIi MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDuoIRh0DpqZhAy2DE4f6en5f2h 4fuXd7hxlugTlkaDT7byX3JWbhNgpQGR4lvFzVcfd2NR/y8927k/qqk153nQ9dAk tiHq6yOU/im/+4mRDGSaBUorzAzu8T2bgmmkTPiab+ci2hC6X5L8GCcKqKpE+i4s tPtGmggDg3KriORqcsnlZR9uKg+ds+g75AxuetpX/dfreYteIAbTdgtsApWjluTL dlHRKJ2hGvxEok3MenaoDT2/F08iiFD9rrbskFBKW5+VQarKD7JK/oCZTqNGFav4 c0JqwmZ2sQomFd2TkuzbqV9UIlKRcF0T6kjsbgNs2d1s/OsNA/+mgxKb8amTD8Um TDGyY5lhcucqZJnSuOl14nypqZoaqsNW2xCaPINStnuWt6yHd6i58mcLlEOzrz5z +kI2sSXFCjEmN1ZnuqMLfdb3ic1nobc6HmZP9qBVFCVMLDMNpkGMvQQxahByCp0O Lna9XvNRiYuoP1Vzv9s6xiQFlpJIqkuNKgPlV5EQ9GooFW5Hd4RcUXSfGenmHmMW OeMRFeNYGkS9y8RsZteEBt8w9DeiQyJ50hBs37vmExH8nYQKE3vwO9D8owrXieqW fo1IhR5kX9tUoqzVegJ5a9KK8GfaZXINFHDk6Y54jzJ0fFfy1tb0Nokb+Clsi7n2 l9GkLqq+CxnCRelwXQIDAJ3Zo2MwYTAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB /wQEAwIBBjAdBgNVHQ4EFgQU587GT/wWZ5b6SqMHwQSny2re2kcwHwYDVR0jBBgw FoAU587GT/wWZ5b6SqMHwQSny2re2kcwDQYJKoZIhvcNAQEFBQADggIBAJuYml2+ 8ygjdsZs93/mQJ7ANtyVDR2tFcU22NU57/IeIl6zgrRdu0waypIN30ckHrMk2pGI 6YNw3ZPX6bqz3xZaPt7gyPvT/Wwp+BVGoGgmzJNSroIBk5DKd8pNSe/iWtkqvTDO TLKBtjDOWU/aWR1qeqRFsIImgYZ29fUQALjuswnoT4cCB64kXPBfrAowzIpAoHME wfuJJPaaHFy3PApnNgUIMbOv2AFoKuB4j3TeuFGkjGwgPaL7s9QJ/XvCgKqTbCmY 
Iai7FvOpEl90tYeY8pUm3zTvilORiF0alKM/fCL414i6poyWqD1SNGKfAB5UVUJn xk1Gj7sURT0KlhaOEKGXmdXTMIXM3rRyt7yKPBgpaP3ccQfuJDlq+u2lrDgv+R4Q DgZxGhBM/nV+/x5XOULK1+EVoVZVWRvRo68R2E7DpSvvkL/A7IITW43WciyTTo9q Kd+FPNMN4KIYEsxVL0e3p5sC/kH2iExt2qkBR4NkJ2IQgtYSe14DHzSpyZH+r11t hie3I6p1GMog57AP14kOpmciY/SDQSsGS7tY1dHXt7kQY9iJSrSq3RZj9W6+YKH4 7ejWkE8axsWgKdOnIaj1Wjz3x0miIZpKlVIglnKaZsv30oZDfCK+lvm9AahH3eU7 QPl1K5srRmSGjR70j/sHd9DqSaIcjVIUpgqT -----END CERTIFICATE----- certSIGN ROOT CA ================ -----BEGIN CERTIFICATE----- MIIDODCCAiCgAwIBAgIGIAYFFnACMA0GCSqGSIb3DQEBBQUAMDsxCzAJBgNVBAYT AlJPMREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBD QTAeFw0wNjA3MDQxNzIwMDRaFw0zMTA3MDQxNzIwMDRaMDsxCzAJBgNVBAYTAlJP MREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBDQTCC ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALczuX7IJUqOtdu0KBuqV5Do 0SLTZLrTk+jUrIZhQGpgV2hUhE28alQCBf/fm5oqrl0Hj0rDKH/v+yv6efHHrfAQ UySQi2bJqIirr1qjAOm+ukbuW3N7LBeCgV5iLKECZbO9xSsAfsT8AzNXDe3i+s5d RdY4zTW2ssHQnIFKquSyAVwdj1+ZxLGt24gh65AIgoDzMKND5pCCrlUoSe1b16kQ OA7+j0xbm0bqQfWwCHTD0IgztnzXdN/chNFDDnU5oSVAKOp4yw4sLjmdjItuFhwv JoIQ4uNllAoEwF73XVv4EOLQunpL+943AAAaWyjj0pxzPjKHmKHJUS/X3qwzs08C AwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0O BBYEFOCMm9slSbPxfIbWskKHC9BroNnkMA0GCSqGSIb3DQEBBQUAA4IBAQA+0hyJ LjX8+HXd5n9liPRyTMks1zJO890ZeUe9jjtbkw9QSSQTaxQGcu8J06Gh40CEyecY MnQ8SG4Pn0vU9x7Tk4ZkVJdjclDVVc/6IJMCopvDI5NOFlV2oHB5bc0hH88vLbwZ 44gx+FkagQnIl6Z0x2DEW8xXjrJ1/RsCCdtZb3KTafcxQdaIOL+Hsr0Wefmq5L6I Jd1hJyMctTEHBDa0GpC9oHRxUIltvBTjD4au8as+x6AJzKNI0eDbZOeStc+vckNw i/nDhDwTqn6Sm1dTk/pwwpEOMfmbZ13pljheX7NzTogVZ96edhBiIL5VaZVDADlN 9u6wWk5JRFRYX0KD -----END CERTIFICATE----- CNNIC ROOT ========== -----BEGIN CERTIFICATE----- MIIDVTCCAj2gAwIBAgIESTMAATANBgkqhkiG9w0BAQUFADAyMQswCQYDVQQGEwJD TjEOMAwGA1UEChMFQ05OSUMxEzARBgNVBAMTCkNOTklDIFJPT1QwHhcNMDcwNDE2 MDcwOTE0WhcNMjcwNDE2MDcwOTE0WjAyMQswCQYDVQQGEwJDTjEOMAwGA1UEChMF Q05OSUMxEzARBgNVBAMTCkNOTklDIFJPT1QwggEiMA0GCSqGSIb3DQEBAQUAA4IB DwAwggEKAoIBAQDTNfc/c3et6FtzF8LRb+1VvG7q6KR5smzDo+/hn7E7SIX1mlwh IhAsxYLO2uOabjfhhyzcuQxauohV3/2q2x8x6gHx3zkBwRP9SFIhxFXf2tizVHa6 dLG3fdfA6PZZxU3Iva0fFNrfWEQlMhkqx35+jq44sDB7R3IJMfAw28Mbdim7aXZO V/kbZKKTVrdvmW7bCgScEeOAH8tjlBAKqeFkgjH5jCftppkA9nCTGPihNIaj3XrC GHn2emU1z5DrvTOTn1OrczvmmzQgLx3vqR1jGqCA2wMv+SYahtKNu6m+UjqHZ0gN v7Sg2Ca+I19zN38m5pIEo3/PIKe38zrKy5nLAgMBAAGjczBxMBEGCWCGSAGG+EIB AQQEAwIABzAfBgNVHSMEGDAWgBRl8jGtKvf33VKWCscCwQ7vptU7ETAPBgNVHRMB Af8EBTADAQH/MAsGA1UdDwQEAwIB/jAdBgNVHQ4EFgQUZfIxrSr3991SlgrHAsEO 76bVOxEwDQYJKoZIhvcNAQEFBQADggEBAEs17szkrr/Dbq2flTtLP1se31cpolnK OOK5Gv+e5m4y3R6u6jW39ZORTtpC4cMXYFDy0VwmuYK36m3knITnA3kXr5g9lNvH ugDnuL8BV8F3RTIMO/G0HAiw/VGgod2aHRM2mm23xzy54cXZF/qD1T0VoDy7Hgvi yJA/qIYM/PmLXoXLT1tLYhFHxUV8BS9BsZ4QaRuZluBVeftOhpm4lNqGOGqTo+fL buXf6iFViZx9fX+Y9QCJ7uOEwFyWtcVG6kbghVW2G8kS1sHNzYDzAgE8yGnLRUhj 2JTQ7IUOO04RZfSCjKY9ri4ilAnIXOo8gV0WKgOXFlUJ24pBgp5mmxE= -----END CERTIFICATE----- ApplicationCA - Japanese Government =================================== -----BEGIN CERTIFICATE----- MIIDoDCCAoigAwIBAgIBMTANBgkqhkiG9w0BAQUFADBDMQswCQYDVQQGEwJKUDEc MBoGA1UEChMTSmFwYW5lc2UgR292ZXJubWVudDEWMBQGA1UECxMNQXBwbGljYXRp b25DQTAeFw0wNzEyMTIxNTAwMDBaFw0xNzEyMTIxNTAwMDBaMEMxCzAJBgNVBAYT AkpQMRwwGgYDVQQKExNKYXBhbmVzZSBHb3Zlcm5tZW50MRYwFAYDVQQLEw1BcHBs aWNhdGlvbkNBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAp23gdE6H j6UG3mii24aZS2QNcfAKBZuOquHMLtJqO8F6tJdhjYq+xpqcBrSGUeQ3DnR4fl+K f5Sk10cI/VBaVuRorChzoHvpfxiSQE8tnfWuREhzNgaeZCw7NCPbXCbkcXmP1G55 IrmTwcrNwVbtiGrXoDkhBFcsovW8R0FPXjQilbUfKW1eSvNNcr5BViCH/OlQR9cw 
FO5cjFW6WY2H/CPek9AEjP3vbb3QesmlOmpyM8ZKDQUXKi17safY1vC+9D/qDiht QWEjdnjDuGWk81quzMKq2edY3rZ+nYVunyoKb58DKTCXKB28t89UKU5RMfkntigm /qJj5kEW8DOYRwIDAQABo4GeMIGbMB0GA1UdDgQWBBRUWssmP3HMlEYNllPqa0jQ k/5CdTAOBgNVHQ8BAf8EBAMCAQYwWQYDVR0RBFIwUKROMEwxCzAJBgNVBAYTAkpQ MRgwFgYDVQQKDA/ml6XmnKzlm73mlL/lupwxIzAhBgNVBAsMGuOCouODl+ODquOC seODvOOCt+ODp+ODs0NBMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD ggEBADlqRHZ3ODrso2dGD/mLBqj7apAxzn7s2tGJfHrrLgy9mTLnsCTWw//1sogJ hyzjVOGjprIIC8CFqMjSnHH2HZ9g/DgzE+Ge3Atf2hZQKXsvcJEPmbo0NI2VdMV+ eKlmXb3KIXdCEKxmJj3ekav9FfBv7WxfEPjzFvYDio+nEhEMy/0/ecGc/WLuo89U DNErXxc+4z6/wCs+CZv+iKZ+tJIX/COUgb1up8WMwusRRdv4QcmWdupwX3kSa+Sj B1oF7ydJzyGfikwJcGapJsErEU4z0g781mzSDjJkaP+tBXhfAx2o45CsJOAPQKdL rosot4LKGAfmt1t06SAZf7IbiVQ= -----END CERTIFICATE----- GeoTrust Primary Certification Authority - G3 ============================================= -----BEGIN CERTIFICATE----- MIID/jCCAuagAwIBAgIQFaxulBmyeUtB9iepwxgPHzANBgkqhkiG9w0BAQsFADCB mDELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsT MChjKSAyMDA4IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25s eTE2MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhv cml0eSAtIEczMB4XDTA4MDQwMjAwMDAwMFoXDTM3MTIwMTIzNTk1OVowgZgxCzAJ BgNVBAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykg MjAwOCBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0 BgNVBAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANziXmJYHTNXOTIz +uvLh4yn1ErdBojqZI4xmKU4kB6Yzy5jK/BGvESyiaHAKAxJcCGVn2TAppMSAmUm hsalifD614SgcK9PGpc/BkTVyetyEH3kMSj7HGHmKAdEc5IiaacDiGydY8hS2pgn 5whMcD60yRLBxWeDXTPzAxHsatBT4tG6NmCUgLthY2xbF37fQJQeqw3CIShwiP/W JmxsYAQlTlV+fe+/lEjetx3dcI0FX4ilm/LC7urRQEFtYjgdVgbFA0dRIBn8exAL DmKudlW/X3e+PkkBUz2YJQN2JFodtNuJ6nnltrM7P7pMKEF/BqxqjsHQ9gUdfeZC huOl1UcCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw HQYDVR0OBBYEFMR5yo6hTgMdHNxr2zFblD4/MH8tMA0GCSqGSIb3DQEBCwUAA4IB AQAtxRPPVoB7eni9n64smefv2t+UXglpp+duaIy9cr5HqQ6XErhK8WTTOd8lNNTB zU6B8A8ExCSzNJbGpqow32hhc9f5joWJ7w5elShKKiePEI4ufIbEAp7aDHdlDkQN kv39sxY2+hENHYwOB4lqKVb3cvTdFZx3NWZXqxNT2I7BQMXXExZacse3aQHEerGD AWh9jUGhlBjBJVz88P6DAod8DQ3PLghcSkANPuyBYeYk28rgDi0Hsj5W3I31QYUH SJsMC8tJP33st/3LjWeJGqvtux6jAAgIFyqCXDFdRootD4abdNlF+9RAsXqqaC2G spki4cErx5z481+oghLrGREt -----END CERTIFICATE----- thawte Primary Root CA - G2 =========================== -----BEGIN CERTIFICATE----- MIICiDCCAg2gAwIBAgIQNfwmXNmET8k9Jj1Xm67XVjAKBggqhkjOPQQDAzCBhDEL MAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjE4MDYGA1UECxMvKGMp IDIwMDcgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAi BgNVBAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMjAeFw0wNzExMDUwMDAw MDBaFw0zODAxMTgyMzU5NTlaMIGEMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhh d3RlLCBJbmMuMTgwNgYDVQQLEy8oYykgMjAwNyB0aGF3dGUsIEluYy4gLSBGb3Ig YXV0aG9yaXplZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9v dCBDQSAtIEcyMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEotWcgnuVnfFSeIf+iha/ BebfowJPDQfGAFG6DAJSLSKkQjnE/o/qycG+1E3/n3qe4rF8mq2nhglzh9HnmuN6 papu+7qzcMBniKI11KOasf2twu8x+qi58/sIxpHR+ymVo0IwQDAPBgNVHRMBAf8E BTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUmtgAMADna3+FGO6Lts6K DPgR4bswCgYIKoZIzj0EAwMDaQAwZgIxAN344FdHW6fmCsO99YCKlzUNG4k8VIZ3 KMqh9HneteY4sPBlcIx/AlTCv//YoT7ZzwIxAMSNlPzcU9LcnXgWHxUzI1NS41ox XZ3Krr0TKUQNJ1uo52icEvdYPy5yAlejj6EULg== -----END CERTIFICATE----- thawte Primary Root CA - G3 =========================== -----BEGIN CERTIFICATE----- MIIEKjCCAxKgAwIBAgIQYAGXt0an6rS0mtZLL/eQ+zANBgkqhkiG9w0BAQsFADCB rjELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf 
Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw MDggdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAiBgNV BAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMzAeFw0wODA0MDIwMDAwMDBa Fw0zNzEyMDEyMzU5NTlaMIGuMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhhd3Rl LCBJbmMuMSgwJgYDVQQLEx9DZXJ0aWZpY2F0aW9uIFNlcnZpY2VzIERpdmlzaW9u MTgwNgYDVQQLEy8oYykgMjAwOCB0aGF3dGUsIEluYy4gLSBGb3IgYXV0aG9yaXpl ZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9vdCBDQSAtIEcz MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsr8nLPvb2FvdeHsbnndm gcs+vHyu86YnmjSjaDFxODNi5PNxZnmxqWWjpYvVj2AtP0LMqmsywCPLLEHd5N/8 YZzic7IilRFDGF/Eth9XbAoFWCLINkw6fKXRz4aviKdEAhN0cXMKQlkC+BsUa0Lf b1+6a4KinVvnSr0eAXLbS3ToO39/fR8EtCab4LRarEc9VbjXsCZSKAExQGbY2SS9 9irY7CFJXJv2eul/VTV+lmuNk5Mny5K76qxAwJ/C+IDPXfRa3M50hqY+bAtTyr2S zhkGcuYMXDhpxwTWvGzOW/b3aJzcJRVIiKHpqfiYnODz1TEoYRFsZ5aNOZnLwkUk OQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNV HQ4EFgQUrWyqlGCc7eT/+j4KdCtjA/e2Wb8wDQYJKoZIhvcNAQELBQADggEBABpA 2JVlrAmSicY59BDlqQ5mU1143vokkbvnRFHfxhY0Cu9qRFHqKweKA3rD6z8KLFIW oCtDuSWQP3CpMyVtRRooOyfPqsMpQhvfO0zAMzRbQYi/aytlryjvsvXDqmbOe1bu t8jLZ8HJnBoYuMTDSQPxYA5QzUbF83d597YV4Djbxy8ooAw/dyZ02SUS2jHaGh7c KUGRIjxpp7sC8rZcJwOJ9Abqm+RyguOhCcHpABnTPtRwa7pxpqpYrvS76Wy274fM m7v/OeZWYdMKp8RcTGB7BXcmer/YB1IsYvdwY9k5vG8cwnncdimvzsUsZAReiDZu MdRAGmI0Nj81Aa6sY6A= -----END CERTIFICATE----- GeoTrust Primary Certification Authority - G2 ============================================= -----BEGIN CERTIFICATE----- MIICrjCCAjWgAwIBAgIQPLL0SAoA4v7rJDteYD7DazAKBggqhkjOPQQDAzCBmDEL MAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsTMChj KSAyMDA3IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTE2 MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0 eSAtIEcyMB4XDTA3MTEwNTAwMDAwMFoXDTM4MDExODIzNTk1OVowgZgxCzAJBgNV BAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykgMjAw NyBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0BgNV BAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBH MjB2MBAGByqGSM49AgEGBSuBBAAiA2IABBWx6P0DFUPlrOuHNxFi79KDNlJ9RVcL So17VDs6bl8VAsBQps8lL33KSLjHUGMcKiEIfJo22Av+0SbFWDEwKCXzXV2juLal tJLtbCyf691DiaI8S0iRHVDsJt/WYC69IaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFBVfNVdRVfslsq0DafwBo/q+EVXVMAoG CCqGSM49BAMDA2cAMGQCMGSWWaboCd6LuvpaiIjwH5HTRqjySkwCY/tsXzjbLkGT qQ7mndwxHLKgpxgceeHHNgIwOlavmnRs9vuD4DPTCF+hnMJbn0bWtsuRBmOiBucz rD6ogRLQy7rQkgu2npaqBA+K -----END CERTIFICATE----- VeriSign Universal Root Certification Authority =============================================== -----BEGIN CERTIFICATE----- MIIEuTCCA6GgAwIBAgIQQBrEZCGzEyEDDrvkEhrFHTANBgkqhkiG9w0BAQsFADCB vTELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwOCBWZXJp U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MTgwNgYDVQQDEy9W ZXJpU2lnbiBVbml2ZXJzYWwgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe Fw0wODA0MDIwMDAwMDBaFw0zNzEyMDEyMzU5NTlaMIG9MQswCQYDVQQGEwJVUzEX MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlTaWduIFRydXN0 IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAyMDA4IFZlcmlTaWduLCBJbmMuIC0gRm9y IGF1dGhvcml6ZWQgdXNlIG9ubHkxODA2BgNVBAMTL1ZlcmlTaWduIFVuaXZlcnNh bCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEF AAOCAQ8AMIIBCgKCAQEAx2E3XrEBNNti1xWb/1hajCMj1mCOkdeQmIN65lgZOIzF 9uVkhbSicfvtvbnazU0AtMgtc6XHaXGVHzk8skQHnOgO+k1KxCHfKWGPMiJhgsWH H26MfF8WIFFE0XBPV+rjHOPMee5Y2A7Cs0WTwCznmhcrewA3ekEzeOEz4vMQGn+H LL729fdC4uW/h2KJXwBL38Xd5HVEMkE6HnFuacsLdUYI0crSK5XQz/u5QGtkjFdN /BMReYTtXlT2NJ8IAfMQJQYXStrxHXpma5hgZqTZ79IugvHw7wnqRMkVauIDbjPT 
rJ9VAMf2CGqUuV/c4DPxhGD5WycRtPwW8rtWaoAljQIDAQABo4GyMIGvMA8GA1Ud EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMG0GCCsGAQUFBwEMBGEwX6FdoFsw WTBXMFUWCWltYWdlL2dpZjAhMB8wBwYFKw4DAhoEFI/l0xqGrI2Oa8PPgGrUSBgs exkuMCUWI2h0dHA6Ly9sb2dvLnZlcmlzaWduLmNvbS92c2xvZ28uZ2lmMB0GA1Ud DgQWBBS2d/ppSEefUxLVwuoHMnYH0ZcHGTANBgkqhkiG9w0BAQsFAAOCAQEASvj4 sAPmLGd75JR3Y8xuTPl9Dg3cyLk1uXBPY/ok+myDjEedO2Pzmvl2MpWRsXe8rJq+ seQxIcaBlVZaDrHC1LGmWazxY8u4TB1ZkErvkBYoH1quEPuBUDgMbMzxPcP1Y+Oz 4yHJJDnp/RVmRvQbEdBNc6N9Rvk97ahfYtTxP/jgdFcrGJ2BtMQo2pSXpXDrrB2+ BxHw1dvd5Yzw1TKwg+ZX4o+/vqGqvz0dtdQ46tewXDpPaj+PwGZsY6rp2aQW9IHR lRQOfc2VNNnSj3BzgXucfr2YYdhFh5iQxeuGMMY1v/D/w1WIg0vvBZIGcfK4mJO3 7M2CYfE45k+XmCpajQ== -----END CERTIFICATE----- VeriSign Class 3 Public Primary Certification Authority - G4 ============================================================ -----BEGIN CERTIFICATE----- MIIDhDCCAwqgAwIBAgIQL4D+I4wOIg9IZxIokYesszAKBggqhkjOPQQDAzCByjEL MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2ln biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y aXR5IC0gRzQwHhcNMDcxMTA1MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCByjELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJp U2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2lnbiwg SW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJpU2ln biBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5 IC0gRzQwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASnVnp8Utpkmw4tXNherJI9/gHm GUo9FANL+mAnINmDiWn6VMaaGF5VKmTeBvaNSjutEDxlPZCIBIngMGGzrl0Bp3ve fLK+ymVhAIau2o970ImtTR1ZmkGxvEeA3J5iw/mjgbIwga8wDwYDVR0TAQH/BAUw AwEB/zAOBgNVHQ8BAf8EBAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJ aW1hZ2UvZ2lmMCEwHzAHBgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYj aHR0cDovL2xvZ28udmVyaXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFLMW kf3upm7ktS5Jj4d4gYDs5bG1MAoGCCqGSM49BAMDA2gAMGUCMGYhDBgmYFo4e1ZC 4Kf8NoRRkSAsdk1DPcQdhCPQrNZ8NQbOzWm9kA3bbEhCHQ6qQgIxAJw9SDkjOVga FRJZap7v1VmyHVIsmXHNxynfGyphe3HR3vPA5Q06Sqotp9iGKt0uEA== -----END CERTIFICATE----- NetLock Arany (Class Gold) Főtanúsítvány ============================================ -----BEGIN CERTIFICATE----- MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3 MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5 VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2 BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C +C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F 
uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2 XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E= -----END CERTIFICATE----- Staat der Nederlanden Root CA - G2 ================================== -----BEGIN CERTIFICATE----- MIIFyjCCA7KgAwIBAgIEAJiWjDANBgkqhkiG9w0BAQsFADBaMQswCQYDVQQGEwJO TDEeMBwGA1UECgwVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSswKQYDVQQDDCJTdGFh dCBkZXIgTmVkZXJsYW5kZW4gUm9vdCBDQSAtIEcyMB4XDTA4MDMyNjExMTgxN1oX DTIwMDMyNTExMDMxMFowWjELMAkGA1UEBhMCTkwxHjAcBgNVBAoMFVN0YWF0IGRl ciBOZWRlcmxhbmRlbjErMCkGA1UEAwwiU3RhYXQgZGVyIE5lZGVybGFuZGVuIFJv b3QgQ0EgLSBHMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMVZ5291 qj5LnLW4rJ4L5PnZyqtdj7U5EILXr1HgO+EASGrP2uEGQxGZqhQlEq0i6ABtQ8Sp uOUfiUtnvWFI7/3S4GCI5bkYYCjDdyutsDeqN95kWSpGV+RLufg3fNU254DBtvPU Z5uW6M7XxgpT0GtJlvOjCwV3SPcl5XCsMBQgJeN/dVrlSPhOewMHBPqCYYdu8DvE pMfQ9XQ+pV0aCPKbJdL2rAQmPlU6Yiile7Iwr/g3wtG61jj99O9JMDeZJiFIhQGp 5Rbn3JBV3w/oOM2ZNyFPXfUib2rFEhZgF1XyZWampzCROME4HYYEhLoaJXhena/M UGDWE4dS7WMfbWV9whUYdMrhfmQpjHLYFhN9C0lK8SgbIHRrxT3dsKpICT0ugpTN GmXZK4iambwYfp/ufWZ8Pr2UuIHOzZgweMFvZ9C+X+Bo7d7iscksWXiSqt8rYGPy 5V6548r6f1CGPqI0GAwJaCgRHOThuVw+R7oyPxjMW4T182t0xHJ04eOLoEq9jWYv 6q012iDTiIJh8BIitrzQ1aTsr1SIJSQ8p22xcik/Plemf1WvbibG/ufMQFxRRIEK eN5KzlW/HdXZt1bv8Hb/C3m1r737qWmRRpdogBQ2HbN/uymYNqUg+oJgYjOk7Na6 B6duxc8UpufWkjTYgfX8HV2qXB72o007uPc5AgMBAAGjgZcwgZQwDwYDVR0TAQH/ BAUwAwEB/zBSBgNVHSAESzBJMEcGBFUdIAAwPzA9BggrBgEFBQcCARYxaHR0cDov L3d3dy5wa2lvdmVyaGVpZC5ubC9wb2xpY2llcy9yb290LXBvbGljeS1HMjAOBgNV HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJFoMocVHYnitfGsNig0jQt8YojrMA0GCSqG SIb3DQEBCwUAA4ICAQCoQUpnKpKBglBu4dfYszk78wIVCVBR7y29JHuIhjv5tLyS CZa59sCrI2AGeYwRTlHSeYAz+51IvuxBQ4EffkdAHOV6CMqqi3WtFMTC6GY8ggen 5ieCWxjmD27ZUD6KQhgpxrRW/FYQoAUXvQwjf/ST7ZwaUb7dRUG/kSS0H4zpX897 IZmflZ85OkYcbPnNe5yQzSipx6lVu6xiNGI1E0sUOlWDuYaNkqbG9AclVMwWVxJK gnjIFNkXgiYtXSAfea7+1HAWFpWD2DU5/1JddRwWxRNVz0fMdWVSSt7wsKfkCpYL +63C4iWEst3kvX5ZbJvw8NjnyvLplzh+ib7M+zkXYT9y2zqR2GUBGR2tUKRXCnxL vJxxcypFURmFzI79R6d0lR2o0a9OF7FpJsKqeFdbxU2n5Z4FF5TKsl+gSRiNNOkm bEgeqmiSBeGCc1qb3AdbCG19ndeNIdn8FCCqwkXfP+cAslHkwvgFuXkajDTznlvk N1trSt8sV4pAWja63XVECDdCcAz+3F4hoKOKwJCcaNpQ5kUQR3i2TtJlycM33+FC Y7BXN0Ute4qcvwXqZVUz9zkQxSgqIXobisQk+T8VyJoVIPVVYpbtbZNQvOSqeK3Z ywplh6ZmwcSBo3c6WB4L7oOLnR7SUqTMHW+wmG2UMbX4cQrcufx9MmDm66+KAQ== -----END CERTIFICATE----- CA Disig ======== -----BEGIN CERTIFICATE----- MIIEDzCCAvegAwIBAgIBATANBgkqhkiG9w0BAQUFADBKMQswCQYDVQQGEwJTSzET MBEGA1UEBxMKQnJhdGlzbGF2YTETMBEGA1UEChMKRGlzaWcgYS5zLjERMA8GA1UE AxMIQ0EgRGlzaWcwHhcNMDYwMzIyMDEzOTM0WhcNMTYwMzIyMDEzOTM0WjBKMQsw CQYDVQQGEwJTSzETMBEGA1UEBxMKQnJhdGlzbGF2YTETMBEGA1UEChMKRGlzaWcg YS5zLjERMA8GA1UEAxMIQ0EgRGlzaWcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQCS9jHBfYj9mQGp2HvycXXxMcbzdWb6UShGhJd4NLxs/LxFWYgmGErE Nx+hSkS943EE9UQX4j/8SFhvXJ56CbpRNyIjZkMhsDxkovhqFQ4/61HhVKndBpnX mjxUizkDPw/Fzsbrg3ICqB9x8y34dQjbYkzo+s7552oftms1grrijxaSfQUMbEYD XcDtab86wYqg6I7ZuUUohwjstMoVvoLdtUSLLa2GDGhibYVW8qwUYzrG0ZmsNHhW S8+2rT+MitcE5eN4TPWGqvWP+j1scaMtymfraHtuM6kMgiioTGohQBUgDCZbg8Kp FhXAJIJdKxatymP2dACw30PEEGBWZ2NFAgMBAAGjgf8wgfwwDwYDVR0TAQH/BAUw AwEB/zAdBgNVHQ4EFgQUjbJJaJ1yCCW5wCf1UJNWSEZx+Y8wDgYDVR0PAQH/BAQD AgEGMDYGA1UdEQQvMC2BE2Nhb3BlcmF0b3JAZGlzaWcuc2uGFmh0dHA6Ly93d3cu ZGlzaWcuc2svY2EwZgYDVR0fBF8wXTAtoCugKYYnaHR0cDovL3d3dy5kaXNpZy5z ay9jYS9jcmwvY2FfZGlzaWcuY3JsMCygKqAohiZodHRwOi8vY2EuZGlzaWcuc2sv Y2EvY3JsL2NhX2Rpc2lnLmNybDAaBgNVHSAEEzARMA8GDSuBHpGT5goAAAABAQEw DQYJKoZIhvcNAQEFBQADggEBAF00dGFMrzvY/59tWDYcPQuBDRIrRhCA/ec8J9B6 yKm2fnQwM6M6int0wHl5QpNt/7EpFIKrIYwvF/k/Ji/1WcbvgAa3mkkp7M5+cTxq 
EEHA9tOasnxakZzArFvITV734VP/Q3f8nktnbNfzg9Gg4H8l37iYC5oyOGwwoPP/ CBUz91BKez6jPiCp3C9WgArtQVCwyfTssuMmRAAOb54GvCKWU3BlxFAKRmukLyeB EicTXxChds6KezfqwzlhA5WYOudsiCUI/HloDYd9Yvi0X/vF2Ey9WLw/Q1vUHgFN PGO+I++MzVpQuGhU+QqZMxEA4Z7CRneC9VkGjCFMhwnN5ag= -----END CERTIFICATE----- Juur-SK ======= -----BEGIN CERTIFICATE----- MIIE5jCCA86gAwIBAgIEO45L/DANBgkqhkiG9w0BAQUFADBdMRgwFgYJKoZIhvcN AQkBFglwa2lAc2suZWUxCzAJBgNVBAYTAkVFMSIwIAYDVQQKExlBUyBTZXJ0aWZp dHNlZXJpbWlza2Vza3VzMRAwDgYDVQQDEwdKdXVyLVNLMB4XDTAxMDgzMDE0MjMw MVoXDTE2MDgyNjE0MjMwMVowXTEYMBYGCSqGSIb3DQEJARYJcGtpQHNrLmVlMQsw CQYDVQQGEwJFRTEiMCAGA1UEChMZQVMgU2VydGlmaXRzZWVyaW1pc2tlc2t1czEQ MA4GA1UEAxMHSnV1ci1TSzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB AIFxNj4zB9bjMI0TfncyRsvPGbJgMUaXhvSYRqTCZUXP00B841oiqBB4M8yIsdOB SvZiF3tfTQou0M+LI+5PAk676w7KvRhj6IAcjeEcjT3g/1tf6mTll+g/mX8MCgkz ABpTpyHhOEvWgxutr2TC+Rx6jGZITWYfGAriPrsfB2WThbkasLnE+w0R9vXW+RvH LCu3GFH+4Hv2qEivbDtPL+/40UceJlfwUR0zlv/vWT3aTdEVNMfqPxZIe5EcgEMP PbgFPtGzlc3Yyg/CQ2fbt5PgIoIuvvVoKIO5wTtpeyDaTpxt4brNj3pssAki14sL 2xzVWiZbDcDq5WDQn/413z8CAwEAAaOCAawwggGoMA8GA1UdEwEB/wQFMAMBAf8w ggEWBgNVHSAEggENMIIBCTCCAQUGCisGAQQBzh8BAQEwgfYwgdAGCCsGAQUFBwIC MIHDHoHAAFMAZQBlACAAcwBlAHIAdABpAGYAaQBrAGEAYQB0ACAAbwBuACAAdgDk AGwAagBhAHMAdABhAHQAdQBkACAAQQBTAC0AaQBzACAAUwBlAHIAdABpAGYAaQB0 AHMAZQBlAHIAaQBtAGkAcwBrAGUAcwBrAHUAcwAgAGEAbABhAG0ALQBTAEsAIABz AGUAcgB0AGkAZgBpAGsAYQBhAHQAaQBkAGUAIABrAGkAbgBuAGkAdABhAG0AaQBz AGUAawBzMCEGCCsGAQUFBwIBFhVodHRwOi8vd3d3LnNrLmVlL2Nwcy8wKwYDVR0f BCQwIjAgoB6gHIYaaHR0cDovL3d3dy5zay5lZS9qdXVyL2NybC8wHQYDVR0OBBYE FASqekej5ImvGs8KQKcYP2/v6X2+MB8GA1UdIwQYMBaAFASqekej5ImvGs8KQKcY P2/v6X2+MA4GA1UdDwEB/wQEAwIB5jANBgkqhkiG9w0BAQUFAAOCAQEAe8EYlFOi CfP+JmeaUOTDBS8rNXiRTHyoERF5TElZrMj3hWVcRrs7EKACr81Ptcw2Kuxd/u+g kcm2k298gFTsxwhwDY77guwqYHhpNjbRxZyLabVAyJRld/JXIWY7zoVAtjNjGr95 HvxcHdMdkxuLDF2FvZkwMhgJkVLpfKG6/2SSmuz+Ne6ML678IIbsSt4beDI3poHS na9aEhbKmVv8b20OxaAehsmR0FyYgl9jDIpaq9iVpszLita/ZEuOyoqysOkhMp6q qIWYNIE5ITuoOlIyPfZrN4YGWhWY3PARZv40ILcD9EEQfTmEeZZyY7aWAuVrua0Z TbvGRNs2yyqcjg== -----END CERTIFICATE----- Hongkong Post Root CA 1 ======================= -----BEGIN CERTIFICATE----- MIIDMDCCAhigAwIBAgICA+gwDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UEBhMCSEsx FjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdrb25nIFBvc3Qg Um9vdCBDQSAxMB4XDTAzMDUxNTA1MTMxNFoXDTIzMDUxNTA0NTIyOVowRzELMAkG A1UEBhMCSEsxFjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdr b25nIFBvc3QgUm9vdCBDQSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC AQEArP84tulmAknjorThkPlAj3n54r15/gK97iSSHSL22oVyaf7XPwnU3ZG1ApzQ jVrhVcNQhrkpJsLj2aDxaQMoIIBFIi1WpztUlVYiWR8o3x8gPW2iNr4joLFutbEn PzlTCeqrauh0ssJlXI6/fMN4hM2eFvz1Lk8gKgifd/PFHsSaUmYeSF7jEAaPIpjh ZY4bXSNmO7ilMlHIhqqhqZ5/dpTCpmy3QfDVyAY45tQM4vM7TG1QjMSDJ8EThFk9 nnV0ttgCXjqQesBCNnLsak3c78QA3xMYV18meMjWCnl3v/evt3a5pQuEF10Q6m/h q5URX208o1xNg1vysxmKgIsLhwIDAQABoyYwJDASBgNVHRMBAf8ECDAGAQH/AgED MA4GA1UdDwEB/wQEAwIBxjANBgkqhkiG9w0BAQUFAAOCAQEADkbVPK7ih9legYsC mEEIjEy82tvuJxuC52pF7BaLT4Wg87JwvVqWuspube5Gi27nKi6Wsxkz67SfqLI3 7piol7Yutmcn1KZJ/RyTZXaeQi/cImyaT/JaFTmxcdcrUehtHJjA2Sr0oYJ71clB oiMBdDhViw+5LmeiIAQ32pwL0xch4I+XeTRvhEgCIDMb5jREn5Fw9IBehEPCKdJs EhTkYY2sEJCehFC78JZvRZ+K88psT/oROhUVRsPNH4NbLUES7VBnQRM9IauUiqpO fMGx+6fWtScvl6tu4B3i0RwsH0Ti/L6RoZz71ilTc4afU9hDDl3WY4JxHYB0yvbi AmvZWg== -----END CERTIFICATE----- SecureSign RootCA11 =================== -----BEGIN CERTIFICATE----- MIIDbTCCAlWgAwIBAgIBATANBgkqhkiG9w0BAQUFADBYMQswCQYDVQQGEwJKUDEr MCkGA1UEChMiSmFwYW4gQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcywgSW5jLjEcMBoG A1UEAxMTU2VjdXJlU2lnbiBSb290Q0ExMTAeFw0wOTA0MDgwNDU2NDdaFw0yOTA0 
MDgwNDU2NDdaMFgxCzAJBgNVBAYTAkpQMSswKQYDVQQKEyJKYXBhbiBDZXJ0aWZp Y2F0aW9uIFNlcnZpY2VzLCBJbmMuMRwwGgYDVQQDExNTZWN1cmVTaWduIFJvb3RD QTExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/XeqpRyQBTvLTJsz i1oURaTnkBbR31fSIRCkF/3frNYfp+TbfPfs37gD2pRY/V1yfIw/XwFndBWW4wI8 h9uuywGOwvNmxoVF9ALGOrVisq/6nL+k5tSAMJjzDbaTj6nU2DbysPyKyiyhFTOV MdrAG/LuYpmGYz+/3ZMqg6h2uRMft85OQoWPIucuGvKVCbIFtUROd6EgvanyTgp9 UK31BQ1FT0Zx/Sg+U/sE2C3XZR1KG/rPO7AxmjVuyIsG0wCR8pQIZUyxNAYAeoni 8McDWc/V1uinMrPmmECGxc0nEovMe863ETxiYAcjPitAbpSACW22s293bzUIUPsC h8U+iQIDAQABo0IwQDAdBgNVHQ4EFgQUW/hNT7KlhtQ60vFjmqC+CfZXt94wDgYD VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB AKChOBZmLqdWHyGcBvod7bkixTgm2E5P7KN/ed5GIaGHd48HCJqypMWvDzKYC3xm KbabfSVSSUOrTC4rbnpwrxYO4wJs+0LmGJ1F2FXI6Dvd5+H0LgscNFxsWEr7jIhQ X5Ucv+2rIrVls4W6ng+4reV6G4pQOh29Dbx7VFALuUKvVaAYga1lme++5Jy/xIWr QbJUb9wlze144o4MjQlJ3WN7WmmWAiGovVJZ6X01y8hSyn+B/tlr0/cR7SXf+Of5 pPpyl4RTDaXQMhhRdlkUbA/r7F+AjHVDg8OFmP9Mni0N5HeDk061lgeLKBObjBmN QSdJQO7e5iNEOdyhIta6A/I= -----END CERTIFICATE----- ACEDICOM Root ============= -----BEGIN CERTIFICATE----- MIIFtTCCA52gAwIBAgIIYY3HhjsBggUwDQYJKoZIhvcNAQEFBQAwRDEWMBQGA1UE AwwNQUNFRElDT00gUm9vdDEMMAoGA1UECwwDUEtJMQ8wDQYDVQQKDAZFRElDT00x CzAJBgNVBAYTAkVTMB4XDTA4MDQxODE2MjQyMloXDTI4MDQxMzE2MjQyMlowRDEW MBQGA1UEAwwNQUNFRElDT00gUm9vdDEMMAoGA1UECwwDUEtJMQ8wDQYDVQQKDAZF RElDT00xCzAJBgNVBAYTAkVTMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKC AgEA/5KV4WgGdrQsyFhIyv2AVClVYyT/kGWbEHV7w2rbYgIB8hiGtXxaOLHkWLn7 09gtn70yN78sFW2+tfQh0hOR2QetAQXW8713zl9CgQr5auODAKgrLlUTY4HKRxx7 XBZXehuDYAQ6PmXDzQHe3qTWDLqO3tkE7hdWIpuPY/1NFgu3e3eM+SW10W2ZEi5P Grjm6gSSrj0RuVFCPYewMYWveVqc/udOXpJPQ/yrOq2lEiZmueIM15jO1FillUAK t0SdE3QrwqXrIhWYENiLxQSfHY9g5QYbm8+5eaA9oiM/Qj9r+hwDezCNzmzAv+Yb X79nuIQZ1RXve8uQNjFiybwCq0Zfm/4aaJQ0PZCOrfbkHQl/Sog4P75n/TSW9R28 MHTLOO7VbKvU/PQAtwBbhTIWdjPp2KOZnQUAqhbm84F9b32qhm2tFXTTxKJxqvQU fecyuB+81fFOvW8XAjnXDpVCOscAPukmYxHqC9FK/xidstd7LzrZlvvoHpKuE1XI 2Sf23EgbsCTBheN3nZqk8wwRHQ3ItBTutYJXCb8gWH8vIiPYcMt5bMlL8qkqyPyH K9caUPgn6C9D4zq92Fdx/c6mUlv53U3t5fZvie27k5x2IXXwkkwp9y+cAS7+UEae ZAwUswdbxcJzbPEHXEUkFDWug/FqTYl6+rPYLWbwNof1K1MCAwEAAaOBqjCBpzAP BgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFKaz4SsrSbbXc6GqlPUB53NlTKxQ MA4GA1UdDwEB/wQEAwIBhjAdBgNVHQ4EFgQUprPhKytJttdzoaqU9QHnc2VMrFAw RAYDVR0gBD0wOzA5BgRVHSAAMDEwLwYIKwYBBQUHAgEWI2h0dHA6Ly9hY2VkaWNv bS5lZGljb21ncm91cC5jb20vZG9jMA0GCSqGSIb3DQEBBQUAA4ICAQDOLAtSUWIm fQwng4/F9tqgaHtPkl7qpHMyEVNEskTLnewPeUKzEKbHDZ3Ltvo/Onzqv4hTGzz3 gvoFNTPhNahXwOf9jU8/kzJPeGYDdwdY6ZXIfj7QeQCM8htRM5u8lOk6e25SLTKe I6RF+7YuE7CLGLHdztUdp0J/Vb77W7tH1PwkzQSulgUV1qzOMPPKC8W64iLgpq0i 5ALudBF/TP94HTXa5gI06xgSYXcGCRZj6hitoocf8seACQl1ThCojz2GuHURwCRi ipZ7SkXp7FnFvmuD5uHorLUwHv4FB4D54SMNUI8FmP8sX+g7tq3PgbUhh8oIKiMn MCArz+2UW6yyetLHKKGKC5tNSixthT8Jcjxn4tncB7rrZXtaAWPWkFtPF2Y9fwsZ o5NjEFIqnxQWWOLcpfShFosOkYuByptZ+thrkQdlVV9SH686+5DdaaVbnG0OLLb6 zqylfDJKZ0DcMDQj3dcEI2bw/FWAp/tmGYI1Z2JwOV5vx+qQQEQIHriy1tvuWacN GHk0vFQYXlPKNFHtRQrmjseCNj6nOGOpMCwXEGCSn1WHElkQwg9naRHMTh5+Spqt r0CodaxWkHS4oJyleW/c6RrIaQXpuvoDs3zk4E7Czp3otkYNbn5XOmeUwssfnHdK Z05phkOTOPu220+DkdRgfks+KzgHVZhepA== -----END CERTIFICATE----- Verisign Class 1 Public Primary Certification Authority ======================================================= -----BEGIN CERTIFICATE----- MIICPDCCAaUCED9pHoGc8JpK83P/uUii5N0wDQYJKoZIhvcNAQEFBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAxIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV 
BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAxIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDlGb9to1ZhLZlIcfZn3rmN67eehoAKkQ76OCWvRoiC5XOooJskXQ0f zGVuDLDQVoQYh5oGmxChc9+0WDlrbsH2FdWoqD+qEgaNMax/sDTXjzRniAnNFBHi TkVWaR94AoDa3EeRKbs2yWNcxeDXLYd7obcysHswuiovMaruo2fa2wIDAQABMA0G CSqGSIb3DQEBBQUAA4GBAFgVKTk8d6PaXCUDfGD67gmZPCcQcMgMCeazh88K4hiW NWLMv5sneYlfycQJ9M61Hd8qveXbhpxoJeUwfLaJFf5n0a3hUKw8fGJLj7qE1xIV Gx/KXQ/BUpQqEZnae88MNhPVNdwQGVnqlMEAv3WP2fr9dgTbYruQagPZRjXZ+Hxb -----END CERTIFICATE----- Verisign Class 3 Public Primary Certification Authority ======================================================= -----BEGIN CERTIFICATE----- MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i 2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ 2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ -----END CERTIFICATE----- Microsec e-Szigno Root CA 2009 ============================== -----BEGIN CERTIFICATE----- MIIECjCCAvKgAwIBAgIJAMJ+QwRORz8ZMA0GCSqGSIb3DQEBCwUAMIGCMQswCQYD VQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFjAUBgNVBAoMDU1pY3Jvc2VjIEx0 ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3ppZ25vIFJvb3QgQ0EgMjAwOTEfMB0G CSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5odTAeFw0wOTA2MTYxMTMwMThaFw0y OTEyMzAxMTMwMThaMIGCMQswCQYDVQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3Qx FjAUBgNVBAoMDU1pY3Jvc2VjIEx0ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3pp Z25vIFJvb3QgQ0EgMjAwOTEfMB0GCSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5o dTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOn4j/NjrdqG2KfgQvvP kd6mJviZpWNwrZuuyjNAfW2WbqEORO7hE52UQlKavXWFdCyoDh2Tthi3jCyoz/tc cbna7P7ofo/kLx2yqHWH2Leh5TvPmUpG0IMZfcChEhyVbUr02MelTTMuhTlAdX4U fIASmFDHQWe4oIBhVKZsTh/gnQ4H6cm6M+f+wFUoLAKApxn1ntxVUwOXewdI/5n7 N4okxFnMUBBjjqqpGrCEGob5X7uxUG6k0QrM1XF+H6cbfPVTbiJfyyvm1HxdrtbC xkzlBQHZ7Vf8wSN5/PrIJIOV87VqUQHQd9bpEqH5GoP7ghu5sJf0dgYzQ0mg/wu1 +rUCAwEAAaOBgDB+MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G A1UdDgQWBBTLD8bfQkPMPcu1SCOhGnqmKrs0aDAfBgNVHSMEGDAWgBTLD8bfQkPM Pcu1SCOhGnqmKrs0aDAbBgNVHREEFDASgRBpbmZvQGUtc3ppZ25vLmh1MA0GCSqG SIb3DQEBCwUAA4IBAQDJ0Q5eLtXMs3w+y/w9/w0olZMEyL/azXm4Q5DwpL7v8u8h mLzU1F0G9u5C7DBsoKqpyvGvivo/C3NqPuouQH4frlRheesuCDfXI/OMn74dseGk ddug4lQUsbocKaQY9hK6ohQU4zE1yED/t+AFdlfBHFny+L/k7SViXITwfn4fs775 tyERzAMBVnCnEJIeGzSBHq2cGsMEPO0CYdYeBvNfOofyK/FFh+U9rNHHV4S9a67c 2Pm2G2JwCz02yULyMtd6YebS2z3PyKnJm9zbWETXbzivf3jTo60adbocwTZ8jx5t HMN1Rq41Bab2XD0h7lbwyYIiLXpUq3DDfSJlgnCW -----END CERTIFICATE----- E-Guven Kok Elektronik Sertifika Hizmet Saglayicisi =================================================== -----BEGIN CERTIFICATE----- MIIDtjCCAp6gAwIBAgIQRJmNPMADJ72cdpW56tustTANBgkqhkiG9w0BAQUFADB1 MQswCQYDVQQGEwJUUjEoMCYGA1UEChMfRWxla3Ryb25payBCaWxnaSBHdXZlbmxp Z2kgQS5TLjE8MDoGA1UEAxMzZS1HdXZlbiBLb2sgRWxla3Ryb25payBTZXJ0aWZp a2EgSGl6bWV0IFNhZ2xheWljaXNpMB4XDTA3MDEwNDExMzI0OFoXDTE3MDEwNDEx MzI0OFowdTELMAkGA1UEBhMCVFIxKDAmBgNVBAoTH0VsZWt0cm9uaWsgQmlsZ2kg R3V2ZW5saWdpIEEuUy4xPDA6BgNVBAMTM2UtR3V2ZW4gS29rIEVsZWt0cm9uaWsg 
U2VydGlmaWthIEhpem1ldCBTYWdsYXlpY2lzaTCCASIwDQYJKoZIhvcNAQEBBQAD ggEPADCCAQoCggEBAMMSIJ6wXgBljU5Gu4Bc6SwGl9XzcslwuedLZYDBS75+PNdU MZTe1RK6UxYC6lhj71vY8+0qGqpxSKPcEC1fX+tcS5yWCEIlKBHMilpiAVDV6wlT L/jDj/6z/P2douNffb7tC+Bg62nsM+3YjfsSSYMAyYuXjDtzKjKzEve5TfL0TW3H 5tYmNwjy2f1rXKPlSFxYvEK+A1qBuhw1DADT9SN+cTAIJjjcJRFHLfO6IxClv7wC 90Nex/6wN1CZew+TzuZDLMN+DfIcQ2Zgy2ExR4ejT669VmxMvLz4Bcpk9Ok0oSy1 c+HCPujIyTQlCFzz7abHlJ+tiEMl1+E5YP6sOVkCAwEAAaNCMEAwDgYDVR0PAQH/ BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJ/uRLOU1fqRTy7ZVZoE VtstxNulMA0GCSqGSIb3DQEBBQUAA4IBAQB/X7lTW2M9dTLn+sR0GstG30ZpHFLP qk/CaOv/gKlR6D1id4k9CnU58W5dF4dvaAXBlGzZXd/aslnLpRCKysw5zZ/rTt5S /wzw9JKp8mxTq5vSR6AfdPebmvEvFZ96ZDAYBzwqD2fK/A+JYZ1lpTzlvBNbCNvj /+27BrtqBrF6T2XGgv0enIu1De5Iu7i9qgi0+6N8y5/NkHZchpZ4Vwpm+Vganf2X KWDeEaaQHBkc7gGWIjQ0LpH5t8Qn0Xvmv/uARFoW5evg1Ao4vOSR49XrXMGs3xtq fJ7lddK2l4fbzIcrQzqECK+rPNv3PGYxhrCdU3nt+CPeQuMtgvEP5fqX -----END CERTIFICATE----- GlobalSign Root CA - R3 ======================= -----BEGIN CERTIFICATE----- MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4 MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8 RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK 6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH WD9f -----END CERTIFICATE----- TC TrustCenter Universal CA III =============================== -----BEGIN CERTIFICATE----- MIID4TCCAsmgAwIBAgIOYyUAAQACFI0zFQLkbPQwDQYJKoZIhvcNAQEFBQAwezEL MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxJDAiBgNV BAsTG1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQTEoMCYGA1UEAxMfVEMgVHJ1 c3RDZW50ZXIgVW5pdmVyc2FsIENBIElJSTAeFw0wOTA5MDkwODE1MjdaFw0yOTEy MzEyMzU5NTlaMHsxCzAJBgNVBAYTAkRFMRwwGgYDVQQKExNUQyBUcnVzdENlbnRl ciBHbWJIMSQwIgYDVQQLExtUQyBUcnVzdENlbnRlciBVbml2ZXJzYWwgQ0ExKDAm BgNVBAMTH1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQSBJSUkwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDC2pxisLlxErALyBpXsq6DFJmzNEubkKLF 5+cvAqBNLaT6hdqbJYUtQCggbergvbFIgyIpRJ9Og+41URNzdNW88jBmlFPAQDYv DIRlzg9uwliT6CwLOunBjvvya8o84pxOjuT5fdMnnxvVZ3iHLX8LR7PH6MlIfK8v zArZQe+f/prhsq75U7Xl6UafYOPfjdN/+5Z+s7Vy+EutCHnNaYlAJ/Uqwa1D7KRT yGG299J5KmcYdkhtWyUB0SbFt1dpIxVbYYqt8Bst2a9c8SaQaanVDED1M4BDj5yj dipFtK+/fz6HP3bFzSreIMUWWMv5G/UPyw0RUmS40nZid4PxWJ//AgMBAAGjYzBh MB8GA1UdIwQYMBaAFFbn4VslQ4Dg9ozhcbyO5YAvxEjiMA8GA1UdEwEB/wQFMAMB Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRW5+FbJUOA4PaM4XG8juWAL8RI 4jANBgkqhkiG9w0BAQUFAAOCAQEAg8ev6n9NCjw5sWi+e22JLumzCecYV42Fmhfz dkJQEw/HkG8zrcVJYCtsSVgZ1OK+t7+rSbyUyKu+KGwWaODIl0YgoGhnYIg5IFHY aAERzqf2EQf27OysGh+yZm5WZ2B6dF7AbZc2rrUNXWZzwCUyRdhKBgePxLcHsU0G DeGl6/R1yrqc0L2z0zIkTO5+4nYES0lT2PLpVDP85XEfPRRclkvxOvIAu2y0+pZV 
CIgJwcyRGSmwIC3/yzikQOEXvnlhgP8HA4ZMTnsGnxGGjYnuJ8Tb4rwZjgvDwxPH LQNjO9Po5KIqwoIIlBZU8O8fJ5AluA0OKBtHd0e9HKgl8ZS0Zg== -----END CERTIFICATE----- Autoridad de Certificacion Firmaprofesional CIF A62634068 ========================================================= -----BEGIN CERTIFICATE----- MIIGFDCCA/ygAwIBAgIIU+w77vuySF8wDQYJKoZIhvcNAQEFBQAwUTELMAkGA1UE BhMCRVMxQjBABgNVBAMMOUF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1h cHJvZmVzaW9uYWwgQ0lGIEE2MjYzNDA2ODAeFw0wOTA1MjAwODM4MTVaFw0zMDEy MzEwODM4MTVaMFExCzAJBgNVBAYTAkVTMUIwQAYDVQQDDDlBdXRvcmlkYWQgZGUg Q2VydGlmaWNhY2lvbiBGaXJtYXByb2Zlc2lvbmFsIENJRiBBNjI2MzQwNjgwggIi MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKlmuO6vj78aI14H9M2uDDUtd9 thDIAl6zQyrET2qyyhxdKJp4ERppWVevtSBC5IsP5t9bpgOSL/UR5GLXMnE42QQM cas9UX4PB99jBVzpv5RvwSmCwLTaUbDBPLutN0pcyvFLNg4kq7/DhHf9qFD0sefG L9ItWY16Ck6WaVICqjaY7Pz6FIMMNx/Jkjd/14Et5cS54D40/mf0PmbR0/RAz15i NA9wBj4gGFrO93IbJWyTdBSTo3OxDqqHECNZXyAFGUftaI6SEspd/NYrspI8IM/h X68gvqB2f3bl7BqGYTM+53u0P6APjqK5am+5hyZvQWyIplD9amML9ZMWGxmPsu2b m8mQ9QEM3xk9Dz44I8kvjwzRAv4bVdZO0I08r0+k8/6vKtMFnXkIoctXMbScyJCy Z/QYFpM6/EfY0XiWMR+6KwxfXZmtY4laJCB22N/9q06mIqqdXuYnin1oKaPnirja EbsXLZmdEyRG98Xi2J+Of8ePdG1asuhy9azuJBCtLxTa/y2aRnFHvkLfuwHb9H/T KI8xWVvTyQKmtFLKbpf7Q8UIJm+K9Lv9nyiqDdVF8xM6HdjAeI9BZzwelGSuewvF 6NkBiDkal4ZkQdU7hwxu+g/GvUgUvzlN1J5Bto+WHWOWk9mVBngxaJ43BjuAiUVh OSPHG0SjFeUc+JIwuwIDAQABo4HvMIHsMBIGA1UdEwEB/wQIMAYBAf8CAQEwDgYD VR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRlzeurNR4APn7VdMActHNHDhpkLzCBpgYD VR0gBIGeMIGbMIGYBgRVHSAAMIGPMC8GCCsGAQUFBwIBFiNodHRwOi8vd3d3LmZp cm1hcHJvZmVzaW9uYWwuY29tL2NwczBcBggrBgEFBQcCAjBQHk4AUABhAHMAZQBv ACAAZABlACAAbABhACAAQgBvAG4AYQBuAG8AdgBhACAANAA3ACAAQgBhAHIAYwBl AGwAbwBuAGEAIAAwADgAMAAxADcwDQYJKoZIhvcNAQEFBQADggIBABd9oPm03cXF 661LJLWhAqvdpYhKsg9VSytXjDvlMd3+xDLx51tkljYyGOylMnfX40S2wBEqgLk9 am58m9Ot/MPWo+ZkKXzR4Tgegiv/J2Wv+xYVxC5xhOW1//qkR71kMrv2JYSiJ0L1 ILDCExARzRAVukKQKtJE4ZYm6zFIEv0q2skGz3QeqUvVhyj5eTSSPi5E6PaPT481 PyWzOdxjKpBrIF/EUhJOlywqrJ2X3kjyo2bbwtKDlaZmp54lD+kLM5FlClrD2VQS 3a/DTg4fJl4N3LON7NWBcN7STyQF82xO9UxJZo3R/9ILJUFI/lGExkKvgATP0H5k SeTy36LssUzAKh3ntLFlosS88Zj0qnAHY7S42jtM+kAiMFsRpvAFDsYCA0irhpuF 3dvd6qJ2gHN99ZwExEWN57kci57q13XRcrHedUTnQn3iV2t93Jm8PYMo6oCTjcVM ZcFwgbg4/EMxsvYDNEeyrPsiBsse3RdHHF9mudMaotoRsaS8I8nkvof/uZS2+F0g StRf571oe2XyFR7SOqkt6dhrJKyXWERHrVkY8SFlcN7ONGCoQPHzPKTDKCOM/icz Q0CgFzzr6juwcqajuUpLXhZI9LK8yIySxZ2frHI2vDSANGupi5LAuBft7HZT9SQB jLMi6Et8Vcad+qMUu2WFbm5PEn4KPJ2V -----END CERTIFICATE----- Izenpe.com ========== -----BEGIN CERTIFICATE----- MIIF8TCCA9mgAwIBAgIQALC3WhZIX7/hy/WL1xnmfTANBgkqhkiG9w0BAQsFADA4 MQswCQYDVQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6 ZW5wZS5jb20wHhcNMDcxMjEzMTMwODI4WhcNMzcxMjEzMDgyNzI1WjA4MQswCQYD VQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6ZW5wZS5j b20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDJ03rKDx6sp4boFmVq scIbRTJxldn+EFvMr+eleQGPicPK8lVx93e+d5TzcqQsRNiekpsUOqHnJJAKClaO xdgmlOHZSOEtPtoKct2jmRXagaKH9HtuJneJWK3W6wyyQXpzbm3benhB6QiIEn6H LmYRY2xU+zydcsC8Lv/Ct90NduM61/e0aL6i9eOBbsFGb12N4E3GVFWJGjMxCrFX uaOKmMPsOzTFlUFpfnXCPCDFYbpRR6AgkJOhkEvzTnyFRVSa0QUmQbC1TR0zvsQD yCV8wXDbO/QJLVQnSKwv4cSsPsjLkkxTOTcj7NMB+eAJRE1NZMDhDVqHIrytG6P+ JrUV86f8hBnp7KGItERphIPzidF0BqnMC9bC3ieFUCbKF7jJeodWLBoBHmy+E60Q rLUk9TiRodZL2vG70t5HtfG8gfZZa88ZU+mNFctKy6lvROUbQc/hhqfK0GqfvEyN BjNaooXlkDWgYlwWTvDjovoDGrQscbNYLN57C9saD+veIR8GdwYDsMnvmfzAuU8L hij+0rnq49qlw0dpEuDb8PYZi+17cNcC1u2HGCgsBCRMd+RIihrGO5rUD8r6ddIB QFqNeb+Lz0vPqhbBleStTIo+F5HUsWLlguWABKQDfo2/2n+iD5dPDNMN+9fR5XJ+ HMh3/1uaD7euBUbl8agW7EekFwIDAQABo4H2MIHzMIGwBgNVHREEgagwgaWBD2lu 
Zm9AaXplbnBlLmNvbaSBkTCBjjFHMEUGA1UECgw+SVpFTlBFIFMuQS4gLSBDSUYg QTAxMzM3MjYwLVJNZXJjLlZpdG9yaWEtR2FzdGVpeiBUMTA1NSBGNjIgUzgxQzBB BgNVBAkMOkF2ZGEgZGVsIE1lZGl0ZXJyYW5lbyBFdG9yYmlkZWEgMTQgLSAwMTAx MCBWaXRvcmlhLUdhc3RlaXowDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC AQYwHQYDVR0OBBYEFB0cZQ6o8iV7tJHP5LGx5r1VdGwFMA0GCSqGSIb3DQEBCwUA A4ICAQB4pgwWSp9MiDrAyw6lFn2fuUhfGI8NYjb2zRlrrKvV9pF9rnHzP7MOeIWb laQnIUdCSnxIOvVFfLMMjlF4rJUT3sb9fbgakEyrkgPH7UIBzg/YsfqikuFgba56 awmqxinuaElnMIAkejEWOVt+8Rwu3WwJrfIxwYJOubv5vr8qhT/AQKM6WfxZSzwo JNu0FXWuDYi6LnPAvViH5ULy617uHjAimcs30cQhbIHsvm0m5hzkQiCeR7Csg1lw LDXWrzY0tM07+DKo7+N4ifuNRSzanLh+QBxh5z6ikixL8s36mLYp//Pye6kfLqCT VyvehQP5aTfLnnhqBbTFMXiJ7HqnheG5ezzevh55hM6fcA5ZwjUukCox2eRFekGk LhObNA5me0mrZJfQRsN5nXJQY6aYWwa9SG3YOYNw6DXwBdGqvOPbyALqfP2C2sJb UjWumDqtujWTI6cfSN01RpiyEGjkpTHCClguGYEQyVB1/OpaFs4R1+7vUIgtYf8/ QnMFlEPVjjxOAToZpR9GTnfQXeWBIiGH/pR9hNiTrdZoQ0iy2+tzJOeRf1SktoA+ naM8THLCV8Sg1Mw4J87VBp6iSNnpn86CcDaTmjvfliHjWbcM2pE38P1ZWrOZyGls QyYBNWNgVYkDOnXYukrZVP/u3oDYLdE41V4tC5h9Pmzb/CaIxw== -----END CERTIFICATE----- Chambers of Commerce Root - 2008 ================================ -----BEGIN CERTIFICATE----- MIIHTzCCBTegAwIBAgIJAKPaQn6ksa7aMA0GCSqGSIb3DQEBBQUAMIGuMQswCQYD VQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3VycmVudCBhZGRyZXNzIGF0 IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAGA1UEBRMJQTgyNzQzMjg3 MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xKTAnBgNVBAMTIENoYW1iZXJz IG9mIENvbW1lcmNlIFJvb3QgLSAyMDA4MB4XDTA4MDgwMTEyMjk1MFoXDTM4MDcz MTEyMjk1MFowga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpNYWRyaWQgKHNlZSBj dXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29tL2FkZHJlc3MpMRIw EAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVyZmlybWEgUy5BLjEp MCcGA1UEAxMgQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdCAtIDIwMDgwggIiMA0G CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCvAMtwNyuAWko6bHiUfaN/Gh/2NdW9 28sNRHI+JrKQUrpjOyhYb6WzbZSm891kDFX29ufyIiKAXuFixrYp4YFs8r/lfTJq VKAyGVn+H4vXPWCGhSRv4xGzdz4gljUha7MI2XAuZPeEklPWDrCQiorjh40G072Q DuKZoRuGDtqaCrsLYVAGUvGef3bsyw/QHg3PmTA9HMRFEFis1tPo1+XqxQEHd9ZR 5gN/ikilTWh1uem8nk4ZcfUyS5xtYBkL+8ydddy/Js2Pk3g5eXNeJQ7KXOt3EgfL ZEFHcpOrUMPrCXZkNNI5t3YRCQ12RcSprj1qr7V9ZS+UWBDsXHyvfuK2GNnQm05a Sd+pZgvMPMZ4fKecHePOjlO+Bd5gD2vlGts/4+EhySnB8esHnFIbAURRPHsl18Tl UlRdJQfKFiC4reRB7noI/plvg6aRArBsNlVq5331lubKgdaX8ZSD6e2wsWsSaR6s +12pxZjptFtYer49okQ6Y1nUCyXeG0+95QGezdIp1Z8XGQpvvwyQ0wlf2eOKNcx5 Wk0ZN5K3xMGtr/R5JJqyAQuxr1yW84Ay+1w9mPGgP0revq+ULtlVmhduYJ1jbLhj ya6BXBg14JC7vjxPNyK5fuvPnnchpj04gftI2jE9K+OJ9dC1vX7gUMQSibMjmhAx hduub+84Mxh2EQIDAQABo4IBbDCCAWgwEgYDVR0TAQH/BAgwBgEB/wIBDDAdBgNV HQ4EFgQU+SSsD7K1+HnA+mCIG8TZTQKeFxkwgeMGA1UdIwSB2zCB2IAU+SSsD7K1 +HnA+mCIG8TZTQKeFxmhgbSkgbEwga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpN YWRyaWQgKHNlZSBjdXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29t L2FkZHJlc3MpMRIwEAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVy ZmlybWEgUy5BLjEpMCcGA1UEAxMgQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdCAt IDIwMDiCCQCj2kJ+pLGu2jAOBgNVHQ8BAf8EBAMCAQYwPQYDVR0gBDYwNDAyBgRV HSAAMCowKAYIKwYBBQUHAgEWHGh0dHA6Ly9wb2xpY3kuY2FtZXJmaXJtYS5jb20w DQYJKoZIhvcNAQEFBQADggIBAJASryI1wqM58C7e6bXpeHxIvj99RZJe6dqxGfwW PJ+0W2aeaufDuV2I6A+tzyMP3iU6XsxPpcG1Lawk0lgH3qLPaYRgM+gQDROpI9CF 5Y57pp49chNyM/WqfcZjHwj0/gF/JM8rLFQJ3uIrbZLGOU8W6jx+ekbURWpGqOt1 glanq6B8aBMz9p0w8G8nOSQjKpD9kCk18pPfNKXG9/jvjA9iSnyu0/VU+I22mlaH FoI6M6taIgj3grrqLuBHmrS1RaMFO9ncLkVAO+rcf+g769HsJtg1pDDFOqxXnrN2 pSB7+R5KBWIBpih1YJeSDW4+TTdDDZIVnBgizVGZoCkaPF+KMjNbMMeJL0eYD6MD xvbxrN8y8NmBGuScvfaAFPDRLLmF9dijscilIeUcE5fuDr3fKanvNFNb0+RqE4QG tjICxFKuItLcsiFCGtpA8CnJ7AoMXOLQusxI0zcKzBIKinmwPQN/aUv0NCB9szTq jktk9T79syNnFQ0EuPAtwQlRPLJsFfClI9eDdOTlLsn+mCdCxqvGnrDQWzilm1De 
fhiYtUU79nm06PcaewaD+9CL2rvHvRirCG88gGtAPxkZumWK5r7VXNM21+9AUiRg OGcEMeyP84LG3rlV8zsxkVrctQgVrXYlCg17LofiDKYGvCYQbTed7N14jHyAxfDZ d0jQ -----END CERTIFICATE----- Global Chambersign Root - 2008 ============================== -----BEGIN CERTIFICATE----- MIIHSTCCBTGgAwIBAgIJAMnN0+nVfSPOMA0GCSqGSIb3DQEBBQUAMIGsMQswCQYD VQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3VycmVudCBhZGRyZXNzIGF0 IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAGA1UEBRMJQTgyNzQzMjg3 MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xJzAlBgNVBAMTHkdsb2JhbCBD aGFtYmVyc2lnbiBSb290IC0gMjAwODAeFw0wODA4MDExMjMxNDBaFw0zODA3MzEx MjMxNDBaMIGsMQswCQYDVQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3Vy cmVudCBhZGRyZXNzIGF0IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAG A1UEBRMJQTgyNzQzMjg3MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xJzAl BgNVBAMTHkdsb2JhbCBDaGFtYmVyc2lnbiBSb290IC0gMjAwODCCAiIwDQYJKoZI hvcNAQEBBQADggIPADCCAgoCggIBAMDfVtPkOpt2RbQT2//BthmLN0EYlVJH6xed KYiONWwGMi5HYvNJBL99RDaxccy9Wglz1dmFRP+RVyXfXjaOcNFccUMd2drvXNL7 G706tcuto8xEpw2uIRU/uXpbknXYpBI4iRmKt4DS4jJvVpyR1ogQC7N0ZJJ0YPP2 zxhPYLIj0Mc7zmFLmY/CDNBAspjcDahOo7kKrmCgrUVSY7pmvWjg+b4aqIG7HkF4 ddPB/gBVsIdU6CeQNR1MM62X/JcumIS/LMmjv9GYERTtY/jKmIhYF5ntRQOXfjyG HoiMvvKRhI9lNNgATH23MRdaKXoKGCQwoze1eqkBfSbW+Q6OWfH9GzO1KTsXO0G2 Id3UwD2ln58fQ1DJu7xsepeY7s2MH/ucUa6LcL0nn3HAa6x9kGbo1106DbDVwo3V yJ2dwW3Q0L9R5OP4wzg2rtandeavhENdk5IMagfeOx2YItaswTXbo6Al/3K1dh3e beksZixShNBFks4c5eUzHdwHU1SjqoI7mjcv3N2gZOnm3b2u/GSFHTynyQbehP9r 6GsaPMWis0L7iwk+XwhSx2LE1AVxv8Rk5Pihg+g+EpuoHtQ2TS9x9o0o9oOpE9Jh wZG7SMA0j0GMS0zbaRL/UJScIINZc+18ofLx/d33SdNDWKBWY8o9PeU1VlnpDsog zCtLkykPAgMBAAGjggFqMIIBZjASBgNVHRMBAf8ECDAGAQH/AgEMMB0GA1UdDgQW BBS5CcqcHtvTbDprru1U8VuTBjUuXjCB4QYDVR0jBIHZMIHWgBS5CcqcHtvTbDpr ru1U8VuTBjUuXqGBsqSBrzCBrDELMAkGA1UEBhMCRVUxQzBBBgNVBAcTOk1hZHJp ZCAoc2VlIGN1cnJlbnQgYWRkcmVzcyBhdCB3d3cuY2FtZXJmaXJtYS5jb20vYWRk cmVzcykxEjAQBgNVBAUTCUE4Mjc0MzI4NzEbMBkGA1UEChMSQUMgQ2FtZXJmaXJt YSBTLkEuMScwJQYDVQQDEx5HbG9iYWwgQ2hhbWJlcnNpZ24gUm9vdCAtIDIwMDiC CQDJzdPp1X0jzjAOBgNVHQ8BAf8EBAMCAQYwPQYDVR0gBDYwNDAyBgRVHSAAMCow KAYIKwYBBQUHAgEWHGh0dHA6Ly9wb2xpY3kuY2FtZXJmaXJtYS5jb20wDQYJKoZI hvcNAQEFBQADggIBAICIf3DekijZBZRG/5BXqfEv3xoNa/p8DhxJJHkn2EaqbylZ UohwEurdPfWbU1Rv4WCiqAm57OtZfMY18dwY6fFn5a+6ReAJ3spED8IXDneRRXoz X1+WLGiLwUePmJs9wOzL9dWCkoQ10b42OFZyMVtHLaoXpGNR6woBrX/sdZ7LoR/x fxKxueRkf2fWIyr0uDldmOghp+G9PUIadJpwr2hsUF1Jz//7Dl3mLEfXgTpZALVz a2Mg9jFFCDkO9HB+QHBaP9BrQql0PSgvAm11cpUJjUhjxsYjV5KTXjXBjfkK9yyd Yhz2rXzdpjEetrHHfoUm+qRqtdpjMNHvkzeyZi99Bffnt0uYlDXA2TopwZ2yUDMd SqlapskD7+3056huirRXhOukP9DuqqqHW2Pok+JrqNS4cnhrG+055F3Lm6qH1U9O AP7Zap88MQ8oAgF9mOinsKJknnn4SPIVqczmyETrP3iZ8ntxPjzxmKfFGBI/5rso M0LpRQp8bfKGeS/Fghl9CYl8slR2iK7ewfPM4W7bMdaTrpmg7yVqc5iJWzouE4ge v8CSlDQb4ye3ix5vQv/n6TebUB0tovkC7stYWDpxvGjjqsGvHCgfotwjZT+B6q6Z 09gwzxMNTxXJhLynSC34MCN32EZLeW32jO06f2ARePTpm67VVMB0gNELQp/B -----END CERTIFICATE----- Go Daddy Root Certificate Authority - G2 ======================================== -----BEGIN CERTIFICATE----- MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH 
/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu 9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo 2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI 4uJEvlz36hz1 -----END CERTIFICATE----- Starfield Root Certificate Authority - G2 ========================================= -----BEGIN CERTIFICATE----- MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6 b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1 HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0 HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3 4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg 8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1 mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0 -----END CERTIFICATE----- Starfield Services Root Certificate Authority - G2 ================================================== -----BEGIN CERTIFICATE----- MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5 MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2 8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk 6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z 
qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn 0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN sSi6 -----END CERTIFICATE----- AffirmTrust Commercial ====================== -----BEGIN CERTIFICATE----- MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1 yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/ nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8= -----END CERTIFICATE----- AffirmTrust Networking ====================== -----BEGIN CERTIFICATE----- MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp 6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48 x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s= -----END CERTIFICATE----- AffirmTrust Premium =================== -----BEGIN CERTIFICATE----- MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ +jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5 HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7 70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG 
V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S 5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2 KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B 8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc 0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8 GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e KeC2uAloGRwYQw== -----END CERTIFICATE----- AffirmTrust Premium ECC ======================= -----BEGIN CERTIFICATE----- MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D 0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9 ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ== -----END CERTIFICATE----- Certum Trusted Network CA ========================= -----BEGIN CERTIFICATE----- MIIDuzCCAqOgAwIBAgIDBETAMA0GCSqGSIb3DQEBBQUAMH4xCzAJBgNVBAYTAlBM MSIwIAYDVQQKExlVbml6ZXRvIFRlY2hub2xvZ2llcyBTLkEuMScwJQYDVQQLEx5D ZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxIjAgBgNVBAMTGUNlcnR1bSBU cnVzdGVkIE5ldHdvcmsgQ0EwHhcNMDgxMDIyMTIwNzM3WhcNMjkxMjMxMTIwNzM3 WjB+MQswCQYDVQQGEwJQTDEiMCAGA1UEChMZVW5pemV0byBUZWNobm9sb2dpZXMg Uy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MSIw IAYDVQQDExlDZXJ0dW0gVHJ1c3RlZCBOZXR3b3JrIENBMIIBIjANBgkqhkiG9w0B AQEFAAOCAQ8AMIIBCgKCAQEA4/t9o3K6wvDJFIf1awFO4W5AB7ptJ11/91sts1rH UV+rpDKmYYe2bg+G0jACl/jXaVehGDldamR5xgFZrDwxSjh80gTSSyjoIF87B6LM TXPb865Px1bVWqeWifrzq2jUI4ZZJ88JJ7ysbnKDHDBy3+Ci6dLhdHUZvSqeexVU BBvXQzmtVSjF4hq79MDkrjhJM8x2hZ85RdKknvISjFH4fOQtf/WsX+sWn7Et0brM kUJ3TCXJkDhv2/DM+44el1k+1WBO5gUo7Ul5E0u6SNsv+XLTOcr+H9g0cvW0QM8x AcPs3hEtF10fuFDRXhmnad4HMyjKUJX5p1TLVIZQRan5SQIDAQABo0IwQDAPBgNV HRMBAf8EBTADAQH/MB0GA1UdDgQWBBQIds3LB/8k9sXN7buQvOKEN0Z19zAOBgNV HQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBAKaorSLOAT2mo/9i0Eidi15y sHhE49wcrwn9I0j6vSrEuVUEtRCjjSfeC4Jj0O7eDDd5QVsisrCaQVymcODU0HfL I9MA4GxWL+FpDQ3Zqr8hgVDZBqWo/5U30Kr+4rP1mS1FhIrlQgnXdAIv94nYmem8 J9RHjboNRhx3zxSkHLmkMcScKHQDNP8zGSal6Q10tz6XxnboJ5ajZt3hrvJBW8qY VoNzcOSGGtIxQbovvi0TWnZvTuhOgQ4/WwMioBK+ZlgRSssDxLQqKi2WF+A5VLxI 03YnnZotBqbJ7DnSq9ufmgsnAjUpsUCV5/nonFWIGUbWtzT1fs45mtk48VH3Tyw= -----END CERTIFICATE----- Certinomis - Autorité Racine ============================= -----BEGIN CERTIFICATE----- MIIFnDCCA4SgAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJGUjET 
MBEGA1UEChMKQ2VydGlub21pczEXMBUGA1UECxMOMDAwMiA0MzM5OTg5MDMxJjAk BgNVBAMMHUNlcnRpbm9taXMgLSBBdXRvcml0w6kgUmFjaW5lMB4XDTA4MDkxNzA4 Mjg1OVoXDTI4MDkxNzA4Mjg1OVowYzELMAkGA1UEBhMCRlIxEzARBgNVBAoTCkNl cnRpbm9taXMxFzAVBgNVBAsTDjAwMDIgNDMzOTk4OTAzMSYwJAYDVQQDDB1DZXJ0 aW5vbWlzIC0gQXV0b3JpdMOpIFJhY2luZTCCAiIwDQYJKoZIhvcNAQEBBQADggIP ADCCAgoCggIBAJ2Fn4bT46/HsmtuM+Cet0I0VZ35gb5j2CN2DpdUzZlMGvE5x4jY F1AMnmHawE5V3udauHpOd4cN5bjr+p5eex7Ezyh0x5P1FMYiKAT5kcOrJ3NqDi5N 8y4oH3DfVS9O7cdxbwlyLu3VMpfQ8Vh30WC8Tl7bmoT2R2FFK/ZQpn9qcSdIhDWe rP5pqZ56XjUl+rSnSTV3lqc2W+HN3yNw2F1MpQiD8aYkOBOo7C+ooWfHpi2GR+6K /OybDnT0K0kCe5B1jPyZOQE51kqJ5Z52qz6WKDgmi92NjMD2AR5vpTESOH2VwnHu 7XSu5DaiQ3XV8QCb4uTXzEIDS3h65X27uK4uIJPT5GHfceF2Z5c/tt9qc1pkIuVC 28+BA5PY9OMQ4HL2AHCs8MF6DwV/zzRpRbWT5BnbUhYjBYkOjUjkJW+zeL9i9Qf6 lSTClrLooyPCXQP8w9PlfMl1I9f09bze5N/NgL+RiH2nE7Q5uiy6vdFrzPOlKO1E nn1So2+WLhl+HPNbxxaOu2B9d2ZHVIIAEWBsMsGoOBvrbpgT1u449fCfDu/+MYHB 0iSVL1N6aaLwD4ZFjliCK0wi1F6g530mJ0jfJUaNSih8hp75mxpZuWW/Bd22Ql09 5gBIgl4g9xGC3srYn+Y3RyYe63j3YcNBZFgCQfna4NH4+ej9Uji29YnfAgMBAAGj WzBZMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBQN jLZh2kS40RR9w759XkjwzspqsDAXBgNVHSAEEDAOMAwGCiqBegFWAgIAAQEwDQYJ KoZIhvcNAQEFBQADggIBACQ+YAZ+He86PtvqrxyaLAEL9MW12Ukx9F1BjYkMTv9s ov3/4gbIOZ/xWqndIlgVqIrTseYyCYIDbNc/CMf4uboAbbnW/FIyXaR/pDGUu7ZM OH8oMDX/nyNTt7buFHAAQCvaR6s0fl6nVjBhK4tDrP22iCj1a7Y+YEq6QpA0Z43q 619FVDsXrIvkxmUP7tCMXWY5zjKn2BCXwH40nJ+U8/aGH88bc62UeYdocMMzpXDn 2NU4lG9jeeu/Cg4I58UvD0KgKxRA/yHgBcUn4YQRE7rWhh1BCxMjidPJC+iKunqj o3M3NYB9Ergzd0A4wPpeMNLytqOx1qKVl4GbUu1pTP+A5FPbVFsDbVRfsbjvJL1v nxHDx2TCDyhihWZeGnuyt++uNckZM6i4J9szVb9o4XVIRFb7zdNIu0eJOqxp9YDG 5ERQL1TEqkPFMTFYvZbF6nVsmnWxTfj3l/+WFvKXTej28xH5On2KOG4Ey+HTRRWq pdEdnV1j6CTmNhTih60bWfVEm/vXd3wfAXBioSAaosUaKPQhA+4u2cGA6rnZgtZb dsLLO7XSAPCjDuGtbkD326C00EauFddEwk01+dIL8hf2rGbVJLJP0RyZwG71fet0 BLj5TXcJ17TPBzAJ8bgAVtkXFhYKK4bfjwEZGuW7gmP/vgt2Fl43N+bYdJeimUV5 -----END CERTIFICATE----- Root CA Generalitat Valenciana ============================== -----BEGIN CERTIFICATE----- MIIGizCCBXOgAwIBAgIEO0XlaDANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJF UzEfMB0GA1UEChMWR2VuZXJhbGl0YXQgVmFsZW5jaWFuYTEPMA0GA1UECxMGUEtJ R1ZBMScwJQYDVQQDEx5Sb290IENBIEdlbmVyYWxpdGF0IFZhbGVuY2lhbmEwHhcN MDEwNzA2MTYyMjQ3WhcNMjEwNzAxMTUyMjQ3WjBoMQswCQYDVQQGEwJFUzEfMB0G A1UEChMWR2VuZXJhbGl0YXQgVmFsZW5jaWFuYTEPMA0GA1UECxMGUEtJR1ZBMScw JQYDVQQDEx5Sb290IENBIEdlbmVyYWxpdGF0IFZhbGVuY2lhbmEwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDGKqtXETcvIorKA3Qdyu0togu8M1JAJke+ WmmmO3I2F0zo37i7L3bhQEZ0ZQKQUgi0/6iMweDHiVYQOTPvaLRfX9ptI6GJXiKj SgbwJ/BXufjpTjJ3Cj9BZPPrZe52/lSqfR0grvPXdMIKX/UIKFIIzFVd0g/bmoGl u6GzwZTNVOAydTGRGmKy3nXiz0+J2ZGQD0EbtFpKd71ng+CT516nDOeB0/RSrFOy A8dEJvt55cs0YFAQexvba9dHq198aMpunUEDEO5rmXteJajCq+TA81yc477OMUxk Hl6AovWDfgzWyoxVjr7gvkkHD6MkQXpYHYTqWBLI4bft75PelAgxAgMBAAGjggM7 MIIDNzAyBggrBgEFBQcBAQQmMCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnBr aS5ndmEuZXMwEgYDVR0TAQH/BAgwBgEB/wIBAjCCAjQGA1UdIASCAiswggInMIIC IwYKKwYBBAG/VQIBADCCAhMwggHoBggrBgEFBQcCAjCCAdoeggHWAEEAdQB0AG8A cgBpAGQAYQBkACAAZABlACAAQwBlAHIAdABpAGYAaQBjAGEAYwBpAPMAbgAgAFIA YQDtAHoAIABkAGUAIABsAGEAIABHAGUAbgBlAHIAYQBsAGkAdABhAHQAIABWAGEA bABlAG4AYwBpAGEAbgBhAC4ADQAKAEwAYQAgAEQAZQBjAGwAYQByAGEAYwBpAPMA bgAgAGQAZQAgAFAAcgDhAGMAdABpAGMAYQBzACAAZABlACAAQwBlAHIAdABpAGYA aQBjAGEAYwBpAPMAbgAgAHEAdQBlACAAcgBpAGcAZQAgAGUAbAAgAGYAdQBuAGMA aQBvAG4AYQBtAGkAZQBuAHQAbwAgAGQAZQAgAGwAYQAgAHAAcgBlAHMAZQBuAHQA ZQAgAEEAdQB0AG8AcgBpAGQAYQBkACAAZABlACAAQwBlAHIAdABpAGYAaQBjAGEA YwBpAPMAbgAgAHMAZQAgAGUAbgBjAHUAZQBuAHQAcgBhACAAZQBuACAAbABhACAA 
ZABpAHIAZQBjAGMAaQDzAG4AIAB3AGUAYgAgAGgAdAB0AHAAOgAvAC8AdwB3AHcA LgBwAGsAaQAuAGcAdgBhAC4AZQBzAC8AYwBwAHMwJQYIKwYBBQUHAgEWGWh0dHA6 Ly93d3cucGtpLmd2YS5lcy9jcHMwHQYDVR0OBBYEFHs100DSHHgZZu90ECjcPk+y eAT8MIGVBgNVHSMEgY0wgYqAFHs100DSHHgZZu90ECjcPk+yeAT8oWykajBoMQsw CQYDVQQGEwJFUzEfMB0GA1UEChMWR2VuZXJhbGl0YXQgVmFsZW5jaWFuYTEPMA0G A1UECxMGUEtJR1ZBMScwJQYDVQQDEx5Sb290IENBIEdlbmVyYWxpdGF0IFZhbGVu Y2lhbmGCBDtF5WgwDQYJKoZIhvcNAQEFBQADggEBACRhTvW1yEICKrNcda3Fbcrn lD+laJWIwVTAEGmiEi8YPyVQqHxK6sYJ2fR1xkDar1CdPaUWu20xxsdzCkj+IHLt b8zog2EWRpABlUt9jppSCS/2bxzkoXHPjCpaF3ODR00PNvsETUlR4hTJZGH71BTg 9J63NI8KJr2XXPR5OkowGcytT6CYirQxlyric21+eLj4iIlPsSKRZEv1UN4D2+XF ducTZnV+ZfsBn5OHiJ35Rld8TWCvmHMTI6QgkYH60GFmuH3Rr9ZvHmw96RH9qfmC IoaZM3Fa6hlXPZHNqcCjbgcTpsnt+GijnsNacgmHKNHEc8RzGF9QdRYxn7fofMM= -----END CERTIFICATE----- A-Trust-nQual-03 ================ -----BEGIN CERTIFICATE----- MIIDzzCCAregAwIBAgIDAWweMA0GCSqGSIb3DQEBBQUAMIGNMQswCQYDVQQGEwJB VDFIMEYGA1UECgw/QS1UcnVzdCBHZXMuIGYuIFNpY2hlcmhlaXRzc3lzdGVtZSBp bSBlbGVrdHIuIERhdGVudmVya2VociBHbWJIMRkwFwYDVQQLDBBBLVRydXN0LW5R dWFsLTAzMRkwFwYDVQQDDBBBLVRydXN0LW5RdWFsLTAzMB4XDTA1MDgxNzIyMDAw MFoXDTE1MDgxNzIyMDAwMFowgY0xCzAJBgNVBAYTAkFUMUgwRgYDVQQKDD9BLVRy dXN0IEdlcy4gZi4gU2ljaGVyaGVpdHNzeXN0ZW1lIGltIGVsZWt0ci4gRGF0ZW52 ZXJrZWhyIEdtYkgxGTAXBgNVBAsMEEEtVHJ1c3QtblF1YWwtMDMxGTAXBgNVBAMM EEEtVHJ1c3QtblF1YWwtMDMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB AQCtPWFuA/OQO8BBC4SAzewqo51ru27CQoT3URThoKgtUaNR8t4j8DRE/5TrzAUj lUC5B3ilJfYKvUWG6Nm9wASOhURh73+nyfrBJcyFLGM/BWBzSQXgYHiVEEvc+RFZ znF/QJuKqiTfC0Li21a8StKlDJu3Qz7dg9MmEALP6iPESU7l0+m0iKsMrmKS1GWH 2WrX9IWf5DMiJaXlyDO6w8dB3F/GaswADm0yqLaHNgBid5seHzTLkDx4iHQF63n1 k3Flyp3HaxgtPVxO59X4PzF9j4fsCiIvI+n+u33J4PTs63zEsMMtYrWacdaxaujs 2e3Vcuy+VwHOBVWf3tFgiBCzAgMBAAGjNjA0MA8GA1UdEwEB/wQFMAMBAf8wEQYD VR0OBAoECERqlWdVeRFPMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQUFAAOC AQEAVdRU0VlIXLOThaq/Yy/kgM40ozRiPvbY7meIMQQDbwvUB/tOdQ/TLtPAF8fG KOwGDREkDg6lXb+MshOWcdzUzg4NCmgybLlBMRmrsQd7TZjTXLDR8KdCoLXEjq/+ 8T/0709GAHbrAvv5ndJAlseIOrifEXnzgGWovR/TeIGgUUw3tKZdJXDRZslo+S4R FGjxVJgIrCaSD96JntT6s3kr0qN51OyLrIdTaEJMUVF0HhsnLuP1Hyl0Te2v9+GS mYHovjrHF1D2t8b8m7CKa9aIA5GPBnc6hQLdmNVDeD/GMBWsm2vLV7eJUYs66MmE DNuxUCAKGkq6ahq97BvIxYSazQ== -----END CERTIFICATE----- TWCA Root Certification Authority ================================= -----BEGIN CERTIFICATE----- MIIDezCCAmOgAwIBAgIBATANBgkqhkiG9w0BAQUFADBfMQswCQYDVQQGEwJUVzES MBAGA1UECgwJVEFJV0FOLUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFU V0NBIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwODI4MDcyNDMz WhcNMzAxMjMxMTU1OTU5WjBfMQswCQYDVQQGEwJUVzESMBAGA1UECgwJVEFJV0FO LUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFUV0NBIFJvb3QgQ2VydGlm aWNhdGlvbiBBdXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB AQCwfnK4pAOU5qfeCTiRShFAh6d8WWQUe7UREN3+v9XAu1bihSX0NXIP+FPQQeFE AcK0HMMxQhZHhTMidrIKbw/lJVBPhYa+v5guEGcevhEFhgWQxFnQfHgQsIBct+HH K3XLfJ+utdGdIzdjp9xCoi2SBBtQwXu4PhvJVgSLL1KbralW6cH/ralYhzC2gfeX RfwZVzsrb+RH9JlF/h3x+JejiB03HFyP4HYlmlD4oFT/RJB2I9IyxsOrBr/8+7/z rX2SYgJbKdM1o5OaQ2RgXbL6Mv87BK9NQGr5x+PvI/1ry+UPizgN7gr8/g+YnzAx 3WxSZfmLgb4i4RxYA7qRG4kHAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV HRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqOFsmjd6LWvJPelSDGRjjCDWmujANBgkq hkiG9w0BAQUFAAOCAQEAPNV3PdrfibqHDAhUaiBQkr6wQT25JmSDCi/oQMCXKCeC MErJk/9q56YAf4lCmtYR5VPOL8zy2gXE/uJQxDqGfczafhAJO5I1KlOy/usrBdls XebQ79NqZp4VKIV66IIArB6nCWlWQtNoURi+VJq/REG6Sb4gumlc7rh3zc5sH62D lhh9DrUUOYTxKOkto557HnpyWoOzeW/vtPzQCqVYT0bf+215WfKEIlKuD8z7fDvn aspHYcN6+NOSBB+4IIThNlQWx0DeO4pz3N/GCUzf7Nr/1FNCocnyYh0igzyXxfkZ YiesZSLX0zzG5Y6yU8xJzrww/nsOM5D77dIUkR8Hrw== -----END 
CERTIFICATE----- Security Communication RootCA2 ============================== -----BEGIN CERTIFICATE----- MIIDdzCCAl+gAwIBAgIBADANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJKUDEl MCMGA1UEChMcU0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UECxMe U2VjdXJpdHkgQ29tbXVuaWNhdGlvbiBSb290Q0EyMB4XDTA5MDUyOTA1MDAzOVoX DTI5MDUyOTA1MDAzOVowXTELMAkGA1UEBhMCSlAxJTAjBgNVBAoTHFNFQ09NIFRy dXN0IFN5c3RlbXMgQ08uLExURC4xJzAlBgNVBAsTHlNlY3VyaXR5IENvbW11bmlj YXRpb24gUm9vdENBMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANAV OVKxUrO6xVmCxF1SrjpDZYBLx/KWvNs2l9amZIyoXvDjChz335c9S672XewhtUGr zbl+dp+++T42NKA7wfYxEUV0kz1XgMX5iZnK5atq1LXaQZAQwdbWQonCv/Q4EpVM VAX3NuRFg3sUZdbcDE3R3n4MqzvEFb46VqZab3ZpUql6ucjrappdUtAtCms1FgkQ hNBqyjoGADdH5H5XTz+L62e4iKrFvlNVspHEfbmwhRkGeC7bYRr6hfVKkaHnFtWO ojnflLhwHyg/i/xAXmODPIMqGplrz95Zajv8bxbXH/1KEOtOghY6rCcMU/Gt1SSw awNQwS08Ft1ENCcadfsCAwEAAaNCMEAwHQYDVR0OBBYEFAqFqXdlBZh8QIH4D5cs OPEK7DzPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4IBAQBMOqNErLlFsceTfsgLCkLfZOoc7llsCLqJX2rKSpWeeo8HxdpF coJxDjrSzG+ntKEju/Ykn8sX/oymzsLS28yN/HH8AynBbF0zX2S2ZTuJbxh2ePXc okgfGT+Ok+vx+hfuzU7jBBJV1uXk3fs+BXziHV7Gp7yXT2g69ekuCkO2r1dcYmh8 t/2jioSgrGK+KwmHNPBqAbubKVY8/gA3zyNs8U6qtnRGEmyR7jTV7JqR50S+kDFy 1UkC9gLl9B/rfNmWVan/7Ir5mUf/NVoCqgTLiluHcSmRvaS0eg29mvVXIwAHIRc/ SjnRBUkLp7Y3gaVdjKozXoEofKd9J+sAro03 -----END CERTIFICATE----- EC-ACC ====== -----BEGIN CERTIFICATE----- MIIFVjCCBD6gAwIBAgIQ7is969Qh3hSoYqwE893EATANBgkqhkiG9w0BAQUFADCB 8zELMAkGA1UEBhMCRVMxOzA5BgNVBAoTMkFnZW5jaWEgQ2F0YWxhbmEgZGUgQ2Vy dGlmaWNhY2lvIChOSUYgUS0wODAxMTc2LUkpMSgwJgYDVQQLEx9TZXJ2ZWlzIFB1 YmxpY3MgZGUgQ2VydGlmaWNhY2lvMTUwMwYDVQQLEyxWZWdldSBodHRwczovL3d3 dy5jYXRjZXJ0Lm5ldC92ZXJhcnJlbCAoYykwMzE1MDMGA1UECxMsSmVyYXJxdWlh IEVudGl0YXRzIGRlIENlcnRpZmljYWNpbyBDYXRhbGFuZXMxDzANBgNVBAMTBkVD LUFDQzAeFw0wMzAxMDcyMzAwMDBaFw0zMTAxMDcyMjU5NTlaMIHzMQswCQYDVQQG EwJFUzE7MDkGA1UEChMyQWdlbmNpYSBDYXRhbGFuYSBkZSBDZXJ0aWZpY2FjaW8g KE5JRiBRLTA4MDExNzYtSSkxKDAmBgNVBAsTH1NlcnZlaXMgUHVibGljcyBkZSBD ZXJ0aWZpY2FjaW8xNTAzBgNVBAsTLFZlZ2V1IGh0dHBzOi8vd3d3LmNhdGNlcnQu bmV0L3ZlcmFycmVsIChjKTAzMTUwMwYDVQQLEyxKZXJhcnF1aWEgRW50aXRhdHMg ZGUgQ2VydGlmaWNhY2lvIENhdGFsYW5lczEPMA0GA1UEAxMGRUMtQUNDMIIBIjAN BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsyLHT+KXQpWIR4NA9h0X84NzJB5R 85iKw5K4/0CQBXCHYMkAqbWUZRkiFRfCQ2xmRJoNBD45b6VLeqpjt4pEndljkYRm 4CgPukLjbo73FCeTae6RDqNfDrHrZqJyTxIThmV6PttPB/SnCWDaOkKZx7J/sxaV HMf5NLWUhdWZXqBIoH7nF2W4onW4HvPlQn2v7fOKSGRdghST2MDk/7NQcvJ29rNd QlB50JQ+awwAvthrDk4q7D7SzIKiGGUzE3eeml0aE9jD2z3Il3rucO2n5nzbcc8t lGLfbdb1OL4/pYUKGbio2Al1QnDE6u/LDsg0qBIimAy4E5S2S+zw0JDnJwIDAQAB o4HjMIHgMB0GA1UdEQQWMBSBEmVjX2FjY0BjYXRjZXJ0Lm5ldDAPBgNVHRMBAf8E BTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUoMOLRKo3pUW/l4Ba0fF4 opvpXY0wfwYDVR0gBHgwdjB0BgsrBgEEAfV4AQMBCjBlMCwGCCsGAQUFBwIBFiBo dHRwczovL3d3dy5jYXRjZXJ0Lm5ldC92ZXJhcnJlbDA1BggrBgEFBQcCAjApGidW ZWdldSBodHRwczovL3d3dy5jYXRjZXJ0Lm5ldC92ZXJhcnJlbCAwDQYJKoZIhvcN AQEFBQADggEBAKBIW4IB9k1IuDlVNZyAelOZ1Vr/sXE7zDkJlF7W2u++AVtd0x7Y /X1PzaBB4DSTv8vihpw3kpBWHNzrKQXlxJ7HNd+KDM3FIUPpqojlNcAZQmNaAl6k SBg6hW/cnbw/nZzBh7h6YQjpdwt/cKt63dmXLGQehb+8dJahw3oS7AwaboMMPOhy Rp/7SNVel+axofjk70YllJyJ22k4vuxcDlbHZVHlUIiIv0LVKz3l+bqeLrPK9HOS Agu+TGbrIP65y7WZf+a2E/rKS03Z7lNGBjvGTq2TWoF+bCpLagVFjPIhpDGQh2xl nJ2lYJU6Un/10asIbvPuW/mIPX64b24D5EI= -----END CERTIFICATE----- Hellenic Academic and Research Institutions RootCA 2011 ======================================================= -----BEGIN CERTIFICATE----- MIIEMTCCAxmgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBlTELMAkGA1UEBhMCR1Ix RDBCBgNVBAoTO0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1 
dGlvbnMgQ2VydC4gQXV0aG9yaXR5MUAwPgYDVQQDEzdIZWxsZW5pYyBBY2FkZW1p YyBhbmQgUmVzZWFyY2ggSW5zdGl0dXRpb25zIFJvb3RDQSAyMDExMB4XDTExMTIw NjEzNDk1MloXDTMxMTIwMTEzNDk1MlowgZUxCzAJBgNVBAYTAkdSMUQwQgYDVQQK EztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0dXRpb25zIENl cnQuIEF1dGhvcml0eTFAMD4GA1UEAxM3SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl c2VhcmNoIEluc3RpdHV0aW9ucyBSb290Q0EgMjAxMTCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBAKlTAOMupvaO+mDYLZU++CwqVE7NuYRhlFhPjz2L5EPz dYmNUeTDN9KKiE15HrcS3UN4SoqS5tdI1Q+kOilENbgH9mgdVc04UfCMJDGFr4PJ fel3r+0ae50X+bOdOFAPplp5kYCvN66m0zH7tSYJnTxa71HFK9+WXesyHgLacEns bgzImjeN9/E2YEsmLIKe0HjzDQ9jpFEw4fkrJxIH2Oq9GGKYsFk3fb7u8yBRQlqD 75O6aRXxYp2fmTmCobd0LovUxQt7L/DICto9eQqakxylKHJzkUOap9FNhYS5qXSP FEDH3N6sQWRstBmbAmNtJGSPRLIl6s5ddAxjMlyNh+UCAwEAAaOBiTCBhjAPBgNV HRMBAf8EBTADAQH/MAsGA1UdDwQEAwIBBjAdBgNVHQ4EFgQUppFC/RNhSiOeCKQp 5dgTBCPuQSUwRwYDVR0eBEAwPqA8MAWCAy5ncjAFggMuZXUwBoIELmVkdTAGggQu b3JnMAWBAy5ncjAFgQMuZXUwBoEELmVkdTAGgQQub3JnMA0GCSqGSIb3DQEBBQUA A4IBAQAf73lB4XtuP7KMhjdCSk4cNx6NZrokgclPEg8hwAOXhiVtXdMiKahsog2p 6z0GW5k6x8zDmjR/qw7IThzh+uTczQ2+vyT+bOdrwg3IBp5OjWEopmr95fZi6hg8 TqBTnbI6nOulnJEWtk2C4AwFSKls9cz4y51JtPACpf1wA+2KIaWuE4ZJwzNzvoc7 dIsXRSZMFpGD/md9zU1jZ/rzAxKWeAaNsWftjj++n08C9bMJL/NMh98qy5V8Acys Nnq/onN694/BtZqhFLKPM58N7yLcZnuEvUUXBj08yrl3NI/K6s8/MT7jiOOASSXI l7WdmplNsDz4SgCbZN2fOUvRJ9e4 -----END CERTIFICATE----- Actalis Authentication Root CA ============================== -----BEGIN CERTIFICATE----- MIIFuzCCA6OgAwIBAgIIVwoRl0LE48wwDQYJKoZIhvcNAQELBQAwazELMAkGA1UE BhMCSVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8w MzM1ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290 IENBMB4XDTExMDkyMjExMjIwMloXDTMwMDkyMjExMjIwMlowazELMAkGA1UEBhMC SVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8wMzM1 ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290IENB MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAp8bEpSmkLO/lGMWwUKNv UTufClrJwkg4CsIcoBh/kbWHuUA/3R1oHwiD1S0eiKD4j1aPbZkCkpAW1V8IbInX 4ay8IMKx4INRimlNAJZaby/ARH6jDuSRzVju3PvHHkVH3Se5CAGfpiEd9UEtL0z9 KK3giq0itFZljoZUj5NDKd45RnijMCO6zfB9E1fAXdKDa0hMxKufgFpbOr3JpyI/ gCczWw63igxdBzcIy2zSekciRDXFzMwujt0q7bd9Zg1fYVEiVRvjRuPjPdA1Yprb rxTIW6HMiRvhMCb8oJsfgadHHwTrozmSBp+Z07/T6k9QnBn+locePGX2oxgkg4YQ 51Q+qDp2JE+BIcXjDwL4k5RHILv+1A7TaLndxHqEguNTVHnd25zS8gebLra8Pu2F be8lEfKXGkJh90qX6IuxEAf6ZYGyojnP9zz/GPvG8VqLWeICrHuS0E4UT1lF9gxe KF+w6D9Fz8+vm2/7hNN3WpVvrJSEnu68wEqPSpP4RCHiMUVhUE4Q2OM1fEwZtN4F v6MGn8i1zeQf1xcGDXqVdFUNaBr8EBtiZJ1t4JWgw5QHVw0U5r0F+7if5t+L4sbn fpb2U8WANFAoWPASUHEXMLrmeGO89LKtmyuy/uE5jF66CyCU3nuDuP/jVo23Eek7 jPKxwV2dpAtMK9myGPW1n0sCAwEAAaNjMGEwHQYDVR0OBBYEFFLYiDrIn3hm7Ynz ezhwlMkCAjbQMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUUtiIOsifeGbt ifN7OHCUyQICNtAwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAL e3KHwGCmSUyIWOYdiPcUZEim2FgKDk8TNd81HdTtBjHIgT5q1d07GjLukD0R0i70 jsNjLiNmsGe+b7bAEzlgqqI0JZN1Ut6nna0Oh4lScWoWPBkdg/iaKWW+9D+a2fDz WochcYBNy+A4mz+7+uAwTc+G02UQGRjRlwKxK3JCaKygvU5a2hi/a5iB0P2avl4V SM0RFbnAKVy06Ij3Pjaut2L9HmLecHgQHEhb2rykOLpn7VU+Xlff1ANATIGk0k9j pwlCCRT8AKnCgHNPLsBA2RF7SOp6AsDT6ygBJlh0wcBzIm2Tlf05fbsq4/aC4yyX X04fkZT6/iyj2HYauE2yOE+b+h1IYHkm4vP9qdCa6HCPSXrW5b0KDtst842/6+Ok fcvHlXHo2qN8xcL4dJIEG4aspCJTQLas/kx2z/uUMsA1n3Y/buWQbqCmJqK4LL7R K4X9p2jIugErsWx0Hbhzlefut8cl8ABMALJ+tguLHPPAUJ4lueAI3jZm/zel0btU ZCzJJ7VLkn5l/9Mt4blOvH+kQSGQQXemOR/qnuOf0GZvBeyqdn6/axag67XH/JJU LysRJyU3eExRarDzzFhdFPFqSBX/wge2sY0PjlxQRrM9vwGYT7JZVEc+NHt4bVaT LnPqZih4zR0Uv6CPLy64Lo7yFIrM6bV8+2ydDKXhlg== -----END CERTIFICATE----- Trustis FPS Root CA =================== -----BEGIN CERTIFICATE----- 
MIIDZzCCAk+gAwIBAgIQGx+ttiD5JNM2a/fH8YygWTANBgkqhkiG9w0BAQUFADBF MQswCQYDVQQGEwJHQjEYMBYGA1UEChMPVHJ1c3RpcyBMaW1pdGVkMRwwGgYDVQQL ExNUcnVzdGlzIEZQUyBSb290IENBMB4XDTAzMTIyMzEyMTQwNloXDTI0MDEyMTEx MzY1NFowRTELMAkGA1UEBhMCR0IxGDAWBgNVBAoTD1RydXN0aXMgTGltaXRlZDEc MBoGA1UECxMTVHJ1c3RpcyBGUFMgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQAD ggEPADCCAQoCggEBAMVQe547NdDfxIzNjpvto8A2mfRC6qc+gIMPpqdZh8mQRUN+ AOqGeSoDvT03mYlmt+WKVoaTnGhLaASMk5MCPjDSNzoiYYkchU59j9WvezX2fihH iTHcDnlkH5nSW7r+f2C/revnPDgpai/lkQtV/+xvWNUtyd5MZnGPDNcE2gfmHhjj vSkCqPoc4Vu5g6hBSLwacY3nYuUtsuvffM/bq1rKMfFMIvMFE/eC+XN5DL7XSxzA 0RU8k0Fk0ea+IxciAIleH2ulrG6nS4zto3Lmr2NNL4XSFDWaLk6M6jKYKIahkQlB OrTh4/L68MkKokHdqeMDx4gVOxzUGpTXn2RZEm0CAwEAAaNTMFEwDwYDVR0TAQH/ BAUwAwEB/zAfBgNVHSMEGDAWgBS6+nEleYtXQSUhhgtx67JkDoshZzAdBgNVHQ4E FgQUuvpxJXmLV0ElIYYLceuyZA6LIWcwDQYJKoZIhvcNAQEFBQADggEBAH5Y//01 GX2cGE+esCu8jowU/yyg2kdbw++BLa8F6nRIW/M+TgfHbcWzk88iNVy2P3UnXwmW zaD+vkAMXBJV+JOCyinpXj9WV4s4NvdFGkwozZ5BuO1WTISkQMi4sKUraXAEasP4 1BIy+Q7DsdwyhEQsb8tGD+pmQQ9P8Vilpg0ND2HepZ5dfWWhPBfnqFVO76DH7cZE f1T1o+CP8HxVIo8ptoGj4W1OLBuAZ+ytIJ8MYmHVl/9D7S3B2l0pKoU/rGXuhg8F jZBf3+6f9L/uHfuY5H+QK4R4EA5sSVPvFVtlRkpdr7r7OnIdzfYliB6XzCGcKQEN ZetX2fNXlrtIzYE= -----END CERTIFICATE----- StartCom Certification Authority ================================ -----BEGIN CERTIFICATE----- MIIHhzCCBW+gAwIBAgIBLTANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJJTDEW MBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwg Q2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3RhcnRDb20gQ2VydGlmaWNh dGlvbiBBdXRob3JpdHkwHhcNMDYwOTE3MTk0NjM3WhcNMzYwOTE3MTk0NjM2WjB9 MQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMi U2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3Rh cnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUA A4ICDwAwggIKAoICAQDBiNsJvGxGfHiflXu1M5DycmLWwTYgIiRezul38kMKogZk pMyONvg45iPwbm2xPN1yo4UcodM9tDMr0y+v/uqwQVlntsQGfQqedIXWeUyAN3rf OQVSWff0G0ZDpNKFhdLDcfN1YjS6LIp/Ho/u7TTQEceWzVI9ujPW3U3eCztKS5/C Ji/6tRYccjV3yjxd5srhJosaNnZcAdt0FCX+7bWgiA/deMotHweXMAEtcnn6RtYT Kqi5pquDSR3l8u/d5AGOGAqPY1MWhWKpDhk6zLVmpsJrdAfkK+F2PrRt2PZE4XNi HzvEvqBTViVsUQn3qqvKv3b9bZvzndu/PWa8DFaqr5hIlTpL36dYUNk4dalb6kMM Av+Z6+hsTXBbKWWc3apdzK8BMewM69KN6Oqce+Zu9ydmDBpI125C4z/eIT574Q1w +2OqqGwaVLRcJXrJosmLFqa7LH4XXgVNWG4SHQHuEhANxjJ/GP/89PrNbpHoNkm+ Gkhpi8KWTRoSsmkXwQqQ1vp5Iki/untp+HDH+no32NgN0nZPV/+Qt+OR0t3vwmC3 Zzrd/qqc8NSLf3Iizsafl7b4r4qgEKjZ+xjGtrVcUjyJthkqcwEKDwOzEmDyei+B 26Nu/yYwl/WL3YlXtq09s68rxbd2AvCl1iuahhQqcvbjM4xdCUsT37uMdBNSSwID AQABo4ICEDCCAgwwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD VR0OBBYEFE4L7xqkQFulF2mHMMo0aEPQQa7yMB8GA1UdIwQYMBaAFE4L7xqkQFul F2mHMMo0aEPQQa7yMIIBWgYDVR0gBIIBUTCCAU0wggFJBgsrBgEEAYG1NwEBATCC ATgwLgYIKwYBBQUHAgEWImh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5w ZGYwNAYIKwYBBQUHAgEWKGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL2ludGVybWVk aWF0ZS5wZGYwgc8GCCsGAQUFBwICMIHCMCcWIFN0YXJ0IENvbW1lcmNpYWwgKFN0 YXJ0Q29tKSBMdGQuMAMCAQEagZZMaW1pdGVkIExpYWJpbGl0eSwgcmVhZCB0aGUg c2VjdGlvbiAqTGVnYWwgTGltaXRhdGlvbnMqIG9mIHRoZSBTdGFydENvbSBDZXJ0 aWZpY2F0aW9uIEF1dGhvcml0eSBQb2xpY3kgYXZhaWxhYmxlIGF0IGh0dHA6Ly93 d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYwEQYJYIZIAYb4QgEBBAQDAgAHMDgG CWCGSAGG+EIBDQQrFilTdGFydENvbSBGcmVlIFNTTCBDZXJ0aWZpY2F0aW9uIEF1 dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAgEAjo/n3JR5fPGFf59Jb2vKXfuM/gTF wWLRfUKKvFO3lANmMD+x5wqnUCBVJX92ehQN6wQOQOY+2IirByeDqXWmN3PH/UvS Ta0XQMhGvjt/UfzDtgUx3M2FIk5xt/JxXrAaxrqTi3iSSoX4eA+D/i+tLPfkpLst 0OcNOrg+zvZ49q5HJMqjNTbOx8aHmNrs++myziebiMMEofYLWWivydsQD032ZGNc pRJvkrKTlMeIFw6Ttn5ii5B/q06f/ON1FE8qMt9bDeD1e5MNq6HPh+GlBEXoPBKl 
CcWw0bdT82AUuoVpaiF8H3VhFyAXe2w7QSlc4axa0c2Mm+tgHRns9+Ww2vl5GKVF P0lDV9LdJNUso/2RjSe15esUBppMeyG7Oq0wBhjA2MFrLH9ZXF2RsXAiV+uKa0hK 1Q8p7MZAwC+ITGgBF3f0JBlPvfrhsiAhS90a2Cl9qrjeVOwhVYBsHvUwyKMQ5bLm KhQxw4UtjJixhlpPiVktucf3HMiKf8CdBUrmQk9io20ppB+Fq9vlgcitKj1MXVuE JnHEhV5xJMqlG2zYYdMa4FTbzrqpMrUi9nNBCV24F10OD5mQ1kfabwo6YigUZ4LZ 8dCAWZvLMdibD4x3TrVoivJs9iQOLWxwxXPR3hTQcY+203sC9uO41Alua551hDnm fyWl8kgAwKQB2j8= -----END CERTIFICATE----- StartCom Certification Authority G2 =================================== -----BEGIN CERTIFICATE----- MIIFYzCCA0ugAwIBAgIBOzANBgkqhkiG9w0BAQsFADBTMQswCQYDVQQGEwJJTDEW MBQGA1UEChMNU3RhcnRDb20gTHRkLjEsMCoGA1UEAxMjU3RhcnRDb20gQ2VydGlm aWNhdGlvbiBBdXRob3JpdHkgRzIwHhcNMTAwMTAxMDEwMDAxWhcNMzkxMjMxMjM1 OTAxWjBTMQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjEsMCoG A1UEAxMjU3RhcnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgRzIwggIiMA0G CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2iTZbB7cgNr2Cu+EWIAOVeq8Oo1XJ JZlKxdBWQYeQTSFgpBSHO839sj60ZwNq7eEPS8CRhXBF4EKe3ikj1AENoBB5uNsD vfOpL9HG4A/LnooUCri99lZi8cVytjIl2bLzvWXFDSxu1ZJvGIsAQRSCb0AgJnoo D/Uefyf3lLE3PbfHkffiAez9lInhzG7TNtYKGXmu1zSCZf98Qru23QumNK9LYP5/ Q0kGi4xDuFby2X8hQxfqp0iVAXV16iulQ5XqFYSdCI0mblWbq9zSOdIxHWDirMxW RST1HFSr7obdljKF+ExP6JV2tgXdNiNnvP8V4so75qbsO+wmETRIjfaAKxojAuuK HDp2KntWFhxyKrOq42ClAJ8Em+JvHhRYW6Vsi1g8w7pOOlz34ZYrPu8HvKTlXcxN nw3h3Kq74W4a7I/htkxNeXJdFzULHdfBR9qWJODQcqhaX2YtENwvKhOuJv4KHBnM 0D4LnMgJLvlblnpHnOl68wVQdJVznjAJ85eCXuaPOQgeWeU1FEIT/wCc976qUM/i UUjXuG+v+E5+M5iSFGI6dWPPe/regjupuznixL0sAA7IF6wT700ljtizkC+p2il9 Ha90OrInwMEePnWjFqmveiJdnxMaz6eg6+OGCtP95paV1yPIN93EfKo2rJgaErHg TuixO/XWb/Ew1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQE AwIBBjAdBgNVHQ4EFgQUS8W0QGutHLOlHGVuRjaJhwUMDrYwDQYJKoZIhvcNAQEL BQADggIBAHNXPyzVlTJ+N9uWkusZXn5T50HsEbZH77Xe7XRcxfGOSeD8bpkTzZ+K 2s06Ctg6Wgk/XzTQLwPSZh0avZyQN8gMjgdalEVGKua+etqhqaRpEpKwfTbURIfX UfEpY9Z1zRbkJ4kd+MIySP3bmdCPX1R0zKxnNBFi2QwKN4fRoxdIjtIXHfbX/dtl 6/2o1PXWT6RbdejF0mCy2wl+JYt7ulKSnj7oxXehPOBKc2thz4bcQ///If4jXSRK 9dNtD2IEBVeC2m6kMyV5Sy5UGYvMLD0w6dEG/+gyRr61M3Z3qAFdlsHB1b6uJcDJ HgoJIIihDsnzb02CVAAgp9KP5DlUFy6NHrgbuxu9mk47EDTcnIhT76IxW1hPkWLI wpqazRVdOKnWvvgTtZ8SafJQYqz7Fzf07rh1Z2AQ+4NQ+US1dZxAF7L+/XldblhY XzD8AK6vM8EOTmy6p6ahfzLbOOCxchcKK5HsamMm7YnUeMx0HgX4a/6ManY5Ka5l IxKVCCIcl85bBu4M4ru8H0ST9tg4RQUh7eStqxK2A6RCLi3ECToDZ2mEmuFZkIoo hdVddLHRDiBYmxOlsGOm7XtH/UVVMKTumtTm4ofvmMkyghEpIrwACjFeLQ/Ajulr so8uBtjRkcfGEvRM/TAXw8HaOFvjqermobp573PYtlNXLfbQ4ddI -----END CERTIFICATE----- Buypass Class 2 Root CA ======================= -----BEGIN CERTIFICATE----- MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg Q2xhc3MgMiBSb290IENBMB4XDTEwMTAyNjA4MzgwM1oXDTQwMTAyNjA4MzgwM1ow TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw HgYDVQQDDBdCdXlwYXNzIENsYXNzIDIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB BQADggIPADCCAgoCggIBANfHXvfBB9R3+0Mh9PT1aeTuMgHbo4Yf5FkNuud1g1Lr 6hxhFUi7HQfKjK6w3Jad6sNgkoaCKHOcVgb/S2TwDCo3SbXlzwx87vFKu3MwZfPV L4O2fuPn9Z6rYPnT8Z2SdIrkHJasW4DptfQxh6NR/Md+oW+OU3fUl8FVM5I+GC91 1K2GScuVr1QGbNgGE41b/+EmGVnAJLqBcXmQRFBoJJRfuLMR8SlBYaNByyM21cHx MlAQTn/0hpPshNOOvEu/XAFOBz3cFIqUCqTqc/sLUegTBxj6DvEr0VQVfTzh97QZ QmdiXnfgolXsttlpF9U6r0TtSsWe5HonfOV116rLJeffawrbD02TTqigzXsu8lkB arcNuAeBfos4GzjmCleZPe4h6KP1DBbdi+w0jpwqHAAVF41og9JwnxgIzRFo1clr Us3ERo/ctfPYV3Me6ZQ5BL/T3jjetFPsaRyifsSP5BtwrfKi+fv3FmRmaZ9JUaLi FRhnBkp/1Wy1TbMz4GHrXb7pmA8y1x1LPC5aAVKRCfLf6o3YBkBjqhHk/sM3nhRS P/TizPJhk9H9Z2vXUq6/aKtAQ6BXNVN48FP4YUIHZMbXb5tMOA1jrGKvNouicwoN 9SG9dKpN6nIDSdvHXx1iY8f93ZHsM+71bbRuMGjeyNYmsHVee7QHIJihdjK4TWxP 
AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMmAd+BikoL1Rpzz uvdMw964o605MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAU18h 9bqwOlI5LJKwbADJ784g7wbylp7ppHR/ehb8t/W2+xUbP6umwHJdELFx7rxP462s A20ucS6vxOOto70MEae0/0qyexAQH6dXQbLArvQsWdZHEIjzIVEpMMpghq9Gqx3t OluwlN5E40EIosHsHdb9T7bWR9AUC8rmyrV7d35BH16Dx7aMOZawP5aBQW9gkOLo +fsicdl9sz1Gv7SEr5AcD48Saq/v7h56rgJKihcrdv6sVIkkLE8/trKnToyokZf7 KcZ7XC25y2a2t6hbElGFtQl+Ynhw/qlqYLYdDnkM/crqJIByw5c/8nerQyIKx+u2 DISCLIBrQYoIwOula9+ZEsuK1V6ADJHgJgg2SMX6OBE1/yWDLfJ6v9r9jv6ly0Us H8SIU653DtmadsWOLB2jutXsMq7Aqqz30XpN69QH4kj3Io6wpJ9qzo6ysmD0oyLQ I+uUWnpp3Q+/QFesa1lQ2aOZ4W7+jQF5JyMV3pKdewlNWudLSDBaGOYKbeaP4NK7 5t98biGCwWg5TbSYWGZizEqQXsP6JwSxeRV0mcy+rSDeJmAc61ZRpqPq5KM/p/9h 3PFaTWwyI0PurKju7koSCTxdccK+efrCh2gdC/1cacwG0Jp9VJkqyTkaGa9LKkPz Y11aWOIv4x3kqdbQCtCev9eBCfHJxyYNrJgWVqA= -----END CERTIFICATE----- Buypass Class 3 Root CA ======================= -----BEGIN CERTIFICATE----- MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg Q2xhc3MgMyBSb290IENBMB4XDTEwMTAyNjA4Mjg1OFoXDTQwMTAyNjA4Mjg1OFow TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw HgYDVQQDDBdCdXlwYXNzIENsYXNzIDMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB BQADggIPADCCAgoCggIBAKXaCpUWUOOV8l6ddjEGMnqb8RB2uACatVI2zSRHsJ8Y ZLya9vrVediQYkwiL944PdbgqOkcLNt4EemOaFEVcsfzM4fkoF0LXOBXByow9c3E N3coTRiR5r/VUv1xLXA+58bEiuPwKAv0dpihi4dVsjoT/Lc+JzeOIuOoTyrvYLs9 tznDDgFHmV0ST9tD+leh7fmdvhFHJlsTmKtdFoqwNxxXnUX/iJY2v7vKB3tvh2PX 0DJq1l1sDPGzbjniazEuOQAnFN44wOwZZoYS6J1yFhNkUsepNxz9gjDthBgd9K5c /3ATAOux9TN6S9ZV+AWNS2mw9bMoNlwUxFFzTWsL8TQH2xc519woe2v1n/MuwU8X KhDzzMro6/1rqy6any2CbgTUUgGTLT2G/H783+9CHaZr77kgxve9oKeV/afmiSTY zIw0bOIjL9kSGiG5VZFvC5F5GQytQIgLcOJ60g7YaEi7ghM5EFjp2CoHxhLbWNvS O1UQRwUVZ2J+GGOmRj8JDlQyXr8NYnon74Do29lLBlo3WiXQCBJ31G8JUJc9yB3D 34xFMFbG02SrZvPAXpacw8Tvw3xrizp5f7NJzz3iiZ+gMEuFuZyUJHmPfWupRWgP K9Dx2hzLabjKSWJtyNBjYt1gD1iqj6G8BaVmos8bdrKEZLFMOVLAMLrwjEsCsLa3 AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEe4zf/lb+74suwv Tg75JbCOPGvDMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAACAj QTUEkMJAYmDv4jVM1z+s4jSQuKFvdvoWFqRINyzpkMLyPPgKn9iB5btb2iUspKdV cSQy9sgL8rxq+JOssgfCX5/bzMiKqr5qb+FJEMwx14C7u8jYog5kV+qi9cKpMRXS IGrs/CIBKM+GuIAeqcwRpTzyFrNHnfzSgCHEy9BHcEGhyoMZCCxt8l13nIoUE9Q2 HJLw5QY33KbmkJs4j1xrG0aGQ0JfPgEHU1RdZX33inOhmlRaHylDFCfChQ+1iHsa O5S3HWCntZznKWlXWpuTekMwGwPXYshApqr8ZORK15FTAaggiG6cX0S5y2CBNOxv 033aSF/rtJC8LakcC6wc1aJoIIAE1vyxjy+7SjENSoYc6+I2KSb12tjE8nVhz36u dmNKekBlk4f4HoCMhuWG1o8O/FMsYOgWYRqiPkN7zTlgVGr18okmAWiDSKIz6MkE kbIRNBE+6tBDGR8Dk5AM/1E9V/RBbuHLoL7ryWPNbczk+DaqaJ3tvV2XcEQNtg41 3OEMXbugUZTLfhbrES+jkkXITHHZvMmZUldGL1DPvTVp9D0VzgalLA8+9oG6lLvD u79leNKGef9JOxqDDPDeeOzI8k1MGt6CKfjBWtrt7uYnXuhF0J0cUahoq0Tj0Itq 4/g7u9xN12TyUb7mqqta6THuBrxzvxNiCp/HuZc= -----END CERTIFICATE----- T-TeleSec GlobalRoot Class 3 ============================ -----BEGIN CERTIFICATE----- MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl YyBHbG9iYWxSb290IENsYXNzIDMwHhcNMDgxMDAxMTAyOTU2WhcNMzMxMDAxMjM1 OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDMwggEiMA0G CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9dZPwYiJvJK7genasfb3ZJNW4t/zN 8ELg63iIVl6bmlQdTQyK9tPPcPRStdiTBONGhnFBSivwKixVA9ZIw+A5OO3yXDw/ RLyTPWGrTs0NvvAgJ1gORH8EGoel15YUNpDQSXuhdfsaa3Ox+M6pCSzyU9XDFES4 
hqX2iys52qMzVNn6chr3IhUciJFrf2blw2qAsCTz34ZFiP0Zf3WHHx+xGwpzJFu5 ZeAsVMhg02YXP+HMVDNzkQI6pn97djmiH5a2OK61yJN0HZ65tOVgnS9W0eDrXltM EnAMbEQgqxHY9Bn20pxSN+f6tsIxO0rUFJmtxxr1XV/6B7h8DR/Wgx6zAgMBAAGj QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS1 A/d2O2GCahKqGFPrAyGUv/7OyjANBgkqhkiG9w0BAQsFAAOCAQEAVj3vlNW92nOy WL6ukK2YJ5f+AbGwUgC4TeQbIXQbfsDuXmkqJa9c1h3a0nnJ85cp4IaH3gRZD/FZ 1GSFS5mvJQQeyUapl96Cshtwn5z2r3Ex3XsFpSzTucpH9sry9uetuUg/vBa3wW30 6gmv7PO15wWeph6KU1HWk4HMdJP2udqmJQV0eVp+QD6CSyYRMG7hP0HHRwA11fXT 91Q+gT3aSWqas+8QPebrb9HIIkfLzM8BMZLZGOMivgkeGj5asuRrDFR6fUNOuIml e9eiPZaGzPImNC1qkp2aGtAw4l1OBLBfiyB+d8E9lYLRRpo7PHi4b6HQDWSieB4p TpPDpFQUWw== -----END CERTIFICATE----- EE Certification Centre Root CA =============================== -----BEGIN CERTIFICATE----- MIIEAzCCAuugAwIBAgIQVID5oHPtPwBMyonY43HmSjANBgkqhkiG9w0BAQUFADB1 MQswCQYDVQQGEwJFRTEiMCAGA1UECgwZQVMgU2VydGlmaXRzZWVyaW1pc2tlc2t1 czEoMCYGA1UEAwwfRUUgQ2VydGlmaWNhdGlvbiBDZW50cmUgUm9vdCBDQTEYMBYG CSqGSIb3DQEJARYJcGtpQHNrLmVlMCIYDzIwMTAxMDMwMTAxMDMwWhgPMjAzMDEy MTcyMzU5NTlaMHUxCzAJBgNVBAYTAkVFMSIwIAYDVQQKDBlBUyBTZXJ0aWZpdHNl ZXJpbWlza2Vza3VzMSgwJgYDVQQDDB9FRSBDZXJ0aWZpY2F0aW9uIENlbnRyZSBS b290IENBMRgwFgYJKoZIhvcNAQkBFglwa2lAc2suZWUwggEiMA0GCSqGSIb3DQEB AQUAA4IBDwAwggEKAoIBAQDIIMDs4MVLqwd4lfNE7vsLDP90jmG7sWLqI9iroWUy euuOF0+W2Ap7kaJjbMeMTC55v6kF/GlclY1i+blw7cNRfdCT5mzrMEvhvH2/UpvO bntl8jixwKIy72KyaOBhU8E2lf/slLo2rpwcpzIP5Xy0xm90/XsY6KxX7QYgSzIw WFv9zajmofxwvI6Sc9uXp3whrj3B9UiHbCe9nyV0gVWw93X2PaRka9ZP585ArQ/d MtO8ihJTmMmJ+xAdTX7Nfh9WDSFwhfYggx/2uh8Ej+p3iDXE/+pOoYtNP2MbRMNE 1CV2yreN1x5KZmTNXMWcg+HCCIia7E6j8T4cLNlsHaFLAgMBAAGjgYowgYcwDwYD VR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFBLyWj7qVhy/ zQas8fElyalL1BSZMEUGA1UdJQQ+MDwGCCsGAQUFBwMCBggrBgEFBQcDAQYIKwYB BQUHAwMGCCsGAQUFBwMEBggrBgEFBQcDCAYIKwYBBQUHAwkwDQYJKoZIhvcNAQEF BQADggEBAHv25MANqhlHt01Xo/6tu7Fq1Q+e2+RjxY6hUFaTlrg4wCQiZrxTFGGV v9DHKpY5P30osxBAIWrEr7BSdxjhlthWXePdNl4dp1BUoMUq5KqMlIpPnTX/dqQG E5Gion0ARD9V04I8GtVbvFZMIi5GQ4okQC3zErg7cBqklrkar4dBGmoYDQZPxz5u uSlNDUmJEYcyW+ZLBMjkXOZ0c5RdFpgTlf7727FE5TpwrDdr5rMzcijJs1eg9gIW iAYLtqZLICjU3j2LrTcFU3T+bsy8QxdxXvnFzBqpYe73dgzzcvRyrc9yAjYHR8/v GVCJYMzpJJUPwssd8m92kMfMdcGWxZ0= -----END CERTIFICATE----- boto-2.20.1/boto/cloudformation/000077500000000000000000000000001225267101000165235ustar00rootroot00000000000000boto-2.20.1/boto/cloudformation/__init__.py000066400000000000000000000053021225267101000206340ustar00rootroot00000000000000# Copyright (c) 2010-2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010-2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from connection import CloudFormationConnection from boto.regioninfo import RegionInfo RegionData = { 'us-east-1': 'cloudformation.us-east-1.amazonaws.com', 'us-west-1': 'cloudformation.us-west-1.amazonaws.com', 'us-west-2': 'cloudformation.us-west-2.amazonaws.com', 'sa-east-1': 'cloudformation.sa-east-1.amazonaws.com', 'eu-west-1': 'cloudformation.eu-west-1.amazonaws.com', 'ap-northeast-1': 'cloudformation.ap-northeast-1.amazonaws.com', 'ap-southeast-1': 'cloudformation.ap-southeast-1.amazonaws.com', 'ap-southeast-2': 'cloudformation.ap-southeast-2.amazonaws.com', } def regions(): """ Get all available regions for the CloudFormation service. :rtype: list :return: A list of :class:`boto.RegionInfo` instances """ regions = [] for region_name in RegionData: region = RegionInfo(name=region_name, endpoint=RegionData[region_name], connection_cls=CloudFormationConnection) regions.append(region) return regions def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.cloudformation.CloudFormationConnection`. :param str region_name: The name of the region to connect to. :rtype: :class:`boto.cloudformation.CloudFormationConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/cloudformation/connection.py000066400000000000000000000375201225267101000212430ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from boto.cloudformation.stack import Stack, StackSummary, StackEvent from boto.cloudformation.stack import StackResource, StackResourceSummary from boto.cloudformation.template import Template from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.compat import json class CloudFormationConnection(AWSQueryConnection): """ A Connection to the CloudFormation Service. 
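    A minimal usage sketch (illustrative only; the region name is a
    sample value, and credentials fall back to the usual boto config
    sources)::

        import boto.cloudformation
        conn = boto.cloudformation.connect_to_region('us-west-2')
        stacks = conn.describe_stacks()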
""" APIVersion = boto.config.get('Boto', 'cfn_version', '2010-05-15') DefaultRegionName = boto.config.get('Boto', 'cfn_region_name', 'us-east-1') DefaultRegionEndpoint = boto.config.get('Boto', 'cfn_region_endpoint', 'cloudformation.us-east-1.amazonaws.com') valid_states = ( 'CREATE_IN_PROGRESS', 'CREATE_FAILED', 'CREATE_COMPLETE', 'ROLLBACK_IN_PROGRESS', 'ROLLBACK_FAILED', 'ROLLBACK_COMPLETE', 'DELETE_IN_PROGRESS', 'DELETE_FAILED', 'DELETE_COMPLETE', 'UPDATE_IN_PROGRESS', 'UPDATE_COMPLETE_CLEANUP_IN_PROGRESS', 'UPDATE_COMPLETE', 'UPDATE_ROLLBACK_IN_PROGRESS', 'UPDATE_ROLLBACK_FAILED', 'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS', 'UPDATE_ROLLBACK_COMPLETE') def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', converter=None, security_token=None, validate_certs=True): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint, CloudFormationConnection) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['hmac-v4'] def encode_bool(self, v): v = bool(v) return {True: "true", False: "false"}[v] def _build_create_or_update_params(self, stack_name, template_body, template_url, parameters, notification_arns, disable_rollback, timeout_in_minutes, capabilities, tags): """ Helper that creates JSON parameters needed by a Stack Create or Stack Update call. :type stack_name: string :param stack_name: The name of the Stack, must be unique amoung running Stacks :type template_body: string :param template_body: The template body (JSON string) :type template_url: string :param template_url: An S3 URL of a stored template JSON document. If both the template_body and template_url are specified, the template_body takes precedence :type parameters: list of tuples :param parameters: A list of (key, value) pairs for template input parameters. :type notification_arns: list of strings :param notification_arns: A list of SNS topics to send Stack event notifications to. :type disable_rollback: bool :param disable_rollback: Indicates whether or not to rollback on failure. :type timeout_in_minutes: int :param timeout_in_minutes: Maximum amount of time to let the Stack spend creating itself. If this timeout is exceeded, the Stack will enter the CREATE_FAILED state. :type capabilities: list :param capabilities: The list of capabilities you want to allow in the stack. Currently, the only valid capability is 'CAPABILITY_IAM'. :type tags: dict :param tags: A dictionary of (key, value) pairs of tags to associate with this stack. :rtype: dict :return: JSON parameters represented as a Python dict. 
""" params = {'ContentType': "JSON", 'StackName': stack_name, 'DisableRollback': self.encode_bool(disable_rollback)} if template_body: params['TemplateBody'] = template_body if template_url: params['TemplateURL'] = template_url if template_body and template_url: boto.log.warning("If both TemplateBody and TemplateURL are" " specified, only TemplateBody will be honored by the API") if len(parameters) > 0: for i, (key, value) in enumerate(parameters): params['Parameters.member.%d.ParameterKey' % (i + 1)] = key params['Parameters.member.%d.ParameterValue' % (i + 1)] = value if capabilities: for i, value in enumerate(capabilities): params['Capabilities.member.%d' % (i + 1)] = value if tags: for i, (key, value) in enumerate(tags.items()): params['Tags.member.%d.Key' % (i + 1)] = key params['Tags.member.%d.Value' % (i + 1)] = value if len(notification_arns) > 0: self.build_list_params(params, notification_arns, "NotificationARNs.member") if timeout_in_minutes: params['TimeoutInMinutes'] = int(timeout_in_minutes) return params def create_stack(self, stack_name, template_body=None, template_url=None, parameters=[], notification_arns=[], disable_rollback=False, timeout_in_minutes=None, capabilities=None, tags=None): """ Creates a CloudFormation Stack as specified by the template. :type stack_name: string :param stack_name: The name of the Stack, must be unique amoung running Stacks :type template_body: string :param template_body: The template body (JSON string) :type template_url: string :param template_url: An S3 URL of a stored template JSON document. If both the template_body and template_url are specified, the template_body takes precedence :type parameters: list of tuples :param parameters: A list of (key, value) pairs for template input parameters. :type notification_arns: list of strings :param notification_arns: A list of SNS topics to send Stack event notifications to. :type disable_rollback: bool :param disable_rollback: Indicates whether or not to rollback on failure. :type timeout_in_minutes: int :param timeout_in_minutes: Maximum amount of time to let the Stack spend creating itself. If this timeout is exceeded, the Stack will enter the CREATE_FAILED state. :type capabilities: list :param capabilities: The list of capabilities you want to allow in the stack. Currently, the only valid capability is 'CAPABILITY_IAM'. :type tags: dict :param tags: A dictionary of (key, value) pairs of tags to associate with this stack. :rtype: string :return: The unique Stack ID. """ params = self._build_create_or_update_params(stack_name, template_body, template_url, parameters, notification_arns, disable_rollback, timeout_in_minutes, capabilities, tags) response = self.make_request('CreateStack', params, '/', 'POST') body = response.read() if response.status == 200: body = json.loads(body) return body['CreateStackResponse']['CreateStackResult']['StackId'] else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def update_stack(self, stack_name, template_body=None, template_url=None, parameters=[], notification_arns=[], disable_rollback=False, timeout_in_minutes=None, capabilities=None, tags=None): """ Updates a CloudFormation Stack as specified by the template. :type stack_name: string :param stack_name: The name of the Stack, must be unique amoung running Stacks. 
:type template_body: string :param template_body: The template body (JSON string) :type template_url: string :param template_url: An S3 URL of a stored template JSON document. If both the template_body and template_url are specified, the template_body takes precedence. :type parameters: list of tuples :param parameters: A list of (key, value) pairs for template input parameters. :type notification_arns: list of strings :param notification_arns: A list of SNS topics to send Stack event notifications to. :type disable_rollback: bool :param disable_rollback: Indicates whether or not to rollback on failure. :type timeout_in_minutes: int :param timeout_in_minutes: Maximum amount of time to let the Stack spend creating itself. If this timeout is exceeded, the Stack will enter the CREATE_FAILED state :type capabilities: list :param capabilities: The list of capabilities you want to allow in the stack. Currently, the only valid capability is 'CAPABILITY_IAM'. :type tags: dict :param tags: A dictionary of (key, value) pairs of tags to associate with this stack. :rtype: string :return: The unique Stack ID. """ params = self._build_create_or_update_params(stack_name, template_body, template_url, parameters, notification_arns, disable_rollback, timeout_in_minutes, capabilities, tags) response = self.make_request('UpdateStack', params, '/', 'POST') body = response.read() if response.status == 200: body = json.loads(body) return body['UpdateStackResponse']['UpdateStackResult']['StackId'] else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def delete_stack(self, stack_name_or_id): params = {'ContentType': "JSON", 'StackName': stack_name_or_id} # TODO: change this to get_status ? 
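        # Illustrative lifecycle sketch (not part of the original source;
        # the stack name and template values are hypothetical):
        #
        #     stack_id = conn.create_stack('my-stack',
        #                                  template_body=template_json,
        #                                  parameters=[('KeyName', 'mykey')])
        #     conn.update_stack('my-stack', template_body=new_template_json)
        #     conn.delete_stack('my-stack')
        #
        # create_stack and update_stack return the StackId parsed from the
        # JSON response; delete_stack below returns the parsed JSON body.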
response = self.make_request('DeleteStack', params, '/', 'GET') body = response.read() if response.status == 200: return json.loads(body) else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def describe_stack_events(self, stack_name_or_id=None, next_token=None): params = {} if stack_name_or_id: params['StackName'] = stack_name_or_id if next_token: params['NextToken'] = next_token return self.get_list('DescribeStackEvents', params, [('member', StackEvent)]) def describe_stack_resource(self, stack_name_or_id, logical_resource_id): params = {'ContentType': "JSON", 'StackName': stack_name_or_id, 'LogicalResourceId': logical_resource_id} response = self.make_request('DescribeStackResource', params, '/', 'GET') body = response.read() if response.status == 200: return json.loads(body) else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def describe_stack_resources(self, stack_name_or_id=None, logical_resource_id=None, physical_resource_id=None): params = {} if stack_name_or_id: params['StackName'] = stack_name_or_id if logical_resource_id: params['LogicalResourceId'] = logical_resource_id if physical_resource_id: params['PhysicalResourceId'] = physical_resource_id return self.get_list('DescribeStackResources', params, [('member', StackResource)]) def describe_stacks(self, stack_name_or_id=None): params = {} if stack_name_or_id: params['StackName'] = stack_name_or_id return self.get_list('DescribeStacks', params, [('member', Stack)]) def get_template(self, stack_name_or_id): params = {'ContentType': "JSON", 'StackName': stack_name_or_id} response = self.make_request('GetTemplate', params, '/', 'GET') body = response.read() if response.status == 200: return json.loads(body) else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def list_stack_resources(self, stack_name_or_id, next_token=None): params = {'StackName': stack_name_or_id} if next_token: params['NextToken'] = next_token return self.get_list('ListStackResources', params, [('member', StackResourceSummary)]) def list_stacks(self, stack_status_filters=[], next_token=None): params = {} if next_token: params['NextToken'] = next_token if len(stack_status_filters) > 0: self.build_list_params(params, stack_status_filters, "StackStatusFilter.member") return self.get_list('ListStacks', params, [('member', StackSummary)]) def validate_template(self, template_body=None, template_url=None): params = {} if template_body: params['TemplateBody'] = template_body if template_url: params['TemplateURL'] = template_url if template_body and template_url: boto.log.warning("If both TemplateBody and TemplateURL are" " specified, only TemplateBody will be honored by the API") return self.get_object('ValidateTemplate', params, Template, verb="POST") def cancel_update_stack(self, stack_name_or_id=None): params = {} if stack_name_or_id: params['StackName'] = stack_name_or_id return self.get_status('CancelUpdateStack', params) boto-2.20.1/boto/cloudformation/stack.py000077500000000000000000000307161225267101000202140ustar00rootroot00000000000000from datetime import datetime from boto.resultset import ResultSet class Stack(object): def __init__(self, connection=None): self.connection = connection self.creation_time = None self.description = None 
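        # Illustrative note (not part of the original source): boto's SAX
        # handler populates these attributes from the DescribeStacks XML via
        # startElement/endElement below. A typical status-polling loop
        # (sample stack name):
        #
        #     import time
        #     stack = conn.describe_stacks('my-stack')[0]
        #     while stack.stack_status.endswith('_IN_PROGRESS'):
        #         time.sleep(15)
        #         stack.update()    # re-fetch the stack and merge its state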
self.disable_rollback = None self.notification_arns = [] self.outputs = [] self.parameters = [] self.capabilities = [] self.tags = [] self.stack_id = None self.stack_status = None self.stack_name = None self.stack_status_reason = None self.timeout_in_minutes = None def startElement(self, name, attrs, connection): if name == "Parameters": self.parameters = ResultSet([('member', Parameter)]) return self.parameters elif name == "Outputs": self.outputs = ResultSet([('member', Output)]) return self.outputs elif name == "Capabilities": self.capabilities = ResultSet([('member', Capability)]) return self.capabilities elif name == "Tags": self.tags = Tag() return self.tags elif name == 'NotificationARNs': self.notification_arns = ResultSet([('member', NotificationARN)]) return self.notification_arns else: return None def endElement(self, name, value, connection): if name == 'CreationTime': try: self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') except ValueError: self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') elif name == "Description": self.description = value elif name == "DisableRollback": if str(value).lower() == 'true': self.disable_rollback = True else: self.disable_rollback = False elif name == 'StackId': self.stack_id = value elif name == 'StackName': self.stack_name = value elif name == 'StackStatus': self.stack_status = value elif name == "StackStatusReason": self.stack_status_reason = value elif name == "TimeoutInMinutes": self.timeout_in_minutes = int(value) elif name == "member": pass else: setattr(self, name, value) def delete(self): return self.connection.delete_stack(stack_name_or_id=self.stack_id) def describe_events(self, next_token=None): return self.connection.describe_stack_events( stack_name_or_id=self.stack_id, next_token=next_token ) def describe_resource(self, logical_resource_id): return self.connection.describe_stack_resource( stack_name_or_id=self.stack_id, logical_resource_id=logical_resource_id ) def describe_resources(self, logical_resource_id=None, physical_resource_id=None): return self.connection.describe_stack_resources( stack_name_or_id=self.stack_id, logical_resource_id=logical_resource_id, physical_resource_id=physical_resource_id ) def list_resources(self, next_token=None): return self.connection.list_stack_resources( stack_name_or_id=self.stack_id, next_token=next_token ) def update(self): rs = self.connection.describe_stacks(self.stack_id) if len(rs) == 1 and rs[0].stack_id == self.stack_id: self.__dict__.update(rs[0].__dict__) else: raise ValueError("%s is not a valid Stack ID or Name" % self.stack_id) def get_template(self): return self.connection.get_template(stack_name_or_id=self.stack_id) class StackSummary(object): def __init__(self, connection=None): self.connection = connection self.stack_id = None self.stack_status = None self.stack_name = None self.creation_time = None self.deletion_time = None self.template_description = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'StackId': self.stack_id = value elif name == 'StackStatus': self.stack_status = value elif name == 'StackName': self.stack_name = value elif name == 'CreationTime': try: self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') except ValueError: self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') elif name == "DeletionTime": try: self.deletion_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') except ValueError: self.deletion_time =
datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') elif name == 'TemplateDescription': self.template_description = value elif name == "member": pass else: setattr(self, name, value) class Parameter(object): def __init__(self, connection=None): self.connection = None self.key = None self.value = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "ParameterKey": self.key = value elif name == "ParameterValue": self.value = value else: setattr(self, name, value) def __repr__(self): return "Parameter:\"%s\"=\"%s\"" % (self.key, self.value) class Output(object): def __init__(self, connection=None): self.connection = connection self.description = None self.key = None self.value = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "Description": self.description = value elif name == "OutputKey": self.key = value elif name == "OutputValue": self.value = value else: setattr(self, name, value) def __repr__(self): return "Output:\"%s\"=\"%s\"" % (self.key, self.value) class Capability(object): def __init__(self, connection=None): self.connection = None self.value = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): self.value = value def __repr__(self): return "Capability:\"%s\"" % (self.value) class Tag(dict): def __init__(self, connection=None): dict.__init__(self) self.connection = connection self._current_key = None self._current_value = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "Key": self._current_key = value elif name == "Value": self._current_value = value else: setattr(self, name, value) if self._current_key and self._current_value: self[self._current_key] = self._current_value self._current_key = None self._current_value = None class NotificationARN(object): def __init__(self, connection=None): self.connection = None self.value = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): self.value = value def __repr__(self): return "NotificationARN:\"%s\"" % (self.value) class StackResource(object): def __init__(self, connection=None): self.connection = connection self.description = None self.logical_resource_id = None self.physical_resource_id = None self.resource_status = None self.resource_status_reason = None self.resource_type = None self.stack_id = None self.stack_name = None self.timestamp = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "Description": self.description = value elif name == "LogicalResourceId": self.logical_resource_id = value elif name == "PhysicalResourceId": self.physical_resource_id = value elif name == "ResourceStatus": self.resource_status = value elif name == "ResourceStatusReason": self.resource_status_reason = value elif name == "ResourceType": self.resource_type = value elif name == "StackId": self.stack_id = value elif name == "StackName": self.stack_name = value elif name == "Timestamp": try: self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') except ValueError: self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') else: setattr(self, name, value) def __repr__(self): return "StackResource:%s (%s)" % (self.logical_resource_id, self.resource_type) class StackResourceSummary(object): def __init__(self, connection=None): 
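        # Illustrative note (not part of the original source): these
        # summaries are returned one page at a time by
        # CloudFormationConnection.list_stack_resources(). A hypothetical
        # loop that drains all pages via NextToken:
        #
        #     resources, token = [], None
        #     while True:
        #         page = conn.list_stack_resources('my-stack', next_token=token)
        #         resources.extend(page)
        #         token = getattr(page, 'next_token', None)
        #         if not token:
        #             break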
self.connection = connection self.last_updated_time = None self.logical_resource_id = None self.physical_resource_id = None self.resource_status = None self.resource_status_reason = None self.resource_type = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "LastUpdatedTime": try: self.last_updated_time = datetime.strptime( value, '%Y-%m-%dT%H:%M:%SZ' ) except ValueError: self.last_updated_time = datetime.strptime( value, '%Y-%m-%dT%H:%M:%S.%fZ' ) elif name == "LogicalResourceId": self.logical_resource_id = value elif name == "PhysicalResourceId": self.physical_resource_id = value elif name == "ResourceStatus": self.resource_status = value elif name == "ResourceStatusReason": self.resource_status_reason = value elif name == "ResourceType": self.resource_type = value else: setattr(self, name, value) def __repr__(self): return "StackResourceSummary:%s (%s)" % (self.logical_resource_id, self.resource_type) class StackEvent(object): valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE", "DELETE_IN_PROGRESS", "DELETE_FAILED", "DELETE_COMPLETE") def __init__(self, connection=None): self.connection = connection self.event_id = None self.logical_resource_id = None self.physical_resource_id = None self.resource_properties = None self.resource_status = None self.resource_status_reason = None self.resource_type = None self.stack_id = None self.stack_name = None self.timestamp = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "EventId": self.event_id = value elif name == "LogicalResourceId": self.logical_resource_id = value elif name == "PhysicalResourceId": self.physical_resource_id = value elif name == "ResourceProperties": self.resource_properties = value elif name == "ResourceStatus": self.resource_status = value elif name == "ResourceStatusReason": self.resource_status_reason = value elif name == "ResourceType": self.resource_type = value elif name == "StackId": self.stack_id = value elif name == "StackName": self.stack_name = value elif name == "Timestamp": try: self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') except ValueError: self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') else: setattr(self, name, value) def __repr__(self): return "StackEvent %s %s %s" % (self.resource_type, self.logical_resource_id, self.resource_status) boto-2.20.1/boto/cloudformation/template.py000066400000000000000000000024461225267101000207160ustar00rootroot00000000000000from boto.resultset import ResultSet class Template: def __init__(self, connection=None): self.connection = connection self.description = None self.template_parameters = None def startElement(self, name, attrs, connection): if name == "Parameters": self.template_parameters = ResultSet([('member', TemplateParameter)]) return self.template_parameters else: return None def endElement(self, name, value, connection): if name == "Description": self.description = value else: setattr(self, name, value) class TemplateParameter: def __init__(self, parent): self.parent = parent self.default_value = None self.description = None self.no_echo = None self.parameter_key = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == "DefaultValue": self.default_value = value elif name == "Description": self.description = value elif name == "NoEcho": self.no_echo = (value.lower() == 'true') elif name == "ParameterKey":
self.parameter_key = value else: setattr(self, name, value) boto-2.20.1/boto/cloudfront/000077500000000000000000000000001225267101000156555ustar00rootroot00000000000000boto-2.20.1/boto/cloudfront/__init__.py000066400000000000000000000351701225267101000177740ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import xml.sax import time import boto from boto.connection import AWSAuthConnection from boto import handler from boto.cloudfront.distribution import Distribution, DistributionSummary, DistributionConfig from boto.cloudfront.distribution import StreamingDistribution, StreamingDistributionSummary, StreamingDistributionConfig from boto.cloudfront.identity import OriginAccessIdentity from boto.cloudfront.identity import OriginAccessIdentitySummary from boto.cloudfront.identity import OriginAccessIdentityConfig from boto.cloudfront.invalidation import InvalidationBatch, InvalidationSummary, InvalidationListResultSet from boto.resultset import ResultSet from boto.cloudfront.exception import CloudFrontServerError class CloudFrontConnection(AWSAuthConnection): DefaultHost = 'cloudfront.amazonaws.com' Version = '2010-11-01' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, port=None, proxy=None, proxy_port=None, host=DefaultHost, debug=0, security_token=None, validate_certs=True): AWSAuthConnection.__init__(self, host, aws_access_key_id, aws_secret_access_key, True, port, proxy, proxy_port, debug=debug, security_token=security_token, validate_certs=validate_certs) def get_etag(self, response): response_headers = response.msg for key in response_headers.keys(): if key.lower() == 'etag': return response_headers[key] return None def _required_auth_capability(self): return ['cloudfront'] # Generics def _get_all_objects(self, resource, tags, result_set_class=None, result_set_kwargs=None): if not tags: tags = [('DistributionSummary', DistributionSummary)] response = self.make_request('GET', '/%s/%s' % (self.Version, resource)) body = response.read() boto.log.debug(body) if response.status >= 300: raise CloudFrontServerError(response.status, response.reason, body) rs_class = result_set_class or ResultSet rs_kwargs = result_set_kwargs or dict() rs = rs_class(tags, **rs_kwargs) h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs def _get_info(self, id, resource, dist_class): uri = '/%s/%s/%s' % (self.Version, resource, id) response = self.make_request('GET', uri) body = response.read() 
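        # Illustrative note (not part of the original source): the _get_*,
        # _set_*, _create_* and _delete_* helpers in this class share one
        # REST pattern -- request /<Version>/<resource>/..., raise
        # CloudFrontServerError on an unexpected status, and otherwise
        # SAX-parse the XML body into a result object. Config reads capture
        # the ETag header, which callers echo back as If-Match when writing,
        # e.g. (hypothetical distribution id):
        #
        #     cfg = conn.get_distribution_config('EDFDVBD6EXAMPLE')
        #     cfg.cnames = ['cdn.example.com']
        #     conn.set_distribution_config('EDFDVBD6EXAMPLE', cfg.etag, cfg)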
boto.log.debug(body) if response.status >= 300: raise CloudFrontServerError(response.status, response.reason, body) d = dist_class(connection=self) response_headers = response.msg for key in response_headers.keys(): if key.lower() == 'etag': d.etag = response_headers[key] h = handler.XmlHandler(d, self) xml.sax.parseString(body, h) return d def _get_config(self, id, resource, config_class): uri = '/%s/%s/%s/config' % (self.Version, resource, id) response = self.make_request('GET', uri) body = response.read() boto.log.debug(body) if response.status >= 300: raise CloudFrontServerError(response.status, response.reason, body) d = config_class(connection=self) d.etag = self.get_etag(response) h = handler.XmlHandler(d, self) xml.sax.parseString(body, h) return d def _set_config(self, distribution_id, etag, config): if isinstance(config, StreamingDistributionConfig): resource = 'streaming-distribution' else: resource = 'distribution' uri = '/%s/%s/%s/config' % (self.Version, resource, distribution_id) headers = {'If-Match': etag, 'Content-Type': 'text/xml'} response = self.make_request('PUT', uri, headers, config.to_xml()) body = response.read() boto.log.debug(body) if response.status != 200: raise CloudFrontServerError(response.status, response.reason, body) return self.get_etag(response) def _create_object(self, config, resource, dist_class): response = self.make_request('POST', '/%s/%s' % (self.Version, resource), {'Content-Type': 'text/xml'}, data=config.to_xml()) body = response.read() boto.log.debug(body) if response.status == 201: d = dist_class(connection=self) h = handler.XmlHandler(d, self) xml.sax.parseString(body, h) d.etag = self.get_etag(response) return d else: raise CloudFrontServerError(response.status, response.reason, body) def _delete_object(self, id, etag, resource): uri = '/%s/%s/%s' % (self.Version, resource, id) response = self.make_request('DELETE', uri, {'If-Match': etag}) body = response.read() boto.log.debug(body) if response.status != 204: raise CloudFrontServerError(response.status, response.reason, body) # Distributions def get_all_distributions(self): tags = [('DistributionSummary', DistributionSummary)] return self._get_all_objects('distribution', tags) def get_distribution_info(self, distribution_id): return self._get_info(distribution_id, 'distribution', Distribution) def get_distribution_config(self, distribution_id): return self._get_config(distribution_id, 'distribution', DistributionConfig) def set_distribution_config(self, distribution_id, etag, config): return self._set_config(distribution_id, etag, config) def create_distribution(self, origin, enabled, caller_reference='', cnames=None, comment='', trusted_signers=None): config = DistributionConfig(origin=origin, enabled=enabled, caller_reference=caller_reference, cnames=cnames, comment=comment, trusted_signers=trusted_signers) return self._create_object(config, 'distribution', Distribution) def delete_distribution(self, distribution_id, etag): return self._delete_object(distribution_id, etag, 'distribution') # Streaming Distributions def get_all_streaming_distributions(self): tags = [('StreamingDistributionSummary', StreamingDistributionSummary)] return self._get_all_objects('streaming-distribution', tags) def get_streaming_distribution_info(self, distribution_id): return self._get_info(distribution_id, 'streaming-distribution', StreamingDistribution) def get_streaming_distribution_config(self, distribution_id): return self._get_config(distribution_id, 'streaming-distribution', 
StreamingDistributionConfig) def set_streaming_distribution_config(self, distribution_id, etag, config): return self._set_config(distribution_id, etag, config) def create_streaming_distribution(self, origin, enabled, caller_reference='', cnames=None, comment='', trusted_signers=None): config = StreamingDistributionConfig(origin=origin, enabled=enabled, caller_reference=caller_reference, cnames=cnames, comment=comment, trusted_signers=trusted_signers) return self._create_object(config, 'streaming-distribution', StreamingDistribution) def delete_streaming_distribution(self, distribution_id, etag): return self._delete_object(distribution_id, etag, 'streaming-distribution') # Origin Access Identity def get_all_origin_access_identity(self): tags = [('CloudFrontOriginAccessIdentitySummary', OriginAccessIdentitySummary)] return self._get_all_objects('origin-access-identity/cloudfront', tags) def get_origin_access_identity_info(self, access_id): return self._get_info(access_id, 'origin-access-identity/cloudfront', OriginAccessIdentity) def get_origin_access_identity_config(self, access_id): return self._get_config(access_id, 'origin-access-identity/cloudfront', OriginAccessIdentityConfig) def set_origin_access_identity_config(self, access_id, etag, config): return self._set_config(access_id, etag, config) def create_origin_access_identity(self, caller_reference='', comment=''): config = OriginAccessIdentityConfig(caller_reference=caller_reference, comment=comment) return self._create_object(config, 'origin-access-identity/cloudfront', OriginAccessIdentity) def delete_origin_access_identity(self, access_id, etag): return self._delete_object(access_id, etag, 'origin-access-identity/cloudfront') # Object Invalidation def create_invalidation_request(self, distribution_id, paths, caller_reference=None): """Creates a new invalidation request :see: http://goo.gl/8vECq """ # We allow you to pass in either an array or # an InvalidationBatch object if not isinstance(paths, InvalidationBatch): paths = InvalidationBatch(paths) paths.connection = self uri = '/%s/distribution/%s/invalidation' % (self.Version, distribution_id) response = self.make_request('POST', uri, {'Content-Type': 'text/xml'}, data=paths.to_xml()) body = response.read() if response.status == 201: h = handler.XmlHandler(paths, self) xml.sax.parseString(body, h) return paths else: raise CloudFrontServerError(response.status, response.reason, body) def invalidation_request_status(self, distribution_id, request_id, caller_reference=None): uri = '/%s/distribution/%s/invalidation/%s' % (self.Version, distribution_id, request_id) response = self.make_request('GET', uri, {'Content-Type': 'text/xml'}) body = response.read() if response.status == 200: paths = InvalidationBatch([]) h = handler.XmlHandler(paths, self) xml.sax.parseString(body, h) return paths else: raise CloudFrontServerError(response.status, response.reason, body) def get_invalidation_requests(self, distribution_id, marker=None, max_items=None): """ Get all invalidation requests for a given CloudFront distribution. This returns an instance of an InvalidationListResultSet that automatically handles all of the result paging, etc. from CF - you just need to keep iterating until there are no more results. :type distribution_id: string :param distribution_id: The id of the CloudFront distribution :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. 
Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results and only in a follow-up request to indicate the maximum number of invalidation requests you want in the response. You will need to pass the next_marker property from the previous InvalidationListResultSet response in the follow-up request in order to get the next 'page' of results. :rtype: :class:`boto.cloudfront.invalidation.InvalidationListResultSet` :returns: An InvalidationListResultSet iterator that lists invalidation requests for a given CloudFront distribution. Automatically handles paging the results. """ uri = 'distribution/%s/invalidation' % distribution_id params = dict() if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items if params: uri += '?%s=%s' % params.popitem() for k, v in params.items(): uri += '&%s=%s' % (k, v) tags=[('InvalidationSummary', InvalidationSummary)] rs_class = InvalidationListResultSet rs_kwargs = dict(connection=self, distribution_id=distribution_id, max_items=max_items, marker=marker) return self._get_all_objects(uri, tags, result_set_class=rs_class, result_set_kwargs=rs_kwargs) boto-2.20.1/boto/cloudfront/distribution.py000066400000000000000000000743311225267101000207560ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import uuid import base64 import time from boto.compat import json from boto.cloudfront.identity import OriginAccessIdentity from boto.cloudfront.object import Object, StreamingObject from boto.cloudfront.signers import ActiveTrustedSigners, TrustedSigners from boto.cloudfront.logging import LoggingInfo from boto.cloudfront.origin import S3Origin, CustomOrigin from boto.s3.acl import ACL class DistributionConfig(object): def __init__(self, connection=None, origin=None, enabled=False, caller_reference='', cnames=None, comment='', trusted_signers=None, default_root_object=None, logging=None): """ :param origin: Origin information to associate with the distribution. If your distribution will use an Amazon S3 origin, then this should be an S3Origin object. If your distribution will use a custom origin (non Amazon S3), then this should be a CustomOrigin object. :type origin: :class:`boto.cloudfront.origin.S3Origin` or :class:`boto.cloudfront.origin.CustomOrigin` :param enabled: Whether the distribution is enabled to accept end user requests for content. 
:type enabled: bool :param caller_reference: A unique number that ensures the request can't be replayed. If no caller_reference is provided, boto will generate a type 4 UUID for use as the caller reference. :type caller_reference: str :param cnames: A CNAME alias you want to associate with this distribution. You can have up to 10 CNAME aliases per distribution. :type cnames: array of str :param comment: Any comments you want to include about the distribution. :type comment: str :param trusted_signers: Specifies any AWS accounts you want to permit to create signed URLs for private content. If you want the distribution to use signed URLs, this should contain a TrustedSigners object; if you want the distribution to use basic URLs, leave this None. :type trusted_signers: :class:`boto.cloudfront.signers.TrustedSigners` :param default_root_object: Designates a default root object. Only include a DefaultRootObject value if you are going to assign a default root object for the distribution. :type default_root_object: str :param logging: Controls whether access logs are written for the distribution. If you want to turn on access logs, this should contain a LoggingInfo object; otherwise it should contain None. :type logging: :class:`boto.cloudfront.logging.LoggingInfo` """ self.connection = connection self.origin = origin self.enabled = enabled if caller_reference: self.caller_reference = caller_reference else: self.caller_reference = str(uuid.uuid4()) self.cnames = [] if cnames: self.cnames = cnames self.comment = comment self.trusted_signers = trusted_signers self.logging = logging self.default_root_object = default_root_object def to_xml(self): s = '\n' s += '\n' if self.origin: s += self.origin.to_xml() s += ' %s\n' % self.caller_reference for cname in self.cnames: s += ' %s\n' % cname if self.comment: s += ' %s\n' % self.comment s += ' ' if self.enabled: s += 'true' else: s += 'false' s += '\n' if self.trusted_signers: s += '\n' for signer in self.trusted_signers: if signer == 'Self': s += ' \n' else: s += ' %s\n' % signer s += '\n' if self.logging: s += '\n' s += ' %s\n' % self.logging.bucket s += ' %s\n' % self.logging.prefix s += '\n' if self.default_root_object: dro = self.default_root_object s += '%s\n' % dro s += '\n' return s def startElement(self, name, attrs, connection): if name == 'TrustedSigners': self.trusted_signers = TrustedSigners() return self.trusted_signers elif name == 'Logging': self.logging = LoggingInfo() return self.logging elif name == 'S3Origin': self.origin = S3Origin() return self.origin elif name == 'CustomOrigin': self.origin = CustomOrigin() return self.origin else: return None def endElement(self, name, value, connection): if name == 'CNAME': self.cnames.append(value) elif name == 'Comment': self.comment = value elif name == 'Enabled': if value.lower() == 'true': self.enabled = True else: self.enabled = False elif name == 'CallerReference': self.caller_reference = value elif name == 'DefaultRootObject': self.default_root_object = value else: setattr(self, name, value) class StreamingDistributionConfig(DistributionConfig): def __init__(self, connection=None, origin='', enabled=False, caller_reference='', cnames=None, comment='', trusted_signers=None, logging=None): DistributionConfig.__init__(self, connection=connection, origin=origin, enabled=enabled, caller_reference=caller_reference, cnames=cnames, comment=comment, trusted_signers=trusted_signers, logging=logging) def to_xml(self): s = '\n' s += '\n' if self.origin: s += self.origin.to_xml() s += ' %s\n' % self.caller_reference for
cname in self.cnames: s += ' %s\n' % cname if self.comment: s += ' %s\n' % self.comment s += ' ' if self.enabled: s += 'true' else: s += 'false' s += '\n' if self.trusted_signers: s += '\n' for signer in self.trusted_signers: if signer == 'Self': s += ' \n' else: s += ' %s\n' % signer s += '\n' if self.logging: s += '\n' s += ' %s\n' % self.logging.bucket s += ' %s\n' % self.logging.prefix s += '\n' s += '\n' return s class DistributionSummary(object): def __init__(self, connection=None, domain_name='', id='', last_modified_time=None, status='', origin=None, cname='', comment='', enabled=False): self.connection = connection self.domain_name = domain_name self.id = id self.last_modified_time = last_modified_time self.status = status self.origin = origin self.enabled = enabled self.cnames = [] if cname: self.cnames.append(cname) self.comment = comment self.trusted_signers = None self.etag = None self.streaming = False def startElement(self, name, attrs, connection): if name == 'TrustedSigners': self.trusted_signers = TrustedSigners() return self.trusted_signers elif name == 'S3Origin': self.origin = S3Origin() return self.origin elif name == 'CustomOrigin': self.origin = CustomOrigin() return self.origin return None def endElement(self, name, value, connection): if name == 'Id': self.id = value elif name == 'Status': self.status = value elif name == 'LastModifiedTime': self.last_modified_time = value elif name == 'DomainName': self.domain_name = value elif name == 'Origin': self.origin = value elif name == 'CNAME': self.cnames.append(value) elif name == 'Comment': self.comment = value elif name == 'Enabled': if value.lower() == 'true': self.enabled = True else: self.enabled = False elif name == 'StreamingDistributionSummary': self.streaming = True else: setattr(self, name, value) def get_distribution(self): return self.connection.get_distribution_info(self.id) class StreamingDistributionSummary(DistributionSummary): def get_distribution(self): return self.connection.get_streaming_distribution_info(self.id) class Distribution(object): def __init__(self, connection=None, config=None, domain_name='', id='', last_modified_time=None, status=''): self.connection = connection self.config = config self.domain_name = domain_name self.id = id self.last_modified_time = last_modified_time self.status = status self.in_progress_invalidation_batches = 0 self.active_signers = None self.etag = None self._bucket = None self._object_class = Object def startElement(self, name, attrs, connection): if name == 'DistributionConfig': self.config = DistributionConfig() return self.config elif name == 'ActiveTrustedSigners': self.active_signers = ActiveTrustedSigners() return self.active_signers else: return None def endElement(self, name, value, connection): if name == 'Id': self.id = value elif name == 'LastModifiedTime': self.last_modified_time = value elif name == 'Status': self.status = value elif name == 'InProgressInvalidationBatches': self.in_progress_invalidation_batches = int(value) elif name == 'DomainName': self.domain_name = value else: setattr(self, name, value) def update(self, enabled=None, cnames=None, comment=None): """ Update the configuration of the Distribution. The only values of the DistributionConfig that can be directly updated are: * CNAMES * Comment * Whether the Distribution is enabled or not Any changes to the ``trusted_signers`` or ``origin`` properties of this distribution's current config object will also be included in the update. 
Therefore, to set the origin access identity for this distribution, set ``Distribution.config.origin.origin_access_identity`` before calling this update method. :type enabled: bool :param enabled: Whether the Distribution is active or not. :type cnames: list of str :param cnames: The DNS CNAMEs associated with this Distribution. Maximum of 10 values. :type comment: str or unicode :param comment: The comment associated with the Distribution. """ new_config = DistributionConfig(self.connection, self.config.origin, self.config.enabled, self.config.caller_reference, self.config.cnames, self.config.comment, self.config.trusted_signers, self.config.default_root_object) if enabled != None: new_config.enabled = enabled if cnames != None: new_config.cnames = cnames if comment != None: new_config.comment = comment self.etag = self.connection.set_distribution_config(self.id, self.etag, new_config) self.config = new_config self._object_class = Object def enable(self): """ Activate the Distribution. A convenience wrapper around the update method. """ self.update(enabled=True) def disable(self): """ Deactivate the Distribution. A convenience wrapper around the update method. """ self.update(enabled=False) def delete(self): """ Delete this CloudFront Distribution. The content associated with the Distribution is not deleted from the underlying Origin bucket in S3. """ self.connection.delete_distribution(self.id, self.etag) def _get_bucket(self): if isinstance(self.config.origin, S3Origin): if not self._bucket: bucket_dns_name = self.config.origin.dns_name bucket_name = bucket_dns_name.replace('.s3.amazonaws.com', '') from boto.s3.connection import S3Connection s3 = S3Connection(self.connection.aws_access_key_id, self.connection.aws_secret_access_key, proxy=self.connection.proxy, proxy_port=self.connection.proxy_port, proxy_user=self.connection.proxy_user, proxy_pass=self.connection.proxy_pass) self._bucket = s3.get_bucket(bucket_name) self._bucket.distribution = self self._bucket.set_key_class(self._object_class) return self._bucket else: raise NotImplementedError('Unable to get_objects on CustomOrigin') def get_objects(self): """ Return a list of all content objects in this distribution. :rtype: list of :class:`boto.cloudfront.object.Object` :return: The content objects """ bucket = self._get_bucket() objs = [] for key in bucket: objs.append(key) return objs def set_permissions(self, object, replace=False): """ Sets the S3 ACL grants for the given object to the appropriate value based on the type of Distribution. If the Distribution is serving private content the ACL will be set to include the Origin Access Identity associated with the Distribution. If the Distribution is serving public content the content will be set up with "public-read". :type object: :class:`boto.cloudfront.object.Object` :param object: The Object whose ACL is being set :type replace: bool :param replace: If False, the Origin Access Identity will be appended to the existing ACL for the object. If True, the ACL for the object will be completely replaced with one that grants READ permission to the Origin Access Identity.
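For example, to replace an object's ACL with one that grants READ only to the distribution's origin access identity (an illustrative sketch; ``dist`` is assumed to be an existing private ``Distribution``)::

    obj = dist.get_objects()[0]
    dist.set_permissions(obj, replace=True)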
""" if isinstance(self.config.origin, S3Origin): if self.config.origin.origin_access_identity: id = self.config.origin.origin_access_identity.split('/')[-1] oai = self.connection.get_origin_access_identity_info(id) policy = object.get_acl() if replace: policy.acl = ACL() policy.acl.add_user_grant('READ', oai.s3_user_id) object.set_acl(policy) else: object.set_canned_acl('public-read') def set_permissions_all(self, replace=False): """ Sets the S3 ACL grants for all objects in the Distribution to the appropriate value based on the type of Distribution. :type replace: bool :param replace: If False, the Origin Access Identity will be appended to the existing ACL for the object. If True, the ACL for the object will be completely replaced with one that grants READ permission to the Origin Access Identity. """ bucket = self._get_bucket() for key in bucket: self.set_permissions(key, replace) def add_object(self, name, content, headers=None, replace=True): """ Adds a new content object to the Distribution. The content for the object will be copied to a new Key in the S3 Bucket and the permissions will be set appropriately for the type of Distribution. :type name: str or unicode :param name: The name or key of the new object. :type content: file-like object :param content: A file-like object that contains the content for the new object. :type headers: dict :param headers: A dictionary containing additional headers you would like associated with the new object in S3. :rtype: :class:`boto.cloudfront.object.Object` :return: The newly created object. """ if self.config.origin.origin_access_identity: policy = 'private' else: policy = 'public-read' bucket = self._get_bucket() object = bucket.new_key(name) object.set_contents_from_file(content, headers=headers, policy=policy) if self.config.origin.origin_access_identity: self.set_permissions(object, replace) return object def create_signed_url(self, url, keypair_id, expire_time=None, valid_after_time=None, ip_address=None, policy_url=None, private_key_file=None, private_key_string=None): """ Creates a signed CloudFront URL that is only valid within the specified parameters. :type url: str :param url: The URL of the protected object. :type keypair_id: str :param keypair_id: The keypair ID of the Amazon KeyPair used to sign theURL. This ID MUST correspond to the private key specified with private_key_file or private_key_string. :type expire_time: int :param expire_time: The expiry time of the URL. If provided, the URL will expire after the time has passed. If not provided the URL will never expire. Format is a unix epoch. Use time.time() + duration_in_sec. :type valid_after_time: int :param valid_after_time: If provided, the URL will not be valid until after valid_after_time. Format is a unix epoch. Use time.time() + secs_until_valid. :type ip_address: str :param ip_address: If provided, only allows access from the specified IP address. Use '192.168.0.10' for a single IP or use '192.168.0.0/24' CIDR notation for a subnet. :type policy_url: str :param policy_url: If provided, allows the signature to contain wildcard globs in the URL. For example, you could provide: 'http://example.com/media/\*' and the policy and signature would allow access to all contents of the media subdirectory. If not specified, only allow access to the exact url provided in 'url'. :type private_key_file: str or file object. :param private_key_file: If provided, contains the filename of the private key file used for signing or an open file object containing the private key contents. 
Only one of private_key_file or private_key_string can be provided. :type private_key_string: str :param private_key_string: If provided, contains the private key string used for signing. Only one of private_key_file or private_key_string can be provided. :rtype: str :return: The signed URL. """ # Get the required parameters params = self._create_signing_params( url=url, keypair_id=keypair_id, expire_time=expire_time, valid_after_time=valid_after_time, ip_address=ip_address, policy_url=policy_url, private_key_file=private_key_file, private_key_string=private_key_string) #combine these into a full url if "?" in url: sep = "&" else: sep = "?" signed_url_params = [] for key in ["Expires", "Policy", "Signature", "Key-Pair-Id"]: if key in params: param = "%s=%s" % (key, params[key]) signed_url_params.append(param) signed_url = url + sep + "&".join(signed_url_params) return signed_url def _create_signing_params(self, url, keypair_id, expire_time=None, valid_after_time=None, ip_address=None, policy_url=None, private_key_file=None, private_key_string=None): """ Creates the required URL parameters for a signed URL. """ params = {} # Check if we can use a canned policy if expire_time and not valid_after_time and not ip_address and not policy_url: # we manually construct this policy string to ensure formatting # matches signature policy = self._canned_policy(url, expire_time) params["Expires"] = str(expire_time) else: # If no policy_url is specified, default to the full url. if policy_url is None: policy_url = url # Can't use canned policy policy = self._custom_policy(policy_url, expires=expire_time, valid_after=valid_after_time, ip_address=ip_address) encoded_policy = self._url_base64_encode(policy) params["Policy"] = encoded_policy #sign the policy signature = self._sign_string(policy, private_key_file, private_key_string) #now base64 encode the signature (URL safe as well) encoded_signature = self._url_base64_encode(signature) params["Signature"] = encoded_signature params["Key-Pair-Id"] = keypair_id return params @staticmethod def _canned_policy(resource, expires): """ Creates a canned policy string. """ policy = ('{"Statement":[{"Resource":"%(resource)s",' '"Condition":{"DateLessThan":{"AWS:EpochTime":' '%(expires)s}}}]}' % locals()) return policy @staticmethod def _custom_policy(resource, expires=None, valid_after=None, ip_address=None): """ Creates a custom policy string based on the supplied parameters. """ condition = {} # SEE: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/RestrictingAccessPrivateContent.html#CustomPolicy # The 'DateLessThan' property is required. if not expires: # Defaults to ONE day expires = int(time.time()) + 86400 condition["DateLessThan"] = {"AWS:EpochTime": expires} if valid_after: condition["DateGreaterThan"] = {"AWS:EpochTime": valid_after} if ip_address: if '/' not in ip_address: ip_address += "/32" condition["IpAddress"] = {"AWS:SourceIp": ip_address} policy = {"Statement": [{ "Resource": resource, "Condition": condition}]} return json.dumps(policy, separators=(",", ":")) @staticmethod def _sign_string(message, private_key_file=None, private_key_string=None): """ Signs a string for use with Amazon CloudFront. Requires the rsa library be installed. 
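The signing step is equivalent to the following sketch (assuming the ``rsa`` package is installed; ``'pk.pem'`` is a hypothetical PKCS#1 private key file)::

    import rsa
    # Load the PEM-encoded private key and sign the policy with SHA-1,
    # mirroring what this method does internally.
    with open('pk.pem') as fp:
        key = rsa.PrivateKey.load_pkcs1(fp.read())
    signature = rsa.sign('policy document', key, 'SHA-1')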
""" try: import rsa except ImportError: raise NotImplementedError("Boto depends on the python rsa " "library to generate signed URLs for " "CloudFront") # Make sure only one of private_key_file and private_key_string is set if private_key_file and private_key_string: raise ValueError("Only specify the private_key_file or the private_key_string not both") if not private_key_file and not private_key_string: raise ValueError("You must specify one of private_key_file or private_key_string") # If private_key_file is a file name, open it and read it if private_key_string is None: if isinstance(private_key_file, basestring): with open(private_key_file, 'r') as file_handle: private_key_string = file_handle.read() # Otherwise, treat it like a file else: private_key_string = private_key_file.read() # Sign it! private_key = rsa.PrivateKey.load_pkcs1(private_key_string) signature = rsa.sign(str(message), private_key, 'SHA-1') return signature @staticmethod def _url_base64_encode(msg): """ Base64 encodes a string using the URL-safe characters specified by Amazon. """ msg_base64 = base64.b64encode(msg) msg_base64 = msg_base64.replace('+', '-') msg_base64 = msg_base64.replace('=', '_') msg_base64 = msg_base64.replace('/', '~') return msg_base64 class StreamingDistribution(Distribution): def __init__(self, connection=None, config=None, domain_name='', id='', last_modified_time=None, status=''): Distribution.__init__(self, connection, config, domain_name, id, last_modified_time, status) self._object_class = StreamingObject def startElement(self, name, attrs, connection): if name == 'StreamingDistributionConfig': self.config = StreamingDistributionConfig() return self.config else: return Distribution.startElement(self, name, attrs, connection) def update(self, enabled=None, cnames=None, comment=None): """ Update the configuration of the StreamingDistribution. The only values of the StreamingDistributionConfig that can be directly updated are: * CNAMES * Comment * Whether the Distribution is enabled or not Any changes to the ``trusted_signers`` or ``origin`` properties of this distribution's current config object will also be included in the update. Therefore, to set the origin access identity for this distribution, set ``StreamingDistribution.config.origin.origin_access_identity`` before calling this update method. :type enabled: bool :param enabled: Whether the StreamingDistribution is active or not. :type cnames: list of str :param cnames: The DNS CNAME's associated with this Distribution. Maximum of 10 values. :type comment: str or unicode :param comment: The comment associated with the Distribution. 
""" new_config = StreamingDistributionConfig(self.connection, self.config.origin, self.config.enabled, self.config.caller_reference, self.config.cnames, self.config.comment, self.config.trusted_signers) if enabled != None: new_config.enabled = enabled if cnames != None: new_config.cnames = cnames if comment != None: new_config.comment = comment self.etag = self.connection.set_streaming_distribution_config(self.id, self.etag, new_config) self.config = new_config self._object_class = StreamingObject def delete(self): self.connection.delete_streaming_distribution(self.id, self.etag) boto-2.20.1/boto/cloudfront/exception.py000066400000000000000000000022651225267101000202320ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.exception import BotoServerError class CloudFrontServerError(BotoServerError): pass boto-2.20.1/boto/cloudfront/identity.py000066400000000000000000000106111225267101000200570ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
import uuid class OriginAccessIdentity: def __init__(self, connection=None, config=None, id='', s3_user_id='', comment=''): self.connection = connection self.config = config self.id = id self.s3_user_id = s3_user_id self.comment = comment self.etag = None def startElement(self, name, attrs, connection): if name == 'CloudFrontOriginAccessIdentityConfig': self.config = OriginAccessIdentityConfig() return self.config else: return None def endElement(self, name, value, connection): if name == 'Id': self.id = value elif name == 'S3CanonicalUserId': self.s3_user_id = value elif name == 'Comment': self.comment = value else: setattr(self, name, value) def update(self, comment=None): new_config = OriginAccessIdentityConfig(self.connection, self.config.caller_reference, self.config.comment) if comment != None: new_config.comment = comment self.etag = self.connection.set_origin_access_identity_config(self.id, self.etag, new_config) self.config = new_config def delete(self): return self.connection.delete_origin_access_identity(self.id, self.etag) def uri(self): return 'origin-access-identity/cloudfront/%s' % self.id class OriginAccessIdentityConfig: def __init__(self, connection=None, caller_reference='', comment=''): self.connection = connection if caller_reference: self.caller_reference = caller_reference else: self.caller_reference = str(uuid.uuid4()) self.comment = comment def to_xml(self): s = '\n' s += '\n' s += ' %s\n' % self.caller_reference if self.comment: s += ' %s\n' % self.comment s += '\n' return s def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Comment': self.comment = value elif name == 'CallerReference': self.caller_reference = value else: setattr(self, name, value) class OriginAccessIdentitySummary: def __init__(self, connection=None, id='', s3_user_id='', comment=''): self.connection = connection self.id = id self.s3_user_id = s3_user_id self.comment = comment self.etag = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Id': self.id = value elif name == 'S3CanonicalUserId': self.s3_user_id = value elif name == 'Comment': self.comment = value else: setattr(self, name, value) def get_origin_access_identity(self): return self.connection.get_origin_access_identity_info(self.id) boto-2.20.1/boto/cloudfront/invalidation.py000066400000000000000000000175351225267101000207210ustar00rootroot00000000000000# Copyright (c) 2006-2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import uuid import urllib from boto.resultset import ResultSet class InvalidationBatch(object): """A simple invalidation request. :see: http://docs.amazonwebservices.com/AmazonCloudFront/2010-08-01/APIReference/index.html?InvalidationBatchDatatype.html """ def __init__(self, paths=None, connection=None, distribution=None, caller_reference=''): """Create a new invalidation request: :paths: An array of paths to invalidate """ self.paths = paths or [] self.distribution = distribution self.caller_reference = caller_reference if not self.caller_reference: self.caller_reference = str(uuid.uuid4()) # If we passed in a distribution, # then we use that as the connection object if distribution: self.connection = distribution else: self.connection = connection def __repr__(self): return '' % self.id def add(self, path): """Add another path to this invalidation request""" return self.paths.append(path) def remove(self, path): """Remove a path from this invalidation request""" return self.paths.remove(path) def __iter__(self): return iter(self.paths) def __getitem__(self, i): return self.paths[i] def __setitem__(self, k, v): self.paths[k] = v def escape(self, p): """Escape a path, make sure it begins with a slash and contains no invalid characters""" if not p[0] == "/": p = "/%s" % p return urllib.quote(p) def to_xml(self): """Get this batch as XML""" assert self.connection != None s = '\n' s += '\n' % self.connection.Version for p in self.paths: s += ' %s\n' % self.escape(p) s += ' %s\n' % self.caller_reference s += '\n' return s def startElement(self, name, attrs, connection): if name == "InvalidationBatch": self.paths = [] return None def endElement(self, name, value, connection): if name == 'Path': self.paths.append(value) elif name == "Status": self.status = value elif name == "Id": self.id = value elif name == "CreateTime": self.create_time = value elif name == "CallerReference": self.caller_reference = value return None class InvalidationListResultSet(object): """ A resultset for listing invalidations on a given CloudFront distribution. Implements the iterator interface and transparently handles paging results from CF so even if you have many thousands of invalidations on the distribution you can iterate over all invalidations in a reasonably efficient manner. """ def __init__(self, markers=None, connection=None, distribution_id=None, invalidations=None, marker='', next_marker=None, max_items=None, is_truncated=False): self.markers = markers or [] self.connection = connection self.distribution_id = distribution_id self.marker = marker self.next_marker = next_marker self.max_items = max_items self.auto_paginate = max_items is None self.is_truncated = is_truncated self._inval_cache = invalidations or [] def __iter__(self): """ A generator function for listing invalidation requests for a given CloudFront distribution. 
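A sketch of typical use through the connection (``conn`` is assumed to be a ``CloudFrontConnection``; the distribution id is hypothetical)::

    for summary in conn.get_invalidation_requests('EDFDVBD6EXAMPLE'):
        print summary.id, summary.status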
""" conn = self.connection distribution_id = self.distribution_id result_set = self for inval in result_set._inval_cache: yield inval if not self.auto_paginate: return while result_set.is_truncated: result_set = conn.get_invalidation_requests(distribution_id, marker=result_set.next_marker, max_items=result_set.max_items) for i in result_set._inval_cache: yield i def startElement(self, name, attrs, connection): for root_elem, handler in self.markers: if name == root_elem: obj = handler(connection, distribution_id=self.distribution_id) self._inval_cache.append(obj) return obj def endElement(self, name, value, connection): if name == 'IsTruncated': self.is_truncated = self.to_boolean(value) elif name == 'Marker': self.marker = value elif name == 'NextMarker': self.next_marker = value elif name == 'MaxItems': self.max_items = int(value) def to_boolean(self, value, true_value='true'): if value == true_value: return True else: return False class InvalidationSummary(object): """ Represents InvalidationSummary complex type in CloudFront API that lists the id and status of a given invalidation request. """ def __init__(self, connection=None, distribution_id=None, id='', status=''): self.connection = connection self.distribution_id = distribution_id self.id = id self.status = status def __repr__(self): return '' % self.id def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Id': self.id = value elif name == 'Status': self.status = value def get_distribution(self): """ Returns a Distribution object representing the parent CloudFront distribution of the invalidation request listed in the InvalidationSummary. :rtype: :class:`boto.cloudfront.distribution.Distribution` :returns: A Distribution object representing the parent CloudFront distribution of the invalidation request listed in the InvalidationSummary """ return self.connection.get_distribution_info(self.distribution_id) def get_invalidation_request(self): """ Returns an InvalidationBatch object representing the invalidation request referred to in the InvalidationSummary. :rtype: :class:`boto.cloudfront.invalidation.InvalidationBatch` :returns: An InvalidationBatch object representing the invalidation request referred to by the InvalidationSummary """ return self.connection.invalidation_request_status( self.distribution_id, self.id) boto-2.20.1/boto/cloudfront/logging.py000066400000000000000000000030251225267101000176550ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class LoggingInfo(object): def __init__(self, bucket='', prefix=''): self.bucket = bucket self.prefix = prefix def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Bucket': self.bucket = value elif name == 'Prefix': self.prefix = value else: setattr(self, name, value) boto-2.20.1/boto/cloudfront/object.py000066400000000000000000000033651225267101000175040ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.s3.key import Key class Object(Key): def __init__(self, bucket, name=None): Key.__init__(self, bucket, name=name) self.distribution = bucket.distribution def __repr__(self): return '' % (self.distribution.config.origin, self.name) def url(self, scheme='http'): url = '%s://' % scheme url += self.distribution.domain_name if scheme.lower().startswith('rtmp'): url += '/cfx/st/' else: url += '/' url += self.name return url class StreamingObject(Object): def url(self, scheme='rtmp'): return Object.url(self, scheme) boto-2.20.1/boto/cloudfront/origin.py000066400000000000000000000136341225267101000175250ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from identity import OriginAccessIdentity def get_oai_value(origin_access_identity): if isinstance(origin_access_identity, OriginAccessIdentity): return origin_access_identity.uri() else: return origin_access_identity class S3Origin(object): """ Origin information to associate with the distribution. If your distribution will use an Amazon S3 origin, then you use the S3Origin element. """ def __init__(self, dns_name=None, origin_access_identity=None): """ :param dns_name: The DNS name of your Amazon S3 bucket to associate with the distribution. For example: mybucket.s3.amazonaws.com. :type dns_name: str :param origin_access_identity: The CloudFront origin access identity to associate with the distribution. If you want the distribution to serve private content, include this element; if you want the distribution to serve public content, remove this element. :type origin_access_identity: str """ self.dns_name = dns_name self.origin_access_identity = origin_access_identity def __repr__(self): return '' % self.dns_name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'DNSName': self.dns_name = value elif name == 'OriginAccessIdentity': self.origin_access_identity = value else: setattr(self, name, value) def to_xml(self): s = ' \n' s += ' %s\n' % self.dns_name if self.origin_access_identity: val = get_oai_value(self.origin_access_identity) s += ' %s\n' % val s += ' \n' return s class CustomOrigin(object): """ Origin information to associate with the distribution. If your distribution will use a non-Amazon S3 origin, then you use the CustomOrigin element. """ def __init__(self, dns_name=None, http_port=80, https_port=443, origin_protocol_policy=None): """ :param dns_name: The DNS name of your Amazon S3 bucket to associate with the distribution. For example: mybucket.s3.amazonaws.com. :type dns_name: str :param http_port: The HTTP port the custom origin listens on. :type http_port: int :param https_port: The HTTPS port the custom origin listens on. :type http_port: int :param origin_protocol_policy: The origin protocol policy to apply to your origin. If you specify http-only, CloudFront will use HTTP only to access the origin. If you specify match-viewer, CloudFront will fetch from your origin using HTTP or HTTPS, based on the protocol of the viewer request. 
:type origin_protocol_policy: str """ self.dns_name = dns_name self.http_port = http_port self.https_port = https_port self.origin_protocol_policy = origin_protocol_policy def __repr__(self): return '' % self.dns_name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'DNSName': self.dns_name = value elif name == 'HTTPPort': try: self.http_port = int(value) except ValueError: self.http_port = value elif name == 'HTTPSPort': try: self.https_port = int(value) except ValueError: self.https_port = value elif name == 'OriginProtocolPolicy': self.origin_protocol_policy = value else: setattr(self, name, value) def to_xml(self): s = ' \n' s += ' %s\n' % self.dns_name s += ' %d\n' % self.http_port s += ' %d\n' % self.https_port s += ' %s\n' % self.origin_protocol_policy s += ' \n' return s boto-2.20.1/boto/cloudfront/signers.py000066400000000000000000000040501225267101000177000ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Signer: def __init__(self): self.id = None self.key_pair_ids = [] def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Self': self.id = 'Self' elif name == 'AwsAccountNumber': self.id = value elif name == 'KeyPairId': self.key_pair_ids.append(value) class ActiveTrustedSigners(list): def startElement(self, name, attrs, connection): if name == 'Signer': s = Signer() self.append(s) return s def endElement(self, name, value, connection): pass class TrustedSigners(list): def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Self': self.append(name) elif name == 'AwsAccountNumber': self.append(value) boto-2.20.1/boto/cloudsearch/000077500000000000000000000000001225267101000157725ustar00rootroot00000000000000boto-2.20.1/boto/cloudsearch/__init__.py000066400000000000000000000047671225267101000201210ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the Amazon CloudSearch service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ import boto.cloudsearch.layer1 return [RegionInfo(name='us-east-1', endpoint='cloudsearch.us-east-1.amazonaws.com', connection_cls=boto.cloudsearch.layer1.Layer1), RegionInfo(name='eu-west-1', endpoint='cloudsearch.eu-west-1.amazonaws.com', connection_cls=boto.cloudsearch.layer1.Layer1), RegionInfo(name='us-west-1', endpoint='cloudsearch.us-west-1.amazonaws.com', connection_cls=boto.cloudsearch.layer1.Layer1), RegionInfo(name='us-west-2', endpoint='cloudsearch.us-west-2.amazonaws.com', connection_cls=boto.cloudsearch.layer1.Layer1), RegionInfo(name='ap-southeast-1', endpoint='cloudsearch.ap-southeast-1.amazonaws.com', connection_cls=boto.cloudsearch.layer1.Layer1), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/cloudsearch/document.py000066400000000000000000000225171225267101000201710ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
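# Example usage (an illustrative sketch, not part of the original module):
# locating a CloudSearch endpoint with the region helpers defined above.
#
#     import boto.cloudsearch
#     conn = boto.cloudsearch.connect_to_region('us-east-1')
#     if conn is None:
#         raise RuntimeError('region not available')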
# import boto.exception from boto.compat import json import requests import boto class SearchServiceException(Exception): pass class CommitMismatchError(Exception): pass class EncodingError(Exception): """ Content sent for Cloud Search indexing was incorrectly encoded. This usually happens when a document is marked as unicode but non-unicode characters are present. """ pass class ContentTooLongError(Exception): """ Content sent for Cloud Search indexing was too long. This will usually happen when documents queued for indexing add up to more than the limit allowed per upload batch (5MB) """ pass class DocumentServiceConnection(object): """ A CloudSearch document service. The DocumentServiceConnection is used to add, remove and update documents in CloudSearch. Commands are uploaded to CloudSearch in SDF (Search Document Format). To generate an appropriate SDF, use :func:`add` to add or update documents, as well as :func:`delete` to remove documents. Once the set of documents is ready to be indexed, use :func:`commit` to send the commands to CloudSearch. If there are a lot of documents to index, it may be preferable to split the generation of SDF data and the actual uploading into CloudSearch. Retrieve the current SDF with :func:`get_sdf`. If this file is then uploaded into S3, it can be retrieved back afterwards for upload into CloudSearch using :func:`add_sdf_from_s3`. The SDF is not cleared after a :func:`commit`. If you wish to continue using the DocumentServiceConnection for another batch upload of commands, you will need to :func:`clear_sdf` first to stop the previous batch of commands from being uploaded again. """ def __init__(self, domain=None, endpoint=None): self.domain = domain self.endpoint = endpoint if not self.endpoint: self.endpoint = domain.doc_service_endpoint self.documents_batch = [] self._sdf = None def add(self, _id, version, fields, lang='en'): """ Add a document to be processed by the DocumentService. The document will not actually be added until :func:`commit` is called :type _id: string :param _id: A unique ID used to refer to this document. :type version: int :param version: Version of the document being indexed. If a file is being reindexed, the version should be higher than the existing one in CloudSearch. :type fields: dict :param fields: A dictionary of key-value pairs to be uploaded. :type lang: string :param lang: The language code the data is in. Only 'en' is currently supported """ d = {'type': 'add', 'id': _id, 'version': version, 'lang': lang, 'fields': fields} self.documents_batch.append(d) def delete(self, _id, version): """ Schedule a document to be removed from the CloudSearch service. The document will not actually be scheduled for removal until :func:`commit` is called :type _id: string :param _id: The unique ID of this document. :type version: int :param version: Version of the document to remove. The delete will only occur if this version number is higher than the version currently in the index. """ d = {'type': 'delete', 'id': _id, 'version': version} self.documents_batch.append(d) def get_sdf(self): """ Generate the working set of documents in Search Data Format (SDF) :rtype: string :returns: JSON-formatted string of the documents in SDF """ return self._sdf if self._sdf else json.dumps(self.documents_batch) def clear_sdf(self): """ Clear the working documents from this DocumentServiceConnection. This should be used after :func:`commit` if the connection will be reused for another set of documents.
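A typical batch cycle is sketched below (``conn`` is assumed to be a ``DocumentServiceConnection`` bound to a domain)::

    conn.add('doc-1', 1, {'title': 'Hello'})
    conn.commit()
    conn.clear_sdf()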
""" self._sdf = None self.documents_batch = [] def add_sdf_from_s3(self, key_obj): """ Load an SDF from S3 Using this method will result in documents added through :func:`add` and :func:`delete` being ignored. :type key_obj: :class:`boto.s3.key.Key` :param key_obj: An S3 key which contains an SDF """ #@todo:: (lucas) would be nice if this could just take an s3://uri..." self._sdf = key_obj.get_contents_as_string() def commit(self): """ Actually send an SDF to CloudSearch for processing If an SDF file has been explicitly loaded it will be used. Otherwise, documents added through :func:`add` and :func:`delete` will be used. :rtype: :class:`CommitResponse` :returns: A summary of documents added and deleted """ sdf = self.get_sdf() if ': null' in sdf: boto.log.error('null value in sdf detected. This will probably raise ' '500 error.') index = sdf.index(': null') boto.log.error(sdf[index - 100:index + 100]) url = "http://%s/2011-02-01/documents/batch" % (self.endpoint) # Keep-alive is automatic in a post-1.0 requests world. session = requests.Session() adapter = requests.adapters.HTTPAdapter( pool_connections=20, pool_maxsize=50, max_retries=5 ) session.mount('http://', adapter) session.mount('https://', adapter) r = session.post(url, data=sdf, headers={'Content-Type': 'application/json'}) return CommitResponse(r, self, sdf) class CommitResponse(object): """Wrapper for response to Cloudsearch document batch commit. :type response: :class:`requests.models.Response` :param response: Response from Cloudsearch /documents/batch API :type doc_service: :class:`boto.cloudsearch.document.DocumentServiceConnection` :param doc_service: Object containing the documents posted and methods to retry :raises: :class:`boto.exception.BotoServerError` :raises: :class:`boto.cloudsearch.document.SearchServiceException` :raises: :class:`boto.cloudsearch.document.EncodingError` :raises: :class:`boto.cloudsearch.document.ContentTooLongError` """ def __init__(self, response, doc_service, sdf): self.response = response self.doc_service = doc_service self.sdf = sdf try: self.content = json.loads(response.content) except: boto.log.error('Error indexing documents.\nResponse Content:\n{0}\n\n' 'SDF:\n{1}'.format(response.content, self.sdf)) raise boto.exception.BotoServerError(self.response.status_code, '', body=response.content) self.status = self.content['status'] if self.status == 'error': self.errors = [e.get('message') for e in self.content.get('errors', [])] for e in self.errors: if "Illegal Unicode character" in e: raise EncodingError("Illegal Unicode character in document") elif e == "The Content-Length is too long": raise ContentTooLongError("Content was too long") else: self.errors = [] self.adds = self.content['adds'] self.deletes = self.content['deletes'] self._check_num_ops('add', self.adds) self._check_num_ops('delete', self.deletes) def _check_num_ops(self, type_, response_num): """Raise exception if number of ops in response doesn't match commit :type type_: str :param type_: Type of commit operation: 'add' or 'delete' :type response_num: int :param response_num: Number of adds or deletes in the response. :raises: :class:`boto.cloudsearch.document.CommitMismatchError` """ commit_num = len([d for d in self.doc_service.documents_batch if d['type'] == type_]) if response_num != commit_num: raise CommitMismatchError( 'Incorrect number of {0}s returned. 
Commit: {1} Response: {2}'\ .format(type_, commit_num, response_num)) boto-2.20.1/boto/cloudsearch/domain.py000066400000000000000000000363671225267101000176300ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import boto from boto.compat import json from .optionstatus import OptionStatus from .optionstatus import IndexFieldStatus from .optionstatus import ServicePoliciesStatus from .optionstatus import RankExpressionStatus from .document import DocumentServiceConnection from .search import SearchConnection def handle_bool(value): if value in [True, 'true', 'True', 'TRUE', 1]: return True return False class Domain(object): """ A CloudSearch domain. :ivar name: The name of the domain. :ivar id: The internally generated unique identifier for the domain. :ivar created: A boolean which is True if the domain is created. It can take several minutes to initialize a domain when CreateDomain is called. Newly created search domains are returned with a False value for Created until domain creation is complete. :ivar deleted: A boolean which is True if the search domain has been deleted. The system must clean up resources dedicated to the search domain when delete is called. Newly deleted search domains are returned from list_domains with a True value for deleted for several minutes until resource cleanup is complete. :ivar processing: True if processing is being done to activate the current domain configuration. :ivar num_searchable_docs: The number of documents that have been submitted to the domain and indexed. :ivar requires_index_documents: True if index_documents needs to be called to activate the current domain configuration. :ivar search_instance_count: The number of search instances that are available to process search requests. :ivar search_instance_type: The instance type that is being used to process search requests. :ivar search_partition_count: The number of partitions across which the search index is spread.
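Domain objects are normally obtained from a
:class:`boto.cloudsearch.layer2.Layer2` connection rather than constructed
directly. A sketch (the domain name is illustrative):

>>> from boto.cloudsearch.layer2 import Layer2
>>> layer2 = Layer2()  # credentials come from the boto config/environment
>>> domain = layer2.lookup('mydomain')
>>> if domain is not None:
...     print domain.created, domain.processing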
""" def __init__(self, layer1, data): self.layer1 = layer1 self.update_from_data(data) def update_from_data(self, data): self.created = data['created'] self.deleted = data['deleted'] self.processing = data['processing'] self.requires_index_documents = data['requires_index_documents'] self.domain_id = data['domain_id'] self.domain_name = data['domain_name'] self.num_searchable_docs = data['num_searchable_docs'] self.search_instance_count = data['search_instance_count'] self.search_instance_type = data.get('search_instance_type', None) self.search_partition_count = data['search_partition_count'] self._doc_service = data['doc_service'] self._search_service = data['search_service'] @property def doc_service_arn(self): return self._doc_service['arn'] @property def doc_service_endpoint(self): return self._doc_service['endpoint'] @property def search_service_arn(self): return self._search_service['arn'] @property def search_service_endpoint(self): return self._search_service['endpoint'] @property def created(self): return self._created @created.setter def created(self, value): self._created = handle_bool(value) @property def deleted(self): return self._deleted @deleted.setter def deleted(self, value): self._deleted = handle_bool(value) @property def processing(self): return self._processing @processing.setter def processing(self, value): self._processing = handle_bool(value) @property def requires_index_documents(self): return self._requires_index_documents @requires_index_documents.setter def requires_index_documents(self, value): self._requires_index_documents = handle_bool(value) @property def search_partition_count(self): return self._search_partition_count @search_partition_count.setter def search_partition_count(self, value): self._search_partition_count = int(value) @property def search_instance_count(self): return self._search_instance_count @search_instance_count.setter def search_instance_count(self, value): self._search_instance_count = int(value) @property def num_searchable_docs(self): return self._num_searchable_docs @num_searchable_docs.setter def num_searchable_docs(self, value): self._num_searchable_docs = int(value) @property def name(self): return self.domain_name @property def id(self): return self.domain_id def delete(self): """ Delete this domain and all index data associated with it. """ return self.layer1.delete_domain(self.name) def get_stemming(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined stemming options for the domain. """ return OptionStatus(self, None, self.layer1.describe_stemming_options, self.layer1.update_stemming_options) def get_stopwords(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined stopword options for the domain. """ return OptionStatus(self, None, self.layer1.describe_stopword_options, self.layer1.update_stopword_options) def get_synonyms(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined synonym options for the domain. """ return OptionStatus(self, None, self.layer1.describe_synonym_options, self.layer1.update_synonym_options) def get_access_policies(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined access policies for the domain. 
""" return ServicePoliciesStatus(self, None, self.layer1.describe_service_access_policies, self.layer1.update_service_access_policies) def index_documents(self): """ Tells the search domain to start indexing its documents using the latest text processing options and IndexFields. This operation must be invoked to make options whose OptionStatus has OptioState of RequiresIndexDocuments visible in search results. """ self.layer1.index_documents(self.name) def get_index_fields(self, field_names=None): """ Return a list of index fields defined for this domain. """ data = self.layer1.describe_index_fields(self.name, field_names) return [IndexFieldStatus(self, d) for d in data] def create_index_field(self, field_name, field_type, default='', facet=False, result=False, searchable=False, source_attributes=[]): """ Defines an ``IndexField``, either replacing an existing definition or creating a new one. :type field_name: string :param field_name: The name of a field in the search index. :type field_type: string :param field_type: The type of field. Valid values are uint | literal | text :type default: string or int :param default: The default value for the field. If the field is of type ``uint`` this should be an integer value. Otherwise, it's a string. :type facet: bool :param facet: A boolean to indicate whether facets are enabled for this field or not. Does not apply to fields of type ``uint``. :type results: bool :param results: A boolean to indicate whether values of this field can be returned in search results or used in ranking. Does not apply to fields of type ``uint``. :type searchable: bool :param searchable: A boolean to indicate whether search is enabled for this field or not. Applies only to fields of type ``literal``. :type source_attributes: list of dicts :param source_attributes: An optional list of dicts that provide information about attributes for this index field. A maximum of 20 source attributes can be configured for each index field. Each item in the list is a dict with the following keys: * data_copy - The value is a dict with the following keys: * default - Optional default value if the source attribute is not specified in a document. * name - The name of the document source field to add to this ``IndexField``. * data_function - Identifies the transformation to apply when copying data from a source attribute. * data_map - The value is a dict with the following keys: * cases - A dict that translates source field values to custom values. * default - An optional default value to use if the source attribute is not specified in a document. * name - the name of the document source field to add to this ``IndexField`` * data_trim_title - Trims common title words from a source document attribute when populating an ``IndexField``. This can be used to create an ``IndexField`` you can use for sorting. The value is a dict with the following fields: * default - An optional default value. * language - an IETF RFC 4646 language code. * separator - The separator that follows the text to trim. * name - The name of the document source field to add. 
:raises: BaseException, InternalException, LimitExceededException, InvalidTypeException, ResourceNotFoundException """ data = self.layer1.define_index_field(self.name, field_name, field_type, default=default, facet=facet, result=result, searchable=searchable, source_attributes=source_attributes) return IndexFieldStatus(self, data, self.layer1.describe_index_fields) def get_rank_expressions(self, rank_names=None): """ Return a list of rank expressions defined for this domain. """ fn = self.layer1.describe_rank_expressions data = fn(self.name, rank_names) return [RankExpressionStatus(self, d, fn) for d in data] def create_rank_expression(self, name, expression): """ Create a new rank expression. :type name: string :param name: The name of an expression computed for ranking while processing a search request. :type expression: string :param expression: The expression to evaluate for ranking or thresholding while processing a search request. The RankExpression syntax is based on JavaScript expressions and supports: * Integer, floating point, hex and octal literals * Shortcut evaluation of logical operators such that an expression a || b evaluates to the value a if a is true without evaluating b at all * JavaScript order of precedence for operators * Arithmetic operators: + - * / % * Boolean operators (including the ternary operator) * Bitwise operators * Comparison operators * Common mathematic functions: abs ceil erf exp floor lgamma ln log2 log10 max min sqrt pow * Trigonometric library functions: acosh acos asinh asin atanh atan cosh cos sinh sin tanh tan * Random generation of a number between 0 and 1: rand * Current time in epoch: time * The min max functions that operate on a variable argument list Intermediate results are calculated as double precision floating point values. The final return value of a RankExpression is automatically converted from floating point to a 32-bit unsigned integer by rounding to the nearest integer, with a natural floor of 0 and a ceiling of max(uint32_t), 4294967295. Mathematical errors such as dividing by 0 will fail during evaluation and return a value of 0. The source data for a RankExpression can be the name of an IndexField of type uint, another RankExpression or the reserved name text_relevance. The text_relevance source is defined to return an integer from 0 to 1000 (inclusive) to indicate how relevant a document is to the search request, taking into account repetition of search terms in the document and proximity of search terms to each other in each matching IndexField in the document. For more information about using rank expressions to customize ranking, see the Amazon CloudSearch Developer Guide. :raises: BaseException, InternalException, LimitExceededException, InvalidTypeException, ResourceNotFoundException """ data = self.layer1.define_rank_expression(self.name, name, expression) return RankExpressionStatus(self, data, self.layer1.describe_rank_expressions) def get_document_service(self): return DocumentServiceConnection(domain=self) def get_search_service(self): return SearchConnection(domain=self) def __repr__(self): return '<Domain: %s>' % self.domain_name boto-2.20.1/boto/cloudsearch/layer1.py000066400000000000000000001024421225267101000175440ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto import boto.jsonresponse from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo #boto.set_stream_logger('cloudsearch') def do_bool(val): return 'true' if val in [True, 1, '1', 'true'] else 'false' class Layer1(AWSQueryConnection): APIVersion = '2011-02-01' DefaultRegionName = boto.config.get('Boto', 'cs_region_name', 'us-east-1') DefaultRegionEndpoint = boto.config.get('Boto', 'cs_region_endpoint', 'cloudsearch.us-east-1.amazonaws.com') def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, host=None, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', api_version=None, security_token=None, validate_certs=True): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__( self, host=self.region.endpoint, aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, is_secure=is_secure, port=port, proxy=proxy, proxy_port=proxy_port, proxy_user=proxy_user, proxy_pass=proxy_pass, debug=debug, https_connection_factory=https_connection_factory, path=path, security_token=security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['hmac-v4'] def get_response(self, doc_path, action, params, path='/', parent=None, verb='GET', list_marker=None): if not parent: parent = self response = self.make_request(action, params, path, verb) body = response.read() boto.log.debug(body) if response.status == 200: e = boto.jsonresponse.Element( list_marker=list_marker if list_marker else 'Set', pythonize_name=True) h = boto.jsonresponse.XmlHandler(e, parent) h.parse(body) inner = e for p in doc_path: inner = inner.get(p) if not inner: return None if list_marker == None else [] if isinstance(inner, list): return inner else: return dict(**inner) else: raise self.ResponseError(response.status, response.reason, body) def create_domain(self, domain_name): """ Create a new search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. 
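A sketch (the domain name is illustrative):

>>> conn = Layer1()
>>> status = conn.create_domain('mydomain')  # returns the new DomainStatus data as a dict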
:raises: BaseException, InternalException, LimitExceededException """ doc_path = ('create_domain_response', 'create_domain_result', 'domain_status') params = {'DomainName': domain_name} return self.get_response(doc_path, 'CreateDomain', params, verb='POST') def define_index_field(self, domain_name, field_name, field_type, default='', facet=False, result=False, searchable=False, source_attributes=None): """ Defines an ``IndexField``, either replacing an existing definition or creating a new one. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type field_name: string :param field_name: The name of a field in the search index. :type field_type: string :param field_type: The type of field. Valid values are uint | literal | text :type default: string or int :param default: The default value for the field. If the field is of type ``uint`` this should be an integer value. Otherwise, it's a string. :type facet: bool :param facet: A boolean to indicate whether facets are enabled for this field or not. Does not apply to fields of type ``uint``. :type results: bool :param results: A boolean to indicate whether values of this field can be returned in search results or used in ranking. Does not apply to fields of type ``uint``. :type searchable: bool :param searchable: A boolean to indicate whether search is enabled for this field or not. Applies only to fields of type ``literal``. :type source_attributes: list of dicts :param source_attributes: An optional list of dicts that provide information about attributes for this index field. A maximum of 20 source attributes can be configured for each index field. Each item in the list is a dict with the following keys: * data_copy - The value is a dict with the following keys: * default - Optional default value if the source attribute is not specified in a document. * name - The name of the document source field to add to this ``IndexField``. * data_function - Identifies the transformation to apply when copying data from a source attribute. * data_map - The value is a dict with the following keys: * cases - A dict that translates source field values to custom values. * default - An optional default value to use if the source attribute is not specified in a document. * name - the name of the document source field to add to this ``IndexField`` * data_trim_title - Trims common title words from a source document attribute when populating an ``IndexField``. This can be used to create an ``IndexField`` you can use for sorting. The value is a dict with the following fields: * default - An optional default value. * language - an IETF RFC 4646 language code. * separator - The separator that follows the text to trim. * name - The name of the document source field to add. 
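A sketch of defining a facetable literal field (the domain and field names
are illustrative):

>>> conn.define_index_field('mydomain', 'genre', 'literal',
...                         facet=True, searchable=True)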
:raises: BaseException, InternalException, LimitExceededException, InvalidTypeException, ResourceNotFoundException """ doc_path = ('define_index_field_response', 'define_index_field_result', 'index_field') params = {'DomainName': domain_name, 'IndexField.IndexFieldName': field_name, 'IndexField.IndexFieldType': field_type} if field_type == 'literal': params['IndexField.LiteralOptions.DefaultValue'] = default params['IndexField.LiteralOptions.FacetEnabled'] = do_bool(facet) params['IndexField.LiteralOptions.ResultEnabled'] = do_bool(result) params['IndexField.LiteralOptions.SearchEnabled'] = do_bool(searchable) elif field_type == 'uint': params['IndexField.UIntOptions.DefaultValue'] = default elif field_type == 'text': params['IndexField.TextOptions.DefaultValue'] = default params['IndexField.TextOptions.FacetEnabled'] = do_bool(facet) params['IndexField.TextOptions.ResultEnabled'] = do_bool(result) return self.get_response(doc_path, 'DefineIndexField', params, verb='POST') def define_rank_expression(self, domain_name, rank_name, rank_expression): """ Defines a RankExpression, either replacing an existing definition or creating a new one. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type rank_name: string :param rank_name: The name of an expression computed for ranking while processing a search request. :type rank_expression: string :param rank_expression: The expression to evaluate for ranking or thresholding while processing a search request. The RankExpression syntax is based on JavaScript expressions and supports: * Integer, floating point, hex and octal literals * Shortcut evaluation of logical operators such that an expression a || b evaluates to the value a if a is true without evaluting b at all * JavaScript order of precedence for operators * Arithmetic operators: + - * / % * Boolean operators (including the ternary operator) * Bitwise operators * Comparison operators * Common mathematic functions: abs ceil erf exp floor lgamma ln log2 log10 max min sqrt pow * Trigonometric library functions: acosh acos asinh asin atanh atan cosh cos sinh sin tanh tan * Random generation of a number between 0 and 1: rand * Current time in epoch: time * The min max functions that operate on a variable argument list Intermediate results are calculated as double precision floating point values. The final return value of a RankExpression is automatically converted from floating point to a 32-bit unsigned integer by rounding to the nearest integer, with a natural floor of 0 and a ceiling of max(uint32_t), 4294967295. Mathematical errors such as dividing by 0 will fail during evaluation and return a value of 0. The source data for a RankExpression can be the name of an IndexField of type uint, another RankExpression or the reserved name text_relevance. The text_relevance source is defined to return an integer from 0 to 1000 (inclusive) to indicate how relevant a document is to the search request, taking into account repetition of search terms in the document and proximity of search terms to each other in each matching IndexField in the document. For more information about using rank expressions to customize ranking, see the Amazon CloudSearch Developer Guide. 
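A sketch (assumes ``year`` is an existing ``uint`` IndexField; the
weighting is illustrative):

>>> conn.define_rank_expression('mydomain', 'recency',
...                             'text_relevance * (year >= 2010 ? 1.0 : 0.5)')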
:raises: BaseException, InternalException, LimitExceededException, InvalidTypeException, ResourceNotFoundException """ doc_path = ('define_rank_expression_response', 'define_rank_expression_result', 'rank_expression') params = {'DomainName': domain_name, 'RankExpression.RankExpression': rank_expression, 'RankExpression.RankName': rank_name} return self.get_response(doc_path, 'DefineRankExpression', params, verb='POST') def delete_domain(self, domain_name): """ Delete a search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :raises: BaseException, InternalException """ doc_path = ('delete_domain_response', 'delete_domain_result', 'domain_status') params = {'DomainName': domain_name} return self.get_response(doc_path, 'DeleteDomain', params, verb='POST') def delete_index_field(self, domain_name, field_name): """ Deletes an existing ``IndexField`` from the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type field_name: string :param field_name: A string that represents the name of an index field. Field names must begin with a letter and can contain the following characters: a-z (lowercase), 0-9, and _ (underscore). Uppercase letters and hyphens are not allowed. The names "body", "docid", and "text_relevance" are reserved and cannot be specified as field or rank expression names. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('delete_index_field_response', 'delete_index_field_result', 'index_field') params = {'DomainName': domain_name, 'IndexFieldName': field_name} return self.get_response(doc_path, 'DeleteIndexField', params, verb='POST') def delete_rank_expression(self, domain_name, rank_name): """ Deletes an existing ``RankExpression`` from the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type rank_name: string :param rank_name: Name of the ``RankExpression`` to delete. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('delete_rank_expression_response', 'delete_rank_expression_result', 'rank_expression') params = {'DomainName': domain_name, 'RankName': rank_name} return self.get_response(doc_path, 'DeleteRankExpression', params, verb='POST') def describe_default_search_field(self, domain_name): """ Describes options defining the default search field used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. 
Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_default_search_field_response', 'describe_default_search_field_result', 'default_search_field') params = {'DomainName': domain_name} return self.get_response(doc_path, 'DescribeDefaultSearchField', params, verb='POST') def describe_domains(self, domain_names=None): """ Describes the domains (optionally limited to one or more domains by name) owned by this account. :type domain_names: list :param domain_names: Limits the response to the specified domains. :raises: BaseException, InternalException """ doc_path = ('describe_domains_response', 'describe_domains_result', 'domain_status_list') params = {} if domain_names: for i, domain_name in enumerate(domain_names, 1): params['DomainNames.member.%d' % i] = domain_name return self.get_response(doc_path, 'DescribeDomains', params, verb='POST', list_marker='DomainStatusList') def describe_index_fields(self, domain_name, field_names=None): """ Describes index fields in the search domain, optionally limited to a single ``IndexField``. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type field_names: list :param field_names: Limits the response to the specified fields. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_index_fields_response', 'describe_index_fields_result', 'index_fields') params = {'DomainName': domain_name} if field_names: for i, field_name in enumerate(field_names, 1): params['FieldNames.member.%d' % i] = field_name return self.get_response(doc_path, 'DescribeIndexFields', params, verb='POST', list_marker='IndexFields') def describe_rank_expressions(self, domain_name, rank_names=None): """ Describes RankExpressions in the search domain, optionally limited to a single expression. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type rank_names: list :param rank_names: Limit response to the specified rank names. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_rank_expressions_response', 'describe_rank_expressions_result', 'rank_expressions') params = {'DomainName': domain_name} if rank_names: for i, rank_name in enumerate(rank_names, 1): params['RankNames.member.%d' % i] = rank_name return self.get_response(doc_path, 'DescribeRankExpressions', params, verb='POST', list_marker='RankExpressions') def describe_service_access_policies(self, domain_name): """ Describes the resource-based policies controlling access to the services in this search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. 
Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_service_access_policies_response', 'describe_service_access_policies_result', 'access_policies') params = {'DomainName': domain_name} return self.get_response(doc_path, 'DescribeServiceAccessPolicies', params, verb='POST') def describe_stemming_options(self, domain_name): """ Describes stemming options used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_stemming_options_response', 'describe_stemming_options_result', 'stems') params = {'DomainName': domain_name} return self.get_response(doc_path, 'DescribeStemmingOptions', params, verb='POST') def describe_stopword_options(self, domain_name): """ Describes stopword options used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_stopword_options_response', 'describe_stopword_options_result', 'stopwords') params = {'DomainName': domain_name} return self.get_response(doc_path, 'DescribeStopwordOptions', params, verb='POST') def describe_synonym_options(self, domain_name): """ Describes synonym options used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('describe_synonym_options_response', 'describe_synonym_options_result', 'synonyms') params = {'DomainName': domain_name} return self.get_response(doc_path, 'DescribeSynonymOptions', params, verb='POST') def index_documents(self, domain_name): """ Tells the search domain to start scanning its documents using the latest text processing options and ``IndexFields``. This operation must be invoked to make visible in searches any options whose OptionStatus has ``OptionState`` of ``RequiresIndexDocuments``. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. 
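A sketch (the domain name is illustrative; the call returns the names of
the fields being reindexed):

>>> conn.index_documents('mydomain')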
:raises: BaseException, InternalException, ResourceNotFoundException """ doc_path = ('index_documents_response', 'index_documents_result', 'field_names') params = {'DomainName': domain_name} return self.get_response(doc_path, 'IndexDocuments', params, verb='POST', list_marker='FieldNames') def update_default_search_field(self, domain_name, default_search_field): """ Updates options defining the default search field used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type default_search_field: string :param default_search_field: The IndexField to use for search requests issued with the q parameter. The default is an empty string, which automatically searches all text fields. :raises: BaseException, InternalException, InvalidTypeException, ResourceNotFoundException """ doc_path = ('update_default_search_field_response', 'update_default_search_field_result', 'default_search_field') params = {'DomainName': domain_name, 'DefaultSearchField': default_search_field} return self.get_response(doc_path, 'UpdateDefaultSearchField', params, verb='POST') def update_service_access_policies(self, domain_name, access_policies): """ Updates the policies controlling access to the services in this search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type access_policies: string :param access_policies: An IAM access policy as described in The Access Policy Language in Using AWS Identity and Access Management. The maximum size of an access policy document is 100KB. :raises: BaseException, InternalException, LimitExceededException, ResourceNotFoundException, InvalidTypeException """ doc_path = ('update_service_access_policies_response', 'update_service_access_policies_result', 'access_policies') params = {'AccessPolicies': access_policies, 'DomainName': domain_name} return self.get_response(doc_path, 'UpdateServiceAccessPolicies', params, verb='POST') def update_stemming_options(self, domain_name, stems): """ Updates stemming options used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type stems: string :param stems: Maps terms to their stems. The JSON object has a single key called "stems" whose value is a dict mapping terms to their stems. The maximum size of a stemming document is 500KB. 
Example: {"stems":{"people": "person", "walking":"walk"}} :raises: BaseException, InternalException, InvalidTypeException, LimitExceededException, ResourceNotFoundException """ doc_path = ('update_stemming_options_response', 'update_stemming_options_result', 'stems') params = {'DomainName': domain_name, 'Stems': stems} return self.get_response(doc_path, 'UpdateStemmingOptions', params, verb='POST') def update_stopword_options(self, domain_name, stopwords): """ Updates stopword options used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type stopwords: string :param stopwords: Lists stopwords in a JSON object. The object has a single key called "stopwords" whose value is an array of strings. The maximum size of a stopwords document is 10KB. Example: {"stopwords": ["a", "an", "the", "of"]} :raises: BaseException, InternalException, InvalidTypeException, LimitExceededException, ResourceNotFoundException """ doc_path = ('update_stopword_options_response', 'update_stopword_options_result', 'stopwords') params = {'DomainName': domain_name, 'Stopwords': stopwords} return self.get_response(doc_path, 'UpdateStopwordOptions', params, verb='POST') def update_synonym_options(self, domain_name, synonyms): """ Updates synonym options used by indexing for the search domain. :type domain_name: string :param domain_name: A string that represents the name of a domain. Domain names must be unique across the domains owned by an account within an AWS region. Domain names must start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen). Uppercase letters and underscores are not allowed. :type synonyms: string :param synonyms: Maps terms to their synonyms. The JSON object has a single key "synonyms" whose value is a dict mapping terms to their synonyms. Each synonym is a simple string or an array of strings. The maximum size of a stopwords document is 100KB. Example: {"synonyms": {"cat": ["feline", "kitten"], "puppy": "dog"}} :raises: BaseException, InternalException, InvalidTypeException, LimitExceededException, ResourceNotFoundException """ doc_path = ('update_synonym_options_response', 'update_synonym_options_result', 'synonyms') params = {'DomainName': domain_name, 'Synonyms': synonyms} return self.get_response(doc_path, 'UpdateSynonymOptions', params, verb='POST') boto-2.20.1/boto/cloudsearch/layer2.py000066400000000000000000000056421225267101000175510ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from .layer1 import Layer1 from .domain import Domain class Layer2(object): def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, host=None, debug=0, session_token=None, region=None, validate_certs=True): self.layer1 = Layer1( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, is_secure=is_secure, port=port, proxy=proxy, proxy_port=proxy_port, host=host, debug=debug, security_token=session_token, region=region, validate_certs=validate_certs) def list_domains(self, domain_names=None): """ Return a list of :class:`boto.cloudsearch.domain.Domain` objects for each domain defined in the current account. """ domain_data = self.layer1.describe_domains(domain_names) return [Domain(self.layer1, data) for data in domain_data] def create_domain(self, domain_name): """ Create a new CloudSearch domain and return the corresponding :class:`boto.cloudsearch.domain.Domain` object. """ data = self.layer1.create_domain(domain_name) return Domain(self.layer1, data) def lookup(self, domain_name): """ Lookup a single domain :param domain_name: The name of the domain to look up :type domain_name: str :return: Domain object, or None if the domain isn't found :rtype: :class:`boto.cloudsearch.domain.Domain` """ domains = self.list_domains(domain_names=[domain_name]) if len(domains) > 0: return domains[0] boto-2.20.1/boto/cloudsearch/optionstatus.py000066400000000000000000000210071225267101000211200ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import time from boto.compat import json class OptionStatus(dict): """ Presents a combination of status field (defined below) which are accessed as attributes and option values which are stored in the native Python dictionary. In this class, the option values are merged from a JSON object that is stored as the Option part of the object. 
:ivar domain_name: The name of the domain this option is associated with. :ivar create_date: A timestamp for when this option was created. :ivar state: The state of processing a change to an option. Possible values: * RequiresIndexDocuments: the option's latest value will not be visible in searches until IndexDocuments has been called and indexing is complete. * Processing: the option's latest value is not yet visible in all searches but is in the process of being activated. * Active: the option's latest value is completely visible. :ivar update_date: A timestamp for when this option was updated. :ivar update_version: A unique integer that indicates when this option was last updated. """ def __init__(self, domain, data=None, refresh_fn=None, save_fn=None): self.domain = domain self.refresh_fn = refresh_fn self.save_fn = save_fn self.refresh(data) def _update_status(self, status): self.creation_date = status['creation_date'] self.status = status['state'] self.update_date = status['update_date'] self.update_version = int(status['update_version']) def _update_options(self, options): if options: self.update(json.loads(options)) def refresh(self, data=None): """ Refresh the local state of the object. You can either pass new state data in as the parameter ``data`` or, if that parameter is omitted, the state data will be retrieved from CloudSearch. """ if not data: if self.refresh_fn: data = self.refresh_fn(self.domain.name) if data: self._update_status(data['status']) self._update_options(data['options']) def to_json(self): """ Return the JSON representation of the options as a string. """ return json.dumps(self) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'CreationDate': self.created = value elif name == 'State': self.state = value elif name == 'UpdateDate': self.updated = value elif name == 'UpdateVersion': self.update_version = int(value) elif name == 'Options': self.update_from_json_doc(value) else: setattr(self, name, value) def save(self): """ Write the current state of the local object back to the CloudSearch service. """ if self.save_fn: data = self.save_fn(self.domain.name, self.to_json()) self.refresh(data) def wait_for_state(self, state): """ Performs polling of CloudSearch to wait for the ``state`` of this object to change to the provided state. """ while self.state != state: time.sleep(5) self.refresh() class IndexFieldStatus(OptionStatus): def _update_options(self, options): self.update(options) def save(self): pass class RankExpressionStatus(IndexFieldStatus): pass class ServicePoliciesStatus(OptionStatus): def new_statement(self, arn, ip): """ Returns a new policy statement that will allow access to the service described by ``arn`` by the ip specified in ``ip``. :type arn: string :param arn: The Amazon Resource Notation identifier for the service you wish to provide access to. This would be either the search service or the document service. :type ip: string :param ip: An IP address or CIDR block you wish to grant access to. 
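A sketch (``policies`` is assumed to come from
:func:`boto.cloudsearch.domain.Domain.get_access_policies`; the address is
illustrative):

>>> statement = policies.new_statement(domain.search_service_arn, '192.0.2.1')
>>> statement['Condition']['IpAddress']['aws:SourceIp']
['192.0.2.1']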
""" return { "Effect":"Allow", "Action":"*", # Docs say use GET, but denies unless * "Resource": arn, "Condition": { "IpAddress": { "aws:SourceIp": [ip] } } } def _allow_ip(self, arn, ip): if 'Statement' not in self: s = self.new_statement(arn, ip) self['Statement'] = [s] self.save() else: add_statement = True for statement in self['Statement']: if statement['Resource'] == arn: for condition_name in statement['Condition']: if condition_name == 'IpAddress': add_statement = False condition = statement['Condition'][condition_name] if ip not in condition['aws:SourceIp']: condition['aws:SourceIp'].append(ip) if add_statement: s = self.new_statement(arn, ip) self['Statement'].append(s) self.save() def allow_search_ip(self, ip): """ Add the provided ip address or CIDR block to the list of allowable address for the search service. :type ip: string :param ip: An IP address or CIDR block you wish to grant access to. """ arn = self.domain.search_service_arn self._allow_ip(arn, ip) def allow_doc_ip(self, ip): """ Add the provided ip address or CIDR block to the list of allowable address for the document service. :type ip: string :param ip: An IP address or CIDR block you wish to grant access to. """ arn = self.domain.doc_service_arn self._allow_ip(arn, ip) def _disallow_ip(self, arn, ip): if 'Statement' not in self: return need_update = False for statement in self['Statement']: if statement['Resource'] == arn: for condition_name in statement['Condition']: if condition_name == 'IpAddress': condition = statement['Condition'][condition_name] if ip in condition['aws:SourceIp']: condition['aws:SourceIp'].remove(ip) need_update = True if need_update: self.save() def disallow_search_ip(self, ip): """ Remove the provided ip address or CIDR block from the list of allowable address for the search service. :type ip: string :param ip: An IP address or CIDR block you wish to grant access to. """ arn = self.domain.search_service_arn self._disallow_ip(arn, ip) def disallow_doc_ip(self, ip): """ Remove the provided ip address or CIDR block from the list of allowable address for the document service. :type ip: string :param ip: An IP address or CIDR block you wish to grant access to. """ arn = self.domain.doc_service_arn self._disallow_ip(arn, ip) boto-2.20.1/boto/cloudsearch/search.py000066400000000000000000000326711225267101000176220ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# from math import ceil import time import boto from boto.compat import json import requests class SearchServiceException(Exception): pass class CommitMismatchError(Exception): pass class SearchResults(object): def __init__(self, **attrs): self.rid = attrs['info']['rid'] # self.doc_coverage_pct = attrs['info']['doc-coverage-pct'] self.cpu_time_ms = attrs['info']['cpu-time-ms'] self.time_ms = attrs['info']['time-ms'] self.hits = attrs['hits']['found'] self.docs = attrs['hits']['hit'] self.start = attrs['hits']['start'] self.rank = attrs['rank'] self.match_expression = attrs['match-expr'] self.query = attrs['query'] self.search_service = attrs['search_service'] self.facets = {} if 'facets' in attrs: for (facet, values) in attrs['facets'].iteritems(): if 'constraints' in values: self.facets[facet] = dict((k, v) for (k, v) in map(lambda x: (x['value'], x['count']), values['constraints'])) self.num_pages_needed = ceil(self.hits / self.query.real_size) def __len__(self): return len(self.docs) def __iter__(self): return iter(self.docs) def next_page(self): """Call Cloudsearch to get the next page of search results :rtype: :class:`boto.cloudsearch.search.SearchResults` :return: the following page of search results """ if self.query.page <= self.num_pages_needed: self.query.start += self.query.real_size self.query.page += 1 return self.search_service(self.query) else: raise StopIteration class Query(object): RESULTS_PER_PAGE = 500 def __init__(self, q=None, bq=None, rank=None, return_fields=None, size=10, start=0, facet=None, facet_constraints=None, facet_sort=None, facet_top_n=None, t=None): self.q = q self.bq = bq self.rank = rank or [] self.return_fields = return_fields or [] self.start = start self.facet = facet or [] self.facet_constraints = facet_constraints or {} self.facet_sort = facet_sort or {} self.facet_top_n = facet_top_n or {} self.t = t or {} self.page = 0 self.update_size(size) def update_size(self, new_size): self.size = new_size self.real_size = Query.RESULTS_PER_PAGE if (self.size > Query.RESULTS_PER_PAGE or self.size == 0) else self.size def to_params(self): """Transform search parameters from instance properties to a dictionary :rtype: dict :return: search parameters """ params = {'start': self.start, 'size': self.real_size} if self.q: params['q'] = self.q if self.bq: params['bq'] = self.bq if self.rank: params['rank'] = ','.join(self.rank) if self.return_fields: params['return-fields'] = ','.join(self.return_fields) if self.facet: params['facet'] = ','.join(self.facet) if self.facet_constraints: for k, v in self.facet_constraints.iteritems(): params['facet-%s-constraints' % k] = v if self.facet_sort: for k, v in self.facet_sort.iteritems(): params['facet-%s-sort' % k] = v if self.facet_top_n: for k, v in self.facet_top_n.iteritems(): params['facet-%s-top-n' % k] = v if self.t: for k, v in self.t.iteritems(): params['t-%s' % k] = v return params class SearchConnection(object): def __init__(self, domain=None, endpoint=None): self.domain = domain self.endpoint = endpoint if not endpoint: self.endpoint = domain.search_service_endpoint def build_query(self, q=None, bq=None, rank=None, return_fields=None, size=10, start=0, facet=None, facet_constraints=None, facet_sort=None, facet_top_n=None, t=None): return Query(q=q, bq=bq, rank=rank, return_fields=return_fields, size=size, start=start, facet=facet, facet_constraints=facet_constraints, facet_sort=facet_sort, facet_top_n=facet_top_n, t=t) def search(self, q=None, bq=None, rank=None, return_fields=None, size=10, start=0, facet=None, 
facet_constraints=None, facet_sort=None, facet_top_n=None, t=None): """ Send a query to CloudSearch Each search query should use at least the q or bq argument to specify the search parameter. The other options are used to specify the criteria of the search. :type q: string :param q: A string to search the default search fields for. :type bq: string :param bq: A string to perform a Boolean search. This can be used to create advanced searches. :type rank: List of strings :param rank: A list of fields or rank expressions used to order the search results. A field can be reversed by using the - operator. ``['-year', 'author']`` :type return_fields: List of strings :param return_fields: A list of fields which should be returned by the search. If this field is not specified, only IDs will be returned. ``['headline']`` :type size: int :param size: Number of search results to specify :type start: int :param start: Offset of the first search result to return (can be used for paging) :type facet: list :param facet: List of fields for which facets should be returned ``['colour', 'size']`` :type facet_constraints: dict :param facet_constraints: Use to limit facets to specific values specified as comma-delimited strings in a Dictionary of facets ``{'colour': "'blue','white','red'", 'size': "big"}`` :type facet_sort: dict :param facet_sort: Rules used to specify the order in which facet values should be returned. Allowed values are *alpha*, *count*, *max*, *sum*. Use *alpha* to sort alphabetical, and *count* to sort the facet by number of available result. ``{'color': 'alpha', 'size': 'count'}`` :type facet_top_n: dict :param facet_top_n: Dictionary of facets and number of facets to return. ``{'colour': 2}`` :type t: dict :param t: Specify ranges for specific fields ``{'year': '2000..2005'}`` :rtype: :class:`boto.cloudsearch.search.SearchResults` :return: Returns the results of this search The following examples all assume we have indexed a set of documents with fields: *author*, *date*, *headline* A simple search will look for documents whose default text search fields will contain the search word exactly: >>> search(q='Tim') # Return documents with the word Tim in them (but not Timothy) A simple search with more keywords will return documents whose default text search fields contain the search strings together or separately. >>> search(q='Tim apple') # Will match "tim" and "apple" More complex searches require the boolean search operator. Wildcard searches can be used to search for any words that start with the search string. >>> search(bq="'Tim*'") # Return documents with words like Tim or Timothy) Search terms can also be combined. Allowed operators are "and", "or", "not", "field", "optional", "token", "phrase", or "filter" >>> search(bq="(and 'Tim' (field author 'John Smith'))") Facets allow you to show classification information about the search results. For example, you can retrieve the authors who have written about Tim: >>> search(q='Tim', facet=['Author']) With facet_constraints, facet_top_n and facet_sort more complicated constraints can be specified such as returning the top author out of John Smith and Mark Smith who have a document with the word Tim in it. >>> search(q='Tim', ... facet=['Author'], ... facet_constraints={'author': "'John Smith','Mark Smith'"}, ... facet=['author'], ... facet_top_n={'author': 1}, ... 
facet_sort={'author': 'count'}) """ query = self.build_query(q=q, bq=bq, rank=rank, return_fields=return_fields, size=size, start=start, facet=facet, facet_constraints=facet_constraints, facet_sort=facet_sort, facet_top_n=facet_top_n, t=t) return self(query) def __call__(self, query): """Make a call to CloudSearch :type query: :class:`boto.cloudsearch.search.Query` :param query: A group of search criteria :rtype: :class:`boto.cloudsearch.search.SearchResults` :return: search results """ url = "http://%s/2011-02-01/search" % (self.endpoint) params = query.to_params() r = requests.get(url, params=params) try: data = json.loads(r.content) except ValueError, e: if r.status_code == 403: msg = '' import re g = re.search('
<html><body><h1>403 Forbidden</h1>
([^<]+)<', r.content) try: msg = ': %s' % (g.groups()[0].strip()) except AttributeError: pass raise SearchServiceException('Authentication error from Amazon%s' % msg) raise SearchServiceException("Got non-json response from Amazon. %s" % r.content, query) if 'messages' in data and 'error' in data: for m in data['messages']: if m['severity'] == 'fatal': raise SearchServiceException("Error processing search %s " "=> %s" % (params, m['message']), query) elif 'error' in data: raise SearchServiceException("Unknown error processing search %s" % json.dumps(data), query) data['query'] = query data['search_service'] = self return SearchResults(**data) def get_all_paged(self, query, per_page): """Get a generator to iterate over all pages of search results :type query: :class:`boto.cloudsearch.search.Query` :param query: A group of search criteria :type per_page: int :param per_page: Number of docs in each :class:`boto.cloudsearch.search.SearchResults` object. :rtype: generator :return: Generator containing :class:`boto.cloudsearch.search.SearchResults` """ query.update_size(per_page) page = 0 num_pages_needed = 0 while page <= num_pages_needed: results = self(query) num_pages_needed = results.num_pages_needed yield results query.start += query.real_size page += 1 def get_all_hits(self, query): """Get a generator to iterate over all search results Transparently handles the results paging from Cloudsearch search results so even if you have many thousands of results you can iterate over all results in a reasonably efficient manner. :type query: :class:`boto.cloudsearch.search.Query` :param query: A group of search criteria :rtype: generator :return: All docs matching query """ page = 0 num_pages_needed = 0 while page <= num_pages_needed: results = self(query) num_pages_needed = results.num_pages_needed for doc in results: yield doc query.start += query.real_size page += 1 def get_num_hits(self, query): """Return the total number of hits for query :type query: :class:`boto.cloudsearch.search.Query` :param query: a group of search criteria :rtype: int :return: Total number of hits for query """ query.update_size(1) return self(query).hits boto-2.20.1/boto/cloudsearch/sourceattribute.py000066400000000000000000000061251225267101000215740ustar00rootroot00000000000000# Copyright (c) 202 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
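# A minimal usage sketch for the SearchConnection API defined above in
# boto/cloudsearch/search.py. The endpoint is a placeholder (substitute the
# search service endpoint of your own CloudSearch domain), and the module
# requires the third-party requests package.
from boto.cloudsearch.search import SearchConnection

if __name__ == '__main__':
    conn = SearchConnection(
        endpoint='search-mydomain-xxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com')

    # One page of results; SearchResults is iterable over the matching docs.
    results = conn.search(q='Tim', return_fields=['headline'], size=10)
    print results.hits
    for doc in results:
        print doc['id']

    # get_all_hits() transparently pages through every match.
    query = conn.build_query(q='Tim', return_fields=['headline'])
    for doc in conn.get_all_hits(query):
        print doc['id']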
class SourceAttribute(object): """ Provide information about attributes for an index field. A maximum of 20 source attributes can be configured for each index field. :ivar default: Optional default value if the source attribute is not specified in a document. :ivar name: The name of the document source field to add to this ``IndexField``. :ivar data_function: Identifies the transformation to apply when copying data from a source attribute. :ivar data_map: The value is a dict with the following keys: * cases - A dict that translates source field values to custom values. * default - An optional default value to use if the source attribute is not specified in a document. * name - the name of the document source field to add to this ``IndexField`` :ivar data_trim_title: Trims common title words from a source document attribute when populating an ``IndexField``. This can be used to create an ``IndexField`` you can use for sorting. The value is a dict with the following fields: * default - An optional default value. * language - an IETF RFC 4646 language code. * separator - The separator that follows the text to trim. * name - The name of the document source field to add. """ ValidDataFunctions = ('Copy', 'TrimTitle', 'Map') def __init__(self): self.data_copy = {} self._data_function = self.ValidDataFunctions[0] self.data_map = {} self.data_trim_title = {} @property def data_function(self): return self._data_function @data_function.setter def data_function(self, value): if value not in self.ValidDataFunctions: valid = '|'.join(self.ValidDataFunctions) raise ValueError('data_function must be one of: %s' % valid) self._data_function = value boto-2.20.1/boto/cloudtrail/000077500000000000000000000000001225267101000156405ustar00rootroot00000000000000boto-2.20.1/boto/cloudtrail/__init__.py000066400000000000000000000036201225267101000177520ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the AWS Cloudtrail service. 
:rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.cloudtrail.layer1 import CloudTrailConnection return [RegionInfo(name='us-east-1', endpoint='cloudtrail.us-east-1.amazonaws.com', connection_cls=CloudTrailConnection), RegionInfo(name='us-west-2', endpoint='cloudtrail.us-west-2.amazonaws.com', connection_cls=CloudTrailConnection), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/cloudtrail/exceptions.py000066400000000000000000000033421225267101000203750ustar00rootroot00000000000000""" Exceptions that are specific to the cloudtrail module. """ from boto.exception import BotoServerError class InvalidSnsTopicNameException(BotoServerError): """ Raised when an invalid SNS topic name is passed to Cloudtrail. """ pass class InvalidS3BucketNameException(BotoServerError): """ Raised when an invalid S3 bucket name is passed to Cloudtrail. """ pass class TrailAlreadyExistsException(BotoServerError): """ Raised when the given trail name already exists. """ pass class InsufficientSnsTopicPolicyException(BotoServerError): """ Raised when the SNS topic does not allow Cloudtrail to post messages. """ pass class InvalidTrailNameException(BotoServerError): """ Raised when the trail name is invalid. """ pass class InternalErrorException(BotoServerError): """ Raised when there was an internal Cloudtrail error. """ pass class TrailNotFoundException(BotoServerError): """ Raised when the given trail name is not found. """ pass class S3BucketDoesNotExistException(BotoServerError): """ Raised when the given S3 bucket does not exist. """ pass class TrailNotProvidedException(BotoServerError): """ Raised when no trail name was provided. """ pass class InvalidS3PrefixException(BotoServerError): """ Raised when an invalid key prefix is given. """ pass class MaximumNumberOfTrailsExceededException(BotoServerError): """ Raised when no more trails can be created. """ pass class InsufficientS3BucketPolicyException(BotoServerError): """ Raised when the S3 bucket does not allow Cloudtrail to write files into the prefix. """ pass boto-2.20.1/boto/cloudtrail/layer1.py000066400000000000000000000275571225267101000174270ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
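# A minimal sketch of driving the CloudTrail layer1 client defined below via
# the connect_to_region() helper above. The trail name, the bucket name, and
# the keys of the trail dict (which mirrors the service's JSON Trail
# structure) are illustrative placeholders; the bucket must already exist
# with a policy that lets CloudTrail write to it, and AWS credentials are
# assumed to be available from the environment or the boto config file.
import boto.cloudtrail

if __name__ == '__main__':
    conn = boto.cloudtrail.connect_to_region('us-east-1')
    conn.create_trail(trail={'Name': 'my-trail',
                             'S3BucketName': 'my-cloudtrail-logs'})
    conn.start_logging(name='my-trail')
    print conn.get_trail_status(name='my-trail')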
# try: import json except ImportError: import simplejson as json import boto from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.exception import JSONResponseError from boto.cloudtrail import exceptions class CloudTrailConnection(AWSQueryConnection): """ AWS Cloud Trail This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the event, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to using the API, you can use one of the AWS SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to AWSCloudTrail. For example, the SDKs take care of cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. See the CloudTrail User Guide for information about the data that is included with each event listed in the log files. """ APIVersion = "2013-11-01" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "cloudtrail.us-east-1.amazonaws.com" ServiceName = "CloudTrail" TargetPrefix = "com.amazonaws.cloudtrail.v20131101.CloudTrail_20131101" ResponseError = JSONResponseError _faults = { "InvalidSnsTopicNameException": exceptions.InvalidSnsTopicNameException, "InvalidS3BucketNameException": exceptions.InvalidS3BucketNameException, "TrailAlreadyExistsException": exceptions.TrailAlreadyExistsException, "InsufficientSnsTopicPolicyException": exceptions.InsufficientSnsTopicPolicyException, "InvalidTrailNameException": exceptions.InvalidTrailNameException, "InternalErrorException": exceptions.InternalErrorException, "TrailNotFoundException": exceptions.TrailNotFoundException, "S3BucketDoesNotExistException": exceptions.S3BucketDoesNotExistException, "TrailNotProvidedException": exceptions.TrailNotProvidedException, "InvalidS3PrefixException": exceptions.InvalidS3PrefixException, "MaximumNumberOfTrailsExceededException": exceptions.MaximumNumberOfTrailsExceededException, "InsufficientS3BucketPolicyException": exceptions.InsufficientS3BucketPolicyException, } def __init__(self, **kwargs): region = kwargs.pop('region', None) if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) if 'host' not in kwargs: kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def create_trail(self, trail=None): """ From the command line, use create-subscription. Creates a trail that specifies the settings for delivery of log data to an Amazon S3 bucket. The request includes a Trail structure that specifies the following: + Trail name. + The name of the Amazon S3 bucket to which CloudTrail delivers your log files. + The name of the Amazon S3 key prefix that precedes each log file. + The name of the Amazon SNS topic that notifies you that a new file is available in your bucket. + Whether the log file should include events from global services. 
Currently, the only events included in CloudTrail log files are from IAM and AWS STS. Returns the appropriate HTTP status code if successful. If not, it returns either one of the CommonErrors or a FrontEndException with one of the following error codes: **MaximumNumberOfTrailsExceeded** An attempt was made to create more trails than allowed. You can only create one trail for each account in each region. **TrailAlreadyExists** An attempt was made to create a trail with a name that already exists. **S3BucketDoesNotExist** Specified Amazon S3 bucket does not exist. **InsufficientS3BucketPolicy** Policy on Amazon S3 bucket does not permit CloudTrail to write to your bucket. See the AWS CloudTrail User Guide for the required bucket policy. **InsufficientSnsTopicPolicy** The policy on Amazon SNS topic does not permit CloudTrail to write to it. Can also occur when an Amazon SNS topic does not exist. :type trail: dict :param trail: Contains the Trail structure that specifies the settings for each trail. """ params = {} if trail is not None: params['trail'] = trail return self.make_request(action='CreateTrail', body=json.dumps(params)) def delete_trail(self, name=None): """ Deletes a trail. :type name: string :param name: The name of a trail to be deleted. """ params = {} if name is not None: params['Name'] = name return self.make_request(action='DeleteTrail', body=json.dumps(params)) def describe_trails(self, trail_name_list=None): """ Retrieves the settings for some or all trails associated with an account. Returns a list of Trail structures in JSON format. :type trail_name_list: list :param trail_name_list: The list of Trail object names. """ params = {} if trail_name_list is not None: params['trailNameList'] = trail_name_list return self.make_request(action='DescribeTrails', body=json.dumps(params)) def get_trail_status(self, name=None): """ Returns GetTrailStatusResult, which contains a JSON-formatted list of information about the trail specified in the request. JSON fields include information such as delivery errors, Amazon SNS and Amazon S3 errors, and times that logging started and stopped for each trail. :type name: string :param name: The name of the trail for which you are requesting the current status. """ params = {} if name is not None: params['Name'] = name return self.make_request(action='GetTrailStatus', body=json.dumps(params)) def start_logging(self, name=None): """ Starts the processing of recording user activity events and log file delivery for a trail. :type name: string :param name: The name of the Trail for which CloudTrail logs events. """ params = {} if name is not None: params['Name'] = name return self.make_request(action='StartLogging', body=json.dumps(params)) def stop_logging(self, name=None): """ Suspends the recording of user activity events and log file delivery for the specified trail. Under most circumstances, there is no need to use this action. You can update a trail without stopping it first. This action is the only way to stop logging activity. :type name: string :param name: Communicates to CloudTrail the name of the Trail for which to stop logging events. """ params = {} if name is not None: params['Name'] = name return self.make_request(action='StopLogging', body=json.dumps(params)) def update_trail(self, trail=None): """ From the command line, use update-subscription. Updates the settings that specify delivery of log files. Changes to a trail do not require stopping the CloudTrail service. 
You can use this action to designate an existing bucket for log delivery, or to create a new bucket and prefix. If the existing bucket has previously been a target for CloudTrail log files, an IAM policy exists for the bucket. If you create a new bucket using UpdateTrail, you need to apply the policy to the bucket using one of the means provided by the Amazon S3 service. The request includes a Trail structure that specifies the following: + Trail name. + The name of the Amazon S3 bucket to which CloudTrail delivers your log files. + The name of the Amazon S3 key prefix that precedes each log file. + The name of the Amazon SNS topic that notifies you that a new file is available in your bucket. + Whether the log file should include events from global services, such as IAM or AWS STS. **CreateTrail** returns the appropriate HTTP status code if successful. If not, it returns either one of the common errors or one of the exceptions listed at the end of this page. :type trail: dict :param trail: Represents the Trail structure that contains the CloudTrail setting for an account. """ params = {} if trail is not None: params['trail'] = trail return self.make_request(action='UpdateTrail', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/compat.py000066400000000000000000000024671225267101000153440ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # # This allows boto modules to say "from boto.compat import json". This is # preferred so that all modules don't have to repeat this idiom. 
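# For example, a consuming module can simply write:
#
#     from boto.compat import json
#
#     body = json.dumps({'Name': 'my-trail'})
#     data = json.loads(body)
#
# and transparently get simplejson when it is installed, falling back to the
# standard library's json module otherwise.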
try: import simplejson as json except ImportError: import json boto-2.20.1/boto/connection.py000066400000000000000000001356571225267101000162300ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # Copyright (c) 2010 Google # Copyright (c) 2008 rPath, Inc. # Copyright (c) 2009 The Echo Nest Corporation # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2011, Nexenta Systems Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # # Parts of this code were copied or derived from sample code supplied by AWS. # The following notice applies to that code. # # This software code is made available "AS IS" without warranties of any # kind. You may copy, display, modify and redistribute the software # code either by itself or as incorporated into your code; provided that # you do not remove any proprietary notices. Your use of this software # code is at your own risk and you waive any claim against Amazon # Digital Services, Inc. or its affiliates with respect to your use of # this software code. (c) 2006 Amazon Digital Services, Inc. or its # affiliates. """ Handles basic connections to AWS """ from __future__ import with_statement import base64 import errno import httplib import os import Queue import random import re import socket import sys import time import urllib import urlparse import xml.sax import copy import auth import auth_handler import boto import boto.utils import boto.handler import boto.cacerts from boto import config, UserAgent from boto.exception import AWSConnectionError from boto.exception import BotoClientError from boto.exception import BotoServerError from boto.exception import PleaseRetryException from boto.provider import Provider from boto.resultset import ResultSet HAVE_HTTPS_CONNECTION = False try: import ssl from boto import https_connection # Google App Engine runs on Python 2.5 so doesn't have ssl.SSLError. if hasattr(ssl, 'SSLError'): HAVE_HTTPS_CONNECTION = True except ImportError: pass try: import threading except ImportError: import dummy_threading as threading ON_APP_ENGINE = all(key in os.environ for key in ( 'USER_IS_ADMIN', 'CURRENT_VERSION_ID', 'APPLICATION_ID')) PORTS_BY_SECURITY = {True: 443, False: 80} DEFAULT_CA_CERTS_FILE = os.path.join(os.path.dirname(os.path.abspath(boto.cacerts.__file__ )), "cacerts.txt") class HostConnectionPool(object): """ A pool of connections for one remote (host,port,is_secure). 
When connections are added to the pool, they are put into a pending queue. The _mexe method returns connections to the pool before the response body has been read, so they connections aren't ready to send another request yet. They stay in the pending queue until they are ready for another request, at which point they are returned to the pool of ready connections. The pool of ready connections is an ordered list of (connection,time) pairs, where the time is the time the connection was returned from _mexe. After a certain period of time, connections are considered stale, and discarded rather than being reused. This saves having to wait for the connection to time out if AWS has decided to close it on the other end because of inactivity. Thread Safety: This class is used only from ConnectionPool while it's mutex is held. """ def __init__(self): self.queue = [] def size(self): """ Returns the number of connections in the pool for this host. Some of the connections may still be in use, and may not be ready to be returned by get(). """ return len(self.queue) def put(self, conn): """ Adds a connection to the pool, along with the time it was added. """ self.queue.append((conn, time.time())) def get(self): """ Returns the next connection in this pool that is ready to be reused. Returns None if there aren't any. """ # Discard ready connections that are too old. self.clean() # Return the first connection that is ready, and remove it # from the queue. Connections that aren't ready are returned # to the end of the queue with an updated time, on the # assumption that somebody is actively reading the response. for _ in range(len(self.queue)): (conn, _) = self.queue.pop(0) if self._conn_ready(conn): return conn else: self.put(conn) return None def _conn_ready(self, conn): """ There is a nice state diagram at the top of httplib.py. It indicates that once the response headers have been read (which _mexe does before adding the connection to the pool), a response is attached to the connection, and it stays there until it's done reading. This isn't entirely true: even after the client is done reading, the response may be closed, but not removed from the connection yet. This is ugly, reading a private instance variable, but the state we care about isn't available in any public methods. """ if ON_APP_ENGINE: # Google AppEngine implementation of HTTPConnection doesn't contain # _HTTPConnection__response attribute. Moreover, it's not possible # to determine if given connection is ready. Reusing connections # simply doesn't make sense with App Engine urlfetch service. return False else: response = getattr(conn, '_HTTPConnection__response', None) return (response is None) or response.isclosed() def clean(self): """ Get rid of stale connections. """ # Note that we do not close the connection here -- somebody # may still be reading from it. while len(self.queue) > 0 and self._pair_stale(self.queue[0]): self.queue.pop(0) def _pair_stale(self, pair): """ Returns true of the (connection,time) pair is too old to be used. """ (_conn, return_time) = pair now = time.time() return return_time + ConnectionPool.STALE_DURATION < now class ConnectionPool(object): """ A connection pool that expires connections after a fixed period of time. This saves time spent waiting for a connection that AWS has timed out on the other end. This class is thread-safe. """ # # The amout of time between calls to clean. # CLEAN_INTERVAL = 5.0 # # How long before a connection becomes "stale" and won't be reused # again. 
The intention is that this time is less that the timeout # period that AWS uses, so we'll never try to reuse a connection # and find that AWS is timing it out. # # Experimentation in July 2011 shows that AWS starts timing things # out after three minutes. The 60 seconds here is conservative so # we should never hit that 3-minute timout. # STALE_DURATION = 60.0 def __init__(self): # Mapping from (host,port,is_secure) to HostConnectionPool. # If a pool becomes empty, it is removed. self.host_to_pool = {} # The last time the pool was cleaned. self.last_clean_time = 0.0 self.mutex = threading.Lock() ConnectionPool.STALE_DURATION = \ config.getfloat('Boto', 'connection_stale_duration', ConnectionPool.STALE_DURATION) def __getstate__(self): pickled_dict = copy.copy(self.__dict__) pickled_dict['host_to_pool'] = {} del pickled_dict['mutex'] return pickled_dict def __setstate__(self, dct): self.__init__() def size(self): """ Returns the number of connections in the pool. """ return sum(pool.size() for pool in self.host_to_pool.values()) def get_http_connection(self, host, port, is_secure): """ Gets a connection from the pool for the named host. Returns None if there is no connection that can be reused. It's the caller's responsibility to call close() on the connection when it's no longer needed. """ self.clean() with self.mutex: key = (host, port, is_secure) if key not in self.host_to_pool: return None return self.host_to_pool[key].get() def put_http_connection(self, host, port, is_secure, conn): """ Adds a connection to the pool of connections that can be reused for the named host. """ with self.mutex: key = (host, port, is_secure) if key not in self.host_to_pool: self.host_to_pool[key] = HostConnectionPool() self.host_to_pool[key].put(conn) def clean(self): """ Clean up the stale connections in all of the pools, and then get rid of empty pools. Pools clean themselves every time a connection is fetched; this cleaning takes care of pools that aren't being used any more, so nothing is being gotten from them. """ with self.mutex: now = time.time() if self.last_clean_time + self.CLEAN_INTERVAL < now: to_remove = [] for (host, pool) in self.host_to_pool.items(): pool.clean() if pool.size() == 0: to_remove.append(host) for host in to_remove: del self.host_to_pool[host] self.last_clean_time = now class HTTPRequest(object): def __init__(self, method, protocol, host, port, path, auth_path, params, headers, body): """Represents an HTTP request. :type method: string :param method: The HTTP method name, 'GET', 'POST', 'PUT' etc. :type protocol: string :param protocol: The http protocol used, 'http' or 'https'. :type host: string :param host: Host to which the request is addressed. eg. abc.com :type port: int :param port: port on which the request is being sent. Zero means unset, in which case default port will be chosen. :type path: string :param path: URL path that is being accessed. :type auth_path: string :param path: The part of the URL path used when creating the authentication string. :type params: dict :param params: HTTP url query parameters, with key as name of the param, and value as value of param. :type headers: dict :param headers: HTTP headers, with key as name of the header and value as value of header. :type body: string :param body: Body of the HTTP request. If not present, will be None or empty string (''). 
""" self.method = method self.protocol = protocol self.host = host self.port = port self.path = path if auth_path is None: auth_path = path self.auth_path = auth_path self.params = params # chunked Transfer-Encoding should act only on PUT request. if headers and 'Transfer-Encoding' in headers and \ headers['Transfer-Encoding'] == 'chunked' and \ self.method != 'PUT': self.headers = headers.copy() del self.headers['Transfer-Encoding'] else: self.headers = headers self.body = body def __str__(self): return (('method:(%s) protocol:(%s) host(%s) port(%s) path(%s) ' 'params(%s) headers(%s) body(%s)') % (self.method, self.protocol, self.host, self.port, self.path, self.params, self.headers, self.body)) def authorize(self, connection, **kwargs): for key in self.headers: val = self.headers[key] if isinstance(val, unicode): safe = '!"#$%&\'()*+,/:;<=>?@[\\]^`{|}~' self.headers[key] = urllib.quote_plus(val.encode('utf-8'), safe) connection._auth_handler.add_auth(self, **kwargs) self.headers['User-Agent'] = UserAgent # I'm not sure if this is still needed, now that add_auth is # setting the content-length for POST requests. if 'Content-Length' not in self.headers: if 'Transfer-Encoding' not in self.headers or \ self.headers['Transfer-Encoding'] != 'chunked': self.headers['Content-Length'] = str(len(self.body)) class HTTPResponse(httplib.HTTPResponse): def __init__(self, *args, **kwargs): httplib.HTTPResponse.__init__(self, *args, **kwargs) self._cached_response = '' def read(self, amt=None): """Read the response. This method does not have the same behavior as httplib.HTTPResponse.read. Instead, if this method is called with no ``amt`` arg, then the response body will be cached. Subsequent calls to ``read()`` with no args **will return the cached response**. """ if amt is None: # The reason for doing this is that many places in boto call # response.read() and except to get the response body that they # can then process. To make sure this always works as they expect # we're caching the response so that multiple calls to read() # will return the full body. Note that this behavior only # happens if the amt arg is not specified. if not self._cached_response: self._cached_response = httplib.HTTPResponse.read(self) return self._cached_response else: return httplib.HTTPResponse.read(self, amt) class AWSAuthConnection(object): def __init__(self, host, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, path='/', provider='aws', security_token=None, suppress_consec_slashes=True, validate_certs=True): """ :type host: str :param host: The host to make the connection to :keyword str aws_access_key_id: Your AWS Access Key ID (provided by Amazon). If none is specified, the value in your ``AWS_ACCESS_KEY_ID`` environmental variable is used. :keyword str aws_secret_access_key: Your AWS Secret Access Key (provided by Amazon). If none is specified, the value in your ``AWS_SECRET_ACCESS_KEY`` environmental variable is used. :type is_secure: boolean :param is_secure: Whether the connection is over SSL :type https_connection_factory: list or tuple :param https_connection_factory: A pair of an HTTP connection factory and the exceptions to catch. The factory should have a similar interface to L{httplib.HTTPSConnection}. 
:param str proxy: Address/hostname for a proxy server :type proxy_port: int :param proxy_port: The port to use when connecting over a proxy :type proxy_user: str :param proxy_user: The username to connect with on the proxy :type proxy_pass: str :param proxy_pass: The password to use when connection over a proxy. :type port: int :param port: The port to use to connect :type suppress_consec_slashes: bool :param suppress_consec_slashes: If provided, controls whether consecutive slashes will be suppressed in key paths. :type validate_certs: bool :param validate_certs: Controls whether SSL certificates will be validated or not. Defaults to True. """ self.suppress_consec_slashes = suppress_consec_slashes self.num_retries = 6 # Override passed-in is_secure setting if value was defined in config. if config.has_option('Boto', 'is_secure'): is_secure = config.getboolean('Boto', 'is_secure') self.is_secure = is_secure # Whether or not to validate server certificates. # The default is now to validate certificates. This can be # overridden in the boto config file are by passing an # explicit validate_certs parameter to the class constructor. self.https_validate_certificates = config.getbool( 'Boto', 'https_validate_certificates', validate_certs) if self.https_validate_certificates and not HAVE_HTTPS_CONNECTION: raise BotoClientError( "SSL server certificate validation is enabled in boto " "configuration, but Python dependencies required to " "support this feature are not available. Certificate " "validation is only supported when running under Python " "2.6 or later.") self.ca_certificates_file = config.get_value( 'Boto', 'ca_certificates_file', DEFAULT_CA_CERTS_FILE) if port: self.port = port else: self.port = PORTS_BY_SECURITY[is_secure] self.handle_proxy(proxy, proxy_port, proxy_user, proxy_pass) # define exceptions from httplib that we want to catch and retry self.http_exceptions = (httplib.HTTPException, socket.error, socket.gaierror, httplib.BadStatusLine) # define subclasses of the above that are not retryable. self.http_unretryable_exceptions = [] if HAVE_HTTPS_CONNECTION: self.http_unretryable_exceptions.append( https_connection.InvalidCertificateException) # define values in socket exceptions we don't want to catch self.socket_exception_values = (errno.EINTR,) if https_connection_factory is not None: self.https_connection_factory = https_connection_factory[0] self.http_exceptions += https_connection_factory[1] else: self.https_connection_factory = None if (is_secure): self.protocol = 'https' else: self.protocol = 'http' self.host = host self.path = path # if the value passed in for debug if not isinstance(debug, (int, long)): debug = 0 self.debug = config.getint('Boto', 'debug', debug) self.host_header = None # Timeout used to tell httplib how long to wait for socket timeouts. # Default is to leave timeout unchanged, which will in turn result in # the socket's default global timeout being used. To specify a # timeout, set http_socket_timeout in Boto config. Regardless, # timeouts will only be applied if Python is 2.6 or greater. 
self.http_connection_kwargs = {} if (sys.version_info[0], sys.version_info[1]) >= (2, 6): # If timeout isn't defined in boto config file, use 70 second # default as recommended by # http://docs.aws.amazon.com/amazonswf/latest/apireference/API_PollForActivityTask.html self.http_connection_kwargs['timeout'] = config.getint( 'Boto', 'http_socket_timeout', 70) if isinstance(provider, Provider): # Allow overriding Provider self.provider = provider else: self._provider_type = provider self.provider = Provider(self._provider_type, aws_access_key_id, aws_secret_access_key, security_token) # Allow config file to override default host, port, and host header. if self.provider.host: self.host = self.provider.host if self.provider.port: self.port = self.provider.port if self.provider.host_header: self.host_header = self.provider.host_header self._pool = ConnectionPool() self._connection = (self.host, self.port, self.is_secure) self._last_rs = None self._auth_handler = auth.get_auth_handler( host, config, self.provider, self._required_auth_capability()) if getattr(self, 'AuthServiceName', None) is not None: self.auth_service_name = self.AuthServiceName def __repr__(self): return '%s:%s' % (self.__class__.__name__, self.host) def _required_auth_capability(self): return [] def _get_auth_service_name(self): return getattr(self._auth_handler, 'service_name') # For Sigv4, the auth_service_name/auth_region_name properties allow # the service_name/region_name to be explicitly set instead of being # derived from the endpoint url. def _set_auth_service_name(self, value): self._auth_handler.service_name = value auth_service_name = property(_get_auth_service_name, _set_auth_service_name) def _get_auth_region_name(self): return getattr(self._auth_handler, 'region_name') def _set_auth_region_name(self, value): self._auth_handler.region_name = value auth_region_name = property(_get_auth_region_name, _set_auth_region_name) def connection(self): return self.get_http_connection(*self._connection) connection = property(connection) def aws_access_key_id(self): return self.provider.access_key aws_access_key_id = property(aws_access_key_id) gs_access_key_id = aws_access_key_id access_key = aws_access_key_id def aws_secret_access_key(self): return self.provider.secret_key aws_secret_access_key = property(aws_secret_access_key) gs_secret_access_key = aws_secret_access_key secret_key = aws_secret_access_key def get_path(self, path='/'): # The default behavior is to suppress consecutive slashes for reasons # discussed at # https://groups.google.com/forum/#!topic/boto-dev/-ft0XPUy0y8 # You can override that behavior with the suppress_consec_slashes param. if not self.suppress_consec_slashes: return self.path + re.sub('^(/*)/', "\\1", path) pos = path.find('?') if pos >= 0: params = path[pos:] path = path[:pos] else: params = None if path[-1] == '/': need_trailing = True else: need_trailing = False path_elements = self.path.split('/') path_elements.extend(path.split('/')) path_elements = [p for p in path_elements if p] path = '/' + '/'.join(path_elements) if path[-1] != '/' and need_trailing: path += '/' if params: path = path + params return path def server_name(self, port=None): if not port: port = self.port if port == 80: signature_host = self.host else: # This unfortunate little hack can be attributed to # a difference in the 2.6 version of httplib. In old # versions, it would append ":443" to the hostname sent # in the Host header and so we needed to make sure we # did the same when calculating the V2 signature. 
In 2.6 # (and higher!) # it no longer does that. Hence, this kludge. if ((ON_APP_ENGINE and sys.version[:3] == '2.5') or sys.version[:3] in ('2.6', '2.7')) and port == 443: signature_host = self.host else: signature_host = '%s:%d' % (self.host, port) return signature_host def handle_proxy(self, proxy, proxy_port, proxy_user, proxy_pass): self.proxy = proxy self.proxy_port = proxy_port self.proxy_user = proxy_user self.proxy_pass = proxy_pass if 'http_proxy' in os.environ and not self.proxy: pattern = re.compile( '(?:http://)?' \ '(?:(?P[\w\-\.]+):(?P.*)@)?' \ '(?P[\w\-\.]+)' \ '(?::(?P\d+))?' ) match = pattern.match(os.environ['http_proxy']) if match: self.proxy = match.group('host') self.proxy_port = match.group('port') self.proxy_user = match.group('user') self.proxy_pass = match.group('pass') else: if not self.proxy: self.proxy = config.get_value('Boto', 'proxy', None) if not self.proxy_port: self.proxy_port = config.get_value('Boto', 'proxy_port', None) if not self.proxy_user: self.proxy_user = config.get_value('Boto', 'proxy_user', None) if not self.proxy_pass: self.proxy_pass = config.get_value('Boto', 'proxy_pass', None) if not self.proxy_port and self.proxy: print "http_proxy environment variable does not specify " \ "a port, using default" self.proxy_port = self.port self.no_proxy = os.environ.get('no_proxy', '') or os.environ.get('NO_PROXY', '') self.use_proxy = (self.proxy != None) def get_http_connection(self, host, port, is_secure): conn = self._pool.get_http_connection(host, port, is_secure) if conn is not None: return conn else: return self.new_http_connection(host, port, is_secure) def skip_proxy(self, host): if not self.no_proxy: return False if self.no_proxy == "*": return True hostonly = host hostonly = host.split(':')[0] for name in self.no_proxy.split(','): if name and (hostonly.endswith(name) or host.endswith(name)): return True return False def new_http_connection(self, host, port, is_secure): if host is None: host = self.server_name() # Make sure the host is really just the host, not including # the port number host = host.split(':', 1)[0] http_connection_kwargs = self.http_connection_kwargs.copy() # Connection factories below expect a port keyword argument http_connection_kwargs['port'] = port # Override host with proxy settings if needed if self.use_proxy and not is_secure and \ not self.skip_proxy(host): host = self.proxy http_connection_kwargs['port'] = int(self.proxy_port) if is_secure: boto.log.debug( 'establishing HTTPS connection: host=%s, kwargs=%s', host, http_connection_kwargs) if self.use_proxy and not self.skip_proxy(host): connection = self.proxy_ssl(host, is_secure and 443 or 80) elif self.https_connection_factory: connection = self.https_connection_factory(host) elif self.https_validate_certificates and HAVE_HTTPS_CONNECTION: connection = https_connection.CertValidatingHTTPSConnection( host, ca_certs=self.ca_certificates_file, **http_connection_kwargs) else: connection = httplib.HTTPSConnection(host, **http_connection_kwargs) else: boto.log.debug('establishing HTTP connection: kwargs=%s' % http_connection_kwargs) if self.https_connection_factory: # even though the factory says https, this is too handy # to not be able to allow overriding for http also. 
connection = self.https_connection_factory(host, **http_connection_kwargs) else: connection = httplib.HTTPConnection(host, **http_connection_kwargs) if self.debug > 1: connection.set_debuglevel(self.debug) # self.connection must be maintained for backwards-compatibility # however, it must be dynamically pulled from the connection pool # set a private variable which will enable that if host.split(':')[0] == self.host and is_secure == self.is_secure: self._connection = (host, port, is_secure) # Set the response class of the http connection to use our custom # class. connection.response_class = HTTPResponse return connection def put_http_connection(self, host, port, is_secure, connection): self._pool.put_http_connection(host, port, is_secure, connection) def proxy_ssl(self, host=None, port=None): if host and port: host = '%s:%d' % (host, port) else: host = '%s:%d' % (self.host, self.port) sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: sock.connect((self.proxy, int(self.proxy_port))) if "timeout" in self.http_connection_kwargs: sock.settimeout(self.http_connection_kwargs["timeout"]) except: raise boto.log.debug("Proxy connection: CONNECT %s HTTP/1.0\r\n", host) sock.sendall("CONNECT %s HTTP/1.0\r\n" % host) sock.sendall("User-Agent: %s\r\n" % UserAgent) if self.proxy_user and self.proxy_pass: for k, v in self.get_proxy_auth_header().items(): sock.sendall("%s: %s\r\n" % (k, v)) # See discussion about this config option at # https://groups.google.com/forum/?fromgroups#!topic/boto-dev/teenFvOq2Cc if config.getbool('Boto', 'send_crlf_after_proxy_auth_headers', False): sock.sendall("\r\n") else: sock.sendall("\r\n") resp = httplib.HTTPResponse(sock, strict=True, debuglevel=self.debug) resp.begin() if resp.status != 200: # Fake a socket error, use a code that make it obvious it hasn't # been generated by the socket library raise socket.error(-71, "Error talking to HTTP proxy %s:%s: %s (%s)" % (self.proxy, self.proxy_port, resp.status, resp.reason)) # We can safely close the response, it duped the original socket resp.close() h = httplib.HTTPConnection(host) if self.https_validate_certificates and HAVE_HTTPS_CONNECTION: boto.log.debug("wrapping ssl socket for proxied connection; " "CA certificate file=%s", self.ca_certificates_file) key_file = self.http_connection_kwargs.get('key_file', None) cert_file = self.http_connection_kwargs.get('cert_file', None) sslSock = ssl.wrap_socket(sock, keyfile=key_file, certfile=cert_file, cert_reqs=ssl.CERT_REQUIRED, ca_certs=self.ca_certificates_file) cert = sslSock.getpeercert() hostname = self.host.split(':', 0)[0] if not https_connection.ValidateCertificateHostname(cert, hostname): raise https_connection.InvalidCertificateException( hostname, cert, 'hostname mismatch') else: # Fallback for old Python without ssl.wrap_socket if hasattr(httplib, 'ssl'): sslSock = httplib.ssl.SSLSocket(sock) else: sslSock = socket.ssl(sock, None, None) sslSock = httplib.FakeSocket(sock, sslSock) # This is a bit unclean h.sock = sslSock return h def prefix_proxy_to_path(self, path, host=None): path = self.protocol + '://' + (host or self.server_name()) + path return path def get_proxy_auth_header(self): auth = base64.encodestring(self.proxy_user + ':' + self.proxy_pass) return {'Proxy-Authorization': 'Basic %s' % auth} def set_host_header(self, request): try: request.headers['Host'] = \ self._auth_handler.host_header(self.host, request) except AttributeError: request.headers['Host'] = self.host.split(':', 1)[0] def _mexe(self, request, sender=None, 
override_num_retries=None, retry_handler=None): """ mexe - Multi-execute inside a loop, retrying multiple times to handle transient Internet errors by simply trying again. Also handles redirects. This code was inspired by the S3Utils classes posted to the boto-users Google group by Larry Bates. Thanks! """ boto.log.debug('Method: %s' % request.method) boto.log.debug('Path: %s' % request.path) boto.log.debug('Data: %s' % request.body) boto.log.debug('Headers: %s' % request.headers) boto.log.debug('Host: %s' % request.host) boto.log.debug('Port: %s' % request.port) boto.log.debug('Params: %s' % request.params) response = None body = None e = None if override_num_retries is None: num_retries = config.getint('Boto', 'num_retries', self.num_retries) else: num_retries = override_num_retries i = 0 connection = self.get_http_connection(request.host, request.port, self.is_secure) while i <= num_retries: # Use binary exponential backoff to desynchronize client requests. next_sleep = random.random() * (2 ** i) try: # we now re-sign each request before it is retried boto.log.debug('Token: %s' % self.provider.security_token) request.authorize(connection=self) # Only force header for non-s3 connections, because s3 uses # an older signing method + bucket resource URLs that include # the port info. All others should be now be up to date and # not include the port. if 's3' not in self._required_auth_capability(): self.set_host_header(request) if callable(sender): response = sender(connection, request.method, request.path, request.body, request.headers) else: connection.request(request.method, request.path, request.body, request.headers) response = connection.getresponse() location = response.getheader('location') # -- gross hack -- # httplib gets confused with chunked responses to HEAD requests # so I have to fake it out if request.method == 'HEAD' and getattr(response, 'chunked', False): response.chunked = 0 if callable(retry_handler): status = retry_handler(response, i, next_sleep) if status: msg, i, next_sleep = status if msg: boto.log.debug(msg) time.sleep(next_sleep) continue if response.status in [500, 502, 503, 504]: msg = 'Received %d response. ' % response.status msg += 'Retrying in %3.1f seconds' % next_sleep boto.log.debug(msg) body = response.read() elif response.status < 300 or response.status >= 400 or \ not location: # don't return connection to the pool if response contains # Connection:close header, because the connection has been # closed and default reconnect behavior may do something # different than new_http_connection. Also, it's probably # less efficient to try to reuse a closed connection. conn_header_value = response.getheader('connection') if conn_header_value == 'close': connection.close() else: self.put_http_connection(request.host, request.port, self.is_secure, connection) return response else: scheme, request.host, request.path, \ params, query, fragment = urlparse.urlparse(location) if query: request.path += '?' 
+ query # urlparse can return both host and port in netloc, so if # that's the case we need to split them up properly if ':' in request.host: request.host, request.port = request.host.split(':', 1) msg = 'Redirecting: %s' % scheme + '://' msg += request.host + request.path boto.log.debug(msg) connection = self.get_http_connection(request.host, request.port, scheme == 'https') response = None continue except PleaseRetryException, e: boto.log.debug('encountered a retry exception: %s' % e) connection = self.new_http_connection(request.host, request.port, self.is_secure) response = e.response except self.http_exceptions, e: for unretryable in self.http_unretryable_exceptions: if isinstance(e, unretryable): boto.log.debug( 'encountered unretryable %s exception, re-raising' % e.__class__.__name__) raise boto.log.debug('encountered %s exception, reconnecting' % \ e.__class__.__name__) connection = self.new_http_connection(request.host, request.port, self.is_secure) time.sleep(next_sleep) i += 1 # If we made it here, it's because we have exhausted our retries # and stil haven't succeeded. So, if we have a response object, # use it to raise an exception. # Otherwise, raise the exception that must have already happened. if response: raise BotoServerError(response.status, response.reason, body) elif e: raise else: msg = 'Please report this exception as a Boto Issue!' raise BotoClientError(msg) def build_base_http_request(self, method, path, auth_path, params=None, headers=None, data='', host=None): path = self.get_path(path) if auth_path is not None: auth_path = self.get_path(auth_path) if params == None: params = {} else: params = params.copy() if headers == None: headers = {} else: headers = headers.copy() if (self.host_header and not boto.utils.find_matching_headers('host', headers)): headers['host'] = self.host_header host = host or self.host if self.use_proxy: if not auth_path: auth_path = path path = self.prefix_proxy_to_path(path, host) if self.proxy_user and self.proxy_pass and not self.is_secure: # If is_secure, we don't have to set the proxy authentication # header here, we did that in the CONNECT to the proxy. headers.update(self.get_proxy_auth_header()) return HTTPRequest(method, self.protocol, host, self.port, path, auth_path, params, headers, data) def make_request(self, method, path, headers=None, data='', host=None, auth_path=None, sender=None, override_num_retries=None, params=None, retry_handler=None): """Makes a request to the server, with stock multiple-retry logic.""" if params is None: params = {} http_request = self.build_base_http_request(method, path, auth_path, params, headers, data, host) return self._mexe(http_request, sender, override_num_retries, retry_handler=retry_handler) def close(self): """(Optional) Close any open HTTP connections. 
This is non-destructive, and making a new request will open a connection again.""" boto.log.debug('closing all HTTP connections') self._connection = None # compat field class AWSQueryConnection(AWSAuthConnection): APIVersion = '' ResponseError = BotoServerError def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=None, debug=0, https_connection_factory=None, path='/', security_token=None, validate_certs=True): AWSAuthConnection.__init__(self, host, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, debug, https_connection_factory, path, security_token=security_token, validate_certs=validate_certs) def _required_auth_capability(self): return [] def get_utf8_value(self, value): return boto.utils.get_utf8_value(value) def make_request(self, action, params=None, path='/', verb='GET'): http_request = self.build_base_http_request(verb, path, None, params, {}, '', self.host) if action: http_request.params['Action'] = action if self.APIVersion: http_request.params['Version'] = self.APIVersion return self._mexe(http_request) def build_list_params(self, params, items, label): if isinstance(items, basestring): items = [items] for i in range(1, len(items) + 1): params['%s.%d' % (label, i)] = items[i - 1] def build_complex_list_params(self, params, items, label, names): """Serialize a list of structures. For example:: items = [('foo', 'bar', 'baz'), ('foo2', 'bar2', 'baz2')] label = 'ParamName.member' names = ('One', 'Two', 'Three') self.build_complex_list_params(params, items, label, names) would result in the params dict being updated with these params:: ParamName.member.1.One = foo ParamName.member.1.Two = bar ParamName.member.1.Three = baz ParamName.member.2.One = foo2 ParamName.member.2.Two = bar2 ParamName.member.2.Three = baz2 :type params: dict :param params: The params dict. The complex list params will be added to this dict. :type items: list of tuples :param items: The list to serialize. :type label: string :param label: The prefix to apply to the parameter. :type names: tuple of strings :param names: The names associated with each tuple element. 
""" for i, item in enumerate(items, 1): current_prefix = '%s.%s' % (label, i) for key, value in zip(names, item): full_key = '%s.%s' % (current_prefix, key) params[full_key] = value # generics def get_list(self, action, params, markers, path='/', parent=None, verb='GET'): if not parent: parent = self response = self.make_request(action, params, path, verb) body = response.read() boto.log.debug(body) if not body: boto.log.error('Null body %s' % body) raise self.ResponseError(response.status, response.reason, body) elif response.status == 200: rs = ResultSet(markers) h = boto.handler.XmlHandler(rs, parent) xml.sax.parseString(body, h) return rs else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def get_object(self, action, params, cls, path='/', parent=None, verb='GET'): if not parent: parent = self response = self.make_request(action, params, path, verb) body = response.read() boto.log.debug(body) if not body: boto.log.error('Null body %s' % body) raise self.ResponseError(response.status, response.reason, body) elif response.status == 200: obj = cls(parent) h = boto.handler.XmlHandler(obj, parent) xml.sax.parseString(body, h) return obj else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) def get_status(self, action, params, path='/', parent=None, verb='GET'): if not parent: parent = self response = self.make_request(action, params, path, verb) body = response.read() boto.log.debug(body) if not body: boto.log.error('Null body %s' % body) raise self.ResponseError(response.status, response.reason, body) elif response.status == 200: rs = ResultSet() h = boto.handler.XmlHandler(rs, parent) xml.sax.parseString(body, h) return rs.status else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) boto-2.20.1/boto/contrib/000077500000000000000000000000001225267101000151365ustar00rootroot00000000000000boto-2.20.1/boto/contrib/__init__.py000066400000000000000000000021231225267101000172450ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# boto-2.20.1/boto/contrib/ymlmessage.py000066400000000000000000000035151225267101000176620ustar00rootroot00000000000000# Copyright (c) 2006,2007 Chris Moyer # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ This module was contributed by Chris Moyer. It provides a subclass of the SQS Message class that supports YAML as the body of the message. This module requires the yaml module. """ from boto.sqs.message import Message import yaml class YAMLMessage(Message): """ The YAMLMessage class provides a YAML compatible message. Encoding and decoding are handled automatically. Access the message data like so: m.data = [ 1, 2, 3] m.data[0] # Returns 1 This depends on the PyYAML package. """ def __init__(self, queue=None, body='', xml_attrs=None): self.data = None Message.__init__(self, queue, body) def set_body(self, body): self.data = yaml.load(body) def get_body(self): return yaml.dump(self.data) boto-2.20.1/boto/core/000077500000000000000000000000001225267101000144265ustar00rootroot00000000000000boto-2.20.1/boto/core/README000066400000000000000000000040621225267101000153100ustar00rootroot00000000000000What's This All About? ====================== This directory contains the beginnings of what is hoped will be the new core of boto. We want to move from using httplib to using requests. We also want to offer full support for Python 2.6, 2.7, and 3.x. This is a pretty big change and will require some time to roll out but this module provides a starting point. What you will find in this module: * auth.py provides a SigV2 authentication package as an args hook for requests. * credentials.py provides a way of finding AWS credentials (see below). * dictresponse.py provides a generic response handler that parses XML responses and returns them as nested Python data structures. * service.py provides a simple example of a service that actually makes an EC2 request and returns a response. Credentials =========== Credentials are being handled a bit differently here. The following describes the order of search for credentials: 1. If your local environment has ACCESS_KEY and SECRET_KEY variables defined, these will be used. 2. If your local environment has AWS_CREDENTIAL_FILE defined, it is assumed that it will be a config file with entries like this: [default] access_key = xxxxxxxxxxxxxxxx secret_key = xxxxxxxxxxxxxxxxxx [test] access_key = yyyyyyyyyyyyyy secret_key = yyyyyyyyyyyyyyyyyy Each section in the config file is called a persona and you can reference a particular persona by name when instantiating a Service class. 3.
If a standard boto config file is found that contains credentials, those will be used. 4. If temporary credentials for an IAM Role are found in the instance metadata of an EC2 instance, these credentials will be used. Trying Things Out ================= To try this code out, cd to the directory containing the core module. >>> import core.service >>> s = core.service.Service() >>> s.describe_instances() This code should return a Python data structure containing information about your currently running EC2 instances. This example should run in Python 2.6.x, 2.7.x and Python 3.x.boto-2.20.1/boto/core/__init__.py000066400000000000000000000022331225267101000165370ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/core/auth.py000066400000000000000000000061161225267101000157450ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import requests.packages.urllib3 import hmac import base64 from hashlib import sha256 import sys import datetime try: from urllib.parse import quote except ImportError: from urllib import quote class SigV2Auth(object): """ Sign a Query Signature V2 request.
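A minimal usage sketch (the credential values and the request ``args`` dict
below are illustrative, not part of this module)::

    from .credentials import Credentials

    credentials = Credentials('my-access-key', 'my-secret-key')
    auth = SigV2Auth(credentials, api_version='2012-03-01')
    args = {'method': 'POST',
            'url': 'https://ec2.us-east-1.amazonaws.com',
            'params': {}}
    auth.add_auth(args)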
""" def __init__(self, credentials, api_version=''): self.credentials = credentials self.api_version = api_version self.hmac = hmac.new(self.credentials.secret_key.encode('utf-8'), digestmod=sha256) def calc_signature(self, args): scheme, host, port = requests.packages.urllib3.get_host(args['url']) string_to_sign = '%s\n%s\n%s\n' % (args['method'], host, '/') hmac = self.hmac.copy() args['params']['SignatureMethod'] = 'HmacSHA256' if self.credentials.token: args['params']['SecurityToken'] = self.credentials.token sorted_params = sorted(args['params']) pairs = [] for key in sorted_params: value = args['params'][key] pairs.append(quote(key, safe='') + '=' + quote(value, safe='-_~')) qs = '&'.join(pairs) string_to_sign += qs print('string_to_sign') print(string_to_sign) hmac.update(string_to_sign.encode('utf-8')) b64 = base64.b64encode(hmac.digest()).strip().decode('utf-8') return (qs, b64) def add_auth(self, args): args['params']['Action'] = 'DescribeInstances' args['params']['AWSAccessKeyId'] = self.credentials.access_key args['params']['SignatureVersion'] = '2' args['params']['Timestamp'] = datetime.datetime.utcnow().isoformat() args['params']['Version'] = self.api_version qs, signature = self.calc_signature(args) args['params']['Signature'] = signature if args['method'] == 'POST': args['data'] = args['params'] args['params'] = {} boto-2.20.1/boto/core/credentials.py000066400000000000000000000130351225267101000172770ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import os from six.moves import configparser from boto.compat import json import requests class Credentials(object): """ Holds the credentials needed to authenticate requests. In addition the Credential object knows how to search for credentials and how to choose the right credentials when multiple credentials are found. 
""" def __init__(self, access_key=None, secret_key=None, token=None): self.access_key = access_key self.secret_key = secret_key self.token = token def _search_md(url='http://169.254.169.254/latest/meta-data/iam/'): d = {} try: r = requests.get(url, timeout=.1) if r.content: fields = r.content.split('\n') for field in fields: if field.endswith('/'): d[field[0:-1]] = get_iam_role(url + field) else: val = requests.get(url + field).content if val[0] == '{': val = json.loads(val) else: p = val.find('\n') if p > 0: val = r.content.split('\n') d[field] = val except (requests.Timeout, requests.ConnectionError): pass return d def search_metadata(**kwargs): credentials = None metadata = _search_md() # Assuming there's only one role on the instance profile. if metadata: metadata = metadata['iam']['security-credentials'].values()[0] credentials = Credentials(metadata['AccessKeyId'], metadata['SecretAccessKey'], metadata['Token']) return credentials def search_environment(**kwargs): """ Search for credentials in explicit environment variables. """ credentials = None access_key = os.environ.get(kwargs['access_key_name'].upper(), None) secret_key = os.environ.get(kwargs['secret_key_name'].upper(), None) if access_key and secret_key: credentials = Credentials(access_key, secret_key) return credentials def search_file(**kwargs): """ If the 'AWS_CREDENTIAL_FILE' environment variable exists, parse that file for credentials. """ credentials = None if 'AWS_CREDENTIAL_FILE' in os.environ: persona = kwargs.get('persona', 'default') access_key_name = kwargs['access_key_name'] secret_key_name = kwargs['secret_key_name'] access_key = secret_key = None path = os.getenv('AWS_CREDENTIAL_FILE') path = os.path.expandvars(path) path = os.path.expanduser(path) cp = configparser.RawConfigParser() cp.read(path) if not cp.has_section(persona): raise ValueError('Persona: %s not found' % persona) if cp.has_option(persona, access_key_name): access_key = cp.get(persona, access_key_name) else: access_key = None if cp.has_option(persona, secret_key_name): secret_key = cp.get(persona, secret_key_name) else: secret_key = None if access_key and secret_key: credentials = Credentials(access_key, secret_key) return credentials def search_boto_config(**kwargs): """ Look for credentials in boto config file. """ credentials = access_key = secret_key = None if 'BOTO_CONFIG' in os.environ: paths = [os.environ['BOTO_CONFIG']] else: paths = ['/etc/boto.cfg', '~/.boto'] paths = [os.path.expandvars(p) for p in paths] paths = [os.path.expanduser(p) for p in paths] cp = configparser.RawConfigParser() cp.read(paths) if cp.has_section('Credentials'): access_key = cp.get('Credentials', 'aws_access_key_id') secret_key = cp.get('Credentials', 'aws_secret_access_key') if access_key and secret_key: credentials = Credentials(access_key, secret_key) return credentials AllCredentialFunctions = [search_environment, search_file, search_boto_config, search_metadata] def get_credentials(persona='default'): for cred_fn in AllCredentialFunctions: credentials = cred_fn(persona=persona, access_key_name='access_key', secret_key_name='secret_key') if credentials: break return credentials boto-2.20.1/boto/core/dictresponse.py000066400000000000000000000140161225267101000175040ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import xml.sax def pythonize_name(name, sep='_'): s = '' if name[0].isupper(): s = name[0].lower() for c in name[1:]: if c.isupper(): s += sep + c.lower() else: s += c return s class XmlHandler(xml.sax.ContentHandler): def __init__(self, root_node, connection): self.connection = connection self.nodes = [('root', root_node)] self.current_text = '' def startElement(self, name, attrs): self.current_text = '' t = self.nodes[-1][1].startElement(name, attrs, self.connection) if t is not None: if isinstance(t, tuple): self.nodes.append(t) else: self.nodes.append((name, t)) def endElement(self, name): self.nodes[-1][1].endElement(name, self.current_text, self.connection) if self.nodes[-1][0] == name: self.nodes.pop() self.current_text = '' def characters(self, content): self.current_text += content def parse(self, s): xml.sax.parseString(s, self) class Element(dict): def __init__(self, connection=None, element_name=None, stack=None, parent=None, list_marker=None, item_marker=None, pythonize_name=False): dict.__init__(self) self.connection = connection self.element_name = element_name self.list_marker = list_marker or ['Set'] self.item_marker = item_marker or ['member', 'item'] if stack is None: self.stack = [] else: self.stack = stack self.pythonize_name = pythonize_name self.parent = parent def __getattr__(self, key): if key in self: return self[key] for k in self: e = self[k] if isinstance(e, Element): try: return getattr(e, key) except AttributeError: pass raise AttributeError def get_name(self, name): if self.pythonize_name: name = pythonize_name(name) return name def startElement(self, name, attrs, connection): self.stack.append(name) for lm in self.list_marker: if name.endswith(lm): l = ListElement(self.connection, name, self.list_marker, self.item_marker, self.pythonize_name) self[self.get_name(name)] = l return l if len(self.stack) > 0: element_name = self.stack[-1] e = Element(self.connection, element_name, self.stack, self, self.list_marker, self.item_marker, self.pythonize_name) self[self.get_name(element_name)] = e return (element_name, e) else: return None def endElement(self, name, value, connection): if len(self.stack) > 0: self.stack.pop() value = value.strip() if value: if isinstance(self.parent, Element): self.parent[self.get_name(name)] = value elif isinstance(self.parent, ListElement): self.parent.append(value) class ListElement(list): def __init__(self, connection=None, element_name=None, list_marker=['Set'], item_marker=('member',
'item'), pythonize_name=False): list.__init__(self) self.connection = connection self.element_name = element_name self.list_marker = list_marker self.item_marker = item_marker self.pythonize_name = pythonize_name def get_name(self, name): if self.pythonize_name: name = pythonize_name(name) return name def startElement(self, name, attrs, connection): for lm in self.list_marker: if name.endswith(lm): l = ListElement(self.connection, name, self.list_marker, self.item_marker, self.pythonize_name) setattr(self, self.get_name(name), l) return l if name in self.item_marker: e = Element(self.connection, name, parent=self, list_marker=self.list_marker, item_marker=self.item_marker, pythonize_name=self.pythonize_name) self.append(e) return e else: return None def endElement(self, name, value, connection): if name == self.element_name: if len(self) > 0: empty = [] for e in self: if isinstance(e, Element): if len(e) == 0: empty.append(e) for e in empty: self.remove(e) else: setattr(self, self.get_name(name), value) boto-2.20.1/boto/core/service.py000066400000000000000000000053171225267101000164460ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import requests from .auth import SigV2Auth from .credentials import get_credentials from .dictresponse import Element, XmlHandler class Service(object): """ This is a simple example service that connects to the EC2 endpoint and supports a single request (DescribeInstances) to show how to use the requests-based code rather than the standard boto code which is based on httplib. At the moment, the only auth mechanism supported is SigV2.
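A minimal usage sketch (assumes credentials can be located by one of the
search functions in credentials.py)::

    s = Service()
    response = s.describe_instances()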
""" def __init__(self, host='https://ec2.us-east-1.amazonaws.com', path='/', api_version='2012-03-01', persona=None): self.credentials = get_credentials(persona) self.auth = SigV2Auth(self.credentials, api_version=api_version) self.host = host self.path = path def get_response(self, params, list_marker=None): r = requests.post(self.host, params=params, hooks={'args': self.auth.add_auth}) r.encoding = 'utf-8' body = r.text.encode('utf-8') e = Element(list_marker=list_marker, pythonize_name=True) h = XmlHandler(e, self) h.parse(body) return e def build_list_params(self, params, items, label): if isinstance(items, str): items = [items] for i in range(1, len(items) + 1): params['%s.%d' % (label, i)] = items[i - 1] def describe_instances(self, instance_ids=None): params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') return self.get_response(params) boto-2.20.1/boto/datapipeline/000077500000000000000000000000001225267101000161355ustar00rootroot00000000000000boto-2.20.1/boto/datapipeline/__init__.py000066400000000000000000000000001225267101000202340ustar00rootroot00000000000000boto-2.20.1/boto/datapipeline/exceptions.py000066400000000000000000000026771225267101000207040ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.exception import JSONResponseError class PipelineDeletedException(JSONResponseError): pass class InvalidRequestException(JSONResponseError): pass class TaskNotFoundException(JSONResponseError): pass class PipelineNotFoundException(JSONResponseError): pass class InternalServiceError(JSONResponseError): pass boto-2.20.1/boto/datapipeline/layer1.py000066400000000000000000000704771225267101000177230ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import boto from boto.compat import json from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.exception import JSONResponseError from boto.datapipeline import exceptions class DataPipelineConnection(AWSQueryConnection): """ This is the AWS Data Pipeline API Reference . This guide provides descriptions and samples of the AWS Data Pipeline API. AWS Data Pipeline is a web service that configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so your application can focus on processing the data. The AWS Data Pipeline API implements two main sets of functionality. The first set of actions configure the pipeline in the web service. You call these actions to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. The second set of actions are used by a task runner application that calls the AWS Data Pipeline API to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service. AWS Data Pipeline provides an open-source implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management. The AWS Data Pipeline API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see `Signature Version 4 Signing Process`_. In the code examples in this reference, the Signature Version 4 Request parameters are represented as AuthParams. 
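A minimal usage sketch (the pipeline name and unique id are
illustrative)::

    from boto.datapipeline.layer1 import DataPipelineConnection

    conn = DataPipelineConnection()
    response = conn.create_pipeline('my-pipeline', 'my-unique-id')
    pipeline_id = response['pipelineId']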
""" APIVersion = "2012-10-29" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "datapipeline.us-east-1.amazonaws.com" ServiceName = "DataPipeline" TargetPrefix = "DataPipeline" ResponseError = JSONResponseError _faults = { "PipelineDeletedException": exceptions.PipelineDeletedException, "InvalidRequestException": exceptions.InvalidRequestException, "TaskNotFoundException": exceptions.TaskNotFoundException, "PipelineNotFoundException": exceptions.PipelineNotFoundException, "InternalServiceError": exceptions.InternalServiceError, } def __init__(self, **kwargs): region = kwargs.get('region') if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def activate_pipeline(self, pipeline_id): """ Validates a pipeline and initiates processing. If the pipeline does not pass validation, activation fails. Call this action to start processing pipeline tasks of a pipeline you've created using the CreatePipeline and PutPipelineDefinition actions. A pipeline cannot be modified after it has been successfully activated. :type pipeline_id: string :param pipeline_id: The identifier of the pipeline to activate. """ params = {'pipelineId': pipeline_id, } return self.make_request(action='ActivatePipeline', body=json.dumps(params)) def create_pipeline(self, name, unique_id, description=None): """ Creates a new empty pipeline. When this action succeeds, you can then use the PutPipelineDefinition action to populate the pipeline. :type name: string :param name: The name of the new pipeline. You can use the same name for multiple pipelines associated with your AWS account, because AWS Data Pipeline assigns each new pipeline a unique pipeline identifier. :type unique_id: string :param unique_id: A unique identifier that you specify. This identifier is not the same as the pipeline identifier assigned by AWS Data Pipeline. You are responsible for defining the format and ensuring the uniqueness of this identifier. You use this parameter to ensure idempotency during repeated calls to CreatePipeline. For example, if the first call to CreatePipeline does not return a clear success, you can pass in the same unique identifier and pipeline name combination on a subsequent call to CreatePipeline. CreatePipeline ensures that if a pipeline already exists with the same name and unique identifier, a new pipeline will not be created. Instead, you'll receive the pipeline identifier from the previous attempt. The uniqueness of the name and unique identifier combination is scoped to the AWS account or IAM user credentials. :type description: string :param description: The description of the new pipeline. """ params = {'name': name, 'uniqueId': unique_id, } if description is not None: params['description'] = description return self.make_request(action='CreatePipeline', body=json.dumps(params)) def delete_pipeline(self, pipeline_id): """ Permanently deletes a pipeline, its pipeline definition and its run history. You cannot query or restore a deleted pipeline. AWS Data Pipeline will attempt to cancel instances associated with the pipeline that are currently being processed by task runners. Deleting a pipeline cannot be undone. To temporarily pause a pipeline instead of deleting it, call SetStatus with the status set to Pause on individual components. Components that are paused by SetStatus can be resumed. 
:type pipeline_id: string :param pipeline_id: The identifier of the pipeline to be deleted. """ params = {'pipelineId': pipeline_id, } return self.make_request(action='DeletePipeline', body=json.dumps(params)) def describe_objects(self, object_ids, pipeline_id, marker=None, evaluate_expressions=None): """ Returns the object definitions for a set of objects associated with the pipeline. Object definitions are composed of a set of fields that define the properties of the object. :type pipeline_id: string :param pipeline_id: Identifier of the pipeline that contains the object definitions. :type object_ids: list :param object_ids: Identifiers of the pipeline objects that contain the definitions to be described. You can pass as many as 25 identifiers in a single call to DescribeObjects. :type evaluate_expressions: boolean :param evaluate_expressions: Indicates whether any expressions in the object should be evaluated when the object descriptions are returned. :type marker: string :param marker: The starting point for the results to be returned. The first time you call DescribeObjects, this value should be empty. As long as the action returns `HasMoreResults` as `True`, you can call DescribeObjects again and pass the marker value from the response to retrieve the next set of results. """ params = { 'pipelineId': pipeline_id, 'objectIds': object_ids, } if evaluate_expressions is not None: params['evaluateExpressions'] = evaluate_expressions if marker is not None: params['marker'] = marker return self.make_request(action='DescribeObjects', body=json.dumps(params)) def describe_pipelines(self, pipeline_ids): """ Retrieve metadata about one or more pipelines. The information retrieved includes the name of the pipeline, the pipeline identifier, its current state, and the user account that owns the pipeline. Using account credentials, you can retrieve metadata about pipelines that you or your IAM users have created. If you are using an IAM user account, you can retrieve metadata about only those pipelines you have read permission for. To retrieve the full pipeline definition instead of metadata about the pipeline, call the GetPipelineDefinition action. :type pipeline_ids: list :param pipeline_ids: Identifiers of the pipelines to describe. You can pass as many as 25 identifiers in a single call to DescribePipelines. You can obtain pipeline identifiers by calling ListPipelines. """ params = {'pipelineIds': pipeline_ids, } return self.make_request(action='DescribePipelines', body=json.dumps(params)) def evaluate_expression(self, pipeline_id, expression, object_id): """ Evaluates a string in the context of a specified object. A task runner can use this action to evaluate SQL queries stored in Amazon S3. :type pipeline_id: string :param pipeline_id: The identifier of the pipeline. :type object_id: string :param object_id: The identifier of the object. :type expression: string :param expression: The expression to evaluate. """ params = { 'pipelineId': pipeline_id, 'objectId': object_id, 'expression': expression, } return self.make_request(action='EvaluateExpression', body=json.dumps(params)) def get_pipeline_definition(self, pipeline_id, version=None): """ Returns the definition of the specified pipeline. You can call GetPipelineDefinition to retrieve the pipeline definition you provided using PutPipelineDefinition. :type pipeline_id: string :param pipeline_id: The identifier of the pipeline. :type version: string :param version: The version of the pipeline definition to retrieve. 
This parameter accepts the values `latest` (default) and `active`. Where `latest` indicates the last definition saved to the pipeline and `active` indicates the last definition of the pipeline that was activated. """ params = {'pipelineId': pipeline_id, } if version is not None: params['version'] = version return self.make_request(action='GetPipelineDefinition', body=json.dumps(params)) def list_pipelines(self, marker=None): """ Returns a list of pipeline identifiers for all active pipelines. Identifiers are returned only for pipelines you have permission to access. :type marker: string :param marker: The starting point for the results to be returned. The first time you call ListPipelines, this value should be empty. As long as the action returns `HasMoreResults` as `True`, you can call ListPipelines again and pass the marker value from the response to retrieve the next set of results. """ params = {} if marker is not None: params['marker'] = marker return self.make_request(action='ListPipelines', body=json.dumps(params)) def poll_for_task(self, worker_group, hostname=None, instance_identity=None): """ Task runners call this action to receive a task to perform from AWS Data Pipeline. The task runner specifies which tasks it can perform by setting a value for the workerGroup parameter of the PollForTask call. The task returned by PollForTask may come from any of the pipelines that match the workerGroup value passed in by the task runner and that was launched using the IAM user credentials specified by the task runner. If tasks are ready in the work queue, PollForTask returns a response immediately. If no tasks are available in the queue, PollForTask uses long-polling and holds on to a poll connection for up to 90 seconds during which time the first newly scheduled task is handed to the task runner. To accommodate this, set the socket timeout in your task runner to 90 seconds. The task runner should not call PollForTask again on the same `workerGroup` until it receives a response, and this may take up to 90 seconds. :type worker_group: string :param worker_group: Indicates the type of task the task runner is configured to accept and process. The worker group is set as a field on objects in the pipeline when they are created. You can only specify a single value for `workerGroup` in the call to PollForTask. There are no wildcard values permitted in `workerGroup`, the string must be an exact, case-sensitive, match. :type hostname: string :param hostname: The public DNS name of the calling task runner. :type instance_identity: dict :param instance_identity: Identity information for the Amazon EC2 instance that is hosting the task runner. You can get this value by calling the URI, `http://169.254.169.254/latest/meta-data/instance- id`, from the EC2 instance. For more information, go to `Instance Metadata`_ in the Amazon Elastic Compute Cloud User Guide. Passing in this value proves that your task runner is running on an EC2 instance, and ensures the proper AWS Data Pipeline service charges are applied to your pipeline. """ params = {'workerGroup': worker_group, } if hostname is not None: params['hostname'] = hostname if instance_identity is not None: params['instanceIdentity'] = instance_identity return self.make_request(action='PollForTask', body=json.dumps(params)) def put_pipeline_definition(self, pipeline_objects, pipeline_id): """ Adds tasks, schedules, and preconditions that control the behavior of the pipeline.
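A sketch of a minimal call (the object id, name, and field values are
illustrative)::

    pipeline_objects = [
        {'id': 'Default',
         'name': 'Default',
         'fields': [{'key': 'workerGroup',
                     'stringValue': 'MyWorkerGroup'}]},
    ]
    conn.put_pipeline_definition(pipeline_objects, 'df-0123456789')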
You can use PutPipelineDefinition to populate a new pipeline or to update an existing pipeline that has not yet been activated. PutPipelineDefinition also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following three validation errors exists in the pipeline. #. An object is missing a name or identifier field. #. A string or reference field is empty. #. The number of objects in the pipeline exceeds the maximum allowed objects. Pipeline object definitions are passed to the PutPipelineDefinition action and returned by the GetPipelineDefinition action. :type pipeline_id: string :param pipeline_id: The identifier of the pipeline to be configured. :type pipeline_objects: list :param pipeline_objects: The objects that define the pipeline. These will overwrite the existing pipeline definition. """ params = { 'pipelineId': pipeline_id, 'pipelineObjects': pipeline_objects, } return self.make_request(action='PutPipelineDefinition', body=json.dumps(params)) def query_objects(self, pipeline_id, sphere, marker=None, query=None, limit=None): """ Queries a pipeline for the names of objects that match a specified set of conditions. The objects returned by QueryObjects are paginated and then filtered by the value you set for query. This means the action may return an empty result set with a value set for marker. If `HasMoreResults` is set to `True`, you should continue to call QueryObjects, passing in the returned value for marker, until `HasMoreResults` returns `False`. :type pipeline_id: string :param pipeline_id: Identifier of the pipeline to be queried for object names. :type query: dict :param query: Query that defines the objects to be returned. The Query object can contain a maximum of ten selectors. The conditions in the query are limited to top-level String fields in the object. These filters can be applied to components, instances, and attempts. :type sphere: string :param sphere: Specifies whether the query applies to components or instances. Allowable values: `COMPONENT`, `INSTANCE`, `ATTEMPT`. :type marker: string :param marker: The starting point for the results to be returned. The first time you call QueryObjects, this value should be empty. As long as the action returns `HasMoreResults` as `True`, you can call QueryObjects again and pass the marker value from the response to retrieve the next set of results. :type limit: integer :param limit: Specifies the maximum number of object names that QueryObjects will return in a single call. The default value is 100. """ params = {'pipelineId': pipeline_id, 'sphere': sphere, } if query is not None: params['query'] = query if marker is not None: params['marker'] = marker if limit is not None: params['limit'] = limit return self.make_request(action='QueryObjects', body=json.dumps(params)) def report_task_progress(self, task_id): """ Updates the AWS Data Pipeline service on the progress of the calling task runner. When the task runner is assigned a task, it should call ReportTaskProgress to acknowledge that it has the task within 2 minutes. If the web service does not receive this acknowledgement within the 2 minute window, it will assign the task in a subsequent PollForTask call. After this initial acknowledgement, the task runner only needs to report progress every 15 minutes to maintain its ownership of the task. You can change this reporting time from 15 minutes by specifying a `reportProgressTimeout` field in your pipeline.
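For example, a task runner might acknowledge a task as soon as it receives
one (the worker group name and response handling are illustrative)::

    task = conn.poll_for_task('MyWorkerGroup')
    if task and 'taskObject' in task:
        conn.report_task_progress(task['taskObject']['taskId'])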
If a task runner does not report its status after 5 minutes, AWS Data Pipeline will assume that the task runner is unable to process the task and will reassign the task in a subsequent response to PollForTask. Task runners should call ReportTaskProgress every 60 seconds. :type task_id: string :param task_id: Identifier of the task assigned to the task runner. This value is provided in the TaskObject that the service returns with the response for the PollForTask action. """ params = {'taskId': task_id, } return self.make_request(action='ReportTaskProgress', body=json.dumps(params)) def report_task_runner_heartbeat(self, taskrunner_id, worker_group=None, hostname=None): """ Task runners call ReportTaskRunnerHeartbeat every 15 minutes to indicate that they are operational. In the case of AWS Data Pipeline Task Runner launched on a resource managed by AWS Data Pipeline, the web service can use this call to detect when the task runner application has failed and restart a new instance. :type taskrunner_id: string :param taskrunner_id: The identifier of the task runner. This value should be unique across your AWS account. In the case of AWS Data Pipeline Task Runner launched on a resource managed by AWS Data Pipeline, the web service provides a unique identifier when it launches the application. If you have written a custom task runner, you should assign a unique identifier for the task runner. :type worker_group: string :param worker_group: Indicates the type of task the task runner is configured to accept and process. The worker group is set as a field on objects in the pipeline when they are created. You can only specify a single value for `workerGroup` in the call to ReportTaskRunnerHeartbeat. There are no wildcard values permitted in `workerGroup`, the string must be an exact, case-sensitive, match. :type hostname: string :param hostname: The public DNS name of the calling task runner. """ params = {'taskrunnerId': taskrunner_id, } if worker_group is not None: params['workerGroup'] = worker_group if hostname is not None: params['hostname'] = hostname return self.make_request(action='ReportTaskRunnerHeartbeat', body=json.dumps(params)) def set_status(self, object_ids, status, pipeline_id): """ Requests that the status of an array of physical or logical pipeline objects be updated in the pipeline. This update may not occur immediately, but is eventually consistent. The status that can be set depends on the type of object. :type pipeline_id: string :param pipeline_id: Identifies the pipeline that contains the objects. :type object_ids: list :param object_ids: Identifies an array of objects. The corresponding objects can be either physical or components, but not a mix of both types. :type status: string :param status: Specifies the status to be set on all the objects in `objectIds`. For components, this can be either `PAUSE` or `RESUME`. For instances, this can be either `CANCEL`, `RERUN`, or `MARK_FINISHED`. """ params = { 'pipelineId': pipeline_id, 'objectIds': object_ids, 'status': status, } return self.make_request(action='SetStatus', body=json.dumps(params)) def set_task_status(self, task_id, task_status, error_id=None, error_message=None, error_stack_trace=None): """ Notifies AWS Data Pipeline that a task is completed and provides information about the final status. The task runner calls this action regardless of whether the task was successful. The task runner does not need to call SetTaskStatus for tasks that are canceled by the web service during a call to ReportTaskProgress.
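For example (the task identifier is illustrative)::

    conn.set_task_status('my-task-id', 'FINISHED')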
:type task_id: string :param task_id: Identifies the task assigned to the task runner. This value is set in the TaskObject that is returned by the PollForTask action. :type task_status: string :param task_status: If `FINISHED`, the task successfully completed. If `FAILED` the task ended unsuccessfully. The `FALSE` value is used by preconditions. :type error_id: string :param error_id: If an error occurred during the task, this value specifies an id value that represents the error. This value is set on the physical attempt object. It is used to display error information to the user. It should not start with string "Service_" which is reserved by the system. :type error_message: string :param error_message: If an error occurred during the task, this value specifies a text description of the error. This value is set on the physical attempt object. It is used to display error information to the user. The web service does not parse this value. :type error_stack_trace: string :param error_stack_trace: If an error occurred during the task, this value specifies the stack trace associated with the error. This value is set on the physical attempt object. It is used to display error information to the user. The web service does not parse this value. """ params = {'taskId': task_id, 'taskStatus': task_status, } if error_id is not None: params['errorId'] = error_id if error_message is not None: params['errorMessage'] = error_message if error_stack_trace is not None: params['errorStackTrace'] = error_stack_trace return self.make_request(action='SetTaskStatus', body=json.dumps(params)) def validate_pipeline_definition(self, pipeline_objects, pipeline_id): """ Tests the pipeline definition with a set of validation checks to ensure that it is well formed and can run without error. :type pipeline_id: string :param pipeline_id: Identifies the pipeline whose definition is to be validated. :type pipeline_objects: list :param pipeline_objects: A list of objects that define the pipeline changes to validate against the pipeline. """ params = { 'pipelineId': pipeline_id, 'pipelineObjects': pipeline_objects, } return self.make_request(action='ValidatePipelineDefinition', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/directconnect/000077500000000000000000000000001225267101000163225ustar00rootroot00000000000000boto-2.20.1/boto/directconnect/__init__.py000066400000000000000000000057571225267101000204510ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the AWS DirectConnect service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.directconnect.layer1 import DirectConnectConnection return [RegionInfo(name='us-east-1', endpoint='directconnect.us-east-1.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='us-west-1', endpoint='directconnect.us-west-1.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='us-west-2', endpoint='directconnect.us-west-2.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='eu-west-1', endpoint='directconnect.eu-west-1.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='ap-southeast-1', endpoint='directconnect.ap-southeast-1.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='ap-southeast-2', endpoint='directconnect.ap-southeast-2.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='ap-southeast-3', endpoint='directconnect.ap-southeast-3.amazonaws.com', connection_cls=DirectConnectConnection), RegionInfo(name='sa-east-1', endpoint='directconnect.sa-east-1.amazonaws.com', connection_cls=DirectConnectConnection), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/directconnect/exceptions.py000066400000000000000000000023261225267101000210600ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class DirectConnectClientException(Exception): pass class DirectConnectServerException(Exception): pass boto-2.20.1/boto/directconnect/layer1.py000066400000000000000000000560711225267101000201020ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # try: import json except ImportError: import simplejson as json import boto from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.exception import JSONResponseError from boto.directconnect import exceptions class DirectConnectConnection(AWSQueryConnection): """ AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to Amazon Web Services (AWS). Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. The AWS Direct Connect API Reference provides descriptions, syntax, and usage examples for each of the actions and data types for AWS Direct Connect. Use the following links to get started using the AWS Direct Connect API Reference : + `Actions`_: An alphabetical list of all AWS Direct Connect actions. + `Data Types`_: An alphabetical list of all AWS Direct Connect data types. + `Common Query Parameters`_: Parameters that all Query actions can use. + `Common Errors`_: Client and server errors that all actions can return. 
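A minimal usage sketch (the region name is illustrative)::

    from boto.directconnect import connect_to_region

    conn = connect_to_region('us-east-1')
    connections = conn.describe_connections()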
""" APIVersion = "2012-10-25" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "directconnect.us-east-1.amazonaws.com" ServiceName = "DirectConnect" TargetPrefix = "OvertureService" ResponseError = JSONResponseError _faults = { "DirectConnectClientException": exceptions.DirectConnectClientException, "DirectConnectServerException": exceptions.DirectConnectServerException, } def __init__(self, **kwargs): region = kwargs.pop('region', None) if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) if 'host' not in kwargs: kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def allocate_connection_on_interconnect(self, bandwidth, connection_name, owner_account, interconnect_id, vlan): """ Creates a hosted connection on an interconnect. Allocates a VLAN number and a specified amount of bandwidth for use by a hosted connection on the given interconnect. :type bandwidth: string :param bandwidth: Bandwidth of the connection. Example: " 500Mbps " Default: None :type connection_name: string :param connection_name: Name of the provisioned connection. Example: " 500M Connection to AWS " Default: None :type owner_account: string :param owner_account: Numeric account Id of the customer for whom the connection will be provisioned. Example: 123443215678 Default: None :type interconnect_id: string :param interconnect_id: ID of the interconnect on which the connection will be provisioned. Example: dxcon-456abc78 Default: None :type vlan: integer :param vlan: The dedicated VLAN provisioned to the connection. Example: 101 Default: None """ params = { 'bandwidth': bandwidth, 'connectionName': connection_name, 'ownerAccount': owner_account, 'interconnectId': interconnect_id, 'vlan': vlan, } return self.make_request(action='AllocateConnectionOnInterconnect', body=json.dumps(params)) def allocate_private_virtual_interface(self, connection_id, owner_account, new_private_virtual_interface_allocation): """ Provisions a private virtual interface to be owned by a different customer. The owner of a connection calls this function to provision a private virtual interface which will be owned by another AWS customer. Virtual interfaces created using this function must be confirmed by the virtual interface owner by calling ConfirmPrivateVirtualInterface. Until this step has been completed, the virtual interface will be in 'Confirming' state, and will not be available for handling traffic. :type connection_id: string :param connection_id: The connection ID on which the private virtual interface is provisioned. Default: None :type owner_account: string :param owner_account: The AWS account that will own the new private virtual interface. Default: None :type new_private_virtual_interface_allocation: dict :param new_private_virtual_interface_allocation: Detailed information for the private virtual interface to be provisioned. Default: None """ params = { 'connectionId': connection_id, 'ownerAccount': owner_account, 'newPrivateVirtualInterfaceAllocation': new_private_virtual_interface_allocation, } return self.make_request(action='AllocatePrivateVirtualInterface', body=json.dumps(params)) def allocate_public_virtual_interface(self, connection_id, owner_account, new_public_virtual_interface_allocation): """ Provisions a public virtual interface to be owned by a different customer. 
The owner of a connection calls this function to provision a public virtual interface which will be owned by another AWS customer. Virtual interfaces created using this function must be confirmed by the virtual interface owner by calling ConfirmPublicVirtualInterface. Until this step has been completed, the virtual interface will be in 'Confirming' state, and will not be available for handling traffic. :type connection_id: string :param connection_id: The connection ID on which the public virtual interface is provisioned. Default: None :type owner_account: string :param owner_account: The AWS account that will own the new public virtual interface. Default: None :type new_public_virtual_interface_allocation: dict :param new_public_virtual_interface_allocation: Detailed information for the public virtual interface to be provisioned. Default: None """ params = { 'connectionId': connection_id, 'ownerAccount': owner_account, 'newPublicVirtualInterfaceAllocation': new_public_virtual_interface_allocation, } return self.make_request(action='AllocatePublicVirtualInterface', body=json.dumps(params)) def confirm_connection(self, connection_id): """ Confirm the creation of a hosted connection on an interconnect. Upon creation, the hosted connection is initially in the 'Ordering' state, and will remain in this state until the owner calls ConfirmConnection to confirm creation of the hosted connection. :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None """ params = {'connectionId': connection_id, } return self.make_request(action='ConfirmConnection', body=json.dumps(params)) def confirm_private_virtual_interface(self, virtual_interface_id, virtual_gateway_id): """ Accept ownership of a private virtual interface created by another customer. After the virtual interface owner calls this function, the virtual interface will be created and attached to the given virtual private gateway, and will be available for handling traffic. :type virtual_interface_id: string :param virtual_interface_id: ID of the virtual interface. Example: dxvif-123dfg56 Default: None :type virtual_gateway_id: string :param virtual_gateway_id: ID of the virtual private gateway that will be attached to the virtual interface. A virtual private gateway can be managed via the Amazon Virtual Private Cloud (VPC) console or the `EC2 CreateVpnGateway`_ action. Default: None """ params = { 'virtualInterfaceId': virtual_interface_id, 'virtualGatewayId': virtual_gateway_id, } return self.make_request(action='ConfirmPrivateVirtualInterface', body=json.dumps(params)) def confirm_public_virtual_interface(self, virtual_interface_id): """ Accept ownership of a public virtual interface created by another customer. After the virtual interface owner calls this function, the specified virtual interface will be created and made available for handling traffic. :type virtual_interface_id: string :param virtual_interface_id: ID of the virtual interface. Example: dxvif-123dfg56 Default: None """ params = {'virtualInterfaceId': virtual_interface_id, } return self.make_request(action='ConfirmPublicVirtualInterface', body=json.dumps(params)) def create_connection(self, location, bandwidth, connection_name): """ Creates a new connection between the customer network and a specific AWS Direct Connect location. A connection links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. 
One end of the cable is connected to your router, the other to an AWS Direct Connect router. An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with. You can establish connections with AWS Direct Connect locations in multiple regions, but a connection in one region does not provide connectivity to other regions. :type location: string :param location: Where the connection is located. Example: EqSV5 Default: None :type bandwidth: string :param bandwidth: Bandwidth of the connection. Example: 1Gbps Default: None :type connection_name: string :param connection_name: The name of the connection. Example: " My Connection to AWS " Default: None """ params = { 'location': location, 'bandwidth': bandwidth, 'connectionName': connection_name, } return self.make_request(action='CreateConnection', body=json.dumps(params)) def create_interconnect(self, interconnect_name, bandwidth, location): """ Creates a new interconnect between an AWS Direct Connect partner's network and a specific AWS Direct Connect location. An interconnect is a connection which is capable of hosting other connections. The AWS Direct Connect partner can use an interconnect to provide sub-1Gbps AWS Direct Connect service to tier 2 customers who do not have their own connections. Like a standard connection, an interconnect links the AWS Direct Connect partner's network to an AWS Direct Connect location over a standard 1 Gbps or 10 Gbps Ethernet fiber-optic cable. One end is connected to the partner's router, the other to an AWS Direct Connect router. For each end customer, the AWS Direct Connect partner provisions a connection on their interconnect by calling AllocateConnectionOnInterconnect. The end customer can then connect to AWS resources by creating a virtual interface on their connection, using the VLAN assigned to them by the AWS Direct Connect partner. :type interconnect_name: string :param interconnect_name: The name of the interconnect. Example: " 1G Interconnect to AWS " Default: None :type bandwidth: string :param bandwidth: The port bandwidth. Example: 1Gbps Default: None Available values: 1Gbps, 10Gbps :type location: string :param location: Where the interconnect is located. Example: EqSV5 Default: None """ params = { 'interconnectName': interconnect_name, 'bandwidth': bandwidth, 'location': location, } return self.make_request(action='CreateInterconnect', body=json.dumps(params)) def create_private_virtual_interface(self, connection_id, new_private_virtual_interface): """ Creates a new private virtual interface. A virtual interface is the VLAN that transports AWS Direct Connect traffic. A private virtual interface supports sending traffic to a single virtual private cloud (VPC). :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None :type new_private_virtual_interface: dict :param new_private_virtual_interface: Detailed information for the private virtual interface to be created. Default: None """ params = { 'connectionId': connection_id, 'newPrivateVirtualInterface': new_private_virtual_interface, } return self.make_request(action='CreatePrivateVirtualInterface', body=json.dumps(params)) def create_public_virtual_interface(self, connection_id, new_public_virtual_interface): """ Creates a new public virtual interface. A virtual interface is the VLAN that transports AWS Direct Connect traffic. 
A public virtual interface supports sending traffic to public services of AWS such as Amazon Simple Storage Service (Amazon S3). :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None :type new_public_virtual_interface: dict :param new_public_virtual_interface: Detailed information for the public virtual interface to be created. Default: None """ params = { 'connectionId': connection_id, 'newPublicVirtualInterface': new_public_virtual_interface, } return self.make_request(action='CreatePublicVirtualInterface', body=json.dumps(params)) def delete_connection(self, connection_id): """ Deletes the connection. Deleting a connection only stops the AWS Direct Connect port hour and data transfer charges. You need to cancel separately with the providers any services or charges for cross-connects or network circuits that connect you to the AWS Direct Connect location. :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None """ params = {'connectionId': connection_id, } return self.make_request(action='DeleteConnection', body=json.dumps(params)) def delete_interconnect(self, interconnect_id): """ Deletes the specified interconnect. :type interconnect_id: string :param interconnect_id: The ID of the interconnect. Example: dxcon-abc123 """ params = {'interconnectId': interconnect_id, } return self.make_request(action='DeleteInterconnect', body=json.dumps(params)) def delete_virtual_interface(self, virtual_interface_id): """ Deletes a virtual interface. :type virtual_interface_id: string :param virtual_interface_id: ID of the virtual interface. Example: dxvif-123dfg56 Default: None """ params = {'virtualInterfaceId': virtual_interface_id, } return self.make_request(action='DeleteVirtualInterface', body=json.dumps(params)) def describe_connections(self, connection_id=None): """ Displays all connections in this region. If a connection ID is provided, the call returns only that particular connection. :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None """ params = {} if connection_id is not None: params['connectionId'] = connection_id return self.make_request(action='DescribeConnections', body=json.dumps(params)) def describe_connections_on_interconnect(self, interconnect_id): """ Return a list of connections that have been provisioned on the given interconnect. :type interconnect_id: string :param interconnect_id: ID of the interconnect on which a list of connections is provisioned. Example: dxcon-abc123 Default: None """ params = {'interconnectId': interconnect_id, } return self.make_request(action='DescribeConnectionsOnInterconnect', body=json.dumps(params)) def describe_interconnects(self, interconnect_id=None): """ Returns a list of interconnects owned by the AWS account. If an interconnect ID is provided, only that particular interconnect will be returned. :type interconnect_id: string :param interconnect_id: The ID of the interconnect. Example: dxcon-abc123 """ params = {} if interconnect_id is not None: params['interconnectId'] = interconnect_id return self.make_request(action='DescribeInterconnects', body=json.dumps(params)) def describe_locations(self): """ Returns the list of AWS Direct Connect locations in the current AWS region. These are the locations that may be selected when calling CreateConnection or CreateInterconnect. 
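A minimal sketch of listing locations (``conn`` is an instance of this class; the response key names are assumptions based on the AWS API reference)::

    result = conn.describe_locations()
    for loc in result.get('locations', []):
        print loc['locationCode'], loc['locationName']
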
""" params = {} return self.make_request(action='DescribeLocations', body=json.dumps(params)) def describe_virtual_gateways(self): """ Returns a list of virtual private gateways owned by the AWS account. You can create one or more AWS Direct Connect private virtual interfaces linking to a virtual private gateway. A virtual private gateway can be managed via Amazon Virtual Private Cloud (VPC) console or the `EC2 CreateVpnGateway`_ action. """ params = {} return self.make_request(action='DescribeVirtualGateways', body=json.dumps(params)) def describe_virtual_interfaces(self, connection_id=None, virtual_interface_id=None): """ Displays all virtual interfaces for an AWS account. Virtual interfaces deleted fewer than 15 minutes before DescribeVirtualInterfaces is called are also returned. If a connection ID is included then only virtual interfaces associated with this connection will be returned. If a virtual interface ID is included then only a single virtual interface will be returned. A virtual interface (VLAN) transmits the traffic between the AWS Direct Connect location and the customer. If a connection ID is provided, only virtual interfaces provisioned on the specified connection will be returned. If a virtual interface ID is provided, only this particular virtual interface will be returned. :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None :type virtual_interface_id: string :param virtual_interface_id: ID of the virtual interface. Example: dxvif-123dfg56 Default: None """ params = {} if connection_id is not None: params['connectionId'] = connection_id if virtual_interface_id is not None: params['virtualInterfaceId'] = virtual_interface_id return self.make_request(action='DescribeVirtualInterfaces', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/dynamodb/000077500000000000000000000000001225267101000152735ustar00rootroot00000000000000boto-2.20.1/boto/dynamodb/__init__.py000066400000000000000000000062521225267101000174110ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the Amazon DynamoDB service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ import boto.dynamodb.layer2 return [RegionInfo(name='us-east-1', endpoint='dynamodb.us-east-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='us-gov-west-1', endpoint='dynamodb.us-gov-west-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='us-west-1', endpoint='dynamodb.us-west-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='us-west-2', endpoint='dynamodb.us-west-2.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='ap-northeast-1', endpoint='dynamodb.ap-northeast-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='ap-southeast-1', endpoint='dynamodb.ap-southeast-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='ap-southeast-2', endpoint='dynamodb.ap-southeast-2.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='eu-west-1', endpoint='dynamodb.eu-west-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), RegionInfo(name='sa-east-1', endpoint='dynamodb.sa-east-1.amazonaws.com', connection_cls=boto.dynamodb.layer2.Layer2), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/dynamodb/batch.py000066400000000000000000000230651225267101000167340ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class Batch(object): """ Used to construct a BatchGet request. :ivar table: The Table object from which the item is retrieved. :ivar keys: A list of scalar or tuple values. Each element in the list represents one Item to retrieve. 
If the schema for the table has both a HashKey and a RangeKey, each element in the list should be a tuple consisting of (hash_key, range_key). If the schema for the table contains only a HashKey, each element in the list should be a scalar value of the appropriate type for the table schema. NOTE: The maximum number of items that can be retrieved for a single operation is 100. Also, the number of items retrieved is constrained by a 1 MB size limit. :ivar attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :ivar consistent_read: Specify whether or not to use a consistent read. Defaults to False. """ def __init__(self, table, keys, attributes_to_get=None, consistent_read=False): self.table = table self.keys = keys self.attributes_to_get = attributes_to_get self.consistent_read = consistent_read def to_dict(self): """ Convert the Batch object into the format required for Layer1. """ batch_dict = {} key_list = [] for key in self.keys: if isinstance(key, tuple): hash_key, range_key = key else: hash_key = key range_key = None k = self.table.layer2.build_key_from_values(self.table.schema, hash_key, range_key) key_list.append(k) batch_dict['Keys'] = key_list if self.attributes_to_get: batch_dict['AttributesToGet'] = self.attributes_to_get if self.consistent_read: batch_dict['ConsistentRead'] = True else: batch_dict['ConsistentRead'] = False return batch_dict class BatchWrite(object): """ Used to construct a BatchWrite request. Each BatchWrite object represents a collection of PutItem and DeleteItem requests for a single Table. :ivar table: The Table object from which the item is retrieved. :ivar puts: A list of :class:`boto.dynamodb.item.Item` objects that you want to write to DynamoDB. :ivar deletes: A list of scalar or tuple values. Each element in the list represents one Item to delete. If the schema for the table has both a HashKey and a RangeKey, each element in the list should be a tuple consisting of (hash_key, range_key). If the schema for the table contains only a HashKey, each element in the list should be a scalar value of the appropriate type for the table schema. """ def __init__(self, table, puts=None, deletes=None): self.table = table self.puts = puts or [] self.deletes = deletes or [] def to_dict(self): """ Convert the Batch object into the format required for Layer1. """ op_list = [] for item in self.puts: d = {'Item': self.table.layer2.dynamize_item(item)} d = {'PutRequest': d} op_list.append(d) for key in self.deletes: if isinstance(key, tuple): hash_key, range_key = key else: hash_key = key range_key = None k = self.table.layer2.build_key_from_values(self.table.schema, hash_key, range_key) d = {'Key': k} op_list.append({'DeleteRequest': d}) return (self.table.name, op_list) class BatchList(list): """ A subclass of a list object that contains a collection of :class:`boto.dynamodb.batch.Batch` objects. """ def __init__(self, layer2): list.__init__(self) self.unprocessed = None self.layer2 = layer2 def add_batch(self, table, keys, attributes_to_get=None, consistent_read=False): """ Add a Batch to this BatchList. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object in which the items are contained. :type keys: list :param keys: A list of scalar or tuple values. Each element in the list represents one Item to retrieve. 
If the schema for the table has both a HashKey and a RangeKey, each element in the list should be a tuple consisting of (hash_key, range_key). If the schema for the table contains only a HashKey, each element in the list should be a scalar value of the appropriate type for the table schema. NOTE: The maximum number of items that can be retrieved for a single operation is 100. Also, the number of items retrieved is constrained by a 1 MB size limit. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type consistent_read: bool :param consistent_read: Specify whether or not to use a consistent read. Defaults to False. """ self.append(Batch(table, keys, attributes_to_get, consistent_read)) def resubmit(self): """ Resubmit the batch to get the next result set. The request object is rebuilt from scratch, meaning that any batches added between ``submit`` and ``resubmit`` will be lost. Note: This method is experimental and subject to changes in future releases. """ del self[:] if not self.unprocessed: return None for table_name, table_req in self.unprocessed.iteritems(): table_keys = table_req['Keys'] table = self.layer2.get_table(table_name) keys = [] for key in table_keys: h = key['HashKeyElement'] r = None if 'RangeKeyElement' in key: r = key['RangeKeyElement'] keys.append((h, r)) attributes_to_get = None if 'AttributesToGet' in table_req: attributes_to_get = table_req['AttributesToGet'] self.add_batch(table, keys, attributes_to_get=attributes_to_get) return self.submit() def submit(self): res = self.layer2.batch_get_item(self) if 'UnprocessedKeys' in res: self.unprocessed = res['UnprocessedKeys'] return res def to_dict(self): """ Convert a BatchList object into the format required for Layer1. """ d = {} for batch in self: b = batch.to_dict() if b['Keys']: d[batch.table.name] = b return d class BatchWriteList(list): """ A subclass of a list object that contains a collection of :class:`boto.dynamodb.batch.BatchWrite` objects. """ def __init__(self, layer2): list.__init__(self) self.layer2 = layer2 def add_batch(self, table, puts=None, deletes=None): """ Add a BatchWrite to this BatchWriteList. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object in which the items are contained. :type puts: list of :class:`boto.dynamodb.item.Item` objects :param puts: A list of items that you want to write to DynamoDB. :type deletes: list :param deletes: A list of scalar or tuple values. Each element in the list represents one Item to delete. If the schema for the table has both a HashKey and a RangeKey, each element in the list should be a tuple consisting of (hash_key, range_key). If the schema for the table contains only a HashKey, each element in the list should be a scalar value of the appropriate type for the table schema. """ self.append(BatchWrite(table, puts, deletes)) def submit(self): return self.layer2.batch_write_item(self) def to_dict(self): """ Convert a BatchWriteList object into the format required for Layer1. """ d = {} for batch in self: table_name, batch_dict = batch.to_dict() d[table_name] = batch_dict return d boto-2.20.1/boto/dynamodb/condition.py000066400000000000000000000074511225267101000176420ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.dynamodb.types import dynamize_value class Condition(object): """ Base class for conditions. Doesn't do a darn thing but allows us to test if something is a Condition instance or not. """ def __eq__(self, other): if isinstance(other, Condition): return self.to_dict() == other.to_dict() class ConditionNoArgs(Condition): """ Abstract class for Conditions that require no arguments, such as NULL or NOT_NULL. """ def __repr__(self): return '%s' % self.__class__.__name__ def to_dict(self): return {'ComparisonOperator': self.__class__.__name__} class ConditionOneArg(Condition): """ Abstract class for Conditions that require a single argument such as EQ or NE. """ def __init__(self, v1): self.v1 = v1 def __repr__(self): return '%s:%s' % (self.__class__.__name__, self.v1) def to_dict(self): return {'AttributeValueList': [dynamize_value(self.v1)], 'ComparisonOperator': self.__class__.__name__} class ConditionTwoArgs(Condition): """ Abstract class for Conditions that require two arguments. The only example of this currently is BETWEEN. """ def __init__(self, v1, v2): self.v1 = v1 self.v2 = v2 def __repr__(self): return '%s(%s, %s)' % (self.__class__.__name__, self.v1, self.v2) def to_dict(self): values = (self.v1, self.v2) return {'AttributeValueList': [dynamize_value(v) for v in values], 'ComparisonOperator': self.__class__.__name__} class ConditionSeveralArgs(Condition): """ Abstract class for conditions that require several arguments (e.g. IN). """ def __init__(self, values): self.values = values def __repr__(self): return '{0}({1})'.format(self.__class__.__name__, ', '.join(self.values)) def to_dict(self): return {'AttributeValueList': [dynamize_value(v) for v in self.values], 'ComparisonOperator': self.__class__.__name__} class EQ(ConditionOneArg): pass class NE(ConditionOneArg): pass class LE(ConditionOneArg): pass class LT(ConditionOneArg): pass class GE(ConditionOneArg): pass class GT(ConditionOneArg): pass class NULL(ConditionNoArgs): pass class NOT_NULL(ConditionNoArgs): pass class CONTAINS(ConditionOneArg): pass class NOT_CONTAINS(ConditionOneArg): pass class BEGINS_WITH(ConditionOneArg): pass class IN(ConditionSeveralArgs): pass class BETWEEN(ConditionTwoArgs): pass boto-2.20.1/boto/dynamodb/exceptions.py000066400000000000000000000032271225267101000200320ustar00rootroot00000000000000""" Exceptions that are specific to the dynamodb module. 
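A hedged usage sketch (``item`` is assumed to be a :class:`boto.dynamodb.item.Item` being saved with a conditional check)::

    from boto.dynamodb.exceptions import (
        DynamoDBConditionalCheckFailedError)

    try:
        item.save(expected_value={'version': 1})
    except DynamoDBConditionalCheckFailedError:
        # Another writer changed the item first; re-read and retry.
        pass
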
""" from boto.exception import BotoServerError, BotoClientError from boto.exception import DynamoDBResponseError class DynamoDBExpiredTokenError(BotoServerError): """ Raised when a DynamoDB security token expires. This is generally boto's (or the user's) notice to renew their DynamoDB security tokens. """ pass class DynamoDBKeyNotFoundError(BotoClientError): """ Raised when attempting to retrieve or interact with an item whose key can't be found. """ pass class DynamoDBItemError(BotoClientError): """ Raised when invalid parameters are passed when creating a new Item in DynamoDB. """ pass class DynamoDBNumberError(BotoClientError): """ Raised in the event of incompatible numeric type casting. """ pass class DynamoDBConditionalCheckFailedError(DynamoDBResponseError): """ Raised when a ConditionalCheckFailedException response is received. This happens when a conditional check, expressed via the expected_value paramenter, fails. """ pass class DynamoDBValidationError(DynamoDBResponseError): """ Raised when a ValidationException response is received. This happens when one or more required parameter values are missing, or if the item has exceeded the 64Kb size limit. """ pass class DynamoDBThroughputExceededError(DynamoDBResponseError): """ Raised when the provisioned throughput has been exceeded. Normally, when provisioned throughput is exceeded the operation is retried. If the retries are exhausted then this exception will be raised. """ pass boto-2.20.1/boto/dynamodb/item.py000066400000000000000000000201341225267101000166030ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.dynamodb.exceptions import DynamoDBItemError class Item(dict): """ An item in Amazon DynamoDB. :ivar hash_key: The HashKey of this item. :ivar range_key: The RangeKey of this item or None if no RangeKey is defined. :ivar hash_key_name: The name of the HashKey associated with this item. :ivar range_key_name: The name of the RangeKey associated with this item. :ivar table: The Table this item belongs to. 
""" def __init__(self, table, hash_key=None, range_key=None, attrs=None): self.table = table self._updates = None self._hash_key_name = self.table.schema.hash_key_name self._range_key_name = self.table.schema.range_key_name if attrs == None: attrs = {} if hash_key == None: hash_key = attrs.get(self._hash_key_name, None) self[self._hash_key_name] = hash_key if self._range_key_name: if range_key == None: range_key = attrs.get(self._range_key_name, None) self[self._range_key_name] = range_key self._updates = {} for key, value in attrs.items(): if key != self._hash_key_name and key != self._range_key_name: self[key] = value self.consumed_units = 0 @property def hash_key(self): return self[self._hash_key_name] @property def range_key(self): return self.get(self._range_key_name) @property def hash_key_name(self): return self._hash_key_name @property def range_key_name(self): return self._range_key_name def add_attribute(self, attr_name, attr_value): """ Queue the addition of an attribute to an item in DynamoDB. This will eventually result in an UpdateItem request being issued with an update action of ADD when the save method is called. :type attr_name: str :param attr_name: Name of the attribute you want to alter. :type attr_value: int|long|float|set :param attr_value: Value which is to be added to the attribute. """ self._updates[attr_name] = ("ADD", attr_value) def delete_attribute(self, attr_name, attr_value=None): """ Queue the deletion of an attribute from an item in DynamoDB. This call will result in a UpdateItem request being issued with update action of DELETE when the save method is called. :type attr_name: str :param attr_name: Name of the attribute you want to alter. :type attr_value: set :param attr_value: A set of values to be removed from the attribute. This parameter is optional. If None, the whole attribute is removed from the item. """ self._updates[attr_name] = ("DELETE", attr_value) def put_attribute(self, attr_name, attr_value): """ Queue the putting of an attribute to an item in DynamoDB. This call will result in an UpdateItem request being issued with the update action of PUT when the save method is called. :type attr_name: str :param attr_name: Name of the attribute you want to alter. :type attr_value: int|long|float|str|set :param attr_value: New value of the attribute. """ self._updates[attr_name] = ("PUT", attr_value) def save(self, expected_value=None, return_values=None): """ Commits pending updates to Amazon DynamoDB. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name/value pairs before they were updated. Possible values are: None, 'ALL_OLD', 'UPDATED_OLD', 'ALL_NEW' or 'UPDATED_NEW'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. If 'ALL_NEW' is specified, then all the attributes of the new version of the item are returned. If 'UPDATED_NEW' is specified, the new versions of only the updated attributes are returned. """ return self.table.layer2.update_item(self, expected_value, return_values) def delete(self, expected_value=None, return_values=None): """ Delete the item from DynamoDB. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. 
This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before they were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ return self.table.layer2.delete_item(self, expected_value, return_values) def put(self, expected_value=None, return_values=None): """ Store a new item or completely replace an existing item in Amazon DynamoDB. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before they were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ return self.table.layer2.put_item(self, expected_value, return_values) def __setitem__(self, key, value): """Overwrite the setter to instead update the _updates dict so this can act like a normal dict""" if self._updates is not None: self.put_attribute(key, value) dict.__setitem__(self, key, value) def __delitem__(self, key): """Remove this key from the items""" if self._updates is not None: self.delete_attribute(key) dict.__delitem__(self, key) # Allow this item to still be pickled def __getstate__(self): return self.__dict__ def __setstate__(self, d): self.__dict__.update(d) boto-2.20.1/boto/dynamodb/layer1.py000066400000000000000000000564521225267101000170550ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import time from binascii import crc32 import boto from boto.connection import AWSAuthConnection from boto.exception import DynamoDBResponseError from boto.provider import Provider from boto.dynamodb import exceptions as dynamodb_exceptions from boto.compat import json class Layer1(AWSAuthConnection): """ This is the lowest-level interface to DynamoDB. 
Methods at this layer map directly to API requests and parameters to the methods are either simple, scalar values or they are the Python equivalent of the JSON input as defined in the DynamoDB Developer's Guide. All responses are direct decoding of the JSON response bodies to Python data structures via the json or simplejson modules. :ivar throughput_exceeded_events: An integer variable that keeps a running total of the number of ThroughputExceeded responses this connection has received from Amazon DynamoDB. """ DefaultRegionName = 'us-east-1' """The default region name for DynamoDB API.""" ServiceName = 'DynamoDB' """The name of the Service""" Version = '20111205' """DynamoDB API version.""" ThruputError = "ProvisionedThroughputExceededException" """The error response returned when provisioned throughput is exceeded""" SessionExpiredError = 'com.amazon.coral.service#ExpiredTokenException' """The error response returned when session token has expired""" ConditionalCheckFailedError = 'ConditionalCheckFailedException' """The error response returned when a conditional check fails""" ValidationError = 'ValidationException' """The error response returned when an item is invalid in some way""" ResponseError = DynamoDBResponseError NumberRetries = 10 """The number of times an error is retried.""" def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, debug=0, security_token=None, region=None, validate_certs=True, validate_checksums=True): if not region: region_name = boto.config.get('DynamoDB', 'region', self.DefaultRegionName) for reg in boto.dynamodb.regions(): if reg.name == region_name: region = reg break self.region = region AWSAuthConnection.__init__(self, self.region.endpoint, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, debug=debug, security_token=security_token, validate_certs=validate_certs) self.throughput_exceeded_events = 0 self._validate_checksums = boto.config.getbool( 'DynamoDB', 'validate_checksums', validate_checksums) def _get_session_token(self): self.provider = Provider(self._provider_type) self._auth_handler.update_provider(self.provider) def _required_auth_capability(self): return ['hmac-v4'] def make_request(self, action, body='', object_hook=None): """ :raises: ``DynamoDBExpiredTokenError`` if the security token expires. 
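A hedged sketch of a raw call (normally the higher-level methods below build the body for you; ``layer1`` and the table name are illustrative)::

    from boto.compat import json
    body = json.dumps({'TableName': 'messages'})
    result = layer1.make_request('DescribeTable', body)
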
""" headers = {'X-Amz-Target': '%s_%s.%s' % (self.ServiceName, self.Version, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.0', 'Content-Length': str(len(body))} http_request = self.build_base_http_request('POST', '/', '/', {}, headers, body, None) start = time.time() response = self._mexe(http_request, sender=None, override_num_retries=self.NumberRetries, retry_handler=self._retry_handler) elapsed = (time.time() - start) * 1000 request_id = response.getheader('x-amzn-RequestId') boto.log.debug('RequestId: %s' % request_id) boto.perflog.debug('%s: id=%s time=%sms', headers['X-Amz-Target'], request_id, int(elapsed)) response_body = response.read() boto.log.debug(response_body) return json.loads(response_body, object_hook=object_hook) def _retry_handler(self, response, i, next_sleep): status = None if response.status == 400: response_body = response.read() boto.log.debug(response_body) data = json.loads(response_body) if self.ThruputError in data.get('__type'): self.throughput_exceeded_events += 1 msg = "%s, retry attempt %s" % (self.ThruputError, i) next_sleep = self._exponential_time(i) i += 1 status = (msg, i, next_sleep) if i == self.NumberRetries: # If this was our last retry attempt, raise # a specific error saying that the throughput # was exceeded. raise dynamodb_exceptions.DynamoDBThroughputExceededError( response.status, response.reason, data) elif self.SessionExpiredError in data.get('__type'): msg = 'Renewing Session Token' self._get_session_token() status = (msg, i + self.num_retries - 1, 0) elif self.ConditionalCheckFailedError in data.get('__type'): raise dynamodb_exceptions.DynamoDBConditionalCheckFailedError( response.status, response.reason, data) elif self.ValidationError in data.get('__type'): raise dynamodb_exceptions.DynamoDBValidationError( response.status, response.reason, data) else: raise self.ResponseError(response.status, response.reason, data) expected_crc32 = response.getheader('x-amz-crc32') if self._validate_checksums and expected_crc32 is not None: boto.log.debug('Validating crc32 checksum for body: %s', response.read()) actual_crc32 = crc32(response.read()) & 0xffffffff expected_crc32 = int(expected_crc32) if actual_crc32 != expected_crc32: msg = ("The calculated checksum %s did not match the expected " "checksum %s" % (actual_crc32, expected_crc32)) status = (msg, i + 1, self._exponential_time(i)) return status def _exponential_time(self, i): if i == 0: next_sleep = 0 else: next_sleep = 0.05 * (2 ** i) return next_sleep def list_tables(self, limit=None, start_table=None): """ Returns a dictionary of results. The dictionary contains a **TableNames** key whose value is a list of the table names. The dictionary could also contain a **LastEvaluatedTableName** key whose value would be the last table name returned if the complete list of table names was not returned. This value would then be passed as the ``start_table`` parameter on a subsequent call to this method. :type limit: int :param limit: The maximum number of tables to return. :type start_table: str :param start_table: The name of the table that starts the list. If you ran a previous list_tables and not all results were returned, the response dict would include a LastEvaluatedTableName attribute. Use that value here to continue the listing. 
""" data = {} if limit: data['Limit'] = limit if start_table: data['ExclusiveStartTableName'] = start_table json_input = json.dumps(data) return self.make_request('ListTables', json_input) def describe_table(self, table_name): """ Returns information about the table including current state of the table, primary key schema and when the table was created. :type table_name: str :param table_name: The name of the table to describe. """ data = {'TableName': table_name} json_input = json.dumps(data) return self.make_request('DescribeTable', json_input) def create_table(self, table_name, schema, provisioned_throughput): """ Add a new table to your account. The table name must be unique among those associated with the account issuing the request. This request triggers an asynchronous workflow to begin creating the table. When the workflow is complete, the state of the table will be ACTIVE. :type table_name: str :param table_name: The name of the table to create. :type schema: dict :param schema: A Python version of the KeySchema data structure as defined by DynamoDB :type provisioned_throughput: dict :param provisioned_throughput: A Python version of the ProvisionedThroughput data structure defined by DynamoDB. """ data = {'TableName': table_name, 'KeySchema': schema, 'ProvisionedThroughput': provisioned_throughput} json_input = json.dumps(data) response_dict = self.make_request('CreateTable', json_input) return response_dict def update_table(self, table_name, provisioned_throughput): """ Updates the provisioned throughput for a given table. :type table_name: str :param table_name: The name of the table to update. :type provisioned_throughput: dict :param provisioned_throughput: A Python version of the ProvisionedThroughput data structure defined by DynamoDB. """ data = {'TableName': table_name, 'ProvisionedThroughput': provisioned_throughput} json_input = json.dumps(data) return self.make_request('UpdateTable', json_input) def delete_table(self, table_name): """ Deletes the table and all of it's data. After this request the table will be in the DELETING state until DynamoDB completes the delete operation. :type table_name: str :param table_name: The name of the table to delete. """ data = {'TableName': table_name} json_input = json.dumps(data) return self.make_request('DeleteTable', json_input) def get_item(self, table_name, key, attributes_to_get=None, consistent_read=False, object_hook=None): """ Return a set of attributes for an item that matches the supplied key. :type table_name: str :param table_name: The name of the table containing the item. :type key: dict :param key: A Python version of the Key data structure defined by DynamoDB. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. """ data = {'TableName': table_name, 'Key': key} if attributes_to_get: data['AttributesToGet'] = attributes_to_get if consistent_read: data['ConsistentRead'] = True json_input = json.dumps(data) response = self.make_request('GetItem', json_input, object_hook=object_hook) if 'Item' not in response: raise dynamodb_exceptions.DynamoDBKeyNotFoundError( "Key does not exist." 
) return response def batch_get_item(self, request_items, object_hook=None): """ Return a set of attributes for multiple items in multiple tables using their primary keys. :type request_items: dict :param request_items: A Python version of the RequestItems data structure defined by DynamoDB. """ # If the list is empty, return empty response if not request_items: return {} data = {'RequestItems': request_items} json_input = json.dumps(data) return self.make_request('BatchGetItem', json_input, object_hook=object_hook) def batch_write_item(self, request_items, object_hook=None): """ This operation enables you to put or delete several items across multiple tables in a single API call. :type request_items: dict :param request_items: A Python version of the RequestItems data structure defined by DynamoDB. """ data = {'RequestItems': request_items} json_input = json.dumps(data) return self.make_request('BatchWriteItem', json_input, object_hook=object_hook) def put_item(self, table_name, item, expected=None, return_values=None, object_hook=None): """ Create a new item or replace an old item with a new item (including all attributes). If an item already exists in the specified table with the same primary key, the new item will completely replace the old item. You can perform a conditional put by specifying an expected rule. :type table_name: str :param table_name: The name of the table in which to put the item. :type item: dict :param item: A Python version of the Item data structure defined by DynamoDB. :type expected: dict :param expected: A Python version of the Expected data structure defined by DynamoDB. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before they were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ data = {'TableName': table_name, 'Item': item} if expected: data['Expected'] = expected if return_values: data['ReturnValues'] = return_values json_input = json.dumps(data) return self.make_request('PutItem', json_input, object_hook=object_hook) def update_item(self, table_name, key, attribute_updates, expected=None, return_values=None, object_hook=None): """ Edits an existing item's attributes. You can perform a conditional update (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). :type table_name: str :param table_name: The name of the table. :type key: dict :param key: A Python version of the Key data structure defined by DynamoDB which identifies the item to be updated. :type attribute_updates: dict :param attribute_updates: A Python version of the AttributeUpdates data structure defined by DynamoDB. :type expected: dict :param expected: A Python version of the Expected data structure defined by DynamoDB. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before they were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. 
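A hedged sketch using the raw 2011-12-05 API structures (the table, key and attribute names are illustrative)::

    key = {'HashKeyElement': {'S': 'user1'}}
    updates = {'views': {'Action': 'ADD', 'Value': {'N': '1'}}}
    layer1.update_item('messages', key, updates)
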
""" data = {'TableName': table_name, 'Key': key, 'AttributeUpdates': attribute_updates} if expected: data['Expected'] = expected if return_values: data['ReturnValues'] = return_values json_input = json.dumps(data) return self.make_request('UpdateItem', json_input, object_hook=object_hook) def delete_item(self, table_name, key, expected=None, return_values=None, object_hook=None): """ Delete an item and all of it's attributes by primary key. You can perform a conditional delete by specifying an expected rule. :type table_name: str :param table_name: The name of the table containing the item. :type key: dict :param key: A Python version of the Key data structure defined by DynamoDB. :type expected: dict :param expected: A Python version of the Expected data structure defined by DynamoDB. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before then were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ data = {'TableName': table_name, 'Key': key} if expected: data['Expected'] = expected if return_values: data['ReturnValues'] = return_values json_input = json.dumps(data) return self.make_request('DeleteItem', json_input, object_hook=object_hook) def query(self, table_name, hash_key_value, range_key_conditions=None, attributes_to_get=None, limit=None, consistent_read=False, scan_index_forward=True, exclusive_start_key=None, object_hook=None, count=False): """ Perform a query of DynamoDB. This version is currently punting and expecting you to provide a full and correct JSON body which is passed as is to DynamoDB. :type table_name: str :param table_name: The name of the table to query. :type hash_key_value: dict :param key: A DynamoDB-style HashKeyValue. :type range_key_conditions: dict :param range_key_conditions: A Python version of the RangeKeyConditions data structure. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type limit: int :param limit: The maximum number of items to return. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Query operation, even if the operation has no matching items for the assigned filter. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. :type scan_index_forward: bool :param scan_index_forward: Specified forward or backward traversal of the index. Default is forward (True). :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. 
""" data = {'TableName': table_name, 'HashKeyValue': hash_key_value} if range_key_conditions: data['RangeKeyCondition'] = range_key_conditions if attributes_to_get: data['AttributesToGet'] = attributes_to_get if limit: data['Limit'] = limit if count: data['Count'] = True if consistent_read: data['ConsistentRead'] = True if scan_index_forward: data['ScanIndexForward'] = True else: data['ScanIndexForward'] = False if exclusive_start_key: data['ExclusiveStartKey'] = exclusive_start_key json_input = json.dumps(data) return self.make_request('Query', json_input, object_hook=object_hook) def scan(self, table_name, scan_filter=None, attributes_to_get=None, limit=None, exclusive_start_key=None, object_hook=None, count=False): """ Perform a scan of DynamoDB. This version is currently punting and expecting you to provide a full and correct JSON body which is passed as is to DynamoDB. :type table_name: str :param table_name: The name of the table to scan. :type scan_filter: dict :param scan_filter: A Python version of the ScanFilter data structure. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type limit: int :param limit: The maximum number of items to evaluate. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Scan operation, even if the operation has no matching items for the assigned filter. :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. """ data = {'TableName': table_name} if scan_filter: data['ScanFilter'] = scan_filter if attributes_to_get: data['AttributesToGet'] = attributes_to_get if limit: data['Limit'] = limit if count: data['Count'] = True if exclusive_start_key: data['ExclusiveStartKey'] = exclusive_start_key json_input = json.dumps(data) return self.make_request('Scan', json_input, object_hook=object_hook) boto-2.20.1/boto/dynamodb/layer2.py000066400000000000000000001015341225267101000170470ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# from boto.dynamodb.layer1 import Layer1 from boto.dynamodb.table import Table from boto.dynamodb.schema import Schema from boto.dynamodb.item import Item from boto.dynamodb.batch import BatchList, BatchWriteList from boto.dynamodb.types import get_dynamodb_type, Dynamizer, \ LossyFloatDynamizer class TableGenerator(object): """ This is an object that wraps up the table_generator function. The only real reason to have this is that we want to be able to accumulate and return the ConsumedCapacityUnits element that is part of each response. :ivar last_evaluated_key: A sequence representing the key(s) of the item last evaluated, or None if no additional results are available. :ivar remaining: The remaining quantity of results requested. :ivar table: The table to which the call was made. """ def __init__(self, table, callable, remaining, item_class, kwargs): self.table = table self.callable = callable self.remaining = -1 if remaining is None else remaining self.item_class = item_class self.kwargs = kwargs self._consumed_units = 0.0 self.last_evaluated_key = None self._count = 0 self._scanned_count = 0 self._response = None @property def count(self): """ The total number of items retrieved thus far. This value changes with iteration and even when issuing a call with count=True, it is necessary to complete the iteration to assert an accurate count value. """ self.response return self._count @property def scanned_count(self): """ As above, but representing the total number of items scanned by DynamoDB, without regard to any filters. """ self.response return self._scanned_count @property def consumed_units(self): """ Returns a float representing the ConsumedCapacityUnits accumulated. """ self.response return self._consumed_units @property def response(self): """ The current response to the call from DynamoDB. """ return self.next_response() if self._response is None else self._response def next_response(self): """ Issue a call and return the result. You can invoke this method while iterating over the TableGenerator in order to skip to the next "page" of results. 
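A hedged sketch (assumes ``table.scan()`` returns this generator)::

    gen = table.scan()
    first_page = gen.response
    second_page = gen.next_response()
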
""" # preserve any existing limit in case the user alters self.remaining limit = self.kwargs.get('limit') if (self.remaining > 0 and (limit is None or limit > self.remaining)): self.kwargs['limit'] = self.remaining self._response = self.callable(**self.kwargs) self.kwargs['limit'] = limit self._consumed_units += self._response.get('ConsumedCapacityUnits', 0.0) self._count += self._response.get('Count', 0) self._scanned_count += self._response.get('ScannedCount', 0) # at the expense of a possibly gratuitous dynamize, ensure that # early generator termination won't result in bad LEK values if 'LastEvaluatedKey' in self._response: lek = self._response['LastEvaluatedKey'] esk = self.table.layer2.dynamize_last_evaluated_key(lek) self.kwargs['exclusive_start_key'] = esk lektuple = (lek['HashKeyElement'],) if 'RangeKeyElement' in lek: lektuple += (lek['RangeKeyElement'],) self.last_evaluated_key = lektuple else: self.last_evaluated_key = None return self._response def __iter__(self): while self.remaining != 0: response = self.response for item in response.get('Items', []): self.remaining -= 1 yield self.item_class(self.table, attrs=item) if self.remaining == 0: break if response is not self._response: break else: if self.last_evaluated_key is not None: self.next_response() continue break if response is not self._response: continue break class Layer2(object): def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, debug=0, security_token=None, region=None, validate_certs=True, dynamizer=LossyFloatDynamizer): self.layer1 = Layer1(aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, debug, security_token, region, validate_certs=validate_certs) self.dynamizer = dynamizer() def use_decimals(self): """ Use the ``decimal.Decimal`` type for encoding/decoding numeric types. By default, ints/floats are used to represent numeric types ('N', 'NS') received from DynamoDB. Using the ``Decimal`` type is recommended to prevent loss of precision. """ # Eventually this should be made the default dynamizer. self.dynamizer = Dynamizer() def dynamize_attribute_updates(self, pending_updates): """ Convert a set of pending item updates into the structure required by Layer1. """ d = {} for attr_name in pending_updates: action, value = pending_updates[attr_name] if value is None: # DELETE without an attribute value d[attr_name] = {"Action": action} else: d[attr_name] = {"Action": action, "Value": self.dynamizer.encode(value)} return d def dynamize_item(self, item): d = {} for attr_name in item: d[attr_name] = self.dynamizer.encode(item[attr_name]) return d def dynamize_range_key_condition(self, range_key_condition): """ Convert a layer2 range_key_condition parameter into the structure required by Layer1. """ return range_key_condition.to_dict() def dynamize_scan_filter(self, scan_filter): """ Convert a layer2 scan_filter parameter into the structure required by Layer1. """ d = None if scan_filter: d = {} for attr_name in scan_filter: condition = scan_filter[attr_name] d[attr_name] = condition.to_dict() return d def dynamize_expected_value(self, expected_value): """ Convert an expected_value parameter into the data structure required for Layer1. 
""" d = None if expected_value: d = {} for attr_name in expected_value: attr_value = expected_value[attr_name] if attr_value is True: attr_value = {'Exists': True} elif attr_value is False: attr_value = {'Exists': False} else: val = self.dynamizer.encode(expected_value[attr_name]) attr_value = {'Value': val} d[attr_name] = attr_value return d def dynamize_last_evaluated_key(self, last_evaluated_key): """ Convert a last_evaluated_key parameter into the data structure required for Layer1. """ d = None if last_evaluated_key: hash_key = last_evaluated_key['HashKeyElement'] d = {'HashKeyElement': self.dynamizer.encode(hash_key)} if 'RangeKeyElement' in last_evaluated_key: range_key = last_evaluated_key['RangeKeyElement'] d['RangeKeyElement'] = self.dynamizer.encode(range_key) return d def build_key_from_values(self, schema, hash_key, range_key=None): """ Build a Key structure to be used for accessing items in Amazon DynamoDB. This method takes the supplied hash_key and optional range_key and validates them against the schema. If there is a mismatch, a TypeError is raised. Otherwise, a Python dict version of a Amazon DynamoDB Key data structure is returned. :type hash_key: int|float|str|unicode|Binary :param hash_key: The hash key of the item you are looking for. The type of the hash key should match the type defined in the schema. :type range_key: int|float|str|unicode|Binary :param range_key: The range key of the item your are looking for. This should be supplied only if the schema requires a range key. The type of the range key should match the type defined in the schema. """ dynamodb_key = {} dynamodb_value = self.dynamizer.encode(hash_key) if dynamodb_value.keys()[0] != schema.hash_key_type: msg = 'Hashkey must be of type: %s' % schema.hash_key_type raise TypeError(msg) dynamodb_key['HashKeyElement'] = dynamodb_value if range_key is not None: dynamodb_value = self.dynamizer.encode(range_key) if dynamodb_value.keys()[0] != schema.range_key_type: msg = 'RangeKey must be of type: %s' % schema.range_key_type raise TypeError(msg) dynamodb_key['RangeKeyElement'] = dynamodb_value return dynamodb_key def new_batch_list(self): """ Return a new, empty :class:`boto.dynamodb.batch.BatchList` object. """ return BatchList(self) def new_batch_write_list(self): """ Return a new, empty :class:`boto.dynamodb.batch.BatchWriteList` object. """ return BatchWriteList(self) def list_tables(self, limit=None): """ Return a list of the names of all tables associated with the current account and region. :type limit: int :param limit: The maximum number of tables to return. """ tables = [] start_table = None while not limit or len(tables) < limit: this_round_limit = None if limit: this_round_limit = limit - len(tables) this_round_limit = min(this_round_limit, 100) result = self.layer1.list_tables(limit=this_round_limit, start_table=start_table) tables.extend(result.get('TableNames', [])) start_table = result.get('LastEvaluatedTableName', None) if not start_table: break return tables def describe_table(self, name): """ Retrieve information about an existing table. :type name: str :param name: The name of the desired table. """ return self.layer1.describe_table(name) def table_from_schema(self, name, schema): """ Create a Table object from a schema. This method will create a Table object without making any API calls. If you know the name and schema of the table, you can use this method instead of ``get_table``. 
Example usage:: table = layer2.table_from_schema( 'tablename', Schema.create(hash_key=('foo', 'N'))) :type name: str :param name: The name of the table. :type schema: :class:`boto.dynamodb.schema.Schema` :param schema: The schema associated with the table. :rtype: :class:`boto.dynamodb.table.Table` :return: A Table object representing the table. """ return Table.create_from_schema(self, name, schema) def get_table(self, name): """ Retrieve the Table object for an existing table. :type name: str :param name: The name of the desired table. :rtype: :class:`boto.dynamodb.table.Table` :return: A Table object representing the table. """ response = self.layer1.describe_table(name) return Table(self, response) lookup = get_table def create_table(self, name, schema, read_units, write_units): """ Create a new Amazon DynamoDB table. :type name: str :param name: The name of the desired table. :type schema: :class:`boto.dynamodb.schema.Schema` :param schema: The Schema object that defines the schema used by this table. :type read_units: int :param read_units: The value for ReadCapacityUnits. :type write_units: int :param write_units: The value for WriteCapacityUnits. :rtype: :class:`boto.dynamodb.table.Table` :return: A Table object representing the new Amazon DynamoDB table. """ response = self.layer1.create_table(name, schema.dict, {'ReadCapacityUnits': read_units, 'WriteCapacityUnits': write_units}) return Table(self, response) def update_throughput(self, table, read_units, write_units): """ Update the ProvisionedThroughput for the Amazon DynamoDB Table. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object whose throughput is being updated. :type read_units: int :param read_units: The new value for ReadCapacityUnits. :type write_units: int :param write_units: The new value for WriteCapacityUnits. """ response = self.layer1.update_table(table.name, {'ReadCapacityUnits': read_units, 'WriteCapacityUnits': write_units}) table.update_from_response(response) def delete_table(self, table): """ Delete this table and all items in it. After calling this, the Table object's status attribute will be set to 'DELETING'. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object that is being deleted. """ response = self.layer1.delete_table(table.name) table.update_from_response(response) def create_schema(self, hash_key_name, hash_key_proto_value, range_key_name=None, range_key_proto_value=None): """ Create a Schema object used when creating a Table. :type hash_key_name: str :param hash_key_name: The name of the HashKey for the schema. :type hash_key_proto_value: int|long|float|str|unicode|Binary :param hash_key_proto_value: A sample or prototype of the type of value you want to use for the HashKey. Alternatively, you can also just pass in the Python type (e.g. int, float, etc.). :type range_key_name: str :param range_key_name: The name of the RangeKey for the schema. This parameter is optional. :type range_key_proto_value: int|long|float|str|unicode|Binary :param range_key_proto_value: A sample or prototype of the type of value you want to use for the RangeKey. Alternatively, you can also pass in the Python type (e.g. int, float, etc.). This parameter is optional.
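Example (an illustrative sketch; the attribute names are hypothetical)::

    schema = layer2.create_schema('forum_name', str,
                                  range_key_name='subject',
                                  range_key_proto_value=str)
    table = layer2.create_table('messages', schema, 10, 5)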
""" hash_key = (hash_key_name, get_dynamodb_type(hash_key_proto_value)) if range_key_name and range_key_proto_value is not None: range_key = (range_key_name, get_dynamodb_type(range_key_proto_value)) else: range_key = None return Schema.create(hash_key, range_key) def get_item(self, table, hash_key, range_key=None, attributes_to_get=None, consistent_read=False, item_class=Item): """ Retrieve an existing item from the table. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object from which the item is retrieved. :type hash_key: int|long|float|str|unicode|Binary :param hash_key: The HashKey of the requested item. The type of the value must match the type defined in the schema for the table. :type range_key: int|long|float|str|unicode|Binary :param range_key: The optional RangeKey of the requested item. The type of the value must match the type defined in the schema for the table. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` """ key = self.build_key_from_values(table.schema, hash_key, range_key) response = self.layer1.get_item(table.name, key, attributes_to_get, consistent_read, object_hook=self.dynamizer.decode) item = item_class(table, hash_key, range_key, response['Item']) if 'ConsumedCapacityUnits' in response: item.consumed_units = response['ConsumedCapacityUnits'] return item def batch_get_item(self, batch_list): """ Return a set of attributes for a multiple items in multiple tables using their primary keys. :type batch_list: :class:`boto.dynamodb.batch.BatchList` :param batch_list: A BatchList object which consists of a list of :class:`boto.dynamoddb.batch.Batch` objects. Each Batch object contains the information about one batch of objects that you wish to retrieve in this request. """ request_items = batch_list.to_dict() return self.layer1.batch_get_item(request_items, object_hook=self.dynamizer.decode) def batch_write_item(self, batch_list): """ Performs multiple Puts and Deletes in one batch. :type batch_list: :class:`boto.dynamodb.batch.BatchWriteList` :param batch_list: A BatchWriteList object which consists of a list of :class:`boto.dynamoddb.batch.BatchWrite` objects. Each Batch object contains the information about one batch of objects that you wish to put or delete. """ request_items = batch_list.to_dict() return self.layer1.batch_write_item(request_items, object_hook=self.dynamizer.decode) def put_item(self, item, expected_value=None, return_values=None): """ Store a new item or completely replace an existing item in Amazon DynamoDB. :type item: :class:`boto.dynamodb.item.Item` :param item: The Item to write to Amazon DynamoDB. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before then were changed. Possible values are: None or 'ALL_OLD'. 
If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ expected_value = self.dynamize_expected_value(expected_value) response = self.layer1.put_item(item.table.name, self.dynamize_item(item), expected_value, return_values, object_hook=self.dynamizer.decode) if 'ConsumedCapacityUnits' in response: item.consumed_units = response['ConsumedCapacityUnits'] return response def update_item(self, item, expected_value=None, return_values=None): """ Commit pending item updates to Amazon DynamoDB. :type item: :class:`boto.dynamodb.item.Item` :param item: The Item to update in Amazon DynamoDB. It is expected that you would have called the add_attribute, put_attribute and/or delete_attribute methods on this Item prior to calling this method. Those queued changes are what will be updated. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name/value pairs before they were updated. Possible values are: None, 'ALL_OLD', 'UPDATED_OLD', 'ALL_NEW' or 'UPDATED_NEW'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. If 'ALL_NEW' is specified, then all the attributes of the new version of the item are returned. If 'UPDATED_NEW' is specified, the new versions of only the updated attributes are returned. """ expected_value = self.dynamize_expected_value(expected_value) key = self.build_key_from_values(item.table.schema, item.hash_key, item.range_key) attr_updates = self.dynamize_attribute_updates(item._updates) response = self.layer1.update_item(item.table.name, key, attr_updates, expected_value, return_values, object_hook=self.dynamizer.decode) item._updates.clear() if 'ConsumedCapacityUnits' in response: item.consumed_units = response['ConsumedCapacityUnits'] return response def delete_item(self, item, expected_value=None, return_values=None): """ Delete the item from Amazon DynamoDB. :type item: :class:`boto.dynamodb.item.Item` :param item: The Item to delete from Amazon DynamoDB. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before they were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ expected_value = self.dynamize_expected_value(expected_value) key = self.build_key_from_values(item.table.schema, item.hash_key, item.range_key) return self.layer1.delete_item(item.table.name, key, expected=expected_value, return_values=return_values, object_hook=self.dynamizer.decode) def query(self, table, hash_key, range_key_condition=None, attributes_to_get=None, request_limit=None, max_results=None, consistent_read=False, scan_index_forward=True, exclusive_start_key=None, item_class=Item, count=False): """ Perform a query on the table. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object that is being queried.
:type hash_key: int|long|float|str|unicode|Binary :param hash_key: The HashKey of the requested item. The type of the value must match the type defined in the schema for the table. :type range_key_condition: :class:`boto.dynamodb.condition.Condition` :param range_key_condition: A Condition object. The Condition object can be one of the following types: EQ|LE|LT|GE|GT|BEGINS_WITH|BETWEEN The only condition which expects or will accept two values is 'BETWEEN'; otherwise, a single value should be passed to the Condition constructor. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type request_limit: int :param request_limit: The maximum number of items to retrieve from Amazon DynamoDB on each request. You may want to set a specific request_limit based on the provisioned throughput of your table. The default behavior is to retrieve as many results as possible per request. :type max_results: int :param max_results: The maximum number of results that will be retrieved from Amazon DynamoDB in total. For example, if you only wanted to see the first 100 results from the query, regardless of how many were actually available, you could set max_results to 100 and the generator returned from the query method will only yield 100 results max. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. :type scan_index_forward: bool :param scan_index_forward: Specifies forward or backward traversal of the index. Default is forward (True). :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Query operation, even if the operation has no matching items for the assigned filter. If count is True, the actual items are not returned and the count is accessible as the ``count`` attribute of the returned object. :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` :rtype: :class:`boto.dynamodb.layer2.TableGenerator` """ if range_key_condition: rkc = self.dynamize_range_key_condition(range_key_condition) else: rkc = None if exclusive_start_key: esk = self.build_key_from_values(table.schema, *exclusive_start_key) else: esk = None kwargs = {'table_name': table.name, 'hash_key_value': self.dynamizer.encode(hash_key), 'range_key_conditions': rkc, 'attributes_to_get': attributes_to_get, 'limit': request_limit, 'count': count, 'consistent_read': consistent_read, 'scan_index_forward': scan_index_forward, 'exclusive_start_key': esk, 'object_hook': self.dynamizer.decode} return TableGenerator(table, self.layer1.query, max_results, item_class, kwargs) def scan(self, table, scan_filter=None, attributes_to_get=None, request_limit=None, max_results=None, exclusive_start_key=None, item_class=Item, count=False): """ Perform a scan of DynamoDB. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object that is being scanned. :type scan_filter: A dict :param scan_filter: A dictionary where the key is the attribute name and the value is a :class:`boto.dynamodb.condition.Condition` object.
Valid Condition objects include: * EQ - equal (1) * NE - not equal (1) * LE - less than or equal (1) * LT - less than (1) * GE - greater than or equal (1) * GT - greater than (1) * NOT_NULL - attribute exists (0, use None) * NULL - attribute does not exist (0, use None) * CONTAINS - substring or value in list (1) * NOT_CONTAINS - absence of substring or value in list (1) * BEGINS_WITH - substring prefix (1) * IN - exact match in list (N) * BETWEEN - >= first value, <= second value (2) :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type request_limit: int :param request_limit: The maximum number of items to retrieve from Amazon DynamoDB on each request. You may want to set a specific request_limit based on the provisioned throughput of your table. The default behavior is to retrieve as many results as possible per request. :type max_results: int :param max_results: The maximum number of results that will be retrieved from Amazon DynamoDB in total. For example, if you only wanted to see the first 100 results from the query, regardless of how many were actually available, you could set max_results to 100 and the generator returned from the query method will only yield 100 results max. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Scan operation, even if the operation has no matching items for the assigned filter. If count is True, the actual items are not returned and the count is accessible as the ``count`` attribute of the returned object. :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` :rtype: :class:`boto.dynamodb.layer2.TableGenerator` """ if exclusive_start_key: esk = self.build_key_from_values(table.schema, *exclusive_start_key) else: esk = None kwargs = {'table_name': table.name, 'scan_filter': self.dynamize_scan_filter(scan_filter), 'attributes_to_get': attributes_to_get, 'limit': request_limit, 'count': count, 'exclusive_start_key': esk, 'object_hook': self.dynamizer.decode} return TableGenerator(table, self.layer1.scan, max_results, item_class, kwargs) boto-2.20.1/boto/dynamodb/schema.py000066400000000000000000000076121225267101000171130ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class Schema(object): """ Represents a DynamoDB schema. :ivar hash_key_name: The name of the hash key of the schema. :ivar hash_key_type: The DynamoDB type specification for the hash key of the schema. :ivar range_key_name: The name of the range key of the schema or None if no range key is defined. :ivar range_key_type: The DynamoDB type specification for the range key of the schema or None if no range key is defined. :ivar dict: The underlying Python dictionary that needs to be passed to Layer1 methods. """ def __init__(self, schema_dict): self._dict = schema_dict def __repr__(self): if self.range_key_name: s = 'Schema(%s:%s)' % (self.hash_key_name, self.range_key_name) else: s = 'Schema(%s)' % self.hash_key_name return s @classmethod def create(cls, hash_key, range_key=None): """Convenience method to create a schema object. Example usage:: schema = Schema.create(hash_key=('foo', 'N')) schema2 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S')) :type hash_key: tuple :param hash_key: A tuple of (hash_key_name, hash_key_type) :type range_key: tuple :param range_key: A tuple of (range_key_name, range_key_type) """ reconstructed = { 'HashKeyElement': { 'AttributeName': hash_key[0], 'AttributeType': hash_key[1], } } if range_key is not None: reconstructed['RangeKeyElement'] = { 'AttributeName': range_key[0], 'AttributeType': range_key[1], } instance = cls(None) instance._dict = reconstructed return instance @property def dict(self): return self._dict @property def hash_key_name(self): return self._dict['HashKeyElement']['AttributeName'] @property def hash_key_type(self): return self._dict['HashKeyElement']['AttributeType'] @property def range_key_name(self): name = None if 'RangeKeyElement' in self._dict: name = self._dict['RangeKeyElement']['AttributeName'] return name @property def range_key_type(self): type = None if 'RangeKeyElement' in self._dict: type = self._dict['RangeKeyElement']['AttributeType'] return type def __eq__(self, other): return (self.hash_key_name == other.hash_key_name and self.hash_key_type == other.hash_key_type and self.range_key_name == other.range_key_name and self.range_key_type == other.range_key_type) boto-2.20.1/boto/dynamodb/table.py000066400000000000000000000524601225267101000167430ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.dynamodb.batch import BatchList from boto.dynamodb.schema import Schema from boto.dynamodb.item import Item from boto.dynamodb import exceptions as dynamodb_exceptions import time class TableBatchGenerator(object): """ A low-level generator used to page through results from batch_get_item operations. :ivar consumed_units: An integer that holds the number of ConsumedCapacityUnits accumulated thus far for this generator. """ def __init__(self, table, keys, attributes_to_get=None, consistent_read=False): self.table = table self.keys = keys self.consumed_units = 0 self.attributes_to_get = attributes_to_get self.consistent_read = consistent_read def _queue_unprocessed(self, res): if not u'UnprocessedKeys' in res: return if not self.table.name in res[u'UnprocessedKeys']: return keys = res[u'UnprocessedKeys'][self.table.name][u'Keys'] for key in keys: h = key[u'HashKeyElement'] r = key[u'RangeKeyElement'] if u'RangeKeyElement' in key else None self.keys.append((h, r)) def __iter__(self): while self.keys: # Build the next batch batch = BatchList(self.table.layer2) batch.add_batch(self.table, self.keys[:100], self.attributes_to_get) res = batch.submit() # parse the results if not self.table.name in res[u'Responses']: continue self.consumed_units += res[u'Responses'][self.table.name][u'ConsumedCapacityUnits'] for elem in res[u'Responses'][self.table.name][u'Items']: yield elem # re-queue unprocessed keys self.keys = self.keys[100:] self._queue_unprocessed(res) class Table(object): """ An Amazon DynamoDB table. :ivar name: The name of the table. :ivar create_time: The date and time that the table was created. :ivar status: The current status of the table. One of: 'ACTIVE', 'UPDATING', 'DELETING'. :ivar schema: A :class:`boto.dynamodb.schema.Schema` object representing the schema defined for the table. :ivar item_count: The number of items in the table. This value is set only when the Table object is created or refreshed and may not reflect the actual count. :ivar size_bytes: Total size of the specified table, in bytes. Amazon DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value. :ivar read_units: The ReadCapacityUnits of the table's Provisioned Throughput. :ivar write_units: The WriteCapacityUnits of the table's Provisioned Throughput. """ def __init__(self, layer2, response): """ :type layer2: :class:`boto.dynamodb.layer2.Layer2` :param layer2: A `Layer2` api object. :type response: dict :param response: The output of `boto.dynamodb.layer1.Layer1.describe_table`. """ self.layer2 = layer2 self._dict = {} self.update_from_response(response) @classmethod def create_from_schema(cls, layer2, name, schema): """Create a Table object. If you know the name and schema of your table, you can create a ``Table`` object without having to make any API calls (normally an API call is made to retrieve the schema of a table). Example usage:: table = Table.create_from_schema( boto.connect_dynamodb(), 'tablename', Schema.create(hash_key=('keyname', 'N'))) :type layer2: :class:`boto.dynamodb.layer2.Layer2` :param layer2: A ``Layer2`` api object. :type name: str :param name: The name of the table.
:type schema: :class:`boto.dynamodb.schema.Schema` :param schema: The schema associated with the table. :rtype: :class:`boto.dynamodb.table.Table` :return: A Table object representing the table. """ table = cls(layer2, {'Table': {'TableName': name}}) table._schema = schema return table def __repr__(self): return 'Table(%s)' % self.name @property def name(self): return self._dict['TableName'] @property def create_time(self): return self._dict.get('CreationDateTime', None) @property def status(self): return self._dict.get('TableStatus', None) @property def item_count(self): return self._dict.get('ItemCount', 0) @property def size_bytes(self): return self._dict.get('TableSizeBytes', 0) @property def schema(self): return self._schema @property def read_units(self): try: return self._dict['ProvisionedThroughput']['ReadCapacityUnits'] except KeyError: return None @property def write_units(self): try: return self._dict['ProvisionedThroughput']['WriteCapacityUnits'] except KeyError: return None def update_from_response(self, response): """ Update the state of the Table object based on the response data received from Amazon DynamoDB. """ # 'Table' is from a describe_table call. if 'Table' in response: self._dict.update(response['Table']) # 'TableDescription' is from a create_table call. elif 'TableDescription' in response: self._dict.update(response['TableDescription']) if 'KeySchema' in self._dict: self._schema = Schema(self._dict['KeySchema']) def refresh(self, wait_for_active=False, retry_seconds=5): """ Refresh all of the fields of the Table object by calling the underlying DescribeTable request. :type wait_for_active: bool :param wait_for_active: If True, this command will not return until the table status, as returned from Amazon DynamoDB, is 'ACTIVE'. :type retry_seconds: int :param retry_seconds: If wait_for_active is True, this parameter controls the number of seconds of delay between calls to update_table in Amazon DynamoDB. Default is 5 seconds. """ done = False while not done: response = self.layer2.describe_table(self.name) self.update_from_response(response) if wait_for_active: if self.status == 'ACTIVE': done = True else: time.sleep(retry_seconds) else: done = True def update_throughput(self, read_units, write_units): """ Update the ProvisionedThroughput for the Amazon DynamoDB Table. :type read_units: int :param read_units: The new value for ReadCapacityUnits. :type write_units: int :param write_units: The new value for WriteCapacityUnits. """ self.layer2.update_throughput(self, read_units, write_units) def delete(self): """ Delete this table and all items in it. After calling this, the Table object's status attribute will be set to 'DELETING'. """ self.layer2.delete_table(self) def get_item(self, hash_key, range_key=None, attributes_to_get=None, consistent_read=False, item_class=Item): """ Retrieve an existing item from the table. :type hash_key: int|long|float|str|unicode|Binary :param hash_key: The HashKey of the requested item. The type of the value must match the type defined in the schema for the table. :type range_key: int|long|float|str|unicode|Binary :param range_key: The optional RangeKey of the requested item. The type of the value must match the type defined in the schema for the table. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued.
Otherwise, an eventually consistent request is issued. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` """ return self.layer2.get_item(self, hash_key, range_key, attributes_to_get, consistent_read, item_class) lookup = get_item def has_item(self, hash_key, range_key=None, consistent_read=False): """ Checks the table to see if the Item with the specified ``hash_key`` exists. This may save a tiny bit of time/bandwidth over a straight :py:meth:`get_item` if you have no intention to touch the data that is returned, since this method specifically tells Amazon not to return anything but the Item's key. :type hash_key: int|long|float|str|unicode|Binary :param hash_key: The HashKey of the requested item. The type of the value must match the type defined in the schema for the table. :type range_key: int|long|float|str|unicode|Binary :param range_key: The optional RangeKey of the requested item. The type of the value must match the type defined in the schema for the table. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. :rtype: bool :returns: ``True`` if the Item exists, ``False`` if not. """ try: # Attempt to get the key. If it can't be found, it'll raise # an exception. self.get_item(hash_key, range_key=range_key, # This minimizes the size of the response body. attributes_to_get=[hash_key], consistent_read=consistent_read) except dynamodb_exceptions.DynamoDBKeyNotFoundError: # Key doesn't exist. return False return True def new_item(self, hash_key=None, range_key=None, attrs=None, item_class=Item): """ Return a new, unsaved Item which can later be PUT to Amazon DynamoDB. This method has explicit (but optional) parameters for the hash_key and range_key values of the item. You can use these explicit parameters when calling the method, such as:: >>> my_item = my_table.new_item(hash_key='a', range_key=1, attrs={'key1': 'val1', 'key2': 'val2'}) >>> my_item {u'bar': 1, u'foo': 'a', 'key1': 'val1', 'key2': 'val2'} Or, if you prefer, you can simply put the hash_key and range_key in the attrs dictionary itself, like this:: >>> attrs = {'foo': 'a', 'bar': 1, 'key1': 'val1', 'key2': 'val2'} >>> my_item = my_table.new_item(attrs=attrs) >>> my_item {u'bar': 1, u'foo': 'a', 'key1': 'val1', 'key2': 'val2'} The effect is the same. .. note: The explicit parameters take priority over the values in the attrs dict. So, if you have a hash_key or range_key in the attrs dict and you also supply either or both using the explicit parameters, the values in the attrs will be ignored. :type hash_key: int|long|float|str|unicode|Binary :param hash_key: The HashKey of the new item. The type of the value must match the type defined in the schema for the table. :type range_key: int|long|float|str|unicode|Binary :param range_key: The optional RangeKey of the new item. The type of the value must match the type defined in the schema for the table. :type attrs: dict :param attrs: A dictionary of key value pairs used to populate the new item. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` """ return item_class(self, hash_key, range_key, attrs) def query(self, hash_key, *args, **kw): """ Perform a query on the table.
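Example (an illustrative sketch; the hash key value and range key condition are hypothetical)::

    from boto.dynamodb.condition import BEGINS_WITH
    for item in table.query('johndoe',
                            range_key_condition=BEGINS_WITH('2013-'),
                            max_results=100):
        print item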
:type hash_key: int|long|float|str|unicode|Binary :param hash_key: The HashKey of the requested item. The type of the value must match the type defined in the schema for the table. :type range_key_condition: :class:`boto.dynamodb.condition.Condition` :param range_key_condition: A Condition object. The Condition object can be one of the following types: EQ|LE|LT|GE|GT|BEGINS_WITH|BETWEEN The only condition which expects or will accept two values is 'BETWEEN'; otherwise, a single value should be passed to the Condition constructor. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type request_limit: int :param request_limit: The maximum number of items to retrieve from Amazon DynamoDB on each request. You may want to set a specific request_limit based on the provisioned throughput of your table. The default behavior is to retrieve as many results as possible per request. :type max_results: int :param max_results: The maximum number of results that will be retrieved from Amazon DynamoDB in total. For example, if you only wanted to see the first 100 results from the query, regardless of how many were actually available, you could set max_results to 100 and the generator returned from the query method will only yield 100 results max. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. :type scan_index_forward: bool :param scan_index_forward: Specifies forward or backward traversal of the index. Default is forward (True). :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Query operation, even if the operation has no matching items for the assigned filter. If count is True, the actual items are not returned and the count is accessible as the ``count`` attribute of the returned object. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` """ return self.layer2.query(self, hash_key, *args, **kw) def scan(self, *args, **kw): """ Scan through this table; this is a very long and expensive operation, and should be avoided if at all possible. :type scan_filter: A dict :param scan_filter: A dictionary where the key is the attribute name and the value is a :class:`boto.dynamodb.condition.Condition` object. Valid Condition objects include: * EQ - equal (1) * NE - not equal (1) * LE - less than or equal (1) * LT - less than (1) * GE - greater than or equal (1) * GT - greater than (1) * NOT_NULL - attribute exists (0, use None) * NULL - attribute does not exist (0, use None) * CONTAINS - substring or value in list (1) * NOT_CONTAINS - absence of substring or value in list (1) * BEGINS_WITH - substring prefix (1) * IN - exact match in list (N) * BETWEEN - >= first value, <= second value (2) :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type request_limit: int :param request_limit: The maximum number of items to retrieve from Amazon DynamoDB on each request.
You may want to set a specific request_limit based on the provisioned throughput of your table. The default behavior is to retrieve as many results as possible per request. :type max_results: int :param max_results: The maximum number of results that will be retrieved from Amazon DynamoDB in total. For example, if you only wanted to see the first 100 results from the query, regardless of how many were actually available, you could set max_results to 100 and the generator returned from the query method will only yield 100 results max. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Scan operation, even if the operation has no matching items for the assigned filter. If count is True, the actual items are not returned and the count is accessible as the ``count`` attribute of the returned object. :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` :return: A TableGenerator (generator) object which will iterate over all results :rtype: :class:`boto.dynamodb.layer2.TableGenerator` """ return self.layer2.scan(self, *args, **kw) def batch_get_item(self, keys, attributes_to_get=None): """ Return a set of attributes for multiple items from a single table using their primary keys. This abstraction removes the 100 Items per batch limitation as well as the "UnprocessedKeys" logic. :type keys: list :param keys: A list of scalar or tuple values. Each element in the list represents one Item to retrieve. If the schema for the table has both a HashKey and a RangeKey, each element in the list should be a tuple consisting of (hash_key, range_key). If the schema for the table contains only a HashKey, each element in the list should be a scalar value of the appropriate type for the table schema. NOTE: The maximum number of items that can be retrieved for a single operation is 100. Also, the number of items retrieved is constrained by a 1 MB size limit. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :return: A TableBatchGenerator (generator) object which will iterate over all results :rtype: :class:`boto.dynamodb.table.TableBatchGenerator` """ return TableBatchGenerator(self, keys, attributes_to_get) boto-2.20.1/boto/dynamodb/types.py000066400000000000000000000234241225267101000170160ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # """ Some utility functions to deal with mapping Amazon DynamoDB types to Python types and vice-versa. """ import base64 from decimal import (Decimal, DecimalException, Context, Clamped, Overflow, Inexact, Underflow, Rounded) from exceptions import DynamoDBNumberError DYNAMODB_CONTEXT = Context( Emin=-128, Emax=126, rounding=None, prec=38, traps=[Clamped, Overflow, Inexact, Rounded, Underflow]) # python2.6 cannot convert floats directly to # Decimals. This is taken from: # http://docs.python.org/release/2.6.7/library/decimal.html#decimal-faq def float_to_decimal(f): n, d = f.as_integer_ratio() numerator, denominator = Decimal(n), Decimal(d) ctx = DYNAMODB_CONTEXT result = ctx.divide(numerator, denominator) while ctx.flags[Inexact]: ctx.flags[Inexact] = False ctx.prec *= 2 result = ctx.divide(numerator, denominator) return result def is_num(n): types = (int, long, float, bool, Decimal) return isinstance(n, types) or n in types def is_str(n): return isinstance(n, basestring) or (isinstance(n, type) and issubclass(n, basestring)) def is_binary(n): return isinstance(n, Binary) def serialize_num(val): """Cast a number to a string and perform validation to ensure no loss of precision. """ if isinstance(val, bool): return str(int(val)) return str(val) def convert_num(s): if '.' in s: n = float(s) else: n = int(s) return n def convert_binary(n): return Binary(base64.b64decode(n)) def get_dynamodb_type(val): """ Take a scalar Python value and return a string representing the corresponding Amazon DynamoDB type. If the value passed in is not a supported type, raise a TypeError. """ dynamodb_type = None if is_num(val): dynamodb_type = 'N' elif is_str(val): dynamodb_type = 'S' elif isinstance(val, (set, frozenset)): if False not in map(is_num, val): dynamodb_type = 'NS' elif False not in map(is_str, val): dynamodb_type = 'SS' elif False not in map(is_binary, val): dynamodb_type = 'BS' elif isinstance(val, Binary): dynamodb_type = 'B' if dynamodb_type is None: msg = 'Unsupported type "%s" for value "%s"' % (type(val), val) raise TypeError(msg) return dynamodb_type def dynamize_value(val): """ Take a scalar Python value and return a dict consisting of the Amazon DynamoDB type specification and the value that needs to be sent to Amazon DynamoDB. 
If the type of the value is not supported, raise a TypeError. """ dynamodb_type = get_dynamodb_type(val) if dynamodb_type == 'N': val = {dynamodb_type: serialize_num(val)} elif dynamodb_type == 'S': val = {dynamodb_type: val} elif dynamodb_type == 'NS': val = {dynamodb_type: map(serialize_num, val)} elif dynamodb_type == 'SS': val = {dynamodb_type: [n for n in val]} elif dynamodb_type == 'B': val = {dynamodb_type: val.encode()} elif dynamodb_type == 'BS': val = {dynamodb_type: [n.encode() for n in val]} return val class Binary(object): def __init__(self, value): self.value = value def encode(self): return base64.b64encode(self.value) def __eq__(self, other): if isinstance(other, Binary): return self.value == other.value else: return self.value == other def __ne__(self, other): return not self.__eq__(other) def __repr__(self): return 'Binary(%s)' % self.value def __str__(self): return self.value def __hash__(self): return hash(self.value) def item_object_hook(dct): """ A custom object hook for use when decoding JSON item bodies. This hook will transform Amazon DynamoDB JSON responses to something that maps directly to native Python types. """ if len(dct.keys()) > 1: return dct if 'S' in dct: return dct['S'] if 'N' in dct: return convert_num(dct['N']) if 'SS' in dct: return set(dct['SS']) if 'NS' in dct: return set(map(convert_num, dct['NS'])) if 'B' in dct: return convert_binary(dct['B']) if 'BS' in dct: return set(map(convert_binary, dct['BS'])) return dct class Dynamizer(object): """Control serialization/deserialization of types. This class controls the encoding of python types to the format that is expected by the DynamoDB API, as well as taking DynamoDB types and constructing the appropriate python types. If you want to customize this process, you can subclass this class and override the encoding/decoding of specific types. For example:: 'foo' (Python type) | v encode('foo') | v _encode_s('foo') | v {'S': 'foo'} (Encoding sent to/received from DynamoDB) | V decode({'S': 'foo'}) | v _decode_s({'S': 'foo'}) | v 'foo' (Python type) """ def _get_dynamodb_type(self, attr): return get_dynamodb_type(attr) def encode(self, attr): """ Encodes a python type to the format expected by DynamoDB. """ dynamodb_type = self._get_dynamodb_type(attr) try: encoder = getattr(self, '_encode_%s' % dynamodb_type.lower()) except AttributeError: raise ValueError("Unable to encode dynamodb type: %s" % dynamodb_type) return {dynamodb_type: encoder(attr)} def _encode_n(self, attr): try: if isinstance(attr, float) and not hasattr(Decimal, 'from_float'): # python2.6 does not support creating Decimals directly # from floats so we have to do this ourselves. n = str(float_to_decimal(attr)) else: n = str(DYNAMODB_CONTEXT.create_decimal(attr)) if filter(lambda x: x in n, ('Infinity', 'NaN')): raise TypeError('Infinity and NaN not supported') return n except (TypeError, DecimalException), e: msg = '{0} numeric for `{1}`\n{2}'.format( e.__class__.__name__, attr, str(e) or '') raise DynamoDBNumberError(msg) def _encode_s(self, attr): if isinstance(attr, unicode): attr = attr.encode('utf-8') elif not isinstance(attr, str): attr = str(attr) return attr def _encode_ns(self, attr): return map(self._encode_n, attr) def _encode_ss(self, attr): return [self._encode_s(n) for n in attr] def _encode_b(self, attr): return attr.encode() def _encode_bs(self, attr): return [self._encode_b(n) for n in attr] def decode(self, attr): """ Takes the format returned by DynamoDB and constructs the appropriate python type.
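For example (illustrative; set ordering in the output may vary)::

    >>> Dynamizer().decode({'N': '42'})
    Decimal('42')
    >>> Dynamizer().decode({'SS': ['a', 'b']})
    set(['a', 'b'])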
""" if len(attr) > 1 or not attr: return attr dynamodb_type = attr.keys()[0] if dynamodb_type.lower() == dynamodb_type: # It's not an actual type, just a single character attr that # overlaps with the DDB types. Return it. return attr try: decoder = getattr(self, '_decode_%s' % dynamodb_type.lower()) except AttributeError: return attr return decoder(attr[dynamodb_type]) def _decode_n(self, attr): return DYNAMODB_CONTEXT.create_decimal(attr) def _decode_s(self, attr): return attr def _decode_ns(self, attr): return set(map(self._decode_n, attr)) def _decode_ss(self, attr): return set(map(self._decode_s, attr)) def _decode_b(self, attr): return convert_binary(attr) def _decode_bs(self, attr): return set(map(self._decode_b, attr)) class LossyFloatDynamizer(Dynamizer): """Use float/int instead of Decimal for numeric types. This class is provided for backwards compatibility. Instead of using Decimals for the 'N', 'NS' types it uses ints/floats. This class is deprecated and its usage is not encouraged, as doing so may result in loss of precision. Use the `Dynamizer` class instead. """ def _encode_n(self, attr): return serialize_num(attr) def _encode_ns(self, attr): return [str(i) for i in attr] def _decode_n(self, attr): return convert_num(attr) def _decode_ns(self, attr): return set(map(self._decode_n, attr)) boto-2.20.1/boto/dynamodb2/000077500000000000000000000000001225267101000153555ustar00rootroot00000000000000boto-2.20.1/boto/dynamodb2/__init__.py000066400000000000000000000061721225267101000174740ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the Amazon DynamoDB service. 
:rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.dynamodb2.layer1 import DynamoDBConnection return [RegionInfo(name='us-east-1', endpoint='dynamodb.us-east-1.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='us-gov-west-1', endpoint='dynamodb.us-gov-west-1.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='us-west-1', endpoint='dynamodb.us-west-1.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='us-west-2', endpoint='dynamodb.us-west-2.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='eu-west-1', endpoint='dynamodb.eu-west-1.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='ap-northeast-1', endpoint='dynamodb.ap-northeast-1.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='ap-southeast-1', endpoint='dynamodb.ap-southeast-1.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='ap-southeast-2', endpoint='dynamodb.ap-southeast-2.amazonaws.com', connection_cls=DynamoDBConnection), RegionInfo(name='sa-east-1', endpoint='dynamodb.sa-east-1.amazonaws.com', connection_cls=DynamoDBConnection), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/dynamodb2/exceptions.py000066400000000000000000000036301225267101000201120ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.exception import JSONResponseError class ProvisionedThroughputExceededException(JSONResponseError): pass class LimitExceededException(JSONResponseError): pass class ConditionalCheckFailedException(JSONResponseError): pass class ResourceInUseException(JSONResponseError): pass class ResourceNotFoundException(JSONResponseError): pass class InternalServerError(JSONResponseError): pass class ValidationException(JSONResponseError): pass class ItemCollectionSizeLimitExceededException(JSONResponseError): pass class DynamoDBError(Exception): pass class UnknownSchemaFieldError(DynamoDBError): pass class UnknownIndexFieldError(DynamoDBError): pass class UnknownFilterTypeError(DynamoDBError): pass class QueryError(DynamoDBError): pass boto-2.20.1/boto/dynamodb2/fields.py000066400000000000000000000115611225267101000172010ustar00rootroot00000000000000from boto.dynamodb2.types import STRING class BaseSchemaField(object): """ An abstract class for defining schema fields. 
Contains most of the core functionality for the field. Subclasses must define an ``attr_type`` to pass to DynamoDB. """ attr_type = None def __init__(self, name, data_type=STRING): """ Creates a Python schema field, to represent the data to pass to DynamoDB. Requires a ``name`` parameter, which should be a string name of the field. Optionally accepts a ``data_type`` parameter, which should be a constant from ``boto.dynamodb2.types``. (Default: ``STRING``) """ self.name = name self.data_type = data_type def definition(self): """ Returns the attribute definition structure DynamoDB expects. Example:: >>> field.definition() { 'AttributeName': 'username', 'AttributeType': 'S', } """ return { 'AttributeName': self.name, 'AttributeType': self.data_type, } def schema(self): """ Returns the schema structure DynamoDB expects. Example:: >>> field.schema() { 'AttributeName': 'username', 'KeyType': 'HASH', } """ return { 'AttributeName': self.name, 'KeyType': self.attr_type, } class HashKey(BaseSchemaField): """ A field representing a hash key. Example:: >>> from boto.dynamodb2.types import NUMBER >>> HashKey('username') >>> HashKey('date_joined', data_type=NUMBER) """ attr_type = 'HASH' class RangeKey(BaseSchemaField): """ A field representing a range key. Example:: >>> from boto.dynamodb2.types import NUMBER >>> RangeKey('username') >>> RangeKey('date_joined', data_type=NUMBER) """ attr_type = 'RANGE' class BaseIndexField(object): """ An abstract class for defining index fields. Contains most of the core functionality for the index. Subclasses must define a ``projection_type`` to pass to DynamoDB. """ def __init__(self, name, parts): self.name = name self.parts = parts def definition(self): """ Returns the attribute definition structure DynamoDB expects. Example:: >>> index.definition() [ { 'AttributeName': 'username', 'AttributeType': 'S', }, ] """ definition = [] for part in self.parts: definition.append({ 'AttributeName': part.name, 'AttributeType': part.data_type, }) return definition def schema(self): """ Returns the schema structure DynamoDB expects. Example:: >>> index.schema() { 'IndexName': 'LastNameIndex', 'KeySchema': [ { 'AttributeName': 'username', 'KeyType': 'HASH', }, ], 'Projection': { 'ProjectionType': 'KEYS_ONLY', } } """ key_schema = [] for part in self.parts: key_schema.append(part.schema()) return { 'IndexName': self.name, 'KeySchema': key_schema, 'Projection': { 'ProjectionType': self.projection_type, } } class AllIndex(BaseIndexField): """ An index signifying all fields should be in the index. Example:: >>> AllIndex('MostRecentlyJoined', parts=[ ... HashKey('username'), ... RangeKey('date_joined') ... ]) """ projection_type = 'ALL' class KeysOnlyIndex(BaseIndexField): """ An index signifying only key fields should be in the index. Example:: >>> KeysOnlyIndex('MostRecentlyJoined', parts=[ ... HashKey('username'), ... RangeKey('date_joined') ... ]) """ projection_type = 'KEYS_ONLY' class IncludeIndex(BaseIndexField): """ An index signifying only certain fields should be in the index. Example:: >>> IncludeIndex('GenderIndex', parts=[ ... HashKey('username'), ... RangeKey('date_joined') ...
        ... ], includes=['gender'])
    """
    projection_type = 'INCLUDE'

    def __init__(self, *args, **kwargs):
        self.includes_fields = kwargs.pop('includes', [])
        super(IncludeIndex, self).__init__(*args, **kwargs)

    def schema(self):
        schema_data = super(IncludeIndex, self).schema()
        schema_data['Projection']['NonKeyAttributes'] = self.includes_fields
        return schema_data
boto-2.20.1/boto/dynamodb2/items.py000066400000000000000000000342251225267101000170560ustar00rootroot00000000000000from copy import deepcopy
from boto.dynamodb2.types import Dynamizer


class NEWVALUE(object):
    # A marker for new data added.
    pass


class Item(object):
    """
    An object representing the item data within a DynamoDB table.

    An item is largely schema-free, meaning it can contain any data. The
    only limitation is that it must have data for the fields in the
    ``Table``'s schema.

    This object presents a dictionary-like interface for accessing/storing
    data. It also tries to intelligently track how data has changed
    throughout the life of the instance, to be as efficient as possible about
    updates.

    Empty items, or items that have no data, are considered falsey.
    """
    def __init__(self, table, data=None, loaded=False):
        """
        Constructs an (unsaved) ``Item`` instance.

        To persist the data in DynamoDB, you'll need to call the
        ``Item.save`` (or ``Item.partial_save``) on the instance.

        Requires a ``table`` parameter, which should be a ``Table`` instance.
        This is required, as DynamoDB's API is focused around all operations
        being table-level. It's also how the schema is shared across many
        objects.

        Optionally accepts a ``data`` parameter, which should be a dictionary
        of the fields & values of the item.

        Optionally accepts a ``loaded`` parameter, which should be a boolean.
        ``True`` if it was preexisting data loaded from DynamoDB, ``False``
        if it's new data from the user. Default is ``False``.

        Example::

            >>> users = Table('users')
            >>> user = Item(users, data={
            ...     'username': 'johndoe',
            ...     'first_name': 'John',
            ...     'date_joined': 1248061592,
            ... })

            # Change existing data.
            >>> user['first_name'] = 'Johann'
            # Add more data.
            >>> user['last_name'] = 'Doe'
            # Delete data.
            >>> del user['date_joined']

            # Iterate over all the data.
            >>> for field, val in user.items():
            ...     print "%s: %s" % (field, val)
            username: johndoe
            first_name: Johann
            last_name: Doe

        """
        self.table = table
        self._loaded = loaded
        self._orig_data = {}
        self._data = data
        self._dynamizer = Dynamizer()

        if self._data is None:
            self._data = {}

        if self._loaded:
            self._orig_data = deepcopy(self._data)

    def __getitem__(self, key):
        return self._data.get(key, None)

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        if key not in self._data:
            return

        del self._data[key]

    def keys(self):
        return self._data.keys()

    def values(self):
        return self._data.values()

    def items(self):
        return self._data.items()

    def get(self, key, default=None):
        return self._data.get(key, default)

    def __iter__(self):
        for key in self._data:
            yield self._data[key]

    def __contains__(self, key):
        return key in self._data

    def __nonzero__(self):
        return bool(self._data)

    def _determine_alterations(self):
        """
        Checks the ``_orig_data`` against the ``_data`` to determine what
        changes to the data are present.

        Returns a dictionary containing the keys ``adds``, ``changes`` &
        ``deletes``, containing the updated data.
        """
        alterations = {
            'adds': {},
            'changes': {},
            'deletes': [],
        }

        orig_keys = set(self._orig_data.keys())
        data_keys = set(self._data.keys())

        # Run through keys we know are in both for changes.
        for key in orig_keys.intersection(data_keys):
            if self._data[key] != self._orig_data[key]:
                if self._is_storable(self._data[key]):
                    alterations['changes'][key] = self._data[key]
                else:
                    alterations['deletes'].append(key)

        # Run through additions.
        for key in data_keys.difference(orig_keys):
            if self._is_storable(self._data[key]):
                alterations['adds'][key] = self._data[key]

        # Run through deletions.
        for key in orig_keys.difference(data_keys):
            alterations['deletes'].append(key)

        return alterations

    def needs_save(self, data=None):
        """
        Returns whether or not the data has changed on the ``Item``.

        Optionally accepts a ``data`` argument, which accepts the output from
        ``self._determine_alterations()`` if you've already called it.
        Typically unnecessary to do. Default is ``None``.

        Example::

            >>> user.needs_save()
            False
            >>> user['first_name'] = 'Johann'
            >>> user.needs_save()
            True

        """
        if data is None:
            data = self._determine_alterations()

        needs_save = False

        for kind in ['adds', 'changes', 'deletes']:
            if len(data[kind]):
                needs_save = True
                break

        return needs_save

    def mark_clean(self):
        """
        Marks an ``Item`` instance as no longer needing to be saved.

        Example::

            >>> user.needs_save()
            False
            >>> user['first_name'] = 'Johann'
            >>> user.needs_save()
            True
            >>> user.mark_clean()
            >>> user.needs_save()
            False

        """
        self._orig_data = deepcopy(self._data)

    def mark_dirty(self):
        """
        DEPRECATED: Marks an ``Item`` instance as needing to be saved. This
        method is no longer necessary, as the state tracking on ``Item`` has
        been improved to automatically detect proper state.
        """
        return

    def load(self, data):
        """
        This is only useful when being handed raw data from DynamoDB
        directly. If you have a Python datastructure already, use the
        ``__init__`` or manually set the data instead.

        Largely internal, unless you know what you're doing or are trying to
        mix the low-level & high-level APIs.
        """
        self._data = {}

        for field_name, field_value in data.get('Item', {}).items():
            self[field_name] = self._dynamizer.decode(field_value)

        self._loaded = True
        self._orig_data = deepcopy(self._data)

    def get_keys(self):
        """
        Returns a Python-style dict of the keys/values.

        Largely internal.
        """
        key_fields = self.table.get_key_fields()
        key_data = {}

        for key in key_fields:
            key_data[key] = self[key]

        return key_data

    def get_raw_keys(self):
        """
        Returns a DynamoDB-style dict of the keys/values.

        Largely internal.
        """
        raw_key_data = {}

        for key, value in self.get_keys().items():
            raw_key_data[key] = self._dynamizer.encode(value)

        return raw_key_data

    def build_expects(self, fields=None):
        """
        Builds up the dict of expectations to hand off to DynamoDB on save.

        Largely internal.
        """
        expects = {}

        if fields is None:
            fields = self._data.keys() + self._orig_data.keys()

        # Only uniques.
        fields = set(fields)

        for key in fields:
            expects[key] = {
                'Exists': True,
            }
            value = None

            # Check for invalid keys.
            if key not in self._orig_data and key not in self._data:
                raise ValueError("Unknown key %s provided." % key)

            # States:
            # * New field (only in _data)
            # * Unchanged field (in both _data & _orig_data, same data)
            # * Modified field (in both _data & _orig_data, different data)
            # * Deleted field (only in _orig_data)
            orig_value = self._orig_data.get(key, NEWVALUE)
            current_value = self._data.get(key, NEWVALUE)

            if orig_value == current_value:
                # Existing field unchanged.
                value = current_value
            else:
                if key in self._data:
                    if key not in self._orig_data:
                        # New field.
                        expects[key]['Exists'] = False
                    else:
                        # Existing field modified.
                        value = orig_value
                else:
                    # Existing field deleted.
                    value = orig_value

            if value is not None:
                expects[key]['Value'] = self._dynamizer.encode(value)

        return expects

    def _is_storable(self, value):
        # We need to prevent ``None``, empty string & empty set from
        # heading to DDB, but allow falsey values like 0 & False to make
        # it through.
        if not value:
            if value not in (0, 0.0, False):
                return False

        return True

    def prepare_full(self):
        """
        Runs through all fields & encodes them to be handed off to DynamoDB
        as part of a ``save`` (``put_item``) call.

        Largely internal.
        """
        # This doesn't save on its own. Rather, we prepare the datastructure
        # and hand-off to the table to handle creation/update.
        final_data = {}

        for key, value in self._data.items():
            if not self._is_storable(value):
                continue

            final_data[key] = self._dynamizer.encode(value)

        return final_data

    def prepare_partial(self):
        """
        Runs through **ONLY** the changed/deleted fields & encodes them to be
        handed off to DynamoDB as part of a ``partial_save``
        (``update_item``) call.

        Largely internal.
        """
        # This doesn't save on its own. Rather, we prepare the datastructure
        # and hand-off to the table to handle creation/update.
        final_data = {}
        fields = set()
        alterations = self._determine_alterations()

        for key, value in alterations['adds'].items():
            final_data[key] = {
                'Action': 'PUT',
                'Value': self._dynamizer.encode(self._data[key])
            }
            fields.add(key)

        for key, value in alterations['changes'].items():
            final_data[key] = {
                'Action': 'PUT',
                'Value': self._dynamizer.encode(self._data[key])
            }
            fields.add(key)

        for key in alterations['deletes']:
            final_data[key] = {
                'Action': 'DELETE',
            }
            fields.add(key)

        return final_data, fields

    def partial_save(self):
        """
        Saves only the changed data to DynamoDB.

        Extremely useful for high-volume/high-write data sets, this allows
        you to update only a handful of fields rather than having to push
        entire items. This prevents many accidental overwrite situations as
        well as saves on the amount of data to transfer over the wire.

        Returns ``True`` on success, ``False`` if no save was performed or
        the write failed.

        Example::

            >>> user['last_name'] = 'Doh!'
            # Only the last name field will be sent to DynamoDB.
            >>> user.partial_save()

        """
        key = self.get_keys()
        # Build a new dict of only the data we're changing.
        final_data, fields = self.prepare_partial()

        if not final_data:
            return False

        # Remove the key(s) from the ``final_data`` if present.
        # They should only be present if this is a new item, in which
        # case we shouldn't be sending as part of the data to update.
        for fieldname, value in key.items():
            if fieldname in final_data:
                del final_data[fieldname]

                try:
                    # It's likely also in ``fields``, so remove it there too.
                    fields.remove(fieldname)
                except KeyError:
                    pass

        # Build expectations of only the fields we're planning to update.
        expects = self.build_expects(fields=fields)
        returned = self.table._update_item(key, final_data, expects=expects)
        # Mark the object as clean.
        self.mark_clean()
        return returned

    def save(self, overwrite=False):
        """
        Saves all data to DynamoDB.

        By default, this attempts to ensure that none of the underlying data
        has changed. If any fields have changed in between when the ``Item``
        was constructed & when it is saved, this call will fail so as not to
        cause any data loss.

        If you're sure possibly overwriting data is acceptable, you can pass
        ``overwrite=True``. If that's not acceptable, you may be able to use
        ``Item.partial_save`` to only write the changed field data.

        Optionally accepts an ``overwrite`` parameter, which should be a
        boolean.
        If you provide ``True``, the item will be forcibly overwritten
        within DynamoDB, even if another process changed the data in the
        meantime. (Default: ``False``)

        Returns ``True`` on success, ``False`` if no save was performed.

        Example::

            >>> user['last_name'] = 'Doh!'
            # All data on the Item is sent to DynamoDB.
            >>> user.save()

            # If it fails, you can overwrite.
            >>> user.save(overwrite=True)

        """
        if not self.needs_save() and not overwrite:
            return False

        final_data = self.prepare_full()
        expects = None

        if overwrite is False:
            # Build expectations about *all* of the data.
            expects = self.build_expects()

        returned = self.table._put_item(final_data, expects=expects)
        # Mark the object as clean.
        self.mark_clean()
        return returned

    def delete(self):
        """
        Deletes the item's data from DynamoDB.

        Returns ``True`` on success.

        Example::

            # Buh-bye now.
            >>> user.delete()

        """
        key_data = self.get_keys()
        return self.table.delete_item(**key_data)
boto-2.20.1/boto/dynamodb2/layer1.py000066400000000000000000002373531225267101000171340ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
from binascii import crc32

try:
    import json
except ImportError:
    import simplejson as json

import boto
from boto.connection import AWSQueryConnection
from boto.regioninfo import RegionInfo
from boto.exception import JSONResponseError
from boto.dynamodb2 import exceptions


class DynamoDBConnection(AWSQueryConnection):
    """
    Amazon DynamoDB **Overview**
    This is the Amazon DynamoDB API Reference. This guide provides
    descriptions and samples of the Amazon DynamoDB API.
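
    A minimal, illustrative sketch of connecting (it assumes AWS
    credentials are available via the usual boto mechanisms, and the
    ``'TableNames'`` contents shown are hypothetical)::

        >>> from boto.dynamodb2.layer1 import DynamoDBConnection
        >>> conn = DynamoDBConnection()  # Defaults to us-east-1.
        >>> conn.list_tables()
        {'TableNames': ['users']}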
""" APIVersion = "2012-08-10" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "dynamodb.us-east-1.amazonaws.com" ServiceName = "DynamoDB" TargetPrefix = "DynamoDB_20120810" ResponseError = JSONResponseError _faults = { "ProvisionedThroughputExceededException": exceptions.ProvisionedThroughputExceededException, "LimitExceededException": exceptions.LimitExceededException, "ConditionalCheckFailedException": exceptions.ConditionalCheckFailedException, "ResourceInUseException": exceptions.ResourceInUseException, "ResourceNotFoundException": exceptions.ResourceNotFoundException, "InternalServerError": exceptions.InternalServerError, "ItemCollectionSizeLimitExceededException": exceptions.ItemCollectionSizeLimitExceededException, "ValidationException": exceptions.ValidationException, } NumberRetries = 10 def __init__(self, **kwargs): region = kwargs.pop('region', None) validate_checksums = kwargs.pop('validate_checksums', True) if not region: region_name = boto.config.get('DynamoDB', 'region', self.DefaultRegionName) for reg in boto.dynamodb2.regions(): if reg.name == region_name: region = reg break # Only set host if it isn't manually overwritten if 'host' not in kwargs: kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region self._validate_checksums = boto.config.getbool( 'DynamoDB', 'validate_checksums', validate_checksums) self.throughput_exceeded_events = 0 def _required_auth_capability(self): return ['hmac-v4'] def batch_get_item(self, request_items, return_consumed_capacity=None): """ The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key. A single operation can retrieve up to 1 MB of data, which can comprise as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys . You can use this value to retry the operation starting with the next item to get. For example, if you ask to retrieve 100 items, but each individual item is 50 KB in size, the system returns 20 items (1 MB) and an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset. If no items can be processed because of insufficient provisioned throughput on each of the tables involved in the request, BatchGetItem throws ProvisionedThroughputExceededException . By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to `True` for any or all tables. In order to minimize response latency, BatchGetItem fetches items in parallel. When designing your application, keep in mind that Amazon DynamoDB does not return attributes in any particular order. To help parse the response by item, include the primary key values for the items in your request in the AttributesToGet parameter. If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see `Capacity Units Calculations`_ in the Amazon DynamoDB Developer Guide. 
        :type request_items: map
        :param request_items:
            A map of one or more table names and, for each table, the
            corresponding primary keys for the items to retrieve. Each table
            name can be invoked only once.
        Each element in the map consists of the following:

        + Keys - An array of primary key attribute values that define
              specific items in the table.
        + AttributesToGet - One or more attributes to be retrieved from the
              table or index. By default, all attributes are returned. If a
              specified attribute is not found, it does not appear in the
              result.
        + ConsistentRead - If `True`, a strongly consistent read is used; if
              `False` (the default), an eventually consistent read is used.

        :type return_consumed_capacity: string
        :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity
            is included in the response; if set to `NONE` (the default),
            ConsumedCapacity is not included.
        """
        params = {'RequestItems': request_items, }
        if return_consumed_capacity is not None:
            params['ReturnConsumedCapacity'] = return_consumed_capacity
        return self.make_request(action='BatchGetItem',
                                 body=json.dumps(params))

    def batch_write_item(self, request_items, return_consumed_capacity=None,
                         return_item_collection_metrics=None):
        """
        The BatchWriteItem operation puts or deletes multiple items in
        one or more tables. A single call to BatchWriteItem can write
        up to 1 MB of data, which can comprise as many as 25 put or
        delete requests. Individual items to be written can be as
        large as 64 KB.

        BatchWriteItem cannot update items. To update items, use the
        UpdateItem API.

        The individual PutItem and DeleteItem operations specified in
        BatchWriteItem are atomic; however BatchWriteItem as a whole
        is not. If any requested operations fail because the table's
        provisioned throughput is exceeded or an internal processing
        failure occurs, the failed operations are returned in the
        UnprocessedItems response parameter. You can investigate and
        optionally resend the requests. Typically, you would call
        BatchWriteItem in a loop. Each iteration would check for
        unprocessed items and submit a new BatchWriteItem request with
        those unprocessed items until all items have been processed.

        To write one item, you can use the PutItem operation; to
        delete one item, you can use the DeleteItem operation.

        With BatchWriteItem, you can efficiently write or delete large
        amounts of data, such as from Amazon Elastic MapReduce (EMR),
        or copy data from another database into Amazon DynamoDB. In
        order to improve performance with these large-scale
        operations, BatchWriteItem does not behave in the same way as
        individual PutItem and DeleteItem calls would. For example,
        you cannot specify conditions on individual put and delete
        requests, and BatchWriteItem does not return deleted items in
        the response.

        If you use a programming language that supports concurrency,
        such as Java, you can use threads to write items in parallel.
        Your application must include the necessary logic to manage
        the threads. With languages that don't support threading, such
        as PHP, BatchWriteItem will write or delete the specified
        items one at a time. In both situations, BatchWriteItem
        provides an alternative where the API performs the specified
        put and delete operations in parallel, giving you the power of
        the thread pool approach without having to introduce
        complexity into your application.

        Parallel processing reduces latency, but each specified put
        and delete request consumes the same number of write capacity
        units whether it is processed in parallel or not.
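
        As an illustrative sketch (assuming ``conn`` is a
        ``DynamoDBConnection``; the ``users`` table and ``username``
        attribute are hypothetical), a batch containing one put and one
        delete might look like::

            >>> conn.batch_write_item(request_items={
            ...     'users': [
            ...         {'PutRequest': {'Item': {'username': {'S': 'johndoe'}}}},
            ...         {'DeleteRequest': {'Key': {'username': {'S': 'janedoe'}}}},
            ...     ]
            ... })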
        Delete operations on nonexistent items consume one write
        capacity unit.

        If one or more of the following is true, Amazon DynamoDB
        rejects the entire batch write operation:

        + One or more tables specified in the BatchWriteItem request
          does not exist.
        + Primary key attributes specified on an item in the request
          do not match those in the corresponding table's primary key
          schema.
        + You try to perform multiple operations on the same item in
          the same BatchWriteItem request. For example, you cannot put
          and delete the same item in the same BatchWriteItem request.
        + The total request size exceeds 1 MB.
        + Any individual item in a batch exceeds 64 KB.

        :type request_items: map
        :param request_items:
            A map of one or more table names and, for each table, a list of
            operations to be performed ( DeleteRequest or PutRequest ). Each
            element in the map consists of the following:

        + DeleteRequest - Perform a DeleteItem operation on the specified
              item. The item to be deleted is identified by a Key subelement:

            + Key - A map of primary key attribute values that uniquely
                  identify the item. Each entry in this map consists of an
                  attribute name and an attribute value.

        + PutRequest - Perform a PutItem operation on the specified item. The
              item to be put is identified by an Item subelement:

            + Item - A map of attributes and their values. Each entry in this
                  map consists of an attribute name and an attribute value.
                  Attribute values must not be null; string and binary type
                  attributes must have lengths greater than zero; and set
                  type attributes must not be empty. Requests that contain
                  empty values will be rejected with a ValidationException .
                  If you specify any attributes that are part of an index
                  key, then the data types for those attributes must match
                  those of the schema in the table's attribute definition.

        :type return_consumed_capacity: string
        :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity
            is included in the response; if set to `NONE` (the default),
            ConsumedCapacity is not included.

        :type return_item_collection_metrics: string
        :param return_item_collection_metrics: If set to `SIZE`, statistics
            about item collections, if any, that were modified during the
            operation are returned in the response. If set to `NONE` (the
            default), no statistics are returned.

        """
        params = {'RequestItems': request_items, }
        if return_consumed_capacity is not None:
            params['ReturnConsumedCapacity'] = return_consumed_capacity
        if return_item_collection_metrics is not None:
            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
        return self.make_request(action='BatchWriteItem',
                                 body=json.dumps(params))

    def create_table(self, attribute_definitions, table_name, key_schema,
                     provisioned_throughput, local_secondary_indexes=None,
                     global_secondary_indexes=None):
        """
        The CreateTable operation adds a new table to your account. In
        an AWS account, table names must be unique within each region.
        That is, you can have two tables with the same name if you
        create the tables in different regions.

        CreateTable is an asynchronous operation. Upon receiving a
        CreateTable request, Amazon DynamoDB immediately returns a
        response with a TableStatus of `CREATING`. After the table is
        created, Amazon DynamoDB sets the TableStatus to `ACTIVE`. You
        can perform read and write operations only on an `ACTIVE`
        table.

        If you want to create multiple tables with local secondary
        indexes on them, you must create them sequentially. Only one
        table with local secondary indexes can be in the `CREATING`
        state at any given time.
You can use the DescribeTable API to check the table status. :type attribute_definitions: list :param attribute_definitions: An array of attributes that describe the key schema for the table and indexes. :type table_name: string :param table_name: The name of the table to create. :type key_schema: list :param key_schema: Specifies the attributes that make up the primary key for the table. The attributes in KeySchema must also be defined in the AttributeDefinitions array. For more information, see `Data Model`_ in the Amazon DynamoDB Developer Guide. Each KeySchemaElement in the array is composed of: + AttributeName - The name of this key attribute. + KeyType - Determines whether the key attribute is `HASH` or `RANGE`. For a primary key that consists of a hash attribute, you must specify exactly one element with a KeyType of `HASH`. For a primary key that consists of hash and range attributes, you must specify exactly two elements, in this order: The first element must have a KeyType of `HASH`, and the second element must have a KeyType of `RANGE`. For more information, see `Specifying the Primary Key`_ in the Amazon DynamoDB Developer Guide. :type local_secondary_indexes: list :param local_secondary_indexes: One or more secondary indexes (the maximum is five) to be created on the table. Each index is scoped to a given hash key value. There is a 10 gigabyte size limit per hash key; otherwise, the size of a local secondary index is unconstrained. Each secondary index in the array includes the following: + IndexName - The name of the secondary index. Must be unique only for this table. + KeySchema - Specifies the key schema for the index. The key schema must begin with the same hash key attribute as the table. + Projection - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of: + ProjectionType - One of the following: + `KEYS_ONLY` - Only the index and primary keys are projected into the index. + `INCLUDE` - Only the specified table attributes are projected into the index. The list of projected attributes are in NonKeyAttributes . + `ALL` - All of the table attributes are projected into the index. + NonKeyAttributes - A list of one or more non-key attribute names that are projected into the index. The total count of attributes specified in NonKeyAttributes , summed across all of the local secondary indexes, must not exceed 20. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. :type global_secondary_indexes: list :param global_secondary_indexes: :type provisioned_throughput: dict :param provisioned_throughput: The provisioned throughput settings for the specified table. The settings can be modified using the UpdateTable operation. For current minimum and maximum provisioned throughput values, see `Limits`_ in the Amazon DynamoDB Developer Guide. 
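
        As an illustrative sketch (assuming ``conn`` is a
        ``DynamoDBConnection``; the table and attribute names are
        hypothetical), creating a simple hash-key-only table might look
        like::

            >>> conn.create_table(
            ...     attribute_definitions=[
            ...         {'AttributeName': 'username', 'AttributeType': 'S'},
            ...     ],
            ...     table_name='users',
            ...     key_schema=[
            ...         {'AttributeName': 'username', 'KeyType': 'HASH'},
            ...     ],
            ...     provisioned_throughput={
            ...         'ReadCapacityUnits': 5,
            ...         'WriteCapacityUnits': 5,
            ...     }
            ... )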
""" params = { 'AttributeDefinitions': attribute_definitions, 'TableName': table_name, 'KeySchema': key_schema, 'ProvisionedThroughput': provisioned_throughput, } if local_secondary_indexes is not None: params['LocalSecondaryIndexes'] = local_secondary_indexes if global_secondary_indexes is not None: params['GlobalSecondaryIndexes'] = global_secondary_indexes return self.make_request(action='CreateTable', body=json.dumps(params)) def delete_item(self, table_name, key, expected=None, return_values=None, return_consumed_capacity=None, return_item_collection_metrics=None): """ Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value. In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter. Unless you specify conditions, the DeleteItem is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response. Conditional deletes are useful for only deleting items if specific conditions are met. If those conditions are met, Amazon DynamoDB performs the delete. Otherwise, the item is not deleted. :type table_name: string :param table_name: The name of the table from which to delete the item. :type key: map :param key: A map of attribute names to AttributeValue objects, representing the primary key of the item to delete. :type expected: map :param expected: A map of attribute/condition pairs. This is the conditional block for the DeleteItem operation. All the conditions must be met for the operation to succeed. Expected allows you to provide an attribute name, and whether or not Amazon DynamoDB should check to see if the attribute value already exists; or if the attribute value exists and has a particular value before changing it. Each item in Expected represents an attribute name for Amazon DynamoDB to check, along with the following: + Value - The attribute value for Amazon DynamoDB to check. + Exists - Causes Amazon DynamoDB to evaluate the value before attempting a conditional operation: + If Exists is `True`, Amazon DynamoDB will check to see if that attribute value already exists in the table. If it is found, then the operation succeeds. If it is not found, the operation fails with a ConditionalCheckFailedException . + If Exists is `False`, Amazon DynamoDB assumes that the attribute value does not exist in the table. If in fact the value does not exist, then the assumption is valid and the operation succeeds. If the value is found, despite the assumption that it does not exist, the operation fails with a ConditionalCheckFailedException . The default setting for Exists is `True`. If you supply a Value all by itself, Amazon DynamoDB assumes the attribute exists: You don't have to set Exists to `True`, because it is implied. Amazon DynamoDB returns a ValidationException if: + Exists is `True` but there is no Value to check. (You expect a value to exist, but don't specify what that value is.) + Exists is `False` but you also specify a Value . (You cannot expect an attribute to have a value, while also expecting it not to exist.) If you specify more than one condition for Exists , then all of the conditions must evaluate to true. (In other words, the conditions are ANDed together.) Otherwise, the conditional operation will fail. 
        :type return_values: string
        :param return_values:
            Use ReturnValues if you want to get the item attributes as they
            appeared before they were deleted. For DeleteItem , the valid
            values are:

        + `NONE` - If ReturnValues is not specified, or if its value is
              `NONE`, then nothing is returned. (This is the default for
              ReturnValues .)
        + `ALL_OLD` - The content of the old item is returned.

        :type return_consumed_capacity: string
        :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity
            is included in the response; if set to `NONE` (the default),
            ConsumedCapacity is not included.

        :type return_item_collection_metrics: string
        :param return_item_collection_metrics: If set to `SIZE`, statistics
            about item collections, if any, that were modified during the
            operation are returned in the response. If set to `NONE` (the
            default), no statistics are returned.

        """
        params = {'TableName': table_name, 'Key': key, }
        if expected is not None:
            params['Expected'] = expected
        if return_values is not None:
            params['ReturnValues'] = return_values
        if return_consumed_capacity is not None:
            params['ReturnConsumedCapacity'] = return_consumed_capacity
        if return_item_collection_metrics is not None:
            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
        return self.make_request(action='DeleteItem',
                                 body=json.dumps(params))

    def delete_table(self, table_name):
        """
        The DeleteTable operation deletes a table and all of its
        items. After a DeleteTable request, the specified table is in
        the `DELETING` state until Amazon DynamoDB completes the
        deletion. If the table is in the `ACTIVE` state, you can
        delete it. If a table is in `CREATING` or `UPDATING` states,
        then Amazon DynamoDB returns a ResourceInUseException . If the
        specified table does not exist, Amazon DynamoDB returns a
        ResourceNotFoundException . If the table is already in the
        `DELETING` state, no error is returned.

        Amazon DynamoDB might continue to accept data read and write
        operations, such as GetItem and PutItem , on a table in the
        `DELETING` state until the table deletion is complete.

        When you delete a table, any local secondary indexes on that
        table are also deleted.

        Use the DescribeTable API to check the status of the table.

        :type table_name: string
        :param table_name: The name of the table to delete.
        """
        params = {'TableName': table_name, }
        return self.make_request(action='DeleteTable',
                                 body=json.dumps(params))

    def describe_table(self, table_name):
        """
        Returns information about the table, including the current
        status of the table, when it was created, the primary key
        schema, and any indexes on the table.

        :type table_name: string
        :param table_name: The name of the table to describe.
        """
        params = {'TableName': table_name, }
        return self.make_request(action='DescribeTable',
                                 body=json.dumps(params))

    def get_item(self, table_name, key, attributes_to_get=None,
                 consistent_read=None, return_consumed_capacity=None):
        """
        The GetItem operation returns a set of attributes for the item
        with the given primary key. If there is no matching item,
        GetItem does not return any data.

        GetItem provides an eventually consistent read by default. If
        your application requires a strongly consistent read, set
        ConsistentRead to `True`. Although a strongly consistent read
        might take more time than an eventually consistent read, it
        always returns the last updated value.

        :type table_name: string
        :param table_name: The name of the table containing the requested
            item.
:type key: map :param key: A map of attribute names to AttributeValue objects, representing the primary key of the item to retrieve. :type attributes_to_get: list :param attributes_to_get: The names of one or more attributes to retrieve. If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. :type consistent_read: boolean :param consistent_read: If set to `True`, then the operation uses strongly consistent reads; otherwise, eventually consistent reads are used. :type return_consumed_capacity: string :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity is included in the response; if set to `NONE` (the default), ConsumedCapacity is not included. """ params = {'TableName': table_name, 'Key': key, } if attributes_to_get is not None: params['AttributesToGet'] = attributes_to_get if consistent_read is not None: params['ConsistentRead'] = consistent_read if return_consumed_capacity is not None: params['ReturnConsumedCapacity'] = return_consumed_capacity return self.make_request(action='GetItem', body=json.dumps(params)) def list_tables(self, exclusive_start_table_name=None, limit=None): """ Returns an array of all the tables associated with the current account and endpoint. :type exclusive_start_table_name: string :param exclusive_start_table_name: The name of the table that starts the list. If you already ran a ListTables operation and received a LastEvaluatedTableName value in the response, use that value here to continue the list. :type limit: integer :param limit: A maximum number of table names to return. """ params = {} if exclusive_start_table_name is not None: params['ExclusiveStartTableName'] = exclusive_start_table_name if limit is not None: params['Limit'] = limit return self.make_request(action='ListTables', body=json.dumps(params)) def put_item(self, table_name, item, expected=None, return_values=None, return_consumed_capacity=None, return_item_collection_metrics=None): """ Creates a new item, or replaces an old item with a new item. If an item already exists in the specified table with the same primary key, the new item completely replaces the existing item. You can perform a conditional put (insert a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values. In addition to putting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter. When you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null. String and binary type attributes must have lengths greater than zero. Set type attributes cannot be empty. Requests with empty values will be rejected with a ValidationException . You can request that PutItem return either a copy of the old item (before the update) or a copy of the new item (after the update). For more information, see the ReturnValues description. To prevent a new item from replacing an existing item, use a conditional put operation with Exists set to `False` for the primary key attribute, or attributes. For more information about using this API, see `Working with Items`_ in the Amazon DynamoDB Developer Guide. :type table_name: string :param table_name: The name of the table to contain the item. :type item: map :param item: A map of attribute name/value pairs, one for each attribute. 
        Only the primary key attributes are required; you can optionally
            provide other attribute name-value pairs for the item.
        If you specify any attributes that are part of an index key, then the
            data types for those attributes must match those of the schema in
            the table's attribute definition. For more information about
            primary keys, see `Primary Key`_ in the Amazon DynamoDB Developer
            Guide.
        Each element in the Item map is an AttributeValue object.

        :type expected: map
        :param expected: A map of attribute/condition pairs. This is the
            conditional block for the PutItem operation. All the conditions
            must be met for the operation to succeed.
        Expected allows you to provide an attribute name, and whether or not
            Amazon DynamoDB should check to see if the attribute value
            already exists; or if the attribute value exists and has a
            particular value before changing it.
        Each item in Expected represents an attribute name for Amazon
            DynamoDB to check, along with the following:

        + Value - The attribute value for Amazon DynamoDB to check.
        + Exists - Causes Amazon DynamoDB to evaluate the value before
              attempting a conditional operation:

            + If Exists is `True`, Amazon DynamoDB will check to see if that
                  attribute value already exists in the table. If it is
                  found, then the operation succeeds. If it is not found, the
                  operation fails with a ConditionalCheckFailedException .
            + If Exists is `False`, Amazon DynamoDB assumes that the
                  attribute value does not exist in the table. If in fact the
                  value does not exist, then the assumption is valid and the
                  operation succeeds. If the value is found, despite the
                  assumption that it does not exist, the operation fails with
                  a ConditionalCheckFailedException .

        The default setting for Exists is `True`. If you supply a Value all
            by itself, Amazon DynamoDB assumes the attribute exists: You
            don't have to set Exists to `True`, because it is implied.
        Amazon DynamoDB returns a ValidationException if:

        + Exists is `True` but there is no Value to check. (You expect a
              value to exist, but don't specify what that value is.)
        + Exists is `False` but you also specify a Value . (You cannot expect
              an attribute to have a value, while also expecting it not to
              exist.)

        If you specify more than one condition for Exists , then all of the
            conditions must evaluate to true. (In other words, the conditions
            are ANDed together.) Otherwise, the conditional operation will
            fail.

        :type return_values: string
        :param return_values:
            Use ReturnValues if you want to get the item attributes as they
            appeared before they were updated with the PutItem request. For
            PutItem , the valid values are:

        + `NONE` - If ReturnValues is not specified, or if its value is
              `NONE`, then nothing is returned. (This is the default for
              ReturnValues .)
        + `ALL_OLD` - If PutItem overwrote an attribute name-value pair, then
              the content of the old item is returned.

        :type return_consumed_capacity: string
        :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity
            is included in the response; if set to `NONE` (the default),
            ConsumedCapacity is not included.

        :type return_item_collection_metrics: string
        :param return_item_collection_metrics: If set to `SIZE`, statistics
            about item collections, if any, that were modified during the
            operation are returned in the response. If set to `NONE` (the
            default), no statistics are returned.
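
        As an illustrative sketch (assuming ``conn`` is a
        ``DynamoDBConnection``; the table and attribute names are
        hypothetical), a conditional put that refuses to overwrite an
        existing user might look like::

            >>> conn.put_item(
            ...     table_name='users',
            ...     item={
            ...         'username': {'S': 'johndoe'},
            ...         'date_joined': {'N': '1366056668'},
            ...     },
            ...     expected={'username': {'Exists': False}},
            ... )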
""" params = {'TableName': table_name, 'Item': item, } if expected is not None: params['Expected'] = expected if return_values is not None: params['ReturnValues'] = return_values if return_consumed_capacity is not None: params['ReturnConsumedCapacity'] = return_consumed_capacity if return_item_collection_metrics is not None: params['ReturnItemCollectionMetrics'] = return_item_collection_metrics return self.make_request(action='PutItem', body=json.dumps(params)) def query(self, table_name, index_name=None, select=None, attributes_to_get=None, limit=None, consistent_read=None, key_conditions=None, scan_index_forward=None, exclusive_start_key=None, return_consumed_capacity=None): """ A Query operation directly accesses items from a table using the table primary key, or from an index using the index key. You must provide a specific hash key value. You can narrow the scope of the query by using comparison operators on the range key value, or on the index key. You can use the ScanIndexForward parameter to get results in forward or reverse order, by range key or by index key. Queries that do not return results consume the minimum read capacity units according to the type of read. If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with a LastEvaluatedKey to continue the query in a subsequent operation. Unlike a Scan operation, a Query operation never returns an empty result set and a LastEvaluatedKey . The LastEvaluatedKey is only provided if the results exceed 1 MB, or if you have used Limit . To request a strongly consistent result, set ConsistentRead to true. :type table_name: string :param table_name: The name of the table containing the requested items. :type index_name: string :param index_name: The name of an index on the table to query. :type select: string :param select: The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index. + `ALL_ATTRIBUTES`: Returns all of the item attributes. For a table, this is the default. For an index, this mode causes Amazon DynamoDB to fetch the full item from the table for each matching item in the index. If the index is configured to project all item attributes, the matching items will not be fetched from the table. Fetching items from the table incurs additional throughput cost and latency. + `ALL_PROJECTED_ATTRIBUTES`: Allowed only when querying an index. Retrieves all attributes which have been projected into the index. If the index is configured to project all attributes, this is equivalent to specifying ALL_ATTRIBUTES . + `COUNT`: Returns the number of matching items, rather than the matching items themselves. + `SPECIFIC_ATTRIBUTES` : Returns only the attributes listed in AttributesToGet . This is equivalent to specifying AttributesToGet without specifying any value for Select . If you are querying an index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the index, Amazon DynamoDB will need to fetch each matching item from the table. This extra fetching incurs additional throughput cost and latency. 
        When neither Select nor AttributesToGet are specified, Amazon
            DynamoDB defaults to `ALL_ATTRIBUTES` when accessing a table, and
            `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot
            use both Select and AttributesToGet together in a single request,
            unless the value for Select is `SPECIFIC_ATTRIBUTES`. (This usage
            is equivalent to specifying AttributesToGet without any value for
            Select .)

        :type attributes_to_get: list
        :param attributes_to_get: The names of one or more attributes to
            retrieve. If no attribute names are specified, then all
            attributes will be returned. If any of the requested attributes
            are not found, they will not appear in the result.
        If you are querying an index and request only attributes that are
            projected into that index, the operation will read only the index
            and not the table. If any of the requested attributes are not
            projected into the index, Amazon DynamoDB will need to fetch each
            matching item from the table. This extra fetching incurs
            additional throughput cost and latency.
        You cannot use both AttributesToGet and Select together in a Query
            request, unless the value for Select is `SPECIFIC_ATTRIBUTES`.
            (This usage is equivalent to specifying AttributesToGet without
            any value for Select .)

        :type limit: integer
        :param limit: The maximum number of items to evaluate (not
            necessarily the number of matching items). If Amazon DynamoDB
            processes the number of items up to the limit while processing
            the results, it stops the operation and returns the matching
            values up to that point, and a LastEvaluatedKey to apply in a
            subsequent operation, so that you can pick up where you left off.
            Also, if the processed data set size exceeds 1 MB before Amazon
            DynamoDB reaches this limit, it stops the operation and returns
            the matching values up to the limit, and a LastEvaluatedKey to
            apply in a subsequent operation to continue the operation. For
            more information see `Query and Scan`_ in the Amazon DynamoDB
            Developer Guide.

        :type consistent_read: boolean
        :param consistent_read: If set to `True`, then the operation uses
            strongly consistent reads; otherwise, eventually consistent reads
            are used.

        :type key_conditions: map
        :param key_conditions: The selection criteria for the query.
        For a query on a table, you can only have conditions on the table
            primary key attributes. You must specify the hash key attribute
            name and value as an `EQ` condition. You can optionally specify a
            second condition, referring to the range key attribute.
        For a query on a secondary index, you can only have conditions on the
            index key attributes. You must specify the index hash attribute
            name and value as an EQ condition. You can optionally specify a
            second condition, referring to the index key range attribute.
        Multiple conditions are evaluated using "AND"; in other words, all of
            the conditions must be met in order for an item to appear in the
            results.
        Each KeyConditions element consists of an attribute name to compare,
            along with the following:

        + AttributeValueList - One or more values to evaluate against the
              supplied attribute. This list contains exactly one value,
              except for a `BETWEEN` or `IN` comparison, in which case the
              list contains two values.
          For type Number, value comparisons are numeric.
          String value comparisons for greater than, equals, or less than are
              based on ASCII character code values. For example, `a` is
              greater than `A`, and `aa` is greater than `B`. For a list of
              code values, see
              `http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters`_.
For Binary, Amazon DynamoDB treats each byte of the binary data as unsigned when it compares binary values, for example when evaluating query expressions. + ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than, less than, etc. Valid comparison operators for Query: `EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN` For information on specifying data types in JSON, see `JSON Data Format`_ in the Amazon DynamoDB Developer Guide. The following are descriptions of each comparison operator. + `EQ` : Equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal `{"NS":["6", "2", "1"]}`. + `LE` : Less than or equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `LT` : Less than. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `GE` : Greater than or equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `GT` : Greater than. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `BEGINS_WITH` : checks for a prefix. AttributeValueList can contain only one AttributeValue of type String or Binary (not a Number or a set). The target attribute of the comparison must be a String or Binary (not a Number or a set). + `BETWEEN` : Greater than or equal to the first value, and less than or equal to the second value. AttributeValueList must contain two AttributeValue elements of the same type, either String, Number, or Binary (not a set). A target attribute matches if the target value is greater than, or equal to, the first element and less than, or equal to, the second element. If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not compare to `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}` :type scan_index_forward: boolean :param scan_index_forward: Specifies ascending (true) or descending (false) traversal of the index. Amazon DynamoDB returns results reflecting the requested order determined by the range key. If the data type is Number, the results are returned in numeric order. 
For String, the results are returned in order of ASCII character code values. For Binary, Amazon DynamoDB treats each byte of the binary data as unsigned when it compares binary values. If ScanIndexForward is not specified, the results are returned in ascending order. :type exclusive_start_key: map :param exclusive_start_key: The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation. The data type for ExclusiveStartKey must be String, Number or Binary. No set data types are allowed. :type return_consumed_capacity: string :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity is included in the response; if set to `NONE` (the default), ConsumedCapacity is not included. """ params = {'TableName': table_name, } if index_name is not None: params['IndexName'] = index_name if select is not None: params['Select'] = select if attributes_to_get is not None: params['AttributesToGet'] = attributes_to_get if limit is not None: params['Limit'] = limit if consistent_read is not None: params['ConsistentRead'] = consistent_read if key_conditions is not None: params['KeyConditions'] = key_conditions if scan_index_forward is not None: params['ScanIndexForward'] = scan_index_forward if exclusive_start_key is not None: params['ExclusiveStartKey'] = exclusive_start_key if return_consumed_capacity is not None: params['ReturnConsumedCapacity'] = return_consumed_capacity return self.make_request(action='Query', body=json.dumps(params)) def scan(self, table_name, attributes_to_get=None, limit=None, select=None, scan_filter=None, exclusive_start_key=None, return_consumed_capacity=None, total_segments=None, segment=None): """ The Scan operation returns one or more items and item attributes by accessing every item in the table. To have Amazon DynamoDB return fewer items, you can provide a ScanFilter . If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user with a LastEvaluatedKey to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria. The result set is eventually consistent. By default, Scan operations proceed sequentially; however, for faster performance on large tables, applications can request a parallel Scan by specifying the Segment and TotalSegments parameters. For more information, see `Parallel Scan`_ in the Amazon DynamoDB Developer Guide. :type table_name: string :param table_name: The name of the table containing the requested items. :type attributes_to_get: list :param attributes_to_get: The names of one or more attributes to retrieve. If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. :type limit: integer :param limit: The maximum number of items to evaluate (not necessarily the number of matching items). If Amazon DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a LastEvaluatedKey to apply in a subsequent operation, so that you can pick up where you left off. 
Also, if the processed data set size exceeds 1 MB before Amazon DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a LastEvaluatedKey to apply in a subsequent operation to continue the operation. For more information see `Query and Scan`_ in the Amazon DynamoDB Developer Guide. :type select: string :param select: The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index. + `ALL_ATTRIBUTES`: Returns all of the item attributes. For a table, this is the default. For an index, this mode causes Amazon DynamoDB to fetch the full item from the table for each matching item in the index. If the index is configured to project all item attributes, the matching items will not be fetched from the table. Fetching items from the table incurs additional throughput cost and latency. + `ALL_PROJECTED_ATTRIBUTES`: Retrieves all attributes which have been projected into the index. If the index is configured to project all attributes, this is equivalent to specifying ALL_ATTRIBUTES . + `COUNT`: Returns the number of matching items, rather than the matching items themselves. + `SPECIFIC_ATTRIBUTES` : Returns only the attributes listed in AttributesToGet . This is equivalent to specifying AttributesToGet without specifying any value for Select . If you are querying an index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the index, Amazon DynamoDB will need to fetch each matching item from the table. This extra fetching incurs additional throughput cost and latency. When neither Select nor AttributesToGet are specified, Amazon DynamoDB defaults to `ALL_ATTRIBUTES` when accessing a table, and `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use both Select and AttributesToGet together in a single request, unless the value for Select is `SPECIFIC_ATTRIBUTES`. (This usage is equivalent to specifying AttributesToGet without any value for Select .) :type scan_filter: map :param scan_filter: Evaluates the scan results and returns only the desired values. Multiple conditions are treated as "AND" operations: all conditions must be met to be included in the results. Each ScanConditions element consists of an attribute name to compare, along with the following: + AttributeValueList - One or more values to evaluate against the supplied attribute. This list contains exactly one value, except for a `BETWEEN` or `IN` comparison, in which case the list contains two values. For type Number, value comparisons are numeric. String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, `a` is greater than `A`, and `aa` is greater than `B`. For a list of code values, see `http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters`_. For Binary, Amazon DynamoDB treats each byte of the binary data as unsigned when it compares binary values, for example when evaluating query expressions. + ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than, less than, etc. 
Valid comparison operators for Scan: `EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN` For information on specifying data types in JSON, see `JSON Data Format`_ in the Amazon DynamoDB Developer Guide. The following are descriptions of each comparison operator. + `EQ` : Equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal `{"NS":["6", "2", "1"]}`. + `NE` : Not equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal `{"NS":["6", "2", "1"]}`. + `LE` : Less than or equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `LT` : Less than. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `GE` : Greater than or equal. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `GT` : Greater than. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}`. + `NOT_NULL` : The attribute exists. + `NULL` : The attribute does not exist. + `CONTAINS` : checks for a subsequence, or value in a set. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If the target attribute of the comparison is a String, then the operation checks for a substring match. If the target attribute of the comparison is Binary, then the operation looks for a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operation checks for a member of the set (not as a substring). + `NOT_CONTAINS` : checks for absence of a subsequence, or absence of a value in a set. AttributeValueList can contain only one AttributeValue of type String, Number, or Binary (not a set). If the target attribute of the comparison is a String, then the operation checks for the absence of a substring match. 
If the target attribute of the comparison is Binary, then the operation checks for the absence of a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operation checks for the absence of a member of the set (not as a substring). + `BEGINS_WITH` : checks for a prefix. AttributeValueList can contain only one AttributeValue of type String or Binary (not a Number or a set). The target attribute of the comparison must be a String or Binary (not a Number or a set). + `IN` : checks for exact matches. AttributeValueList can contain more than one AttributeValue of type String, Number, or Binary (not a set). The target attribute of the comparison must be of the same type and exact value to match. A String never matches a String set. + `BETWEEN` : Greater than or equal to the first value, and less than or equal to the second value. AttributeValueList must contain two AttributeValue elements of the same type, either String, Number, or Binary (not a set). A target attribute matches if the target value is greater than, or equal to, the first element and less than, or equal to, the second element. If an item contains an AttributeValue of a different type than the one specified in the request, the value does not match. For example, `{"S":"6"}` does not compare to `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6", "2", "1"]}` :type exclusive_start_key: map :param exclusive_start_key: The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation. The data type for ExclusiveStartKey must be String, Number or Binary. No set data types are allowed. In a parallel scan, a Scan request that includes ExclusiveStartKey must specify the same segment whose previous Scan returned the corresponding value of LastEvaluatedKey . :type return_consumed_capacity: string :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity is included in the response; if set to `NONE` (the default), ConsumedCapacity is not included. :type total_segments: integer :param total_segments: For a parallel Scan request, TotalSegments represents the total number of segments into which the Scan operation will be divided. The value of TotalSegments corresponds to the number of application workers that will perform the parallel scan. For example, if you want to scan a table using four application threads, you would specify a TotalSegments value of 4. The value for TotalSegments must be greater than or equal to 1, and less than or equal to 4096. If you specify a TotalSegments value of 1, the Scan will be sequential rather than parallel. If you specify TotalSegments , you must also specify Segment . :type segment: integer :param segment: For a parallel Scan request, Segment identifies an individual segment to be scanned by an application worker. Segment IDs are zero-based, so the first segment is always 0. For example, if you want to scan a table using four application threads, the first thread would specify a Segment value of 0, the second thread would specify 1, and so on. The value of LastEvaluatedKey returned from a parallel Scan request must be used as ExclusiveStartKey with the same Segment ID in a subsequent Scan operation. The value for Segment must be greater than or equal to 0, and less than the value provided for TotalSegments . If you specify Segment , you must also specify TotalSegments . 
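        Example::

            # A minimal sketch of a parallel scan, assuming an existing
            # layer1 connection ``conn`` and a table named 'users' (both
            # illustrative). Two workers would each run one segment.
            >>> result = conn.scan('users', total_segments=2, segment=0)
            >>> items = result['Items']
            >>> last_key = result.get('LastEvaluatedKey')
            >>> if last_key is not None:
            ...     result = conn.scan('users', total_segments=2,
            ...                        segment=0,
            ...                        exclusive_start_key=last_key)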
""" params = {'TableName': table_name, } if attributes_to_get is not None: params['AttributesToGet'] = attributes_to_get if limit is not None: params['Limit'] = limit if select is not None: params['Select'] = select if scan_filter is not None: params['ScanFilter'] = scan_filter if exclusive_start_key is not None: params['ExclusiveStartKey'] = exclusive_start_key if return_consumed_capacity is not None: params['ReturnConsumedCapacity'] = return_consumed_capacity if total_segments is not None: params['TotalSegments'] = total_segments if segment is not None: params['Segment'] = segment return self.make_request(action='Scan', body=json.dumps(params)) def update_item(self, table_name, key, attribute_updates=None, expected=None, return_values=None, return_consumed_capacity=None, return_item_collection_metrics=None): """ Edits an existing item's attributes, or inserts a new item if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). In addition to updating an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter. :type table_name: string :param table_name: The name of the table containing the item to update. :type key: map :param key: The primary key that defines the item. Each element consists of an attribute name and a value for that attribute. :type attribute_updates: map :param attribute_updates: The names of attributes to be modified, the action to perform on each, and the new value for each. If you are updating an attribute that is an index key attribute for any indexes on that table, the attribute type must match the index key type defined in the AttributesDefinition of the table description. You can use UpdateItem to update any non-key attributes. Attribute values cannot be null. String and binary type attributes must have lengths greater than zero. Set type attributes must not be empty. Requests with empty values will be rejected with a ValidationException . Each AttributeUpdates element consists of an attribute name to modify, along with the following: + Value - The new value, if applicable, for this attribute. + Action - Specifies how to perform the update. Valid values for Action are `PUT`, `DELETE`, and `ADD`. The behavior depends on whether the specified primary key already exists in the table. **If an item with the specified Key is found in the table:** + `PUT` - Adds the specified attribute to the item. If the attribute already exists, it is replaced by the new value. + `DELETE` - If no value is specified, the attribute and its value are removed from the item. The data type of the specified value must match the existing value's data type. If a set of values is specified, then those values are subtracted from the old set. For example, if the attribute value was the set `[a,b,c]` and the DELETE action specified `[a,c]`, then the final attribute value would be `[b]`. Specifying an empty set is an error. + `ADD` - If the attribute does not already exist, then the attribute and its values are added to the item. If the attribute does exist, then the behavior of `ADD` depends on the data type of the attribute: + If the existing attribute is a number, and if Value is also a number, then the Value is mathematically added to the existing attribute. If Value is a negative number, then it is subtracted from the existing attribute. 
If you use `ADD` to increment or decrement a number value for an item that doesn't exist before the update, Amazon DynamoDB uses 0 as the initial value. In addition, if you use `ADD` to update an existing item, and intend to increment or decrement an attribute value which does not yet exist, Amazon DynamoDB uses `0` as the initial value. For example, suppose that the item you want to update does not yet have an attribute named itemcount , but you decide to `ADD` the number `3` to this attribute anyway, even though it currently does not exist. Amazon DynamoDB will create the itemcount attribute, set its initial value to `0`, and finally add `3` to it. The result will be a new itemcount attribute in the item, with a value of `3`. + If the existing data type is a set, and if the Value is also a set, then the Value is added to the existing set. (This is a set operation, not mathematical addition.) For example, if the attribute value was the set `[1,2]`, and the `ADD` action specified `[3]`, then the final attribute value would be `[1,2,3]`. An error occurs if an Add action is specified for a set attribute and the attribute type specified does not match the existing set type. Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the Value must also be a set of strings. The same holds true for number sets and binary sets. This action is only valid for an existing attribute whose data type is number or is a set. Do not use `ADD` for any other data types. **If no item with the specified Key is found:** + `PUT` - Amazon DynamoDB creates a new item with the specified primary key, and then adds the attribute. + `DELETE` - Nothing happens; there is no attribute to delete. + `ADD` - Amazon DynamoDB creates an item with the supplied primary key and number (or set of numbers) for the attribute value. The only data types allowed are number and number set; no other data types can be specified. If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition. :type expected: map :param expected: A map of attribute/condition pairs. This is the conditional block for the UpdateItem operation. All the conditions must be met for the operation to succeed. Expected allows you to provide an attribute name, and whether or not Amazon DynamoDB should check to see if the attribute value already exists; or if the attribute value exists and has a particular value before changing it. Each item in Expected represents an attribute name for Amazon DynamoDB to check, along with the following: + Value - The attribute value for Amazon DynamoDB to check. + Exists - Causes Amazon DynamoDB to evaluate the value before attempting a conditional operation: + If Exists is `True`, Amazon DynamoDB will check to see if that attribute value already exists in the table. If it is found, then the operation succeeds. If it is not found, the operation fails with a ConditionalCheckFailedException . + If Exists is `False`, Amazon DynamoDB assumes that the attribute value does not exist in the table. If in fact the value does not exist, then the assumption is valid and the operation succeeds. If the value is found, despite the assumption that it does not exist, the operation fails with a ConditionalCheckFailedException . The default setting for Exists is `True`. 
If you supply a Value all by itself, Amazon DynamoDB assumes the attribute exists: You don't have to set Exists to `True`, because it is implied. Amazon DynamoDB returns a ValidationException if: + Exists is `True` but there is no Value to check. (You expect a value to exist, but don't specify what that value is.) + Exists is `False` but you also specify a Value . (You cannot expect an attribute to have a value, while also expecting it not to exist.) If you specify more than one condition for Exists , then all of the conditions must evaluate to true. (In other words, the conditions are ANDed together.) Otherwise, the conditional operation will fail. :type return_values: string :param return_values: Use ReturnValues if you want to get the item attributes as they appeared either before or after they were updated. For UpdateItem , the valid values are: + `NONE` - If ReturnValues is not specified, or if its value is `NONE`, then nothing is returned. (This is the default for ReturnValues .) + `ALL_OLD` - If UpdateItem overwrote an attribute name-value pair, then the content of the old item is returned. + `UPDATED_OLD` - The old versions of only the updated attributes are returned. + `ALL_NEW` - All of the attributes of the new version of the item are returned. + `UPDATED_NEW` - The new versions of only the updated attributes are returned. :type return_consumed_capacity: string :param return_consumed_capacity: If set to `TOTAL`, ConsumedCapacity is included in the response; if set to `NONE` (the default), ConsumedCapacity is not included. :type return_item_collection_metrics: string :param return_item_collection_metrics: If set to `SIZE`, statistics about item collections, if any, that were modified during the operation are returned in the response. If set to `NONE` (the default), no statistics are returned. """ params = {'TableName': table_name, 'Key': key, } if attribute_updates is not None: params['AttributeUpdates'] = attribute_updates if expected is not None: params['Expected'] = expected if return_values is not None: params['ReturnValues'] = return_values if return_consumed_capacity is not None: params['ReturnConsumedCapacity'] = return_consumed_capacity if return_item_collection_metrics is not None: params['ReturnItemCollectionMetrics'] = return_item_collection_metrics return self.make_request(action='UpdateItem', body=json.dumps(params)) def update_table(self, table_name, provisioned_throughput=None, global_secondary_index_updates=None): """ Updates the provisioned throughput for the given table. Setting the throughput for a table helps you manage performance and is part of the provisioned throughput feature of Amazon DynamoDB. The provisioned throughput values can be upgraded or downgraded based on the maximums and minimums listed in the `Limits`_ section in the Amazon DynamoDB Developer Guide. The table must be in the `ACTIVE` state for this operation to succeed. UpdateTable is an asynchronous operation; while executing the operation, the table is in the `UPDATING` state. While the table is in the `UPDATING` state, the table still has the provisioned throughput from before the call. The new provisioned throughput setting is in effect only when the table returns to the `ACTIVE` state after the UpdateTable operation. You cannot add, modify or delete local secondary indexes using UpdateTable . Local secondary indexes can only be defined at table creation time. :type table_name: string :param table_name: The name of the table to be updated.
:type provisioned_throughput: dict :param provisioned_throughput: The provisioned throughput settings for the specified table. The settings can be modified using the UpdateTable operation. For current minimum and maximum provisioned throughput values, see `Limits`_ in the Amazon DynamoDB Developer Guide. :type global_secondary_index_updates: list :param global_secondary_index_updates: """ params = {'TableName': table_name, } if provisioned_throughput is not None: params['ProvisionedThroughput'] = provisioned_throughput if global_secondary_index_updates is not None: params['GlobalSecondaryIndexUpdates'] = global_secondary_index_updates return self.make_request(action='UpdateTable', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.host, 'Content-Type': 'application/x-amz-json-1.0', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body, host=self.host) response = self._mexe(http_request, sender=None, override_num_retries=self.NumberRetries, retry_handler=self._retry_handler) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) def _retry_handler(self, response, i, next_sleep): status = None boto.log.debug("Saw HTTP status: %s" % response.status) if response.status == 400: response_body = response.read() boto.log.debug(response_body) data = json.loads(response_body) if 'ProvisionedThroughputExceededException' in data.get('__type'): self.throughput_exceeded_events += 1 msg = "%s, retry attempt %s" % ( 'ProvisionedThroughputExceededException', i ) next_sleep = self._exponential_time(i) i += 1 status = (msg, i, next_sleep) if i == self.NumberRetries: # If this was our last retry attempt, raise # a specific error saying that the throughput # was exceeded. raise exceptions.ProvisionedThroughputExceededException( response.status, response.reason, data) elif 'ConditionalCheckFailedException' in data.get('__type'): raise exceptions.ConditionalCheckFailedException( response.status, response.reason, data) elif 'ValidationException' in data.get('__type'): raise exceptions.ValidationException( response.status, response.reason, data) else: raise self.ResponseError(response.status, response.reason, data) expected_crc32 = response.getheader('x-amz-crc32') if self._validate_checksums and expected_crc32 is not None: boto.log.debug('Validating crc32 checksum for body: %s', response.read()) actual_crc32 = crc32(response.read()) & 0xffffffff expected_crc32 = int(expected_crc32) if actual_crc32 != expected_crc32: msg = ("The calculated checksum %s did not match the expected " "checksum %s" % (actual_crc32, expected_crc32)) status = (msg, i + 1, self._exponential_time(i)) return status def _exponential_time(self, i): if i == 0: next_sleep = 0 else: next_sleep = 0.05 * (2 ** i) return next_sleep boto-2.20.1/boto/dynamodb2/results.py000066400000000000000000000123101225267101000174250ustar00rootroot00000000000000class ResultSet(object): """ A class used to lazily handle page-to-page navigation through a set of results. 
It presents a transparent iterator interface, so that all the user has to do is use it in a typical ``for`` loop (or list comprehension, etc.) to fetch results, even if they weren't present in the current page of results. This is used by the ``Table.query`` & ``Table.scan`` methods. Example:: >>> users = Table('users') >>> results = ResultSet() >>> results.to_call(users.query, username__gte='johndoe') # Now iterate. When it runs out of results, it'll fetch the next page. >>> for res in results: ... print res['username'] """ def __init__(self): super(ResultSet, self).__init__() self.the_callable = None self.call_args = [] self.call_kwargs = {} self._results = [] self._offset = -1 self._results_left = True self._last_key_seen = None @property def first_key(self): return 'exclusive_start_key' def _reset(self): """ Resets the internal state of the ``ResultSet``. This prevents results from being cached long-term & consuming excess memory. Largely internal. """ self._results = [] self._offset = 0 def __iter__(self): return self def next(self): self._offset += 1 if self._offset >= len(self._results): if self._results_left is False: raise StopIteration() self.fetch_more() # It's possible that previous call to ``fetch_more`` may not return # anything useful but there may be more results. Loop until we get # something back, making sure we guard for no results left. while not len(self._results) and self._results_left: self.fetch_more() if self._offset < len(self._results): return self._results[self._offset] else: raise StopIteration() def to_call(self, the_callable, *args, **kwargs): """ Sets up the callable & any arguments to run it with. This is stored for subsequent calls so that those queries can be run without requiring user intervention. Example:: # Just an example callable. >>> def squares_to(y): ... for x in range(1, y): ... yield x**2 >>> rs = ResultSet() # Set up what to call & arguments. >>> rs.to_call(squares_to, y=3) """ if not callable(the_callable): raise ValueError( 'You must supply an object or function to be called.' ) self.the_callable = the_callable self.call_args = args self.call_kwargs = kwargs def fetch_more(self): """ When the iterator runs out of results, this method is run to re-execute the callable (& arguments) to fetch the next page. Largely internal. """ self._reset() args = self.call_args[:] kwargs = self.call_kwargs.copy() if self._last_key_seen is not None: kwargs[self.first_key] = self._last_key_seen results = self.the_callable(*args, **kwargs) new_results = results.get('results', []) self._last_key_seen = results.get('last_key', None) if len(new_results): self._results.extend(results['results']) # Decrease the limit, if it's present. if self.call_kwargs.get('limit'): self.call_kwargs['limit'] -= len(results['results']) # and if limit hits zero, we don't have any more # results to look for if 0 == self.call_kwargs['limit']: self._results_left = False if self._last_key_seen is None: self._results_left = False class BatchGetResultSet(ResultSet): def __init__(self, *args, **kwargs): self._keys_left = kwargs.pop('keys', []) self._max_batch_get = kwargs.pop('max_batch_get', 100) super(BatchGetResultSet, self).__init__(*args, **kwargs) def fetch_more(self): self._reset() args = self.call_args[:] kwargs = self.call_kwargs.copy() # Slice off the max we can fetch. 
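        # ``self._max_batch_get`` defaults to 100, matching the
        # BatchGetItem per-request ceiling; any keys beyond it remain in
        # ``self._keys_left`` for the next ``fetch_more`` call.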
kwargs['keys'] = self._keys_left[:self._max_batch_get] self._keys_left = self._keys_left[self._max_batch_get:] results = self.the_callable(*args, **kwargs) if not len(results.get('results', [])): self._results_left = False return self._results.extend(results['results']) for offset, key_data in enumerate(results.get('unprocessed_keys', [])): # We've got an unprocessed key. Reinsert it into the list. # DynamoDB only returns valid keys, so there should be no risk of # missing keys ever making it here. self._keys_left.insert(offset, key_data) if len(self._keys_left) <= 0: self._results_left = False # Decrease the limit, if it's present. if self.call_kwargs.get('limit'): self.call_kwargs['limit'] -= len(results['results']) boto-2.20.1/boto/dynamodb2/table.py000066400000000000000000001205101225267101000170150ustar00rootroot00000000000000import boto from boto.dynamodb2 import exceptions from boto.dynamodb2.fields import (HashKey, RangeKey, AllIndex, KeysOnlyIndex, IncludeIndex) from boto.dynamodb2.items import Item from boto.dynamodb2.layer1 import DynamoDBConnection from boto.dynamodb2.results import ResultSet, BatchGetResultSet from boto.dynamodb2.types import Dynamizer, FILTER_OPERATORS, QUERY_OPERATORS class Table(object): """ Interacts & models the behavior of a DynamoDB table. The ``Table`` object represents a set (or rough categorization) of records within DynamoDB. The important part is that all records within the table, while largely-schema-free, share the same schema & are essentially namespaced for use in your application. For example, you might have a ``users`` table or a ``forums`` table. """ max_batch_get = 100 def __init__(self, table_name, schema=None, throughput=None, indexes=None, connection=None): """ Sets up a new in-memory ``Table``. This is useful if the table already exists within DynamoDB & you simply want to use it for additional interactions. The only required parameter is the ``table_name``. However, under the hood, the object will call ``describe_table`` to determine the schema/indexes/throughput. You can avoid this extra call by passing in ``schema`` & ``indexes``. **IMPORTANT** - If you're creating a new ``Table`` for the first time, you should use the ``Table.create`` method instead, as it will persist the table structure to DynamoDB. Requires a ``table_name`` parameter, which should be a simple string of the name of the table. Optionally accepts a ``schema`` parameter, which should be a list of ``BaseSchemaField`` subclasses representing the desired schema. Optionally accepts a ``throughput`` parameter, which should be a dictionary. If provided, it should specify a ``read`` & ``write`` key, both of which should have an integer value associated with them. Optionally accepts a ``indexes`` parameter, which should be a list of ``BaseIndexField`` subclasses representing the desired indexes. Optionally accepts a ``connection`` parameter, which should be a ``DynamoDBConnection`` instance (or subclass). This is primarily useful for specifying alternate connection parameters. Example:: # The simple, it-already-exists case. >>> conn = Table('users') # The full, minimum-extra-calls case. >>> from boto import dynamodb2 >>> users = Table('users', schema=[ ... HashKey('username'), ... RangeKey('date_joined', data_type=NUMBER) ... ], throughput={ ... 'read':20, ... 'write': 10, ... }, indexes=[ ... KeysOnlyIndex('MostRecentlyJoined', parts=[ ... RangeKey('date_joined') ... ]), ... ], ... connection=dynamodb2.connect_to_region('us-west-2', ... aws_access_key_id='key', ... 
aws_secret_access_key='key', ... )) """ self.table_name = table_name self.connection = connection self.throughput = { 'read': 5, 'write': 5, } self.schema = schema self.indexes = indexes if self.connection is None: self.connection = DynamoDBConnection() if throughput is not None: self.throughput = throughput self._dynamizer = Dynamizer() @classmethod def create(cls, table_name, schema, throughput=None, indexes=None, connection=None): """ Creates a new table in DynamoDB & returns an in-memory ``Table`` object. This will setup a brand new table within DynamoDB. The ``table_name`` must be unique for your AWS account. The ``schema`` is also required to define the key structure of the table. **IMPORTANT** - You should consider the usage pattern of your table up-front, as the schema & indexes can **NOT** be modified once the table is created, requiring the creation of a new table & migrating the data should you wish to revise it. **IMPORTANT** - If the table already exists in DynamoDB, additional calls to this method will result in an error. If you just need a ``Table`` object to interact with the existing table, you should just initialize a new ``Table`` object, which requires only the ``table_name``. Requires a ``table_name`` parameter, which should be a simple string of the name of the table. Requires a ``schema`` parameter, which should be a list of ``BaseSchemaField`` subclasses representing the desired schema. Optionally accepts a ``throughput`` parameter, which should be a dictionary. If provided, it should specify a ``read`` & ``write`` key, both of which should have an integer value associated with them. Optionally accepts a ``indexes`` parameter, which should be a list of ``BaseIndexField`` subclasses representing the desired indexes. Optionally accepts a ``connection`` parameter, which should be a ``DynamoDBConnection`` instance (or subclass). This is primarily useful for specifying alternate connection parameters. Example:: >>> users = Table.create('users', schema=[ ... HashKey('username'), ... RangeKey('date_joined', data_type=NUMBER) ... ], throughput={ ... 'read':20, ... 'write': 10, ... }, indexes=[ ... KeysOnlyIndex('MostRecentlyJoined', parts=[ ... RangeKey('date_joined') ... ]), ... ]) """ table = cls(table_name=table_name, connection=connection) table.schema = schema if throughput is not None: table.throughput = throughput if indexes is not None: table.indexes = indexes # Prep the schema. raw_schema = [] attr_defs = [] for field in table.schema: raw_schema.append(field.schema()) # Build the attributes off what we know. attr_defs.append(field.definition()) raw_throughput = { 'ReadCapacityUnits': int(table.throughput['read']), 'WriteCapacityUnits': int(table.throughput['write']), } kwargs = {} if table.indexes: # Prep the LSIs. raw_lsi = [] for index_field in table.indexes: raw_lsi.append(index_field.schema()) # Again, build the attributes off what we know. # HOWEVER, only add attributes *NOT* already seen. 
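                # (CreateTable rejects duplicate AttributeName entries in
                # AttributeDefinitions, so skip anything the key schema
                # already contributed.)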
attr_define = index_field.definition() for part in attr_define: attr_names = [attr['AttributeName'] for attr in attr_defs] if not part['AttributeName'] in attr_names: attr_defs.append(part) kwargs['local_secondary_indexes'] = raw_lsi table.connection.create_table( table_name=table.table_name, attribute_definitions=attr_defs, key_schema=raw_schema, provisioned_throughput=raw_throughput, **kwargs ) return table def _introspect_schema(self, raw_schema): """ Given a raw schema structure back from a DynamoDB response, parse out & build the high-level Python objects that represent them. """ schema = [] for field in raw_schema: if field['KeyType'] == 'HASH': schema.append(HashKey(field['AttributeName'])) elif field['KeyType'] == 'RANGE': schema.append(RangeKey(field['AttributeName'])) else: raise exceptions.UnknownSchemaFieldError( "%s was seen, but is unknown. Please report this at " "https://github.com/boto/boto/issues." % field['KeyType'] ) return schema def _introspect_indexes(self, raw_indexes): """ Given a raw index structure back from a DynamoDB response, parse out & build the high-level Python objects that represent them. """ indexes = [] for field in raw_indexes: index_klass = AllIndex kwargs = { 'parts': [] } if field['Projection']['ProjectionType'] == 'ALL': index_klass = AllIndex elif field['Projection']['ProjectionType'] == 'KEYS_ONLY': index_klass = KeysOnlyIndex elif field['Projection']['ProjectionType'] == 'INCLUDE': index_klass = IncludeIndex kwargs['includes'] = field['Projection']['NonKeyAttributes'] else: raise exceptions.UnknownIndexFieldError( "%s was seen, but is unknown. Please report this at " "https://github.com/boto/boto/issues." % \ field['Projection']['ProjectionType'] ) name = field['IndexName'] kwargs['parts'] = self._introspect_schema(field['KeySchema']) indexes.append(index_klass(name, **kwargs)) return indexes def describe(self): """ Describes the current structure of the table in DynamoDB. This information will be used to update the ``schema``, ``indexes`` and ``throughput`` information on the ``Table``. Some calls, such as those involving creating keys or querying, will require this information to be populated. It also returns the full raw datastructure from DynamoDB, in the event you'd like to parse out additional information (such as the ``ItemCount`` or usage information). Example:: >>> users.describe() { # Lots of keys here... } >>> len(users.schema) 2 """ result = self.connection.describe_table(self.table_name) # Blindly update throughput, since what's on DynamoDB's end is likely # more correct. raw_throughput = result['Table']['ProvisionedThroughput'] self.throughput['read'] = int(raw_throughput['ReadCapacityUnits']) self.throughput['write'] = int(raw_throughput['WriteCapacityUnits']) if not self.schema: # Since we have the data, build the schema. raw_schema = result['Table'].get('KeySchema', []) self.schema = self._introspect_schema(raw_schema) if not self.indexes: # Build the index information as well. raw_indexes = result['Table'].get('LocalSecondaryIndexes', []) self.indexes = self._introspect_indexes(raw_indexes) # This is leaky. return result def update(self, throughput): """ Updates table attributes in DynamoDB. Currently, the only thing you can modify about a table after it has been created is the throughput. Requires a ``throughput`` parameter, which should be a dictionary. If provided, it should specify a ``read`` & ``write`` key, both of which should have an integer value associated with them. Returns ``True`` on success. 
Example:: # For a read-heavier application... >>> users.update(throughput={ ... 'read': 20, ... 'write': 10, ... }) True """ self.throughput = throughput self.connection.update_table(self.table_name, { 'ReadCapacityUnits': int(self.throughput['read']), 'WriteCapacityUnits': int(self.throughput['write']), }) return True def delete(self): """ Deletes a table in DynamoDB. **IMPORTANT** - Be careful when using this method, there is no undo. Returns ``True`` on success. Example:: >>> users.delete() True """ self.connection.delete_table(self.table_name) return True def _encode_keys(self, keys): """ Given a flat Python dictionary of keys/values, converts it into the nested dictionary DynamoDB expects. Converts:: { 'username': 'john', 'tags': [1, 2, 5], } ...to...:: { 'username': {'S': 'john'}, 'tags': {'NS': ['1', '2', '5']}, } """ raw_key = {} for key, value in keys.items(): raw_key[key] = self._dynamizer.encode(value) return raw_key def get_item(self, consistent=False, **kwargs): """ Fetches an item (record) from a table in DynamoDB. To specify the key of the item you'd like to get, you can specify the key attributes as kwargs. Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, it will perform a consistent (but more expensive) read from DynamoDB. (Default: ``False``) Returns an ``Item`` instance containing all the data for that record. Example:: # A simple hash key. >>> john = users.get_item(username='johndoe') >>> john['first_name'] 'John' # A complex hash+range key. >>> john = users.get_item(username='johndoe', last_name='Doe') >>> john['first_name'] 'John' # A consistent read (assuming the data might have just changed). >>> john = users.get_item(username='johndoe', consistent=True) >>> john['first_name'] 'Johann' # With a key that is an invalid variable name in Python. # Also, assumes a different schema than previous examples. >>> john = users.get_item(**{ ... 'date-joined': 127549192, ... }) >>> john['first_name'] 'John' """ raw_key = self._encode_keys(kwargs) item_data = self.connection.get_item( self.table_name, raw_key, consistent_read=consistent ) item = Item(self) item.load(item_data) return item def lookup(self, *args, **kwargs): """ Look up an entry in DynamoDB. This is mostly backwards compatible with boto.dynamodb. Unlike get_item, it takes hash_key and range_key first, although you may still specify keyword arguments instead. Also unlike the get_item command, if the returned item has no keys (i.e., it does not exist in DynamoDB), a None result is returned, instead of an empty key object. Example:: >>> user = users.lookup(username) >>> user = users.lookup(username, consistent=True) >>> app = apps.lookup('my_customer_id', 'my_app_id') """ if not self.schema: self.describe() for x, arg in enumerate(args): kwargs[self.schema[x].name] = arg ret = self.get_item(**kwargs) if not ret.keys(): return None return ret def new_item(self, *args): """ Returns a new, blank item This is mostly for consistency with boto.dynamodb """ if not self.schema: self.describe() data = {} for x, arg in enumerate(args): data[self.schema[x].name] = arg return Item(self, data=data) def put_item(self, data, overwrite=False): """ Saves an entire item to DynamoDB. By default, if any part of the ``Item``'s original data doesn't match what's currently in DynamoDB, this request will fail. 
This prevents other processes from updating the data in between when you read the item & when your request to update the item's data is processed, which would typically result in some data loss. Requires a ``data`` parameter, which should be a dictionary of the data you'd like to store in DynamoDB. Optionally accepts an ``overwrite`` parameter, which should be a boolean. If you provide ``True``, this will tell DynamoDB to blindly overwrite whatever data is present, if any. Returns ``True`` on success. Example:: >>> users.put_item(data={ ... 'username': 'jane', ... 'first_name': 'Jane', ... 'last_name': 'Doe', ... 'date_joined': 126478915, ... }) True """ item = Item(self, data=data) return item.save(overwrite=overwrite) def _put_item(self, item_data, expects=None): """ The internal variant of ``put_item`` (full data). This is used by the ``Item`` objects, since that operation is represented at the table-level by the API, but conceptually maps better to telling an individual ``Item`` to save itself. """ kwargs = {} if expects is not None: kwargs['expected'] = expects self.connection.put_item(self.table_name, item_data, **kwargs) return True def _update_item(self, key, item_data, expects=None): """ The internal variant of ``put_item`` (partial data). This is used by the ``Item`` objects, since that operation is represented at the table-level by the API, but conceptually maps better to telling an individual ``Item`` to save itself. """ raw_key = self._encode_keys(key) kwargs = {} if expects is not None: kwargs['expected'] = expects self.connection.update_item(self.table_name, raw_key, item_data, **kwargs) return True def delete_item(self, **kwargs): """ Deletes an item in DynamoDB. **IMPORTANT** - Be careful when using this method, there is no undo. To specify the key of the item you'd like to get, you can specify the key attributes as kwargs. Returns ``True`` on success. Example:: # A simple hash key. >>> users.delete_item(username='johndoe') True # A complex hash+range key. >>> users.delete_item(username='jane', last_name='Doe') True # With a key that is an invalid variable name in Python. # Also, assumes a different schema than previous examples. >>> users.delete_item(**{ ... 'date-joined': 127549192, ... }) True """ raw_key = self._encode_keys(kwargs) self.connection.delete_item(self.table_name, raw_key) return True def get_key_fields(self): """ Returns the fields necessary to make a key for a table. If the ``Table`` does not already have a populated ``schema``, this will request it via a ``Table.describe`` call. Returns a list of fieldnames (strings). Example:: # A simple hash key. >>> users.get_key_fields() ['username'] # A complex hash+range key. >>> users.get_key_fields() ['username', 'last_name'] """ if not self.schema: # We don't know the structure of the table. Get a description to # populate the schema. self.describe() return [field.name for field in self.schema] def batch_write(self): """ Allows the batching of writes to DynamoDB. Since each write/delete call to DynamoDB has a cost associated with it, when loading lots of data, it makes sense to batch them, creating as few calls as possible. This returns a context manager that will transparently handle creating these batches. The object you get back lightly-resembles a ``Table`` object, sharing just the ``put_item`` & ``delete_item`` methods (which are all that DynamoDB can batch in terms of writing data). DynamoDB's maximum batch size is 25 items per request. 
If you attempt to put/delete more than that, the context manager will batch as many as it can up to that number, then flush them to DynamoDB & continue batching as more calls come in. Example:: # Assuming a table with one record... >>> with users.batch_write() as batch: ... batch.put_item(data={ ... 'username': 'johndoe', ... 'first_name': 'John', ... 'last_name': 'Doe', ... 'owner': 1, ... }) ... # Nothing across the wire yet. ... batch.delete_item(username='bob') ... # Still no requests sent. ... batch.put_item(data={ ... 'username': 'jane', ... 'first_name': 'Jane', ... 'last_name': 'Doe', ... 'date_joined': 127436192, ... }) ... # Nothing yet, but once we leave the context, the ... # put/deletes will be sent. """ # PHENOMENAL COSMIC DOCS!!! itty-bitty code. return BatchTable(self) def _build_filters(self, filter_kwargs, using=QUERY_OPERATORS): """ An internal method for taking query/scan-style ``**kwargs`` & turning them into the raw structure DynamoDB expects for filtering. """ filters = {} for field_and_op, value in filter_kwargs.items(): field_bits = field_and_op.split('__') fieldname = '__'.join(field_bits[:-1]) try: op = using[field_bits[-1]] except KeyError: raise exceptions.UnknownFilterTypeError( "Operator '%s' from '%s' is not recognized." % ( field_bits[-1], field_and_op ) ) lookup = { 'AttributeValueList': [], 'ComparisonOperator': op, } # Special-case the ``NULL/NOT_NULL`` case. if field_bits[-1] == 'null': del lookup['AttributeValueList'] if value is False: lookup['ComparisonOperator'] = 'NOT_NULL' else: lookup['ComparisonOperator'] = 'NULL' # Special-case the ``BETWEEN`` case. elif field_bits[-1] == 'between': if len(value) == 2 and isinstance(value, (list, tuple)): lookup['AttributeValueList'].append( self._dynamizer.encode(value[0]) ) lookup['AttributeValueList'].append( self._dynamizer.encode(value[1]) ) # Special-case the ``IN`` case elif field_bits[-1] == 'in': for val in value: lookup['AttributeValueList'].append(self._dynamizer.encode(val)) else: # Fix up the value for encoding, because it was built to only work # with ``set``s. if isinstance(value, (list, tuple)): value = set(value) lookup['AttributeValueList'].append( self._dynamizer.encode(value) ) # Finally, insert it into the filters. filters[fieldname] = lookup return filters def query(self, limit=None, index=None, reverse=False, consistent=False, attributes=None, **filter_kwargs): """ Queries for a set of matching items in a DynamoDB table. Queries can be performed against a hash key, a hash+range key or against any data stored in your local secondary indexes. **Note** - You can not query against arbitrary fields within the data stored in DynamoDB. To specify the filters of the items you'd like to get, you can specify the filters as kwargs. Each filter kwarg should follow the pattern ``__=``. Optionally accepts a ``limit`` parameter, which should be an integer count of the total number of items to return. (Default: ``None`` - all results) Optionally accepts an ``index`` parameter, which should be a string of name of the local secondary index you want to query against. (Default: ``None``) Optionally accepts a ``reverse`` parameter, which will present the results in reverse order. (Default: ``None`` - normal order) Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, it will force a consistent read of the data (more expensive). (Default: ``False`` - use eventually consistent reads) Optionally accepts a ``attributes`` parameter, which should be a tuple. 
If you provide any attributes only these will be fetched from DynamoDB. This uses the ``AttributesToGet`` and set's ``Select`` to ``SPECIFIC_ATTRIBUTES`` API. Returns a ``ResultSet``, which transparently handles the pagination of results you get back. Example:: # Look for last names equal to "Doe". >>> results = users.query(last_name__eq='Doe') >>> for res in results: ... print res['first_name'] 'John' 'Jane' # Look for last names beginning with "D", in reverse order, limit 3. >>> results = users.query( ... last_name__beginswith='D', ... reverse=True, ... limit=3 ... ) >>> for res in results: ... print res['first_name'] 'Alice' 'Jane' 'John' # Use an LSI & a consistent read. >>> results = users.query( ... date_joined__gte=1236451000, ... owner__eq=1, ... index='DateJoinedIndex', ... consistent=True ... ) >>> for res in results: ... print res['first_name'] 'Alice' 'Bob' 'John' 'Fred' """ if self.schema: if len(self.schema) == 1 and len(filter_kwargs) <= 1: raise exceptions.QueryError( "You must specify more than one key to filter on." ) if attributes is not None: select = 'SPECIFIC_ATTRIBUTES' else: select = None results = ResultSet() kwargs = filter_kwargs.copy() kwargs.update({ 'limit': limit, 'index': index, 'reverse': reverse, 'consistent': consistent, 'select': select, 'attributes_to_get': attributes }) results.to_call(self._query, **kwargs) return results def query_count(self, index=None, consistent=False, **filter_kwargs): """ Queries the exact count of matching items in a DynamoDB table. Queries can be performed against a hash key, a hash+range key or against any data stored in your local secondary indexes. To specify the filters of the items you'd like to get, you can specify the filters as kwargs. Each filter kwarg should follow the pattern ``__=``. Optionally accepts an ``index`` parameter, which should be a string of name of the local secondary index you want to query against. (Default: ``None``) Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, it will force a consistent read of the data (more expensive). (Default: ``False`` - use eventually consistent reads) Returns an integer which represents the exact amount of matched items. Example:: # Look for last names equal to "Doe". >>> users.query_count(last_name__eq='Doe') 5 # Use an LSI & a consistent read. >>> users.query_count( ... date_joined__gte=1236451000, ... owner__eq=1, ... index='DateJoinedIndex', ... consistent=True ... ) 2 """ key_conditions = self._build_filters( filter_kwargs, using=QUERY_OPERATORS ) raw_results = self.connection.query( self.table_name, index_name=index, consistent_read=consistent, select='COUNT', key_conditions=key_conditions, ) return int(raw_results.get('Count', 0)) def _query(self, limit=None, index=None, reverse=False, consistent=False, exclusive_start_key=None, select=None, attributes_to_get=None, **filter_kwargs): """ The internal method that performs the actual queries. Used extensively by ``ResultSet`` to perform each (paginated) request. """ kwargs = { 'limit': limit, 'index_name': index, 'scan_index_forward': reverse, 'consistent_read': consistent, 'select': select, 'attributes_to_get': attributes_to_get } if exclusive_start_key: kwargs['exclusive_start_key'] = {} for key, value in exclusive_start_key.items(): kwargs['exclusive_start_key'][key] = \ self._dynamizer.encode(value) # Convert the filters into something we can actually use. 
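        # For example, ``username__eq='john'`` becomes
        # ``{'username': {'AttributeValueList': [{'S': 'john'}],
        #                 'ComparisonOperator': 'EQ'}}``.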
kwargs['key_conditions'] = self._build_filters( filter_kwargs, using=QUERY_OPERATORS ) raw_results = self.connection.query( self.table_name, **kwargs ) results = [] last_key = None for raw_item in raw_results.get('Items', []): item = Item(self) item.load({ 'Item': raw_item, }) results.append(item) if raw_results.get('LastEvaluatedKey', None): last_key = {} for key, value in raw_results['LastEvaluatedKey'].items(): last_key[key] = self._dynamizer.decode(value) return { 'results': results, 'last_key': last_key, } def scan(self, limit=None, segment=None, total_segments=None, **filter_kwargs): """ Scans across all items within a DynamoDB table. Scans can be performed against a hash key or a hash+range key. You can additionally filter the results after the table has been read but before the response is returned. To specify the filters of the items you'd like to get, you can specify the filters as kwargs. Each filter kwarg should follow the pattern ``__=``. Optionally accepts a ``limit`` parameter, which should be an integer count of the total number of items to return. (Default: ``None`` - all results) Returns a ``ResultSet``, which transparently handles the pagination of results you get back. Example:: # All results. >>> everything = users.scan() # Look for last names beginning with "D". >>> results = users.scan(last_name__beginswith='D') >>> for res in results: ... print res['first_name'] 'Alice' 'John' 'Jane' # Use an ``IN`` filter & limit. >>> results = users.scan( ... age__in=[25, 26, 27, 28, 29], ... limit=1 ... ) >>> for res in results: ... print res['first_name'] 'Alice' """ results = ResultSet() kwargs = filter_kwargs.copy() kwargs.update({ 'limit': limit, 'segment': segment, 'total_segments': total_segments, }) results.to_call(self._scan, **kwargs) return results def _scan(self, limit=None, exclusive_start_key=None, segment=None, total_segments=None, **filter_kwargs): """ The internal method that performs the actual scan. Used extensively by ``ResultSet`` to perform each (paginated) request. """ kwargs = { 'limit': limit, 'segment': segment, 'total_segments': total_segments, } if exclusive_start_key: kwargs['exclusive_start_key'] = {} for key, value in exclusive_start_key.items(): kwargs['exclusive_start_key'][key] = \ self._dynamizer.encode(value) # Convert the filters into something we can actually use. kwargs['scan_filter'] = self._build_filters( filter_kwargs, using=FILTER_OPERATORS ) raw_results = self.connection.scan( self.table_name, **kwargs ) results = [] last_key = None for raw_item in raw_results.get('Items', []): item = Item(self) item.load({ 'Item': raw_item, }) results.append(item) if raw_results.get('LastEvaluatedKey', None): last_key = {} for key, value in raw_results['LastEvaluatedKey'].items(): last_key[key] = self._dynamizer.decode(value) return { 'results': results, 'last_key': last_key, } def batch_get(self, keys, consistent=False): """ Fetches many specific items in batch from a table. Requires a ``keys`` parameter, which should be a list of dictionaries. Each dictionary should consist of the keys values to specify. Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, a strongly consistent read will be used. (Default: False) Returns a ``ResultSet``, which transparently handles the pagination of results you get back. Example:: >>> results = users.batch_get(keys=[ ... { ... 'username': 'johndoe', ... }, ... { ... 'username': 'jane', ... }, ... { ... 'username': 'fred', ... }, ... ]) >>> for res in results: ... 
print res['first_name'] 'John' 'Jane' 'Fred' """ # We pass the keys to the constructor instead, so it can maintain it's # own internal state as to what keys have been processed. results = BatchGetResultSet(keys=keys, max_batch_get=self.max_batch_get) results.to_call(self._batch_get, consistent=False) return results def _batch_get(self, keys, consistent=False): """ The internal method that performs the actual batch get. Used extensively by ``BatchGetResultSet`` to perform each (paginated) request. """ items = { self.table_name: { 'Keys': [], }, } if consistent: items[self.table_name]['ConsistentRead'] = True for key_data in keys: raw_key = {} for key, value in key_data.items(): raw_key[key] = self._dynamizer.encode(value) items[self.table_name]['Keys'].append(raw_key) raw_results = self.connection.batch_get_item(request_items=items) results = [] unprocessed_keys = [] for raw_item in raw_results['Responses'].get(self.table_name, []): item = Item(self) item.load({ 'Item': raw_item, }) results.append(item) raw_unproccessed = raw_results.get('UnprocessedKeys', {}) for raw_key in raw_unproccessed.get('Keys', []): py_key = {} for key, value in raw_key.items(): py_key[key] = self._dynamizer.decode(value) unprocessed_keys.append(py_key) return { 'results': results, # NEVER return a ``last_key``. Just in-case any part of # ``ResultSet`` peeks through, since much of the # original underlying implementation is based on this key. 'last_key': None, 'unprocessed_keys': unprocessed_keys, } def count(self): """ Returns a (very) eventually consistent count of the number of items in a table. Lag time is about 6 hours, so don't expect a high degree of accuracy. Example:: >>> users.count() 6 """ info = self.describe() return info['Table'].get('ItemCount', 0) class BatchTable(object): """ Used by ``Table`` as the context manager for batch writes. You likely don't want to try to use this object directly. """ def __init__(self, table): self.table = table self._to_put = [] self._to_delete = [] self._unprocessed = [] def __enter__(self): return self def __exit__(self, type, value, traceback): if self._to_put or self._to_delete: # Flush anything that's left. self.flush() if self._unprocessed: # Finally, handle anything that wasn't processed. self.resend_unprocessed() def put_item(self, data, overwrite=False): self._to_put.append(data) if self.should_flush(): self.flush() def delete_item(self, **kwargs): self._to_delete.append(kwargs) if self.should_flush(): self.flush() def should_flush(self): if len(self._to_put) + len(self._to_delete) == 25: return True return False def flush(self): batch_data = { self.table.table_name: [ # We'll insert data here shortly. ], } for put in self._to_put: item = Item(self.table, data=put) batch_data[self.table.table_name].append({ 'PutRequest': { 'Item': item.prepare_full(), } }) for delete in self._to_delete: batch_data[self.table.table_name].append({ 'DeleteRequest': { 'Key': self.table._encode_keys(delete), } }) resp = self.table.connection.batch_write_item(batch_data) self.handle_unprocessed(resp) self._to_put = [] self._to_delete = [] return True def handle_unprocessed(self, resp): if len(resp.get('UnprocessedItems', [])): table_name = self.table.table_name unprocessed = resp['UnprocessedItems'].get(table_name, []) # Some items have not been processed. Stow them for now & # re-attempt processing on ``__exit__``. msg = "%s items were unprocessed. Storing for later." 
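            # These are re-driven by ``resend_unprocessed`` when the
            # ``batch_write`` context manager exits.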
boto.log.info(msg % len(unprocessed)) self._unprocessed.extend(unprocessed) def resend_unprocessed(self): # If there are unprocessed records (for instance, the user was over # their throughput limitations), iterate over them & send until they're # all there. boto.log.info( "Re-sending %s unprocessed items." % len(self._unprocessed) ) while len(self._unprocessed): # Again, do 25 at a time. to_resend = self._unprocessed[:25] # Remove them from the list. self._unprocessed = self._unprocessed[25:] batch_data = { self.table.table_name: to_resend } boto.log.info("Sending %s items" % len(to_resend)) resp = self.table.connection.batch_write_item(batch_data) self.handle_unprocessed(resp) boto.log.info( "%s unprocessed items left" % len(self._unprocessed) ) boto-2.20.1/boto/dynamodb2/types.py000066400000000000000000000015331225267101000170750ustar00rootroot00000000000000# Shadow the DynamoDB v1 bits. # This way, no end user should have to cross-import between versions & we # reserve the namespace to extend v2 if it's ever needed. from boto.dynamodb.types import Dynamizer # Some constants for our use. STRING = 'S' NUMBER = 'N' BINARY = 'B' STRING_SET = 'SS' NUMBER_SET = 'NS' BINARY_SET = 'BS' QUERY_OPERATORS = { 'eq': 'EQ', 'lte': 'LE', 'lt': 'LT', 'gte': 'GE', 'gt': 'GT', 'beginswith': 'BEGINS_WITH', 'between': 'BETWEEN', } FILTER_OPERATORS = { 'eq': 'EQ', 'ne': 'NE', 'lte': 'LE', 'lt': 'LT', 'gte': 'GE', 'gt': 'GT', # FIXME: Is this necessary? i.e. ``whatever__null=False`` 'nnull': 'NOT_NULL', 'null': 'NULL', 'contains': 'CONTAINS', 'ncontains': 'NOT_CONTAINS', 'beginswith': 'BEGINS_WITH', 'in': 'IN', 'between': 'BETWEEN', } boto-2.20.1/boto/ec2/000077500000000000000000000000001225267101000141475ustar00rootroot00000000000000boto-2.20.1/boto/ec2/__init__.py000066400000000000000000000072151225267101000162650ustar00rootroot00000000000000# Copyright (c) 2006-2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # """ This module provides an interface to the Elastic Compute Cloud (EC2) service from AWS. 
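Typical usage, assuming credentials come from the environment or a boto
config file (the region name below is illustrative)::

    >>> import boto.ec2
    >>> conn = boto.ec2.connect_to_region('us-west-2')
    >>> reservations = conn.get_all_instances()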
""" from boto.ec2.connection import EC2Connection from boto.regioninfo import RegionInfo RegionData = { 'us-east-1': 'ec2.us-east-1.amazonaws.com', 'us-gov-west-1': 'ec2.us-gov-west-1.amazonaws.com', 'us-west-1': 'ec2.us-west-1.amazonaws.com', 'us-west-2': 'ec2.us-west-2.amazonaws.com', 'sa-east-1': 'ec2.sa-east-1.amazonaws.com', 'eu-west-1': 'ec2.eu-west-1.amazonaws.com', 'ap-northeast-1': 'ec2.ap-northeast-1.amazonaws.com', 'ap-southeast-1': 'ec2.ap-southeast-1.amazonaws.com', 'ap-southeast-2': 'ec2.ap-southeast-2.amazonaws.com', } def regions(**kw_params): """ Get all available regions for the EC2 service. You may pass any of the arguments accepted by the EC2Connection object's constructor as keyword arguments and they will be passed along to the EC2Connection object. :rtype: list :return: A list of :class:`boto.ec2.regioninfo.RegionInfo` """ regions = [] for region_name in RegionData: region = RegionInfo(name=region_name, endpoint=RegionData[region_name], connection_cls=EC2Connection) regions.append(region) return regions def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.ec2.connection.EC2Connection`. Any additional parameters after the region_name are passed on to the connect method of the region object. :type: str :param region_name: The name of the region to connect to. :rtype: :class:`boto.ec2.connection.EC2Connection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ if 'region' in kw_params and isinstance(kw_params['region'], RegionInfo)\ and region_name == kw_params['region'].name: return EC2Connection(**kw_params) for region in regions(**kw_params): if region.name == region_name: return region.connect(**kw_params) return None def get_region(region_name, **kw_params): """ Find and return a :class:`boto.ec2.regioninfo.RegionInfo` object given a region name. :type: str :param: The name of the region. :rtype: :class:`boto.ec2.regioninfo.RegionInfo` :return: The RegionInfo object for the given region or None if an invalid region name is provided. """ for region in regions(**kw_params): if region.name == region_name: return region return None boto-2.20.1/boto/ec2/address.py000066400000000000000000000110371225267101000161500ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.ec2.ec2object import EC2Object class Address(EC2Object): """ Represents an EC2 Elastic IP Address :ivar public_ip: The Elastic IP address. 
:ivar instance_id: The instance the address is associated with (if any). :ivar domain: Indicates whether the address is an EC2 address or a VPC address (standard|vpc). :ivar allocation_id: The allocation ID for the address (VPC addresses only). :ivar association_id: The association ID for the address (VPC addresses only). :ivar network_interface_id: The network interface (if any) that the address is associated with (VPC addresses only). :ivar network_interface_owner_id: The owner ID (VPC addresses only). :ivar private_ip_address: The private IP address associated with the Elastic IP address (VPC addresses only). """ def __init__(self, connection=None, public_ip=None, instance_id=None): EC2Object.__init__(self, connection) self.connection = connection self.public_ip = public_ip self.instance_id = instance_id self.domain = None self.allocation_id = None self.association_id = None self.network_interface_id = None self.network_interface_owner_id = None self.private_ip_address = None def __repr__(self): return 'Address:%s' % self.public_ip def endElement(self, name, value, connection): if name == 'publicIp': self.public_ip = value elif name == 'instanceId': self.instance_id = value elif name == 'domain': self.domain = value elif name == 'allocationId': self.allocation_id = value elif name == 'associationId': self.association_id = value elif name == 'networkInterfaceId': self.network_interface_id = value elif name == 'networkInterfaceOwnerId': self.network_interface_owner_id = value elif name == 'privateIpAddress': self.private_ip_address = value else: setattr(self, name, value) def release(self, dry_run=False): """ Free up this Elastic IP address. :see: :meth:`boto.ec2.connection.EC2Connection.release_address` """ if self.allocation_id: return self.connection.release_address( None, self.allocation_id, dry_run=dry_run) else: return self.connection.release_address( self.public_ip, dry_run=dry_run ) delete = release def associate(self, instance_id, dry_run=False): """ Associate this Elastic IP address with a currently running instance. :see: :meth:`boto.ec2.connection.EC2Connection.associate_address` """ return self.connection.associate_address( instance_id, self.public_ip, dry_run=dry_run ) def disassociate(self, dry_run=False): """ Disassociate this Elastic IP address from a currently running instance. :see: :meth:`boto.ec2.connection.EC2Connection.disassociate_address` """ if self.association_id: return self.connection.disassociate_address( None, self.association_id, dry_run=dry_run ) else: return self.connection.disassociate_address( self.public_ip, dry_run=dry_run ) boto-2.20.1/boto/ec2/attributes.py000066400000000000000000000052261225267101000167140ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class AccountAttribute(object): def __init__(self, connection=None): self.connection = connection self.attribute_name = None self.attribute_values = None def startElement(self, name, attrs, connection): if name == 'attributeValueSet': self.attribute_values = AttributeValues() return self.attribute_values def endElement(self, name, value, connection): if name == 'attributeName': self.attribute_name = value class AttributeValues(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'attributeValue': self.append(value) class VPCAttribute(object): def __init__(self, connection=None): self.connection = connection self.vpc_id = None self.enable_dns_hostnames = None self.enable_dns_support = None self._current_attr = None def startElement(self, name, attrs, connection): if name in ('enableDnsHostnames', 'enableDnsSupport'): self._current_attr = name def endElement(self, name, value, connection): if name == 'vpcId': self.vpc_id = value elif name == 'value': if value == 'true': value = True else: value = False if self._current_attr == 'enableDnsHostnames': self.enable_dns_hostnames = value elif self._current_attr == 'enableDnsSupport': self.enable_dns_support = value boto-2.20.1/boto/ec2/autoscale/000077500000000000000000000000001225267101000161275ustar00rootroot00000000000000boto-2.20.1/boto/ec2/autoscale/__init__.py000066400000000000000000001043161225267101000202450ustar00rootroot00000000000000# Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/ # Copyright (c) 2011 Jann Kleen # Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ This module provides an interface to the Elastic Compute Cloud (EC2) Auto Scaling service. 
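A quick usage sketch (credentials are assumed to come from the environment or a boto config file; the region name is illustrative)::

    import boto.ec2.autoscale

    conn = boto.ec2.autoscale.connect_to_region('us-west-2')
    for group in conn.get_all_groups():
        print group.name, group.desired_capacity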
""" import base64 import boto from boto.connection import AWSQueryConnection from boto.ec2.regioninfo import RegionInfo from boto.ec2.autoscale.request import Request from boto.ec2.autoscale.launchconfig import LaunchConfiguration from boto.ec2.autoscale.group import AutoScalingGroup from boto.ec2.autoscale.group import ProcessType from boto.ec2.autoscale.activity import Activity from boto.ec2.autoscale.policy import AdjustmentType from boto.ec2.autoscale.policy import MetricCollectionTypes from boto.ec2.autoscale.policy import ScalingPolicy from boto.ec2.autoscale.policy import TerminationPolicies from boto.ec2.autoscale.instance import Instance from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction from boto.ec2.autoscale.tag import Tag RegionData = { 'us-east-1': 'autoscaling.us-east-1.amazonaws.com', 'us-gov-west-1': 'autoscaling.us-gov-west-1.amazonaws.com', 'us-west-1': 'autoscaling.us-west-1.amazonaws.com', 'us-west-2': 'autoscaling.us-west-2.amazonaws.com', 'sa-east-1': 'autoscaling.sa-east-1.amazonaws.com', 'eu-west-1': 'autoscaling.eu-west-1.amazonaws.com', 'ap-northeast-1': 'autoscaling.ap-northeast-1.amazonaws.com', 'ap-southeast-1': 'autoscaling.ap-southeast-1.amazonaws.com', 'ap-southeast-2': 'autoscaling.ap-southeast-2.amazonaws.com', } def regions(): """ Get all available regions for the Auto Scaling service. :rtype: list :return: A list of :class:`boto.RegionInfo` instances """ regions = [] for region_name in RegionData: region = RegionInfo(name=region_name, endpoint=RegionData[region_name], connection_cls=AutoScaleConnection) regions.append(region) return regions def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.ec2.autoscale.AutoScaleConnection`. :param str region_name: The name of the region to connect to. :rtype: :class:`boto.ec2.AutoScaleConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None class AutoScaleConnection(AWSQueryConnection): APIVersion = boto.config.get('Boto', 'autoscale_version', '2011-01-01') DefaultRegionEndpoint = boto.config.get('Boto', 'autoscale_endpoint', 'autoscaling.us-east-1.amazonaws.com') DefaultRegionName = boto.config.get('Boto', 'autoscale_region_name', 'us-east-1') def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): """ Init method to create a new connection to the AutoScaling service. B{Note:} The host argument is overridden by the host specified in the boto configuration file. """ if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint, AutoScaleConnection) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path=path, security_token=security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['hmac-v4'] def build_list_params(self, params, items, label): """ Items is a list of dictionaries or strings:: [ { 'Protocol' : 'HTTP', 'LoadBalancerPort' : '80', 'InstancePort' : '80' }, .. ] etc. or:: ['us-east-1b',...] 
""" # different from EC2 list params for i in xrange(1, len(items) + 1): if isinstance(items[i - 1], dict): for k, v in items[i - 1].iteritems(): if isinstance(v, dict): for kk, vv in v.iteritems(): params['%s.member.%d.%s.%s' % (label, i, k, kk)] = vv else: params['%s.member.%d.%s' % (label, i, k)] = v elif isinstance(items[i - 1], basestring): params['%s.member.%d' % (label, i)] = items[i - 1] def _update_group(self, op, as_group): params = {'AutoScalingGroupName': as_group.name, 'LaunchConfigurationName': as_group.launch_config_name, 'MinSize': as_group.min_size, 'MaxSize': as_group.max_size} # get availability zone information (required param) zones = as_group.availability_zones self.build_list_params(params, zones, 'AvailabilityZones') if as_group.desired_capacity: params['DesiredCapacity'] = as_group.desired_capacity if as_group.vpc_zone_identifier: params['VPCZoneIdentifier'] = as_group.vpc_zone_identifier if as_group.health_check_period: params['HealthCheckGracePeriod'] = as_group.health_check_period if as_group.health_check_type: params['HealthCheckType'] = as_group.health_check_type if as_group.default_cooldown: params['DefaultCooldown'] = as_group.default_cooldown if as_group.placement_group: params['PlacementGroup'] = as_group.placement_group if as_group.termination_policies: self.build_list_params(params, as_group.termination_policies, 'TerminationPolicies') if op.startswith('Create'): # you can only associate load balancers with an autoscale # group at creation time if as_group.load_balancers: self.build_list_params(params, as_group.load_balancers, 'LoadBalancerNames') if as_group.tags: for i, tag in enumerate(as_group.tags): tag.build_params(params, i + 1) return self.get_object(op, params, Request) def create_auto_scaling_group(self, as_group): """ Create auto scaling group. """ return self._update_group('CreateAutoScalingGroup', as_group) def delete_auto_scaling_group(self, name, force_delete=False): """ Deletes the specified auto scaling group if the group has no instances and no scaling activities in progress. """ if(force_delete): params = {'AutoScalingGroupName': name, 'ForceDelete': 'true'} else: params = {'AutoScalingGroupName': name} return self.get_object('DeleteAutoScalingGroup', params, Request) def create_launch_configuration(self, launch_config): """ Creates a new Launch Configuration. :type launch_config: :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration` :param launch_config: LaunchConfiguration object. 
""" params = {'ImageId': launch_config.image_id, 'LaunchConfigurationName': launch_config.name, 'InstanceType': launch_config.instance_type} if launch_config.key_name: params['KeyName'] = launch_config.key_name if launch_config.user_data: params['UserData'] = base64.b64encode(launch_config.user_data) if launch_config.kernel_id: params['KernelId'] = launch_config.kernel_id if launch_config.ramdisk_id: params['RamdiskId'] = launch_config.ramdisk_id if launch_config.block_device_mappings: [x.autoscale_build_list_params(params) for x in launch_config.block_device_mappings] if launch_config.security_groups: self.build_list_params(params, launch_config.security_groups, 'SecurityGroups') if launch_config.instance_monitoring: params['InstanceMonitoring.Enabled'] = 'true' else: params['InstanceMonitoring.Enabled'] = 'false' if launch_config.spot_price is not None: params['SpotPrice'] = str(launch_config.spot_price) if launch_config.instance_profile_name is not None: params['IamInstanceProfile'] = launch_config.instance_profile_name if launch_config.ebs_optimized: params['EbsOptimized'] = 'true' else: params['EbsOptimized'] = 'false' if launch_config.associate_public_ip_address is True: params['AssociatePublicIpAddress'] = 'true' elif launch_config.associate_public_ip_address is False: params['AssociatePublicIpAddress'] = 'false' return self.get_object('CreateLaunchConfiguration', params, Request, verb='POST') def create_scaling_policy(self, scaling_policy): """ Creates a new Scaling Policy. :type scaling_policy: :class:`boto.ec2.autoscale.policy.ScalingPolicy` :param scaling_policy: ScalingPolicy object. """ params = {'AdjustmentType': scaling_policy.adjustment_type, 'AutoScalingGroupName': scaling_policy.as_name, 'PolicyName': scaling_policy.name, 'ScalingAdjustment': scaling_policy.scaling_adjustment} if scaling_policy.adjustment_type == "PercentChangeInCapacity" and \ scaling_policy.min_adjustment_step is not None: params['MinAdjustmentStep'] = scaling_policy.min_adjustment_step if scaling_policy.cooldown is not None: params['Cooldown'] = scaling_policy.cooldown return self.get_object('PutScalingPolicy', params, Request) def delete_launch_configuration(self, launch_config_name): """ Deletes the specified LaunchConfiguration. The specified launch configuration must not be attached to an Auto Scaling group. Once this call completes, the launch configuration is no longer available for use. """ params = {'LaunchConfigurationName': launch_config_name} return self.get_object('DeleteLaunchConfiguration', params, Request) def get_all_groups(self, names=None, max_records=None, next_token=None): """ Returns a full description of each Auto Scaling group in the given list. This includes all Amazon EC2 instances that are members of the group. If a list of names is not provided, the service returns the full details of all Auto Scaling groups. This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter. :type names: list :param names: List of group names which should be searched for. :type max_records: int :param max_records: Maximum amount of groups to return. :rtype: list :returns: List of :class:`boto.ec2.autoscale.group.AutoScalingGroup` instances. 
""" params = {} if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token if names: self.build_list_params(params, names, 'AutoScalingGroupNames') return self.get_list('DescribeAutoScalingGroups', params, [('member', AutoScalingGroup)]) def get_all_launch_configurations(self, **kwargs): """ Returns a full description of the launch configurations given the specified names. If no names are specified, then the full details of all launch configurations are returned. :type names: list :param names: List of configuration names which should be searched for. :type max_records: int :param max_records: Maximum amount of configurations to return. :type next_token: str :param next_token: If you have more results than can be returned at once, pass in this parameter to page through all results. :rtype: list :returns: List of :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration` instances. """ params = {} max_records = kwargs.get('max_records', None) names = kwargs.get('names', None) if max_records is not None: params['MaxRecords'] = max_records if names: self.build_list_params(params, names, 'LaunchConfigurationNames') next_token = kwargs.get('next_token') if next_token: params['NextToken'] = next_token return self.get_list('DescribeLaunchConfigurations', params, [('member', LaunchConfiguration)]) def get_all_activities(self, autoscale_group, activity_ids=None, max_records=None, next_token=None): """ Get all activities for the given autoscaling group. This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter :type autoscale_group: str or :class:`boto.ec2.autoscale.group.AutoScalingGroup` object :param autoscale_group: The auto scaling group to get activities on. :type max_records: int :param max_records: Maximum amount of activities to return. :rtype: list :returns: List of :class:`boto.ec2.autoscale.activity.Activity` instances. """ name = autoscale_group if isinstance(autoscale_group, AutoScalingGroup): name = autoscale_group.name params = {'AutoScalingGroupName': name} if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token if activity_ids: self.build_list_params(params, activity_ids, 'ActivityIds') return self.get_list('DescribeScalingActivities', params, [('member', Activity)]) def get_termination_policies(self): """Gets all valid termination policies. These values can then be used as the termination_policies arg when creating and updating autoscale groups. """ return self.get_object('DescribeTerminationPolicyTypes', {}, TerminationPolicies) def delete_scheduled_action(self, scheduled_action_name, autoscale_group=None): """ Deletes a previously scheduled action. :type scheduled_action_name: str :param scheduled_action_name: The name of the action you want to delete. :type autoscale_group: str :param autoscale_group: The name of the autoscale group. """ params = {'ScheduledActionName': scheduled_action_name} if autoscale_group: params['AutoScalingGroupName'] = autoscale_group return self.get_status('DeleteScheduledAction', params) def terminate_instance(self, instance_id, decrement_capacity=True): """ Terminates the specified instance. The desired group size can also be adjusted, if desired. :type instance_id: str :param instance_id: The ID of the instance to be terminated. 
:type decrement_capacity: bool :param decrement_capacity: Whether to decrement the size of the autoscaling group or not. """ params = {'InstanceId': instance_id} if decrement_capacity: params['ShouldDecrementDesiredCapacity'] = 'true' else: params['ShouldDecrementDesiredCapacity'] = 'false' return self.get_object('TerminateInstanceInAutoScalingGroup', params, Activity) def delete_policy(self, policy_name, autoscale_group=None): """ Delete a policy. :type policy_name: str :param policy_name: The name or ARN of the policy to delete. :type autoscale_group: str :param autoscale_group: The name of the autoscale group. """ params = {'PolicyName': policy_name} if autoscale_group: params['AutoScalingGroupName'] = autoscale_group return self.get_status('DeletePolicy', params) def get_all_adjustment_types(self): return self.get_list('DescribeAdjustmentTypes', {}, [('member', AdjustmentType)]) def get_all_autoscaling_instances(self, instance_ids=None, max_records=None, next_token=None): """ Returns a description of each Auto Scaling instance in the instance_ids list. If a list is not provided, the service returns the full details of all instances up to a maximum of fifty. This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter. :type instance_ids: list :param instance_ids: List of Autoscaling Instance IDs which should be searched for. :type max_records: int :param max_records: Maximum number of results to return. :rtype: list :returns: List of :class:`boto.ec2.autoscale.instance.Instance` objects. """ params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceIds') if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token return self.get_list('DescribeAutoScalingInstances', params, [('member', Instance)]) def get_all_metric_collection_types(self): """ Returns a list of metrics and a corresponding list of granularities for each metric. """ return self.get_object('DescribeMetricCollectionTypes', {}, MetricCollectionTypes) def get_all_policies(self, as_group=None, policy_names=None, max_records=None, next_token=None): """ Returns descriptions of what each policy does. This action supports pagination. If the response includes a token, there are more records available. To get the additional records, repeat the request with the response token as the NextToken parameter. If no group name or list of policy names are provided, all available policies are returned. :type as_group: str :param as_group: The name of the :class:`boto.ec2.autoscale.group.AutoScalingGroup` to filter for. :type policy_names: list :param policy_names: List of policy names which should be searched for. :type max_records: int :param max_records: Maximum number of policies to return. :type next_token: str :param next_token: If you have more results than can be returned at once, pass in this parameter to page through all results. """ params = {} if as_group: params['AutoScalingGroupName'] = as_group if policy_names: self.build_list_params(params, policy_names, 'PolicyNames') if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token return self.get_list('DescribePolicies', params, [('member', ScalingPolicy)]) def get_all_scaling_process_types(self): """ Returns scaling process types for use in the ResumeProcesses and SuspendProcesses actions.
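A quick sketch of listing the available process names::

    >>> for pt in conn.get_all_scaling_process_types():
    ...     print pt.process_name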
""" return self.get_list('DescribeScalingProcessTypes', {}, [('member', ProcessType)]) def suspend_processes(self, as_group, scaling_processes=None): """ Suspends Auto Scaling processes for an Auto Scaling group. :type as_group: string :param as_group: The auto scaling group to suspend processes on. :type scaling_processes: list :param scaling_processes: Processes you want to suspend. If omitted, all processes will be suspended. """ params = {'AutoScalingGroupName': as_group} if scaling_processes: self.build_list_params(params, scaling_processes, 'ScalingProcesses') return self.get_status('SuspendProcesses', params) def resume_processes(self, as_group, scaling_processes=None): """ Resumes Auto Scaling processes for an Auto Scaling group. :type as_group: string :param as_group: The auto scaling group to resume processes on. :type scaling_processes: list :param scaling_processes: Processes you want to resume. If omitted, all processes will be resumed. """ params = {'AutoScalingGroupName': as_group} if scaling_processes: self.build_list_params(params, scaling_processes, 'ScalingProcesses') return self.get_status('ResumeProcesses', params) def create_scheduled_group_action(self, as_group, name, time=None, desired_capacity=None, min_size=None, max_size=None, start_time=None, end_time=None, recurrence=None): """ Creates a scheduled scaling action for a Auto Scaling group. If you leave a parameter unspecified, the corresponding value remains unchanged in the affected Auto Scaling group. :type as_group: string :param as_group: The auto scaling group to get activities on. :type name: string :param name: Scheduled action name. :type time: datetime.datetime :param time: The time for this action to start. (Depracated) :type desired_capacity: int :param desired_capacity: The number of EC2 instances that should be running in this group. :type min_size: int :param min_size: The minimum size for the new auto scaling group. :type max_size: int :param max_size: The minimum size for the new auto scaling group. :type start_time: datetime.datetime :param start_time: The time for this action to start. When StartTime and EndTime are specified with Recurrence, they form the boundaries of when the recurring action will start and stop. :type end_time: datetime.datetime :param end_time: The time for this action to end. When StartTime and EndTime are specified with Recurrence, they form the boundaries of when the recurring action will start and stop. :type recurrence: string :param recurrence: The time when recurring future actions will start. Start time is specified by the user following the Unix cron syntax format. 
EXAMPLE: '0 10 * * *' """ params = {'AutoScalingGroupName': as_group, 'ScheduledActionName': name} if start_time is not None: params['StartTime'] = start_time.isoformat() if end_time is not None: params['EndTime'] = end_time.isoformat() if recurrence is not None: params['Recurrence'] = recurrence if time: params['Time'] = time.isoformat() if desired_capacity is not None: params['DesiredCapacity'] = desired_capacity if min_size is not None: params['MinSize'] = min_size if max_size is not None: params['MaxSize'] = max_size return self.get_status('PutScheduledUpdateGroupAction', params) def get_all_scheduled_actions(self, as_group=None, start_time=None, end_time=None, scheduled_actions=None, max_records=None, next_token=None): params = {} if as_group: params['AutoScalingGroupName'] = as_group if scheduled_actions: self.build_list_params(params, scheduled_actions, 'ScheduledActionNames') if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token return self.get_list('DescribeScheduledActions', params, [('member', ScheduledUpdateGroupAction)]) def disable_metrics_collection(self, as_group, metrics=None): """ Disables monitoring of group metrics for the Auto Scaling group specified in AutoScalingGroupName. You can specify the list of affected metrics with the Metrics parameter. """ params = {'AutoScalingGroupName': as_group} if metrics: self.build_list_params(params, metrics, 'Metrics') return self.get_status('DisableMetricsCollection', params) def enable_metrics_collection(self, as_group, granularity, metrics=None): """ Enables monitoring of group metrics for the Auto Scaling group specified in AutoScalingGroupName. You can specify the list of enabled metrics with the Metrics parameter. Auto scaling metrics collection can be turned on only if the InstanceMonitoring.Enabled flag, in the Auto Scaling group's launch configuration, is set to true. :type as_group: string :param as_group: The auto scaling group to enable metrics collection on. :type granularity: string :param granularity: The granularity to associate with the metrics to collect. Currently, the only legal granularity is "1Minute". :type metrics: string list :param metrics: The list of metrics to collect. If no metrics are specified, all metrics are enabled. """ params = {'AutoScalingGroupName': as_group, 'Granularity': granularity} if metrics: self.build_list_params(params, metrics, 'Metrics') return self.get_status('EnableMetricsCollection', params) def execute_policy(self, policy_name, as_group=None, honor_cooldown=None): params = {'PolicyName': policy_name} if as_group: params['AutoScalingGroupName'] = as_group if honor_cooldown: params['HonorCooldown'] = honor_cooldown return self.get_status('ExecutePolicy', params) def put_notification_configuration(self, autoscale_group, topic, notification_types): """ Configures an Auto Scaling group to send notifications when specified events take place. :type autoscale_group: str or :class:`boto.ec2.autoscale.group.AutoScalingGroup` object :param autoscale_group: The Auto Scaling group to put notification configuration on. :type topic: str :param topic: The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic. :type notification_types: list :param notification_types: The type of events that will trigger the notification.
Valid types are: 'autoscaling:EC2_INSTANCE_LAUNCH', 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR', 'autoscaling:EC2_INSTANCE_TERMINATE', 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR', 'autoscaling:TEST_NOTIFICATION' """ name = autoscale_group if isinstance(autoscale_group, AutoScalingGroup): name = autoscale_group.name params = {'AutoScalingGroupName': name, 'TopicARN': topic} self.build_list_params(params, notification_types, 'NotificationTypes') return self.get_status('PutNotificationConfiguration', params) def delete_notification_configuration(self, autoscale_group, topic): """ Deletes notifications created by put_notification_configuration. :type autoscale_group: str or :class:`boto.ec2.autoscale.group.AutoScalingGroup` object :param autoscale_group: The Auto Scaling group to delete notification configuration from. :type topic: str :param topic: The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic. """ name = autoscale_group if isinstance(autoscale_group, AutoScalingGroup): name = autoscale_group.name params = {'AutoScalingGroupName': name, 'TopicARN': topic} return self.get_status('DeleteNotificationConfiguration', params) def set_instance_health(self, instance_id, health_status, should_respect_grace_period=True): """ Explicitly set the health status of an instance. :type instance_id: str :param instance_id: The identifier of the EC2 instance. :type health_status: str :param health_status: The health status of the instance. "Healthy" means that the instance is healthy and should remain in service. "Unhealthy" means that the instance is unhealthy. Auto Scaling should terminate and replace it. :type should_respect_grace_period: bool :param should_respect_grace_period: If True, this call should respect the grace period associated with the group. """ params = {'InstanceId': instance_id, 'HealthStatus': health_status} if should_respect_grace_period: params['ShouldRespectGracePeriod'] = 'true' else: params['ShouldRespectGracePeriod'] = 'false' return self.get_status('SetInstanceHealth', params) def set_desired_capacity(self, group_name, desired_capacity, honor_cooldown=False): """ Adjusts the desired size of the AutoScalingGroup by initiating scaling activities. When reducing the size of the group, it is not possible to define which Amazon EC2 instances will be terminated. This applies to any Auto Scaling decisions that might result in terminating instances. :type group_name: string :param group_name: name of the auto scaling group :type desired_capacity: integer :param desired_capacity: new capacity setting for auto scaling group :type honor_cooldown: boolean :param honor_cooldown: If True, the group's cooldown period is respected; by default (False), any cooldown period is overridden. """ params = {'AutoScalingGroupName': group_name, 'DesiredCapacity': desired_capacity} if honor_cooldown: params['HonorCooldown'] = 'true' return self.get_status('SetDesiredCapacity', params) # Tag methods def get_all_tags(self, filters=None, max_records=None, next_token=None): """ Lists the Auto Scaling group tags. This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter. :type filters: dict :param filters: The value of the filter type used to identify the tags to be returned. NOT IMPLEMENTED YET. :type max_records: int :param max_records: Maximum number of tags to return. :rtype: list :returns: List of :class:`boto.ec2.autoscale.tag.Tag` instances.
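A quick sketch::

    >>> for tag in conn.get_all_tags():
    ...     print tag.resource_id, tag.key, tag.value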
""" params = {} if max_records: params['MaxRecords'] = max_records if next_token: params['NextToken'] = next_token return self.get_list('DescribeTags', params, [('member', Tag)]) def create_or_update_tags(self, tags): """ Creates new tags or updates existing tags for an Auto Scaling group. :type tags: List of :class:`boto.ec2.autoscale.tag.Tag` :param tags: The new or updated tags. """ params = {} for i, tag in enumerate(tags): tag.build_params(params, i + 1) return self.get_status('CreateOrUpdateTags', params, verb='POST') def delete_tags(self, tags): """ Deletes existing tags for an Auto Scaling group. :type tags: List of :class:`boto.ec2.autoscale.tag.Tag` :param tags: The new or updated tags. """ params = {} for i, tag in enumerate(tags): tag.build_params(params, i + 1) return self.get_status('DeleteTags', params, verb='POST') boto-2.20.1/boto/ec2/autoscale/activity.py000066400000000000000000000057631225267101000203500ustar00rootroot00000000000000# Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
from datetime import datetime class Activity(object): def __init__(self, connection=None): self.connection = connection self.start_time = None self.end_time = None self.activity_id = None self.progress = None self.status_code = None self.cause = None self.description = None self.status_message = None self.group_name = None def __repr__(self): return 'Activity<%s>: For group:%s, progress:%s, cause:%s' % (self.activity_id, self.group_name, self.progress, self.cause) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'ActivityId': self.activity_id = value elif name == 'AutoScalingGroupName': self.group_name = value elif name == 'StartTime': try: self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') except ValueError: self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') elif name == 'EndTime': try: self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') except ValueError: self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') elif name == 'Progress': self.progress = value elif name == 'Cause': self.cause = value elif name == 'Description': self.description = value elif name == 'StatusMessage': self.status_message = value elif name == 'StatusCode': self.status_code = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/autoscale/group.py000066400000000000000000000315741225267101000176460ustar00rootroot00000000000000# Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE.
from boto.ec2.elb.listelement import ListElement from boto.resultset import ResultSet from boto.ec2.autoscale.launchconfig import LaunchConfiguration from boto.ec2.autoscale.request import Request from boto.ec2.autoscale.instance import Instance from boto.ec2.autoscale.tag import Tag class ProcessType(object): def __init__(self, connection=None): self.connection = connection self.process_name = None def __repr__(self): return 'ProcessType(%s)' % self.process_name def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'ProcessName': self.process_name = value class SuspendedProcess(object): def __init__(self, connection=None): self.connection = connection self.process_name = None self.reason = None def __repr__(self): return 'SuspendedProcess(%s, %s)' % (self.process_name, self.reason) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'ProcessName': self.process_name = value elif name == 'SuspensionReason': self.reason = value class EnabledMetric(object): def __init__(self, connection=None, metric=None, granularity=None): self.connection = connection self.metric = metric self.granularity = granularity def __repr__(self): return 'EnabledMetric(%s, %s)' % (self.metric, self.granularity) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Granularity': self.granularity = value elif name == 'Metric': self.metric = value class TerminationPolicies(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'member': self.append(value) class AutoScalingGroup(object): def __init__(self, connection=None, name=None, launch_config=None, availability_zones=None, load_balancers=None, default_cooldown=None, health_check_type=None, health_check_period=None, placement_group=None, vpc_zone_identifier=None, desired_capacity=None, min_size=None, max_size=None, tags=None, termination_policies=None, **kwargs): """ Creates a new AutoScalingGroup with the specified name. You must not have already used up your entire quota of AutoScalingGroups in order for this call to be successful. Once the creation request is completed, the AutoScalingGroup is ready to be used in other calls. :type name: str :param name: Name of autoscaling group (required). :type availability_zones: list :param availability_zones: List of availability zones (required). :type default_cooldown: int :param default_cooldown: Number of seconds after a Scaling Activity completes before any further scaling activities can start. :type desired_capacity: int :param desired_capacity: The desired capacity for the group. :type health_check_period: str :param health_check_period: Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health. :type health_check_type: str :param health_check_type: The service you want the health status from, Amazon EC2 or Elastic Load Balancer. :type launch_config: str or LaunchConfiguration :param launch_config: Name of launch configuration (required). :type load_balancers: list :param load_balancers: List of load balancers. :type max_size: int :param max_size: Maximum size of group (required). :type min_size: int :param min_size: Minimum size of group (required). :type placement_group: str :param placement_group: Physical location of your cluster placement group created in Amazon EC2. 
:type vpc_zone_identifier: str :param vpc_zone_identifier: The subnet identifier of the Virtual Private Cloud. :type tags: list :param tags: List of :class:`boto.ec2.autoscale.tag.Tag`s :type termination_policies: list :param termination_policies: A list of termination policies. Valid values are: "OldestInstance", "NewestInstance", "OldestLaunchConfiguration", "ClosestToNextInstanceHour", "Default". If no value is specified, the "Default" value is used. :rtype: :class:`boto.ec2.autoscale.group.AutoScalingGroup` :return: An autoscale group. """ self.name = name or kwargs.get('group_name') # backwards compat self.connection = connection self.min_size = int(min_size) if min_size is not None else None self.max_size = int(max_size) if max_size is not None else None self.created_time = None # backwards compatibility default_cooldown = default_cooldown or kwargs.get('cooldown') if default_cooldown is not None: default_cooldown = int(default_cooldown) self.default_cooldown = default_cooldown self.launch_config_name = launch_config if launch_config and isinstance(launch_config, LaunchConfiguration): self.launch_config_name = launch_config.name self.desired_capacity = desired_capacity lbs = load_balancers or [] self.load_balancers = ListElement(lbs) zones = availability_zones or [] self.availability_zones = ListElement(zones) self.health_check_period = health_check_period self.health_check_type = health_check_type self.placement_group = placement_group self.autoscaling_group_arn = None self.vpc_zone_identifier = vpc_zone_identifier self.instances = None self.tags = tags or None termination_policies = termination_policies or [] self.termination_policies = ListElement(termination_policies) # backwards compatible access to 'cooldown' param def _get_cooldown(self): return self.default_cooldown def _set_cooldown(self, val): self.default_cooldown = val cooldown = property(_get_cooldown, _set_cooldown) def __repr__(self): return 'AutoScaleGroup<%s>' % self.name def startElement(self, name, attrs, connection): if name == 'Instances': self.instances = ResultSet([('member', Instance)]) return self.instances elif name == 'LoadBalancerNames': return self.load_balancers elif name == 'AvailabilityZones': return self.availability_zones elif name == 'EnabledMetrics': self.enabled_metrics = ResultSet([('member', EnabledMetric)]) return self.enabled_metrics elif name == 'SuspendedProcesses': self.suspended_processes = ResultSet([('member', SuspendedProcess)]) return self.suspended_processes elif name == 'Tags': self.tags = ResultSet([('member', Tag)]) return self.tags elif name == 'TerminationPolicies': return self.termination_policies else: return def endElement(self, name, value, connection): if name == 'MinSize': self.min_size = int(value) elif name == 'AutoScalingGroupARN': self.autoscaling_group_arn = value elif name == 'CreatedTime': self.created_time = value elif name == 'DefaultCooldown': self.default_cooldown = int(value) elif name == 'LaunchConfigurationName': self.launch_config_name = value elif name == 'DesiredCapacity': self.desired_capacity = int(value) elif name == 'MaxSize': self.max_size = int(value) elif name == 'AutoScalingGroupName': self.name = value elif name == 'PlacementGroup': self.placement_group = value elif name == 'HealthCheckGracePeriod': try: self.health_check_period = int(value) except ValueError: self.health_check_period = None elif name == 'HealthCheckType': self.health_check_type = value elif name == 'VPCZoneIdentifier': self.vpc_zone_identifier = value else: setattr(self, name, 
value) def set_capacity(self, capacity): """ Set the desired capacity for the group. """ params = {'AutoScalingGroupName': self.name, 'DesiredCapacity': capacity} req = self.connection.get_object('SetDesiredCapacity', params, Request) self.connection.last_request = req return req def update(self): """ Sync local changes with AutoScaling group. """ return self.connection._update_group('UpdateAutoScalingGroup', self) def shutdown_instances(self): """ Convenience method which shuts down all instances associated with this group. """ self.min_size = 0 self.max_size = 0 self.desired_capacity = 0 self.update() def delete(self, force_delete=False): """ Delete this auto-scaling group if no instances attached or no scaling activities in progress. """ return self.connection.delete_auto_scaling_group(self.name, force_delete) def get_activities(self, activity_ids=None, max_records=50): """ Get all activities for this group. """ return self.connection.get_all_activities(self, activity_ids, max_records) def put_notification_configuration(self, topic, notification_types): """ Configures an Auto Scaling group to send notifications when specified events take place. Valid notification types are: 'autoscaling:EC2_INSTANCE_LAUNCH', 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR', 'autoscaling:EC2_INSTANCE_TERMINATE', 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR', 'autoscaling:TEST_NOTIFICATION' """ return self.connection.put_notification_configuration(self, topic, notification_types) def delete_notification_configuration(self, topic): """ Deletes notifications created by put_notification_configuration. """ return self.connection.delete_notification_configuration(self, topic) def suspend_processes(self, scaling_processes=None): """ Suspends Auto Scaling processes for an Auto Scaling group. """ return self.connection.suspend_processes(self.name, scaling_processes) def resume_processes(self, scaling_processes=None): """ Resumes Auto Scaling processes for an Auto Scaling group. """ return self.connection.resume_processes(self.name, scaling_processes) class AutoScalingGroupMetric(object): def __init__(self, connection=None): self.connection = connection self.metric = None self.granularity = None def __repr__(self): return 'AutoScalingGroupMetric:%s' % self.metric def startElement(self, name, attrs, connection): return def endElement(self, name, value, connection): if name == 'Metric': self.metric = value elif name == 'Granularity': self.granularity = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/autoscale/instance.py000066400000000000000000000045751225267101000203160ustar00rootroot00000000000000# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Instance(object): def __init__(self, connection=None): self.connection = connection self.instance_id = None self.health_status = None self.launch_config_name = None self.lifecycle_state = None self.availability_zone = None self.group_name = None def __repr__(self): r = 'Instance<id:%s, state:%s, health:%s' % (self.instance_id, self.lifecycle_state, self.health_status) if self.group_name: r += ' group:%s' % self.group_name r += '>' return r class MetricCollectionTypes(object): class BaseType(object): arg = '' def __init__(self, connection): self.connection = connection self.val = None def __repr__(self): return '%s:%s' % (self.arg, self.val) def startElement(self, name, attrs, connection): return def endElement(self, name, value, connection): if name == self.arg: self.val = value class Metric(BaseType): arg = 'Metric' class Granularity(BaseType): arg = 'Granularity' def __init__(self, connection=None): self.connection = connection self.metrics = [] self.granularities = [] def __repr__(self): return 'MetricCollectionTypes:(%s, %s)' % (self.metrics, self.granularities) def startElement(self, name, attrs, connection): if name == 'Granularities': self.granularities = ResultSet([('member', self.Granularity)]) return self.granularities elif name == 'Metrics': self.metrics = ResultSet([('member', self.Metric)]) return self.metrics def endElement(self, name, value, connection): return class ScalingPolicy(object): def __init__(self, connection=None, **kwargs): """ Scaling Policy :type name: str :param name: Name of scaling policy. :type adjustment_type: str :param adjustment_type: Specifies the type of adjustment. Valid values are `ChangeInCapacity`, `ExactCapacity` and `PercentChangeInCapacity`. :type as_name: str :param as_name: Name or ARN of the Auto Scaling Group. :type scaling_adjustment: int :param scaling_adjustment: Value of adjustment (type specified in `adjustment_type`). :type min_adjustment_step: int :param min_adjustment_step: Value of min adjustment step required to apply the scaling policy (only makes sense when `PercentChangeInCapacity` is used as the adjustment_type). :type cooldown: int :param cooldown: Time (in seconds) before Alarm related Scaling Activities can start after the previous Scaling Activity ends. """ self.name = kwargs.get('name', None) self.adjustment_type = kwargs.get('adjustment_type', None) self.as_name = kwargs.get('as_name', None) self.scaling_adjustment = kwargs.get('scaling_adjustment', None) self.cooldown = kwargs.get('cooldown', None) self.connection = connection self.min_adjustment_step = kwargs.get('min_adjustment_step', None) def __repr__(self): return 'ScalingPolicy(%s group:%s adjustment:%s)' % (self.name, self.as_name, self.adjustment_type) def startElement(self, name, attrs, connection): if name == 'Alarms': self.alarms = ResultSet([('member', Alarm)]) return self.alarms def endElement(self, name, value, connection): if name == 'PolicyName': self.name = value elif name == 'AutoScalingGroupName': self.as_name = value elif name == 'PolicyARN': self.policy_arn = value elif name == 'ScalingAdjustment': self.scaling_adjustment = int(value) elif name == 'Cooldown': self.cooldown = int(value) elif name == 'AdjustmentType': self.adjustment_type = value elif name == 'MinAdjustmentStep': self.min_adjustment_step = int(value) def delete(self): return self.connection.delete_policy(self.name, self.as_name) class TerminationPolicies(list): def __init__(self, connection=None, **kwargs): pass def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'member': self.append(value) boto-2.20.1/boto/ec2/autoscale/request.py000066400000000000000000000030151225267101000201700ustar00rootroot00000000000000# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge,
publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Request(object): def __init__(self, connection=None): self.connection = connection self.request_id = '' def __repr__(self): return 'Request:%s' % self.request_id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'RequestId': self.request_id = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/autoscale/scheduled.py000066400000000000000000000057651225267101000204560ustar00rootroot00000000000000# Copyright (c) 2009-2010 Reza Lotun http://reza.lotun.name/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
from datetime import datetime class ScheduledUpdateGroupAction(object): def __init__(self, connection=None): self.connection = connection self.name = None self.action_arn = None self.as_group = None self.time = None self.start_time = None self.end_time = None self.recurrence = None self.desired_capacity = None self.max_size = None self.min_size = None def __repr__(self): return 'ScheduledUpdateGroupAction:%s' % self.name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'DesiredCapacity': self.desired_capacity = value elif name == 'ScheduledActionName': self.name = value elif name == 'AutoScalingGroupName': self.as_group = value elif name == 'MaxSize': self.max_size = int(value) elif name == 'MinSize': self.min_size = int(value) elif name == 'ScheduledActionARN': self.action_arn = value elif name == 'Recurrence': self.recurrence = value elif name == 'Time': try: self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') except ValueError: self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') elif name == 'StartTime': try: self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') except ValueError: self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') elif name == 'EndTime': try: self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') except ValueError: self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') else: setattr(self, name, value) boto-2.20.1/boto/ec2/autoscale/tag.py000066400000000000000000000064631225267101000172650ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Tag(object): """ A name/value tag on an AutoScalingGroup resource. :ivar key: The key of the tag. :ivar value: The value of the tag. :ivar propagate_at_launch: Boolean value which specifies whether the new tag will be applied to instances launched after the tag is created. :ivar resource_id: The name of the autoscaling group. :ivar resource_type: The only supported resource type at this time is "auto-scaling-group". 
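    Example (illustrative only; assumes an existing AutoScaleConnection
    ``conn`` and group name)::

        tag = Tag(connection=conn, key='environment', value='production',
                  propagate_at_launch=True, resource_id='my-group')
        conn.create_or_update_tags([tag])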
""" def __init__(self, connection=None, key=None, value=None, propagate_at_launch=False, resource_id=None, resource_type='auto-scaling-group'): self.connection = connection self.key = key self.value = value self.propagate_at_launch = propagate_at_launch self.resource_id = resource_id self.resource_type = resource_type def __repr__(self): return 'Tag(%s=%s)' % (self.key, self.value) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Key': self.key = value elif name == 'Value': self.value = value elif name == 'PropagateAtLaunch': if value.lower() == 'true': self.propagate_at_launch = True else: self.propagate_at_launch = False elif name == 'ResourceId': self.resource_id = value elif name == 'ResourceType': self.resource_type = value def build_params(self, params, i): """ Populates a dictionary with the name/value pairs necessary to identify this Tag in a request. """ prefix = 'Tags.member.%d.' % i params[prefix + 'ResourceId'] = self.resource_id params[prefix + 'ResourceType'] = self.resource_type params[prefix + 'Key'] = self.key params[prefix + 'Value'] = self.value if self.propagate_at_launch: params[prefix + 'PropagateAtLaunch'] = 'true' else: params[prefix + 'PropagateAtLaunch'] = 'false' def delete(self): return self.connection.delete_tags([self]) boto-2.20.1/boto/ec2/blockdevicemapping.py000066400000000000000000000131371225267101000203540ustar00rootroot00000000000000# Copyright (c) 2009-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class BlockDeviceType(object): """ Represents parameters for a block device. 
""" def __init__(self, connection=None, ephemeral_name=None, no_device=False, volume_id=None, snapshot_id=None, status=None, attach_time=None, delete_on_termination=False, size=None, volume_type=None, iops=None): self.connection = connection self.ephemeral_name = ephemeral_name self.no_device = no_device self.volume_id = volume_id self.snapshot_id = snapshot_id self.status = status self.attach_time = attach_time self.delete_on_termination = delete_on_termination self.size = size self.volume_type = volume_type self.iops = iops def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'volumeId': self.volume_id = value elif name == 'virtualName': self.ephemeral_name = value elif name == 'NoDevice': self.no_device = (value == 'true') elif name == 'snapshotId': self.snapshot_id = value elif name == 'volumeSize': self.size = int(value) elif name == 'status': self.status = value elif name == 'attachTime': self.attach_time = value elif name == 'deleteOnTermination': self.delete_on_termination = (value == 'true') elif name == 'volumeType': self.volume_type = value elif name == 'iops': self.iops = int(value) else: setattr(self, name, value) # for backwards compatibility EBSBlockDeviceType = BlockDeviceType class BlockDeviceMapping(dict): """ Represents a collection of BlockDeviceTypes when creating ec2 instances. Example: dev_sda1 = BlockDeviceType() dev_sda1.size = 100 # change root volume to 100GB instead of default bdm = BlockDeviceMapping() bdm['/dev/sda1'] = dev_sda1 reservation = image.run(..., block_device_map=bdm, ...) """ def __init__(self, connection=None): """ :type connection: :class:`boto.ec2.EC2Connection` :param connection: Optional connection. """ dict.__init__(self) self.connection = connection self.current_name = None self.current_value = None def startElement(self, name, attrs, connection): if name == 'ebs' or name == 'virtualName': self.current_value = BlockDeviceType(self) return self.current_value def endElement(self, name, value, connection): if name == 'device' or name == 'deviceName': self.current_name = value elif name == 'item': self[self.current_name] = self.current_value def ec2_build_list_params(self, params, prefix=''): pre = '%sBlockDeviceMapping' % prefix return self._build_list_params(params, prefix=pre) def autoscale_build_list_params(self, params, prefix=''): pre = '%sBlockDeviceMappings.member' % prefix return self._build_list_params(params, prefix=pre) def _build_list_params(self, params, prefix=''): i = 1 for dev_name in self: pre = '%s.%d' % (prefix, i) params['%s.DeviceName' % pre] = dev_name block_dev = self[dev_name] if block_dev.ephemeral_name: params['%s.VirtualName' % pre] = block_dev.ephemeral_name else: if block_dev.no_device: params['%s.NoDevice' % pre] = '' else: if block_dev.snapshot_id: params['%s.Ebs.SnapshotId' % pre] = block_dev.snapshot_id if block_dev.size: params['%s.Ebs.VolumeSize' % pre] = block_dev.size if block_dev.delete_on_termination: params['%s.Ebs.DeleteOnTermination' % pre] = 'true' else: params['%s.Ebs.DeleteOnTermination' % pre] = 'false' if block_dev.volume_type: params['%s.Ebs.VolumeType' % pre] = block_dev.volume_type if block_dev.iops is not None: params['%s.Ebs.Iops' % pre] = block_dev.iops i += 1 boto-2.20.1/boto/ec2/bundleinstance.py000066400000000000000000000052711225267101000175240ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and 
associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Bundle Task """ from boto.ec2.ec2object import EC2Object class BundleInstanceTask(EC2Object): def __init__(self, connection=None): EC2Object.__init__(self, connection) self.id = None self.instance_id = None self.progress = None self.start_time = None self.state = None self.bucket = None self.prefix = None self.upload_policy = None self.upload_policy_signature = None self.update_time = None self.code = None self.message = None def __repr__(self): return 'BundleInstanceTask:%s' % self.id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'bundleId': self.id = value elif name == 'instanceId': self.instance_id = value elif name == 'progress': self.progress = value elif name == 'startTime': self.start_time = value elif name == 'state': self.state = value elif name == 'bucket': self.bucket = value elif name == 'prefix': self.prefix = value elif name == 'uploadPolicy': self.upload_policy = value elif name == 'uploadPolicySignature': self.upload_policy_signature = value elif name == 'updateTime': self.update_time = value elif name == 'code': self.code = value elif name == 'message': self.message = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/buyreservation.py000066400000000000000000000073451225267101000176130ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
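# An illustrative sketch (not part of boto) of how a BundleInstanceTask like
# the one above is normally obtained and polled; the instance ID, bucket
# name, prefix and upload policy are hypothetical:
#
#     import boto.ec2
#
#     conn = boto.ec2.connect_to_region('us-east-1')
#     task = conn.bundle_instance('i-12345678', 'my-bucket', 'my-prefix',
#                                 upload_policy)
#     task = conn.get_all_bundle_tasks(bundle_ids=[task.id])[0]
#     print task.state, task.progress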
import boto.ec2 from boto.sdb.db.property import StringProperty, IntegerProperty from boto.manage import propget InstanceTypes = ['m1.small', 'm1.large', 'm1.xlarge', 'c1.medium', 'c1.xlarge', 'm2.xlarge', 'm2.2xlarge', 'm2.4xlarge', 'cc1.4xlarge', 't1.micro'] class BuyReservation(object): def get_region(self, params): if not params.get('region', None): prop = StringProperty(name='region', verbose_name='EC2 Region', choices=boto.ec2.regions) params['region'] = propget.get(prop, choices=boto.ec2.regions) def get_instance_type(self, params): if not params.get('instance_type', None): prop = StringProperty(name='instance_type', verbose_name='Instance Type', choices=InstanceTypes) params['instance_type'] = propget.get(prop) def get_quantity(self, params): if not params.get('quantity', None): prop = IntegerProperty(name='quantity', verbose_name='Number of Instances') params['quantity'] = propget.get(prop) def get_zone(self, params): if not params.get('zone', None): prop = StringProperty(name='zone', verbose_name='EC2 Availability Zone', choices=self.ec2.get_all_zones) params['zone'] = propget.get(prop) def get(self, params): self.get_region(params) self.ec2 = params['region'].connect() self.get_instance_type(params) self.get_zone(params) self.get_quantity(params) if __name__ == "__main__": obj = BuyReservation() params = {} obj.get(params) offerings = obj.ec2.get_all_reserved_instances_offerings(instance_type=params['instance_type'], availability_zone=params['zone'].name) print '\nThe following Reserved Instances Offerings are available:\n' for offering in offerings: offering.describe() prop = StringProperty(name='offering', verbose_name='Offering', choices=offerings) offering = propget.get(prop) print '\nYou have chosen this offering:' offering.describe() unit_price = float(offering.fixed_price) total_price = unit_price * params['quantity'] print '!!! You are about to purchase %d of these offerings for a total of $%.2f !!!' % (params['quantity'], total_price) answer = raw_input('Are you sure you want to do this? If so, enter YES: ') if answer.strip().lower() == 'yes': offering.purchase(params['quantity']) else: print 'Purchase cancelled' boto-2.20.1/boto/ec2/cloudwatch/000077500000000000000000000000001225267101000163045ustar00rootroot00000000000000boto-2.20.1/boto/ec2/cloudwatch/__init__.py000066400000000000000000000600411225267101000204160ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
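# A small worked example (not part of boto) of the arithmetic used in
# buyreservation above: the one-time cost of a purchase is the offering's
# fixed price times the quantity; the numbers are hypothetical.
#
#     unit_price = float('227.50')  # offering.fixed_price comes back as str
#     quantity = 3
#     total_price = unit_price * quantity
#     print '$%.2f' % total_price   # $682.50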
# """ This module provides an interface to the Elastic Compute Cloud (EC2) CloudWatch service from AWS. """ from boto.compat import json from boto.connection import AWSQueryConnection from boto.ec2.cloudwatch.metric import Metric from boto.ec2.cloudwatch.alarm import MetricAlarm, MetricAlarms, AlarmHistoryItem from boto.ec2.cloudwatch.datapoint import Datapoint from boto.regioninfo import RegionInfo import boto RegionData = { 'us-east-1': 'monitoring.us-east-1.amazonaws.com', 'us-gov-west-1': 'monitoring.us-gov-west-1.amazonaws.com', 'us-west-1': 'monitoring.us-west-1.amazonaws.com', 'us-west-2': 'monitoring.us-west-2.amazonaws.com', 'sa-east-1': 'monitoring.sa-east-1.amazonaws.com', 'eu-west-1': 'monitoring.eu-west-1.amazonaws.com', 'ap-northeast-1': 'monitoring.ap-northeast-1.amazonaws.com', 'ap-southeast-1': 'monitoring.ap-southeast-1.amazonaws.com', 'ap-southeast-2': 'monitoring.ap-southeast-2.amazonaws.com', } def regions(): """ Get all available regions for the CloudWatch service. :rtype: list :return: A list of :class:`boto.RegionInfo` instances """ regions = [] for region_name in RegionData: region = RegionInfo(name=region_name, endpoint=RegionData[region_name], connection_cls=CloudWatchConnection) regions.append(region) return regions def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.ec2.cloudwatch.CloudWatchConnection`. :param str region_name: The name of the region to connect to. :rtype: :class:`boto.ec2.CloudWatchConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None class CloudWatchConnection(AWSQueryConnection): APIVersion = boto.config.get('Boto', 'cloudwatch_version', '2010-08-01') DefaultRegionName = boto.config.get('Boto', 'cloudwatch_region_name', 'us-east-1') DefaultRegionEndpoint = boto.config.get('Boto', 'cloudwatch_region_endpoint', 'monitoring.us-east-1.amazonaws.com') def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): """ Init method to create a new connection to EC2 Monitoring Service. B{Note:} The host argument is overridden by the host specified in the boto configuration file. 
""" if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region # Ugly hack to get around both a bug in Python and a # misconfigured SSL cert for the eu-west-1 endpoint if self.region.name == 'eu-west-1': validate_certs = False AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['hmac-v4'] def build_dimension_param(self, dimension, params): prefix = 'Dimensions.member' i = 0 for dim_name in dimension: dim_value = dimension[dim_name] if dim_value: if isinstance(dim_value, basestring): dim_value = [dim_value] for value in dim_value: params['%s.%d.Name' % (prefix, i+1)] = dim_name params['%s.%d.Value' % (prefix, i+1)] = value i += 1 else: params['%s.%d.Name' % (prefix, i+1)] = dim_name i += 1 def build_list_params(self, params, items, label): if isinstance(items, basestring): items = [items] for index, item in enumerate(items): i = index + 1 if isinstance(item, dict): for k, v in item.iteritems(): params[label % (i, 'Name')] = k if v is not None: params[label % (i, 'Value')] = v else: params[label % i] = item def build_put_params(self, params, name, value=None, timestamp=None, unit=None, dimensions=None, statistics=None): args = (name, value, unit, dimensions, statistics, timestamp) length = max(map(lambda a: len(a) if isinstance(a, list) else 1, args)) def aslist(a): if isinstance(a, list): if len(a) != length: raise Exception('Must specify equal number of elements; expected %d.' % length) return a return [a] * length for index, (n, v, u, d, s, t) in enumerate(zip(*map(aslist, args))): metric_data = {'MetricName': n} if timestamp: metric_data['Timestamp'] = t.isoformat() if unit: metric_data['Unit'] = u if dimensions: self.build_dimension_param(d, metric_data) if statistics: metric_data['StatisticValues.Maximum'] = s['maximum'] metric_data['StatisticValues.Minimum'] = s['minimum'] metric_data['StatisticValues.SampleCount'] = s['samplecount'] metric_data['StatisticValues.Sum'] = s['sum'] if value != None: msg = 'You supplied a value and statistics for a ' + \ 'metric.Posting statistics and not value.' boto.log.warn(msg) elif value != None: metric_data['Value'] = v else: raise Exception('Must specify a value or statistics to put.') for key, val in metric_data.iteritems(): params['MetricData.member.%d.%s' % (index + 1, key)] = val def get_metric_statistics(self, period, start_time, end_time, metric_name, namespace, statistics, dimensions=None, unit=None): """ Get time-series data for one or more statistics of a given metric. :type period: integer :param period: The granularity, in seconds, of the returned datapoints. Period must be at least 60 seconds and must be a multiple of 60. The default value is 60. :type start_time: datetime :param start_time: The time stamp to use for determining the first datapoint to return. The value specified is inclusive; results include datapoints with the time stamp specified. :type end_time: datetime :param end_time: The time stamp to use for determining the last datapoint to return. The value specified is exclusive; results will include datapoints up to the time stamp specified. :type metric_name: string :param metric_name: The metric name. :type namespace: string :param namespace: The metric's namespace. 
:type statistics: list
        :param statistics: A list of statistic names. Valid values:
            Average | Sum | SampleCount | Maximum | Minimum

        :type dimensions: dict
        :param dimensions: A dictionary of dimension key/values where
            the key is the dimension name and the value is either a
            scalar value or an iterator of values to be associated with
            that dimension.

        :type unit: string
        :param unit: The unit for the metric. Valid values are:
            Seconds | Microseconds | Milliseconds | Bytes | Kilobytes |
            Megabytes | Gigabytes | Terabytes | Bits | Kilobits |
            Megabits | Gigabits | Terabits | Percent | Count |
            Bytes/Second | Kilobytes/Second | Megabytes/Second |
            Gigabytes/Second | Terabytes/Second | Bits/Second |
            Kilobits/Second | Megabits/Second | Gigabits/Second |
            Terabits/Second | Count/Second | None

        :rtype: list
        """
        params = {'Period': period,
                  'MetricName': metric_name,
                  'Namespace': namespace,
                  'StartTime': start_time.isoformat(),
                  'EndTime': end_time.isoformat()}
        self.build_list_params(params, statistics, 'Statistics.member.%d')
        if dimensions:
            self.build_dimension_param(dimensions, params)
        if unit:
            params['Unit'] = unit
        return self.get_list('GetMetricStatistics', params,
                             [('member', Datapoint)])

    def list_metrics(self, next_token=None, dimensions=None,
                     metric_name=None, namespace=None):
        """
        Returns a list of the valid metrics for which there is recorded
        data available.

        :type next_token: str
        :param next_token: A maximum of 500 metrics will be returned at
            one time. If more results are available, the ResultSet returned
            will contain a non-Null next_token attribute. Passing that
            token as a parameter to list_metrics will retrieve the next
            page of metrics.

        :type dimensions: dict
        :param dimensions: A dictionary containing name/value
            pairs that will be used to filter the results. The key in
            the dictionary is the name of a Dimension. The value in
            the dictionary is either a scalar value of that Dimension
            name that you want to filter on, a list of values to
            filter on or None if you want all metrics with that
            Dimension name.

        :type metric_name: str
        :param metric_name: The name of the Metric to filter against.
            If None, all Metric names will be returned.

        :type namespace: str
        :param namespace: A Metric namespace to filter against (e.g.
            AWS/EC2). If None, Metrics from all namespaces will be returned.
        """
        params = {}
        if next_token:
            params['NextToken'] = next_token
        if dimensions:
            self.build_dimension_param(dimensions, params)
        if metric_name:
            params['MetricName'] = metric_name
        if namespace:
            params['Namespace'] = namespace
        return self.get_list('ListMetrics', params, [('member', Metric)])

    def put_metric_data(self, namespace, name, value=None, timestamp=None,
                        unit=None, dimensions=None, statistics=None):
        """
        Publishes metric data points to Amazon CloudWatch. Amazon CloudWatch
        associates the data points with the specified metric. If the
        specified metric does not exist, Amazon CloudWatch creates the
        metric. If a list is specified for some, but not all, of the
        arguments, the remaining arguments are repeated a corresponding
        number of times.

        :type namespace: str
        :param namespace: The namespace of the metric.

        :type name: str or list
        :param name: The name of the metric.

        :type value: float or list
        :param value: The value for the metric.

        :type timestamp: datetime or list
        :param timestamp: The time stamp used for the metric. If not
            specified, the default value is set to the time the metric
            data was received.

        :type unit: string or list
        :param unit: The unit of the metric.
Valid Values: Seconds | Microseconds | Milliseconds | Bytes |
            Kilobytes | Megabytes | Gigabytes | Terabytes | Bits |
            Kilobits | Megabits | Gigabits | Terabits | Percent | Count |
            Bytes/Second | Kilobytes/Second | Megabytes/Second |
            Gigabytes/Second | Terabytes/Second | Bits/Second |
            Kilobits/Second | Megabits/Second | Gigabits/Second |
            Terabits/Second | Count/Second | None

        :type dimensions: dict
        :param dimensions: Add extra name value pairs to associate
            with the metric, i.e.:
            {'name1': value1, 'name2': (value2, value3)}

        :type statistics: dict or list
        :param statistics: Use a statistic set instead of a value,
            for example::

                {'maximum': 30, 'minimum': 1, 'samplecount': 100, 'sum': 10000}
        """
        params = {'Namespace': namespace}
        self.build_put_params(params, name, value=value, timestamp=timestamp,
                              unit=unit, dimensions=dimensions,
                              statistics=statistics)
        return self.get_status('PutMetricData', params, verb="POST")

    def describe_alarms(self, action_prefix=None, alarm_name_prefix=None,
                        alarm_names=None, max_records=None, state_value=None,
                        next_token=None):
        """
        Retrieves alarms with the specified names. If no name is specified,
        all alarms for the user are returned. Alarms can be retrieved by
        using only a prefix for the alarm name, the alarm state, or a prefix
        for any action.

        :type action_prefix: string
        :param action_prefix: The action name prefix.

        :type alarm_name_prefix: string
        :param alarm_name_prefix: The alarm name prefix. AlarmNames cannot
            be specified if this parameter is specified.

        :type alarm_names: list
        :param alarm_names: A list of alarm names to retrieve information
            for.

        :type max_records: int
        :param max_records: The maximum number of alarm descriptions
            to retrieve.

        :type state_value: string
        :param state_value: The state value to be used in matching alarms.

        :type next_token: string
        :param next_token: The token returned by a previous call to
            indicate that there is more data.

        :rtype: list
        """
        params = {}
        if action_prefix:
            params['ActionPrefix'] = action_prefix
        if alarm_name_prefix:
            params['AlarmNamePrefix'] = alarm_name_prefix
        elif alarm_names:
            self.build_list_params(params, alarm_names,
                                   'AlarmNames.member.%s')
        if max_records:
            params['MaxRecords'] = max_records
        if next_token:
            params['NextToken'] = next_token
        if state_value:
            params['StateValue'] = state_value
        result = self.get_list('DescribeAlarms', params,
                               [('MetricAlarms', MetricAlarms)])
        ret = result[0]
        ret.next_token = result.next_token
        return ret

    def describe_alarm_history(self, alarm_name=None,
                               start_date=None, end_date=None,
                               max_records=None, history_item_type=None,
                               next_token=None):
        """
        Retrieves history for the specified alarm. Filter alarms by date
        range or item type. If an alarm name is not specified, Amazon
        CloudWatch returns histories for all of the owner's alarms.

        Amazon CloudWatch retains the history of deleted alarms for a period
        of six weeks. If an alarm has been deleted, its history can still
        be queried.

        :type alarm_name: string
        :param alarm_name: The name of the alarm.

        :type start_date: datetime
        :param start_date: The starting date to retrieve alarm history.

        :type end_date: datetime
        :param end_date: The ending date to retrieve alarm history.

        :type history_item_type: string
        :param history_item_type: The type of alarm histories to retrieve
            (ConfigurationUpdate | StateUpdate | Action)

        :type max_records: int
        :param max_records: The maximum number of alarm descriptions
            to retrieve.

        :type next_token: string
        :param next_token: The token returned by a previous call to indicate
            that there is more data.
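        Example (illustrative alarm name)::

            history = conn.describe_alarm_history(
                alarm_name='my-alarm', history_item_type='StateUpdate',
                max_records=10)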
:rtype: list
        """
        params = {}
        if alarm_name:
            params['AlarmName'] = alarm_name
        if start_date:
            params['StartDate'] = start_date.isoformat()
        if end_date:
            params['EndDate'] = end_date.isoformat()
        if history_item_type:
            params['HistoryItemType'] = history_item_type
        if max_records:
            params['MaxRecords'] = max_records
        if next_token:
            params['NextToken'] = next_token
        return self.get_list('DescribeAlarmHistory', params,
                             [('member', AlarmHistoryItem)])

    def describe_alarms_for_metric(self, metric_name, namespace, period=None,
                                   statistic=None, dimensions=None,
                                   unit=None):
        """
        Retrieves all alarms for a single metric. Specify a statistic,
        period, or unit to filter the set of alarms further.

        :type metric_name: string
        :param metric_name: The name of the metric.

        :type namespace: string
        :param namespace: The namespace of the metric.

        :type period: int
        :param period: The period in seconds over which the statistic
            is applied.

        :type statistic: string
        :param statistic: The statistic for the metric.

        :type dimensions: dict
        :param dimensions: A dictionary containing name/value
            pairs that will be used to filter the results. The key in
            the dictionary is the name of a Dimension. The value in
            the dictionary is either a scalar value of that Dimension
            name that you want to filter on, a list of values to
            filter on or None if you want all metrics with that
            Dimension name.

        :type unit: string
        :param unit: The unit for the metric.

        :rtype: list
        """
        params = {'MetricName': metric_name,
                  'Namespace': namespace}
        if period:
            params['Period'] = period
        if statistic:
            params['Statistic'] = statistic
        if dimensions:
            self.build_dimension_param(dimensions, params)
        if unit:
            params['Unit'] = unit
        return self.get_list('DescribeAlarmsForMetric', params,
                             [('member', MetricAlarm)])

    def put_metric_alarm(self, alarm):
        """
        Creates or updates an alarm and associates it with the specified
        Amazon CloudWatch metric. Optionally, this operation can associate
        one or more Amazon Simple Notification Service resources with
        the alarm.

        When this operation creates an alarm, the alarm state is immediately
        set to INSUFFICIENT_DATA. The alarm is evaluated and its StateValue
        is set appropriately. Any actions associated with the StateValue
        are then executed.

        When updating an existing alarm, its StateValue is left unchanged.

        :type alarm: boto.ec2.cloudwatch.alarm.MetricAlarm
        :param alarm: MetricAlarm object.
        """
        params = {
            'AlarmName': alarm.name,
            'MetricName': alarm.metric,
            'Namespace': alarm.namespace,
            'Statistic': alarm.statistic,
            'ComparisonOperator': alarm.comparison,
            'Threshold': alarm.threshold,
            'EvaluationPeriods': alarm.evaluation_periods,
            'Period': alarm.period,
        }
        if alarm.actions_enabled is not None:
            params['ActionsEnabled'] = alarm.actions_enabled
        if alarm.alarm_actions:
            self.build_list_params(params, alarm.alarm_actions,
                                   'AlarmActions.member.%s')
        if alarm.description:
            params['AlarmDescription'] = alarm.description
        if alarm.dimensions:
            self.build_dimension_param(alarm.dimensions, params)
        if alarm.insufficient_data_actions:
            self.build_list_params(params, alarm.insufficient_data_actions,
                                   'InsufficientDataActions.member.%s')
        if alarm.ok_actions:
            self.build_list_params(params, alarm.ok_actions,
                                   'OKActions.member.%s')
        if alarm.unit:
            params['Unit'] = alarm.unit
        alarm.connection = self
        return self.get_status('PutMetricAlarm', params)
    create_alarm = put_metric_alarm
    update_alarm = put_metric_alarm

    def delete_alarms(self, alarms):
        """
        Deletes all specified alarms. In the event of an error, no
        alarms are deleted.

        :type alarms: list
        :param alarms: List of alarm names.
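        Example (illustrative names)::

            conn.delete_alarms(['cpu-high', 'disk-full'])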
""" params = {} self.build_list_params(params, alarms, 'AlarmNames.member.%s') return self.get_status('DeleteAlarms', params) def set_alarm_state(self, alarm_name, state_reason, state_value, state_reason_data=None): """ Temporarily sets the state of an alarm. When the updated StateValue differs from the previous value, the action configured for the appropriate state is invoked. This is not a permanent change. The next periodic alarm check (in about a minute) will set the alarm to its actual state. :type alarm_name: string :param alarm_name: Descriptive name for alarm. :type state_reason: string :param state_reason: Human readable reason. :type state_value: string :param state_value: OK | ALARM | INSUFFICIENT_DATA :type state_reason_data: string :param state_reason_data: Reason string (will be jsonified). """ params = {'AlarmName': alarm_name, 'StateReason': state_reason, 'StateValue': state_value} if state_reason_data: params['StateReasonData'] = json.dumps(state_reason_data) return self.get_status('SetAlarmState', params) def enable_alarm_actions(self, alarm_names): """ Enables actions for the specified alarms. :type alarms: list :param alarms: List of alarm names. """ params = {} self.build_list_params(params, alarm_names, 'AlarmNames.member.%s') return self.get_status('EnableAlarmActions', params) def disable_alarm_actions(self, alarm_names): """ Disables actions for the specified alarms. :type alarms: list :param alarms: List of alarm names. """ params = {} self.build_list_params(params, alarm_names, 'AlarmNames.member.%s') return self.get_status('DisableAlarmActions', params) boto-2.20.1/boto/ec2/cloudwatch/alarm.py000066400000000000000000000300011225267101000177440ustar00rootroot00000000000000# Copyright (c) 2010 Reza Lotun http://reza.lotun.name # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from datetime import datetime from boto.resultset import ResultSet from boto.ec2.cloudwatch.listelement import ListElement from boto.ec2.cloudwatch.dimension import Dimension from boto.compat import json class MetricAlarms(list): def __init__(self, connection=None): """ Parses a list of MetricAlarms. 
""" list.__init__(self) self.connection = connection def startElement(self, name, attrs, connection): if name == 'member': metric_alarm = MetricAlarm(connection) self.append(metric_alarm) return metric_alarm def endElement(self, name, value, connection): pass class MetricAlarm(object): OK = 'OK' ALARM = 'ALARM' INSUFFICIENT_DATA = 'INSUFFICIENT_DATA' _cmp_map = { '>=': 'GreaterThanOrEqualToThreshold', '>': 'GreaterThanThreshold', '<': 'LessThanThreshold', '<=': 'LessThanOrEqualToThreshold', } _rev_cmp_map = dict((v, k) for (k, v) in _cmp_map.iteritems()) def __init__(self, connection=None, name=None, metric=None, namespace=None, statistic=None, comparison=None, threshold=None, period=None, evaluation_periods=None, unit=None, description='', dimensions=None, alarm_actions=None, insufficient_data_actions=None, ok_actions=None): """ Creates a new Alarm. :type name: str :param name: Name of alarm. :type metric: str :param metric: Name of alarm's associated metric. :type namespace: str :param namespace: The namespace for the alarm's metric. :type statistic: str :param statistic: The statistic to apply to the alarm's associated metric. Valid values: SampleCount|Average|Sum|Minimum|Maximum :type comparison: str :param comparison: Comparison used to compare statistic with threshold. Valid values: >= | > | < | <= :type threshold: float :param threshold: The value against which the specified statistic is compared. :type period: int :param period: The period in seconds over which teh specified statistic is applied. :type evaluation_periods: int :param evaluation_periods: The number of periods over which data is compared to the specified threshold. :type unit: str :param unit: Allowed Values are: Seconds|Microseconds|Milliseconds, Bytes|Kilobytes|Megabytes|Gigabytes|Terabytes, Bits|Kilobits|Megabits|Gigabits|Terabits, Percent|Count| Bytes/Second|Kilobytes/Second|Megabytes/Second| Gigabytes/Second|Terabytes/Second, Bits/Second|Kilobits/Second|Megabits/Second, Gigabits/Second|Terabits/Second|Count/Second|None :type description: str :param description: Description of MetricAlarm :type dimensions: dict :param dimensions: A dictionary of dimension key/values where the key is the dimension name and the value is either a scalar value or an iterator of values to be associated with that dimension. 
Example: { 'InstanceId': ['i-0123456', 'i-0123457'], 'LoadBalancerName': 'test-lb' } :type alarm_actions: list of strs :param alarm_actions: A list of the ARNs of the actions to take in ALARM state :type insufficient_data_actions: list of strs :param insufficient_data_actions: A list of the ARNs of the actions to take in INSUFFICIENT_DATA state :type ok_actions: list of strs :param ok_actions: A list of the ARNs of the actions to take in OK state """ self.name = name self.connection = connection self.metric = metric self.namespace = namespace self.statistic = statistic if threshold is not None: self.threshold = float(threshold) else: self.threshold = None self.comparison = self._cmp_map.get(comparison) if period is not None: self.period = int(period) else: self.period = None if evaluation_periods is not None: self.evaluation_periods = int(evaluation_periods) else: self.evaluation_periods = None self.actions_enabled = None self.alarm_arn = None self.last_updated = None self.description = description self.dimensions = dimensions self.state_reason = None self.state_value = None self.unit = unit self.alarm_actions = alarm_actions self.insufficient_data_actions = insufficient_data_actions self.ok_actions = ok_actions def __repr__(self): return 'MetricAlarm:%s[%s(%s) %s %s]' % (self.name, self.metric, self.statistic, self.comparison, self.threshold) def startElement(self, name, attrs, connection): if name == 'AlarmActions': self.alarm_actions = ListElement() return self.alarm_actions elif name == 'InsufficientDataActions': self.insufficient_data_actions = ListElement() return self.insufficient_data_actions elif name == 'OKActions': self.ok_actions = ListElement() return self.ok_actions elif name == 'Dimensions': self.dimensions = Dimension() return self.dimensions else: pass def endElement(self, name, value, connection): if name == 'ActionsEnabled': self.actions_enabled = value elif name == 'AlarmArn': self.alarm_arn = value elif name == 'AlarmConfigurationUpdatedTimestamp': self.last_updated = value elif name == 'AlarmDescription': self.description = value elif name == 'AlarmName': self.name = value elif name == 'ComparisonOperator': setattr(self, 'comparison', self._rev_cmp_map[value]) elif name == 'EvaluationPeriods': self.evaluation_periods = int(value) elif name == 'MetricName': self.metric = value elif name == 'Namespace': self.namespace = value elif name == 'Period': self.period = int(value) elif name == 'StateReason': self.state_reason = value elif name == 'StateValue': self.state_value = value elif name == 'Statistic': self.statistic = value elif name == 'Threshold': self.threshold = float(value) elif name == 'Unit': self.unit = value else: setattr(self, name, value) def set_state(self, value, reason, data=None): """ Temporarily sets the state of an alarm. :type value: str :param value: OK | ALARM | INSUFFICIENT_DATA :type reason: str :param reason: Reason alarm set (human readable). :type data: str :param data: Reason data (will be jsonified). 
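        Example (illustrative)::

            alarm.set_state('OK', 'Manually reset during maintenance')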
""" return self.connection.set_alarm_state(self.name, reason, value, data) def update(self): return self.connection.update_alarm(self) def enable_actions(self): return self.connection.enable_alarm_actions([self.name]) def disable_actions(self): return self.connection.disable_alarm_actions([self.name]) def describe_history(self, start_date=None, end_date=None, max_records=None, history_item_type=None, next_token=None): return self.connection.describe_alarm_history(self.name, start_date, end_date, max_records, history_item_type, next_token) def add_alarm_action(self, action_arn=None): """ Adds an alarm action, represented as an SNS topic, to this alarm. What do do when alarm is triggered. :type action_arn: str :param action_arn: SNS topics to which notification should be sent if the alarm goes to state ALARM. """ if not action_arn: return # Raise exception instead? self.actions_enabled = 'true' self.alarm_actions.append(action_arn) def add_insufficient_data_action(self, action_arn=None): """ Adds an insufficient_data action, represented as an SNS topic, to this alarm. What to do when the insufficient_data state is reached. :type action_arn: str :param action_arn: SNS topics to which notification should be sent if the alarm goes to state INSUFFICIENT_DATA. """ if not action_arn: return self.actions_enabled = 'true' self.insufficient_data_actions.append(action_arn) def add_ok_action(self, action_arn=None): """ Adds an ok action, represented as an SNS topic, to this alarm. What to do when the ok state is reached. :type action_arn: str :param action_arn: SNS topics to which notification should be sent if the alarm goes to state INSUFFICIENT_DATA. """ if not action_arn: return self.actions_enabled = 'true' self.ok_actions.append(action_arn) def delete(self): self.connection.delete_alarms([self.name]) class AlarmHistoryItem(object): def __init__(self, connection=None): self.connection = connection def __repr__(self): return 'AlarmHistory:%s[%s at %s]' % (self.name, self.summary, self.timestamp) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'AlarmName': self.name = value elif name == 'HistoryData': self.data = json.loads(value) elif name == 'HistoryItemType': self.tem_type = value elif name == 'HistorySummary': self.summary = value elif name == 'Timestamp': try: self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') except ValueError: self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') boto-2.20.1/boto/ec2/cloudwatch/datapoint.py000066400000000000000000000032041225267101000206400ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from datetime import datetime class Datapoint(dict): def __init__(self, connection=None): dict.__init__(self) self.connection = connection def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name in ['Average', 'Maximum', 'Minimum', 'Sum', 'SampleCount']: self[name] = float(value) elif name == 'Timestamp': self[name] = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ') elif name != 'member': self[name] = value boto-2.20.1/boto/ec2/cloudwatch/dimension.py000066400000000000000000000027751225267101000206560ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class Dimension(dict): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Name': self._name = value elif name == 'Value': if self._name in self: self[self._name].append(value) else: self[self._name] = [value] else: setattr(self, name, value) boto-2.20.1/boto/ec2/cloudwatch/listelement.py000066400000000000000000000024471225267101000212120ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
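# A minimal sketch (not part of boto) of the contract the ListElement class
# below fulfils: during SAX parsing of a response, each <member> tag's text
# is appended, so a fragment such as
#
#     <AvailabilityZones>
#       <member>us-east-1a</member>
#       <member>us-east-1b</member>
#     </AvailabilityZones>
#
# (hypothetical payload) ends up as a plain Python list:
#
#     le = ListElement()
#     le.endElement('member', 'us-east-1a', None)
#     le.endElement('member', 'us-east-1b', None)
#     assert le == ['us-east-1a', 'us-east-1b']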
class ListElement(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'member': self.append(value) boto-2.20.1/boto/ec2/cloudwatch/metric.py000066400000000000000000000165031225267101000201460ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.ec2.cloudwatch.alarm import MetricAlarm from boto.ec2.cloudwatch.dimension import Dimension class Metric(object): Statistics = ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount'] Units = ['Seconds', 'Microseconds', 'Milliseconds', 'Bytes', 'Kilobytes', 'Megabytes', 'Gigabytes', 'Terabytes', 'Bits', 'Kilobits', 'Megabits', 'Gigabits', 'Terabits', 'Percent', 'Count', 'Bytes/Second', 'Kilobytes/Second', 'Megabytes/Second', 'Gigabytes/Second', 'Terabytes/Second', 'Bits/Second', 'Kilobits/Second', 'Megabits/Second', 'Gigabits/Second', 'Terabits/Second', 'Count/Second', None] def __init__(self, connection=None): self.connection = connection self.name = None self.namespace = None self.dimensions = None def __repr__(self): return 'Metric:%s' % self.name def startElement(self, name, attrs, connection): if name == 'Dimensions': self.dimensions = Dimension() return self.dimensions def endElement(self, name, value, connection): if name == 'MetricName': self.name = value elif name == 'Namespace': self.namespace = value else: setattr(self, name, value) def query(self, start_time, end_time, statistics, unit=None, period=60): """ :type start_time: datetime :param start_time: The time stamp to use for determining the first datapoint to return. The value specified is inclusive; results include datapoints with the time stamp specified. :type end_time: datetime :param end_time: The time stamp to use for determining the last datapoint to return. The value specified is exclusive; results will include datapoints up to the time stamp specified. :type statistics: list :param statistics: A list of statistics names Valid values: Average | Sum | SampleCount | Maximum | Minimum :type unit: string :param unit: The unit for the metric. 
Valid values are: Seconds | Microseconds | Milliseconds | Bytes |
            Kilobytes | Megabytes | Gigabytes | Terabytes | Bits |
            Kilobits | Megabits | Gigabits | Terabits | Percent | Count |
            Bytes/Second | Kilobytes/Second | Megabytes/Second |
            Gigabytes/Second | Terabytes/Second | Bits/Second |
            Kilobits/Second | Megabits/Second | Gigabits/Second |
            Terabits/Second | Count/Second | None

        :type period: integer
        :param period: The granularity, in seconds, of the returned
            datapoints. Period must be at least 60 seconds and must be a
            multiple of 60. The default value is 60.
        """
        if not isinstance(statistics, list):
            statistics = [statistics]
        return self.connection.get_metric_statistics(period, start_time,
                                                     end_time, self.name,
                                                     self.namespace,
                                                     statistics,
                                                     self.dimensions, unit)

    def create_alarm(self, name, comparison, threshold, period,
                     evaluation_periods, statistic, enabled=True,
                     description=None, dimensions=None, alarm_actions=None,
                     ok_actions=None, insufficient_data_actions=None,
                     unit=None):
        """
        Creates or updates an alarm and associates it with this metric.
        Optionally, this operation can associate one or more Amazon Simple
        Notification Service resources with the alarm.

        When this operation creates an alarm, the alarm state is immediately
        set to INSUFFICIENT_DATA. The alarm is evaluated and its StateValue
        is set appropriately. Any actions associated with the StateValue
        are then executed.

        When updating an existing alarm, its StateValue is left unchanged.

        :rtype: :class:`boto.ec2.cloudwatch.alarm.MetricAlarm`
        :return: The alarm that was created or updated, if the request
            succeeded.
        """
        if not dimensions:
            dimensions = self.dimensions
        alarm = MetricAlarm(self.connection, name, self.name,
                            self.namespace, statistic, comparison,
                            threshold, period, evaluation_periods, unit,
                            description, dimensions, alarm_actions,
                            insufficient_data_actions, ok_actions)
        if self.connection.put_metric_alarm(alarm):
            return alarm

    def describe_alarms(self, period=None, statistic=None,
                        dimensions=None, unit=None):
        """
        Retrieves all alarms for this metric. Specify a statistic, period,
        or unit to filter the set of alarms further.

        :type period: int
        :param period: The period in seconds over which the statistic
            is applied.

        :type statistic: string
        :param statistic: The statistic for the metric.

        :type dimensions: dict
        :param dimensions: A dictionary containing name/value
            pairs that will be used to filter the results. The key in
            the dictionary is the name of a Dimension. The value in
            the dictionary is either a scalar value of that Dimension
            name that you want to filter on, a list of values to
            filter on or None if you want all metrics with that
            Dimension name.

        :type unit: string

        :rtype: list
        """
        return self.connection.describe_alarms_for_metric(self.name,
                                                          self.namespace,
                                                          period,
                                                          statistic,
                                                          dimensions,
                                                          unit)
boto-2.20.1/boto/ec2/connection.py000066400000000000000000005115171225267101000166710ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2010, Eucalyptus Systems, Inc.
# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a connection to the EC2 service. """ import base64 import warnings from datetime import datetime from datetime import timedelta import boto from boto.connection import AWSQueryConnection from boto.resultset import ResultSet from boto.ec2.image import Image, ImageAttribute, CopyImage from boto.ec2.instance import Reservation, Instance from boto.ec2.instance import ConsoleOutput, InstanceAttribute from boto.ec2.keypair import KeyPair from boto.ec2.address import Address from boto.ec2.volume import Volume, VolumeAttribute from boto.ec2.snapshot import Snapshot from boto.ec2.snapshot import SnapshotAttribute from boto.ec2.zone import Zone from boto.ec2.securitygroup import SecurityGroup from boto.ec2.regioninfo import RegionInfo from boto.ec2.instanceinfo import InstanceInfo from boto.ec2.reservedinstance import ReservedInstancesOffering from boto.ec2.reservedinstance import ReservedInstance from boto.ec2.reservedinstance import ReservedInstanceListing from boto.ec2.reservedinstance import ReservedInstancesConfiguration from boto.ec2.reservedinstance import ModifyReservedInstancesResult from boto.ec2.reservedinstance import ReservedInstancesModification from boto.ec2.spotinstancerequest import SpotInstanceRequest from boto.ec2.spotpricehistory import SpotPriceHistory from boto.ec2.spotdatafeedsubscription import SpotDatafeedSubscription from boto.ec2.bundleinstance import BundleInstanceTask from boto.ec2.placementgroup import PlacementGroup from boto.ec2.tag import Tag from boto.ec2.vmtype import VmType from boto.ec2.instancestatus import InstanceStatusSet from boto.ec2.volumestatus import VolumeStatusSet from boto.ec2.networkinterface import NetworkInterface from boto.ec2.attributes import AccountAttribute, VPCAttribute from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType from boto.exception import EC2ResponseError #boto.set_stream_logger('ec2') class EC2Connection(AWSQueryConnection): APIVersion = boto.config.get('Boto', 'ec2_version', '2013-10-15') DefaultRegionName = boto.config.get('Boto', 'ec2_region_name', 'us-east-1') DefaultRegionEndpoint = boto.config.get('Boto', 'ec2_region_endpoint', 'ec2.us-east-1.amazonaws.com') ResponseError = EC2ResponseError def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, host=None, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', api_version=None, 
security_token=None, validate_certs=True): """ Init method to create a new connection to EC2. """ if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) if api_version: self.APIVersion = api_version def _required_auth_capability(self): return ['ec2'] def get_params(self): """ Returns a dictionary containing the values of all of the keyword arguments passed when constructing this connection. """ param_names = ['aws_access_key_id', 'aws_secret_access_key', 'is_secure', 'port', 'proxy', 'proxy_port', 'proxy_user', 'proxy_pass', 'debug', 'https_connection_factory'] params = {} for name in param_names: params[name] = getattr(self, name) return params def build_filter_params(self, params, filters): i = 1 for name in filters: aws_name = name if not aws_name.startswith('tag:'): aws_name = name.replace('_', '-') params['Filter.%d.Name' % i] = aws_name value = filters[name] if not isinstance(value, list): value = [value] j = 1 for v in value: params['Filter.%d.Value.%d' % (i, j)] = v j += 1 i += 1 # Image methods def get_all_images(self, image_ids=None, owners=None, executable_by=None, filters=None, dry_run=False): """ Retrieve all the EC2 images available on your account. :type image_ids: list :param image_ids: A list of strings with the image IDs wanted :type owners: list :param owners: A list of owner IDs. The special strings 'self', 'amazon', and 'aws-marketplace' may be used to describe images owned by you, Amazon, or AWS Marketplace, respectively :type executable_by: list :param executable_by: Returns AMIs for which the specified user ID has explicit launch permissions :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.image.Image` """ params = {} if image_ids: self.build_list_params(params, image_ids, 'ImageId') if owners: self.build_list_params(params, owners, 'Owner') if executable_by: self.build_list_params(params, executable_by, 'ExecutableBy') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeImages', params, [('item', Image)], verb='POST') def get_all_kernels(self, kernel_ids=None, owners=None, dry_run=False): """ Retrieve all the EC2 kernels available on your account. Constructs a filter to allow the processing to happen server side. :type kernel_ids: list :param kernel_ids: A list of strings with the image IDs wanted :type owners: list :param owners: A list of owner IDs :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
:rtype: list :return: A list of :class:`boto.ec2.image.Image` """ params = {} if kernel_ids: self.build_list_params(params, kernel_ids, 'ImageId') if owners: self.build_list_params(params, owners, 'Owner') filter = {'image-type': 'kernel'} self.build_filter_params(params, filter) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeImages', params, [('item', Image)], verb='POST') def get_all_ramdisks(self, ramdisk_ids=None, owners=None, dry_run=False): """ Retrieve all the EC2 ramdisks available on your account. Constructs a filter to allow the processing to happen server side. :type ramdisk_ids: list :param ramdisk_ids: A list of strings with the image IDs wanted :type owners: list :param owners: A list of owner IDs :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.image.Image` """ params = {} if ramdisk_ids: self.build_list_params(params, ramdisk_ids, 'ImageId') if owners: self.build_list_params(params, owners, 'Owner') filter = {'image-type': 'ramdisk'} self.build_filter_params(params, filter) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeImages', params, [('item', Image)], verb='POST') def get_image(self, image_id, dry_run=False): """ Shortcut method to retrieve a specific image (AMI). :type image_id: string :param image_id: the ID of the Image to retrieve :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.image.Image` :return: The EC2 Image specified or None if the image is not found """ try: return self.get_all_images(image_ids=[image_id], dry_run=dry_run)[0] except IndexError: # None of those images available return None def register_image(self, name=None, description=None, image_location=None, architecture=None, kernel_id=None, ramdisk_id=None, root_device_name=None, block_device_map=None, dry_run=False, virtualization_type=None, snapshot_id=None): """ Register an image. :type name: string :param name: The name of the AMI. Valid only for EBS-based images. :type description: string :param description: The description of the AMI. :type image_location: string :param image_location: Full path to your AMI manifest in Amazon S3 storage. Only used for S3-based AMIs. :type architecture: string :param architecture: The architecture of the AMI. Valid choices are: * i386 * x86_64 :type kernel_id: string :param kernel_id: The ID of the kernel with which to launch the instances :type root_device_name: string :param root_device_name: The root device name (e.g. /dev/sdh) :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping` :param block_device_map: A BlockDeviceMapping data structure describing the EBS volumes associated with the Image. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :type virtualization_type: string :param virtualization_type: The virtualization type of the image. Valid choices are: * paravirtual * hvm :type snapshot_id: string :param snapshot_id: A snapshot ID for the snapshot to be used as root device for the image.
Mutually exclusive with block_device_map, requires root_device_name :rtype: string :return: The new image id """ params = {} if name: params['Name'] = name if description: params['Description'] = description if architecture: params['Architecture'] = architecture if kernel_id: params['KernelId'] = kernel_id if ramdisk_id: params['RamdiskId'] = ramdisk_id if image_location: params['ImageLocation'] = image_location if root_device_name: params['RootDeviceName'] = root_device_name if snapshot_id: root_vol = BlockDeviceType(snapshot_id=snapshot_id) block_device_map = BlockDeviceMapping() block_device_map[root_device_name] = root_vol if block_device_map: block_device_map.ec2_build_list_params(params) if dry_run: params['DryRun'] = 'true' if virtualization_type: params['VirtualizationType'] = virtualization_type rs = self.get_object('RegisterImage', params, ResultSet, verb='POST') image_id = getattr(rs, 'imageId', None) return image_id def deregister_image(self, image_id, delete_snapshot=False, dry_run=False): """ Unregister an AMI. :type image_id: string :param image_id: the ID of the Image to unregister :type delete_snapshot: bool :param delete_snapshot: Set to True if we should delete the snapshot associated with an EBS volume mounted at /dev/sda1 :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ snapshot_id = None if delete_snapshot: image = self.get_image(image_id) for key in image.block_device_mapping: if key == "/dev/sda1": snapshot_id = image.block_device_mapping[key].snapshot_id break params = { 'ImageId': image_id, } if dry_run: params['DryRun'] = 'true' result = self.get_status('DeregisterImage', params, verb='POST') if result and snapshot_id: return result and self.delete_snapshot(snapshot_id) return result def create_image(self, instance_id, name, description=None, no_reboot=False, block_device_mapping=None, dry_run=False): """ Creates an AMI from an instance in the running or stopped state. :type instance_id: string :param instance_id: the ID of the instance to image. :type name: string :param name: The name of the new image :type description: string :param description: An optional human-readable string describing the contents and purpose of the AMI. :type no_reboot: bool :param no_reboot: An optional flag indicating that the bundling process should not attempt to shut down the instance before bundling. If this flag is True, the responsibility of maintaining file system integrity is left to the owner of the instance. :type block_device_mapping: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping` :param block_device_mapping: A BlockDeviceMapping data structure describing the EBS volumes associated with the Image. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: string :return: The new image id """ params = {'InstanceId': instance_id, 'Name': name} if description: params['Description'] = description if no_reboot: params['NoReboot'] = 'true' if block_device_mapping: block_device_mapping.ec2_build_list_params(params) if dry_run: params['DryRun'] = 'true' img = self.get_object('CreateImage', params, Image, verb='POST') return img.id # ImageAttribute methods def get_image_attribute(self, image_id, attribute='launchPermission', dry_run=False): """ Gets an attribute from an image. :type image_id: string :param image_id: The ID of the image you want information about :type attribute: string :param attribute: The attribute you need information about.
Valid choices are: * launchPermission * productCodes * blockDeviceMapping :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.image.ImageAttribute` :return: An ImageAttribute object representing the value of the attribute requested """ params = {'ImageId': image_id, 'Attribute': attribute} if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeImageAttribute', params, ImageAttribute, verb='POST') def modify_image_attribute(self, image_id, attribute='launchPermission', operation='add', user_ids=None, groups=None, product_codes=None, dry_run=False): """ Changes an attribute of an image. :type image_id: string :param image_id: The image id you wish to change :type attribute: string :param attribute: The attribute you wish to change :type operation: string :param operation: Either add or remove (this is required for changing launchPermissions) :type user_ids: list :param user_ids: The Amazon IDs of the users to add to or remove from the attribute :type groups: list :param groups: The groups to add to or remove from the attribute :type product_codes: list :param product_codes: Amazon DevPay product code. Currently only one product code can be associated with an AMI. Once set, the product code cannot be changed or reset. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'ImageId': image_id, 'Attribute': attribute, 'OperationType': operation} if user_ids: self.build_list_params(params, user_ids, 'UserId') if groups: self.build_list_params(params, groups, 'UserGroup') if product_codes: self.build_list_params(params, product_codes, 'ProductCode') if dry_run: params['DryRun'] = 'true' return self.get_status('ModifyImageAttribute', params, verb='POST') def reset_image_attribute(self, image_id, attribute='launchPermission', dry_run=False): """ Resets an attribute of an AMI to its default value. :type image_id: string :param image_id: ID of the AMI whose attribute will be reset :type attribute: string :param attribute: The attribute to reset :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: Whether the operation succeeded or not """ params = {'ImageId': image_id, 'Attribute': attribute} if dry_run: params['DryRun'] = 'true' return self.get_status('ResetImageAttribute', params, verb='POST') # Instance methods def get_all_instances(self, instance_ids=None, filters=None, dry_run=False, max_results=None): """ Retrieve all the instance reservations associated with your account. .. note:: This method's current behavior is deprecated in favor of :meth:`get_all_reservations`. A future major release will change :meth:`get_all_instances` to return a list of :class:`boto.ec2.instance.Instance` objects as its name suggests. To obtain that behavior today, use :meth:`get_only_instances`. :type instance_ids: list :param instance_ids: A list of strings of instance IDs :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :type max_results: int :param max_results: The maximum number of paginated instance items per response.
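Example (a minimal sketch, not part of the original docs; ``conn`` is assumed to be an existing :class:`EC2Connection`, and per the note above :meth:`get_all_reservations` is preferred)::

    # the filter name 'instance-state-name' is a standard EC2 filter
    reservations = conn.get_all_reservations(
        filters={'instance-state-name': 'running'})
    instances = [i for r in reservations for i in r.instances]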
:rtype: list :return: A list of :class:`boto.ec2.instance.Reservation` """ warnings.warn(('The current get_all_instances implementation will be ' 'replaced with get_all_reservations.'), PendingDeprecationWarning) return self.get_all_reservations(instance_ids=instance_ids, filters=filters, dry_run=dry_run, max_results=max_results) def get_only_instances(self, instance_ids=None, filters=None, dry_run=False, max_results=None): # A future release should rename this method to get_all_instances # and make get_only_instances an alias for that. """ Retrieve all the instances associated with your account. :type instance_ids: list :param instance_ids: A list of strings of instance IDs :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :type max_results: int :param max_results: The maximum number of paginated instance items per response. :rtype: list :return: A list of :class:`boto.ec2.instance.Instance` """ reservations = self.get_all_reservations(instance_ids=instance_ids, filters=filters, dry_run=dry_run, max_results=max_results) return [instance for reservation in reservations for instance in reservation.instances] def get_all_reservations(self, instance_ids=None, filters=None, dry_run=False, max_results=None): """ Retrieve all the instance reservations associated with your account. :type instance_ids: list :param instance_ids: A list of strings of instance IDs :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :type max_results: int :param max_results: The maximum number of paginated instance items per response. :rtype: list :return: A list of :class:`boto.ec2.instance.Reservation` """ params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') if filters: if 'group-id' in filters: gid = filters.get('group-id') if not gid.startswith('sg-') or len(gid) != 11: warnings.warn( "The group-id filter now requires a security group " "identifier (sg-*) instead of a group name. To filter " "by group name use the 'group-name' filter instead.", UserWarning) self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' if max_results is not None: params['MaxResults'] = max_results return self.get_list('DescribeInstances', params, [('item', Reservation)], verb='POST') def get_all_instance_status(self, instance_ids=None, max_results=None, next_token=None, filters=None, dry_run=False): """ Retrieve all the instances in your account scheduled for maintenance. :type instance_ids: list :param instance_ids: A list of strings of instance IDs :type max_results: int :param max_results: The maximum number of paginated instance items per response. :type next_token: str :param next_token: A string specifying the next paginated set of results to return. 
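Example (a pagination sketch, not part of the original docs; it assumes the returned :class:`boto.ec2.instancestatus.InstanceStatusSet` exposes a ``next_token`` attribute, and ``conn`` is an existing :class:`EC2Connection`)::

    statuses = conn.get_all_instance_status(max_results=100)
    all_statuses = list(statuses)
    while statuses.next_token:
        # keep fetching pages until the service stops returning a token
        statuses = conn.get_all_instance_status(
            max_results=100, next_token=statuses.next_token)
        all_statuses.extend(statuses)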
:type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of instances that have maintenance scheduled. """ params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') if max_results: params['MaxResults'] = max_results if next_token: params['NextToken'] = next_token if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeInstanceStatus', params, InstanceStatusSet, verb='POST') def run_instances(self, image_id, min_count=1, max_count=1, key_name=None, security_groups=None, user_data=None, addressing_type=None, instance_type='m1.small', placement=None, kernel_id=None, ramdisk_id=None, monitoring_enabled=False, subnet_id=None, block_device_map=None, disable_api_termination=False, instance_initiated_shutdown_behavior=None, private_ip_address=None, placement_group=None, client_token=None, security_group_ids=None, additional_info=None, instance_profile_name=None, instance_profile_arn=None, tenancy=None, ebs_optimized=False, network_interfaces=None, dry_run=False): """ Runs an image on EC2. :type image_id: string :param image_id: The ID of the image to run. :type min_count: int :param min_count: The minimum number of instances to launch. :type max_count: int :param max_count: The maximum number of instances to launch. :type key_name: string :param key_name: The name of the key pair with which to launch instances. :type security_groups: list of strings :param security_groups: The names of the security groups with which to associate instances. :type user_data: string :param user_data: The Base64-encoded MIME user data to be made available to the instance(s) in this reservation. :type instance_type: string :param instance_type: The type of instance to run: * t1.micro * m1.small * m1.medium * m1.large * m1.xlarge * m3.xlarge * m3.2xlarge * c1.medium * c1.xlarge * m2.xlarge * m2.2xlarge * m2.4xlarge * cr1.8xlarge * hi1.4xlarge * hs1.8xlarge * cc1.4xlarge * cg1.4xlarge * cc2.8xlarge * g2.2xlarge * i2.xlarge * i2.2xlarge * i2.4xlarge * i2.8xlarge :type placement: string :param placement: The Availability Zone to launch the instance into. :type kernel_id: string :param kernel_id: The ID of the kernel with which to launch the instances. :type ramdisk_id: string :param ramdisk_id: The ID of the RAM disk with which to launch the instances. :type monitoring_enabled: bool :param monitoring_enabled: Enable CloudWatch monitoring on the instance. :type subnet_id: string :param subnet_id: The subnet ID within which to launch the instances for VPC. :type private_ip_address: string :param private_ip_address: If you're using VPC, you can optionally use this parameter to assign the instance a specific available IP address from the subnet (e.g., 10.0.0.25). :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping` :param block_device_map: A BlockDeviceMapping data structure describing the EBS volumes associated with the Image. :type disable_api_termination: bool :param disable_api_termination: If True, the instances will be locked and will not be able to be terminated via the API. 
:type instance_initiated_shutdown_behavior: string :param instance_initiated_shutdown_behavior: Specifies whether the instance stops or terminates on instance-initiated shutdown. Valid values are: * stop * terminate :type placement_group: string :param placement_group: If specified, this is the name of the placement group in which the instance(s) will be launched. :type client_token: string :param client_token: Unique, case-sensitive identifier you provide to ensure idempotency of the request. Maximum 64 ASCII characters. :type security_group_ids: list of strings :param security_group_ids: The ID of the VPC security groups with which to associate instances. :type additional_info: string :param additional_info: Specifies additional information to make available to the instance(s). :type tenancy: string :param tenancy: The tenancy of the instance you want to launch. An instance with a tenancy of 'dedicated' runs on single-tenant hardware and can only be launched into a VPC. Valid values are: "default" or "dedicated". NOTE: To use dedicated tenancy you MUST specify a VPC subnet-ID as well. :type instance_profile_arn: string :param instance_profile_arn: The Amazon resource name (ARN) of the IAM Instance Profile (IIP) to associate with the instances. :type instance_profile_name: string :param instance_profile_name: The name of the IAM Instance Profile (IIP) to associate with the instances. :type ebs_optimized: bool :param ebs_optimized: Whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. :type network_interfaces: list :param network_interfaces: A list of :class:`boto.ec2.networkinterface.NetworkInterfaceSpecification` :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
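Example (a minimal sketch, not part of the original docs; ``conn`` is assumed to be an existing :class:`EC2Connection`, and the AMI ID, key pair name, and security group name are placeholders)::

    reservation = conn.run_instances(
        'ami-12345678',          # placeholder AMI ID
        key_name='my-keypair',   # placeholder key pair
        instance_type='m1.small',
        security_groups=['default'])
    instance = reservation.instances[0]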
:rtype: Reservation :return: The :class:`boto.ec2.instance.Reservation` associated with the request for machines """ params = {'ImageId': image_id, 'MinCount': min_count, 'MaxCount': max_count} if key_name: params['KeyName'] = key_name if security_group_ids: l = [] for group in security_group_ids: if isinstance(group, SecurityGroup): l.append(group.id) else: l.append(group) self.build_list_params(params, l, 'SecurityGroupId') if security_groups: l = [] for group in security_groups: if isinstance(group, SecurityGroup): l.append(group.name) else: l.append(group) self.build_list_params(params, l, 'SecurityGroup') if user_data: params['UserData'] = base64.b64encode(user_data) if addressing_type: params['AddressingType'] = addressing_type if instance_type: params['InstanceType'] = instance_type if placement: params['Placement.AvailabilityZone'] = placement if placement_group: params['Placement.GroupName'] = placement_group if tenancy: params['Placement.Tenancy'] = tenancy if kernel_id: params['KernelId'] = kernel_id if ramdisk_id: params['RamdiskId'] = ramdisk_id if monitoring_enabled: params['Monitoring.Enabled'] = 'true' if subnet_id: params['SubnetId'] = subnet_id if private_ip_address: params['PrivateIpAddress'] = private_ip_address if block_device_map: block_device_map.ec2_build_list_params(params) if disable_api_termination: params['DisableApiTermination'] = 'true' if instance_initiated_shutdown_behavior: val = instance_initiated_shutdown_behavior params['InstanceInitiatedShutdownBehavior'] = val if client_token: params['ClientToken'] = client_token if additional_info: params['AdditionalInfo'] = additional_info if instance_profile_name: params['IamInstanceProfile.Name'] = instance_profile_name if instance_profile_arn: params['IamInstanceProfile.Arn'] = instance_profile_arn if ebs_optimized: params['EbsOptimized'] = 'true' if network_interfaces: network_interfaces.build_list_params(params) if dry_run: params['DryRun'] = 'true' return self.get_object('RunInstances', params, Reservation, verb='POST') def terminate_instances(self, instance_ids=None, dry_run=False): """ Terminate the instances specified :type instance_ids: list :param instance_ids: A list of strings of the Instance IDs to terminate :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of the instances terminated """ params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_list('TerminateInstances', params, [('item', Instance)], verb='POST') def stop_instances(self, instance_ids=None, force=False, dry_run=False): """ Stop the instances specified :type instance_ids: list :param instance_ids: A list of strings of the Instance IDs to stop :type force: bool :param force: Forces the instance to stop :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of the instances stopped """ params = {} if force: params['Force'] = 'true' if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_list('StopInstances', params, [('item', Instance)], verb='POST') def start_instances(self, instance_ids=None, dry_run=False): """ Start the instances specified :type instance_ids: list :param instance_ids: A list of strings of the Instance IDs to start :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
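Example (a sketch, not part of the original docs; the instance ID is a placeholder)::

    conn.stop_instances(instance_ids=['i-12345678'])
    # ... later, once the instance has reached the stopped state ...
    conn.start_instances(instance_ids=['i-12345678'])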
:rtype: list :return: A list of the instances started """ params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_list('StartInstances', params, [('item', Instance)], verb='POST') def get_console_output(self, instance_id, dry_run=False): """ Retrieves the console output for the specified instance. :type instance_id: string :param instance_id: The instance ID of a running instance on the cloud. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.instance.ConsoleOutput` :return: The console output as a ConsoleOutput object """ params = {} self.build_list_params(params, [instance_id], 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_object('GetConsoleOutput', params, ConsoleOutput, verb='POST') def reboot_instances(self, instance_ids=None, dry_run=False): """ Reboot the specified instances. :type instance_ids: list :param instance_ids: The instances to reboot :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {} if instance_ids: self.build_list_params(params, instance_ids, 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_status('RebootInstances', params) def confirm_product_instance(self, product_code, instance_id, dry_run=False): """ Confirm that a product code is associated with an instance. :type product_code: string :param product_code: The product code to confirm. :type instance_id: string :param instance_id: The ID of the instance to check. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'ProductCode': product_code, 'InstanceId': instance_id} if dry_run: params['DryRun'] = 'true' rs = self.get_object('ConfirmProductInstance', params, ResultSet, verb='POST') return (rs.status, rs.ownerId) # InstanceAttribute methods def get_instance_attribute(self, instance_id, attribute, dry_run=False): """ Gets an attribute from an instance. :type instance_id: string :param instance_id: The Amazon id of the instance :type attribute: string :param attribute: The attribute you need information about Valid choices are: * instanceType * kernel * ramdisk * userData * disableApiTermination * instanceInitiatedShutdownBehavior * rootDeviceName * blockDeviceMapping * productCodes * sourceDestCheck * groupSet * ebsOptimized :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.image.InstanceAttribute` :return: An InstanceAttribute object representing the value of the attribute requested """ params = {'InstanceId': instance_id} if attribute: params['Attribute'] = attribute if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeInstanceAttribute', params, InstanceAttribute, verb='POST') def modify_network_interface_attribute(self, interface_id, attr, value, attachment_id=None, dry_run=False): """ Changes an attribute of a network interface. :type interface_id: string :param interface_id: The interface id. Looks like 'eni-xxxxxxxx' :type attr: string :param attr: The attribute you wish to change. Learn more at http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-ModifyNetworkInterfaceAttribute.html * description - Textual description of interface * groupSet - List of security group ids or group objects * sourceDestCheck - Boolean * deleteOnTermination - Boolean.
Must also specify attachment_id :type value: string :param value: The new value for the attribute :rtype: bool :return: Whether the operation succeeded or not :type attachment_id: string :param attachment_id: If you're modifying DeleteOnTermination you must specify the attachment_id. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ bool_reqs = ( 'deleteontermination', 'sourcedestcheck', ) if attr.lower() in bool_reqs: if isinstance(value, bool): if value: value = 'true' else: value = 'false' elif value not in ['true', 'false']: raise ValueError('%s must be a boolean, "true", or "false"!' % attr) params = {'NetworkInterfaceId': interface_id} # groupSet is handled differently from other arguments if attr.lower() == 'groupset': for idx, sg in enumerate(value): if isinstance(sg, SecurityGroup): sg = sg.id params['SecurityGroupId.%s' % (idx + 1)] = sg elif attr.lower() == 'description': params['Description.Value'] = value elif attr.lower() == 'sourcedestcheck': params['SourceDestCheck.Value'] = value elif attr.lower() == 'deleteontermination': params['Attachment.DeleteOnTermination'] = value if not attachment_id: raise ValueError('You must also specify an attachment_id') params['Attachment.AttachmentId'] = attachment_id else: raise ValueError('Unknown attribute "%s"' % (attr,)) if dry_run: params['DryRun'] = 'true' return self.get_status( 'ModifyNetworkInterfaceAttribute', params, verb='POST') def modify_instance_attribute(self, instance_id, attribute, value, dry_run=False): """ Changes an attribute of an instance :type instance_id: string :param instance_id: The instance id you wish to change :type attribute: string :param attribute: The attribute you wish to change. * instanceType - A valid instance type (m1.small) * kernel - Kernel ID (None) * ramdisk - Ramdisk ID (None) * userData - Base64 encoded String (None) * disableApiTermination - Boolean (true) * instanceInitiatedShutdownBehavior - stop|terminate * blockDeviceMapping - List of strings - ie: ['/dev/sda=false'] * sourceDestCheck - Boolean (true) * groupSet - Set of Security Groups or IDs * ebsOptimized - Boolean (false) :type value: string :param value: The new value for the attribute :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: Whether the operation succeeded or not """ # Allow a bool to be passed in for value of disableApiTermination bool_reqs = ('disableapitermination', 'sourcedestcheck', 'ebsoptimized') if attribute.lower() in bool_reqs: if isinstance(value, bool): if value: value = 'true' else: value = 'false' params = {'InstanceId': instance_id} # groupSet is handled differently from other arguments if attribute.lower() == 'groupset': for idx, sg in enumerate(value): if isinstance(sg, SecurityGroup): sg = sg.id params['GroupId.%s' % (idx + 1)] = sg elif attribute.lower() == 'blockdevicemapping': for idx, kv in enumerate(value): dev_name, _, flag = kv.partition('=') pre = 'BlockDeviceMapping.%d' % (idx + 1) params['%s.DeviceName' % pre] = dev_name params['%s.Ebs.DeleteOnTermination' % pre] = flag or 'true' else: # for backwards compatibility handle lowercase first letter attribute = attribute[0].upper() + attribute[1:] params['%s.Value' % attribute] = value if dry_run: params['DryRun'] = 'true' return self.get_status('ModifyInstanceAttribute', params, verb='POST') def reset_instance_attribute(self, instance_id, attribute, dry_run=False): """ Resets an attribute of an instance to its default value. 
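Example (a sketch, not part of the original docs; the instance ID is a placeholder)::

    conn.reset_instance_attribute('i-12345678', 'kernel')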
:type instance_id: string :param instance_id: ID of the instance :type attribute: string :param attribute: The attribute to reset. Valid values are: kernel|ramdisk :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: Whether the operation succeeded or not """ params = {'InstanceId': instance_id, 'Attribute': attribute} if dry_run: params['DryRun'] = 'true' return self.get_status('ResetInstanceAttribute', params, verb='POST') # Spot Instances def get_all_spot_instance_requests(self, request_ids=None, filters=None, dry_run=False): """ Retrieve all the spot instance requests associated with your account. :type request_ids: list :param request_ids: A list of strings of spot instance request IDs :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.spotinstancerequest.SpotInstanceRequest` """ params = {} if request_ids: self.build_list_params(params, request_ids, 'SpotInstanceRequestId') if filters: if 'launch.group-id' in filters: lgid = filters.get('launch.group-id') if not lgid.startswith('sg-') or len(lgid) != 11: warnings.warn( "The 'launch.group-id' filter now requires a security " "group id (sg-*) and no longer supports filtering by " "group name. Please update your filters accordingly.", UserWarning) self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeSpotInstanceRequests', params, [('item', SpotInstanceRequest)], verb='POST') def get_spot_price_history(self, start_time=None, end_time=None, instance_type=None, product_description=None, availability_zone=None, dry_run=False, max_results=None): """ Retrieve the recent history of spot instance pricing. :type start_time: str :param start_time: An indication of how far back to provide price changes for. An ISO8601 DateTime string. :type end_time: str :param end_time: An indication of how far forward to provide price changes for. An ISO8601 DateTime string. :type instance_type: str :param instance_type: Filter responses to a particular instance type. :type product_description: str :param product_description: Filter responses to a particular platform. Valid values are currently: * Linux/UNIX * SUSE Linux * Windows * Linux/UNIX (Amazon VPC) * SUSE Linux (Amazon VPC) * Windows (Amazon VPC) :type availability_zone: str :param availability_zone: The availability zone for which prices should be returned. If not specified, data for all availability zones will be returned. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :type max_results: int :param max_results: The maximum number of paginated items per response. :rtype: list :return: A list of :class:`boto.ec2.spotpricehistory.SpotPriceHistory` records, each containing a price and timestamp.
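Example (a minimal sketch, not part of the original docs; ``conn`` is assumed to be an existing :class:`EC2Connection`)::

    history = conn.get_spot_price_history(
        instance_type='m1.small',
        product_description='Linux/UNIX',
        availability_zone='us-east-1a')
    for record in history:
        # each record carries a price and a timestamp
        latest = (record.timestamp, record.price)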
""" params = {} if start_time: params['StartTime'] = start_time if end_time: params['EndTime'] = end_time if instance_type: params['InstanceType'] = instance_type if product_description: params['ProductDescription'] = product_description if availability_zone: params['AvailabilityZone'] = availability_zone if dry_run: params['DryRun'] = 'true' if max_results is not None: params['MaxResults'] = max_results return self.get_list('DescribeSpotPriceHistory', params, [('item', SpotPriceHistory)], verb='POST') def request_spot_instances(self, price, image_id, count=1, type='one-time', valid_from=None, valid_until=None, launch_group=None, availability_zone_group=None, key_name=None, security_groups=None, user_data=None, addressing_type=None, instance_type='m1.small', placement=None, kernel_id=None, ramdisk_id=None, monitoring_enabled=False, subnet_id=None, placement_group=None, block_device_map=None, instance_profile_arn=None, instance_profile_name=None, security_group_ids=None, ebs_optimized=False, network_interfaces=None, dry_run=False): """ Request instances on the spot market at a particular price. :type price: str :param price: The maximum price of your bid :type image_id: string :param image_id: The ID of the image to run :type count: int :param count: The of instances to requested :type type: str :param type: Type of request. Can be 'one-time' or 'persistent'. Default is one-time. :type valid_from: str :param valid_from: Start date of the request. An ISO8601 time string. :type valid_until: str :param valid_until: End date of the request. An ISO8601 time string. :type launch_group: str :param launch_group: If supplied, all requests will be fulfilled as a group. :type availability_zone_group: str :param availability_zone_group: If supplied, all requests will be fulfilled within a single availability zone. :type key_name: string :param key_name: The name of the key pair with which to launch instances :type security_groups: list of strings :param security_groups: The names of the security groups with which to associate instances :type user_data: string :param user_data: The user data passed to the launched instances :type instance_type: string :param instance_type: The type of instance to run: * t1.micro * m1.small * m1.medium * m1.large * m1.xlarge * m3.xlarge * m3.2xlarge * c1.medium * c1.xlarge * m2.xlarge * m2.2xlarge * m2.4xlarge * cr1.8xlarge * hi1.4xlarge * hs1.8xlarge * cc1.4xlarge * cg1.4xlarge * cc2.8xlarge * g2.2xlarge * i2.xlarge * i2.2xlarge * i2.4xlarge * i2.8xlarge :type placement: string :param placement: The availability zone in which to launch the instances :type kernel_id: string :param kernel_id: The ID of the kernel with which to launch the instances :type ramdisk_id: string :param ramdisk_id: The ID of the RAM disk with which to launch the instances :type monitoring_enabled: bool :param monitoring_enabled: Enable CloudWatch monitoring on the instance. :type subnet_id: string :param subnet_id: The subnet ID within which to launch the instances for VPC. :type placement_group: string :param placement_group: If specified, this is the name of the placement group in which the instance(s) will be launched. :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping` :param block_device_map: A BlockDeviceMapping data structure describing the EBS volumes associated with the Image. :type security_group_ids: list of strings :param security_group_ids: The ID of the VPC security groups with which to associate instances. 
:type instance_profile_arn: string :param instance_profile_arn: The Amazon resource name (ARN) of the IAM Instance Profile (IIP) to associate with the instances. :type instance_profile_name: string :param instance_profile_name: The name of the IAM Instance Profile (IIP) to associate with the instances. :type ebs_optimized: bool :param ebs_optimized: Whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. :type network_interfaces: list :param network_interfaces: A list of :class:`boto.ec2.networkinterface.NetworkInterfaceSpecification` :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.spotinstancerequest.SpotInstanceRequest` objects associated with the request """ ls = 'LaunchSpecification' params = {'%s.ImageId' % ls: image_id, 'Type': type, 'SpotPrice': price} if count: params['InstanceCount'] = count if valid_from: params['ValidFrom'] = valid_from if valid_until: params['ValidUntil'] = valid_until if launch_group: params['LaunchGroup'] = launch_group if availability_zone_group: params['AvailabilityZoneGroup'] = availability_zone_group if key_name: params['%s.KeyName' % ls] = key_name if security_group_ids: l = [] for group in security_group_ids: if isinstance(group, SecurityGroup): l.append(group.id) else: l.append(group) self.build_list_params(params, l, '%s.SecurityGroupId' % ls) if security_groups: l = [] for group in security_groups: if isinstance(group, SecurityGroup): l.append(group.name) else: l.append(group) self.build_list_params(params, l, '%s.SecurityGroup' % ls) if user_data: params['%s.UserData' % ls] = base64.b64encode(user_data) if addressing_type: params['%s.AddressingType' % ls] = addressing_type if instance_type: params['%s.InstanceType' % ls] = instance_type if placement: params['%s.Placement.AvailabilityZone' % ls] = placement if kernel_id: params['%s.KernelId' % ls] = kernel_id if ramdisk_id: params['%s.RamdiskId' % ls] = ramdisk_id if monitoring_enabled: params['%s.Monitoring.Enabled' % ls] = 'true' if subnet_id: params['%s.SubnetId' % ls] = subnet_id if placement_group: params['%s.Placement.GroupName' % ls] = placement_group if block_device_map: block_device_map.ec2_build_list_params(params, '%s.' % ls) if instance_profile_name: params['%s.IamInstanceProfile.Name' % ls] = instance_profile_name if instance_profile_arn: params['%s.IamInstanceProfile.Arn' % ls] = instance_profile_arn if ebs_optimized: params['%s.EbsOptimized' % ls] = 'true' if network_interfaces: network_interfaces.build_list_params(params, prefix=ls + '.') if dry_run: params['DryRun'] = 'true' return self.get_list('RequestSpotInstances', params, [('item', SpotInstanceRequest)], verb='POST') def cancel_spot_instance_requests(self, request_ids, dry_run=False): """ Cancel the specified Spot Instance Requests. :type request_ids: list :param request_ids: A list of strings of the Request IDs to cancel :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
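Example (a sketch, not part of the original docs; the request ID is a placeholder)::

    cancelled = conn.cancel_spot_instance_requests(['sir-abcd1234'])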
:rtype: list :return: A list of the spot instance requests that were cancelled """ params = {} if request_ids: self.build_list_params(params, request_ids, 'SpotInstanceRequestId') if dry_run: params['DryRun'] = 'true' return self.get_list('CancelSpotInstanceRequests', params, [('item', SpotInstanceRequest)], verb='POST') def get_spot_datafeed_subscription(self, dry_run=False): """ Return the current spot instance data feed subscription associated with this account, if any. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.spotdatafeedsubscription.SpotDatafeedSubscription` :return: The datafeed subscription object or None """ params = {} if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeSpotDatafeedSubscription', params, SpotDatafeedSubscription, verb='POST') def create_spot_datafeed_subscription(self, bucket, prefix, dry_run=False): """ Create a spot instance datafeed subscription for this account. :type bucket: str or unicode :param bucket: The name of the bucket where spot instance data will be written. The account issuing this request must have FULL_CONTROL access to the bucket specified in the request. :type prefix: str or unicode :param prefix: An optional prefix that will be pre-pended to all data files written to the bucket. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.spotdatafeedsubscription.SpotDatafeedSubscription` :return: The datafeed subscription object or None """ params = {'Bucket': bucket} if prefix: params['Prefix'] = prefix if dry_run: params['DryRun'] = 'true' return self.get_object('CreateSpotDatafeedSubscription', params, SpotDatafeedSubscription, verb='POST') def delete_spot_datafeed_subscription(self, dry_run=False): """ Delete the current spot instance data feed subscription associated with this account :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteSpotDatafeedSubscription', params, verb='POST') # Zone methods def get_all_zones(self, zones=None, filters=None, dry_run=False): """ Get all Availability Zones associated with the current region. :type zones: list :param zones: Optional list of zones. If this list is present, only the Zones associated with these zone names will be returned. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list of :class:`boto.ec2.zone.Zone` :return: The requested Zone objects """ params = {} if zones: self.build_list_params(params, zones, 'ZoneName') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeAvailabilityZones', params, [('item', Zone)], verb='POST') # Address methods def get_all_addresses(self, addresses=None, filters=None, allocation_ids=None, dry_run=False): """ Get all Elastic IPs associated with the current credentials. :type addresses: list :param addresses: Optional list of Elastic IP addresses. If this list is present, only the Address objects for these public IPs will be returned.
:type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type allocation_ids: list :param allocation_ids: Optional list of allocation IDs. If this list is present, only the Addresses associated with the given allocation IDs will be returned. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list of :class:`boto.ec2.address.Address` :return: The requested Address objects """ params = {} if addresses: self.build_list_params(params, addresses, 'PublicIp') if allocation_ids: self.build_list_params(params, allocation_ids, 'AllocationId') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeAddresses', params, [('item', Address)], verb='POST') def allocate_address(self, domain=None, dry_run=False): """ Allocate a new Elastic IP address and associate it with your account. :type domain: string :param domain: Optional string. If domain is set to "vpc", the address will be allocated to your VPC. The returned Address object will include an allocation_id. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.address.Address` :return: The newly allocated Address """ params = {} if domain is not None: params['Domain'] = domain if dry_run: params['DryRun'] = 'true' return self.get_object('AllocateAddress', params, Address, verb='POST') def assign_private_ip_addresses(self, network_interface_id=None, private_ip_addresses=None, secondary_private_ip_address_count=None, allow_reassignment=False, dry_run=False): """ Assigns one or more secondary private IP addresses to a network interface in Amazon VPC. :type network_interface_id: string :param network_interface_id: The network interface to which the IP address will be assigned. :type private_ip_addresses: list :param private_ip_addresses: Assigns the specified IP addresses as secondary IP addresses to the network interface. :type secondary_private_ip_address_count: int :param secondary_private_ip_address_count: The number of secondary IP addresses to assign to the network interface. You cannot specify this parameter when also specifying private_ip_addresses. :type allow_reassignment: bool :param allow_reassignment: Specifies whether to allow an IP address that is already assigned to another network interface or instance to be reassigned to the specified network interface. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
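Example (a sketch, not part of the original docs; the interface ID is a placeholder)::

    conn.assign_private_ip_addresses(
        network_interface_id='eni-12345678',
        secondary_private_ip_address_count=2,
        allow_reassignment=True)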
:rtype: bool :return: True if successful """ params = {} if network_interface_id is not None: params['NetworkInterfaceId'] = network_interface_id if private_ip_addresses is not None: self.build_list_params(params, private_ip_addresses, 'PrivateIpAddress') elif secondary_private_ip_address_count is not None: params['SecondaryPrivateIpAddressCount'] = \ secondary_private_ip_address_count if allow_reassignment: params['AllowReassignment'] = 'true' if dry_run: params['DryRun'] = 'true' return self.get_status('AssignPrivateIpAddresses', params, verb='POST') def associate_address(self, instance_id=None, public_ip=None, allocation_id=None, network_interface_id=None, private_ip_address=None, allow_reassociation=False, dry_run=False): """ Associate an Elastic IP address with a currently running instance. This requires one of ``public_ip`` or ``allocation_id`` depending on if you're associating a VPC address or a plain EC2 address. When using an Allocation ID, make sure to pass ``None`` for ``public_ip`` as EC2 expects a single parameter and if ``public_ip`` is passed boto will preference that instead of ``allocation_id``. :type instance_id: string :param instance_id: The ID of the instance :type public_ip: string :param public_ip: The public IP address for EC2 based allocations. :type allocation_id: string :param allocation_id: The allocation ID for a VPC-based elastic IP. :type network_interface_id: string :param network_interface_id: The ID of the network interface to which the Elastic IP is to be assigned :type private_ip_address: string :param private_ip_address: The primary or secondary private IP address to associate with the Elastic IP address. :type allow_reassociation: bool :param allow_reassociation: Specify this option to allow an Elastic IP address that is already associated with another network interface or instance to be re-associated with the specified instance or interface. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {} if instance_id is not None: params['InstanceId'] = instance_id elif network_interface_id is not None: params['NetworkInterfaceId'] = network_interface_id if public_ip is not None: params['PublicIp'] = public_ip elif allocation_id is not None: params['AllocationId'] = allocation_id if private_ip_address is not None: params['PrivateIpAddress'] = private_ip_address if allow_reassociation: params['AllowReassociation'] = 'true' if dry_run: params['DryRun'] = 'true' return self.get_status('AssociateAddress', params, verb='POST') def disassociate_address(self, public_ip=None, association_id=None, dry_run=False): """ Disassociate an Elastic IP address from a currently running instance. :type public_ip: string :param public_ip: The public IP address for EC2 elastic IPs. :type association_id: string :param association_id: The association ID for a VPC-based Elastic IP. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {} if public_ip is not None: params['PublicIp'] = public_ip elif association_id is not None: params['AssociationId'] = association_id if dry_run: params['DryRun'] = 'true' return self.get_status('DisassociateAddress', params, verb='POST') def release_address(self, public_ip=None, allocation_id=None, dry_run=False): """ Free up an Elastic IP address. Pass a public IP address to release an EC2 Elastic IP address and an AllocationId to release a VPC Elastic IP address.
You should only pass one value. This requires one of ``public_ip`` or ``allocation_id`` depending on if you're associating a VPC address or a plain EC2 address. When using an Allocation ID, make sure to pass ``None`` for ``public_ip`` as EC2 expects a single parameter and if ``public_ip`` is passed boto will preference that instead of ``allocation_id``. :type public_ip: string :param public_ip: The public IP address for EC2 elastic IPs. :type allocation_id: string :param allocation_id: The Allocation ID for VPC elastic IPs. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {} if public_ip is not None: params['PublicIp'] = public_ip elif allocation_id is not None: params['AllocationId'] = allocation_id if dry_run: params['DryRun'] = 'true' return self.get_status('ReleaseAddress', params, verb='POST') def unassign_private_ip_addresses(self, network_interface_id=None, private_ip_addresses=None, dry_run=False): """ Unassigns one or more secondary private IP addresses from a network interface in Amazon VPC. :type network_interface_id: string :param network_interface_id: The network interface from which the secondary private IP address will be unassigned. :type private_ip_addresses: list :param private_ip_addresses: Specifies the secondary private IP addresses that you want to unassign from the network interface. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {} if network_interface_id is not None: params['NetworkInterfaceId'] = network_interface_id if private_ip_addresses is not None: self.build_list_params(params, private_ip_addresses, 'PrivateIpAddress') if dry_run: params['DryRun'] = 'true' return self.get_status('UnassignPrivateIpAddresses', params, verb='POST') # Volume methods def get_all_volumes(self, volume_ids=None, filters=None, dry_run=False): """ Get all Volumes associated with the current credentials. :type volume_ids: list :param volume_ids: Optional list of volume ids. If this list is present, only the volumes associated with these volume ids will be returned. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list of :class:`boto.ec2.volume.Volume` :return: The requested Volume objects """ params = {} if volume_ids: self.build_list_params(params, volume_ids, 'VolumeId') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeVolumes', params, [('item', Volume)], verb='POST') def get_all_volume_status(self, volume_ids=None, max_results=None, next_token=None, filters=None, dry_run=False): """ Retrieve the status of one or more volumes. :type volume_ids: list :param volume_ids: A list of strings of volume IDs :type max_results: int :param max_results: The maximum number of paginated instance items per response. :type next_token: str :param next_token: A string specifying the next paginated set of results to return. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. 
Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of volume status. """ params = {} if volume_ids: self.build_list_params(params, volume_ids, 'VolumeId') if max_results: params['MaxResults'] = max_results if next_token: params['NextToken'] = next_token if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeVolumeStatus', params, VolumeStatusSet, verb='POST') def enable_volume_io(self, volume_id, dry_run=False): """ Enables I/O operations for a volume that had I/O operations disabled because the data on the volume was potentially inconsistent. :type volume_id: str :param volume_id: The ID of the volume. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'VolumeId': volume_id} if dry_run: params['DryRun'] = 'true' return self.get_status('EnableVolumeIO', params, verb='POST') def get_volume_attribute(self, volume_id, attribute='autoEnableIO', dry_run=False): """ Describes an attribute of the volume. :type volume_id: str :param volume_id: The ID of the volume. :type attribute: str :param attribute: The requested attribute. Valid values are: * autoEnableIO :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list of :class:`boto.ec2.volume.VolumeAttribute` :return: The requested Volume attribute """ params = {'VolumeId': volume_id, 'Attribute': attribute} if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeVolumeAttribute', params, VolumeAttribute, verb='POST') def modify_volume_attribute(self, volume_id, attribute, new_value, dry_run=False): """ Changes an attribute of a Volume. :type volume_id: string :param volume_id: The volume id you wish to change :type attribute: string :param attribute: The attribute you wish to change. Valid values are: AutoEnableIO. :type new_value: string :param new_value: The new value of the attribute. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'VolumeId': volume_id} if attribute == 'AutoEnableIO': params['AutoEnableIO.Value'] = new_value if dry_run: params['DryRun'] = 'true' return self.get_status('ModifyVolumeAttribute', params, verb='POST') def create_volume(self, size, zone, snapshot=None, volume_type=None, iops=None, dry_run=False): """ Create a new EBS Volume. :type size: int :param size: The size of the new volume, in GiB :type zone: string or :class:`boto.ec2.zone.Zone` :param zone: The availability zone in which the Volume will be created. :type snapshot: string or :class:`boto.ec2.snapshot.Snapshot` :param snapshot: The snapshot from which the new Volume will be created. :type volume_type: string :param volume_type: The type of the volume. (optional). Valid values are: standard | io1. :type iops: int :param iops: The provisioned IOPS you want to associate with this volume. (optional) :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
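Example (a minimal sketch, not part of the original docs; ``conn`` is assumed to be an existing :class:`EC2Connection` and the instance ID is a placeholder)::

    volume = conn.create_volume(100, 'us-east-1a',
                                volume_type='io1', iops=1000)
    # attach the new volume to a placeholder instance
    conn.attach_volume(volume.id, 'i-12345678', '/dev/sdh')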
""" if isinstance(zone, Zone): zone = zone.name params = {'AvailabilityZone': zone} if size: params['Size'] = size if snapshot: if isinstance(snapshot, Snapshot): snapshot = snapshot.id params['SnapshotId'] = snapshot if volume_type: params['VolumeType'] = volume_type if iops: params['Iops'] = str(iops) if dry_run: params['DryRun'] = 'true' return self.get_object('CreateVolume', params, Volume, verb='POST') def delete_volume(self, volume_id, dry_run=False): """ Delete an EBS volume. :type volume_id: str :param volume_id: The ID of the volume to be delete. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'VolumeId': volume_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteVolume', params, verb='POST') def attach_volume(self, volume_id, instance_id, device, dry_run=False): """ Attach an EBS volume to an EC2 instance. :type volume_id: str :param volume_id: The ID of the EBS volume to be attached. :type instance_id: str :param instance_id: The ID of the EC2 instance to which it will be attached. :type device: str :param device: The device on the instance through which the volume will be exposted (e.g. /dev/sdh) :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'InstanceId': instance_id, 'VolumeId': volume_id, 'Device': device} if dry_run: params['DryRun'] = 'true' return self.get_status('AttachVolume', params, verb='POST') def detach_volume(self, volume_id, instance_id=None, device=None, force=False, dry_run=False): """ Detach an EBS volume from an EC2 instance. :type volume_id: str :param volume_id: The ID of the EBS volume to be attached. :type instance_id: str :param instance_id: The ID of the EC2 instance from which it will be detached. :type device: str :param device: The device on the instance through which the volume is exposted (e.g. /dev/sdh) :type force: bool :param force: Forces detachment if the previous detachment attempt did not occur cleanly. This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance will not have an opportunity to flush file system caches nor file system meta data. If you use this option, you must perform file system check and repair procedures. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'VolumeId': volume_id} if instance_id: params['InstanceId'] = instance_id if device: params['Device'] = device if force: params['Force'] = 'true' if dry_run: params['DryRun'] = 'true' return self.get_status('DetachVolume', params, verb='POST') # Snapshot methods def get_all_snapshots(self, snapshot_ids=None, owner=None, restorable_by=None, filters=None, dry_run=False): """ Get all EBS Snapshots associated with the current credentials. :type snapshot_ids: list :param snapshot_ids: Optional list of snapshot ids. If this list is present, only the Snapshots associated with these snapshot ids will be returned. :type owner: str or list :param owner: If present, only the snapshots owned by the specified user(s) will be returned. Valid values are: * self * amazon * AWS Account ID :type restorable_by: str or list :param restorable_by: If present, only the snapshots that are restorable by the specified account id(s) will be returned. 
:type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list of :class:`boto.ec2.snapshot.Snapshot` :return: The requested Snapshot objects """ params = {} if snapshot_ids: self.build_list_params(params, snapshot_ids, 'SnapshotId') if owner: self.build_list_params(params, owner, 'Owner') if restorable_by: self.build_list_params(params, restorable_by, 'RestorableBy') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeSnapshots', params, [('item', Snapshot)], verb='POST') def create_snapshot(self, volume_id, description=None, dry_run=False): """ Create a snapshot of an existing EBS Volume. :type volume_id: str :param volume_id: The ID of the volume to be snapshotted :type description: str :param description: A description of the snapshot. Limited to 255 characters. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.snapshot.Snapshot` :return: The created Snapshot object """ params = {'VolumeId': volume_id} if description: params['Description'] = description[0:255] if dry_run: params['DryRun'] = 'true' snapshot = self.get_object('CreateSnapshot', params, Snapshot, verb='POST') volume = self.get_all_volumes([volume_id], dry_run=dry_run)[0] volume_name = volume.tags.get('Name') if volume_name: snapshot.add_tag('Name', volume_name) return snapshot def delete_snapshot(self, snapshot_id, dry_run=False): """ Delete the specified snapshot. :type snapshot_id: str :param snapshot_id: The ID of the snapshot to delete. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'SnapshotId': snapshot_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteSnapshot', params, verb='POST') def copy_snapshot(self, source_region, source_snapshot_id, description=None, dry_run=False): """ Copies a point-in-time snapshot of an Amazon Elastic Block Store (Amazon EBS) volume and stores it in Amazon Simple Storage Service (Amazon S3). You can copy the snapshot within the same region or from one region to another. You can use the snapshot to create new Amazon EBS volumes or Amazon Machine Images (AMIs). :type source_region: str :param source_region: The ID of the AWS region that contains the snapshot to be copied (e.g. 'us-east-1', 'us-west-2', etc.). :type source_snapshot_id: str :param source_snapshot_id: The ID of the Amazon EBS snapshot to copy :type description: str :param description: A description of the new Amazon EBS snapshot. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: str :return: The snapshot ID """ params = { 'SourceRegion': source_region, 'SourceSnapshotId': source_snapshot_id, } if description is not None: params['Description'] = description if dry_run: params['DryRun'] = 'true' snapshot = self.get_object('CopySnapshot', params, Snapshot, verb='POST') return snapshot.id def trim_snapshots(self, hourly_backups=8, daily_backups=7, weekly_backups=4, monthly_backups=True): """ Trim excess snapshots, based on when they were taken. More current snapshots are retained, with the number retained decreasing as you move back in time.
If EBS volumes have a 'Name' tag with a value, their snapshots will be assigned the same tag when they are created. The values of the 'Name' tags for snapshots are used by this function to group snapshots taken from the same volume (or from a series of like-named volumes over time) for trimming. For every group of like-named snapshots, this function retains the newest and oldest snapshots, as well as, by default, the first snapshots taken in each of the last eight hours, the first snapshots taken in each of the last seven days, the first snapshots taken in the last 4 weeks (counting midnight Sunday morning as the start of the week), and the first snapshot from the first day of each month forever. :type hourly_backups: int :param hourly_backups: How many recent hourly backups should be saved. :type daily_backups: int :param daily_backups: How many recent daily backups should be saved. :type weekly_backups: int :param weekly_backups: How many recent weekly backups should be saved. :type monthly_backups: int or bool :param monthly_backups: How many monthly backups should be saved. Use True for no limit. """ # This function first builds up an ordered list of target times # that snapshots should be saved for (last 8 hours, last 7 days, etc.). # Then a map of snapshots is constructed, with the keys being # the snapshot / volume names and the values being arrays of # chronologically sorted snapshots. # Finally, for each array in the map, we go through the snapshot # array and the target time array in an interleaved fashion, # deleting snapshots whose start_times don't immediately follow a # target time (we delete a snapshot if there's another snapshot # that was made closer to the preceding target time). now = datetime.utcnow() last_hour = datetime(now.year, now.month, now.day, now.hour) last_midnight = datetime(now.year, now.month, now.day) last_sunday = datetime(now.year, now.month, now.day) - timedelta(days = (now.weekday() + 1) % 7) start_of_month = datetime(now.year, now.month, 1) target_backup_times = [] # there are no snapshots older than 1/1/2007 oldest_snapshot_date = datetime(2007, 1, 1) for hour in range(0, hourly_backups): target_backup_times.append(last_hour - timedelta(hours = hour)) for day in range(0, daily_backups): target_backup_times.append(last_midnight - timedelta(days = day)) for week in range(0, weekly_backups): target_backup_times.append(last_sunday - timedelta(weeks = week)) one_day = timedelta(days = 1) monthly_snapshots_added = 0 while (start_of_month > oldest_snapshot_date and (monthly_backups is True or monthly_snapshots_added < monthly_backups)): # append the start of the month to the list of # snapshot dates to save: target_backup_times.append(start_of_month) monthly_snapshots_added += 1 # there's no timedelta setting for one month, so instead: # decrement the day by one, so we go to the final day of # the previous month... start_of_month -= one_day # ...
and then go to the first day of that previous month: start_of_month = datetime(start_of_month.year, start_of_month.month, 1) temp = [] for t in target_backup_times: if t not in temp: temp.append(t) # sort to make the oldest dates first, and make sure the month start # and last four weeks' start are in the proper order target_backup_times = sorted(temp) # get all the snapshots, sort them by date and time, and # organize them into one array for each volume: all_snapshots = self.get_all_snapshots(owner = 'self') all_snapshots.sort(cmp = lambda x, y: cmp(x.start_time, y.start_time)) snaps_for_each_volume = {} for snap in all_snapshots: # the snapshot name and the volume name are the same. # The snapshot name is set from the volume # name at the time the snapshot is taken volume_name = snap.tags.get('Name') if volume_name: # only examine snapshots that have a volume name snaps_for_volume = snaps_for_each_volume.get(volume_name) if not snaps_for_volume: snaps_for_volume = [] snaps_for_each_volume[volume_name] = snaps_for_volume snaps_for_volume.append(snap) # Do a running comparison of snapshot dates to desired time # periods, keeping the oldest snapshot in each # time period and deleting the rest: for volume_name in snaps_for_each_volume: snaps = snaps_for_each_volume[volume_name] snaps = snaps[:-1] # never delete the newest snapshot time_period_number = 0 snap_found_for_this_time_period = False for snap in snaps: check_this_snap = True while check_this_snap and time_period_number < len(target_backup_times): snap_date = datetime.strptime(snap.start_time, '%Y-%m-%dT%H:%M:%S.000Z') if snap_date < target_backup_times[time_period_number]: # the snap date is before the cutoff date. # Figure out if it's the first snap in this # date range and act accordingly (since both # the date ranges and the snapshots # are sorted chronologically, we know this # snapshot isn't in an earlier date range): if snap_found_for_this_time_period: if not snap.tags.get('preserve_snapshot'): # as long as the snapshot wasn't marked # with the 'preserve_snapshot' tag, delete it: try: self.delete_snapshot(snap.id) boto.log.info('Trimmed snapshot %s (%s)' % (snap.tags['Name'], snap.start_time)) except EC2ResponseError: boto.log.error('Attempt to trim snapshot %s (%s) failed. Possible result of a race condition with trimming on another server?' % (snap.tags['Name'], snap.start_time)) # go on and look at the next snapshot, # leaving the time period alone else: # this was the first snapshot found for this # time period. Leave it alone and look at the # next snapshot: snap_found_for_this_time_period = True check_this_snap = False else: # the snap is after the cutoff date. Check it # against the next cutoff date time_period_number += 1 snap_found_for_this_time_period = False def get_snapshot_attribute(self, snapshot_id, attribute='createVolumePermission', dry_run=False): """ Get information about an attribute of a snapshot. Only one attribute can be specified per call. :type snapshot_id: str :param snapshot_id: The ID of the snapshot. :type attribute: str :param attribute: The requested attribute. Valid values are: * createVolumePermission :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
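Example (illustrative only; assumes ``conn`` is an existing :class:`EC2Connection` and the snapshot ID is hypothetical)::

    attr = conn.get_snapshot_attribute('snap-1a2b3c4d')
    # attr.attrs maps attribute names to values, e.g. the
    # account IDs granted createVolumePermission:
    print attr.attrs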
:rtype: :class:`boto.ec2.snapshotattribute.SnapshotAttribute` :return: The requested Snapshot attribute """ params = {'Attribute': attribute} if snapshot_id: params['SnapshotId'] = snapshot_id if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeSnapshotAttribute', params, SnapshotAttribute, verb='POST') def modify_snapshot_attribute(self, snapshot_id, attribute='createVolumePermission', operation='add', user_ids=None, groups=None, dry_run=False): """ Changes an attribute of a snapshot. :type snapshot_id: string :param snapshot_id: The snapshot id you wish to change :type attribute: string :param attribute: The attribute you wish to change. Valid values are: createVolumePermission :type operation: string :param operation: Either add or remove (this is required for changing snapshot permissions) :type user_ids: list :param user_ids: The Amazon IDs of users to add/remove attributes :type groups: list :param groups: The groups to add/remove attributes. The only valid value at this time is 'all'. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'SnapshotId': snapshot_id, 'Attribute': attribute, 'OperationType': operation} if user_ids: self.build_list_params(params, user_ids, 'UserId') if groups: self.build_list_params(params, groups, 'UserGroup') if dry_run: params['DryRun'] = 'true' return self.get_status('ModifySnapshotAttribute', params, verb='POST') def reset_snapshot_attribute(self, snapshot_id, attribute='createVolumePermission', dry_run=False): """ Resets an attribute of a snapshot to its default value. :type snapshot_id: string :param snapshot_id: ID of the snapshot :type attribute: string :param attribute: The attribute to reset :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: Whether the operation succeeded or not """ params = {'SnapshotId': snapshot_id, 'Attribute': attribute} if dry_run: params['DryRun'] = 'true' return self.get_status('ResetSnapshotAttribute', params, verb='POST') # Keypair methods def get_all_key_pairs(self, keynames=None, filters=None, dry_run=False): """ Get all key pairs associated with your account. :type keynames: list :param keynames: A list of the names of keypairs to retrieve. If not provided, all key pairs will be returned. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.keypair.KeyPair` """ params = {} if keynames: self.build_list_params(params, keynames, 'KeyName') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeKeyPairs', params, [('item', KeyPair)], verb='POST') def get_key_pair(self, keyname, dry_run=False): """ Convenience method to retrieve a specific keypair (KeyPair). :type keyname: string :param keyname: The name of the keypair to retrieve :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
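Example (illustrative only; ``conn`` is assumed to be an existing :class:`EC2Connection` and the key name is hypothetical)::

    kp = conn.get_key_pair('my-key')
    if kp is None:
        kp = conn.create_key_pair('my-key')
        # The private key material is only available at
        # creation time; save it while you can:
        kp.save('/tmp')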
:rtype: :class:`boto.ec2.keypair.KeyPair` :return: The KeyPair specified or None if it is not found """ try: return self.get_all_key_pairs( keynames=[keyname], dry_run=dry_run )[0] except self.ResponseError, e: if e.code == 'InvalidKeyPair.NotFound': return None else: raise def create_key_pair(self, key_name, dry_run=False): """ Create a new key pair for your account. This will create the key pair within the region you are currently connected to. :type key_name: string :param key_name: The name of the new keypair :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.keypair.KeyPair` :return: The newly created :class:`boto.ec2.keypair.KeyPair`. The material attribute of the new KeyPair object will contain the unencrypted PEM encoded RSA private key. """ params = {'KeyName': key_name} if dry_run: params['DryRun'] = 'true' return self.get_object('CreateKeyPair', params, KeyPair, verb='POST') def delete_key_pair(self, key_name, dry_run=False): """ Delete a key pair from your account. :type key_name: string :param key_name: The name of the keypair to delete :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'KeyName': key_name} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteKeyPair', params, verb='POST') def import_key_pair(self, key_name, public_key_material, dry_run=False): """ Imports the public key from an RSA key pair that you created with a third-party tool. Supported formats: * OpenSSH public key format (e.g., the format in ~/.ssh/authorized_keys) * Base64 encoded DER format * SSH public key file format as specified in RFC4716 DSA keys are not supported. Make sure your key generator is set up to create RSA keys. Supported lengths: 1024, 2048, and 4096. :type key_name: string :param key_name: The name of the new keypair :type public_key_material: string :param public_key_material: The public key. The material will be base64 encoded for you automatically before it is sent to AWS. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.keypair.KeyPair` :return: A :class:`boto.ec2.keypair.KeyPair` object representing the newly imported key pair. This object will contain only the key name and the fingerprint. """ public_key_material = base64.b64encode(public_key_material) params = {'KeyName': key_name, 'PublicKeyMaterial': public_key_material} if dry_run: params['DryRun'] = 'true' return self.get_object('ImportKeyPair', params, KeyPair, verb='POST') # SecurityGroup methods def get_all_security_groups(self, groupnames=None, group_ids=None, filters=None, dry_run=False): """ Get all security groups associated with your account in a region. :type groupnames: list :param groupnames: A list of the names of security groups to retrieve. If not provided, all security groups will be returned. :type group_ids: list :param group_ids: A list of IDs of security groups to retrieve for security groups within a VPC. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run.
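Example (illustrative only; ``conn`` is assumed to exist and the VPC ID is hypothetical)::

    # All security groups in the account:
    groups = conn.get_all_security_groups()
    # Only the groups that belong to one VPC:
    vpc_groups = conn.get_all_security_groups(
        filters={'vpc-id': 'vpc-1a2b3c4d'})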
:rtype: list :return: A list of :class:`boto.ec2.securitygroup.SecurityGroup` """ params = {} if groupnames is not None: self.build_list_params(params, groupnames, 'GroupName') if group_ids is not None: self.build_list_params(params, group_ids, 'GroupId') if filters is not None: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeSecurityGroups', params, [('item', SecurityGroup)], verb='POST') def create_security_group(self, name, description, vpc_id=None, dry_run=False): """ Create a new security group for your account. This will create the security group within the region you are currently connected to. :type name: string :param name: The name of the new security group :type description: string :param description: The description of the new security group :type vpc_id: string :param vpc_id: The ID of the VPC to create the security group in, if any. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.securitygroup.SecurityGroup` :return: The newly created :class:`boto.ec2.securitygroup.SecurityGroup`. """ params = {'GroupName': name, 'GroupDescription': description} if vpc_id is not None: params['VpcId'] = vpc_id if dry_run: params['DryRun'] = 'true' group = self.get_object('CreateSecurityGroup', params, SecurityGroup, verb='POST') group.name = name group.description = description if vpc_id is not None: group.vpc_id = vpc_id return group def delete_security_group(self, name=None, group_id=None, dry_run=False): """ Delete a security group from your account. :type name: string :param name: The name of the security group to delete. :type group_id: string :param group_id: The ID of the security group to delete within a VPC. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful. """ params = {} if name is not None: params['GroupName'] = name elif group_id is not None: params['GroupId'] = group_id if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteSecurityGroup', params, verb='POST') def authorize_security_group_deprecated(self, group_name, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, dry_run=False): """ NOTE: This method uses the old-style request parameters that did not allow a port to be specified when authorizing a group. :type group_name: string :param group_name: The name of the security group you are adding the rule to. :type src_security_group_name: string :param src_security_group_name: The name of the security group you are granting access to. :type src_security_group_owner_id: string :param src_security_group_owner_id: The ID of the owner of the security group you are granting access to. :type ip_protocol: string :param ip_protocol: Either tcp | udp | icmp :type from_port: int :param from_port: The beginning port number you are enabling :type to_port: int :param to_port: The ending port number you are enabling :type cidr_ip: string :param cidr_ip: The CIDR block you are providing access to. See http://goo.gl/Yj5QC :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful.
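Example (illustrative only; the group names and owner ID are assumptions)::

    # Old-style grant: allow all traffic from instances in the
    # 'web' group to instances in the 'db' group.
    conn.authorize_security_group_deprecated(
        'db', src_security_group_name='web',
        src_security_group_owner_id='111122223333')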
""" params = {'GroupName':group_name} if src_security_group_name: params['SourceSecurityGroupName'] = src_security_group_name if src_security_group_owner_id: params['SourceSecurityGroupOwnerId'] = src_security_group_owner_id if ip_protocol: params['IpProtocol'] = ip_protocol if from_port: params['FromPort'] = from_port if to_port: params['ToPort'] = to_port if cidr_ip: params['CidrIp'] = cidr_ip if dry_run: params['DryRun'] = 'true' return self.get_status('AuthorizeSecurityGroupIngress', params) def authorize_security_group(self, group_name=None, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, group_id=None, src_security_group_group_id=None, dry_run=False): """ Add a new rule to an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are authorizing another group or you are authorizing some ip-based rule. :type group_name: string :param group_name: The name of the security group you are adding the rule to. :type src_security_group_name: string :param src_security_group_name: The name of the security group you are granting access to. :type src_security_group_owner_id: string :param src_security_group_owner_id: The ID of the owner of the security group you are granting access to. :type ip_protocol: string :param ip_protocol: Either tcp | udp | icmp :type from_port: int :param from_port: The beginning port number you are enabling :type to_port: int :param to_port: The ending port number you are enabling :type cidr_ip: string or list of strings :param cidr_ip: The CIDR block you are providing access to. See http://goo.gl/Yj5QC :type group_id: string :param group_id: ID of the EC2 or VPC security group to modify. This is required for VPC security groups and can be used instead of group_name for EC2 security groups. :type src_security_group_group_id: string :param src_security_group_group_id: The ID of the security group you are granting access to. Can be used instead of src_security_group_name :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful. 
""" if src_security_group_name: if from_port is None and to_port is None and ip_protocol is None: return self.authorize_security_group_deprecated( group_name, src_security_group_name, src_security_group_owner_id) params = {} if group_name: params['GroupName'] = group_name if group_id: params['GroupId'] = group_id if src_security_group_name: param_name = 'IpPermissions.1.Groups.1.GroupName' params[param_name] = src_security_group_name if src_security_group_owner_id: param_name = 'IpPermissions.1.Groups.1.UserId' params[param_name] = src_security_group_owner_id if src_security_group_group_id: param_name = 'IpPermissions.1.Groups.1.GroupId' params[param_name] = src_security_group_group_id if ip_protocol: params['IpPermissions.1.IpProtocol'] = ip_protocol if from_port is not None: params['IpPermissions.1.FromPort'] = from_port if to_port is not None: params['IpPermissions.1.ToPort'] = to_port if cidr_ip: if not isinstance(cidr_ip, list): cidr_ip = [cidr_ip] for i, single_cidr_ip in enumerate(cidr_ip): params['IpPermissions.1.IpRanges.%d.CidrIp' % (i+1)] = \ single_cidr_ip if dry_run: params['DryRun'] = 'true' return self.get_status('AuthorizeSecurityGroupIngress', params, verb='POST') def authorize_security_group_egress(self, group_id, ip_protocol, from_port=None, to_port=None, src_group_id=None, cidr_ip=None, dry_run=False): """ The action adds one or more egress rules to a VPC security group. Specifically, this action permits instances in a security group to send traffic to one or more destination CIDR IP address ranges, or to one or more destination security groups in the same VPC. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = { 'GroupId': group_id, 'IpPermissions.1.IpProtocol': ip_protocol } if from_port is not None: params['IpPermissions.1.FromPort'] = from_port if to_port is not None: params['IpPermissions.1.ToPort'] = to_port if src_group_id is not None: params['IpPermissions.1.Groups.1.GroupId'] = src_group_id if cidr_ip is not None: params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip if dry_run: params['DryRun'] = 'true' return self.get_status('AuthorizeSecurityGroupEgress', params, verb='POST') def revoke_security_group_deprecated(self, group_name, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, dry_run=False): """ NOTE: This method uses the old-style request parameters that did not allow a port to be specified when authorizing a group. Remove an existing rule from an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are revoking another group or you are revoking some ip-based rule. :type group_name: string :param group_name: The name of the security group you are removing the rule from. :type src_security_group_name: string :param src_security_group_name: The name of the security group you are revoking access to. :type src_security_group_owner_id: string :param src_security_group_owner_id: The ID of the owner of the security group you are revoking access to. :type ip_protocol: string :param ip_protocol: Either tcp | udp | icmp :type from_port: int :param from_port: The beginning port number you are disabling :type to_port: int :param to_port: The ending port number you are disabling :type to_port: string :param to_port: The CIDR block you are revoking access to. 
http://goo.gl/Yj5QC :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful. """ params = {'GroupName':group_name} if src_security_group_name: params['SourceSecurityGroupName'] = src_security_group_name if src_security_group_owner_id: params['SourceSecurityGroupOwnerId'] = src_security_group_owner_id if ip_protocol: params['IpProtocol'] = ip_protocol if from_port: params['FromPort'] = from_port if to_port: params['ToPort'] = to_port if cidr_ip: params['CidrIp'] = cidr_ip if dry_run: params['DryRun'] = 'true' return self.get_status('RevokeSecurityGroupIngress', params) def revoke_security_group(self, group_name=None, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, group_id=None, src_security_group_group_id=None, dry_run=False): """ Remove an existing rule from an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are revoking another group or you are revoking some ip-based rule. :type group_name: string :param group_name: The name of the security group you are removing the rule from. :type src_security_group_name: string :param src_security_group_name: The name of the security group you are revoking access to. :type src_security_group_owner_id: string :param src_security_group_owner_id: The ID of the owner of the security group you are revoking access to. :type ip_protocol: string :param ip_protocol: Either tcp | udp | icmp :type from_port: int :param from_port: The beginning port number you are disabling :type to_port: int :param to_port: The ending port number you are disabling :type cidr_ip: string :param cidr_ip: The CIDR block you are revoking access to. See http://goo.gl/Yj5QC :type group_id: string :param group_id: ID of the EC2 or VPC security group to modify. This is required for VPC security groups and can be used instead of group_name for EC2 security groups. :type src_security_group_group_id: string :param src_security_group_group_id: The ID of the security group for which you are revoking access. Can be used instead of src_security_group_name :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful. 
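Example (illustrative only; mirrors the authorize example above with the same assumed names)::

    conn.revoke_security_group(
        group_name='web', ip_protocol='tcp',
        from_port=22, to_port=22,
        cidr_ip='203.0.113.0/24')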
""" if src_security_group_name: if from_port is None and to_port is None and ip_protocol is None: return self.revoke_security_group_deprecated( group_name, src_security_group_name, src_security_group_owner_id) params = {} if group_name is not None: params['GroupName'] = group_name if group_id is not None: params['GroupId'] = group_id if src_security_group_name: param_name = 'IpPermissions.1.Groups.1.GroupName' params[param_name] = src_security_group_name if src_security_group_group_id: param_name = 'IpPermissions.1.Groups.1.GroupId' params[param_name] = src_security_group_group_id if src_security_group_owner_id: param_name = 'IpPermissions.1.Groups.1.UserId' params[param_name] = src_security_group_owner_id if ip_protocol: params['IpPermissions.1.IpProtocol'] = ip_protocol if from_port is not None: params['IpPermissions.1.FromPort'] = from_port if to_port is not None: params['IpPermissions.1.ToPort'] = to_port if cidr_ip: params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip if dry_run: params['DryRun'] = 'true' return self.get_status('RevokeSecurityGroupIngress', params, verb='POST') def revoke_security_group_egress(self, group_id, ip_protocol, from_port=None, to_port=None, src_group_id=None, cidr_ip=None, dry_run=False): """ Remove an existing egress rule from an existing VPC security group. You need to pass in an ip_protocol, from_port and to_port range only if the protocol you are using is port-based. You also need to pass in either a src_group_id or cidr_ip. :type group_name: string :param group_id: The name of the security group you are removing the rule from. :type ip_protocol: string :param ip_protocol: Either tcp | udp | icmp | -1 :type from_port: int :param from_port: The beginning port number you are disabling :type to_port: int :param to_port: The ending port number you are disabling :type src_group_id: src_group_id :param src_group_id: The source security group you are revoking access to. :type cidr_ip: string :param cidr_ip: The CIDR block you are revoking access to. See http://goo.gl/Yj5QC :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful. """ params = {} if group_id: params['GroupId'] = group_id if ip_protocol: params['IpPermissions.1.IpProtocol'] = ip_protocol if from_port is not None: params['IpPermissions.1.FromPort'] = from_port if to_port is not None: params['IpPermissions.1.ToPort'] = to_port if src_group_id is not None: params['IpPermissions.1.Groups.1.GroupId'] = src_group_id if cidr_ip: params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip if dry_run: params['DryRun'] = 'true' return self.get_status('RevokeSecurityGroupEgress', params, verb='POST') # # Regions # def get_all_regions(self, region_names=None, filters=None, dry_run=False): """ Get all available regions for the EC2 service. :type region_names: list of str :param region_names: Names of regions to limit output :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
:rtype: list :return: A list of :class:`boto.ec2.regioninfo.RegionInfo` """ params = {} if region_names: self.build_list_params(params, region_names, 'RegionName') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' regions = self.get_list('DescribeRegions', params, [('item', RegionInfo)], verb='POST') for region in regions: region.connection_cls = EC2Connection return regions # # Reservation methods # def get_all_reserved_instances_offerings(self, reserved_instances_offering_ids=None, instance_type=None, availability_zone=None, product_description=None, filters=None, instance_tenancy=None, offering_type=None, include_marketplace=None, min_duration=None, max_duration=None, max_instance_count=None, next_token=None, max_results=None, dry_run=False): """ Describes Reserved Instance offerings that are available for purchase. :type reserved_instances_offering_ids: list :param reserved_instances_offering_ids: One or more Reserved Instances offering IDs. :type instance_type: str :param instance_type: Displays Reserved Instances of the specified instance type. :type availability_zone: str :param availability_zone: Displays Reserved Instances within the specified Availability Zone. :type product_description: str :param product_description: Displays Reserved Instances with the specified product description. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type instance_tenancy: string :param instance_tenancy: The tenancy of the Reserved Instance offering. A Reserved Instance with tenancy of dedicated will run on single-tenant hardware and can only be launched within a VPC. :type offering_type: string :param offering_type: The Reserved Instance offering type. Valid Values: `"Heavy Utilization" | "Medium Utilization" | "Light Utilization"` :type include_marketplace: bool :param include_marketplace: Include Marketplace offerings in the response. :type min_duration: int :param min_duration: Minimum duration (in seconds) to filter when searching for offerings. :type max_duration: int :param max_duration: Maximum duration (in seconds) to filter when searching for offerings. :type max_instance_count: int :param max_instance_count: Maximum number of instances to filter when searching for offerings. :type next_token: string :param next_token: Token to use when requesting the next paginated set of offerings. :type max_results: int :param max_results: Maximum number of offerings to return per call. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstancesOffering`.
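Example (illustrative only; the search criteria are assumptions)::

    offerings = conn.get_all_reserved_instances_offerings(
        instance_type='m1.small',
        availability_zone='us-east-1a',
        offering_type='Heavy Utilization',
        max_results=10)
    for offering in offerings:
        print offering.id, offering.duration, offering.fixed_price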
""" params = {} if reserved_instances_offering_ids is not None: self.build_list_params(params, reserved_instances_offering_ids, 'ReservedInstancesOfferingId') if instance_type: params['InstanceType'] = instance_type if availability_zone: params['AvailabilityZone'] = availability_zone if product_description: params['ProductDescription'] = product_description if filters: self.build_filter_params(params, filters) if instance_tenancy is not None: params['InstanceTenancy'] = instance_tenancy if offering_type is not None: params['OfferingType'] = offering_type if include_marketplace is not None: if include_marketplace: params['IncludeMarketplace'] = 'true' else: params['IncludeMarketplace'] = 'false' if min_duration is not None: params['MinDuration'] = str(min_duration) if max_duration is not None: params['MaxDuration'] = str(max_duration) if max_instance_count is not None: params['MaxInstanceCount'] = str(max_instance_count) if next_token is not None: params['NextToken'] = next_token if max_results is not None: params['MaxResults'] = str(max_results) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeReservedInstancesOfferings', params, [('item', ReservedInstancesOffering)], verb='POST') def get_all_reserved_instances(self, reserved_instances_id=None, filters=None, dry_run=False): """ Describes one or more of the Reserved Instances that you purchased. :type reserved_instance_ids: list :param reserved_instance_ids: A list of the reserved instance ids that will be returned. If not provided, all reserved instances will be returned. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstance` """ params = {} if reserved_instances_id: self.build_list_params(params, reserved_instances_id, 'ReservedInstancesId') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeReservedInstances', params, [('item', ReservedInstance)], verb='POST') def purchase_reserved_instance_offering(self, reserved_instances_offering_id, instance_count=1, limit_price=None, dry_run=False): """ Purchase a Reserved Instance for use with your account. ** CAUTION ** This request can result in large amounts of money being charged to your AWS account. Use with caution! :type reserved_instances_offering_id: string :param reserved_instances_offering_id: The offering ID of the Reserved Instance to purchase :type instance_count: int :param instance_count: The number of Reserved Instances to purchase. Default value is 1. :type limit_price: tuple :param instance_count: Limit the price on the total order. Must be a tuple of (amount, currency_code), for example: (100.0, 'USD'). :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
:rtype: :class:`boto.ec2.reservedinstance.ReservedInstance` :return: The newly created Reserved Instance """ params = { 'ReservedInstancesOfferingId': reserved_instances_offering_id, 'InstanceCount': instance_count} if limit_price is not None: params['LimitPrice.Amount'] = str(limit_price[0]) params['LimitPrice.CurrencyCode'] = str(limit_price[1]) if dry_run: params['DryRun'] = 'true' return self.get_object('PurchaseReservedInstancesOffering', params, ReservedInstance, verb='POST') def create_reserved_instances_listing(self, reserved_instances_id, instance_count, price_schedules, client_token, dry_run=False): """Creates a new listing for Reserved Instances. Creates a new listing for Amazon EC2 Reserved Instances that will be sold in the Reserved Instance Marketplace. You can submit one Reserved Instance listing at a time. The Reserved Instance Marketplace matches sellers who want to resell Reserved Instance capacity that they no longer need with buyers who want to purchase additional capacity. Reserved Instances bought and sold through the Reserved Instance Marketplace work like any other Reserved Instances. If you want to sell your Reserved Instances, you must first register as a Seller in the Reserved Instance Marketplace. After completing the registration process, you can create a Reserved Instance Marketplace listing of some or all of your Reserved Instances, and specify the upfront price you want to receive for them. Your Reserved Instance listings then become available for purchase. :type reserved_instances_id: string :param reserved_instances_id: The ID of the Reserved Instance that will be listed. :type instance_count: int :param instance_count: The number of instances that are a part of a Reserved Instance account that will be listed in the Reserved Instance Marketplace. This number should be less than or equal to the instance count associated with the Reserved Instance ID specified in this call. :type price_schedules: List of tuples :param price_schedules: A list specifying the price of the Reserved Instance for each month remaining in the Reserved Instance term. Each tuple contains two elements, the price and the term. For example, for an instance that has 11 months remaining in its term, we can have a price schedule with an upfront price of $2.50. At 8 months remaining we can drop the price down to $2.00. This would be expressed as:: price_schedules=[('2.50', 11), ('2.00', 8)] :type client_token: string :param client_token: Unique, case-sensitive identifier you provide to ensure idempotency of the request. Maximum 64 ASCII characters. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstanceListing` """ params = { 'ReservedInstancesId': reserved_instances_id, 'InstanceCount': str(instance_count), 'ClientToken': client_token, } for i, schedule in enumerate(price_schedules): price, term = schedule params['PriceSchedules.%s.Price' % i] = str(price) params['PriceSchedules.%s.Term' % i] = str(term) if dry_run: params['DryRun'] = 'true' return self.get_list('CreateReservedInstancesListing', params, [('item', ReservedInstanceListing)], verb='POST') def cancel_reserved_instances_listing(self, reserved_instances_listing_ids=None, dry_run=False): """Cancels the specified Reserved Instance listing. :type reserved_instances_listing_ids: List of strings :param reserved_instances_listing_ids: The IDs of the Reserved Instance listings to be cancelled.
:type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstanceListing` """ params = {} if reserved_instances_listing_ids is not None: self.build_list_params(params, reserved_instances_listing_ids, 'ReservedInstancesListingId') if dry_run: params['DryRun'] = 'true' return self.get_list('CancelReservedInstancesListing', params, [('item', ReservedInstanceListing)], verb='POST') def build_configurations_param_list(self, params, target_configurations): for offset, tc in enumerate(target_configurations): prefix = 'ReservedInstancesConfigurationSetItemType.%d.' % offset if tc.availability_zone is not None: params[prefix + 'AvailabilityZone'] = tc.availability_zone if tc.platform is not None: params[prefix + 'Platform'] = tc.platform if tc.instance_count is not None: params[prefix + 'InstanceCount'] = tc.instance_count def modify_reserved_instances(self, client_token, reserved_instance_ids, target_configurations): """ Modifies the specified Reserved Instances. :type client_token: string :param client_token: A unique, case-sensitive, token you provide to ensure idempotency of your modification request. :type reserved_instance_ids: List of strings :param reserved_instance_ids: The IDs of the Reserved Instances to modify. :type target_configurations: List of :class:`boto.ec2.reservedinstance.ReservedInstancesConfiguration` :param target_configurations: The configuration settings for the modified Reserved Instances. :rtype: string :return: The unique ID for the submitted modification request. """ params = { 'ClientToken': client_token, } if reserved_instance_ids is not None: self.build_list_params(params, reserved_instance_ids, 'ReservedInstancesId') if target_configurations is not None: self.build_configurations_param_list(params, target_configurations) mrir = self.get_object( 'ModifyReservedInstances', params, ModifyReservedInstancesResult, verb='POST' ) return mrir.modification_id def describe_reserved_instances_modifications(self, reserved_instances_modification_ids=None, next_token=None, filters=None): """ Describes the modifications made to Reserved Instances in your account. :type reserved_instances_modification_ids: list :param reserved_instances_modification_ids: An optional list of Reserved Instances modification IDs to describe. :type next_token: str :param next_token: A string specifying the next paginated set of results to return. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :rtype: list :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstancesModification` """ params = {} if reserved_instances_modification_ids: self.build_list_params(params, reserved_instances_modification_ids, 'ReservedInstancesModificationId') if next_token: params['NextToken'] = next_token if filters: self.build_filter_params(params, filters) return self.get_list('DescribeReservedInstancesModifications', params, [('item', ReservedInstancesModification)], verb='POST') # # Monitoring # def monitor_instances(self, instance_ids, dry_run=False): """ Enable CloudWatch monitoring for the supplied instances.
:type instance_ids: list of strings :param instance_ids: The instance ids :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo` """ params = {} self.build_list_params(params, instance_ids, 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_list('MonitorInstances', params, [('item', InstanceInfo)], verb='POST') def monitor_instance(self, instance_id, dry_run=False): """ Deprecated version, maintained for backward compatibility. Enable CloudWatch monitoring for the supplied instance. :type instance_id: string :param instance_id: The instance id :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo` """ return self.monitor_instances([instance_id], dry_run=dry_run) def unmonitor_instances(self, instance_ids, dry_run=False): """ Disable CloudWatch monitoring for the supplied instances. :type instance_ids: list of strings :param instance_ids: The instance ids :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo` """ params = {} self.build_list_params(params, instance_ids, 'InstanceId') if dry_run: params['DryRun'] = 'true' return self.get_list('UnmonitorInstances', params, [('item', InstanceInfo)], verb='POST') def unmonitor_instance(self, instance_id, dry_run=False): """ Deprecated version, maintained for backward compatibility. Disable CloudWatch monitoring for the supplied instance. :type instance_id: string :param instance_id: The instance id :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo` """ return self.unmonitor_instances([instance_id], dry_run=dry_run) # # Bundle Windows Instances # def bundle_instance(self, instance_id, s3_bucket, s3_prefix, s3_upload_policy, dry_run=False): """ Bundle a Windows instance. :type instance_id: string :param instance_id: The instance id :type s3_bucket: string :param s3_bucket: The bucket in which the AMI should be stored. :type s3_prefix: string :param s3_prefix: The beginning of the file name for the AMI. :type s3_upload_policy: string :param s3_upload_policy: Base64 encoded policy that specifies condition and permissions for Amazon EC2 to upload the user's image into Amazon S3. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'InstanceId': instance_id, 'Storage.S3.Bucket': s3_bucket, 'Storage.S3.Prefix': s3_prefix, 'Storage.S3.UploadPolicy': s3_upload_policy} s3auth = boto.auth.get_auth_handler(None, boto.config, self.provider, ['s3']) params['Storage.S3.AWSAccessKeyId'] = self.aws_access_key_id signature = s3auth.sign_string(s3_upload_policy) params['Storage.S3.UploadPolicySignature'] = signature if dry_run: params['DryRun'] = 'true' return self.get_object('BundleInstance', params, BundleInstanceTask, verb='POST') def get_all_bundle_tasks(self, bundle_ids=None, filters=None, dry_run=False): """ Retrieve current bundling tasks. If no bundle id is specified, all tasks are retrieved. :type bundle_ids: list :param bundle_ids: A list of strings containing identifiers for previously created bundling tasks. :type filters: dict :param filters: Optional filters that can be used to limit the results returned.
Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {} if bundle_ids: self.build_list_params(params, bundle_ids, 'BundleId') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeBundleTasks', params, [('item', BundleInstanceTask)], verb='POST') def cancel_bundle_task(self, bundle_id, dry_run=False): """ Cancel a previously submitted bundle task :type bundle_id: string :param bundle_id: The identifier of the bundle task to cancel. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'BundleId': bundle_id} if dry_run: params['DryRun'] = 'true' return self.get_object('CancelBundleTask', params, BundleInstanceTask, verb='POST') def get_password_data(self, instance_id, dry_run=False): """ Get encrypted administrator password for a Windows instance. :type instance_id: string :param instance_id: The identifier of the instance to retrieve the password for. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'InstanceId': instance_id} if dry_run: params['DryRun'] = 'true' rs = self.get_object('GetPasswordData', params, ResultSet, verb='POST') return rs.passwordData # # Cluster Placement Groups # def get_all_placement_groups(self, groupnames=None, filters=None, dry_run=False): """ Get all placement groups associated with your account in a region. :type groupnames: list :param groupnames: A list of the names of placement groups to retrieve. If not provided, all placement groups will be returned. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.placementgroup.PlacementGroup` """ params = {} if groupnames: self.build_list_params(params, groupnames, 'GroupName') if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribePlacementGroups', params, [('item', PlacementGroup)], verb='POST') def create_placement_group(self, name, strategy='cluster', dry_run=False): """ Create a new placement group for your account. This will create the placement group within the region you are currently connected to. :type name: string :param name: The name of the new placement group :type strategy: string :param strategy: The placement strategy of the new placement group. Currently, the only acceptable value is "cluster". :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'GroupName':name, 'Strategy':strategy} if dry_run: params['DryRun'] = 'true' group = self.get_status('CreatePlacementGroup', params, verb='POST') return group def delete_placement_group(self, name, dry_run=False): """ Delete a placement group from your account. 
:type name: string :param name: The name of the placement group to delete :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'GroupName':name} if dry_run: params['DryRun'] = 'true' return self.get_status('DeletePlacementGroup', params, verb='POST') # Tag methods def build_tag_param_list(self, params, tags): keys = sorted(tags.keys()) i = 1 for key in keys: value = tags[key] params['Tag.%d.Key'%i] = key if value is not None: params['Tag.%d.Value'%i] = value i += 1 def get_all_tags(self, filters=None, dry_run=False, max_results=None): """ Retrieve all the metadata tags associated with your account. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :type max_results: int :param max_results: The maximum number of paginated instance items per response. :rtype: list :return: A list of :class:`boto.ec2.tag.Tag` objects """ params = {} if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' if max_results is not None: params['MaxResults'] = max_results return self.get_list('DescribeTags', params, [('item', Tag)], verb='POST') def create_tags(self, resource_ids, tags, dry_run=False): """ Create new metadata tags for the specified resource ids. :type resource_ids: list :param resource_ids: List of strings :type tags: dict :param tags: A dictionary containing the name/value pairs. If you want to create only a tag name, the value for that tag should be the empty string (e.g. ''). :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {} self.build_list_params(params, resource_ids, 'ResourceId') self.build_tag_param_list(params, tags) if dry_run: params['DryRun'] = 'true' return self.get_status('CreateTags', params, verb='POST') def delete_tags(self, resource_ids, tags, dry_run=False): """ Delete metadata tags for the specified resource ids. :type resource_ids: list :param resource_ids: List of strings :type tags: dict or list :param tags: Either a dictionary containing name/value pairs or a list containing just tag names. If you pass in a dictionary, the values must match the actual tag values or the tag will not be deleted. If you pass in a value of None for the tag value, all tags with that name will be deleted. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ if isinstance(tags, list): tags = {}.fromkeys(tags, None) params = {} self.build_list_params(params, resource_ids, 'ResourceId') self.build_tag_param_list(params, tags) if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteTags', params, verb='POST') # Network Interface methods def get_all_network_interfaces(self, filters=None, dry_run=False): """ Retrieve all of the Elastic Network Interfaces (ENIs) associated with your account. :type filters: dict :param filters: Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
:type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.ec2.networkinterface.NetworkInterface` """ params = {} if filters: self.build_filter_params(params, filters) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeNetworkInterfaces', params, [('item', NetworkInterface)], verb='POST') def create_network_interface(self, subnet_id, private_ip_address=None, description=None, groups=None, dry_run=False): """ Creates a network interface in the specified subnet. :type subnet_id: str :param subnet_id: The ID of the subnet to associate with the network interface. :type private_ip_address: str :param private_ip_address: The private IP address of the network interface. If not supplied, one will be chosen for you. :type description: str :param description: The description of the network interface. :type groups: list :param groups: Lists the groups for use by the network interface. This can be either a list of group ID's or a list of :class:`boto.ec2.securitygroup.SecurityGroup` objects. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: :class:`boto.ec2.networkinterface.NetworkInterface` :return: The newly created network interface. """ params = {'SubnetId': subnet_id} if private_ip_address: params['PrivateIpAddress'] = private_ip_address if description: params['Description'] = description if groups: ids = [] for group in groups: if isinstance(group, SecurityGroup): ids.append(group.id) else: ids.append(group) self.build_list_params(params, ids, 'SecurityGroupId') if dry_run: params['DryRun'] = 'true' return self.get_object('CreateNetworkInterface', params, NetworkInterface, verb='POST') def attach_network_interface(self, network_interface_id, instance_id, device_index, dry_run=False): """ Attaches a network interface to an instance. :type network_interface_id: str :param network_interface_id: The ID of the network interface to attach. :type instance_id: str :param instance_id: The ID of the instance that will be attached to the network interface. :type device_index: int :param device_index: The index of the device for the network interface attachment on the instance. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'NetworkInterfaceId': network_interface_id, 'InstanceId': instance_id, 'DeviceIndex': device_index} if dry_run: params['DryRun'] = 'true' return self.get_status('AttachNetworkInterface', params, verb='POST') def detach_network_interface(self, attachment_id, force=False, dry_run=False): """ Detaches a network interface from an instance. :type attachment_id: str :param attachment_id: The ID of the attachment. :type force: bool :param force: Set to true to force a detachment. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'AttachmentId': attachment_id} if force: params['Force'] = 'true' if dry_run: params['DryRun'] = 'true' return self.get_status('DetachNetworkInterface', params, verb='POST') def delete_network_interface(self, network_interface_id, dry_run=False): """ Delete the specified network interface. :type network_interface_id: str :param network_interface_id: The ID of the network interface to delete. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
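A sketch of the usual detach-then-delete sequence (``conn`` and both IDs are illustrative, not part of this method)::

    conn.detach_network_interface('eni-attach-12345678', force=True)
    conn.delete_network_interface('eni-12345678')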
""" params = {'NetworkInterfaceId': network_interface_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteNetworkInterface', params, verb='POST') def get_all_vmtypes(self): """ Get all vmtypes available on this cloud (eucalyptus specific) :rtype: list of :class:`boto.ec2.vmtype.VmType` :return: The requested VmType objects """ params = {} return self.get_list('DescribeVmTypes', params, [('euca:item', VmType)], verb='POST') def copy_image(self, source_region, source_image_id, name=None, description=None, client_token=None, dry_run=False): """ :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = { 'SourceRegion': source_region, 'SourceImageId': source_image_id, } if name is not None: params['Name'] = name if description is not None: params['Description'] = description if client_token is not None: params['ClientToken'] = client_token if dry_run: params['DryRun'] = 'true' return self.get_object('CopyImage', params, CopyImage, verb='POST') def describe_account_attributes(self, attribute_names=None, dry_run=False): """ :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {} if attribute_names is not None: self.build_list_params(params, attribute_names, 'AttributeName') if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeAccountAttributes', params, [('item', AccountAttribute)], verb='POST') def describe_vpc_attribute(self, vpc_id, attribute=None, dry_run=False): """ :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = { 'VpcId': vpc_id } if attribute is not None: params['Attribute'] = attribute if dry_run: params['DryRun'] = 'true' return self.get_object('DescribeVpcAttribute', params, VPCAttribute, verb='POST') def modify_vpc_attribute(self, vpc_id, enable_dns_support=None, enable_dns_hostnames=None, dry_run=False): """ :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = { 'VpcId': vpc_id } if enable_dns_support is not None: params['EnableDnsSupport.Value'] = ( 'true' if enable_dns_support else 'false') if enable_dns_hostnames is not None: params['EnableDnsHostnames.Value'] = ( 'true' if enable_dns_hostnames else 'false') if dry_run: params['DryRun'] = 'true' return self.get_status('ModifyVpcAttribute', params, verb='POST') boto-2.20.1/boto/ec2/ec2object.py000066400000000000000000000101461225267101000163630ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Object """ from boto.ec2.tag import TagSet class EC2Object(object): def __init__(self, connection=None): self.connection = connection if self.connection and hasattr(self.connection, 'region'): self.region = connection.region else: self.region = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): setattr(self, name, value) class TaggedEC2Object(EC2Object): """ Any EC2 resource that can be tagged should be represented by a Python object that subclasses this class. This class has the mechanism in place to handle the tagSet element in the Describe* responses. If tags are found, it will create a TagSet object and allow it to parse and collect the tags into a dict that is stored in the "tags" attribute of the object. """ def __init__(self, connection=None): EC2Object.__init__(self, connection) self.tags = TagSet() def startElement(self, name, attrs, connection): if name == 'tagSet': return self.tags else: return None def add_tag(self, key, value='', dry_run=False): """ Add a tag to this object. Tag's are stored by AWS and can be used to organize and filter resources. Adding a tag involves a round-trip to the EC2 service. :type key: str :param key: The key or name of the tag being stored. :type value: str :param value: An optional value that can be stored with the tag. If you want only the tag name and no value, the value should be the empty string. """ status = self.connection.create_tags( [self.id], {key : value}, dry_run=dry_run ) if self.tags is None: self.tags = TagSet() self.tags[key] = value def remove_tag(self, key, value=None, dry_run=False): """ Remove a tag from this object. Removing a tag involves a round-trip to the EC2 service. :type key: str :param key: The key or name of the tag being stored. :type value: str :param value: An optional value that can be stored with the tag. If a value is provided, it must match the value currently stored in EC2. If not, the tag will not be removed. If a value of None is provided, all tags with the specified name will be deleted. NOTE: There is an important distinction between a value of '' and a value of None. """ if value: tags = {key : value} else: tags = [key] status = self.connection.delete_tags( [self.id], tags, dry_run=dry_run ) if key in self.tags: del self.tags[key] boto-2.20.1/boto/ec2/elb/000077500000000000000000000000001225267101000147115ustar00rootroot00000000000000boto-2.20.1/boto/ec2/elb/__init__.py000066400000000000000000000740271225267101000170340ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # """ This module provides an interface to the Elastic Compute Cloud (EC2) load balancing service from AWS. """ from boto.connection import AWSQueryConnection from boto.ec2.instanceinfo import InstanceInfo from boto.ec2.elb.loadbalancer import LoadBalancer, LoadBalancerZones from boto.ec2.elb.instancestate import InstanceState from boto.ec2.elb.healthcheck import HealthCheck from boto.ec2.elb.listelement import ListElement from boto.regioninfo import RegionInfo import boto RegionData = { 'us-east-1': 'elasticloadbalancing.us-east-1.amazonaws.com', 'us-gov-west-1': 'elasticloadbalancing.us-gov-west-1.amazonaws.com', 'us-west-1': 'elasticloadbalancing.us-west-1.amazonaws.com', 'us-west-2': 'elasticloadbalancing.us-west-2.amazonaws.com', 'sa-east-1': 'elasticloadbalancing.sa-east-1.amazonaws.com', 'eu-west-1': 'elasticloadbalancing.eu-west-1.amazonaws.com', 'ap-northeast-1': 'elasticloadbalancing.ap-northeast-1.amazonaws.com', 'ap-southeast-1': 'elasticloadbalancing.ap-southeast-1.amazonaws.com', 'ap-southeast-2': 'elasticloadbalancing.ap-southeast-2.amazonaws.com', } def regions(): """ Get all available regions for the ELB service. :rtype: list :return: A list of :class:`boto.RegionInfo` instances """ regions = [] for region_name in RegionData: region = RegionInfo(name=region_name, endpoint=RegionData[region_name], connection_cls=ELBConnection) regions.append(region) return regions def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.ec2.elb.ELBConnection`. :param str region_name: The name of the region to connect to. :rtype: :class:`boto.ec2.ELBConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None class ELBConnection(AWSQueryConnection): APIVersion = boto.config.get('Boto', 'elb_version', '2012-06-01') DefaultRegionName = boto.config.get('Boto', 'elb_region_name', 'us-east-1') DefaultRegionEndpoint = boto.config.get('Boto', 'elb_region_endpoint', 'elasticloadbalancing.us-east-1.amazonaws.com') def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): """ Init method to create a new connection to EC2 Load Balancing Service. .. note:: The region argument is overridden by the region specified in the boto configuration file. 
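Most callers obtain a connection through the module-level helper rather than instantiating this class directly (the region name is illustrative)::

    import boto.ec2.elb
    elb = boto.ec2.elb.connect_to_region('us-west-2')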
""" if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['ec2'] def build_list_params(self, params, items, label): if isinstance(items, str): items = [items] for index, item in enumerate(items): params[label % (index + 1)] = item def get_all_load_balancers(self, load_balancer_names=None): """ Retrieve all load balancers associated with your account. :type load_balancer_names: list :keyword load_balancer_names: An optional list of load balancer names. :rtype: :py:class:`boto.resultset.ResultSet` :return: A ResultSet containing instances of :class:`boto.ec2.elb.loadbalancer.LoadBalancer` """ params = {} if load_balancer_names: self.build_list_params(params, load_balancer_names, 'LoadBalancerNames.member.%d') return self.get_list('DescribeLoadBalancers', params, [('member', LoadBalancer)]) def create_load_balancer(self, name, zones, listeners=None, subnets=None, security_groups=None, scheme='internet-facing', complex_listeners=None): """ Create a new load balancer for your account. By default the load balancer will be created in EC2. To create a load balancer inside a VPC, parameter zones must be set to None and subnets must not be None. The load balancer will be automatically created under the VPC that contains the subnet(s) specified. :type name: string :param name: The mnemonic name associated with the new load balancer :type zones: List of strings :param zones: The names of the availability zone(s) to add. :type listeners: List of tuples :param listeners: Each tuple contains three or four values, (LoadBalancerPortNumber, InstancePortNumber, Protocol, [SSLCertificateId]) where LoadBalancerPortNumber and InstancePortNumber are integer values between 1 and 65535, Protocol is a string containing either 'TCP', 'SSL', HTTP', or 'HTTPS'; SSLCertificateID is the ARN of a AWS IAM certificate, and must be specified when doing HTTPS. :type subnets: list of strings :param subnets: A list of subnet IDs in your VPC to attach to your LoadBalancer. :type security_groups: list of strings :param security_groups: The security groups assigned to your LoadBalancer within your VPC. :type scheme: string :param scheme: The type of a LoadBalancer. By default, Elastic Load Balancing creates an internet-facing LoadBalancer with a publicly resolvable DNS name, which resolves to public IP addresses. Specify the value internal for this option to create an internal LoadBalancer with a DNS name that resolves to private IP addresses. This option is only available for LoadBalancers attached to an Amazon VPC. :type complex_listeners: List of tuples :param complex_listeners: Each tuple contains four or five values, (LoadBalancerPortNumber, InstancePortNumber, Protocol, InstanceProtocol, SSLCertificateId). 
Where: - LoadBalancerPortNumber and InstancePortNumber are integer values between 1 and 65535 - Protocol and InstanceProtocol are strings containing either 'TCP', 'SSL', 'HTTP', or 'HTTPS' - SSLCertificateId is the ARN of an SSL certificate loaded into AWS IAM :rtype: :class:`boto.ec2.elb.loadbalancer.LoadBalancer` :return: The newly created :class:`boto.ec2.elb.loadbalancer.LoadBalancer` """ if not listeners and not complex_listeners: # Must specify one of the two options return None params = {'LoadBalancerName': name, 'Scheme': scheme} # Handle legacy listeners if listeners: for index, listener in enumerate(listeners): i = index + 1 protocol = listener[2].upper() params['Listeners.member.%d.LoadBalancerPort' % i] = listener[0] params['Listeners.member.%d.InstancePort' % i] = listener[1] params['Listeners.member.%d.Protocol' % i] = listener[2] if protocol == 'HTTPS' or protocol == 'SSL': params['Listeners.member.%d.SSLCertificateId' % i] = listener[3] # Handle the full listeners if complex_listeners: for index, listener in enumerate(complex_listeners): i = index + 1 protocol = listener[2].upper() params['Listeners.member.%d.LoadBalancerPort' % i] = listener[0] params['Listeners.member.%d.InstancePort' % i] = listener[1] params['Listeners.member.%d.Protocol' % i] = listener[2] params['Listeners.member.%d.InstanceProtocol' % i] = listener[3] if protocol == 'HTTPS' or protocol == 'SSL': params['Listeners.member.%d.SSLCertificateId' % i] = listener[4] if zones: self.build_list_params(params, zones, 'AvailabilityZones.member.%d') if subnets: self.build_list_params(params, subnets, 'Subnets.member.%d') if security_groups: self.build_list_params(params, security_groups, 'SecurityGroups.member.%d') load_balancer = self.get_object('CreateLoadBalancer', params, LoadBalancer) load_balancer.name = name load_balancer.listeners = listeners load_balancer.availability_zones = zones load_balancer.subnets = subnets load_balancer.security_groups = security_groups return load_balancer def create_load_balancer_listeners(self, name, listeners=None, complex_listeners=None): """ Creates a Listener (or group of listeners) for an existing Load Balancer. :type name: string :param name: The name of the load balancer to create the listeners for :type listeners: List of tuples :param listeners: Each tuple contains three or four values, (LoadBalancerPortNumber, InstancePortNumber, Protocol, [SSLCertificateId]) where LoadBalancerPortNumber and InstancePortNumber are integer values between 1 and 65535, Protocol is a string containing either 'TCP', 'SSL', 'HTTP', or 'HTTPS'; SSLCertificateId is the ARN of an AWS IAM certificate, and must be specified when doing HTTPS. :type complex_listeners: List of tuples :param complex_listeners: Each tuple contains four or five values, (LoadBalancerPortNumber, InstancePortNumber, Protocol, InstanceProtocol, SSLCertificateId).
Where: - LoadBalancerPortNumber and InstancePortNumber are integer values between 1 and 65535 - Protocol and InstanceProtocol are strings containing either 'TCP', 'SSL', 'HTTP', or 'HTTPS' - SSLCertificateId is the ARN of an SSL certificate loaded into AWS IAM :return: The status of the request """ if not listeners and not complex_listeners: # Must specify one of the two options return None params = {'LoadBalancerName': name} # Handle the simple listeners if listeners: for index, listener in enumerate(listeners): i = index + 1 protocol = listener[2].upper() params['Listeners.member.%d.LoadBalancerPort' % i] = listener[0] params['Listeners.member.%d.InstancePort' % i] = listener[1] params['Listeners.member.%d.Protocol' % i] = listener[2] if protocol == 'HTTPS' or protocol == 'SSL': params['Listeners.member.%d.SSLCertificateId' % i] = listener[3] # Handle the full listeners if complex_listeners: for index, listener in enumerate(complex_listeners): i = index + 1 protocol = listener[2].upper() params['Listeners.member.%d.LoadBalancerPort' % i] = listener[0] params['Listeners.member.%d.InstancePort' % i] = listener[1] params['Listeners.member.%d.Protocol' % i] = listener[2] params['Listeners.member.%d.InstanceProtocol' % i] = listener[3] if protocol == 'HTTPS' or protocol == 'SSL': params['Listeners.member.%d.SSLCertificateId' % i] = listener[4] return self.get_status('CreateLoadBalancerListeners', params) def delete_load_balancer(self, name): """ Delete a Load Balancer from your account. :type name: string :param name: The name of the Load Balancer to delete """ params = {'LoadBalancerName': name} return self.get_status('DeleteLoadBalancer', params) def delete_load_balancer_listeners(self, name, ports): """ Deletes a load balancer listener (or group of listeners). :type name: string :param name: The name of the load balancer to delete the listeners from :type ports: List of int :param ports: Each int represents the port on the ELB to be removed :return: The status of the request """ params = {'LoadBalancerName': name} for index, port in enumerate(ports): params['LoadBalancerPorts.member.%d' % (index + 1)] = port return self.get_status('DeleteLoadBalancerListeners', params) def enable_availability_zones(self, load_balancer_name, zones_to_add): """ Add availability zones to an existing Load Balancer. All zones must be in the same region as the Load Balancer. Adding zones that are already registered with the Load Balancer has no effect. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type zones_to_add: List of strings :param zones_to_add: The name of the zone(s) to add. :rtype: List of strings :return: An updated list of zones for this Load Balancer. """ params = {'LoadBalancerName': load_balancer_name} self.build_list_params(params, zones_to_add, 'AvailabilityZones.member.%d') obj = self.get_object('EnableAvailabilityZonesForLoadBalancer', params, LoadBalancerZones) return obj.zones def disable_availability_zones(self, load_balancer_name, zones_to_remove): """ Remove availability zones from an existing Load Balancer. All zones must be in the same region as the Load Balancer. Removing zones that are not registered with the Load Balancer has no effect. You cannot remove all zones from a Load Balancer. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type zones_to_remove: List of strings :param zones_to_remove: The name of the zone(s) to remove.
:rtype: List of strings :return: An updated list of zones for this Load Balancer. """ params = {'LoadBalancerName': load_balancer_name} self.build_list_params(params, zones_to_remove, 'AvailabilityZones.member.%d') obj = self.get_object('DisableAvailabilityZonesForLoadBalancer', params, LoadBalancerZones) return obj.zones def modify_lb_attribute(self, load_balancer_name, attribute, value): """Changes an attribute of a Load Balancer. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type attribute: string :param attribute: The attribute you wish to change. * crossZoneLoadBalancing - Boolean (true) :type value: string :param value: The new value for the attribute :rtype: bool :return: Whether the operation succeeded or not """ bool_reqs = ('crosszoneloadbalancing',) if attribute.lower() in bool_reqs: if isinstance(value, bool): if value: value = 'true' else: value = 'false' params = {'LoadBalancerName': load_balancer_name} if attribute.lower() == 'crosszoneloadbalancing': params['LoadBalancerAttributes.CrossZoneLoadBalancing.Enabled'] = value else: raise ValueError('InvalidAttribute', attribute) return self.get_status('ModifyLoadBalancerAttributes', params, verb='GET') def get_all_lb_attributes(self, load_balancer_name): """Gets all Attributes of a Load Balancer. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :rtype: boto.ec2.elb.attributes.LbAttributes :return: The attribute object of the ELB. """ from boto.ec2.elb.attributes import LbAttributes params = {'LoadBalancerName': load_balancer_name} return self.get_object('DescribeLoadBalancerAttributes', params, LbAttributes) def get_lb_attribute(self, load_balancer_name, attribute): """Gets an attribute of a Load Balancer. This will make an EC2 call for each method call. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type attribute: string :param attribute: The attribute you wish to see. * crossZoneLoadBalancing - Boolean :rtype: Attribute dependent :return: The current value of the attribute """ attributes = self.get_all_lb_attributes(load_balancer_name) if attribute.lower() == 'crosszoneloadbalancing': return attributes.cross_zone_load_balancing.enabled return None def register_instances(self, load_balancer_name, instances): """ Add new Instances to an existing Load Balancer. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type instances: List of strings :param instances: The instance IDs of the EC2 instances to add. :rtype: List of strings :return: An updated list of instances for this Load Balancer. """ params = {'LoadBalancerName': load_balancer_name} self.build_list_params(params, instances, 'Instances.member.%d.InstanceId') return self.get_list('RegisterInstancesWithLoadBalancer', params, [('member', InstanceInfo)]) def deregister_instances(self, load_balancer_name, instances): """ Remove Instances from an existing Load Balancer. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type instances: List of strings :param instances: The instance IDs of the EC2 instances to remove. :rtype: List of strings :return: An updated list of instances for this Load Balancer.
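Illustrative call (the ``elb`` connection, balancer name and instance IDs are assumed)::

    elb.deregister_instances('my-lb', ['i-12345678', 'i-87654321'])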
""" params = {'LoadBalancerName': load_balancer_name} self.build_list_params(params, instances, 'Instances.member.%d.InstanceId') return self.get_list('DeregisterInstancesFromLoadBalancer', params, [('member', InstanceInfo)]) def describe_instance_health(self, load_balancer_name, instances=None): """ Get current state of all Instances registered to an Load Balancer. :type load_balancer_name: string :param load_balancer_name: The name of the Load Balancer :type instances: List of strings :param instances: The instance ID's of the EC2 instances to return status for. If not provided, the state of all instances will be returned. :rtype: List of :class:`boto.ec2.elb.instancestate.InstanceState` :return: list of state info for instances in this Load Balancer. """ params = {'LoadBalancerName': load_balancer_name} if instances: self.build_list_params(params, instances, 'Instances.member.%d.InstanceId') return self.get_list('DescribeInstanceHealth', params, [('member', InstanceState)]) def configure_health_check(self, name, health_check): """ Define a health check for the EndPoints. :type name: string :param name: The mnemonic name associated with the load balancer :type health_check: :class:`boto.ec2.elb.healthcheck.HealthCheck` :param health_check: A HealthCheck object populated with the desired values. :rtype: :class:`boto.ec2.elb.healthcheck.HealthCheck` :return: The updated :class:`boto.ec2.elb.healthcheck.HealthCheck` """ params = {'LoadBalancerName': name, 'HealthCheck.Timeout': health_check.timeout, 'HealthCheck.Target': health_check.target, 'HealthCheck.Interval': health_check.interval, 'HealthCheck.UnhealthyThreshold': health_check.unhealthy_threshold, 'HealthCheck.HealthyThreshold': health_check.healthy_threshold} return self.get_object('ConfigureHealthCheck', params, HealthCheck) def set_lb_listener_SSL_certificate(self, lb_name, lb_port, ssl_certificate_id): """ Sets the certificate that terminates the specified listener's SSL connections. The specified certificate replaces any prior certificate that was used on the same LoadBalancer and port. """ params = {'LoadBalancerName': lb_name, 'LoadBalancerPort': lb_port, 'SSLCertificateId': ssl_certificate_id} return self.get_status('SetLoadBalancerListenerSSLCertificate', params) def create_app_cookie_stickiness_policy(self, name, lb_name, policy_name): """ Generates a stickiness policy with sticky session lifetimes that follow that of an application-generated cookie. This policy can only be associated with HTTP listeners. This policy is similar to the policy created by CreateLBCookieStickinessPolicy, except that the lifetime of the special Elastic Load Balancing cookie follows the lifetime of the application-generated cookie specified in the policy configuration. The load balancer only inserts a new stickiness cookie when the application response includes a new application cookie. If the application cookie is explicitly removed or expires, the session stops being sticky until a new application cookie is issued. """ params = {'CookieName': name, 'LoadBalancerName': lb_name, 'PolicyName': policy_name} return self.get_status('CreateAppCookieStickinessPolicy', params) def create_lb_cookie_stickiness_policy(self, cookie_expiration_period, lb_name, policy_name): """ Generates a stickiness policy with sticky session lifetimes controlled by the lifetime of the browser (user-agent) or a specified expiration period. This policy can only be associated only with HTTP listeners. 
When a load balancer implements this policy, the load balancer uses a special cookie to track the backend server instance for each request. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the load balancer sends the request to the application server specified in the cookie. If not, the load balancer sends the request to a server that is chosen based on the existing load balancing algorithm. A cookie is inserted into the response for binding subsequent requests from the same user to that server. The validity of the cookie is based on the cookie expiration time, which is specified in the policy configuration. None may be passed for cookie_expiration_period. """ params = {'LoadBalancerName': lb_name, 'PolicyName': policy_name} if cookie_expiration_period is not None: params['CookieExpirationPeriod'] = cookie_expiration_period return self.get_status('CreateLBCookieStickinessPolicy', params) def create_lb_policy(self, lb_name, policy_name, policy_type, policy_attributes): """ Creates a new policy that contains the necessary attributes depending on the policy type. Policies are settings that are saved for your load balancer and that can be applied to the front-end listener, or the back-end application server. """ params = {'LoadBalancerName': lb_name, 'PolicyName': policy_name, 'PolicyTypeName': policy_type} if policy_attributes: for index, (name, value) in enumerate(policy_attributes.iteritems(), 1): params['PolicyAttributes.member.%d.AttributeName' % index] = name params['PolicyAttributes.member.%d.AttributeValue' % index] = value else: params['PolicyAttributes'] = '' return self.get_status('CreateLoadBalancerPolicy', params) def delete_lb_policy(self, lb_name, policy_name): """ Deletes a policy from the LoadBalancer. The specified policy must not be enabled for any listeners. """ params = {'LoadBalancerName': lb_name, 'PolicyName': policy_name} return self.get_status('DeleteLoadBalancerPolicy', params) def set_lb_policies_of_listener(self, lb_name, lb_port, policies): """ Associates, updates, or disables a policy with a listener on the load balancer. Currently only zero (0) or one (1) policy can be associated with a listener. """ params = {'LoadBalancerName': lb_name, 'LoadBalancerPort': lb_port} self.build_list_params(params, policies, 'PolicyNames.member.%d') return self.get_status('SetLoadBalancerPoliciesOfListener', params) def set_lb_policies_of_backend_server(self, lb_name, instance_port, policies): """ Replaces the current set of policies associated with a port on which the back-end server is listening with a new set of policies. """ params = {'LoadBalancerName': lb_name, 'InstancePort': instance_port} if policies: self.build_list_params(params, policies, 'PolicyNames.member.%d') else: params['PolicyNames'] = '' return self.get_status('SetLoadBalancerPoliciesForBackendServer', params) def apply_security_groups_to_lb(self, name, security_groups): """ Applies security groups to the load balancer. Applying security groups that are already registered with the Load Balancer has no effect. :type name: string :param name: The name of the Load Balancer :type security_groups: List of strings :param security_groups: The name of the security group(s) to add. :rtype: List of strings :return: An updated list of security groups for this Load Balancer.
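Illustrative call (the ``elb`` connection, balancer name and group ID are assumed)::

    elb.apply_security_groups_to_lb('my-lb', ['sg-12345678'])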
""" params = {'LoadBalancerName': name} self.build_list_params(params, security_groups, 'SecurityGroups.member.%d') return self.get_list('ApplySecurityGroupsToLoadBalancer', params, None) def attach_lb_to_subnets(self, name, subnets): """ Attaches load balancer to one or more subnets. Attaching subnets that are already registered with the Load Balancer has no effect. :type name: string :param name: The name of the Load Balancer :type subnets: List of strings :param subnets: The name of the subnet(s) to add. :rtype: List of strings :return: An updated list of subnets for this Load Balancer. """ params = {'LoadBalancerName': name} self.build_list_params(params, subnets, 'Subnets.member.%d') return self.get_list('AttachLoadBalancerToSubnets', params, None) def detach_lb_from_subnets(self, name, subnets): """ Detaches load balancer from one or more subnets. :type name: string :param name: The name of the Load Balancer :type subnets: List of strings :param subnets: The name of the subnet(s) to detach. :rtype: List of strings :return: An updated list of subnets for this Load Balancer. """ params = {'LoadBalancerName': name} self.build_list_params(params, subnets, 'Subnets.member.%d') return self.get_list('DetachLoadBalancerFromSubnets', params, None) boto-2.20.1/boto/ec2/elb/attributes.py000066400000000000000000000043721225267101000174570ustar00rootroot00000000000000# Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # # Created by Chris Huegle for TellApart, Inc. class CrossZoneLoadBalancingAttribute(object): """ Represents the CrossZoneLoadBalancing segement of ELB Attributes. """ def __init__(self, connection=None): self.enabled = None def __repr__(self): return 'CrossZoneLoadBalancingAttribute(%s)' % ( self.enabled) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Enabled': if value.lower() == 'true': self.enabled = True else: self.enabled = False class LbAttributes(object): """ Represents the Attributes of an Elastic Load Balancer. 
""" def __init__(self, connection=None): self.connection = connection self.cross_zone_load_balancing = CrossZoneLoadBalancingAttribute( self.connection) def __repr__(self): return 'LbAttributes(%s)' % ( repr(self.cross_zone_load_balancing)) def startElement(self, name, attrs, connection): if name == 'CrossZoneLoadBalancing': return self.cross_zone_load_balancing def endElement(self, name, value, connection): pass boto-2.20.1/boto/ec2/elb/healthcheck.py000066400000000000000000000072771225267101000175430ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class HealthCheck(object): """ Represents an EC2 Access Point Health Check. See :ref:`elb-configuring-a-health-check` for a walkthrough on configuring load balancer health checks. """ def __init__(self, access_point=None, interval=30, target=None, healthy_threshold=3, timeout=5, unhealthy_threshold=5): """ :ivar str access_point: The name of the load balancer this health check is associated with. :ivar int interval: Specifies how many seconds there are between health checks. :ivar str target: Determines what to check on an instance. See the Amazon HealthCheck_ documentation for possible Target values. .. _HealthCheck: http://docs.amazonwebservices.com/ElasticLoadBalancing/latest/APIReference/API_HealthCheck.html """ self.access_point = access_point self.interval = interval self.target = target self.healthy_threshold = healthy_threshold self.timeout = timeout self.unhealthy_threshold = unhealthy_threshold def __repr__(self): return 'HealthCheck:%s' % self.target def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Interval': self.interval = int(value) elif name == 'Target': self.target = value elif name == 'HealthyThreshold': self.healthy_threshold = int(value) elif name == 'Timeout': self.timeout = int(value) elif name == 'UnhealthyThreshold': self.unhealthy_threshold = int(value) else: setattr(self, name, value) def update(self): """ In the case where you have accessed an existing health check on a load balancer, this method applies this instance's health check values to the load balancer it is attached to. .. note:: This method will not do anything if the :py:attr:`access_point` attribute isn't set, as is the case with a newly instantiated HealthCheck instance. 
""" if not self.access_point: return new_hc = self.connection.configure_health_check(self.access_point, self) self.interval = new_hc.interval self.target = new_hc.target self.healthy_threshold = new_hc.healthy_threshold self.unhealthy_threshold = new_hc.unhealthy_threshold self.timeout = new_hc.timeout boto-2.20.1/boto/ec2/elb/instancestate.py000066400000000000000000000051411225267101000201310ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class InstanceState(object): """ Represents the state of an EC2 Load Balancer Instance """ def __init__(self, load_balancer=None, description=None, state=None, instance_id=None, reason_code=None): """ :ivar boto.ec2.elb.loadbalancer.LoadBalancer load_balancer: The load balancer this instance is registered to. :ivar str description: A description of the instance. :ivar str instance_id: The EC2 instance ID. :ivar str reason_code: Provides information about the cause of an OutOfService instance. Specifically, it indicates whether the cause is Elastic Load Balancing or the instance behind the LoadBalancer. :ivar str state: Specifies the current state of the instance. """ self.load_balancer = load_balancer self.description = description self.state = state self.instance_id = instance_id self.reason_code = reason_code def __repr__(self): return 'InstanceState:(%s,%s)' % (self.instance_id, self.state) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Description': self.description = value elif name == 'State': self.state = value elif name == 'InstanceId': self.instance_id = value elif name == 'ReasonCode': self.reason_code = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/elb/listelement.py000066400000000000000000000027501225267101000176140ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class ListElement(list): """ A :py:class:`list` subclass that has some additional methods for interacting with Amazon's XML API. """ def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'member': self.append(value) boto-2.20.1/boto/ec2/elb/listener.py000066400000000000000000000063611225267101000171160ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
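# An illustrative sketch (not part of the API): a Listener behaves much like
# the tuple it was built from; all values below are assumed examples.
#
#     l = Listener(load_balancer_port=80, instance_port=8080, protocol='HTTP')
#     l.get_tuple()   # (80, 8080, 'HTTP')
#     l[2]            # 'HTTP'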
from boto.ec2.elb.listelement import ListElement class Listener(object): """ Represents an EC2 Load Balancer Listener tuple. """ def __init__(self, load_balancer=None, load_balancer_port=0, instance_port=0, protocol='', ssl_certificate_id=None, instance_protocol=None): self.load_balancer = load_balancer self.load_balancer_port = load_balancer_port self.instance_port = instance_port self.protocol = protocol self.instance_protocol = instance_protocol self.ssl_certificate_id = ssl_certificate_id self.policy_names = ListElement() def __repr__(self): r = "(%d, %d, '%s'" % (self.load_balancer_port, self.instance_port, self.protocol) if self.instance_protocol: r += ", '%s'" % self.instance_protocol if self.ssl_certificate_id: r += ', %s' % (self.ssl_certificate_id) r += ')' return r def startElement(self, name, attrs, connection): if name == 'PolicyNames': return self.policy_names return None def endElement(self, name, value, connection): if name == 'LoadBalancerPort': self.load_balancer_port = int(value) elif name == 'InstancePort': self.instance_port = int(value) elif name == 'InstanceProtocol': self.instance_protocol = value elif name == 'Protocol': self.protocol = value elif name == 'SSLCertificateId': self.ssl_certificate_id = value else: setattr(self, name, value) def get_tuple(self): return self.load_balancer_port, self.instance_port, self.protocol def get_complex_tuple(self): return self.load_balancer_port, self.instance_port, self.protocol, self.instance_protocol def __getitem__(self, key): if key == 0: return self.load_balancer_port if key == 1: return self.instance_port if key == 2: return self.protocol if key == 3: return self.instance_protocol raise KeyError boto-2.20.1/boto/ec2/elb/loadbalancer.py000066400000000000000000000404111225267101000176720ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE.
from boto.ec2.elb.healthcheck import HealthCheck from boto.ec2.elb.listener import Listener from boto.ec2.elb.listelement import ListElement from boto.ec2.elb.policies import Policies, OtherPolicy from boto.ec2.elb.securitygroup import SecurityGroup from boto.ec2.instanceinfo import InstanceInfo from boto.resultset import ResultSet class Backend(object): """Backend server description""" def __init__(self, connection=None): self.connection = connection self.instance_port = None self.policies = None def __repr__(self): return 'Backend(%r:%r)' % (self.instance_port, self.policies) def startElement(self, name, attrs, connection): if name == 'PolicyNames': self.policies = ResultSet([('member', OtherPolicy)]) return self.policies def endElement(self, name, value, connection): if name == 'InstancePort': self.instance_port = int(value) return class LoadBalancerZones(object): """ Used to collect the zones for a Load Balancer when enable_zones or disable_zones are called. """ def __init__(self, connection=None): self.connection = connection self.zones = ListElement() def startElement(self, name, attrs, connection): if name == 'AvailabilityZones': return self.zones def endElement(self, name, value, connection): pass class LoadBalancer(object): """ Represents an EC2 Load Balancer. """ def __init__(self, connection=None, name=None, endpoints=None): """ :ivar boto.ec2.elb.ELBConnection connection: The connection this load balancer instance was instantiated from. :ivar list listeners: A list of tuples in the form of ``(<load balancer port>, <instance port>, <protocol>)`` :ivar boto.ec2.elb.healthcheck.HealthCheck health_check: The health check policy for this load balancer. :ivar boto.ec2.elb.policies.Policies policies: Cookie stickiness and other policies. :ivar str dns_name: The external DNS name for the balancer. :ivar str created_time: A date+time string showing when the load balancer was created. :ivar list instances: A list of :py:class:`boto.ec2.instanceinfo.InstanceInfo` instances, representing the EC2 instances this load balancer is distributing requests to. :ivar list availability_zones: The availability zones this balancer covers. :ivar str canonical_hosted_zone_name: Current CNAME for the balancer. :ivar str canonical_hosted_zone_name_id: The Route 53 hosted zone ID of this balancer. Needed when creating an Alias record in a Route 53 hosted zone. :ivar boto.ec2.elb.securitygroup.SecurityGroup source_security_group: The security group that you can use as part of your inbound rules for your load balancer back-end instances to disallow traffic from sources other than your load balancer. :ivar list subnets: A list of subnets this balancer is on. :ivar list security_groups: A list of additional security groups that have been applied. :ivar str vpc_id: The ID of the VPC that this ELB resides within. :ivar list backends: A list of :py:class:`boto.ec2.elb.loadbalancer.Backend` back-end server descriptions.
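A compact usage sketch (the ``elb`` connection, names and IDs below are examples only)::

    balancers = elb.get_all_load_balancers()
    lb = balancers[0]
    lb.register_instances(['i-12345678'])
    print lb.dns_name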
""" self.connection = connection self.name = name self.listeners = None self.health_check = None self.policies = None self.dns_name = None self.created_time = None self.instances = None self.availability_zones = ListElement() self.canonical_hosted_zone_name = None self.canonical_hosted_zone_name_id = None self.source_security_group = None self.subnets = ListElement() self.security_groups = ListElement() self.vpc_id = None self.scheme = None self.backends = None self._attributes = None def __repr__(self): return 'LoadBalancer:%s' % self.name def startElement(self, name, attrs, connection): if name == 'HealthCheck': self.health_check = HealthCheck(self) return self.health_check elif name == 'ListenerDescriptions': self.listeners = ResultSet([('member', Listener)]) return self.listeners elif name == 'AvailabilityZones': return self.availability_zones elif name == 'Instances': self.instances = ResultSet([('member', InstanceInfo)]) return self.instances elif name == 'Policies': self.policies = Policies(self) return self.policies elif name == 'SourceSecurityGroup': self.source_security_group = SecurityGroup() return self.source_security_group elif name == 'Subnets': return self.subnets elif name == 'SecurityGroups': return self.security_groups elif name == 'VPCId': pass elif name == "BackendServerDescriptions": self.backends = ResultSet([('member', Backend)]) return self.backends else: return None def endElement(self, name, value, connection): if name == 'LoadBalancerName': self.name = value elif name == 'DNSName': self.dns_name = value elif name == 'CreatedTime': self.created_time = value elif name == 'InstanceId': self.instances.append(value) elif name == 'CanonicalHostedZoneName': self.canonical_hosted_zone_name = value elif name == 'CanonicalHostedZoneNameID': self.canonical_hosted_zone_name_id = value elif name == 'VPCId': self.vpc_id = value elif name == 'Scheme': self.scheme = value else: setattr(self, name, value) def enable_zones(self, zones): """ Enable availability zones to this Access Point. All zones must be in the same region as the Access Point. :type zones: string or List of strings :param zones: The name of the zone(s) to add. """ if isinstance(zones, str) or isinstance(zones, unicode): zones = [zones] new_zones = self.connection.enable_availability_zones(self.name, zones) self.availability_zones = new_zones def disable_zones(self, zones): """ Disable availability zones from this Access Point. :type zones: string or List of strings :param zones: The name of the zone(s) to add. """ if isinstance(zones, str) or isinstance(zones, unicode): zones = [zones] new_zones = self.connection.disable_availability_zones(self.name, zones) self.availability_zones = new_zones def get_attributes(self, force=False): """ Gets the LbAttributes. The Attributes will be cached. :type force: bool :param force: Ignore cache value and reload. :rtype: boto.ec2.elb.attributes.LbAttributes :return: The LbAttribues object """ if not self._attributes or force: self._attributes = self.connection.get_all_lb_attributes(self.name) return self._attributes def is_cross_zone_load_balancing(self, force=False): """ Identifies if the ELB is current configured to do CrossZone Balancing. :type force: bool :param force: Ignore cache value and reload. :rtype: bool :return: True if balancing is enabled, False if not. """ return self.get_attributes(force).cross_zone_load_balancing.enabled def enable_cross_zone_load_balancing(self): """ Turns on CrossZone Load Balancing for this ELB. 
:rtype: bool :return: True if successful, False if not. """ success = self.connection.modify_lb_attribute( self.name, 'crossZoneLoadBalancing', True) if success and self._attributes: self._attributes.cross_zone_load_balancing.enabled = True return success def disable_cross_zone_load_balancing(self): """ Turns off CrossZone Load Balancing for this ELB. :rtype: bool :return: True if successful, False if not. """ success = self.connection.modify_lb_attribute( self.name, 'crossZoneLoadBalancing', False) if success and self._attributes: self._attributes.cross_zone_load_balancing.enabled = False return success def register_instances(self, instances): """ Adds instances to this load balancer. All instances must be in the same region as the load balancer. Adding endpoints that are already registered with the load balancer has no effect. :param list instances: List of instance IDs (strings) that you'd like to add to this load balancer. """ if isinstance(instances, str) or isinstance(instances, unicode): instances = [instances] new_instances = self.connection.register_instances(self.name, instances) self.instances = new_instances def deregister_instances(self, instances): """ Remove instances from this load balancer. Removing instances that are not registered with the load balancer has no effect. :param list instances: List of instance IDs (strings) that you'd like to remove from this load balancer. """ if isinstance(instances, str) or isinstance(instances, unicode): instances = [instances] new_instances = self.connection.deregister_instances(self.name, instances) self.instances = new_instances def delete(self): """ Delete this load balancer. """ return self.connection.delete_load_balancer(self.name) def configure_health_check(self, health_check): """ Configures the health check behavior for the instances behind this load balancer. See :ref:`elb-configuring-a-health-check` for a walkthrough. :param boto.ec2.elb.healthcheck.HealthCheck health_check: A HealthCheck instance that tells the load balancer how to check its instances for health. """ return self.connection.configure_health_check(self.name, health_check) def get_instance_health(self, instances=None): """ Returns a list of :py:class:`boto.ec2.elb.instancestate.InstanceState` objects, which show the health of the instances attached to this load balancer. :rtype: list :returns: A list of :py:class:`InstanceState <boto.ec2.elb.instancestate.InstanceState>` instances, representing the instances attached to this load balancer. """ return self.connection.describe_instance_health(self.name, instances) def create_listeners(self, listeners): return self.connection.create_load_balancer_listeners(self.name, listeners) def create_listener(self, inPort, outPort=None, proto="tcp"): if outPort is None: outPort = inPort return self.create_listeners([(inPort, outPort, proto)]) def delete_listeners(self, listeners): return self.connection.delete_load_balancer_listeners(self.name, listeners) def delete_listener(self, inPort): return self.delete_listeners([inPort]) def delete_policy(self, policy_name): """ Deletes a policy from the LoadBalancer. The specified policy must not be enabled for any listeners.
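An illustrative sequence: detach the policy from its listener first, then delete it (the port and policy name are assumed)::

    lb.set_policies_of_listener(80, [])
    lb.delete_policy('my-sticky-policy')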
""" return self.connection.delete_lb_policy(self.name, policy_name) def set_policies_of_listener(self, lb_port, policies): return self.connection.set_lb_policies_of_listener(self.name, lb_port, policies) def set_policies_of_backend_server(self, instance_port, policies): return self.connection.set_lb_policies_of_backend_server(self.name, instance_port, policies) def create_cookie_stickiness_policy(self, cookie_expiration_period, policy_name): return self.connection.create_lb_cookie_stickiness_policy(cookie_expiration_period, self.name, policy_name) def create_app_cookie_stickiness_policy(self, name, policy_name): return self.connection.create_app_cookie_stickiness_policy(name, self.name, policy_name) def set_listener_SSL_certificate(self, lb_port, ssl_certificate_id): return self.connection.set_lb_listener_SSL_certificate(self.name, lb_port, ssl_certificate_id) def create_lb_policy(self, policy_name, policy_type, policy_attribute): return self.connection.create_lb_policy(self.name, policy_name, policy_type, policy_attribute) def attach_subnets(self, subnets): """ Attaches load balancer to one or more subnets. Attaching subnets that are already registered with the Load Balancer has no effect. :type subnets: string or List of strings :param subnets: The name of the subnet(s) to add. """ if isinstance(subnets, str) or isinstance(subnets, unicode): subnets = [subnets] new_subnets = self.connection.attach_lb_to_subnets(self.name, subnets) self.subnets = new_subnets def detach_subnets(self, subnets): """ Detaches load balancer from one or more subnets. :type subnets: string or List of strings :param subnets: The name of the subnet(s) to detach. """ if isinstance(subnets, str) or isinstance(subnets, unicode): subnets = [subnets] new_subnets = self.connection.detach_lb_from_subnets(self.name, subnets) self.subnets = new_subnets def apply_security_groups(self, security_groups): """ Applies security groups to the load balancer. Applying security groups that are already registered with the Load Balancer has no effect. :type security_groups: string or List of strings :param security_groups: The name of the security group(s) to add. """ if isinstance(security_groups, str) or \ isinstance(security_groups, unicode): security_groups = [security_groups] new_sgs = self.connection.apply_security_groups_to_lb( self.name, security_groups) self.security_groups = new_sgs boto-2.20.1/boto/ec2/elb/policies.py000066400000000000000000000074211225267101000170760ustar00rootroot00000000000000# Copyright (c) 2010 Reza Lotun http://reza.lotun.name # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.resultset import ResultSet class AppCookieStickinessPolicy(object): def __init__(self, connection=None): self.cookie_name = None self.policy_name = None def __repr__(self): return 'AppCookieStickiness(%s, %s)' % (self.policy_name, self.cookie_name) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'CookieName': self.cookie_name = value elif name == 'PolicyName': self.policy_name = value class LBCookieStickinessPolicy(object): def __init__(self, connection=None): self.policy_name = None self.cookie_expiration_period = None def __repr__(self): return 'LBCookieStickiness(%s, %s)' % (self.policy_name, self.cookie_expiration_period) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'CookieExpirationPeriod': self.cookie_expiration_period = value elif name == 'PolicyName': self.policy_name = value class OtherPolicy(object): def __init__(self, connection=None): self.policy_name = None def __repr__(self): return 'OtherPolicy(%s)' % (self.policy_name) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): self.policy_name = value class Policies(object): """ ELB Policies """ def __init__(self, connection=None): self.connection = connection self.app_cookie_stickiness_policies = None self.lb_cookie_stickiness_policies = None self.other_policies = None def __repr__(self): app = 'AppCookieStickiness%s' % self.app_cookie_stickiness_policies lb = 'LBCookieStickiness%s' % self.lb_cookie_stickiness_policies other = 'Other%s' % self.other_policies return 'Policies(%s,%s,%s)' % (app, lb, other) def startElement(self, name, attrs, connection): if name == 'AppCookieStickinessPolicies': rs = ResultSet([('member', AppCookieStickinessPolicy)]) self.app_cookie_stickiness_policies = rs return rs elif name == 'LBCookieStickinessPolicies': rs = ResultSet([('member', LBCookieStickinessPolicy)]) self.lb_cookie_stickiness_policies = rs return rs elif name == 'OtherPolicies': rs = ResultSet([('member', OtherPolicy)]) self.other_policies = rs return rs def endElement(self, name, value, connection): return boto-2.20.1/boto/ec2/elb/securitygroup.py000066400000000000000000000030501225267101000202050ustar00rootroot00000000000000# Copyright (c) 2010 Reza Lotun http://reza.lotun.name # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class SecurityGroup(object): def __init__(self, connection=None): self.name = None self.owner_alias = None def __repr__(self): return 'SecurityGroup(%s, %s)' % (self.name, self.owner_alias) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'GroupName': self.name = value elif name == 'OwnerAlias': self.owner_alias = value boto-2.20.1/boto/ec2/group.py000066400000000000000000000030251225267101000156550ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Group: def __init__(self, parent=None): self.id = None self.name = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'groupId': self.id = value elif name == 'groupName': self.name = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/image.py000066400000000000000000000367601225267101000156170ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
from boto.ec2.ec2object import EC2Object, TaggedEC2Object from boto.ec2.blockdevicemapping import BlockDeviceMapping class ProductCodes(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'productCode': self.append(value) class BillingProducts(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'billingProduct': self.append(value) class Image(TaggedEC2Object): """ Represents an EC2 Image """ def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.location = None self.state = None self.ownerId = None # for backwards compatibility self.owner_id = None self.owner_alias = None self.is_public = False self.architecture = None self.platform = None self.type = None self.kernel_id = None self.ramdisk_id = None self.name = None self.description = None self.product_codes = ProductCodes() self.billing_products = BillingProducts() self.block_device_mapping = None self.root_device_type = None self.root_device_name = None self.virtualization_type = None self.hypervisor = None self.instance_lifecycle = None def __repr__(self): return 'Image:%s' % self.id def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'blockDeviceMapping': self.block_device_mapping = BlockDeviceMapping() return self.block_device_mapping elif name == 'productCodes': return self.product_codes elif name == 'billingProducts': return self.billing_products else: return None def endElement(self, name, value, connection): if name == 'imageId': self.id = value elif name == 'imageLocation': self.location = value elif name == 'imageState': self.state = value elif name == 'imageOwnerId': self.ownerId = value # for backwards compatibility self.owner_id = value elif name == 'isPublic': if value == 'false': self.is_public = False elif value == 'true': self.is_public = True else: raise Exception( 'Unexpected value of isPublic %s for image %s'%( value, self.id ) ) elif name == 'architecture': self.architecture = value elif name == 'imageType': self.type = value elif name == 'kernelId': self.kernel_id = value elif name == 'ramdiskId': self.ramdisk_id = value elif name == 'imageOwnerAlias': self.owner_alias = value elif name == 'platform': self.platform = value elif name == 'name': self.name = value elif name == 'description': self.description = value elif name == 'rootDeviceType': self.root_device_type = value elif name == 'rootDeviceName': self.root_device_name = value elif name == 'virtualizationType': self.virtualization_type = value elif name == 'hypervisor': self.hypervisor = value elif name == 'instanceLifecycle': self.instance_lifecycle = value else: setattr(self, name, value) def _update(self, updated): self.__dict__.update(updated.__dict__) def update(self, validate=False, dry_run=False): """ Update the image's state information by making a call to fetch the current image attributes from the service. :type validate: bool :param validate: By default, if EC2 returns no data about the image the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2. 
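
        A minimal polling sketch (assumes ``image`` is an :class:`Image`
        for an AMI that is still being created)::

            import time
            while image.update() != 'available':
                time.sleep(5)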
""" rs = self.connection.get_all_images([self.id], dry_run=dry_run) if len(rs) > 0: img = rs[0] if img.id == self.id: self._update(img) elif validate: raise ValueError('%s is not a valid Image ID' % self.id) return self.state def run(self, min_count=1, max_count=1, key_name=None, security_groups=None, user_data=None, addressing_type=None, instance_type='m1.small', placement=None, kernel_id=None, ramdisk_id=None, monitoring_enabled=False, subnet_id=None, block_device_map=None, disable_api_termination=False, instance_initiated_shutdown_behavior=None, private_ip_address=None, placement_group=None, security_group_ids=None, additional_info=None, instance_profile_name=None, instance_profile_arn=None, tenancy=None, dry_run=False): """ Runs this instance. :type min_count: int :param min_count: The minimum number of instances to start :type max_count: int :param max_count: The maximum number of instances to start :type key_name: string :param key_name: The name of the key pair with which to launch instances. :type security_groups: list of strings :param security_groups: The names of the security groups with which to associate instances. :type user_data: string :param user_data: The Base64-encoded MIME user data to be made available to the instance(s) in this reservation. :type instance_type: string :param instance_type: The type of instance to run: * t1.micro * m1.small * m1.medium * m1.large * m1.xlarge * m3.xlarge * m3.2xlarge * c1.medium * c1.xlarge * m2.xlarge * m2.2xlarge * m2.4xlarge * cr1.8xlarge * hi1.4xlarge * hs1.8xlarge * cc1.4xlarge * cg1.4xlarge * cc2.8xlarge * g2.2xlarge * i2.xlarge * i2.2xlarge * i2.4xlarge * i2.8xlarge :type placement: string :param placement: The Availability Zone to launch the instance into. :type kernel_id: string :param kernel_id: The ID of the kernel with which to launch the instances. :type ramdisk_id: string :param ramdisk_id: The ID of the RAM disk with which to launch the instances. :type monitoring_enabled: bool :param monitoring_enabled: Enable CloudWatch monitoring on the instance. :type subnet_id: string :param subnet_id: The subnet ID within which to launch the instances for VPC. :type private_ip_address: string :param private_ip_address: If you're using VPC, you can optionally use this parameter to assign the instance a specific available IP address from the subnet (e.g., 10.0.0.25). :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping` :param block_device_map: A BlockDeviceMapping data structure describing the EBS volumes associated with the Image. :type disable_api_termination: bool :param disable_api_termination: If True, the instances will be locked and will not be able to be terminated via the API. :type instance_initiated_shutdown_behavior: string :param instance_initiated_shutdown_behavior: Specifies whether the instance stops or terminates on instance-initiated shutdown. Valid values are: * stop * terminate :type placement_group: string :param placement_group: If specified, this is the name of the placement group in which the instance(s) will be launched. :type additional_info: string :param additional_info: Specifies additional information to make available to the instance(s). :type security_group_ids: list of strings :param security_group_ids: The ID of the VPC security groups with which to associate instances. :type instance_profile_name: string :param instance_profile_name: The name of the IAM Instance Profile (IIP) to associate with the instances. 
:type instance_profile_arn: string :param instance_profile_arn: The Amazon resource name (ARN) of the IAM Instance Profile (IIP) to associate with the instances. :type tenancy: string :param tenancy: The tenancy of the instance you want to launch. An instance with a tenancy of 'dedicated' runs on single-tenant hardware and can only be launched into a VPC. Valid values are:"default" or "dedicated". NOTE: To use dedicated tenancy you MUST specify a VPC subnet-ID as well. :rtype: Reservation :return: The :class:`boto.ec2.instance.Reservation` associated with the request for machines """ return self.connection.run_instances(self.id, min_count, max_count, key_name, security_groups, user_data, addressing_type, instance_type, placement, kernel_id, ramdisk_id, monitoring_enabled, subnet_id, block_device_map, disable_api_termination, instance_initiated_shutdown_behavior, private_ip_address, placement_group, security_group_ids=security_group_ids, additional_info=additional_info, instance_profile_name=instance_profile_name, instance_profile_arn=instance_profile_arn, tenancy=tenancy, dry_run=dry_run) def deregister(self, delete_snapshot=False, dry_run=False): return self.connection.deregister_image( self.id, delete_snapshot, dry_run=dry_run ) def get_launch_permissions(self, dry_run=False): img_attrs = self.connection.get_image_attribute( self.id, 'launchPermission', dry_run=dry_run ) return img_attrs.attrs def set_launch_permissions(self, user_ids=None, group_names=None, dry_run=False): return self.connection.modify_image_attribute(self.id, 'launchPermission', 'add', user_ids, group_names, dry_run=dry_run) def remove_launch_permissions(self, user_ids=None, group_names=None, dry_run=False): return self.connection.modify_image_attribute(self.id, 'launchPermission', 'remove', user_ids, group_names, dry_run=dry_run) def reset_launch_attributes(self, dry_run=False): return self.connection.reset_image_attribute( self.id, 'launchPermission', dry_run=dry_run ) def get_kernel(self, dry_run=False): img_attrs =self.connection.get_image_attribute( self.id, 'kernel', dry_run=dry_run ) return img_attrs.kernel def get_ramdisk(self, dry_run=False): img_attrs = self.connection.get_image_attribute( self.id, 'ramdisk', dry_run=dry_run ) return img_attrs.ramdisk class ImageAttribute: def __init__(self, parent=None): self.name = None self.kernel = None self.ramdisk = None self.attrs = {} def startElement(self, name, attrs, connection): if name == 'blockDeviceMapping': self.attrs['block_device_mapping'] = BlockDeviceMapping() return self.attrs['block_device_mapping'] else: return None def endElement(self, name, value, connection): if name == 'launchPermission': self.name = 'launch_permission' elif name == 'group': if 'groups' in self.attrs: self.attrs['groups'].append(value) else: self.attrs['groups'] = [value] elif name == 'userId': if 'user_ids' in self.attrs: self.attrs['user_ids'].append(value) else: self.attrs['user_ids'] = [value] elif name == 'productCode': if 'product_codes' in self.attrs: self.attrs['product_codes'].append(value) else: self.attrs['product_codes'] = [value] elif name == 'imageId': self.image_id = value elif name == 'kernel': self.kernel = value elif name == 'ramdisk': self.ramdisk = value else: setattr(self, name, value) class CopyImage(object): def __init__(self, parent=None): self._parent = parent self.image_id = None def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'imageId': self.image_id = value 
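

# A minimal usage sketch of the classes above.  It assumes EC2 credentials
# are configured for boto and that the region name, AMI ID and key name
# below exist in your account -- all three are illustrative placeholders,
# not values shipped with this module.
def _example_launch_from_image():
    import boto.ec2
    conn = boto.ec2.connect_to_region('us-east-1')
    # Look up an Image object by its AMI ID.
    image = conn.get_all_images(image_ids=['ami-12345678'])[0]
    # Image.run returns a Reservation; its .instances list holds the
    # newly launched Instance objects.
    reservation = image.run(key_name='my-key', instance_type='m1.small')
    return reservation.instances[0]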
boto-2.20.1/boto/ec2/instance.py000066400000000000000000000557101225267101000163350ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Instance """ import boto from boto.ec2.ec2object import EC2Object, TaggedEC2Object from boto.resultset import ResultSet from boto.ec2.address import Address from boto.ec2.blockdevicemapping import BlockDeviceMapping from boto.ec2.image import ProductCodes from boto.ec2.networkinterface import NetworkInterface from boto.ec2.group import Group import base64 class InstanceState(object): """ The state of the instance. :ivar code: The low byte represents the state. The high byte is an opaque internal value and should be ignored. Valid values: * 0 (pending) * 16 (running) * 32 (shutting-down) * 48 (terminated) * 64 (stopping) * 80 (stopped) :ivar name: The name of the state of the instance. Valid values: * "pending" * "running" * "shutting-down" * "terminated" * "stopping" * "stopped" """ def __init__(self, code=0, name=None): self.code = code self.name = name def __repr__(self): return '%s(%d)' % (self.name, self.code) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'code': self.code = int(value) elif name == 'name': self.name = value else: setattr(self, name, value) class InstancePlacement(object): """ The location where the instance launched. :ivar zone: The Availability Zone of the instance. :ivar group_name: The name of the placement group the instance is in (for cluster compute instances). :ivar tenancy: The tenancy of the instance (if the instance is running within a VPC). An instance with a tenancy of dedicated runs on single-tenant hardware. """ def __init__(self, zone=None, group_name=None, tenancy=None): self.zone = zone self.group_name = group_name self.tenancy = tenancy def __repr__(self): return self.zone def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'availabilityZone': self.zone = value elif name == 'groupName': self.group_name = value elif name == 'tenancy': self.tenancy = value else: setattr(self, name, value) class Reservation(EC2Object): """ Represents a Reservation response object. :ivar id: The unique ID of the Reservation. :ivar owner_id: The unique ID of the owner of the Reservation. 
    :ivar groups: A list of Group objects representing the security groups
        associated with launched instances.
    :ivar instances: A list of Instance objects launched in this
        Reservation.
    """

    def __init__(self, connection=None):
        EC2Object.__init__(self, connection)
        self.id = None
        self.owner_id = None
        self.groups = []
        self.instances = []

    def __repr__(self):
        return 'Reservation:%s' % self.id

    def startElement(self, name, attrs, connection):
        if name == 'instancesSet':
            self.instances = ResultSet([('item', Instance)])
            return self.instances
        elif name == 'groupSet':
            self.groups = ResultSet([('item', Group)])
            return self.groups
        else:
            return None

    def endElement(self, name, value, connection):
        if name == 'reservationId':
            self.id = value
        elif name == 'ownerId':
            self.owner_id = value
        else:
            setattr(self, name, value)

    def stop_all(self, dry_run=False):
        for instance in self.instances:
            instance.stop(dry_run=dry_run)


class Instance(TaggedEC2Object):
    """
    Represents an instance.

    :ivar id: The unique ID of the Instance.
    :ivar groups: A list of Group objects representing the security groups
        associated with the instance.
    :ivar public_dns_name: The public dns name of the instance.
    :ivar private_dns_name: The private dns name of the instance.
    :ivar state: The string representation of the instance's current state.
    :ivar state_code: An integer representation of the instance's current
        state.
    :ivar previous_state: The string representation of the instance's
        previous state.
    :ivar previous_state_code: An integer representation of the instance's
        previous state.
    :ivar key_name: The name of the SSH key associated with the instance.
    :ivar instance_type: The type of instance (e.g. m1.small).
    :ivar launch_time: The time the instance was launched.
    :ivar image_id: The ID of the AMI used to launch this instance.
    :ivar placement: The availability zone in which the instance is running.
    :ivar placement_group: The name of the placement group the instance is
        in (for cluster compute instances).
    :ivar placement_tenancy: The tenancy of the instance, if the instance
        is running within a VPC. An instance with a tenancy of dedicated
        runs on single-tenant hardware.
    :ivar kernel: The kernel associated with the instance.
    :ivar ramdisk: The ramdisk associated with the instance.
    :ivar architecture: The architecture of the image (i386|x86_64).
    :ivar hypervisor: The hypervisor used.
    :ivar virtualization_type: The type of virtualization used.
    :ivar product_codes: A list of product codes associated with this
        instance.
    :ivar ami_launch_index: This instance's position within its launch
        group.
    :ivar monitored: A boolean indicating whether monitoring is enabled
        or not.
    :ivar monitoring_state: A string value that contains the actual value
        of the monitoring element returned by EC2.
    :ivar spot_instance_request_id: The ID of the spot instance request
        if this is a spot instance.
    :ivar subnet_id: The VPC Subnet ID, if running in VPC.
    :ivar vpc_id: The VPC ID, if running in VPC.
    :ivar private_ip_address: The private IP address of the instance.
    :ivar ip_address: The public IP address of the instance.
    :ivar platform: Platform of the instance (e.g. Windows)
    :ivar root_device_name: The name of the root device.
    :ivar root_device_type: The root device type (ebs|instance-store).
    :ivar block_device_mapping: The Block Device Mapping for the instance.
    :ivar state_reason: The reason for the most recent state transition.
    :ivar groups: List of security Groups associated with the instance.
    :ivar interfaces: List of Elastic Network Interfaces associated with
        this instance.
:ivar ebs_optimized: Whether instance is using optimized EBS volumes or not. :ivar instance_profile: A Python dict containing the instance profile id and arn associated with this instance. """ def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.dns_name = None self.public_dns_name = None self.private_dns_name = None self.key_name = None self.instance_type = None self.launch_time = None self.image_id = None self.kernel = None self.ramdisk = None self.product_codes = ProductCodes() self.ami_launch_index = None self.monitored = False self.monitoring_state = None self.spot_instance_request_id = None self.subnet_id = None self.vpc_id = None self.private_ip_address = None self.ip_address = None self.requester_id = None self._in_monitoring_element = False self.persistent = False self.root_device_name = None self.root_device_type = None self.block_device_mapping = None self.state_reason = None self.group_name = None self.client_token = None self.eventsSet = None self.groups = [] self.platform = None self.interfaces = [] self.hypervisor = None self.virtualization_type = None self.architecture = None self.instance_profile = None self._previous_state = None self._state = InstanceState() self._placement = InstancePlacement() def __repr__(self): return 'Instance:%s' % self.id @property def state(self): return self._state.name @property def state_code(self): return self._state.code @property def previous_state(self): if self._previous_state: return self._previous_state.name return None @property def previous_state_code(self): if self._previous_state: return self._previous_state.code return 0 @property def placement(self): return self._placement.zone @property def placement_group(self): return self._placement.group_name @property def placement_tenancy(self): return self._placement.tenancy def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'monitoring': self._in_monitoring_element = True elif name == 'blockDeviceMapping': self.block_device_mapping = BlockDeviceMapping() return self.block_device_mapping elif name == 'productCodes': return self.product_codes elif name == 'stateReason': self.state_reason = SubParse('stateReason') return self.state_reason elif name == 'groupSet': self.groups = ResultSet([('item', Group)]) return self.groups elif name == "eventsSet": self.eventsSet = SubParse('eventsSet') return self.eventsSet elif name == 'networkInterfaceSet': self.interfaces = ResultSet([('item', NetworkInterface)]) return self.interfaces elif name == 'iamInstanceProfile': self.instance_profile = SubParse('iamInstanceProfile') return self.instance_profile elif name == 'currentState': return self._state elif name == 'previousState': self._previous_state = InstanceState() return self._previous_state elif name == 'instanceState': return self._state elif name == 'placement': return self._placement return None def endElement(self, name, value, connection): if name == 'instanceId': self.id = value elif name == 'imageId': self.image_id = value elif name == 'dnsName' or name == 'publicDnsName': self.dns_name = value # backwards compatibility self.public_dns_name = value elif name == 'privateDnsName': self.private_dns_name = value elif name == 'keyName': self.key_name = value elif name == 'amiLaunchIndex': self.ami_launch_index = value elif name == 'previousState': self.previous_state = value elif name == 'instanceType': self.instance_type = value elif 
name == 'rootDeviceName': self.root_device_name = value elif name == 'rootDeviceType': self.root_device_type = value elif name == 'launchTime': self.launch_time = value elif name == 'platform': self.platform = value elif name == 'kernelId': self.kernel = value elif name == 'ramdiskId': self.ramdisk = value elif name == 'state': if self._in_monitoring_element: self.monitoring_state = value if value == 'enabled': self.monitored = True self._in_monitoring_element = False elif name == 'spotInstanceRequestId': self.spot_instance_request_id = value elif name == 'subnetId': self.subnet_id = value elif name == 'vpcId': self.vpc_id = value elif name == 'privateIpAddress': self.private_ip_address = value elif name == 'ipAddress': self.ip_address = value elif name == 'requesterId': self.requester_id = value elif name == 'persistent': if value == 'true': self.persistent = True else: self.persistent = False elif name == 'groupName': if self._in_monitoring_element: self.group_name = value elif name == 'clientToken': self.client_token = value elif name == "eventsSet": self.events = value elif name == 'hypervisor': self.hypervisor = value elif name == 'virtualizationType': self.virtualization_type = value elif name == 'architecture': self.architecture = value elif name == 'ebsOptimized': self.ebs_optimized = (value == 'true') else: setattr(self, name, value) def _update(self, updated): self.__dict__.update(updated.__dict__) def update(self, validate=False, dry_run=False): """ Update the instance's state information by making a call to fetch the current instance attributes from the service. :type validate: bool :param validate: By default, if EC2 returns no data about the instance the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2. """ rs = self.connection.get_all_reservations([self.id], dry_run=dry_run) if len(rs) > 0: r = rs[0] for i in r.instances: if i.id == self.id: self._update(i) elif validate: raise ValueError('%s is not a valid Instance ID' % self.id) return self.state def terminate(self, dry_run=False): """ Terminate the instance """ rs = self.connection.terminate_instances([self.id], dry_run=dry_run) if len(rs) > 0: self._update(rs[0]) def stop(self, force=False, dry_run=False): """ Stop the instance :type force: bool :param force: Forces the instance to stop :rtype: list :return: A list of the instances stopped """ rs = self.connection.stop_instances([self.id], force, dry_run=dry_run) if len(rs) > 0: self._update(rs[0]) def start(self, dry_run=False): """ Start the instance. """ rs = self.connection.start_instances([self.id], dry_run=dry_run) if len(rs) > 0: self._update(rs[0]) def reboot(self, dry_run=False): return self.connection.reboot_instances([self.id], dry_run=dry_run) def get_console_output(self, dry_run=False): """ Retrieves the console output for the instance. :rtype: :class:`boto.ec2.instance.ConsoleOutput` :return: The console output as a ConsoleOutput object """ return self.connection.get_console_output(self.id, dry_run=dry_run) def confirm_product(self, product_code, dry_run=False): return self.connection.confirm_product_instance( self.id, product_code, dry_run=dry_run ) def use_ip(self, ip_address, dry_run=False): """ Associates an Elastic IP to the instance. :type ip_address: Either an instance of :class:`boto.ec2.address.Address` or a string. :param ip_address: The IP address to associate with the instance. 
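
        A minimal sketch (assumes ``conn`` is the
        :class:`boto.ec2.connection.EC2Connection` that returned this
        instance and that allocating a new Elastic IP is acceptable)::

            address = conn.allocate_address()
            instance.use_ip(address)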
:rtype: bool :return: True if successful """ if isinstance(ip_address, Address): ip_address = ip_address.public_ip return self.connection.associate_address( self.id, ip_address, dry_run=dry_run ) def monitor(self, dry_run=False): return self.connection.monitor_instance(self.id, dry_run=dry_run) def unmonitor(self, dry_run=False): return self.connection.unmonitor_instance(self.id, dry_run=dry_run) def get_attribute(self, attribute, dry_run=False): """ Gets an attribute from this instance. :type attribute: string :param attribute: The attribute you need information about Valid choices are: * instanceType * kernel * ramdisk * userData * disableApiTermination * instanceInitiatedShutdownBehavior * rootDeviceName * blockDeviceMapping * productCodes * sourceDestCheck * groupSet * ebsOptimized :rtype: :class:`boto.ec2.image.InstanceAttribute` :return: An InstanceAttribute object representing the value of the attribute requested """ return self.connection.get_instance_attribute( self.id, attribute, dry_run=dry_run ) def modify_attribute(self, attribute, value, dry_run=False): """ Changes an attribute of this instance :type attribute: string :param attribute: The attribute you wish to change. * instanceType - A valid instance type (m1.small) * kernel - Kernel ID (None) * ramdisk - Ramdisk ID (None) * userData - Base64 encoded String (None) * disableApiTermination - Boolean (true) * instanceInitiatedShutdownBehavior - stop|terminate * sourceDestCheck - Boolean (true) * groupSet - Set of Security Groups or IDs * ebsOptimized - Boolean (false) :type value: string :param value: The new value for the attribute :rtype: bool :return: Whether the operation succeeded or not """ return self.connection.modify_instance_attribute( self.id, attribute, value, dry_run=dry_run ) def reset_attribute(self, attribute, dry_run=False): """ Resets an attribute of this instance to its default value. :type attribute: string :param attribute: The attribute to reset. Valid values are: kernel|ramdisk :rtype: bool :return: Whether the operation succeeded or not """ return self.connection.reset_instance_attribute( self.id, attribute, dry_run=dry_run ) def create_image(self, name, description=None, no_reboot=False, dry_run=False): """ Will create an AMI from the instance in the running or stopped state. :type name: string :param name: The name of the new image :type description: string :param description: An optional human-readable string describing the contents and purpose of the AMI. :type no_reboot: bool :param no_reboot: An optional flag indicating that the bundling process should not attempt to shutdown the instance before bundling. If this flag is True, the responsibility of maintaining file system integrity is left to the owner of the instance. 
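
        A minimal sketch (assumes ``instance`` is a running
        :class:`Instance`; the AMI name is an illustrative placeholder)::

            ami_id = instance.create_image('my-instance-backup',
                                           no_reboot=True)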
:rtype: string :return: The new image id """ return self.connection.create_image( self.id, name, description, no_reboot, dry_run=dry_run ) class ConsoleOutput: def __init__(self, parent=None): self.parent = parent self.instance_id = None self.timestamp = None self.output = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'instanceId': self.instance_id = value elif name == 'timestamp': self.timestamp = value elif name == 'output': self.output = base64.b64decode(value) else: setattr(self, name, value) class InstanceAttribute(dict): ValidValues = ['instanceType', 'kernel', 'ramdisk', 'userData', 'disableApiTermination', 'instanceInitiatedShutdownBehavior', 'rootDeviceName', 'blockDeviceMapping', 'sourceDestCheck', 'groupSet'] def __init__(self, parent=None): dict.__init__(self) self.instance_id = None self.request_id = None self._current_value = None def startElement(self, name, attrs, connection): if name == 'blockDeviceMapping': self[name] = BlockDeviceMapping() return self[name] elif name == 'groupSet': self[name] = ResultSet([('item', Group)]) return self[name] else: return None def endElement(self, name, value, connection): if name == 'instanceId': self.instance_id = value elif name == 'requestId': self.request_id = value elif name == 'value': if value == 'true': value = True elif value == 'false': value = False self._current_value = value elif name in self.ValidValues: self[name] = self._current_value class SubParse(dict): def __init__(self, section, parent=None): dict.__init__(self) self.section = section def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name != self.section: self[name] = value boto-2.20.1/boto/ec2/instanceinfo.py000066400000000000000000000035701225267101000172060ustar00rootroot00000000000000# Copyright (c) 2006-2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class InstanceInfo(object): """ Represents an EC2 Instance status response from CloudWatch """ def __init__(self, connection=None, id=None, state=None): """ :ivar str id: The instance's EC2 ID. :ivar str state: Specifies the current status of the instance. 
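
        A minimal sketch (assumes ``lb`` is a
        :class:`boto.ec2.elb.loadbalancer.LoadBalancer`, whose
        ``instances`` attribute is a list of :class:`InstanceInfo`
        objects)::

            registered_ids = [info.id for info in lb.instances]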
""" self.connection = connection self.id = id self.state = state def __repr__(self): return 'InstanceInfo:%s' % self.id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'instanceId' or name == 'InstanceId': self.id = value elif name == 'state': self.state = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/instancestatus.py000066400000000000000000000153061225267101000175760ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Details(dict): """ A dict object that contains name/value pairs which provide more detailed information about the status of the system or the instance. """ def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'name': self._name = value elif name == 'status': self[self._name] = value else: setattr(self, name, value) class Event(object): """ A status event for an instance. :ivar code: A string indicating the event type. :ivar description: A string describing the reason for the event. :ivar not_before: A datestring describing the earliest time for the event. :ivar not_after: A datestring describing the latest time for the event. """ def __init__(self, code=None, description=None, not_before=None, not_after=None): self.code = code self.description = description self.not_before = not_before self.not_after = not_after def __repr__(self): return 'Event:%s' % self.code def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'code': self.code = value elif name == 'description': self.description = value elif name == 'notBefore': self.not_before = value elif name == 'notAfter': self.not_after = value else: setattr(self, name, value) class Status(object): """ A generic Status object used for system status and instance status. :ivar status: A string indicating overall status. :ivar details: A dict containing name-value pairs which provide more details about the current status. 
""" def __init__(self, status=None, details=None): self.status = status if not details: details = Details() self.details = details def __repr__(self): return 'Status:%s' % self.status def startElement(self, name, attrs, connection): if name == 'details': return self.details return None def endElement(self, name, value, connection): if name == 'status': self.status = value else: setattr(self, name, value) class EventSet(list): def startElement(self, name, attrs, connection): if name == 'item': event = Event() self.append(event) return event else: return None def endElement(self, name, value, connection): setattr(self, name, value) class InstanceStatus(object): """ Represents an EC2 Instance status as reported by DescribeInstanceStatus request. :ivar id: The instance identifier. :ivar zone: The availability zone of the instance. :ivar events: A list of events relevant to the instance. :ivar state_code: An integer representing the current state of the instance. :ivar state_name: A string describing the current state of the instance. :ivar system_status: A Status object that reports impaired functionality that stems from issues related to the systems that support an instance, such as such as hardware failures and network connectivity problems. :ivar instance_status: A Status object that reports impaired functionality that arises from problems internal to the instance. """ def __init__(self, id=None, zone=None, events=None, state_code=None, state_name=None): self.id = id self.zone = zone self.events = events self.state_code = state_code self.state_name = state_name self.system_status = Status() self.instance_status = Status() def __repr__(self): return 'InstanceStatus:%s' % self.id def startElement(self, name, attrs, connection): if name == 'eventsSet': self.events = EventSet() return self.events elif name == 'systemStatus': return self.system_status elif name == 'instanceStatus': return self.instance_status else: return None def endElement(self, name, value, connection): if name == 'instanceId': self.id = value elif name == 'availabilityZone': self.zone = value elif name == 'code': self.state_code = int(value) elif name == 'name': self.state_name = value else: setattr(self, name, value) class InstanceStatusSet(list): """ A list object that contains the results of a call to DescribeInstanceStatus request. Each element of the list will be an InstanceStatus object. :ivar next_token: If the response was truncated by the EC2 service, the next_token attribute of the object will contain the string that needs to be passed in to the next request to retrieve the next set of results. 
""" def __init__(self, connection=None): list.__init__(self) self.connection = connection self.next_token = None def startElement(self, name, attrs, connection): if name == 'item': status = InstanceStatus() self.append(status) return status else: return None def endElement(self, name, value, connection): if name == 'nextToken': self.next_token = value setattr(self, name, value) boto-2.20.1/boto/ec2/keypair.py000066400000000000000000000103731225267101000161710ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Keypair """ import os from boto.ec2.ec2object import EC2Object from boto.exception import BotoClientError class KeyPair(EC2Object): def __init__(self, connection=None): EC2Object.__init__(self, connection) self.name = None self.fingerprint = None self.material = None def __repr__(self): return 'KeyPair:%s' % self.name def endElement(self, name, value, connection): if name == 'keyName': self.name = value elif name == 'keyFingerprint': self.fingerprint = value elif name == 'keyMaterial': self.material = value else: setattr(self, name, value) def delete(self, dry_run=False): """ Delete the KeyPair. :rtype: bool :return: True if successful, otherwise False. """ return self.connection.delete_key_pair(self.name, dry_run=dry_run) def save(self, directory_path): """ Save the material (the unencrypted PEM encoded RSA private key) of a newly created KeyPair to a local file. :type directory_path: string :param directory_path: The fully qualified path to the directory in which the keypair will be saved. The keypair file will be named using the name of the keypair as the base name and .pem for the file extension. If a file of that name already exists in the directory, an exception will be raised and the old file will not be overwritten. :rtype: bool :return: True if successful. """ if self.material: directory_path = os.path.expanduser(directory_path) file_path = os.path.join(directory_path, '%s.pem' % self.name) if os.path.exists(file_path): raise BotoClientError('%s already exists, it will not be overwritten' % file_path) fp = open(file_path, 'wb') fp.write(self.material) fp.close() os.chmod(file_path, 0600) return True else: raise BotoClientError('KeyPair contains no material') def copy_to_region(self, region, dry_run=False): """ Create a new key pair of the same new in another region. Note that the new key pair will use a different ssh cert than the this key pair. 
After doing the copy, you will need to save the material associated with the new key pair (use the save method) to a local file. :type region: :class:`boto.ec2.regioninfo.RegionInfo` :param region: The region to which this security group will be copied. :rtype: :class:`boto.ec2.keypair.KeyPair` :return: The new key pair """ if region.name == self.region: raise BotoClientError('Unable to copy to the same Region') conn_params = self.connection.get_params() rconn = region.connect(**conn_params) kp = rconn.create_key_pair(self.name, dry_run=dry_run) return kp boto-2.20.1/boto/ec2/launchspecification.py000066400000000000000000000073441225267101000205440ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a launch specification for Spot instances. 
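
A minimal sketch (assumes ``conn`` is an
:class:`boto.ec2.connection.EC2Connection`; the price and AMI ID are
illustrative placeholders)::

    requests = conn.request_spot_instances(price='0.05',
                                           image_id='ami-12345678',
                                           instance_type='m1.small')
    spec = requests[0].launch_specification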
""" from boto.ec2.ec2object import EC2Object from boto.resultset import ResultSet from boto.ec2.blockdevicemapping import BlockDeviceMapping from boto.ec2.group import Group from boto.ec2.instance import SubParse class GroupList(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'groupId': self.append(value) class LaunchSpecification(EC2Object): def __init__(self, connection=None): EC2Object.__init__(self, connection) self.key_name = None self.instance_type = None self.image_id = None self.groups = [] self.placement = None self.kernel = None self.ramdisk = None self.monitored = False self.subnet_id = None self._in_monitoring_element = False self.block_device_mapping = None self.instance_profile = None self.ebs_optimized = False def __repr__(self): return 'LaunchSpecification(%s)' % self.image_id def startElement(self, name, attrs, connection): if name == 'groupSet': self.groups = ResultSet([('item', Group)]) return self.groups elif name == 'monitoring': self._in_monitoring_element = True elif name == 'blockDeviceMapping': self.block_device_mapping = BlockDeviceMapping() return self.block_device_mapping elif name == 'iamInstanceProfile': self.instance_profile = SubParse('iamInstanceProfile') return self.instance_profile else: return None def endElement(self, name, value, connection): if name == 'imageId': self.image_id = value elif name == 'keyName': self.key_name = value elif name == 'instanceType': self.instance_type = value elif name == 'availabilityZone': self.placement = value elif name == 'placement': pass elif name == 'kernelId': self.kernel = value elif name == 'ramdiskId': self.ramdisk = value elif name == 'subnetId': self.subnet_id = value elif name == 'state': if self._in_monitoring_element: if value == 'enabled': self.monitored = True self._in_monitoring_element = False elif name == 'ebsOptimized': self.ebs_optimized = (value == 'true') else: setattr(self, name, value) boto-2.20.1/boto/ec2/networkinterface.py000066400000000000000000000265021225267101000201000ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Elastic Network Interface """ from boto.exception import BotoClientError from boto.ec2.ec2object import TaggedEC2Object from boto.resultset import ResultSet from boto.ec2.group import Group class Attachment(object): """ :ivar id: The ID of the attachment. 
:ivar instance_id: The ID of the instance. :ivar device_index: The index of this device. :ivar status: The status of the device. :ivar attach_time: The time the device was attached. :ivar delete_on_termination: Whether the device will be deleted when the instance is terminated. """ def __init__(self): self.id = None self.instance_id = None self.instance_owner_id = None self.device_index = 0 self.status = None self.attach_time = None self.delete_on_termination = False def __repr__(self): return 'Attachment:%s' % self.id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'attachmentId': self.id = value elif name == 'instanceId': self.instance_id = value elif name == 'deviceIndex': self.device_index = int(value) elif name == 'instanceOwnerId': self.instance_owner_id = value elif name == 'status': self.status = value elif name == 'attachTime': self.attach_time = value elif name == 'deleteOnTermination': if value.lower() == 'true': self.delete_on_termination = True else: self.delete_on_termination = False else: setattr(self, name, value) class NetworkInterface(TaggedEC2Object): """ An Elastic Network Interface. :ivar id: The ID of the ENI. :ivar subnet_id: The ID of the VPC subnet. :ivar vpc_id: The ID of the VPC. :ivar description: The description. :ivar owner_id: The ID of the owner of the ENI. :ivar requester_managed: :ivar status: The interface's status (available|in-use). :ivar mac_address: The MAC address of the interface. :ivar private_ip_address: The IP address of the interface within the subnet. :ivar source_dest_check: Flag to indicate whether to validate network traffic to or from this network interface. :ivar groups: List of security groups associated with the interface. :ivar attachment: The attachment object. :ivar private_ip_addresses: A list of PrivateIPAddress objects. 
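
    A minimal sketch (assumes ``conn`` is an
    :class:`boto.ec2.connection.EC2Connection`)::

        available = [eni for eni in conn.get_all_network_interfaces()
                     if eni.status == 'available']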
""" def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.subnet_id = None self.vpc_id = None self.availability_zone = None self.description = None self.owner_id = None self.requester_managed = False self.status = None self.mac_address = None self.private_ip_address = None self.source_dest_check = None self.groups = [] self.attachment = None self.private_ip_addresses = [] def __repr__(self): return 'NetworkInterface:%s' % self.id def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'groupSet': self.groups = ResultSet([('item', Group)]) return self.groups elif name == 'attachment': self.attachment = Attachment() return self.attachment elif name == 'privateIpAddressesSet': self.private_ip_addresses = ResultSet([('item', PrivateIPAddress)]) return self.private_ip_addresses else: return None def endElement(self, name, value, connection): if name == 'networkInterfaceId': self.id = value elif name == 'subnetId': self.subnet_id = value elif name == 'vpcId': self.vpc_id = value elif name == 'availabilityZone': self.availability_zone = value elif name == 'description': self.description = value elif name == 'ownerId': self.owner_id = value elif name == 'requesterManaged': if value.lower() == 'true': self.requester_managed = True else: self.requester_managed = False elif name == 'status': self.status = value elif name == 'macAddress': self.mac_address = value elif name == 'privateIpAddress': self.private_ip_address = value elif name == 'sourceDestCheck': if value.lower() == 'true': self.source_dest_check = True else: self.source_dest_check = False else: setattr(self, name, value) def delete(self, dry_run=False): return self.connection.delete_network_interface( self.id, dry_run=dry_run ) class PrivateIPAddress(object): def __init__(self, connection=None, private_ip_address=None, primary=None): self.connection = connection self.private_ip_address = private_ip_address self.primary = primary def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'privateIpAddress': self.private_ip_address = value elif name == 'primary': self.primary = True if value.lower() == 'true' else False def __repr__(self): return "PrivateIPAddress(%s, primary=%s)" % (self.private_ip_address, self.primary) class NetworkInterfaceCollection(list): def __init__(self, *interfaces): self.extend(interfaces) def build_list_params(self, params, prefix=''): for i, spec in enumerate(self): full_prefix = '%sNetworkInterface.%s.' 
% (prefix, i) if spec.network_interface_id is not None: params[full_prefix + 'NetworkInterfaceId'] = \ str(spec.network_interface_id) if spec.device_index is not None: params[full_prefix + 'DeviceIndex'] = \ str(spec.device_index) else: params[full_prefix + 'DeviceIndex'] = 0 if spec.subnet_id is not None: params[full_prefix + 'SubnetId'] = str(spec.subnet_id) if spec.description is not None: params[full_prefix + 'Description'] = str(spec.description) if spec.delete_on_termination is not None: params[full_prefix + 'DeleteOnTermination'] = \ 'true' if spec.delete_on_termination else 'false' if spec.secondary_private_ip_address_count is not None: params[full_prefix + 'SecondaryPrivateIpAddressCount'] = \ str(spec.secondary_private_ip_address_count) if spec.private_ip_address is not None: params[full_prefix + 'PrivateIpAddress'] = \ str(spec.private_ip_address) if spec.groups is not None: for j, group_id in enumerate(spec.groups): query_param_key = '%sSecurityGroupId.%s' % (full_prefix, j) params[query_param_key] = str(group_id) if spec.private_ip_addresses is not None: for k, ip_addr in enumerate(spec.private_ip_addresses): query_param_key_prefix = ( '%sPrivateIpAddresses.%s' % (full_prefix, k)) params[query_param_key_prefix + '.PrivateIpAddress'] = \ str(ip_addr.private_ip_address) if ip_addr.primary is not None: params[query_param_key_prefix + '.Primary'] = \ 'true' if ip_addr.primary else 'false' # Associating Public IPs have special logic around them: # # * Only assignable on an device_index of ``0`` # * Only on one interface # * Only if there are no other interfaces being created # * Only if it's a new interface (which we can't really guard # against) # # More details on http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-RunInstances.html if spec.associate_public_ip_address is not None: if not params[full_prefix + 'DeviceIndex'] in (0, '0'): raise BotoClientError( "Only the interface with device index of 0 can " + \ "be provided when using " + \ "'associate_public_ip_address'." ) if len(self) > 1: raise BotoClientError( "Only one interface can be provided when using " + \ "'associate_public_ip_address'." 
) key = full_prefix + 'AssociatePublicIpAddress' if spec.associate_public_ip_address: params[key] = 'true' else: params[key] = 'false' class NetworkInterfaceSpecification(object): def __init__(self, network_interface_id=None, device_index=None, subnet_id=None, description=None, private_ip_address=None, groups=None, delete_on_termination=None, private_ip_addresses=None, secondary_private_ip_address_count=None, associate_public_ip_address=None): self.network_interface_id = network_interface_id self.device_index = device_index self.subnet_id = subnet_id self.description = description self.private_ip_address = private_ip_address self.groups = groups self.delete_on_termination = delete_on_termination self.private_ip_addresses = private_ip_addresses self.secondary_private_ip_address_count = \ secondary_private_ip_address_count self.associate_public_ip_address = associate_public_ip_address boto-2.20.1/boto/ec2/placementgroup.py000066400000000000000000000037071225267101000175550ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Placement Group """ from boto.ec2.ec2object import EC2Object from boto.exception import BotoClientError class PlacementGroup(EC2Object): def __init__(self, connection=None, name=None, strategy=None, state=None): EC2Object.__init__(self, connection) self.name = name self.strategy = strategy self.state = state def __repr__(self): return 'PlacementGroup:%s' % self.name def endElement(self, name, value, connection): if name == 'groupName': self.name = value elif name == 'strategy': self.strategy = value elif name == 'state': self.state = value else: setattr(self, name, value) def delete(self, dry_run=False): return self.connection.delete_placement_group( self.name, dry_run=dry_run ) boto-2.20.1/boto/ec2/regioninfo.py000066400000000000000000000027641225267101000166710ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.regioninfo import RegionInfo class EC2RegionInfo(RegionInfo): """ Represents an EC2 Region """ def __init__(self, connection=None, name=None, endpoint=None): from boto.ec2.connection import EC2Connection RegionInfo.__init__(self, connection, name, endpoint, EC2Connection) boto-2.20.1/boto/ec2/reservedinstance.py000066400000000000000000000311261225267101000200700ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
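# Example usage (a sketch; assumes an existing EC2Connection 'conn'): # for offering in conn.get_all_reserved_instances_offerings(): # offering.describe() # prints the ID, type, zone and pricing of each offering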
from boto.resultset import ResultSet from boto.ec2.ec2object import EC2Object from boto.utils import parse_ts class ReservedInstancesOffering(EC2Object): def __init__(self, connection=None, id=None, instance_type=None, availability_zone=None, duration=None, fixed_price=None, usage_price=None, description=None, instance_tenancy=None, currency_code=None, offering_type=None, recurring_charges=None, pricing_details=None): EC2Object.__init__(self, connection) self.id = id self.instance_type = instance_type self.availability_zone = availability_zone self.duration = duration self.fixed_price = fixed_price self.usage_price = usage_price self.description = description self.instance_tenancy = instance_tenancy self.currency_code = currency_code self.offering_type = offering_type self.recurring_charges = recurring_charges self.pricing_details = pricing_details def __repr__(self): return 'ReservedInstanceOffering:%s' % self.id def startElement(self, name, attrs, connection): if name == 'recurringCharges': self.recurring_charges = ResultSet([('item', RecurringCharge)]) return self.recurring_charges elif name == 'pricingDetailsSet': self.pricing_details = ResultSet([('item', PricingDetail)]) return self.pricing_details return None def endElement(self, name, value, connection): if name == 'reservedInstancesOfferingId': self.id = value elif name == 'instanceType': self.instance_type = value elif name == 'availabilityZone': self.availability_zone = value elif name == 'duration': self.duration = int(value) elif name == 'fixedPrice': self.fixed_price = value elif name == 'usagePrice': self.usage_price = value elif name == 'productDescription': self.description = value elif name == 'instanceTenancy': self.instance_tenancy = value elif name == 'currencyCode': self.currency_code = value elif name == 'offeringType': self.offering_type = value elif name == 'marketplace': self.marketplace = True if value == 'true' else False def describe(self): print 'ID=%s' % self.id print '\tInstance Type=%s' % self.instance_type print '\tZone=%s' % self.availability_zone print '\tDuration=%s' % self.duration print '\tFixed Price=%s' % self.fixed_price print '\tUsage Price=%s' % self.usage_price print '\tDescription=%s' % self.description def purchase(self, instance_count=1, dry_run=False): return self.connection.purchase_reserved_instance_offering( self.id, instance_count, dry_run=dry_run ) class RecurringCharge(object): def __init__(self, connection=None, frequency=None, amount=None): self.frequency = frequency self.amount = amount def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): setattr(self, name, value) class PricingDetail(object): def __init__(self, connection=None, price=None, count=None): self.price = price self.count = count def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): setattr(self, name, value) class ReservedInstance(ReservedInstancesOffering): def __init__(self, connection=None, id=None, instance_type=None, availability_zone=None, duration=None, fixed_price=None, usage_price=None, description=None, instance_count=None, state=None): ReservedInstancesOffering.__init__(self, connection, id, instance_type, availability_zone, duration, fixed_price, usage_price, description) self.instance_count = instance_count self.state = state self.start = None def __repr__(self): return 'ReservedInstance:%s' % self.id def endElement(self, name, value, connection): if name == 'reservedInstancesId': self.id = value if 
name == 'instanceCount': self.instance_count = int(value) elif name == 'state': self.state = value elif name == 'start': self.start = value else: ReservedInstancesOffering.endElement(self, name, value, connection) class ReservedInstanceListing(EC2Object): def __init__(self, connection=None, listing_id=None, id=None, create_date=None, update_date=None, status=None, status_message=None, client_token=None): self.connection = connection self.listing_id = listing_id self.id = id self.create_date = create_date self.update_date = update_date self.status = status self.status_message = status_message self.client_token = client_token def startElement(self, name, attrs, connection): if name == 'instanceCounts': self.instance_counts = ResultSet([('item', InstanceCount)]) return self.instance_counts elif name == 'priceSchedules': self.price_schedules = ResultSet([('item', PriceSchedule)]) return self.price_schedules return None def endElement(self, name, value, connection): if name == 'reservedInstancesListingId': self.listing_id = value elif name == 'reservedInstancesId': self.id = value elif name == 'createDate': self.create_date = value elif name == 'updateDate': self.update_date = value elif name == 'status': self.status = value elif name == 'statusMessage': self.status_message = value else: setattr(self, name, value) class InstanceCount(object): def __init__(self, connection=None, state=None, instance_count=None): self.state = state self.instance_count = instance_count def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'state': self.state = value elif name == 'instanceCount': self.instance_count = int(value) else: setattr(self, name, value) class PriceSchedule(object): def __init__(self, connection=None, term=None, price=None, currency_code=None, active=None): self.connection = connection self.term = term self.price = price self.currency_code = currency_code self.active = active def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'term': self.term = int(value) elif name == 'price': self.price = value elif name == 'currencyCode': self.currency_code = value elif name == 'active': self.active = True if value == 'true' else False else: setattr(self, name, value) class ReservedInstancesConfiguration(object): def __init__(self, connection=None, availability_zone=None, platform=None, instance_count=None, instance_type=None): self.connection = connection self.availability_zone = availability_zone self.platform = platform self.instance_count = instance_count self.instance_type = instance_type def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'availabilityZone': self.availability_zone = value elif name == 'platform': self.platform = value elif name == 'instanceCount': self.instance_count = int(value) elif name == 'instanceType': self.instance_type = value else: setattr(self, name, value) class ModifyReservedInstancesResult(object): def __init__(self, connection=None, modification_id=None): self.connection = connection self.modification_id = modification_id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'reservedInstancesModificationId': self.modification_id = value else: setattr(self, name, value) class ModificationResult(object): def __init__(self, connection=None, modification_id=None, availability_zone=None, platform=None, 
instance_count=None, instance_type=None): self.connection = connection self.modification_id = modification_id self.availability_zone = availability_zone self.platform = platform self.instance_count = instance_count self.instance_type = instance_type def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'reservedInstancesModificationId': self.modification_id = value elif name == 'availabilityZone': self.availability_zone = value elif name == 'platform': self.platform = value elif name == 'instanceCount': self.instance_count = int(value) elif name == 'instanceType': self.instance_type = value else: setattr(self, name, value) class ReservedInstancesModification(object): def __init__(self, connection=None, modification_id=None, reserved_instances=None, modification_results=None, create_date=None, update_date=None, effective_date=None, status=None, status_message=None, client_token=None): self.connection = connection self.modification_id = modification_id self.reserved_instances = reserved_instances self.modification_results = modification_results self.create_date = create_date self.update_date = update_date self.effective_date = effective_date self.status = status self.status_message = status_message self.client_token = client_token def startElement(self, name, attrs, connection): if name == 'reservedInstancesSet': self.reserved_instances = ResultSet([ ('item', ReservedInstance) ]) return self.reserved_instances elif name == 'modificationResultSet': self.modification_results = ResultSet([ ('item', ModificationResult) ]) return self.modification_results return None def endElement(self, name, value, connection): if name == 'reservedInstancesModificationId': self.modification_id = value elif name == 'createDate': self.create_date = parse_ts(value) elif name == 'updateDate': self.update_date = parse_ts(value) elif name == 'effectiveDate': self.effective_date = parse_ts(value) elif name == 'status': self.status = value elif name == 'statusMessage': self.status_message = value elif name == 'clientToken': self.client_token = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/securitygroup.py000066400000000000000000000345251225267101000174560ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
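# Example usage (a sketch; name and description are placeholders; assumes an existing EC2Connection 'conn'): # sg = conn.create_security_group('web', 'Web servers') # see SecurityGroup.authorize below for opening ports on the new group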
""" Represents an EC2 Security Group """ from boto.ec2.ec2object import TaggedEC2Object from boto.exception import BotoClientError class SecurityGroup(TaggedEC2Object): def __init__(self, connection=None, owner_id=None, name=None, description=None, id=None): TaggedEC2Object.__init__(self, connection) self.id = id self.owner_id = owner_id self.name = name self.description = description self.vpc_id = None self.rules = IPPermissionsList() self.rules_egress = IPPermissionsList() def __repr__(self): return 'SecurityGroup:%s' % self.name def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'ipPermissions': return self.rules elif name == 'ipPermissionsEgress': return self.rules_egress else: return None def endElement(self, name, value, connection): if name == 'ownerId': self.owner_id = value elif name == 'groupId': self.id = value elif name == 'groupName': self.name = value elif name == 'vpcId': self.vpc_id = value elif name == 'groupDescription': self.description = value elif name == 'ipRanges': pass elif name == 'return': if value == 'false': self.status = False elif value == 'true': self.status = True else: raise Exception( 'Unexpected value of status %s for group %s' % ( value, self.name ) ) else: setattr(self, name, value) def delete(self, dry_run=False): if self.vpc_id: return self.connection.delete_security_group( group_id=self.id, dry_run=dry_run ) else: return self.connection.delete_security_group( self.name, dry_run=dry_run ) def add_rule(self, ip_protocol, from_port, to_port, src_group_name, src_group_owner_id, cidr_ip, src_group_group_id, dry_run=False): """ Add a rule to the SecurityGroup object. Note that this method only changes the local version of the object. No information is sent to EC2. """ rule = IPPermissions(self) rule.ip_protocol = ip_protocol rule.from_port = from_port rule.to_port = to_port self.rules.append(rule) rule.add_grant( src_group_name, src_group_owner_id, cidr_ip, src_group_group_id, dry_run=dry_run ) def remove_rule(self, ip_protocol, from_port, to_port, src_group_name, src_group_owner_id, cidr_ip, src_group_group_id, dry_run=False): """ Remove a rule to the SecurityGroup object. Note that this method only changes the local version of the object. No information is sent to EC2. """ if not self.rules: raise ValueError("The security group has no rules") target_rule = None for rule in self.rules: if rule.ip_protocol == ip_protocol: if rule.from_port == from_port: if rule.to_port == to_port: target_rule = rule target_grant = None for grant in rule.grants: if grant.name == src_group_name or grant.group_id == src_group_group_id: if grant.owner_id == src_group_owner_id: if grant.cidr_ip == cidr_ip: target_grant = grant if target_grant: rule.grants.remove(target_grant) if len(rule.grants) == 0: self.rules.remove(target_rule) def authorize(self, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, src_group=None, dry_run=False): """ Add a new rule to this security group. You need to pass in either src_group_name OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are authorizing another group or you are authorizing some ip-based rule. 
:type ip_protocol: string :param ip_protocol: Either tcp | udp | icmp :type from_port: int :param from_port: The beginning port number you are enabling :type to_port: int :param to_port: The ending port number you are enabling :type cidr_ip: string or list of strings :param cidr_ip: The CIDR block you are providing access to. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing :type src_group: :class:`boto.ec2.securitygroup.SecurityGroup` or :class:`boto.ec2.securitygroup.GroupOrCIDR` :param src_group: The Security Group you are granting access to. :rtype: bool :return: True if successful. """ group_name = None if not self.vpc_id: group_name = self.name group_id = None if self.vpc_id: group_id = self.id src_group_name = None src_group_owner_id = None src_group_group_id = None if src_group: cidr_ip = None src_group_owner_id = src_group.owner_id if not self.vpc_id: src_group_name = src_group.name else: if hasattr(src_group, 'group_id'): src_group_group_id = src_group.group_id else: src_group_group_id = src_group.id status = self.connection.authorize_security_group(group_name, src_group_name, src_group_owner_id, ip_protocol, from_port, to_port, cidr_ip, group_id, src_group_group_id, dry_run=dry_run) if status: if not isinstance(cidr_ip, list): cidr_ip = [cidr_ip] for single_cidr_ip in cidr_ip: self.add_rule(ip_protocol, from_port, to_port, src_group_name, src_group_owner_id, single_cidr_ip, src_group_group_id, dry_run=dry_run) return status def revoke(self, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, src_group=None, dry_run=False): group_name = None if not self.vpc_id: group_name = self.name group_id = None if self.vpc_id: group_id = self.id src_group_name = None src_group_owner_id = None src_group_group_id = None if src_group: cidr_ip = None src_group_owner_id = src_group.owner_id if not self.vpc_id: src_group_name = src_group.name else: if hasattr(src_group, 'group_id'): src_group_group_id = src_group.group_id else: src_group_group_id = src_group.id status = self.connection.revoke_security_group(group_name, src_group_name, src_group_owner_id, ip_protocol, from_port, to_port, cidr_ip, group_id, src_group_group_id, dry_run=dry_run) if status: self.remove_rule(ip_protocol, from_port, to_port, src_group_name, src_group_owner_id, cidr_ip, src_group_group_id, dry_run=dry_run) return status def copy_to_region(self, region, name=None, dry_run=False): """ Create a copy of this security group in another region. Note that the new security group will be a separate entity and will not stay in sync automatically after the copy operation. :type region: :class:`boto.ec2.regioninfo.RegionInfo` :param region: The region to which this security group will be copied. :type name: string :param name: The name of the copy. If not supplied, the copy will have the same name as this security group. :rtype: :class:`boto.ec2.securitygroup.SecurityGroup` :return: The new security group. 
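Example (a sketch; the region name is a placeholder): import boto.ec2; west = boto.ec2.get_region('us-west-2'); sg_copy = sg.copy_to_region(west)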
""" if region.name == self.region: raise BotoClientError('Unable to copy to the same Region') conn_params = self.connection.get_params() rconn = region.connect(**conn_params) sg = rconn.create_security_group( name or self.name, self.description, dry_run=dry_run ) source_groups = [] for rule in self.rules: for grant in rule.grants: grant_nom = grant.name or grant.group_id if grant_nom: if grant_nom not in source_groups: source_groups.append(grant_nom) sg.authorize(None, None, None, None, grant, dry_run=dry_run) else: sg.authorize(rule.ip_protocol, rule.from_port, rule.to_port, grant.cidr_ip, dry_run=dry_run) return sg def instances(self, dry_run=False): """ Find all of the current instances that are running within this security group. :rtype: list of :class:`boto.ec2.instance.Instance` :return: A list of Instance objects """ rs = [] if self.vpc_id: rs.extend(self.connection.get_all_reservations( filters={'instance.group-id': self.id}, dry_run=dry_run )) else: rs.extend(self.connection.get_all_reservations( filters={'group-id': self.id}, dry_run=dry_run )) instances = [i for r in rs for i in r.instances] return instances class IPPermissionsList(list): def startElement(self, name, attrs, connection): if name == 'item': self.append(IPPermissions(self)) return self[-1] return None def endElement(self, name, value, connection): pass class IPPermissions(object): def __init__(self, parent=None): self.parent = parent self.ip_protocol = None self.from_port = None self.to_port = None self.grants = [] def __repr__(self): return 'IPPermissions:%s(%s-%s)' % (self.ip_protocol, self.from_port, self.to_port) def startElement(self, name, attrs, connection): if name == 'item': self.grants.append(GroupOrCIDR(self)) return self.grants[-1] return None def endElement(self, name, value, connection): if name == 'ipProtocol': self.ip_protocol = value elif name == 'fromPort': self.from_port = value elif name == 'toPort': self.to_port = value else: setattr(self, name, value) def add_grant(self, name=None, owner_id=None, cidr_ip=None, group_id=None, dry_run=False): grant = GroupOrCIDR(self) grant.owner_id = owner_id grant.group_id = group_id grant.name = name grant.cidr_ip = cidr_ip self.grants.append(grant) return grant class GroupOrCIDR(object): def __init__(self, parent=None): self.owner_id = None self.group_id = None self.name = None self.cidr_ip = None def __repr__(self): if self.cidr_ip: return '%s' % self.cidr_ip else: return '%s-%s' % (self.name or self.group_id, self.owner_id) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'userId': self.owner_id = value elif name == 'groupId': self.group_id = value elif name == 'groupName': self.name = value if name == 'cidrIp': self.cidr_ip = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/snapshot.py000066400000000000000000000150551225267101000163660ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Elastic Block Store Snapshot """ from boto.ec2.ec2object import TaggedEC2Object from boto.ec2.zone import Zone class Snapshot(TaggedEC2Object): AttrName = 'createVolumePermission' def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.volume_id = None self.status = None self.progress = None self.start_time = None self.owner_id = None self.owner_alias = None self.volume_size = None self.description = None def __repr__(self): return 'Snapshot:%s' % self.id def endElement(self, name, value, connection): if name == 'snapshotId': self.id = value elif name == 'volumeId': self.volume_id = value elif name == 'status': self.status = value elif name == 'startTime': self.start_time = value elif name == 'ownerId': self.owner_id = value elif name == 'ownerAlias': self.owner_alias = value elif name == 'volumeSize': try: self.volume_size = int(value) except: self.volume_size = value elif name == 'description': self.description = value else: setattr(self, name, value) def _update(self, updated): self.progress = updated.progress self.status = updated.status def update(self, validate=False, dry_run=False): """ Update the data associated with this snapshot by querying EC2. :type validate: bool :param validate: By default, if EC2 returns no data about the snapshot the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2. 
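Example (a sketch; assumes ``snap`` was returned by conn.create_snapshot and that the time module has been imported): while snap.status != 'completed': time.sleep(5); snap.update()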
""" rs = self.connection.get_all_snapshots([self.id], dry_run=dry_run) if len(rs) > 0: self._update(rs[0]) elif validate: raise ValueError('%s is not a valid Snapshot ID' % self.id) return self.progress def delete(self, dry_run=False): return self.connection.delete_snapshot(self.id, dry_run=dry_run) def get_permissions(self, dry_run=False): attrs = self.connection.get_snapshot_attribute( self.id, self.AttrName, dry_run=dry_run ) return attrs.attrs def share(self, user_ids=None, groups=None, dry_run=False): return self.connection.modify_snapshot_attribute(self.id, self.AttrName, 'add', user_ids, groups, dry_run=dry_run) def unshare(self, user_ids=None, groups=None, dry_run=False): return self.connection.modify_snapshot_attribute(self.id, self.AttrName, 'remove', user_ids, groups, dry_run=dry_run) def reset_permissions(self, dry_run=False): return self.connection.reset_snapshot_attribute( self.id, self.AttrName, dry_run=dry_run ) def create_volume(self, zone, size=None, volume_type=None, iops=None, dry_run=False): """ Create a new EBS Volume from this Snapshot :type zone: string or :class:`boto.ec2.zone.Zone` :param zone: The availability zone in which the Volume will be created. :type size: int :param size: The size of the new volume, in GiB. (optional). Defaults to the size of the snapshot. :type volume_type: string :param volume_type: The type of the volume. (optional). Valid values are: standard | io1. :type iops: int :param iops: The provisioned IOPs you want to associate with this volume. (optional) """ if isinstance(zone, Zone): zone = zone.name return self.connection.create_volume( size, zone, self.id, volume_type, iops, dry_run=dry_run ) class SnapshotAttribute: def __init__(self, parent=None): self.snapshot_id = None self.attrs = {} def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'createVolumePermission': self.name = 'create_volume_permission' elif name == 'group': if 'groups' in self.attrs: self.attrs['groups'].append(value) else: self.attrs['groups'] = [value] elif name == 'userId': if 'user_ids' in self.attrs: self.attrs['user_ids'].append(value) else: self.attrs['user_ids'] = [value] elif name == 'snapshotId': self.snapshot_id = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/spotdatafeedsubscription.py000066400000000000000000000045531225267101000216400ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Represents an EC2 Spot Instance Datafeed Subscription """ from boto.ec2.ec2object import EC2Object from boto.ec2.spotinstancerequest import SpotInstanceStateFault class SpotDatafeedSubscription(EC2Object): def __init__(self, connection=None, owner_id=None, bucket=None, prefix=None, state=None,fault=None): EC2Object.__init__(self, connection) self.owner_id = owner_id self.bucket = bucket self.prefix = prefix self.state = state self.fault = fault def __repr__(self): return 'SpotDatafeedSubscription:%s' % self.bucket def startElement(self, name, attrs, connection): if name == 'fault': self.fault = SpotInstanceStateFault() return self.fault else: return None def endElement(self, name, value, connection): if name == 'ownerId': self.owner_id = value elif name == 'bucket': self.bucket = value elif name == 'prefix': self.prefix = value elif name == 'state': self.state = value else: setattr(self, name, value) def delete(self, dry_run=False): return self.connection.delete_spot_datafeed_subscription( dry_run=dry_run ) boto-2.20.1/boto/ec2/spotinstancerequest.py000066400000000000000000000161261225267101000206520ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Spot Instance Request """ from boto.ec2.ec2object import TaggedEC2Object from boto.ec2.launchspecification import LaunchSpecification class SpotInstanceStateFault(object): """ The fault codes for the Spot Instance request, if any. :ivar code: The reason code for the Spot Instance state change. :ivar message: The message for the Spot Instance state change. """ def __init__(self, code=None, message=None): self.code = code self.message = message def __repr__(self): return '(%s, %s)' % (self.code, self.message) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'code': self.code = value elif name == 'message': self.message = value setattr(self, name, value) class SpotInstanceStatus(object): """ Contains the status of a Spot Instance Request. :ivar code: Status code of the request. :ivar message: The description for the status code for the Spot request. :ivar update_time: Time the status was stated. 
""" def __init__(self, code=None, update_time=None, message=None): self.code = code self.update_time = update_time self.message = message def __repr__(self): return '' % self.code def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'code': self.code = value elif name == 'message': self.message = value elif name == 'updateTime': self.update_time = value class SpotInstanceRequest(TaggedEC2Object): """ :ivar id: The ID of the Spot Instance Request. :ivar price: The maximum hourly price for any Spot Instance launched to fulfill the request. :ivar type: The Spot Instance request type. :ivar state: The state of the Spot Instance request. :ivar fault: The fault codes for the Spot Instance request, if any. :ivar valid_from: The start date of the request. If this is a one-time request, the request becomes active at this date and time and remains active until all instances launch, the request expires, or the request is canceled. If the request is persistent, the request becomes active at this date and time and remains active until it expires or is canceled. :ivar valid_until: The end date of the request. If this is a one-time request, the request remains active until all instances launch, the request is canceled, or this date is reached. If the request is persistent, it remains active until it is canceled or this date is reached. :ivar launch_group: The instance launch group. Launch groups are Spot Instances that launch together and terminate together. :ivar launched_availability_zone: foo :ivar product_description: The Availability Zone in which the bid is launched. :ivar availability_zone_group: The Availability Zone group. If you specify the same Availability Zone group for all Spot Instance requests, all Spot Instances are launched in the same Availability Zone. :ivar create_time: The time stamp when the Spot Instance request was created. :ivar launch_specification: Additional information for launching instances. :ivar instance_id: The instance ID, if an instance has been launched to fulfill the Spot Instance request. :ivar status: The status code and status message describing the Spot Instance request. 
""" def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.price = None self.type = None self.state = None self.fault = None self.valid_from = None self.valid_until = None self.launch_group = None self.launched_availability_zone = None self.product_description = None self.availability_zone_group = None self.create_time = None self.launch_specification = None self.instance_id = None self.status = None def __repr__(self): return 'SpotInstanceRequest:%s' % self.id def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'launchSpecification': self.launch_specification = LaunchSpecification(connection) return self.launch_specification elif name == 'fault': self.fault = SpotInstanceStateFault() return self.fault elif name == 'status': self.status = SpotInstanceStatus() return self.status else: return None def endElement(self, name, value, connection): if name == 'spotInstanceRequestId': self.id = value elif name == 'spotPrice': self.price = float(value) elif name == 'type': self.type = value elif name == 'state': self.state = value elif name == 'validFrom': self.valid_from = value elif name == 'validUntil': self.valid_until = value elif name == 'launchGroup': self.launch_group = value elif name == 'availabilityZoneGroup': self.availability_zone_group = value elif name == 'launchedAvailabilityZone': self.launched_availability_zone = value elif name == 'instanceId': self.instance_id = value elif name == 'createTime': self.create_time = value elif name == 'productDescription': self.product_description = value else: setattr(self, name, value) def cancel(self, dry_run=False): self.connection.cancel_spot_instance_requests( [self.id], dry_run=dry_run ) boto-2.20.1/boto/ec2/spotpricehistory.py000066400000000000000000000040451225267101000201560ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Represents an EC2 Spot Instance Request """ from boto.ec2.ec2object import EC2Object class SpotPriceHistory(EC2Object): def __init__(self, connection=None): EC2Object.__init__(self, connection) self.price = 0.0 self.instance_type = None self.product_description = None self.timestamp = None self.availability_zone = None def __repr__(self): return 'SpotPriceHistory(%s):%2f' % (self.instance_type, self.price) def endElement(self, name, value, connection): if name == 'instanceType': self.instance_type = value elif name == 'spotPrice': self.price = float(value) elif name == 'productDescription': self.product_description = value elif name == 'timestamp': self.timestamp = value elif name == 'availabilityZone': self.availability_zone = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/tag.py000066400000000000000000000060041225267101000152740ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class TagSet(dict): """ A TagSet is used to collect the tags associated with a particular EC2 resource. Not all resources can be tagged but for those that can, this dict object will be used to collect those values. See :class:`boto.ec2.ec2object.TaggedEC2Object` for more details. """ def __init__(self, connection=None): self.connection = connection self._current_key = None self._current_value = None def startElement(self, name, attrs, connection): if name == 'item': self._current_key = None self._current_value = None return None def endElement(self, name, value, connection): if name == 'key': self._current_key = value elif name == 'value': self._current_value = value elif name == 'item': self[self._current_key] = self._current_value class Tag(object): """ A Tag is used when creating or listing all tags related to an AWS account. It records not only the key and value but also the ID of the resource to which the tag is attached as well as the type of the resource. 
""" def __init__(self, connection=None, res_id=None, res_type=None, name=None, value=None): self.connection = connection self.res_id = res_id self.res_type = res_type self.name = name self.value = value def __repr__(self): return 'Tag:%s' % self.name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'resourceId': self.res_id = value elif name == 'resourceType': self.res_type = value elif name == 'key': self.name = value elif name == 'value': self.value = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/vmtype.py000066400000000000000000000043311225267101000160460ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.ec2.ec2object import EC2Object class VmType(EC2Object): """ Represents an EC2 VM Type :ivar name: The name of the vm type :ivar cores: The number of cpu cores for this vm type :ivar memory: The amount of memory in megabytes for this vm type :ivar disk: The amount of disk space in gigabytes for this vm type """ def __init__(self, connection=None, name=None, cores=None, memory=None, disk=None): EC2Object.__init__(self, connection) self.connection = connection self.name = name self.cores = cores self.memory = memory self.disk = disk def __repr__(self): return 'VmType:%s-%s,%s,%s' % (self.name, self.cores, self.memory, self.disk) def endElement(self, name, value, connection): if name == 'euca:name': self.name = value elif name == 'euca:cpu': self.cores = value elif name == 'euca:disk': self.disk = value elif name == 'euca:memory': self.memory = value else: setattr(self, name, value) boto-2.20.1/boto/ec2/volume.py000066400000000000000000000240241225267101000160320ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Elastic Block Storage Volume """ from boto.resultset import ResultSet from boto.ec2.tag import Tag from boto.ec2.ec2object import TaggedEC2Object class Volume(TaggedEC2Object): """ Represents an EBS volume. :ivar id: The unique ID of the volume. :ivar create_time: The timestamp of when the volume was created. :ivar status: The status of the volume. :ivar size: The size (in GB) of the volume. :ivar snapshot_id: The ID of the snapshot this volume was created from, if applicable. :ivar attach_data: An AttachmentSet object. :ivar zone: The availability zone this volume is in. :ivar type: The type of volume (standard or consistent-iops) :ivar iops: If this volume is of type consistent-iops, this is the number of IOPS provisioned (10-300). """ def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.create_time = None self.status = None self.size = None self.snapshot_id = None self.attach_data = None self.zone = None self.type = None self.iops = None def __repr__(self): return 'Volume:%s' % self.id def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'attachmentSet': self.attach_data = AttachmentSet() return self.attach_data elif name == 'tagSet': self.tags = ResultSet([('item', Tag)]) return self.tags else: return None def endElement(self, name, value, connection): if name == 'volumeId': self.id = value elif name == 'createTime': self.create_time = value elif name == 'status': if value != '': self.status = value elif name == 'size': self.size = int(value) elif name == 'snapshotId': self.snapshot_id = value elif name == 'availabilityZone': self.zone = value elif name == 'volumeType': self.type = value elif name == 'iops': self.iops = int(value) else: setattr(self, name, value) def _update(self, updated): self.__dict__.update(updated.__dict__) def update(self, validate=False, dry_run=False): """ Update the data associated with this volume by querying EC2. :type validate: bool :param validate: By default, if EC2 returns no data about the volume the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2. 
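Example (a sketch; assumes ``vol`` was returned by conn.create_volume): while vol.status != 'available': vol.update()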
""" # Check the resultset since Eucalyptus ignores the volumeId param unfiltered_rs = self.connection.get_all_volumes( [self.id], dry_run=dry_run ) rs = [x for x in unfiltered_rs if x.id == self.id] if len(rs) > 0: self._update(rs[0]) elif validate: raise ValueError('%s is not a valid Volume ID' % self.id) return self.status def delete(self, dry_run=False): """ Delete this EBS volume. :rtype: bool :return: True if successful """ return self.connection.delete_volume(self.id, dry_run=dry_run) def attach(self, instance_id, device, dry_run=False): """ Attach this EBS volume to an EC2 instance. :type instance_id: str :param instance_id: The ID of the EC2 instance to which it will be attached. :type device: str :param device: The device on the instance through which the volume will be exposed (e.g. /dev/sdh) :rtype: bool :return: True if successful """ return self.connection.attach_volume( self.id, instance_id, device, dry_run=dry_run ) def detach(self, force=False, dry_run=False): """ Detach this EBS volume from an EC2 instance. :type force: bool :param force: Forces detachment if the previous detachment attempt did not occur cleanly. This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance will not have an opportunity to flush file system caches nor file system meta data. If you use this option, you must perform file system check and repair procedures. :rtype: bool :return: True if successful """ instance_id = None if self.attach_data: instance_id = self.attach_data.instance_id device = None if self.attach_data: device = self.attach_data.device return self.connection.detach_volume( self.id, instance_id, device, force, dry_run=dry_run ) def create_snapshot(self, description=None, dry_run=False): """ Create a snapshot of this EBS Volume. :type description: str :param description: A description of the snapshot. Limited to 256 characters. :rtype: :class:`boto.ec2.snapshot.Snapshot` :return: The created Snapshot object """ return self.connection.create_snapshot( self.id, description, dry_run=dry_run ) def volume_state(self): """ Returns the state of the volume. Same value as the status attribute. """ return self.status def attachment_state(self): """ Get the attachment state. """ state = None if self.attach_data: state = self.attach_data.status return state def snapshots(self, owner=None, restorable_by=None, dry_run=False): """ Get all snapshots related to this volume. Note that this requires that all available snapshots for the account be retrieved from EC2 first and then the list is filtered client-side to contain only those for this volume. :type owner: str :param owner: If present, only the snapshots owned by the specified user will be returned. Valid values are: * self * amazon * AWS Account ID :type restorable_by: str :param restorable_by: If present, only the snapshots that are restorable by the specified account id will be returned. :rtype: list of L{boto.ec2.snapshot.Snapshot} :return: The requested Snapshot objects """ rs = self.connection.get_all_snapshots( owner=owner, restorable_by=restorable_by, dry_run=dry_run ) mine = [] for snap in rs: if snap.volume_id == self.id: mine.append(snap) return mine class AttachmentSet(object): """ Represents an EBS attachmentset. :ivar id: The unique ID of the volume. 
:ivar instance_id: The unique ID of the attached instance
    :ivar status: The status of the attachment
    :ivar attach_time: The timestamp of when the volume was attached
    :ivar device: The device the instance has mapped
    """

    def __init__(self):
        self.id = None
        self.instance_id = None
        self.status = None
        self.attach_time = None
        self.device = None

    def __repr__(self):
        return 'AttachmentSet:%s' % self.id

    def startElement(self, name, attrs, connection):
        pass

    def endElement(self, name, value, connection):
        if name == 'volumeId':
            self.id = value
        elif name == 'instanceId':
            self.instance_id = value
        elif name == 'status':
            self.status = value
        elif name == 'attachTime':
            self.attach_time = value
        elif name == 'device':
            self.device = value
        else:
            setattr(self, name, value)


class VolumeAttribute(object):

    def __init__(self, parent=None):
        self.id = None
        self._key_name = None
        self.attrs = {}

    def startElement(self, name, attrs, connection):
        if name == 'autoEnableIO':
            self._key_name = name
        return None

    def endElement(self, name, value, connection):
        if name == 'value':
            if value.lower() == 'true':
                self.attrs[self._key_name] = True
            else:
                self.attrs[self._key_name] = False
        elif name == 'volumeId':
            self.id = value
        else:
            setattr(self, name, value)
boto-2.20.1/boto/ec2/volumestatus.py000066400000000000000000000142711225267101000173010ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
# All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

from boto.ec2.instancestatus import Status, Details


class Event(object):
    """
    A status event for a volume.

    :ivar type: The type of the event.
    :ivar id: The ID of the event.
    :ivar description: A string describing the reason for the event.
    :ivar not_before: A datestring describing the earliest time for
        the event.
    :ivar not_after: A datestring describing the latest time for
        the event.
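
    Example (an illustrative sketch; assumes ``vs`` is a VolumeStatus
    instance returned by a volume status request)::

        for event in (vs.events or []):
            print event.type, event.not_before, event.not_after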
""" def __init__(self, type=None, id=None, description=None, not_before=None, not_after=None): self.type = type self.id = id self.description = description self.not_before = not_before self.not_after = not_after def __repr__(self): return 'Event:%s' % self.type def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'eventType': self.type = value elif name == 'eventId': self.id = value elif name == 'description': self.description = value elif name == 'notBefore': self.not_before = value elif name == 'notAfter': self.not_after = value else: setattr(self, name, value) class EventSet(list): def startElement(self, name, attrs, connection): if name == 'item': event = Event() self.append(event) return event else: return None def endElement(self, name, value, connection): setattr(self, name, value) class Action(object): """ An action for an instance. :ivar code: The code for the type of the action. :ivar id: The ID of the event. :ivar type: The type of the event. :ivar description: A description of the action. """ def __init__(self, code=None, id=None, description=None, type=None): self.code = code self.id = id self.type = type self.description = description def __repr__(self): return 'Action:%s' % self.code def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'eventType': self.type = value elif name == 'eventId': self.id = value elif name == 'description': self.description = value elif name == 'code': self.code = value else: setattr(self, name, value) class ActionSet(list): def startElement(self, name, attrs, connection): if name == 'item': action = Action() self.append(action) return action else: return None def endElement(self, name, value, connection): setattr(self, name, value) class VolumeStatus(object): """ Represents an EC2 Volume status as reported by DescribeVolumeStatus request. :ivar id: The volume identifier. :ivar zone: The availability zone of the volume :ivar volume_status: A Status object that reports impaired functionality that arises from problems internal to the instance. :ivar events: A list of events relevant to the instance. :ivar actions: A list of events relevant to the instance. """ def __init__(self, id=None, zone=None): self.id = id self.zone = zone self.volume_status = Status() self.events = None self.actions = None def __repr__(self): return 'VolumeStatus:%s' % self.id def startElement(self, name, attrs, connection): if name == 'eventsSet': self.events = EventSet() return self.events elif name == 'actionsSet': self.actions = ActionSet() return self.actions elif name == 'volumeStatus': return self.volume_status else: return None def endElement(self, name, value, connection): if name == 'volumeId': self.id = value elif name == 'availabilityZone': self.zone = value else: setattr(self, name, value) class VolumeStatusSet(list): """ A list object that contains the results of a call to DescribeVolumeStatus request. Each element of the list will be an VolumeStatus object. :ivar next_token: If the response was truncated by the EC2 service, the next_token attribute of the object will contain the string that needs to be passed in to the next request to retrieve the next set of results. 
""" def __init__(self, connection=None): list.__init__(self) self.connection = connection self.next_token = None def startElement(self, name, attrs, connection): if name == 'item': status = VolumeStatus() self.append(status) return status else: return None def endElement(self, name, value, connection): if name == 'NextToken': self.next_token = value setattr(self, name, value) boto-2.20.1/boto/ec2/zone.py000066400000000000000000000050761225267101000155040ustar00rootroot00000000000000# Copyright (c) 2006-2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an EC2 Availability Zone """ from boto.ec2.ec2object import EC2Object class MessageSet(list): """ A list object that contains messages associated with an availability zone. """ def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'message': self.append(value) else: setattr(self, name, value) class Zone(EC2Object): """ Represents an Availability Zone. :ivar name: The name of the zone. :ivar state: The current state of the zone. :ivar region_name: The name of the region the zone is associated with. :ivar messages: A list of messages related to the zone. 
""" def __init__(self, connection=None): EC2Object.__init__(self, connection) self.name = None self.state = None self.region_name = None self.messages = None def __repr__(self): return 'Zone:%s' % self.name def startElement(self, name, attrs, connection): if name == 'messageSet': self.messages = MessageSet() return self.messages return None def endElement(self, name, value, connection): if name == 'zoneName': self.name = value elif name == 'zoneState': self.state = value elif name == 'regionName': self.region_name = value else: setattr(self, name, value) boto-2.20.1/boto/ecs/000077500000000000000000000000001225267101000142505ustar00rootroot00000000000000boto-2.20.1/boto/ecs/__init__.py000066400000000000000000000066451225267101000163740ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from boto.connection import AWSQueryConnection, AWSAuthConnection import time import urllib import xml.sax from boto.ecs.item import ItemSet from boto import handler class ECSConnection(AWSQueryConnection): """ ECommerce Connection For more information on how to use this module see: http://blog.coredumped.org/2010/09/search-for-books-on-amazon-using-boto.html """ APIVersion = '2010-11-01' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='ecs.amazonaws.com', debug=0, https_connection_factory=None, path='/'): AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, host, debug, https_connection_factory, path) def _required_auth_capability(self): return ['ecs'] def get_response(self, action, params, page=0, itemSet=None): """ Utility method to handle calls to ECS and parsing of responses. 
""" params['Service'] = "AWSECommerceService" params['Operation'] = action if page: params['ItemPage'] = page response = self.make_request(None, params, "/onca/xml") body = response.read() boto.log.debug(body) if response.status != 200: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) if itemSet == None: rs = ItemSet(self, action, params, page) else: rs = itemSet h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs # # Group methods # def item_search(self, search_index, **params): """ Returns items that satisfy the search criteria, including one or more search indices. For a full list of search terms, :see: http://docs.amazonwebservices.com/AWSECommerceService/2010-09-01/DG/index.html?ItemSearch.html """ params['SearchIndex'] = search_index return self.get_response('ItemSearch', params) boto-2.20.1/boto/ecs/item.py000066400000000000000000000120341225267101000155600ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
import xml.sax
import cgi
from StringIO import StringIO


class ResponseGroup(xml.sax.ContentHandler):
    """A Generic "Response Group", which can
    be anything from the entire list of Items to
    specific response elements within an item"""

    def __init__(self, connection=None, nodename=None):
        """Initialize this Item"""
        self._connection = connection
        self._nodename = nodename
        self._nodepath = []
        self._curobj = None
        self._xml = StringIO()

    def __repr__(self):
        return '<%s: %s>' % (self.__class__.__name__, self.__dict__)

    #
    # Attribute Functions
    #
    def get(self, name):
        return self.__dict__.get(name)

    def set(self, name, value):
        self.__dict__[name] = value

    def to_xml(self):
        # Wrap the accumulated inner XML in this node's open and close tags.
        return "<%s>%s</%s>" % (self._nodename, self._xml.getvalue(),
                                self._nodename)

    #
    # XML Parser functions
    #
    def startElement(self, name, attrs, connection):
        self._xml.write("<%s>" % name)
        self._nodepath.append(name)
        if len(self._nodepath) == 1:
            obj = ResponseGroup(self._connection)
            self.set(name, obj)
            self._curobj = obj
        elif self._curobj:
            self._curobj.startElement(name, attrs, connection)
        return None

    def endElement(self, name, value, connection):
        # Write the element's text followed by its closing tag.
        self._xml.write("%s</%s>" % (cgi.escape(value).replace("&amp;", "&"),
                                     name))
        if len(self._nodepath) == 0:
            return
        obj = None
        curval = self.get(name)
        if len(self._nodepath) == 1:
            if value or not curval:
                self.set(name, value)
            if self._curobj:
                self._curobj = None
        #elif len(self._nodepath) == 2:
            #self._curobj = None
        elif self._curobj:
            self._curobj.endElement(name, value, connection)
        self._nodepath.pop()
        return None


class Item(ResponseGroup):
    """A single Item"""

    def __init__(self, connection=None):
        """Initialize this Item"""
        ResponseGroup.__init__(self, connection, "Item")


class ItemSet(ResponseGroup):
    """A special ResponseGroup that has built-in paging, and
    only creates new Items on the "Item" tag"""

    def __init__(self, connection, action, params, page=0):
        ResponseGroup.__init__(self, connection, "Items")
        self.objs = []
        self.iter = None
        self.page = page
        self.action = action
        self.params = params
        self.curItem = None
        self.total_results = 0
        self.total_pages = 0

    def startElement(self, name, attrs, connection):
        if name == "Item":
            self.curItem = Item(self._connection)
        elif self.curItem is not None:
            self.curItem.startElement(name, attrs, connection)
        return None

    def endElement(self, name, value, connection):
        if name == 'TotalResults':
            self.total_results = value
        elif name == 'TotalPages':
            self.total_pages = value
        elif name == "Item":
            self.objs.append(self.curItem)
            self._xml.write(self.curItem.to_xml())
            self.curItem = None
        elif self.curItem is not None:
            self.curItem.endElement(name, value, connection)
        return None

    def next(self):
        """Special paging functionality"""
        if self.iter is None:
            self.iter = iter(self.objs)
        try:
            return self.iter.next()
        except StopIteration:
            self.iter = None
            self.objs = []
            if int(self.page) < int(self.total_pages):
                self.page += 1
                self._connection.get_response(self.action, self.params,
                                              self.page, self)
                return self.next()
            else:
                raise

    def __iter__(self):
        return self

    def to_xml(self):
        """Override to first fetch everything"""
        for item in self:
            pass
        return ResponseGroup.to_xml(self)
boto-2.20.1/boto/elasticache/000077500000000000000000000000001225267101000157435ustar00rootroot00000000000000boto-2.20.1/boto/elasticache/__init__.py000066400000000000000000000057101225267101000200570ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the AWS ElastiCache service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.elasticache.layer1 import ElastiCacheConnection return [RegionInfo(name='us-east-1', endpoint='elasticache.us-east-1.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='us-west-1', endpoint='elasticache.us-west-1.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='us-west-2', endpoint='elasticache.us-west-2.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='eu-west-1', endpoint='elasticache.eu-west-1.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='ap-northeast-1', endpoint='elasticache.ap-northeast-1.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='ap-southeast-1', endpoint='elasticache.ap-southeast-1.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='ap-southeast-2', endpoint='elasticache.ap-southeast-2.amazonaws.com', connection_cls=ElastiCacheConnection), RegionInfo(name='sa-east-1', endpoint='elasticache.sa-east-1.amazonaws.com', connection_cls=ElastiCacheConnection), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/elasticache/layer1.py000066400000000000000000002174511225267101000175240ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import boto from boto.compat import json from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo class ElastiCacheConnection(AWSQueryConnection): """ Amazon ElastiCache Amazon ElastiCache is a web service that makes it easier to set up, operate, and scale a distributed cache in the cloud. With ElastiCache, customers gain all of the benefits of a high- performance, in-memory cache with far less of the administrative burden of launching and managing a distributed cache. The service makes set-up, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. In addition, through integration with Amazon CloudWatch, customers get enhanced visibility into the key performance statistics associated with their cache and can receive alarms if a part of their cache runs hot. """ APIVersion = "2013-06-15" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "elasticache.us-east-1.amazonaws.com" def __init__(self, **kwargs): region = kwargs.get('region') if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) else: del kwargs['region'] kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['sign-v2'] def authorize_cache_security_group_ingress(self, cache_security_group_name, ec2_security_group_name, ec2_security_group_owner_id): """ The AuthorizeCacheSecurityGroupIngress operation allows network ingress to a cache security group. Applications using ElastiCache must be running on Amazon EC2, and Amazon EC2 security groups are used as the authorization mechanism. You cannot authorize ingress from an Amazon EC2 security group in one Region to an ElastiCache cluster in another Region. :type cache_security_group_name: string :param cache_security_group_name: The cache security group which will allow network ingress. :type ec2_security_group_name: string :param ec2_security_group_name: The Amazon EC2 security group to be authorized for ingress to the cache security group. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The AWS account number of the Amazon EC2 security group owner. Note that this is not the same thing as an AWS access key ID - you must provide a valid AWS account number for this parameter. """ params = { 'CacheSecurityGroupName': cache_security_group_name, 'EC2SecurityGroupName': ec2_security_group_name, 'EC2SecurityGroupOwnerId': ec2_security_group_owner_id, } return self._make_request( action='AuthorizeCacheSecurityGroupIngress', verb='POST', path='/', params=params) def create_cache_cluster(self, cache_cluster_id, num_cache_nodes=None, cache_node_type=None, engine=None, replication_group_id=None, engine_version=None, cache_parameter_group_name=None, cache_subnet_group_name=None, cache_security_group_names=None, security_group_ids=None, snapshot_arns=None, preferred_availability_zone=None, preferred_maintenance_window=None, port=None, notification_topic_arn=None, auto_minor_version_upgrade=None): """ The CreateCacheCluster operation creates a new cache cluster. All nodes in the cache cluster run the same protocol-compliant cache engine software - either Memcached or Redis. 
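
        Example (an illustrative sketch; the cluster name and node type
        below are placeholders)::

            import boto.elasticache
            conn = boto.elasticache.connect_to_region('us-east-1')
            conn.create_cache_cluster('my-memcached-cluster',
                                      num_cache_nodes=2,
                                      cache_node_type='cache.m1.small',
                                      engine='memcached')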
:type cache_cluster_id: string
        :param cache_cluster_id: The cache cluster identifier. This parameter
            is stored as a lowercase string.
            Constraints:

            + Must contain from 1 to 20 alphanumeric characters or hyphens.
            + First character must be a letter.
            + Cannot end with a hyphen or contain two consecutive hyphens.

        :type replication_group_id: string
        :param replication_group_id: The replication group to which this cache
            cluster should belong. If this parameter is specified, the cache
            cluster will be added to the specified replication group as a read
            replica; otherwise, the cache cluster will be a standalone primary
            that is not part of any replication group.

        :type num_cache_nodes: integer
        :param num_cache_nodes: The initial number of cache nodes that the
            cache cluster will have.
            For a Memcached cluster, valid values are between 1 and 20. If you
            need to exceed this limit, please fill out the ElastiCache Limit
            Increase Request form.

            For Redis, only single-node cache clusters are supported at this
            time, so the value for this parameter must be 1.

        :type cache_node_type: string
        :param cache_node_type: The compute and memory capacity of the nodes
            in the cache cluster.
            Valid values for Memcached: `cache.t1.micro` | `cache.m1.small` |
            `cache.m1.medium` | `cache.m1.large` | `cache.m1.xlarge` |
            `cache.m3.xlarge` | `cache.m3.2xlarge` | `cache.m2.xlarge` |
            `cache.m2.2xlarge` | `cache.m2.4xlarge` | `cache.c1.xlarge`

            Valid values for Redis: `cache.t1.micro` | `cache.m1.small` |
            `cache.m1.medium` | `cache.m1.large` | `cache.m1.xlarge` |
            `cache.m2.xlarge` | `cache.m2.2xlarge` | `cache.m2.4xlarge` |
            `cache.c1.xlarge`

            For a complete listing of cache node types and specifications,
            see the Amazon ElastiCache documentation.

        :type engine: string
        :param engine: The name of the cache engine to be used for this cache
            cluster.
            Valid values for this parameter are: `memcached` | `redis`

        :type engine_version: string
        :param engine_version: The version number of the cache engine to be
            used for this cluster. To view the supported cache engine
            versions, use the DescribeCacheEngineVersions operation.

        :type cache_parameter_group_name: string
        :param cache_parameter_group_name: The name of the cache parameter
            group to associate with this cache cluster. If this argument is
            omitted, the default cache parameter group for the specified
            engine will be used.

        :type cache_subnet_group_name: string
        :param cache_subnet_group_name: The name of the cache subnet group to
            be used for the cache cluster.
            Use this parameter only when you are creating a cluster in an
            Amazon Virtual Private Cloud (VPC).

        :type cache_security_group_names: list
        :param cache_security_group_names: A list of cache security group
            names to associate with this cache cluster.
            Use this parameter only when you are creating a cluster outside of
            an Amazon Virtual Private Cloud (VPC).

        :type security_group_ids: list
        :param security_group_ids: One or more VPC security groups associated
            with the cache cluster.
            Use this parameter only when you are creating a cluster in an
            Amazon Virtual Private Cloud (VPC).

        :type snapshot_arns: list
        :param snapshot_arns: A single-element string list containing an
            Amazon Resource Name (ARN) that uniquely identifies a Redis RDB
            snapshot file stored in Amazon S3. The snapshot file will be used
            to populate the Redis cache in the new cache cluster. The Amazon
            S3 object name in the ARN cannot contain any commas.
            Here is an example of an Amazon S3 ARN:
            `arn:aws:s3:::my_bucket/snapshot1.rdb`

            **Note:** This parameter is only valid if the `Engine` parameter
            is `redis`.
:type preferred_availability_zone: string :param preferred_availability_zone: The EC2 Availability Zone in which the cache cluster will be created. All cache nodes belonging to a cache cluster are placed in the preferred availability zone. Default: System chosen availability zone. :type preferred_maintenance_window: string :param preferred_maintenance_window: The weekly time range (in UTC) during which system maintenance can occur. Example: `sun:05:00-sun:09:00` :type port: integer :param port: The port number on which each of the cache nodes will accept connections. :type notification_topic_arn: string :param notification_topic_arn: The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications will be sent. The Amazon SNS topic owner must be the same as the cache cluster owner. :type auto_minor_version_upgrade: boolean :param auto_minor_version_upgrade: Determines whether minor engine upgrades will be applied automatically to the cache cluster during the maintenance window. A value of `True` allows these upgrades to occur; `False` disables automatic upgrades. Default: `True` """ params = { 'CacheClusterId': cache_cluster_id, } if num_cache_nodes is not None: params['NumCacheNodes'] = num_cache_nodes if cache_node_type is not None: params['CacheNodeType'] = cache_node_type if engine is not None: params['Engine'] = engine if replication_group_id is not None: params['ReplicationGroupId'] = replication_group_id if engine_version is not None: params['EngineVersion'] = engine_version if cache_parameter_group_name is not None: params['CacheParameterGroupName'] = cache_parameter_group_name if cache_subnet_group_name is not None: params['CacheSubnetGroupName'] = cache_subnet_group_name if cache_security_group_names is not None: self.build_list_params(params, cache_security_group_names, 'CacheSecurityGroupNames.member') if security_group_ids is not None: self.build_list_params(params, security_group_ids, 'SecurityGroupIds.member') if snapshot_arns is not None: self.build_list_params(params, snapshot_arns, 'SnapshotArns.member') if preferred_availability_zone is not None: params['PreferredAvailabilityZone'] = preferred_availability_zone if preferred_maintenance_window is not None: params['PreferredMaintenanceWindow'] = preferred_maintenance_window if port is not None: params['Port'] = port if notification_topic_arn is not None: params['NotificationTopicArn'] = notification_topic_arn if auto_minor_version_upgrade is not None: params['AutoMinorVersionUpgrade'] = str( auto_minor_version_upgrade).lower() return self._make_request( action='CreateCacheCluster', verb='POST', path='/', params=params) def create_cache_parameter_group(self, cache_parameter_group_name, cache_parameter_group_family, description): """ The CreateCacheParameterGroup operation creates a new cache parameter group. A cache parameter group is a collection of parameters that you apply to all of the nodes in a cache cluster. :type cache_parameter_group_name: string :param cache_parameter_group_name: A user-specified name for the cache parameter group. :type cache_parameter_group_family: string :param cache_parameter_group_family: The name of the cache parameter group family the cache parameter group can be used with. Valid values are: `memcached1.4` | `redis2.6` :type description: string :param description: A user-specified description for the cache parameter group. 
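
        Example (an illustrative sketch; the names below are
        placeholders)::

            conn.create_cache_parameter_group(
                'my-memcached-params', 'memcached1.4',
                'Tuned parameters for my memcached cluster')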
""" params = { 'CacheParameterGroupName': cache_parameter_group_name, 'CacheParameterGroupFamily': cache_parameter_group_family, 'Description': description, } return self._make_request( action='CreateCacheParameterGroup', verb='POST', path='/', params=params) def create_cache_security_group(self, cache_security_group_name, description): """ The CreateCacheSecurityGroup operation creates a new cache security group. Use a cache security group to control access to one or more cache clusters. Cache security groups are only used when you are creating a cluster outside of an Amazon Virtual Private Cloud (VPC). If you are creating a cluster inside of a VPC, use a cache subnet group instead. For more information, see CreateCacheSubnetGroup . :type cache_security_group_name: string :param cache_security_group_name: A name for the cache security group. This value is stored as a lowercase string. Constraints: Must contain no more than 255 alphanumeric characters. Must not be the word "Default". Example: `mysecuritygroup` :type description: string :param description: A description for the cache security group. """ params = { 'CacheSecurityGroupName': cache_security_group_name, 'Description': description, } return self._make_request( action='CreateCacheSecurityGroup', verb='POST', path='/', params=params) def create_cache_subnet_group(self, cache_subnet_group_name, cache_subnet_group_description, subnet_ids): """ The CreateCacheSubnetGroup operation creates a new cache subnet group. Use this parameter only when you are creating a cluster in an Amazon Virtual Private Cloud (VPC). :type cache_subnet_group_name: string :param cache_subnet_group_name: A name for the cache subnet group. This value is stored as a lowercase string. Constraints: Must contain no more than 255 alphanumeric characters or hyphens. Example: `mysubnetgroup` :type cache_subnet_group_description: string :param cache_subnet_group_description: A description for the cache subnet group. :type subnet_ids: list :param subnet_ids: A list of VPC subnet IDs for the cache subnet group. """ params = { 'CacheSubnetGroupName': cache_subnet_group_name, 'CacheSubnetGroupDescription': cache_subnet_group_description, } self.build_list_params(params, subnet_ids, 'SubnetIds.member') return self._make_request( action='CreateCacheSubnetGroup', verb='POST', path='/', params=params) def create_replication_group(self, replication_group_id, primary_cluster_id, replication_group_description): """ The CreateReplicationGroup operation creates a replication group. A replication group is a collection of cache clusters, where one of the clusters is a read/write primary and the other clusters are read-only replicas. Writes to the primary are automatically propagated to the replicas. When you create a replication group, you must specify an existing cache cluster that is in the primary role. When the replication group has been successfully created, you can add one or more read replica replicas to it, up to a total of five read replicas. :type replication_group_id: string :param replication_group_id: The replication group identifier. This parameter is stored as a lowercase string. Constraints: + Must contain from 1 to 20 alphanumeric characters or hyphens. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. :type primary_cluster_id: string :param primary_cluster_id: The identifier of the cache cluster that will serve as the primary for this replication group. 
            This cache cluster must already exist and have a status of
            available.

        :type replication_group_description: string
        :param replication_group_description: A user-specified description
            for the replication group.

        """
        params = {
            'ReplicationGroupId': replication_group_id,
            'PrimaryClusterId': primary_cluster_id,
            'ReplicationGroupDescription': replication_group_description,
        }
        return self._make_request(
            action='CreateReplicationGroup',
            verb='POST',
            path='/', params=params)

    def delete_cache_cluster(self, cache_cluster_id):
        """
        The DeleteCacheCluster operation deletes a previously
        provisioned cache cluster. DeleteCacheCluster deletes all
        associated cache nodes, node endpoints and the cache cluster
        itself. When you receive a successful response from this
        operation, Amazon ElastiCache immediately begins deleting the
        cache cluster; you cannot cancel or revert this operation.

        :type cache_cluster_id: string
        :param cache_cluster_id: The cache cluster identifier for the cluster
            to be deleted. This parameter is not case sensitive.

        """
        params = {'CacheClusterId': cache_cluster_id, }
        return self._make_request(
            action='DeleteCacheCluster',
            verb='POST',
            path='/', params=params)

    def delete_cache_parameter_group(self, cache_parameter_group_name):
        """
        The DeleteCacheParameterGroup operation deletes the specified
        cache parameter group. You cannot delete a cache parameter
        group if it is associated with any cache clusters.

        :type cache_parameter_group_name: string
        :param cache_parameter_group_name: The name of the cache parameter
            group to delete.
            The specified cache parameter group must not be associated with
            any cache clusters.

        """
        params = {
            'CacheParameterGroupName': cache_parameter_group_name,
        }
        return self._make_request(
            action='DeleteCacheParameterGroup',
            verb='POST',
            path='/', params=params)

    def delete_cache_security_group(self, cache_security_group_name):
        """
        The DeleteCacheSecurityGroup operation deletes a cache
        security group. You cannot delete a cache security group if it
        is associated with any cache clusters.

        :type cache_security_group_name: string
        :param cache_security_group_name: The name of the cache security
            group to delete.
            You cannot delete the default security group.

        """
        params = {
            'CacheSecurityGroupName': cache_security_group_name,
        }
        return self._make_request(
            action='DeleteCacheSecurityGroup',
            verb='POST',
            path='/', params=params)

    def delete_cache_subnet_group(self, cache_subnet_group_name):
        """
        The DeleteCacheSubnetGroup operation deletes a cache subnet
        group. You cannot delete a cache subnet group if it is
        associated with any cache clusters.

        :type cache_subnet_group_name: string
        :param cache_subnet_group_name: The name of the cache subnet group to
            delete.
            Constraints: Must contain no more than 255 alphanumeric
            characters or hyphens.

        """
        params = {'CacheSubnetGroupName': cache_subnet_group_name, }
        return self._make_request(
            action='DeleteCacheSubnetGroup',
            verb='POST',
            path='/', params=params)

    def delete_replication_group(self, replication_group_id):
        """
        The DeleteReplicationGroup operation deletes an existing
        replication group. DeleteReplicationGroup deletes the primary
        cache cluster and all of the read replicas in the replication
        group. When you receive a successful response from this
        operation, Amazon ElastiCache immediately begins deleting the
        entire replication group; you cannot cancel or revert this
        operation.

        :type replication_group_id: string
        :param replication_group_id: The identifier for the replication group
            to be deleted. This parameter is not case sensitive.
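
        Example (an illustrative sketch; the group ID is a placeholder,
        and deletion removes the primary and all of its read replicas)::

            conn.delete_replication_group('my-replication-group')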
""" params = {'ReplicationGroupId': replication_group_id, } return self._make_request( action='DeleteReplicationGroup', verb='POST', path='/', params=params) def describe_cache_clusters(self, cache_cluster_id=None, max_records=None, marker=None, show_cache_node_info=None): """ The DescribeCacheClusters operation returns information about all provisioned cache clusters if no cache cluster identifier is specified, or about a specific cache cluster if a cache cluster identifier is supplied. By default, abbreviated information about the cache clusters(s) will be returned. You can use the optional ShowDetails flag to retrieve detailed information about the cache nodes associated with the cache clusters. These details include the DNS address and port for the cache node endpoint. If the cluster is in the CREATING state, only cluster level information will be displayed until all of the nodes are successfully provisioned. If the cluster is in the DELETING state, only cluster level information will be displayed. If cache nodes are currently being added to the cache cluster, node endpoint information and creation time for the additional nodes will not be displayed until they are completely provisioned. When the cache cluster state is available , the cluster is ready for use. If cache nodes are currently being removed from the cache cluster, no endpoint information for the removed nodes is displayed. :type cache_cluster_id: string :param cache_cluster_id: The user-supplied cluster identifier. If this parameter is specified, only information about that specific cache cluster is returned. This parameter isn't case sensitive. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . :type show_cache_node_info: boolean :param show_cache_node_info: An optional flag that can be included in the DescribeCacheCluster request to retrieve information about the individual cache nodes. """ params = {} if cache_cluster_id is not None: params['CacheClusterId'] = cache_cluster_id if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker if show_cache_node_info is not None: params['ShowCacheNodeInfo'] = str( show_cache_node_info).lower() return self._make_request( action='DescribeCacheClusters', verb='POST', path='/', params=params) def describe_cache_engine_versions(self, engine=None, engine_version=None, cache_parameter_group_family=None, max_records=None, marker=None, default_only=None): """ The DescribeCacheEngineVersions operation returns a list of the available cache engines and their versions. :type engine: string :param engine: The cache engine to return. Valid values: `memcached` | `redis` :type engine_version: string :param engine_version: The cache engine version to return. Example: `1.4.14` :type cache_parameter_group_family: string :param cache_parameter_group_family: The name of a specific cache parameter group family to return details for. 
Constraints: + Must be 1 to 255 alphanumeric characters + First character must be a letter + Cannot end with a hyphen or contain two consecutive hyphens :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . :type default_only: boolean :param default_only: If true , specifies that only the default version of the specified engine or engine and major version combination is to be returned. """ params = {} if engine is not None: params['Engine'] = engine if engine_version is not None: params['EngineVersion'] = engine_version if cache_parameter_group_family is not None: params['CacheParameterGroupFamily'] = cache_parameter_group_family if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker if default_only is not None: params['DefaultOnly'] = str( default_only).lower() return self._make_request( action='DescribeCacheEngineVersions', verb='POST', path='/', params=params) def describe_cache_parameter_groups(self, cache_parameter_group_name=None, max_records=None, marker=None): """ The DescribeCacheParameterGroups operation returns a list of cache parameter group descriptions. If a cache parameter group name is specified, the list will contain only the descriptions for that group. :type cache_parameter_group_name: string :param cache_parameter_group_name: The name of a specific cache parameter group to return details for. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . """ params = {} if cache_parameter_group_name is not None: params['CacheParameterGroupName'] = cache_parameter_group_name if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeCacheParameterGroups', verb='POST', path='/', params=params) def describe_cache_parameters(self, cache_parameter_group_name, source=None, max_records=None, marker=None): """ The DescribeCacheParameters operation returns the detailed parameter list for a particular cache parameter group. :type cache_parameter_group_name: string :param cache_parameter_group_name: The name of a specific cache parameter group to return details for. :type source: string :param source: The parameter types to return. Valid values: `user` | `system` | `engine-default` :type max_records: integer :param max_records: The maximum number of records to include in the response. 
If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . """ params = { 'CacheParameterGroupName': cache_parameter_group_name, } if source is not None: params['Source'] = source if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeCacheParameters', verb='POST', path='/', params=params) def describe_cache_security_groups(self, cache_security_group_name=None, max_records=None, marker=None): """ The DescribeCacheSecurityGroups operation returns a list of cache security group descriptions. If a cache security group name is specified, the list will contain only the description of that group. :type cache_security_group_name: string :param cache_security_group_name: The name of the cache security group to return details for. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . """ params = {} if cache_security_group_name is not None: params['CacheSecurityGroupName'] = cache_security_group_name if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeCacheSecurityGroups', verb='POST', path='/', params=params) def describe_cache_subnet_groups(self, cache_subnet_group_name=None, max_records=None, marker=None): """ The DescribeCacheSubnetGroups operation returns a list of cache subnet group descriptions. If a subnet group name is specified, the list will contain only the description of that group. :type cache_subnet_group_name: string :param cache_subnet_group_name: The name of the cache subnet group to return details for. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . 
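
        Example (an illustrative pagination sketch; the nesting of the
        parsed response shown here is an assumption and may differ)::

            marker = None
            while True:
                response = conn.describe_cache_subnet_groups(
                    max_records=20, marker=marker)
                result = response['DescribeCacheSubnetGroupsResponse'][
                    'DescribeCacheSubnetGroupsResult']
                for group in result['CacheSubnetGroups']:
                    print group['CacheSubnetGroupName']
                marker = result.get('Marker')
                if not marker:
                    break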
""" params = {} if cache_subnet_group_name is not None: params['CacheSubnetGroupName'] = cache_subnet_group_name if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeCacheSubnetGroups', verb='POST', path='/', params=params) def describe_engine_default_parameters(self, cache_parameter_group_family, max_records=None, marker=None): """ The DescribeEngineDefaultParameters operation returns the default engine and system parameter information for the specified cache engine. :type cache_parameter_group_family: string :param cache_parameter_group_family: The name of the cache parameter group family. Valid values are: `memcached1.4` | `redis2.6` :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . """ params = { 'CacheParameterGroupFamily': cache_parameter_group_family, } if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeEngineDefaultParameters', verb='POST', path='/', params=params) def describe_events(self, source_identifier=None, source_type=None, start_time=None, end_time=None, duration=None, max_records=None, marker=None): """ The DescribeEvents operation returns events related to cache clusters, cache security groups, and cache parameter groups. You can obtain events specific to a particular cache cluster, cache security group, or cache parameter group by providing the name as a parameter. By default, only the events occurring within the last hour are returned; however, you can retrieve up to 14 days' worth of events if necessary. :type source_identifier: string :param source_identifier: The identifier of the event source for which events will be returned. If not specified, then all sources are included in the response. :type source_type: string :param source_type: The event source to retrieve events for. If no value is specified, all events are returned. Valid values are: `cache-cluster` | `cache-parameter-group` | `cache- security-group` | `cache-subnet-group` :type start_time: timestamp :param start_time: The beginning of the time interval to retrieve events for, specified in ISO 8601 format. :type end_time: timestamp :param end_time: The end of the time interval for which to retrieve events, specified in ISO 8601 format. :type duration: integer :param duration: The number of minutes' worth of events to retrieve. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. 
If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords. """ params = {} if source_identifier is not None: params['SourceIdentifier'] = source_identifier if source_type is not None: params['SourceType'] = source_type if start_time is not None: params['StartTime'] = start_time if end_time is not None: params['EndTime'] = end_time if duration is not None: params['Duration'] = duration if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeEvents', verb='POST', path='/', params=params) def describe_replication_groups(self, replication_group_id=None, max_records=None, marker=None): """ The DescribeReplicationGroups operation returns information about a particular replication group. If no identifier is specified, DescribeReplicationGroups returns information about all replication groups. :type replication_group_id: string :param replication_group_id: The identifier for the replication group to be described. This parameter is not case sensitive. If you do not specify this parameter, information about all replication groups is returned. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords. """ params = {} if replication_group_id is not None: params['ReplicationGroupId'] = replication_group_id if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeReplicationGroups', verb='POST', path='/', params=params) def describe_reserved_cache_nodes(self, reserved_cache_node_id=None, reserved_cache_nodes_offering_id=None, cache_node_type=None, duration=None, product_description=None, offering_type=None, max_records=None, marker=None): """ The DescribeReservedCacheNodes operation returns information about reserved cache nodes for this account, or about a specified reserved cache node. :type reserved_cache_node_id: string :param reserved_cache_node_id: The reserved cache node identifier filter value. Use this parameter to show only the reservation that matches the specified reservation ID. :type reserved_cache_nodes_offering_id: string :param reserved_cache_nodes_offering_id: The offering identifier filter value. Use this parameter to show only purchased reservations matching the specified offering identifier. :type cache_node_type: string :param cache_node_type: The cache node type filter value. Use this parameter to show only those reservations matching the specified cache node type. :type duration: string :param duration: The duration filter value, specified in years or seconds. Use this parameter to show only reservations for this duration. Valid Values: `1 | 3 | 31536000 | 94608000` :type product_description: string :param product_description: The product description filter value. Use this parameter to show only those reservations matching the specified product description. :type offering_type: string :param offering_type: The offering type filter value. Use this parameter to show only the available offerings matching the specified offering type. Valid values: `"Light Utilization" | "Medium Utilization" | "Heavy Utilization"` :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords. """ params = {} if reserved_cache_node_id is not None: params['ReservedCacheNodeId'] = reserved_cache_node_id if reserved_cache_nodes_offering_id is not None: params['ReservedCacheNodesOfferingId'] = reserved_cache_nodes_offering_id if cache_node_type is not None: params['CacheNodeType'] = cache_node_type if duration is not None: params['Duration'] = duration if product_description is not None: params['ProductDescription'] = product_description if offering_type is not None: params['OfferingType'] = offering_type if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeReservedCacheNodes', verb='POST', path='/', params=params) def describe_reserved_cache_nodes_offerings(self, reserved_cache_nodes_offering_id=None, cache_node_type=None, duration=None, product_description=None, offering_type=None, max_records=None, marker=None): """ The DescribeReservedCacheNodesOfferings operation lists available reserved cache node offerings. :type reserved_cache_nodes_offering_id: string :param reserved_cache_nodes_offering_id: The offering identifier filter value. Use this parameter to show only the available offering that matches the specified reservation identifier. Example: `438012d3-4052-4cc7-b2e3-8d3372e0e706` :type cache_node_type: string :param cache_node_type: The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type. :type duration: string :param duration: Duration filter value, specified in years or seconds. Use this parameter to show only reservations for a given duration. Valid Values: `1 | 3 | 31536000 | 94608000` :type product_description: string :param product_description: The product description filter value. Use this parameter to show only the available offerings matching the specified product description. :type offering_type: string :param offering_type: The offering type filter value. Use this parameter to show only the available offerings matching the specified offering type. Valid Values: `"Light Utilization" | "Medium Utilization" | "Heavy Utilization"` :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20; maximum 100. :type marker: string :param marker: An optional marker returned from a prior request. Use this marker for pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords. """ params = {} if reserved_cache_nodes_offering_id is not None: params['ReservedCacheNodesOfferingId'] = reserved_cache_nodes_offering_id if cache_node_type is not None: params['CacheNodeType'] = cache_node_type if duration is not None: params['Duration'] = duration if product_description is not None: params['ProductDescription'] = product_description if offering_type is not None: params['OfferingType'] = offering_type if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeReservedCacheNodesOfferings', verb='POST', path='/', params=params) def modify_cache_cluster(self, cache_cluster_id, num_cache_nodes=None, cache_node_ids_to_remove=None, cache_security_group_names=None, security_group_ids=None, preferred_maintenance_window=None, notification_topic_arn=None, cache_parameter_group_name=None, notification_topic_status=None, apply_immediately=None, engine_version=None, auto_minor_version_upgrade=None): """ The ModifyCacheCluster operation modifies the settings for a cache cluster. You can use this operation to change one or more cluster configuration parameters by specifying the parameters and the new values. :type cache_cluster_id: string :param cache_cluster_id: The cache cluster identifier. This value is stored as a lowercase string. :type num_cache_nodes: integer :param num_cache_nodes: The number of cache nodes that the cache cluster should have. If the value for NumCacheNodes is greater than the existing number of cache nodes, then more nodes will be added. If the value is less than the existing number of cache nodes, then cache nodes will be removed. If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to be removed. :type cache_node_ids_to_remove: list :param cache_node_ids_to_remove: A list of cache node IDs to be removed. A node ID is a numeric identifier (0001, 0002, etc.). This parameter is only valid when NumCacheNodes is less than the existing number of cache nodes. The number of cache node IDs supplied in this parameter must match the difference between the existing number of cache nodes in the cluster and the value of NumCacheNodes in the request. :type cache_security_group_names: list :param cache_security_group_names: A list of cache security group names to authorize on this cache cluster. This change is asynchronously applied as soon as possible. This parameter can be used only with clusters that are created outside of an Amazon Virtual Private Cloud (VPC). Constraints: Must contain no more than 255 alphanumeric characters. Must not be "Default". :type security_group_ids: list :param security_group_ids: Specifies the VPC Security Groups associated with the cache cluster. This parameter can be used only with clusters that are created in an Amazon Virtual Private Cloud (VPC). :type preferred_maintenance_window: string :param preferred_maintenance_window: The weekly time range (in UTC) during which system maintenance can occur. Note that system maintenance may result in an outage. This change is made immediately. If you are moving this window to the current time, there must be at least 120 minutes between the current time and end of the window to ensure that pending changes are applied.
:type notification_topic_arn: string :param notification_topic_arn: The Amazon Resource Name (ARN) of the SNS topic to which notifications will be sent. The SNS topic owner must be the same as the cache cluster owner. :type cache_parameter_group_name: string :param cache_parameter_group_name: The name of the cache parameter group to apply to this cache cluster. This change is asynchronously applied as soon as possible for parameters when the ApplyImmediately parameter is specified as true for this request. :type notification_topic_status: string :param notification_topic_status: The status of the Amazon SNS notification topic. Notifications are sent only if the status is active. Valid values: `active` | `inactive` :type apply_immediately: boolean :param apply_immediately: If `True`, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the cache cluster. If `False`, then changes to the cache cluster are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first. Valid values: `True` | `False` Default: `False` :type engine_version: string :param engine_version: The upgraded version of the cache engine to be run on the cache cluster nodes. :type auto_minor_version_upgrade: boolean :param auto_minor_version_upgrade: If `True`, then minor engine upgrades will be applied automatically to the cache cluster during the maintenance window. Valid values: `True` | `False` Default: `True` """ params = {'CacheClusterId': cache_cluster_id, } if num_cache_nodes is not None: params['NumCacheNodes'] = num_cache_nodes if cache_node_ids_to_remove is not None: self.build_list_params(params, cache_node_ids_to_remove, 'CacheNodeIdsToRemove.member') if cache_security_group_names is not None: self.build_list_params(params, cache_security_group_names, 'CacheSecurityGroupNames.member') if security_group_ids is not None: self.build_list_params(params, security_group_ids, 'SecurityGroupIds.member') if preferred_maintenance_window is not None: params['PreferredMaintenanceWindow'] = preferred_maintenance_window if notification_topic_arn is not None: params['NotificationTopicArn'] = notification_topic_arn if cache_parameter_group_name is not None: params['CacheParameterGroupName'] = cache_parameter_group_name if notification_topic_status is not None: params['NotificationTopicStatus'] = notification_topic_status if apply_immediately is not None: params['ApplyImmediately'] = str( apply_immediately).lower() if engine_version is not None: params['EngineVersion'] = engine_version if auto_minor_version_upgrade is not None: params['AutoMinorVersionUpgrade'] = str( auto_minor_version_upgrade).lower() return self._make_request( action='ModifyCacheCluster', verb='POST', path='/', params=params) def modify_cache_parameter_group(self, cache_parameter_group_name, parameter_name_values): """ The ModifyCacheParameterGroup operation modifies the parameters of a cache parameter group. You can modify up to 20 parameters in a single request by submitting a list of parameter name and value pairs. :type cache_parameter_group_name: string :param cache_parameter_group_name: The name of the cache parameter group to modify. :type parameter_name_values: list :param parameter_name_values: An array of parameter names and values for the parameter update. You must supply at least one parameter name and value; subsequent arguments are optional. A maximum of 20 parameters may be modified per request. """ params = { 'CacheParameterGroupName': cache_parameter_group_name, } self.build_complex_list_params( params, parameter_name_values, 'ParameterNameValues.member', ('ParameterName', 'ParameterValue')) return self._make_request( action='ModifyCacheParameterGroup', verb='POST', path='/', params=params) def modify_cache_subnet_group(self, cache_subnet_group_name, cache_subnet_group_description=None, subnet_ids=None): """ The ModifyCacheSubnetGroup operation modifies an existing cache subnet group. :type cache_subnet_group_name: string :param cache_subnet_group_name: The name for the cache subnet group. This value is stored as a lowercase string. Constraints: Must contain no more than 255 alphanumeric characters or hyphens. Example: `mysubnetgroup` :type cache_subnet_group_description: string :param cache_subnet_group_description: A description for the cache subnet group. :type subnet_ids: list :param subnet_ids: The EC2 subnet IDs for the cache subnet group. """ params = {'CacheSubnetGroupName': cache_subnet_group_name, } if cache_subnet_group_description is not None: params['CacheSubnetGroupDescription'] = cache_subnet_group_description if subnet_ids is not None: self.build_list_params(params, subnet_ids, 'SubnetIds.member') return self._make_request( action='ModifyCacheSubnetGroup', verb='POST', path='/', params=params) def modify_replication_group(self, replication_group_id, replication_group_description=None, cache_security_group_names=None, security_group_ids=None, preferred_maintenance_window=None, notification_topic_arn=None, cache_parameter_group_name=None, notification_topic_status=None, apply_immediately=None, engine_version=None, auto_minor_version_upgrade=None, primary_cluster_id=None): """ The ModifyReplicationGroup operation modifies the settings for a replication group. :type replication_group_id: string :param replication_group_id: The identifier of the replication group to modify. :type replication_group_description: string :param replication_group_description: A description for the replication group. Maximum length is 255 characters. :type cache_security_group_names: list :param cache_security_group_names: A list of cache security group names to authorize for the clusters in this replication group. This change is asynchronously applied as soon as possible. This parameter can be used only with replication groups containing cache clusters running outside of an Amazon Virtual Private Cloud (VPC). Constraints: Must contain no more than 255 alphanumeric characters. Must not be "Default". :type security_group_ids: list :param security_group_ids: Specifies the VPC Security Groups associated with the cache clusters in the replication group. This parameter can be used only with replication groups containing cache clusters running in an Amazon Virtual Private Cloud (VPC). :type preferred_maintenance_window: string :param preferred_maintenance_window: The weekly time range (in UTC) during which replication group system maintenance can occur. Note that system maintenance may result in an outage. This change is made immediately. If you are moving this window to the current time, there must be at least 120 minutes between the current time and end of the window to ensure that pending changes are applied. :type notification_topic_arn: string :param notification_topic_arn: The Amazon Resource Name (ARN) of the SNS topic to which notifications will be sent. The SNS topic owner must be the same as the replication group owner. :type cache_parameter_group_name: string :param cache_parameter_group_name: The name of the cache parameter group to apply to all of the cache nodes in this replication group. This change is asynchronously applied as soon as possible for parameters when the ApplyImmediately parameter is specified as true for this request. :type notification_topic_status: string :param notification_topic_status: The status of the Amazon SNS notification topic for the replication group. Notifications are sent only if the status is active. Valid values: `active` | `inactive` :type apply_immediately: boolean :param apply_immediately: If `True`, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the replication group. If `False`, then changes to the nodes in the replication group are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first. Valid values: `True` | `False` Default: `False` :type engine_version: string :param engine_version: The upgraded version of the cache engine to be run on the nodes in the replication group. :type auto_minor_version_upgrade: boolean :param auto_minor_version_upgrade: Determines whether minor engine upgrades will be applied automatically to all of the cache nodes in the replication group during the maintenance window. A value of `True` allows these upgrades to occur; `False` disables automatic upgrades. :type primary_cluster_id: string :param primary_cluster_id: If this parameter is specified, ElastiCache will promote each of the nodes in the specified cache cluster to the primary role. The nodes of all other clusters in the replication group will be read replicas. """ params = {'ReplicationGroupId': replication_group_id, } if replication_group_description is not None: params['ReplicationGroupDescription'] = replication_group_description if cache_security_group_names is not None: self.build_list_params(params, cache_security_group_names, 'CacheSecurityGroupNames.member') if security_group_ids is not None: self.build_list_params(params, security_group_ids, 'SecurityGroupIds.member') if preferred_maintenance_window is not None: params['PreferredMaintenanceWindow'] = preferred_maintenance_window if notification_topic_arn is not None: params['NotificationTopicArn'] = notification_topic_arn if cache_parameter_group_name is not None: params['CacheParameterGroupName'] = cache_parameter_group_name if notification_topic_status is not None: params['NotificationTopicStatus'] = notification_topic_status if apply_immediately is not None: params['ApplyImmediately'] = str( apply_immediately).lower() if engine_version is not None: params['EngineVersion'] = engine_version if auto_minor_version_upgrade is not None: params['AutoMinorVersionUpgrade'] = str( auto_minor_version_upgrade).lower() if primary_cluster_id is not None: params['PrimaryClusterId'] = primary_cluster_id return self._make_request( action='ModifyReplicationGroup', verb='POST', path='/', params=params) def purchase_reserved_cache_nodes_offering(self, reserved_cache_nodes_offering_id, reserved_cache_node_id=None, cache_node_count=None): """ The PurchaseReservedCacheNodesOffering operation allows you to purchase a reserved cache node offering. :type reserved_cache_nodes_offering_id: string :param reserved_cache_nodes_offering_id: The ID of the reserved cache node offering to purchase. Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706 :type reserved_cache_node_id: string :param reserved_cache_node_id: A customer-specified identifier to track this reservation. Example: myreservationID :type cache_node_count: integer :param cache_node_count: The number of cache node instances to reserve. Default: `1` """ params = { 'ReservedCacheNodesOfferingId': reserved_cache_nodes_offering_id, } if reserved_cache_node_id is not None: params['ReservedCacheNodeId'] = reserved_cache_node_id if cache_node_count is not None: params['CacheNodeCount'] = cache_node_count return self._make_request( action='PurchaseReservedCacheNodesOffering', verb='POST', path='/', params=params) def reboot_cache_cluster(self, cache_cluster_id, cache_node_ids_to_reboot): """ The RebootCacheCluster operation reboots some, or all, of the cache cluster nodes within a provisioned cache cluster. This API will apply any modified cache parameter groups to the cache cluster. The reboot action takes place as soon as possible, and results in a momentary outage to the cache cluster. During the reboot, the cache cluster status is set to REBOOTING. The reboot causes the contents of the cache (for each cache cluster node being rebooted) to be lost. When the reboot is complete, a cache cluster event is created. :type cache_cluster_id: string :param cache_cluster_id: The cache cluster identifier. This parameter is stored as a lowercase string. :type cache_node_ids_to_reboot: list :param cache_node_ids_to_reboot: A list of cache cluster node IDs to reboot. A node ID is a numeric identifier (0001, 0002, etc.). To reboot an entire cache cluster, specify all of the cache cluster node IDs. """ params = {'CacheClusterId': cache_cluster_id, } self.build_list_params(params, cache_node_ids_to_reboot, 'CacheNodeIdsToReboot.member') return self._make_request( action='RebootCacheCluster', verb='POST', path='/', params=params) def reset_cache_parameter_group(self, cache_parameter_group_name, parameter_name_values, reset_all_parameters=None): """ The ResetCacheParameterGroup operation modifies the parameters of a cache parameter group to the engine or system default value. You can reset specific parameters by submitting a list of parameter names. To reset the entire cache parameter group, specify the ResetAllParameters and CacheParameterGroupName parameters. :type cache_parameter_group_name: string :param cache_parameter_group_name: The name of the cache parameter group to reset. :type reset_all_parameters: boolean :param reset_all_parameters: If true, all parameters in the cache parameter group will be reset to default values. If false, no such action occurs. Valid values: `True` | `False` :type parameter_name_values: list :param parameter_name_values: An array of parameter names to be reset. If you are not resetting the entire cache parameter group, you must specify at least one parameter name. """ params = { 'CacheParameterGroupName': cache_parameter_group_name, } self.build_complex_list_params( params, parameter_name_values, 'ParameterNameValues.member', ('ParameterName', 'ParameterValue')) if reset_all_parameters is not None: params['ResetAllParameters'] = str( reset_all_parameters).lower() return self._make_request( action='ResetCacheParameterGroup', verb='POST', path='/', params=params) def revoke_cache_security_group_ingress(self, cache_security_group_name, ec2_security_group_name, ec2_security_group_owner_id): """ The RevokeCacheSecurityGroupIngress operation revokes ingress from a cache security group.
Use this operation to disallow access from an Amazon EC2 security group that had been previously authorized. :type cache_security_group_name: string :param cache_security_group_name: The name of the cache security group to revoke ingress from. :type ec2_security_group_name: string :param ec2_security_group_name: The name of the Amazon EC2 security group to revoke access from. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The AWS account number of the Amazon EC2 security group owner. Note that this is not the same thing as an AWS access key ID - you must provide a valid AWS account number for this parameter. """ params = { 'CacheSecurityGroupName': cache_security_group_name, 'EC2SecurityGroupName': ec2_security_group_name, 'EC2SecurityGroupOwnerId': ec2_security_group_owner_id, } return self._make_request( action='RevokeCacheSecurityGroupIngress', verb='POST', path='/', params=params) def _make_request(self, action, verb, path, params): params['ContentType'] = 'JSON' response = self.make_request(action=action, verb='POST', path='/', params=params) body = response.read() boto.log.debug(body) if response.status == 200: return json.loads(body) else: raise self.ResponseError(response.status, response.reason, body) boto-2.20.1/boto/elastictranscoder/000077500000000000000000000000001225267101000172075ustar00rootroot00000000000000boto-2.20.1/boto/elastictranscoder/__init__.py000066400000000000000000000050331225267101000213210ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the AWS Elastic Transcoder service. 
:rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.elastictranscoder.layer1 import ElasticTranscoderConnection cls = ElasticTranscoderConnection return [ RegionInfo(name='us-east-1', endpoint='elastictranscoder.us-east-1.amazonaws.com', connection_cls=cls), RegionInfo(name='us-west-1', endpoint='elastictranscoder.us-west-1.amazonaws.com', connection_cls=cls), RegionInfo(name='us-west-2', endpoint='elastictranscoder.us-west-2.amazonaws.com', connection_cls=cls), RegionInfo(name='ap-northeast-1', endpoint='elastictranscoder.ap-northeast-1.amazonaws.com', connection_cls=cls), RegionInfo(name='ap-southeast-1', endpoint='elastictranscoder.ap-southeast-1.amazonaws.com', connection_cls=cls), RegionInfo(name='eu-west-1', endpoint='elastictranscoder.eu-west-1.amazonaws.com', connection_cls=cls), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/elastictranscoder/exceptions.py000066400000000000000000000030731225267101000217450ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.exception import JSONResponseError class LimitExceededException(JSONResponseError): pass class ResourceInUseException(JSONResponseError): pass class AccessDeniedException(JSONResponseError): pass class ResourceNotFoundException(JSONResponseError): pass class InternalServiceException(JSONResponseError): pass class ValidationException(JSONResponseError): pass class IncompatibleVersionException(JSONResponseError): pass boto-2.20.1/boto/elastictranscoder/layer1.py000066400000000000000000001230451225267101000207630ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.compat import json from boto.exception import JSONResponseError from boto.connection import AWSAuthConnection from boto.regioninfo import RegionInfo from boto.elastictranscoder import exceptions class ElasticTranscoderConnection(AWSAuthConnection): """ AWS Elastic Transcoder Service The AWS Elastic Transcoder Service. """ APIVersion = "2012-09-25" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "elastictranscoder.us-east-1.amazonaws.com" ResponseError = JSONResponseError _faults = { "IncompatibleVersionException": exceptions.IncompatibleVersionException, "LimitExceededException": exceptions.LimitExceededException, "ResourceInUseException": exceptions.ResourceInUseException, "AccessDeniedException": exceptions.AccessDeniedException, "ResourceNotFoundException": exceptions.ResourceNotFoundException, "InternalServiceException": exceptions.InternalServiceException, "ValidationException": exceptions.ValidationException, } def __init__(self, **kwargs): region = kwargs.get('region') if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) else: del kwargs['region'] kwargs['host'] = region.endpoint AWSAuthConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def cancel_job(self, id=None): """ The CancelJob operation cancels an unfinished job. You can only cancel a job that has a status of `Submitted`. To prevent a pipeline from starting to process a job while you're getting the job identifier, use UpdatePipelineStatus to temporarily pause the pipeline. :type id: string :param id: The identifier of the job that you want to cancel. To get a list of the jobs (including their `jobId`) that have a status of `Submitted`, use the ListJobsByStatus API action. """ uri = '/2012-09-25/jobs/{0}'.format(id) return self.make_request('DELETE', uri, expected_status=202) def create_job(self, pipeline_id=None, input_name=None, output=None, outputs=None, output_key_prefix=None, playlists=None): """ When you create a job, Elastic Transcoder returns JSON data that includes the values that you specified plus information about the job that is created. If you have specified more than one output for your jobs (for example, one output for the Kindle Fire and another output for the Apple iPhone 4s), you currently must use the Elastic Transcoder API to list the jobs (as opposed to the AWS Console). :type pipeline_id: string :param pipeline_id: The `Id` of the pipeline that you want Elastic Transcoder to use for transcoding. The pipeline determines several settings, including the Amazon S3 bucket from which Elastic Transcoder gets the files to transcode and the bucket into which Elastic Transcoder puts the transcoded files. :type input_name: dict :param input_name: A section of the request body that provides information about the file that is being transcoded. :type output: dict :param output: The `CreateJobOutput` structure. :type outputs: list :param outputs: A section of the request body that provides information about the transcoded (target) files. 
We recommend that you use the `Outputs` syntax instead of the `Output` syntax. :type output_key_prefix: string :param output_key_prefix: The value, if any, that you want Elastic Transcoder to prepend to the names of all files that this job creates, including output files, thumbnails, and playlists. :type playlists: list :param playlists: If you specify a preset in `PresetId` for which the value of `Container` is ts (MPEG-TS), Playlists contains information about the master playlists that you want Elastic Transcoder to create. We recommend that you create only one master playlist. The maximum number of master playlists in a job is 30. """ uri = '/2012-09-25/jobs' params = {} if pipeline_id is not None: params['PipelineId'] = pipeline_id if input_name is not None: params['Input'] = input_name if output is not None: params['Output'] = output if outputs is not None: params['Outputs'] = outputs if output_key_prefix is not None: params['OutputKeyPrefix'] = output_key_prefix if playlists is not None: params['Playlists'] = playlists return self.make_request('POST', uri, expected_status=201, data=json.dumps(params)) def create_pipeline(self, name=None, input_bucket=None, output_bucket=None, role=None, notifications=None, content_config=None, thumbnail_config=None): """ The CreatePipeline operation creates a pipeline with settings that you specify. :type name: string :param name: The name of the pipeline. We recommend that the name be unique within the AWS account, but uniqueness is not enforced. Constraints: Maximum 40 characters. :type input_bucket: string :param input_bucket: The Amazon S3 bucket in which you saved the media files that you want to transcode. :type output_bucket: string :param output_bucket: The Amazon S3 bucket in which you want Elastic Transcoder to save the transcoded files. (Use this, or use ContentConfig:Bucket plus ThumbnailConfig:Bucket.) Specify this value when all of the following are true: + You want to save transcoded files, thumbnails (if any), and playlists (if any) together in one bucket. + You do not want to specify the users or groups who have access to the transcoded files, thumbnails, and playlists. + You do not want to specify the permissions that Elastic Transcoder grants to the files. When Elastic Transcoder saves files in `OutputBucket`, it grants full control over the files only to the AWS account that owns the role that is specified by `Role`. + You want to associate the transcoded files and thumbnails with the Amazon S3 Standard storage class. If you want to save transcoded files and playlists in one bucket and thumbnails in another bucket, specify which users can access the transcoded files or the permissions the users have, or change the Amazon S3 storage class, omit `OutputBucket` and specify values for `ContentConfig` and `ThumbnailConfig` instead. :type role: string :param role: The IAM Amazon Resource Name (ARN) for the role that you want Elastic Transcoder to use to create the pipeline. :type notifications: dict :param notifications: The Amazon Simple Notification Service (Amazon SNS) topic that you want to notify to report job status. To receive notifications, you must also subscribe to the new topic in the Amazon SNS console. + **Progressing**: The topic ARN for the Amazon Simple Notification Service (Amazon SNS) topic that you want to notify when Elastic Transcoder has started to process a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic. 
For more information, see Create a Topic in the Amazon Simple Notification Service Developer Guide. + **Completed**: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder has finished processing a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic. + **Warning**: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters a warning condition while processing a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic. + **Error**: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters an error condition while processing a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic. :type content_config: dict :param content_config: The optional `ContentConfig` object specifies information about the Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists: which bucket to use, which users you want to have access to the files, the type of access you want users to have, and the storage class that you want to assign to the files. If you specify values for `ContentConfig`, you must also specify values for `ThumbnailConfig`. If you specify values for `ContentConfig` and `ThumbnailConfig`, omit the `OutputBucket` object. + **Bucket**: The Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists. + **Permissions** (Optional): The Permissions object specifies which users you want to have access to transcoded files and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups. + **Grantee Type**: Specify the type of value that appears in the `Grantee` object: + **Canonical**: The value in the `Grantee` object is either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. For more information about canonical user IDs, see Access Control List (ACL) Overview in the Amazon Simple Storage Service Developer Guide. For more information about using CloudFront origin access identities to require that users use CloudFront URLs instead of Amazon S3 URLs, see Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content. A canonical user ID is not the same as an AWS account number. + **Email**: The value in the `Grantee` object is the registered email address of an AWS account. + **Group**: The value in the `Grantee` object is one of the following predefined Amazon S3 groups: `AllUsers`, `AuthenticatedUsers`, or `LogDelivery`. + **Grantee**: The AWS user or group that you want to have access to transcoded files and playlists. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group + **Access**: The permission that you want to give to the AWS user that you specified in `Grantee`. Permissions are granted on the files that Elastic Transcoder adds to the bucket, including playlists and video files. Valid values include: + `READ`: The grantee can read the objects and metadata for objects that Elastic Transcoder adds to the Amazon S3 bucket. + `READ_ACP`: The grantee can read the object ACL for objects that Elastic Transcoder adds to the Amazon S3 bucket. 
+ `WRITE_ACP`: The grantee can write the ACL for the objects that Elastic Transcoder adds to the Amazon S3 bucket. + `FULL_CONTROL`: The grantee has `READ`, `READ_ACP`, and `WRITE_ACP` permissions for the objects that Elastic Transcoder adds to the Amazon S3 bucket. + **StorageClass**: The Amazon S3 storage class, `Standard` or `ReducedRedundancy`, that you want Elastic Transcoder to assign to the video files and playlists that it stores in your Amazon S3 bucket. :type thumbnail_config: dict :param thumbnail_config: The `ThumbnailConfig` object specifies several values, including the Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files, which users you want to have access to the files, the type of access you want users to have, and the storage class that you want to assign to the files. If you specify values for `ContentConfig`, you must also specify values for `ThumbnailConfig` even if you don't want to create thumbnails. If you specify values for `ContentConfig` and `ThumbnailConfig`, omit the `OutputBucket` object. + **Bucket**: The Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files. + **Permissions** (Optional): The `Permissions` object specifies which users and/or predefined Amazon S3 groups you want to have access to thumbnail files, and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups. + **GranteeType**: Specify the type of value that appears in the Grantee object: + **Canonical**: The value in the `Grantee` object is either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. A canonical user ID is not the same as an AWS account number. + **Email**: The value in the `Grantee` object is the registered email address of an AWS account. + **Group**: The value in the `Grantee` object is one of the following predefined Amazon S3 groups: `AllUsers`, `AuthenticatedUsers`, or `LogDelivery`. + **Grantee**: The AWS user or group that you want to have access to thumbnail files. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group. + **Access**: The permission that you want to give to the AWS user that you specified in `Grantee`. Permissions are granted on the thumbnail files that Elastic Transcoder adds to the bucket. Valid values include: + `READ`: The grantee can read the thumbnails and metadata for objects that Elastic Transcoder adds to the Amazon S3 bucket. + `READ_ACP`: The grantee can read the object ACL for thumbnails that Elastic Transcoder adds to the Amazon S3 bucket. + `WRITE_ACP`: The grantee can write the ACL for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket. + `FULL_CONTROL`: The grantee has `READ`, `READ_ACP`, and `WRITE_ACP` permissions for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket. + **StorageClass**: The Amazon S3 storage class, `Standard` or `ReducedRedundancy`, that you want Elastic Transcoder to assign to the thumbnails that it stores in your Amazon S3 bucket. 
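Example (an illustrative sketch; the region, bucket names, and role ARN
below are placeholders, not values required by this operation)::

    import boto.elastictranscoder

    conn = boto.elastictranscoder.connect_to_region('us-east-1')
    # Create a pipeline that reads media from one bucket and saves
    # transcoded output and thumbnails to another.
    response = conn.create_pipeline(
        name='my-pipeline',
        input_bucket='my-input-bucket',
        output_bucket='my-output-bucket',
        role='arn:aws:iam::111122223333:role/Elastic_Transcoder_Default_Role')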
""" uri = '/2012-09-25/pipelines' params = {} if name is not None: params['Name'] = name if input_bucket is not None: params['InputBucket'] = input_bucket if output_bucket is not None: params['OutputBucket'] = output_bucket if role is not None: params['Role'] = role if notifications is not None: params['Notifications'] = notifications if content_config is not None: params['ContentConfig'] = content_config if thumbnail_config is not None: params['ThumbnailConfig'] = thumbnail_config return self.make_request('POST', uri, expected_status=201, data=json.dumps(params)) def create_preset(self, name=None, description=None, container=None, video=None, audio=None, thumbnails=None): """ The CreatePreset operation creates a preset with settings that you specify. Elastic Transcoder checks the CreatePreset settings to ensure that they meet Elastic Transcoder requirements and to determine whether they comply with H.264 standards. If your settings are not valid for Elastic Transcoder, Elastic Transcoder returns an HTTP 400 response ( `ValidationException`) and does not create the preset. If the settings are valid for Elastic Transcoder but aren't strictly compliant with the H.264 standard, Elastic Transcoder creates the preset and returns a warning message in the response. This helps you determine whether your settings comply with the H.264 standard while giving you greater flexibility with respect to the video that Elastic Transcoder produces. Elastic Transcoder uses the H.264 video-compression format. For more information, see the International Telecommunication Union publication Recommendation ITU-T H.264: Advanced video coding for generic audiovisual services . :type name: string :param name: The name of the preset. We recommend that the name be unique within the AWS account, but uniqueness is not enforced. :type description: string :param description: A description of the preset. :type container: string :param container: The container type for the output file. Valid values include `mp3`, `mp4`, `ogg`, `ts`, and `webm`. :type video: dict :param video: A section of the request body that specifies the video parameters. :type audio: dict :param audio: A section of the request body that specifies the audio parameters. :type thumbnails: dict :param thumbnails: A section of the request body that specifies the thumbnail parameters, if any. """ uri = '/2012-09-25/presets' params = {} if name is not None: params['Name'] = name if description is not None: params['Description'] = description if container is not None: params['Container'] = container if video is not None: params['Video'] = video if audio is not None: params['Audio'] = audio if thumbnails is not None: params['Thumbnails'] = thumbnails return self.make_request('POST', uri, expected_status=201, data=json.dumps(params)) def delete_pipeline(self, id=None): """ The DeletePipeline operation removes a pipeline. You can only delete a pipeline that has never been used or that is not currently in use (doesn't contain any active jobs). If the pipeline is currently in use, `DeletePipeline` returns an error. :type id: string :param id: The identifier of the pipeline that you want to delete. """ uri = '/2012-09-25/pipelines/{0}'.format(id) return self.make_request('DELETE', uri, expected_status=202) def delete_preset(self, id=None): """ The DeletePreset operation removes a preset that you've added in an AWS region. You can't delete the default presets that are included with Elastic Transcoder. 
:type id: string :param id: The identifier of the preset that you want to delete. """ uri = '/2012-09-25/presets/{0}'.format(id) return self.make_request('DELETE', uri, expected_status=202) def list_jobs_by_pipeline(self, pipeline_id=None, ascending=None, page_token=None): """ The ListJobsByPipeline operation gets a list of the jobs currently in a pipeline. Elastic Transcoder returns all of the jobs currently in the specified pipeline. The response body contains one element for each job that satisfies the search criteria. :type pipeline_id: string :param pipeline_id: The ID of the pipeline for which you want to get job information. :type ascending: string :param ascending: To list jobs in chronological order by the date and time that they were submitted, enter `True`. To list jobs in reverse chronological order, enter `False`. :type page_token: string :param page_token: When Elastic Transcoder returns more than one page of results, use `pageToken` in subsequent `GET` requests to get each successive page of results. """ uri = '/2012-09-25/jobsByPipeline/{0}'.format(pipeline_id) params = {} if pipeline_id is not None: params['PipelineId'] = pipeline_id if ascending is not None: params['Ascending'] = ascending if page_token is not None: params['PageToken'] = page_token return self.make_request('GET', uri, expected_status=200, params=params) def list_jobs_by_status(self, status=None, ascending=None, page_token=None): """ The ListJobsByStatus operation gets a list of jobs that have a specified status. The response body contains one element for each job that satisfies the search criteria. :type status: string :param status: To get information about all of the jobs associated with the current AWS account that have a given status, specify the following status: `Submitted`, `Progressing`, `Complete`, `Canceled`, or `Error`. :type ascending: string :param ascending: To list jobs in chronological order by the date and time that they were submitted, enter `True`. To list jobs in reverse chronological order, enter `False`. :type page_token: string :param page_token: When Elastic Transcoder returns more than one page of results, use `pageToken` in subsequent `GET` requests to get each successive page of results. """ uri = '/2012-09-25/jobsByStatus/{0}'.format(status) params = {} if status is not None: params['Status'] = status if ascending is not None: params['Ascending'] = ascending if page_token is not None: params['PageToken'] = page_token return self.make_request('GET', uri, expected_status=200, params=params) def list_pipelines(self): """ The ListPipelines operation gets a list of the pipelines associated with the current AWS account. """ uri = '/2012-09-25/pipelines' return self.make_request('GET', uri, expected_status=200) def list_presets(self): """ The ListPresets operation gets a list of the default presets included with Elastic Transcoder and the presets that you've added in an AWS region. """ uri = '/2012-09-25/presets' return self.make_request('GET', uri, expected_status=200) def read_job(self, id=None): """ The ReadJob operation returns detailed information about a job. :type id: string :param id: The identifier of the job for which you want to get detailed information. """ uri = '/2012-09-25/jobs/{0}'.format(id) return self.make_request('GET', uri, expected_status=200) def read_pipeline(self, id=None): """ The ReadPipeline operation gets detailed information about a pipeline. :type id: string :param id: The identifier of the pipeline to read.
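Example (a minimal sketch; assumes ``conn`` is an
:class:`ElasticTranscoderConnection` and the pipeline ID is a
placeholder)::

    pipeline = conn.read_pipeline('1111111111111-abcde1')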
""" uri = '/2012-09-25/pipelines/{0}'.format(id) return self.make_request('GET', uri, expected_status=200) def read_preset(self, id=None): """ The ReadPreset operation gets detailed information about a preset. :type id: string :param id: The identifier of the preset for which you want to get detailed information. """ uri = '/2012-09-25/presets/{0}'.format(id) return self.make_request('GET', uri, expected_status=200) def test_role(self, role=None, input_bucket=None, output_bucket=None, topics=None): """ The TestRole operation tests the IAM role used to create the pipeline. The `TestRole` action lets you determine whether the IAM role you are using has sufficient permissions to let Elastic Transcoder perform tasks associated with the transcoding process. The action attempts to assume the specified IAM role, checks read access to the input and output buckets, and tries to send a test notification to Amazon SNS topics that you specify. :type role: string :param role: The IAM Amazon Resource Name (ARN) for the role that you want Elastic Transcoder to test. :type input_bucket: string :param input_bucket: The Amazon S3 bucket that contains media files to be transcoded. The action attempts to read from this bucket. :type output_bucket: string :param output_bucket: The Amazon S3 bucket that Elastic Transcoder will write transcoded media files to. The action attempts to read from this bucket. :type topics: list :param topics: The ARNs of one or more Amazon Simple Notification Service (Amazon SNS) topics that you want the action to send a test notification to. """ uri = '/2012-09-25/roleTests' params = {} if role is not None: params['Role'] = role if input_bucket is not None: params['InputBucket'] = input_bucket if output_bucket is not None: params['OutputBucket'] = output_bucket if topics is not None: params['Topics'] = topics return self.make_request('POST', uri, expected_status=200, data=json.dumps(params)) def update_pipeline(self, id, name=None, input_bucket=None, role=None, notifications=None, content_config=None, thumbnail_config=None): """ Use the `UpdatePipeline` operation to update settings for a pipeline. When you change pipeline settings, your changes take effect immediately. Jobs that you have already submitted and that Elastic Transcoder has not started to process are affected in addition to jobs that you submit after you change settings. :type id: string :param id: The ID of the pipeline that you want to update. :type name: string :param name: The name of the pipeline. We recommend that the name be unique within the AWS account, but uniqueness is not enforced. Constraints: Maximum 40 characters :type input_bucket: string :param input_bucket: The Amazon S3 bucket in which you saved the media files that you want to transcode and the graphics that you want to use as watermarks. :type role: string :param role: The IAM Amazon Resource Name (ARN) for the role that you want Elastic Transcoder to use to transcode jobs for this pipeline. :type notifications: dict :param notifications: The Amazon Simple Notification Service (Amazon SNS) topic or topics to notify in order to report job status. To receive notifications, you must also subscribe to the new topic in the Amazon SNS console. 
:type content_config: dict :param content_config: The optional `ContentConfig` object specifies information about the Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists: which bucket to use, which users you want to have access to the files, the type of access you want users to have, and the storage class that you want to assign to the files. If you specify values for `ContentConfig`, you must also specify values for `ThumbnailConfig`. If you specify values for `ContentConfig` and `ThumbnailConfig`, omit the `OutputBucket` object. + **Bucket**: The Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists. + **Permissions** (Optional): The Permissions object specifies which users you want to have access to transcoded files and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups. + **Grantee Type**: Specify the type of value that appears in the `Grantee` object: + **Canonical**: The value in the `Grantee` object is either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. For more information about canonical user IDs, see Access Control List (ACL) Overview in the Amazon Simple Storage Service Developer Guide. For more information about using CloudFront origin access identities to require that users use CloudFront URLs instead of Amazon S3 URLs, see Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content. A canonical user ID is not the same as an AWS account number. + **Email**: The value in the `Grantee` object is the registered email address of an AWS account. + **Group**: The value in the `Grantee` object is one of the following predefined Amazon S3 groups: `AllUsers`, `AuthenticatedUsers`, or `LogDelivery`. + **Grantee**: The AWS user or group that you want to have access to transcoded files and playlists. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group + **Access**: The permission that you want to give to the AWS user that you specified in `Grantee`. Permissions are granted on the files that Elastic Transcoder adds to the bucket, including playlists and video files. Valid values include: + `READ`: The grantee can read the objects and metadata for objects that Elastic Transcoder adds to the Amazon S3 bucket. + `READ_ACP`: The grantee can read the object ACL for objects that Elastic Transcoder adds to the Amazon S3 bucket. + `WRITE_ACP`: The grantee can write the ACL for the objects that Elastic Transcoder adds to the Amazon S3 bucket. + `FULL_CONTROL`: The grantee has `READ`, `READ_ACP`, and `WRITE_ACP` permissions for the objects that Elastic Transcoder adds to the Amazon S3 bucket. + **StorageClass**: The Amazon S3 storage class, `Standard` or `ReducedRedundancy`, that you want Elastic Transcoder to assign to the video files and playlists that it stores in your Amazon S3 bucket. :type thumbnail_config: dict :param thumbnail_config: The `ThumbnailConfig` object specifies several values, including the Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files, which users you want to have access to the files, the type of access you want users to have, and the storage class that you want to assign to the files. 
If you specify values for `ContentConfig`, you must also specify values for `ThumbnailConfig` even if you don't want to create thumbnails. If you specify values for `ContentConfig` and `ThumbnailConfig`, omit the `OutputBucket` object. + **Bucket**: The Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files. + **Permissions** (Optional): The `Permissions` object specifies which users and/or predefined Amazon S3 groups you want to have access to thumbnail files, and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups. + **GranteeType**: Specify the type of value that appears in the Grantee object: + **Canonical**: The value in the `Grantee` object is either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. A canonical user ID is not the same as an AWS account number. + **Email**: The value in the `Grantee` object is the registered email address of an AWS account. + **Group**: The value in the `Grantee` object is one of the following predefined Amazon S3 groups: `AllUsers`, `AuthenticatedUsers`, or `LogDelivery`. + **Grantee**: The AWS user or group that you want to have access to thumbnail files. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group. + **Access**: The permission that you want to give to the AWS user that you specified in `Grantee`. Permissions are granted on the thumbnail files that Elastic Transcoder adds to the bucket. Valid values include: + `READ`: The grantee can read the thumbnails and metadata for objects that Elastic Transcoder adds to the Amazon S3 bucket. + `READ_ACP`: The grantee can read the object ACL for thumbnails that Elastic Transcoder adds to the Amazon S3 bucket. + `WRITE_ACP`: The grantee can write the ACL for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket. + `FULL_CONTROL`: The grantee has `READ`, `READ_ACP`, and `WRITE_ACP` permissions for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket. + **StorageClass**: The Amazon S3 storage class, `Standard` or `ReducedRedundancy`, that you want Elastic Transcoder to assign to the thumbnails that it stores in your Amazon S3 bucket. """ uri = '/2012-09-25/pipelines/{0}'.format(id) params = {} if name is not None: params['Name'] = name if input_bucket is not None: params['InputBucket'] = input_bucket if role is not None: params['Role'] = role if notifications is not None: params['Notifications'] = notifications if content_config is not None: params['ContentConfig'] = content_config if thumbnail_config is not None: params['ThumbnailConfig'] = thumbnail_config return self.make_request('PUT', uri, expected_status=200, data=json.dumps(params)) def update_pipeline_notifications(self, id=None, notifications=None): """ With the UpdatePipelineNotifications operation, you can update Amazon Simple Notification Service (Amazon SNS) notifications for a pipeline. When you update notifications for a pipeline, Elastic Transcoder returns the values that you specified in the request. :type id: string :param id: The identifier of the pipeline for which you want to change notification settings. :type notifications: dict :param notifications: The topic ARN for the Amazon Simple Notification Service (Amazon SNS) topic that you want to notify to report job status. 
To receive notifications, you must also subscribe to the new topic in the Amazon SNS console. + **Progressing**: The topic ARN for the Amazon Simple Notification Service (Amazon SNS) topic that you want to notify when Elastic Transcoder has started to process jobs that are added to this pipeline. This is the ARN that Amazon SNS returned when you created the topic. + **Completed**: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder has finished processing a job. This is the ARN that Amazon SNS returned when you created the topic. + **Warning**: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters a warning condition. This is the ARN that Amazon SNS returned when you created the topic. + **Error**: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters an error condition. This is the ARN that Amazon SNS returned when you created the topic. """ uri = '/2012-09-25/pipelines/{0}/notifications'.format(id) params = {} if id is not None: params['Id'] = id if notifications is not None: params['Notifications'] = notifications return self.make_request('POST', uri, expected_status=200, data=json.dumps(params)) def update_pipeline_status(self, id=None, status=None): """ The UpdatePipelineStatus operation pauses or reactivates a pipeline, so that the pipeline stops or restarts the processing of jobs. Changing the pipeline status is useful if you want to cancel one or more jobs. You can't cancel jobs after Elastic Transcoder has started processing them; if you pause the pipeline to which you submitted the jobs, you have more time to get the job IDs for the jobs that you want to cancel, and to send a CancelJob request. :type id: string :param id: The identifier of the pipeline to update. :type status: string :param status: The desired status of the pipeline: + `Active`: The pipeline is processing jobs. + `Paused`: The pipeline is not currently processing jobs. """ uri = '/2012-09-25/pipelines/{0}/status'.format(id) params = {} if id is not None: params['Id'] = id if status is not None: params['Status'] = status return self.make_request('POST', uri, expected_status=200, data=json.dumps(params)) def make_request(self, verb, resource, headers=None, data='', expected_status=None, params=None): if headers is None: headers = {} response = AWSAuthConnection.make_request( self, verb, resource, headers=headers, data=data) body = json.load(response) if response.status == expected_status: return body else: error_type = response.getheader('x-amzn-ErrorType').split(':')[0] error_class = self._faults.get(error_type, self.ResponseError) raise error_class(response.status, response.reason, body) boto-2.20.1/boto/emr/000077500000000000000000000000001225267101000142615ustar00rootroot00000000000000boto-2.20.1/boto/emr/__init__.py000066400000000000000000000062531225267101000164000ustar00rootroot00000000000000# Copyright (c) 2010 Spotify AB # Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
# All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
This module provides an interface to the Elastic MapReduce (EMR)
service from AWS.
"""
from connection import EmrConnection
from step import Step, StreamingStep, JarStep
from bootstrap_action import BootstrapAction
from boto.regioninfo import RegionInfo


def regions():
    """
    Get all available regions for the Amazon Elastic MapReduce service.

    :rtype: list
    :return: A list of :class:`boto.regioninfo.RegionInfo`
    """
    return [RegionInfo(name='us-east-1',
                       endpoint='elasticmapreduce.us-east-1.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='us-west-1',
                       endpoint='us-west-1.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='us-west-2',
                       endpoint='us-west-2.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='ap-northeast-1',
                       endpoint='ap-northeast-1.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='ap-southeast-1',
                       endpoint='ap-southeast-1.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='ap-southeast-2',
                       endpoint='ap-southeast-2.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='eu-west-1',
                       endpoint='eu-west-1.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            RegionInfo(name='sa-east-1',
                       endpoint='sa-east-1.elasticmapreduce.amazonaws.com',
                       connection_cls=EmrConnection),
            ]


def connect_to_region(region_name, **kw_params):
    for region in regions():
        if region.name == region_name:
            return region.connect(**kw_params)
    return None
boto-2.20.1/boto/emr/bootstrap_action.py000066400000000000000000000034031225267101000202050ustar00rootroot00000000000000# Copyright (c) 2010 Spotify AB
# Copyright (c) 2010 Yelp
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class BootstrapAction(object): def __init__(self, name, path, bootstrap_action_args): self.name = name self.path = path if isinstance(bootstrap_action_args, basestring): bootstrap_action_args = [bootstrap_action_args] self.bootstrap_action_args = bootstrap_action_args def args(self): args = [] if self.bootstrap_action_args: args.extend(self.bootstrap_action_args) return args def __repr__(self): return '%s.%s(name=%r, path=%r, bootstrap_action_args=%r)' % ( self.__class__.__module__, self.__class__.__name__, self.name, self.path, self.bootstrap_action_args) boto-2.20.1/boto/emr/connection.py000066400000000000000000000634311225267101000170010ustar00rootroot00000000000000# Copyright (c) 2010 Spotify AB # Copyright (c) 2010-2011 Yelp # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Represents a connection to the EMR service """ import types import boto import boto.utils from boto.ec2.regioninfo import RegionInfo from boto.emr.emrobject import AddInstanceGroupsResponse, BootstrapActionList, \ Cluster, ClusterSummaryList, HadoopStep, \ InstanceGroupList, InstanceList, JobFlow, \ JobFlowStepList, \ ModifyInstanceGroupsResponse, \ RunJobFlowResponse, StepSummaryList from boto.emr.step import JarStep from boto.connection import AWSQueryConnection from boto.exception import EmrResponseError class EmrConnection(AWSQueryConnection): APIVersion = boto.config.get('Boto', 'emr_version', '2009-03-31') DefaultRegionName = boto.config.get('Boto', 'emr_region_name', 'us-east-1') DefaultRegionEndpoint = boto.config.get('Boto', 'emr_region_endpoint', 'elasticmapreduce.us-east-1.amazonaws.com') ResponseError = EmrResponseError # Constants for AWS Console debugging DebuggingJar = 's3n://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar' DebuggingArgs = 's3n://us-east-1.elasticmapreduce/libs/state-pusher/0.1/fetch' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) # Many of the EMR hostnames are of the form: # ..amazonaws.com # rather than the more common: # ..amazonaws.com # so we need to explicitly set the region_name and service_name # for the SigV4 signing. 
self.auth_region_name = self.region.name self.auth_service_name = 'elasticmapreduce' def _required_auth_capability(self): return ['hmac-v4'] def describe_cluster(self, cluster_id): """ Describes an Elastic MapReduce cluster :type cluster_id: str :param cluster_id: The cluster id of interest """ params = { 'ClusterId': cluster_id } return self.get_object('DescribeCluster', params, Cluster) def describe_jobflow(self, jobflow_id): """ Describes a single Elastic MapReduce job flow :type jobflow_id: str :param jobflow_id: The job flow id of interest """ jobflows = self.describe_jobflows(jobflow_ids=[jobflow_id]) if jobflows: return jobflows[0] def describe_jobflows(self, states=None, jobflow_ids=None, created_after=None, created_before=None): """ Retrieve all the Elastic MapReduce job flows on your account :type states: list :param states: A list of strings with job flow states wanted :type jobflow_ids: list :param jobflow_ids: A list of job flow IDs :type created_after: datetime :param created_after: Bound on job flow creation time :type created_before: datetime :param created_before: Bound on job flow creation time """ params = {} if states: self.build_list_params(params, states, 'JobFlowStates.member') if jobflow_ids: self.build_list_params(params, jobflow_ids, 'JobFlowIds.member') if created_after: params['CreatedAfter'] = created_after.strftime( boto.utils.ISO8601) if created_before: params['CreatedBefore'] = created_before.strftime( boto.utils.ISO8601) return self.get_list('DescribeJobFlows', params, [('member', JobFlow)]) def describe_step(self, cluster_id, step_id): """ Describe an Elastic MapReduce step :type cluster_id: str :param cluster_id: The cluster id of interest :type step_id: str :param step_id: The step id of interest """ params = { 'ClusterId': cluster_id, 'StepId': step_id } return self.get_object('DescribeStep', params, HadoopStep) def list_bootstrap_actions(self, cluster_id, marker=None): """ Get a list of bootstrap actions for an Elastic MapReduce cluster :type cluster_id: str :param cluster_id: The cluster id of interest :type marker: str :param marker: Pagination marker """ params = { 'ClusterId': cluster_id } if marker: params['Marker'] = marker return self.get_object('ListBootstrapActions', params, BootstrapActionList) def list_clusters(self, created_after=None, created_before=None, cluster_states=None, marker=None): """ List Elastic MapReduce clusters with optional filtering :type created_after: datetime :param created_after: Bound on cluster creation time :type created_before: datetime :param created_before: Bound on cluster creation time :type cluster_states: list :param cluster_states: Bound on cluster states :type marker: str :param marker: Pagination marker """ params = {} if created_after: params['CreatedAfter'] = created_after.strftime( boto.utils.ISO8601) if created_before: params['CreatedBefore'] = created_before.strftime( boto.utils.ISO8601) if marker: params['Marker'] = marker if cluster_states: self.build_list_params(params, cluster_states, 'ClusterStates.member') return self.get_object('ListClusters', params, ClusterSummaryList) def list_instance_groups(self, cluster_id, marker=None): """ List EC2 instance groups in a cluster :type cluster_id: str :param cluster_id: The cluster id of interest :type marker: str :param marker: Pagination marker """ params = { 'ClusterId': cluster_id } if marker: params['Marker'] = marker return self.get_object('ListInstanceGroups', params, InstanceGroupList) def list_instances(self, cluster_id, instance_group_id=None, 
                       instance_group_types=None, marker=None):
        """
        List EC2 instances in a cluster

        :type cluster_id: str
        :param cluster_id: The cluster id of interest
        :type instance_group_id: str
        :param instance_group_id: The EC2 instance group id of interest
        :type instance_group_types: list
        :param instance_group_types: Filter by EC2 instance group type
        :type marker: str
        :param marker: Pagination marker
        """
        params = {
            'ClusterId': cluster_id
        }

        if instance_group_id:
            params['InstanceGroupId'] = instance_group_id
        if marker:
            params['Marker'] = marker
        if instance_group_types:
            self.build_list_params(params, instance_group_types,
                                   'InstanceGroupTypeList.member')

        return self.get_object('ListInstances', params, InstanceList)

    def list_steps(self, cluster_id, step_states=None, marker=None):
        """
        List cluster steps

        :type cluster_id: str
        :param cluster_id: The cluster id of interest
        :type step_states: list
        :param step_states: Filter by step states
        :type marker: str
        :param marker: Pagination marker
        """
        params = {
            'ClusterId': cluster_id
        }

        if marker:
            params['Marker'] = marker
        if step_states:
            self.build_list_params(params, step_states,
                                   'StepStateList.member')

        return self.get_object('ListSteps', params, StepSummaryList)

    def terminate_jobflow(self, jobflow_id):
        """
        Terminate an Elastic MapReduce job flow

        :type jobflow_id: str
        :param jobflow_id: A jobflow id
        """
        self.terminate_jobflows([jobflow_id])

    def terminate_jobflows(self, jobflow_ids):
        """
        Terminate a list of Elastic MapReduce job flows

        :type jobflow_ids: list
        :param jobflow_ids: A list of job flow IDs
        """
        params = {}
        self.build_list_params(params, jobflow_ids, 'JobFlowIds.member')
        return self.get_status('TerminateJobFlows', params, verb='POST')

    def add_jobflow_steps(self, jobflow_id, steps):
        """
        Adds steps to a jobflow

        :type jobflow_id: str
        :param jobflow_id: The job flow id
        :type steps: list(boto.emr.Step)
        :param steps: A list of steps to add to the job
        """
        if not isinstance(steps, types.ListType):
            steps = [steps]
        params = {}
        params['JobFlowId'] = jobflow_id

        # Step args
        step_args = [self._build_step_args(step) for step in steps]
        params.update(self._build_step_list(step_args))

        return self.get_object(
            'AddJobFlowSteps', params, JobFlowStepList, verb='POST')

    def add_instance_groups(self, jobflow_id, instance_groups):
        """
        Adds instance groups to a running cluster.

        :type jobflow_id: str
        :param jobflow_id: The id of the jobflow which will take the
            new instance groups
        :type instance_groups: list(boto.emr.InstanceGroup)
        :param instance_groups: A list of instance groups to add to the job
        """
        if not isinstance(instance_groups, types.ListType):
            instance_groups = [instance_groups]
        params = {}
        params['JobFlowId'] = jobflow_id
        params.update(self._build_instance_group_list_args(instance_groups))

        return self.get_object('AddInstanceGroups', params,
                               AddInstanceGroupsResponse, verb='POST')

    def modify_instance_groups(self, instance_group_ids, new_sizes):
        """
        Modify the number of nodes and configuration settings in an
        instance group.
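
        A minimal usage sketch (editor's illustration, not part of the
        original docstring; the instance group id below is hypothetical)::

            import boto.emr
            conn = boto.emr.connect_to_region('us-east-1')
            # Resize the group to five instances.
            conn.modify_instance_groups(['ig-EXAMPLEID'], [5])
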
:type instance_group_ids: list(str) :param instance_group_ids: A list of the ID's of the instance groups to be modified :type new_sizes: list(int) :param new_sizes: A list of the new sizes for each instance group """ if not isinstance(instance_group_ids, types.ListType): instance_group_ids = [instance_group_ids] if not isinstance(new_sizes, types.ListType): new_sizes = [new_sizes] instance_groups = zip(instance_group_ids, new_sizes) params = {} for k, ig in enumerate(instance_groups): # could be wrong - the example amazon gives uses # InstanceRequestCount, while the api documentation # says InstanceCount params['InstanceGroups.member.%d.InstanceGroupId' % (k+1) ] = ig[0] params['InstanceGroups.member.%d.InstanceCount' % (k+1) ] = ig[1] return self.get_object('ModifyInstanceGroups', params, ModifyInstanceGroupsResponse, verb='POST') def run_jobflow(self, name, log_uri=None, ec2_keyname=None, availability_zone=None, master_instance_type='m1.small', slave_instance_type='m1.small', num_instances=1, action_on_failure='TERMINATE_JOB_FLOW', keep_alive=False, enable_debugging=False, hadoop_version=None, steps=[], bootstrap_actions=[], instance_groups=None, additional_info=None, ami_version=None, api_params=None, visible_to_all_users=None, job_flow_role=None): """ Runs a job flow :type name: str :param name: Name of the job flow :type log_uri: str :param log_uri: URI of the S3 bucket to place logs :type ec2_keyname: str :param ec2_keyname: EC2 key used for the instances :type availability_zone: str :param availability_zone: EC2 availability zone of the cluster :type master_instance_type: str :param master_instance_type: EC2 instance type of the master :type slave_instance_type: str :param slave_instance_type: EC2 instance type of the slave nodes :type num_instances: int :param num_instances: Number of instances in the Hadoop cluster :type action_on_failure: str :param action_on_failure: Action to take if a step terminates :type keep_alive: bool :param keep_alive: Denotes whether the cluster should stay alive upon completion :type enable_debugging: bool :param enable_debugging: Denotes whether AWS console debugging should be enabled. :type hadoop_version: str :param hadoop_version: Version of Hadoop to use. This no longer defaults to '0.20' and now uses the AMI default. :type steps: list(boto.emr.Step) :param steps: List of steps to add with the job :type bootstrap_actions: list(boto.emr.BootstrapAction) :param bootstrap_actions: List of bootstrap actions that run before Hadoop starts. :type instance_groups: list(boto.emr.InstanceGroup) :param instance_groups: Optional list of instance groups to use when creating this job. NB: When provided, this argument supersedes num_instances and master/slave_instance_type. :type ami_version: str :param ami_version: Amazon Machine Image (AMI) version to use for instances. Values accepted by EMR are '1.0', '2.0', and 'latest'; EMR currently defaults to '1.0' if you don't set 'ami_version'. :type additional_info: JSON str :param additional_info: A JSON string for selecting additional features :type api_params: dict :param api_params: a dictionary of additional parameters to pass directly to the EMR API (so you don't have to upgrade boto to use new EMR features). You can also delete an API parameter by setting it to None. :type visible_to_all_users: bool :param visible_to_all_users: Whether the job flow is visible to all IAM users of the AWS account associated with the job flow. 
If this value is set to ``True``, all IAM users of that AWS account can view and (if they have the proper policy permissions set) manage the job flow. If it is set to ``False``, only the IAM user that created the job flow can view and manage it. :type job_flow_role: str :param job_flow_role: An IAM role for the job flow. The EC2 instances of the job flow assume this role. The default role is ``EMRJobflowDefault``. In order to use the default role, you must have already created it using the CLI. :rtype: str :return: The jobflow id """ params = {} if action_on_failure: params['ActionOnFailure'] = action_on_failure if log_uri: params['LogUri'] = log_uri params['Name'] = name # Common instance args common_params = self._build_instance_common_args(ec2_keyname, availability_zone, keep_alive, hadoop_version) params.update(common_params) # NB: according to the AWS API's error message, we must # "configure instances either using instance count, master and # slave instance type or instance groups but not both." # # Thus we switch here on the truthiness of instance_groups. if not instance_groups: # Instance args (the common case) instance_params = self._build_instance_count_and_type_args( master_instance_type, slave_instance_type, num_instances) params.update(instance_params) else: # Instance group args (for spot instances or a heterogenous cluster) list_args = self._build_instance_group_list_args(instance_groups) instance_params = dict( ('Instances.%s' % k, v) for k, v in list_args.iteritems() ) params.update(instance_params) # Debugging step from EMR API docs if enable_debugging: debugging_step = JarStep(name='Setup Hadoop Debugging', action_on_failure='TERMINATE_JOB_FLOW', main_class=None, jar=self.DebuggingJar, step_args=self.DebuggingArgs) steps.insert(0, debugging_step) # Step args if steps: step_args = [self._build_step_args(step) for step in steps] params.update(self._build_step_list(step_args)) if bootstrap_actions: bootstrap_action_args = [self._build_bootstrap_action_args(bootstrap_action) for bootstrap_action in bootstrap_actions] params.update(self._build_bootstrap_action_list(bootstrap_action_args)) if ami_version: params['AmiVersion'] = ami_version if additional_info is not None: params['AdditionalInfo'] = additional_info if api_params: for key, value in api_params.iteritems(): if value is None: params.pop(key, None) else: params[key] = value if visible_to_all_users is not None: if visible_to_all_users: params['VisibleToAllUsers'] = 'true' else: params['VisibleToAllUsers'] = 'false' if job_flow_role is not None: params['JobFlowRole'] = job_flow_role response = self.get_object( 'RunJobFlow', params, RunJobFlowResponse, verb='POST') return response.jobflowid def set_termination_protection(self, jobflow_id, termination_protection_status): """ Set termination protection on specified Elastic MapReduce job flows :type jobflow_ids: list or str :param jobflow_ids: A list of job flow IDs :type termination_protection_status: bool :param termination_protection_status: Termination protection status """ assert termination_protection_status in (True, False) params = {} params['TerminationProtected'] = (termination_protection_status and "true") or "false" self.build_list_params(params, [jobflow_id], 'JobFlowIds.member') return self.get_status('SetTerminationProtection', params, verb='POST') def set_visible_to_all_users(self, jobflow_id, visibility): """ Set whether specified Elastic Map Reduce job flows are visible to all IAM users :type jobflow_ids: list or str :param jobflow_ids: A list of job flow 
IDs :type visibility: bool :param visibility: Visibility """ assert visibility in (True, False) params = {} params['VisibleToAllUsers'] = (visibility and "true") or "false" self.build_list_params(params, [jobflow_id], 'JobFlowIds.member') return self.get_status('SetVisibleToAllUsers', params, verb='POST') def _build_bootstrap_action_args(self, bootstrap_action): bootstrap_action_params = {} bootstrap_action_params['ScriptBootstrapAction.Path'] = bootstrap_action.path try: bootstrap_action_params['Name'] = bootstrap_action.name except AttributeError: pass args = bootstrap_action.args() if args: self.build_list_params(bootstrap_action_params, args, 'ScriptBootstrapAction.Args.member') return bootstrap_action_params def _build_step_args(self, step): step_params = {} step_params['ActionOnFailure'] = step.action_on_failure step_params['HadoopJarStep.Jar'] = step.jar() main_class = step.main_class() if main_class: step_params['HadoopJarStep.MainClass'] = main_class args = step.args() if args: self.build_list_params(step_params, args, 'HadoopJarStep.Args.member') step_params['Name'] = step.name return step_params def _build_bootstrap_action_list(self, bootstrap_actions): if not isinstance(bootstrap_actions, types.ListType): bootstrap_actions = [bootstrap_actions] params = {} for i, bootstrap_action in enumerate(bootstrap_actions): for key, value in bootstrap_action.iteritems(): params['BootstrapActions.member.%s.%s' % (i + 1, key)] = value return params def _build_step_list(self, steps): if not isinstance(steps, types.ListType): steps = [steps] params = {} for i, step in enumerate(steps): for key, value in step.iteritems(): params['Steps.member.%s.%s' % (i+1, key)] = value return params def _build_instance_common_args(self, ec2_keyname, availability_zone, keep_alive, hadoop_version): """ Takes a number of parameters used when starting a jobflow (as specified in run_jobflow() above). Returns a comparable dict for use in making a RunJobFlow request. """ params = { 'Instances.KeepJobFlowAliveWhenNoSteps': str(keep_alive).lower(), } if hadoop_version: params['Instances.HadoopVersion'] = hadoop_version if ec2_keyname: params['Instances.Ec2KeyName'] = ec2_keyname if availability_zone: params['Instances.Placement.AvailabilityZone'] = availability_zone return params def _build_instance_count_and_type_args(self, master_instance_type, slave_instance_type, num_instances): """ Takes a master instance type (string), a slave instance type (string), and a number of instances. Returns a comparable dict for use in making a RunJobFlow request. """ params = {'Instances.MasterInstanceType': master_instance_type, 'Instances.SlaveInstanceType': slave_instance_type, 'Instances.InstanceCount': num_instances} return params def _build_instance_group_args(self, instance_group): """ Takes an InstanceGroup; returns a dict that, when its keys are properly prefixed, can be used for describing InstanceGroups in RunJobFlow or AddInstanceGroups requests. """ params = {'InstanceCount': instance_group.num_instances, 'InstanceRole': instance_group.role, 'InstanceType': instance_group.type, 'Name': instance_group.name, 'Market': instance_group.market} if instance_group.market == 'SPOT': params['BidPrice'] = instance_group.bidprice return params def _build_instance_group_list_args(self, instance_groups): """ Takes a list of InstanceGroups, or a single InstanceGroup. Returns a comparable dict for use in making a RunJobFlow or AddInstanceGroups request. 
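
        For illustration (editor's note, not in the original docstring;
        the group names and ``conn`` object are arbitrary), two groups
        flatten into numbered query parameters::

            from boto.emr.instance_group import InstanceGroup

            groups = [InstanceGroup(1, 'MASTER', 'm1.small',
                                    'ON_DEMAND', 'master'),
                      InstanceGroup(2, 'CORE', 'm1.small',
                                    'ON_DEMAND', 'core')]
            params = conn._build_instance_group_list_args(groups)
            # params['InstanceGroups.member.1.InstanceRole'] == 'MASTER'
            # params['InstanceGroups.member.2.InstanceCount'] == 2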
""" if not isinstance(instance_groups, types.ListType): instance_groups = [instance_groups] params = {} for i, instance_group in enumerate(instance_groups): ig_dict = self._build_instance_group_args(instance_group) for key, value in ig_dict.iteritems(): params['InstanceGroups.member.%d.%s' % (i+1, key)] = value return params boto-2.20.1/boto/emr/emrobject.py000066400000000000000000000267071225267101000166210ustar00rootroot00000000000000# Copyright (c) 2010 Spotify AB # Copyright (c) 2010 Jeremy Thurgood # Copyright (c) 2010-2011 Yelp # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ This module contains EMR response objects """ from boto.resultset import ResultSet class EmrObject(object): Fields = set() def __init__(self, connection=None): self.connection = connection def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name in self.Fields: setattr(self, name.lower(), value) class RunJobFlowResponse(EmrObject): Fields = set(['JobFlowId']) class AddInstanceGroupsResponse(EmrObject): Fields = set(['InstanceGroupIds', 'JobFlowId']) class ModifyInstanceGroupsResponse(EmrObject): Fields = set(['RequestId']) class Arg(EmrObject): def __init__(self, connection=None): self.value = None def endElement(self, name, value, connection): self.value = value class StepId(Arg): pass class JobFlowStepList(EmrObject): def __ini__(self, connection=None): self.connection = connection self.stepids = None def startElement(self, name, attrs, connection): if name == 'StepIds': self.stepids = ResultSet([('member', StepId)]) return self.stepids else: return None class BootstrapAction(EmrObject): Fields = set([ 'Args', 'Name', 'Path', 'ScriptPath', ]) def startElement(self, name, attrs, connection): if name == 'Args': self.args = ResultSet([('member', Arg)]) return self.args class KeyValue(EmrObject): Fields = set([ 'Key', 'Value', ]) class Step(EmrObject): Fields = set([ 'ActionOnFailure', 'CreationDateTime', 'EndDateTime', 'Jar', 'LastStateChangeReason', 'MainClass', 'Name', 'StartDateTime', 'State', ]) def __init__(self, connection=None): self.connection = connection self.args = None def startElement(self, name, attrs, connection): if name == 'Args': self.args = ResultSet([('member', Arg)]) return self.args if name == 'Properties': self.properties = ResultSet([('member', KeyValue)]) return self.properties class InstanceGroup(EmrObject): Fields = set([ 'BidPrice', 'CreationDateTime', 'EndDateTime', 'InstanceGroupId', 'InstanceRequestCount', 'InstanceRole', 'InstanceRunningCount', 
        'InstanceType',
        'LastStateChangeReason',
        'LaunchGroup',
        'Market',
        'Name',
        'ReadyDateTime',
        'StartDateTime',
        'State',
    ])


class JobFlow(EmrObject):
    Fields = set([
        'AmiVersion',
        'AvailabilityZone',
        'CreationDateTime',
        'Ec2KeyName',
        'EndDateTime',
        'HadoopVersion',
        'Id',
        'InstanceCount',
        'JobFlowId',
        'KeepJobFlowAliveWhenNoSteps',
        'LastStateChangeReason',
        'LogUri',
        'MasterInstanceId',
        'MasterInstanceType',
        'MasterPublicDnsName',
        'Name',
        'NormalizedInstanceHours',
        'ReadyDateTime',
        'RequestId',
        'SlaveInstanceType',
        'StartDateTime',
        'State',
        'TerminationProtected',
        'Type',
        'Value',
        'VisibleToAllUsers',
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.steps = None
        self.instancegroups = None
        self.bootstrapactions = None

    def startElement(self, name, attrs, connection):
        if name == 'Steps':
            self.steps = ResultSet([('member', Step)])
            return self.steps
        elif name == 'InstanceGroups':
            self.instancegroups = ResultSet([('member', InstanceGroup)])
            return self.instancegroups
        elif name == 'BootstrapActions':
            self.bootstrapactions = ResultSet([('member', BootstrapAction)])
            return self.bootstrapactions
        else:
            return None


class ClusterTimeline(EmrObject):
    Fields = set([
        'CreationDateTime',
        'ReadyDateTime',
        'EndDateTime'
    ])


class ClusterStatus(EmrObject):
    Fields = set([
        'State',
        'StateChangeReason',
        'Timeline'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.timeline = None

    def startElement(self, name, attrs, connection):
        if name == 'Timeline':
            self.timeline = ClusterTimeline()
            return self.timeline
        else:
            return None


class Ec2InstanceAttributes(EmrObject):
    Fields = set([
        'Ec2KeyName',
        'Ec2SubnetId',
        'Ec2AvailabilityZone',
        'IamInstanceProfile'
    ])


class Application(EmrObject):
    Fields = set([
        'Name',
        'Version',
        'Args',
        'AdditionalInfo'
    ])


class Cluster(EmrObject):
    Fields = set([
        'Id',
        'Name',
        'LogUri',
        'RequestedAmiVersion',
        'RunningAmiVersion',
        'AutoTerminate',
        'TerminationProtected',
        'VisibleToAllUsers'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.status = None
        self.ec2instanceattributes = None
        self.applications = None

    def startElement(self, name, attrs, connection):
        if name == 'Status':
            self.status = ClusterStatus()
            return self.status
        elif name == 'EC2InstanceAttributes':
            self.ec2instanceattributes = Ec2InstanceAttributes()
            return self.ec2instanceattributes
        elif name == 'Applications':
            self.applications = ResultSet([('member', Application)])
            return self.applications
        else:
            return None


class ClusterSummary(Cluster):
    Fields = set([
        'Id',
        'Name'
    ])


class ClusterSummaryList(EmrObject):
    Fields = set([
        'Marker'
    ])

    def __init__(self, connection):
        self.connection = connection
        self.clusters = None

    def startElement(self, name, attrs, connection):
        if name == 'Clusters':
            self.clusters = ResultSet([('member', ClusterSummary)])
            return self.clusters
        else:
            return None


class StepConfig(EmrObject):
    Fields = set([
        'Jar',
        'MainClass'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.properties = None
        self.args = None

    def startElement(self, name, attrs, connection):
        if name == 'Properties':
            self.properties = ResultSet([('member', KeyValue)])
            return self.properties
        elif name == 'Args':
            self.args = ResultSet([('member', Arg)])
            return self.args
        else:
            return None


class HadoopStep(EmrObject):
    Fields = set([
        'Id',
        'Name',
        'ActionOnFailure'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.config = None
        self.status = None

    def startElement(self, name, attrs, connection):
        if name == 'Config':
            self.config = StepConfig()
            return self.config
        elif name == 'Status':
            self.status = ClusterStatus()
            return self.status
        else:
            return None


class InstanceGroupInfo(EmrObject):
    Fields = set([
        'Id',
        'Name',
        'Market',
        'InstanceGroupType',
        'BidPrice',
        'InstanceType',
        'RequestedInstanceCount',
        'RunningInstanceCount'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.status = None

    def startElement(self, name, attrs, connection):
        if name == 'Status':
            self.status = ClusterStatus()
            return self.status
        else:
            return None


class InstanceGroupList(EmrObject):
    Fields = set([
        'Marker'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.instancegroups = None

    def startElement(self, name, attrs, connection):
        if name == 'InstanceGroups':
            self.instancegroups = ResultSet([('member', InstanceGroupInfo)])
            return self.instancegroups
        else:
            return None


class InstanceInfo(EmrObject):
    Fields = set([
        'Id',
        'Ec2InstanceId',
        'PublicDnsName',
        'PublicIpAddress',
        'PrivateDnsName',
        'PrivateIpAddress'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.status = None

    def startElement(self, name, attrs, connection):
        if name == 'Status':
            self.status = ClusterStatus()
            return self.status
        else:
            return None


class InstanceList(EmrObject):
    Fields = set([
        'Marker'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.instances = None

    def startElement(self, name, attrs, connection):
        if name == 'Instances':
            self.instances = ResultSet([('member', InstanceInfo)])
            return self.instances
        else:
            return None


class StepSummary(EmrObject):
    Fields = set([
        'Id',
        'Name'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.status = None

    def startElement(self, name, attrs, connection):
        if name == 'Status':
            self.status = ClusterStatus()
            return self.status
        else:
            return None


class StepSummaryList(EmrObject):
    Fields = set([
        'Marker'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.steps = None

    def startElement(self, name, attrs, connection):
        if name == 'Steps':
            self.steps = ResultSet([('member', StepSummary)])
            return self.steps
        else:
            return None


class BootstrapActionList(EmrObject):
    Fields = set([
        'Marker'
    ])

    def __init__(self, connection=None):
        self.connection = connection
        self.actions = None

    def startElement(self, name, attrs, connection):
        if name == 'BootstrapActions':
            self.actions = ResultSet([('member', BootstrapAction)])
            return self.actions
        else:
            return None
boto-2.20.1/boto/emr/instance_group.py000066400000000000000000000040371225267101000176570ustar00rootroot00000000000000#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.


class InstanceGroup(object):
    def __init__(self, num_instances, role, type, market, name,
                 bidprice=None):
        self.num_instances = num_instances
        self.role = role
        self.type = type
        self.market = market
        self.name = name
        if market == 'SPOT':
            if not bidprice:
                raise ValueError('bidprice must be specified if market == SPOT')
            self.bidprice = str(bidprice)

    def __repr__(self):
        if self.market == 'SPOT':
            return '%s.%s(name=%r, num_instances=%r, role=%r, type=%r, market = %r, bidprice = %r)' % (
                self.__class__.__module__, self.__class__.__name__,
                self.name, self.num_instances, self.role, self.type,
                self.market, self.bidprice)
        else:
            return '%s.%s(name=%r, num_instances=%r, role=%r, type=%r, market = %r)' % (
                self.__class__.__module__, self.__class__.__name__,
                self.name, self.num_instances, self.role, self.type,
                self.market)
boto-2.20.1/boto/emr/step.py000066400000000000000000000213621225267101000156120ustar00rootroot00000000000000# Copyright (c) 2010 Spotify AB
# Copyright (c) 2010-2011 Yelp
#
# Permission is hereby granted, free of charge, to any person obtaining a
:type step_args: list(str) :param step_args: A list of arguments to pass to the step """ self.name = name self._jar = jar self._main_class = main_class self.action_on_failure = action_on_failure if isinstance(step_args, basestring): step_args = [step_args] self.step_args = step_args def jar(self): return self._jar def args(self): args = [] if self.step_args: args.extend(self.step_args) return args def main_class(self): return self._main_class class StreamingStep(Step): """ Hadoop streaming step """ def __init__(self, name, mapper, reducer=None, combiner=None, action_on_failure='TERMINATE_JOB_FLOW', cache_files=None, cache_archives=None, step_args=None, input=None, output=None, jar='/home/hadoop/contrib/streaming/hadoop-streaming.jar'): """ A hadoop streaming elastic mapreduce step :type name: str :param name: The name of the step :type mapper: str :param mapper: The mapper URI :type reducer: str :param reducer: The reducer URI :type combiner: str :param combiner: The combiner URI. Only works for Hadoop 0.20 and later! :type action_on_failure: str :param action_on_failure: An action, defined in the EMR docs to take on failure. :type cache_files: list(str) :param cache_files: A list of cache files to be bundled with the job :type cache_archives: list(str) :param cache_archives: A list of jar archives to be bundled with the job :type step_args: list(str) :param step_args: A list of arguments to pass to the step :type input: str or a list of str :param input: The input uri :type output: str :param output: The output uri :type jar: str :param jar: The hadoop streaming jar. This can be either a local path on the master node, or an s3:// URI. """ self.name = name self.mapper = mapper self.reducer = reducer self.combiner = combiner self.action_on_failure = action_on_failure self.cache_files = cache_files self.cache_archives = cache_archives self.input = input self.output = output self._jar = jar if isinstance(step_args, basestring): step_args = [step_args] self.step_args = step_args def jar(self): return self._jar def main_class(self): return None def args(self): args = [] # put extra args BEFORE -mapper and -reducer so that e.g. 
-libjar # will work if self.step_args: args.extend(self.step_args) args.extend(['-mapper', self.mapper]) if self.combiner: args.extend(['-combiner', self.combiner]) if self.reducer: args.extend(['-reducer', self.reducer]) else: args.extend(['-jobconf', 'mapred.reduce.tasks=0']) if self.input: if isinstance(self.input, list): for input in self.input: args.extend(('-input', input)) else: args.extend(('-input', self.input)) if self.output: args.extend(('-output', self.output)) if self.cache_files: for cache_file in self.cache_files: args.extend(('-cacheFile', cache_file)) if self.cache_archives: for cache_archive in self.cache_archives: args.extend(('-cacheArchive', cache_archive)) return args def __repr__(self): return '%s.%s(name=%r, mapper=%r, reducer=%r, action_on_failure=%r, cache_files=%r, cache_archives=%r, step_args=%r, input=%r, output=%r, jar=%r)' % ( self.__class__.__module__, self.__class__.__name__, self.name, self.mapper, self.reducer, self.action_on_failure, self.cache_files, self.cache_archives, self.step_args, self.input, self.output, self._jar) class ScriptRunnerStep(JarStep): ScriptRunnerJar = 's3n://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar' def __init__(self, name, **kw): JarStep.__init__(self, name, self.ScriptRunnerJar, **kw) class PigBase(ScriptRunnerStep): BaseArgs = ['s3n://us-east-1.elasticmapreduce/libs/pig/pig-script', '--base-path', 's3n://us-east-1.elasticmapreduce/libs/pig/'] class InstallPigStep(PigBase): """ Install pig on emr step """ InstallPigName = 'Install Pig' def __init__(self, pig_versions='latest'): step_args = [] step_args.extend(self.BaseArgs) step_args.extend(['--install-pig']) step_args.extend(['--pig-versions', pig_versions]) ScriptRunnerStep.__init__(self, self.InstallPigName, step_args=step_args) class PigStep(PigBase): """ Pig script step """ def __init__(self, name, pig_file, pig_versions='latest', pig_args=[]): step_args = [] step_args.extend(self.BaseArgs) step_args.extend(['--pig-versions', pig_versions]) step_args.extend(['--run-pig-script', '--args', '-f', pig_file]) step_args.extend(pig_args) ScriptRunnerStep.__init__(self, name, step_args=step_args) class HiveBase(ScriptRunnerStep): BaseArgs = ['s3n://us-east-1.elasticmapreduce/libs/hive/hive-script', '--base-path', 's3n://us-east-1.elasticmapreduce/libs/hive/'] class InstallHiveStep(HiveBase): """ Install Hive on EMR step """ InstallHiveName = 'Install Hive' def __init__(self, hive_versions='latest', hive_site=None): step_args = [] step_args.extend(self.BaseArgs) step_args.extend(['--install-hive']) step_args.extend(['--hive-versions', hive_versions]) if hive_site is not None: step_args.extend(['--hive-site=%s' % hive_site]) ScriptRunnerStep.__init__(self, self.InstallHiveName, step_args=step_args) class HiveStep(HiveBase): """ Hive script step """ def __init__(self, name, hive_file, hive_versions='latest', hive_args=None): step_args = [] step_args.extend(self.BaseArgs) step_args.extend(['--hive-versions', hive_versions]) step_args.extend(['--run-hive-script', '--args', '-f', hive_file]) if hive_args is not None: step_args.extend(hive_args) ScriptRunnerStep.__init__(self, name, step_args=step_args) boto-2.20.1/boto/exception.py000066400000000000000000000354231225267101000160550ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Exception classes - Subclassing allows you to check for specific errors """ import base64 import xml.sax from boto import handler from boto.resultset import ResultSet class BotoClientError(StandardError): """ General Boto Client error (error accessing AWS) """ def __init__(self, reason, *args): StandardError.__init__(self, reason, *args) self.reason = reason def __repr__(self): return 'BotoClientError: %s' % self.reason def __str__(self): return 'BotoClientError: %s' % self.reason class SDBPersistenceError(StandardError): pass class StoragePermissionsError(BotoClientError): """ Permissions error when accessing a bucket or key on a storage service. """ pass class S3PermissionsError(StoragePermissionsError): """ Permissions error when accessing a bucket or key on S3. """ pass class GSPermissionsError(StoragePermissionsError): """ Permissions error when accessing a bucket or key on GS. """ pass class BotoServerError(StandardError): def __init__(self, status, reason, body=None, *args): StandardError.__init__(self, status, reason, body, *args) self.status = status self.reason = reason self.body = body or '' self.request_id = None self.error_code = None self._error_message = None self.box_usage = None # Attempt to parse the error response. If body isn't present, # then just ignore the error response. if self.body: try: h = handler.XmlHandlerWrapper(self, self) h.parseString(self.body) except (TypeError, xml.sax.SAXParseException), pe: # Remove unparsable message body so we don't include garbage # in exception. But first, save self.body in self.error_message # because occasionally we get error messages from Eucalyptus # that are just text strings that we want to preserve. 
self.message = self.body self.body = None def __getattr__(self, name): if name == 'error_message': return self.message if name == 'code': return self.error_code raise AttributeError def __setattr__(self, name, value): if name == 'error_message': self.message = value else: super(BotoServerError, self).__setattr__(name, value) def __repr__(self): return '%s: %s %s\n%s' % (self.__class__.__name__, self.status, self.reason, self.body) def __str__(self): return '%s: %s %s\n%s' % (self.__class__.__name__, self.status, self.reason, self.body) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name in ('RequestId', 'RequestID'): self.request_id = value elif name == 'Code': self.error_code = value elif name == 'Message': self.message = value elif name == 'BoxUsage': self.box_usage = value return None def _cleanupParsedProperties(self): self.request_id = None self.error_code = None self.message = None self.box_usage = None class ConsoleOutput: def __init__(self, parent=None): self.parent = parent self.instance_id = None self.timestamp = None self.comment = None self.output = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'instanceId': self.instance_id = value elif name == 'output': self.output = base64.b64decode(value) else: setattr(self, name, value) class StorageCreateError(BotoServerError): """ Error creating a bucket or key on a storage service. """ def __init__(self, status, reason, body=None): self.bucket = None BotoServerError.__init__(self, status, reason, body) def endElement(self, name, value, connection): if name == 'BucketName': self.bucket = value else: return BotoServerError.endElement(self, name, value, connection) class S3CreateError(StorageCreateError): """ Error creating a bucket or key on S3. """ pass class GSCreateError(StorageCreateError): """ Error creating a bucket or key on GS. """ pass class StorageCopyError(BotoServerError): """ Error copying a key on a storage service. """ pass class S3CopyError(StorageCopyError): """ Error copying a key on S3. """ pass class GSCopyError(StorageCopyError): """ Error copying a key on GS. """ pass class SQSError(BotoServerError): """ General Error on Simple Queue Service. """ def __init__(self, status, reason, body=None): self.detail = None self.type = None BotoServerError.__init__(self, status, reason, body) def startElement(self, name, attrs, connection): return BotoServerError.startElement(self, name, attrs, connection) def endElement(self, name, value, connection): if name == 'Detail': self.detail = value elif name == 'Type': self.type = value else: return BotoServerError.endElement(self, name, value, connection) def _cleanupParsedProperties(self): BotoServerError._cleanupParsedProperties(self) for p in ('detail', 'type'): setattr(self, p, None) class SQSDecodeError(BotoClientError): """ Error when decoding an SQS message. """ def __init__(self, reason, message): BotoClientError.__init__(self, reason, message) self.message = message def __repr__(self): return 'SQSDecodeError: %s' % self.reason def __str__(self): return 'SQSDecodeError: %s' % self.reason class StorageResponseError(BotoServerError): """ Error in response from a storage service. 
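
    A typical handling pattern (editor's sketch; the connection object
    and bucket name are hypothetical)::

        try:
            bucket = conn.get_bucket('no-such-bucket')
        except StorageResponseError, e:
            print e.status, e.reason, e.error_code
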
""" def __init__(self, status, reason, body=None): self.resource = None BotoServerError.__init__(self, status, reason, body) def startElement(self, name, attrs, connection): return BotoServerError.startElement(self, name, attrs, connection) def endElement(self, name, value, connection): if name == 'Resource': self.resource = value else: return BotoServerError.endElement(self, name, value, connection) def _cleanupParsedProperties(self): BotoServerError._cleanupParsedProperties(self) for p in ('resource'): setattr(self, p, None) class S3ResponseError(StorageResponseError): """ Error in response from S3. """ pass class GSResponseError(StorageResponseError): """ Error in response from GS. """ pass class EC2ResponseError(BotoServerError): """ Error in response from EC2. """ def __init__(self, status, reason, body=None): self.errors = None self._errorResultSet = [] BotoServerError.__init__(self, status, reason, body) self.errors = [ (e.error_code, e.error_message) \ for e in self._errorResultSet ] if len(self.errors): self.error_code, self.error_message = self.errors[0] def startElement(self, name, attrs, connection): if name == 'Errors': self._errorResultSet = ResultSet([('Error', _EC2Error)]) return self._errorResultSet else: return None def endElement(self, name, value, connection): if name == 'RequestID': self.request_id = value else: return None # don't call subclass here def _cleanupParsedProperties(self): BotoServerError._cleanupParsedProperties(self) self._errorResultSet = [] for p in ('errors'): setattr(self, p, None) class JSONResponseError(BotoServerError): """ This exception expects the fully parsed and decoded JSON response body to be passed as the body parameter. :ivar status: The HTTP status code. :ivar reason: The HTTP reason message. :ivar body: The Python dict that represents the decoded JSON response body. :ivar error_message: The full description of the AWS error encountered. :ivar error_code: A short string that identifies the AWS error (e.g. ConditionalCheckFailedException) """ def __init__(self, status, reason, body=None, *args): self.status = status self.reason = reason self.body = body if self.body: self.error_message = self.body.get('message', None) self.error_code = self.body.get('__type', None) if self.error_code: self.error_code = self.error_code.split('#')[-1] class DynamoDBResponseError(JSONResponseError): pass class SWFResponseError(JSONResponseError): pass class EmrResponseError(BotoServerError): """ Error in response from EMR """ pass class _EC2Error: def __init__(self, connection=None): self.connection = connection self.error_code = None self.error_message = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Code': self.error_code = value elif name == 'Message': self.error_message = value else: return None class SDBResponseError(BotoServerError): """ Error in responses from SDB. """ pass class AWSConnectionError(BotoClientError): """ General error connecting to Amazon Web Services. """ pass class StorageDataError(BotoClientError): """ Error receiving data from a storage service. """ pass class S3DataError(StorageDataError): """ Error receiving data from S3. """ pass class GSDataError(StorageDataError): """ Error receiving data from GS. 
""" pass class InvalidUriError(Exception): """Exception raised when URI is invalid.""" def __init__(self, message): Exception.__init__(self, message) self.message = message class InvalidAclError(Exception): """Exception raised when ACL XML is invalid.""" def __init__(self, message): Exception.__init__(self, message) self.message = message class InvalidCorsError(Exception): """Exception raised when CORS XML is invalid.""" def __init__(self, message): Exception.__init__(self, message) self.message = message class NoAuthHandlerFound(Exception): """Is raised when no auth handlers were found ready to authenticate.""" pass class InvalidLifecycleConfigError(Exception): """Exception raised when GCS lifecycle configuration XML is invalid.""" def __init__(self, message): Exception.__init__(self, message) self.message = message # Enum class for resumable upload failure disposition. class ResumableTransferDisposition(object): # START_OVER means an attempt to resume an existing transfer failed, # and a new resumable upload should be attempted (without delay). START_OVER = 'START_OVER' # WAIT_BEFORE_RETRY means the resumable transfer failed but that it can # be retried after a time delay within the current process. WAIT_BEFORE_RETRY = 'WAIT_BEFORE_RETRY' # ABORT_CUR_PROCESS means the resumable transfer failed and that # delaying/retrying within the current process will not help. If # resumable transfer included a state tracker file the upload can be # retried again later, in another process (e.g., a later run of gsutil). ABORT_CUR_PROCESS = 'ABORT_CUR_PROCESS' # ABORT means the resumable transfer failed in a way that it does not # make sense to continue in the current process, and further that the # current tracker ID should not be preserved (in a tracker file if one # was specified at resumable upload start time). If the user tries again # later (e.g., a separate run of gsutil) it will get a new resumable # upload ID. ABORT = 'ABORT' class ResumableUploadException(Exception): """ Exception raised for various resumable upload problems. self.disposition is of type ResumableTransferDisposition. """ def __init__(self, message, disposition): Exception.__init__(self, message, disposition) self.message = message self.disposition = disposition def __repr__(self): return 'ResumableUploadException("%s", %s)' % ( self.message, self.disposition) class ResumableDownloadException(Exception): """ Exception raised for various resumable download problems. self.disposition is of type ResumableTransferDisposition. """ def __init__(self, message, disposition): Exception.__init__(self, message, disposition) self.message = message self.disposition = disposition def __repr__(self): return 'ResumableDownloadException("%s", %s)' % ( self.message, self.disposition) class TooManyRecordsException(Exception): """ Exception raised when a search of Route53 records returns more records than requested. """ def __init__(self, message): Exception.__init__(self, message) self.message = message class PleaseRetryException(Exception): """ Indicates a request should be retried. 
""" def __init__(self, message, response=None): self.message = message self.response = response def __repr__(self): return 'PleaseRetryException("%s", %s)' % ( self.message, self.response ) boto-2.20.1/boto/file/000077500000000000000000000000001225267101000144155ustar00rootroot00000000000000boto-2.20.1/boto/file/README000066400000000000000000000051241225267101000152770ustar00rootroot00000000000000Handling of file:// URIs: This directory contains code to map basic boto connection, bucket, and key operations onto files in the local filesystem, in support of file:// URI operations. Bucket storage operations cannot be mapped completely onto a file system because of the different naming semantics in these types of systems: the former have a flat name space of objects within each named bucket; the latter have a hierarchical name space of files, and nothing corresponding to the notion of a bucket. The mapping we selected was guided by the desire to achieve meaningful semantics for a useful subset of operations that can be implemented polymorphically across both types of systems. We considered several possibilities for mapping path names to bucket + object name: 1) bucket = the file system root or local directory (for absolute vs relative file:// URIs, respectively) and object = remainder of path. We discarded this choice because the get_all_keys() method doesn't make sense under this approach: Enumerating all files under the root or current directory could include more than the caller intended. For example, StorageUri("file:///usr/bin/X11/vim").get_all_keys() would enumerate all files in the file system. 2) bucket is treated mostly as an anonymous placeholder, with the object name holding the URI path (minus the "file://" part). Two sub-options, for object enumeration (the get_all_keys() call): a) disallow get_all_keys(). This isn't great, as then the caller must know the URI type before deciding whether to make this call. b) return the single key for which this "bucket" was defined. Note that this option means the app cannot use this API for listing contents of the file system. While that makes the API less generally useful, it avoids the potentially dangerous/unintended consequences noted in option (1) above. We selected 2b, resulting in a class hierarchy where StorageUri is an abstract class, with FileStorageUri and BucketStorageUri subclasses. Some additional notes: BucketStorageUri and FileStorageUri each implement these methods: - clone_replace_name() creates a same-type URI with a different object name - which is useful for various enumeration cases (e.g., implementing wildcarding in a command line utility). - names_container() determines if the given URI names a container for multiple objects/files - i.e., a bucket or directory. - names_singleton() determines if the given URI names an individual object or file. - is_file_uri() and is_cloud_uri() determine if the given URI is a FileStorageUri or BucketStorageUri, respectively boto-2.20.1/boto/file/__init__.py000077500000000000000000000023141225267101000165310ustar00rootroot00000000000000# Copyright 2010 Google Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from connection import FileConnection as Connection from key import Key from bucket import Bucket __all__ = ['Connection', 'Key', 'Bucket'] boto-2.20.1/boto/file/bucket.py000066400000000000000000000077531225267101000162600ustar00rootroot00000000000000# Copyright 2010 Google Inc. # Copyright (c) 2011, Nexenta Systems Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # File representation of bucket, for use with "file://" URIs. import os from key import Key from boto.file.simpleresultset import SimpleResultSet from boto.s3.bucketlistresultset import BucketListResultSet class Bucket(object): def __init__(self, name, contained_key): """Instantiate an anonymous file-based Bucket around a single key. """ self.name = name self.contained_key = contained_key def __iter__(self): return iter(BucketListResultSet(self)) def __str__(self): return 'anonymous bucket for file://' + self.contained_key def delete_key(self, key_name, headers=None, version_id=None, mfa_token=None): """ Deletes a key from the bucket. :type key_name: string :param key_name: The key name to delete :type version_id: string :param version_id: Unused in this subclass. :type mfa_token: tuple or list of strings :param mfa_token: Unused in this subclass. """ os.remove(key_name) def get_all_keys(self, headers=None, **params): """ This method returns the single key around which this anonymous Bucket was instantiated. 
:rtype: SimpleResultSet :return: The result from file system listing the keys requested """ key = Key(self.name, self.contained_key) return SimpleResultSet([key]) def get_key(self, key_name, headers=None, version_id=None, key_type=Key.KEY_REGULAR_FILE): """ Check to see if a particular key exists within the bucket. Returns: An instance of a Key object or None :type key_name: string :param key_name: The name of the key to retrieve :type version_id: string :param version_id: Unused in this subclass. :type key_type: integer :param key_type: Type of the Key - Regular File or input/output Stream :rtype: :class:`boto.file.key.Key` :returns: A Key object from this bucket. """ if key_name == '-': return Key(self.name, '-', key_type=Key.KEY_STREAM_READABLE) else: fp = open(key_name, 'rb') return Key(self.name, key_name, fp) def new_key(self, key_name=None, key_type=Key.KEY_REGULAR_FILE): """ Creates a new key :type key_name: string :param key_name: The name of the key to create :rtype: :class:`boto.file.key.Key` :returns: An instance of the newly created key object """ if key_name == '-': return Key(self.name, '-', key_type=Key.KEY_STREAM_WRITABLE) else: dir_name = os.path.dirname(key_name) if dir_name and not os.path.exists(dir_name): os.makedirs(dir_name) fp = open(key_name, 'wb') return Key(self.name, key_name, fp) boto-2.20.1/boto/file/connection.py000077500000000000000000000027101225267101000171310ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # File representation of connection, for use with "file://" URIs. from bucket import Bucket class FileConnection(object): def __init__(self, file_storage_uri): # FileConnections are per-file storage URI. self.file_storage_uri = file_storage_uri def get_bucket(self, bucket_name, validate=True, headers=None): return Bucket(bucket_name, self.file_storage_uri.object_name) boto-2.20.1/boto/file/key.py000077500000000000000000000153231225267101000155660ustar00rootroot00000000000000# Copyright 2010 Google Inc. # Copyright (c) 2011, Nexenta Systems Inc.
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # File representation of key, for use with "file://" URIs. import os, shutil, StringIO import sys from boto.exception import BotoClientError class Key(object): KEY_STREAM_READABLE = 0x01 KEY_STREAM_WRITABLE = 0x02 KEY_STREAM = (KEY_STREAM_READABLE | KEY_STREAM_WRITABLE) KEY_REGULAR_FILE = 0x00 def __init__(self, bucket, name, fp=None, key_type=KEY_REGULAR_FILE): self.bucket = bucket self.full_path = name if name == '-': self.name = None self.size = None else: self.name = name self.size = os.stat(name).st_size self.key_type = key_type if key_type == self.KEY_STREAM_READABLE: self.fp = sys.stdin self.full_path = '' elif key_type == self.KEY_STREAM_WRITABLE: self.fp = sys.stdout self.full_path = '' else: self.fp = fp def __str__(self): return 'file://' + self.full_path def get_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False): """ Retrieves a file from a Key :type fp: file :param fp: File pointer to put the data into :type headers: dict :param headers: ignored in this subclass. :type cb: function :param cb: ignored in this subclass. :type num_cb: int :param num_cb: ignored in this subclass. """ if self.key_type & self.KEY_STREAM_WRITABLE: raise BotoClientError('Stream is not readable') elif self.key_type & self.KEY_STREAM_READABLE: key_file = self.fp else: key_file = open(self.full_path, 'rb') try: shutil.copyfileobj(key_file, fp) finally: key_file.close() def set_contents_from_file(self, fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None): """ Store an object in a file using the name of the Key object as the key in file URI and the contents of the file pointed to by 'fp' as the contents. :type fp: file :param fp: the file whose contents to upload :type headers: dict :param headers: ignored in this subclass. :type replace: bool :param replace: If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won't overwrite it. The default value is True which will overwrite the object. :type cb: function :param cb: ignored in this subclass. :type num_cb: int :param num_cb: ignored in this subclass. :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: ignored in this subclass. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. :param md5: ignored in this subclass.
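        Example (an illustrative sketch; the paths are placeholders and
        the Bucket is built directly rather than via a file:// storage
        URI):

            from boto.file.bucket import Bucket

            bucket = Bucket('anon', '/tmp/dst.txt')
            key = bucket.new_key('/tmp/dst.txt')
            with open('/tmp/src.txt', 'rb') as fp:
                key.set_contents_from_file(fp)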
""" if self.key_type & self.KEY_STREAM_READABLE: raise BotoClientError('Stream is not writable') elif self.key_type & self.KEY_STREAM_WRITABLE: key_file = self.fp else: if not replace and os.path.exists(self.full_path): return key_file = open(self.full_path, 'wb') try: shutil.copyfileobj(fp, key_file) finally: key_file.close() def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=None, torrent=False, version_id=None, res_download_handler=None, response_headers=None): """ Copy contents from the current file to the file pointed to by 'fp'. :type fp: File-like object :param fp: :type headers: dict :param headers: Unused in this subclass. :type cb: function :param cb: Unused in this subclass. :type cb: int :param num_cb: Unused in this subclass. :type torrent: bool :param torrent: Unused in this subclass. :type res_upload_handler: ResumableDownloadHandler :param res_download_handler: Unused in this subclass. :type response_headers: dict :param response_headers: Unused in this subclass. """ shutil.copyfileobj(self.fp, fp) def get_contents_as_string(self, headers=None, cb=None, num_cb=10, torrent=False): """ Retrieve file data from the Key, and return contents as a string. :type headers: dict :param headers: ignored in this subclass. :type cb: function :param cb: ignored in this subclass. :type cb: int :param num_cb: ignored in this subclass. :type cb: int :param num_cb: ignored in this subclass. :type torrent: bool :param torrent: ignored in this subclass. :rtype: string :returns: The contents of the file as a string """ fp = StringIO.StringIO() self.get_contents_to_file(fp) return fp.getvalue() def is_stream(self): return (self.key_type & self.KEY_STREAM) def close(self): """ Closes fp associated with underlying file. Caller should call this method when done with this class, to avoid using up OS resources (e.g., when iterating over a large number of files). """ self.fp.close() boto-2.20.1/boto/file/simpleresultset.py000077500000000000000000000024511225267101000202400ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class SimpleResultSet(list): """ ResultSet facade built from a simple list, rather than via XML parsing. 
""" def __init__(self, input_list): for x in input_list: self.append(x) self.is_truncated = False boto-2.20.1/boto/fps/000077500000000000000000000000001225267101000142665ustar00rootroot00000000000000boto-2.20.1/boto/fps/__init__.py000066400000000000000000000021151225267101000163760ustar00rootroot00000000000000# Copyright (c) 2008, Chris Moyer http://coredumped.org # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/fps/connection.py000066400000000000000000000337761225267101000170170ustar00rootroot00000000000000# Copyright (c) 2012 Andy Davidoff http://www.disruptek.com/ # Copyright (c) 2010 Jason R. Coombs http://www.jaraco.com/ # Copyright (c) 2008 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
import urllib import uuid from boto.connection import AWSQueryConnection from boto.fps.exception import ResponseErrorFactory from boto.fps.response import ResponseFactory import boto.fps.response __all__ = ['FPSConnection'] decorated_attrs = ('action', 'response') def add_attrs_from(func, to): for attr in decorated_attrs: setattr(to, attr, getattr(func, attr, None)) return to def complex_amounts(*fields): def decorator(func): def wrapper(self, *args, **kw): for field in filter(kw.has_key, fields): amount = kw.pop(field) kw[field + '.Value'] = getattr(amount, 'Value', str(amount)) kw[field + '.CurrencyCode'] = getattr(amount, 'CurrencyCode', self.currencycode) return func(self, *args, **kw) wrapper.__doc__ = "{0}\nComplex Amounts: {1}".format(func.__doc__, ', '.join(fields)) return add_attrs_from(func, to=wrapper) return decorator def requires(*groups): def decorator(func): def wrapper(*args, **kw): hasgroup = lambda x: len(x) == len(filter(kw.has_key, x)) if 1 != len(filter(hasgroup, groups)): message = ' OR '.join(['+'.join(g) for g in groups]) message = "{0} requires {1} argument(s)" \ "".format(getattr(func, 'action', 'Method'), message) raise KeyError(message) return func(*args, **kw) message = ' OR '.join(['+'.join(g) for g in groups]) wrapper.__doc__ = "{0}\nRequired: {1}".format(func.__doc__, message) return add_attrs_from(func, to=wrapper) return decorator def needs_caller_reference(func): def wrapper(*args, **kw): kw.setdefault('CallerReference', uuid.uuid4()) return func(*args, **kw) wrapper.__doc__ = "{0}\nUses CallerReference, defaults " \ "to uuid.uuid4()".format(func.__doc__) return add_attrs_from(func, to=wrapper) def api_action(*api): def decorator(func): action = ''.join(api or map(str.capitalize, func.func_name.split('_'))) response = ResponseFactory(action) if hasattr(boto.fps.response, action + 'Response'): response = getattr(boto.fps.response, action + 'Response') def wrapper(self, *args, **kw): return func(self, action, response, *args, **kw) wrapper.action, wrapper.response = action, response wrapper.__doc__ = "FPS {0} API call\n{1}".format(action, func.__doc__) return wrapper return decorator class FPSConnection(AWSQueryConnection): APIVersion = '2010-08-28' ResponseError = ResponseErrorFactory currencycode = 'USD' def __init__(self, *args, **kw): self.currencycode = kw.pop('CurrencyCode', self.currencycode) kw.setdefault('host', 'fps.sandbox.amazonaws.com') AWSQueryConnection.__init__(self, *args, **kw) def _required_auth_capability(self): return ['fps'] @needs_caller_reference @complex_amounts('SettlementAmount') @requires(['CreditInstrumentId', 'SettlementAmount.Value', 'SenderTokenId', 'SettlementAmount.CurrencyCode']) @api_action() def settle_debt(self, action, response, **kw): """ Allows a caller to initiate a transaction that atomically transfers money from a sender's payment instrument to the recipient, while decreasing corresponding debt balance. """ return self.get_object(action, kw, response) @requires(['TransactionId']) @api_action() def get_transaction_status(self, action, response, **kw): """ Gets the latest status of a transaction. """ return self.get_object(action, kw, response) @requires(['StartDate']) @api_action() def get_account_activity(self, action, response, **kw): """ Returns transactions for a given date range. """ return self.get_object(action, kw, response) @requires(['TransactionId']) @api_action() def get_transaction(self, action, response, **kw): """ Returns all details of a transaction. 
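        Example (an illustrative sketch; the credentials and transaction
        id are placeholders):

            conn = FPSConnection(aws_access_key_id='<access key>',
                                 aws_secret_access_key='<secret key>')
            status = conn.get_transaction(TransactionId='<transaction id>')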
""" return self.get_object(action, kw, response) @api_action() def get_outstanding_debt_balance(self, action, response): """ Returns the total outstanding balance for all the credit instruments for the given creditor account. """ return self.get_object(action, {}, response) @requires(['PrepaidInstrumentId']) @api_action() def get_prepaid_balance(self, action, response, **kw): """ Returns the balance available on the given prepaid instrument. """ return self.get_object(action, kw, response) @api_action() def get_total_prepaid_liability(self, action, response): """ Returns the total liability held by the given account corresponding to all the prepaid instruments owned by the account. """ return self.get_object(action, {}, response) @api_action() def get_account_balance(self, action, response): """ Returns the account balance for an account in real time. """ return self.get_object(action, {}, response) @needs_caller_reference @requires(['PaymentInstruction', 'TokenType']) @api_action() def install_payment_instruction(self, action, response, **kw): """ Installs a payment instruction for caller. """ return self.get_object(action, kw, response) @needs_caller_reference @requires(['returnURL', 'pipelineName']) def cbui_url(self, **kw): """ Generate a signed URL for the Co-Branded service API given arguments as payload. """ sandbox = 'sandbox' in self.host and 'payments-sandbox' or 'payments' endpoint = 'authorize.{0}.amazon.com'.format(sandbox) base = '/cobranded-ui/actions/start' validpipelines = ('SingleUse', 'MultiUse', 'Recurring', 'Recipient', 'SetupPrepaid', 'SetupPostpaid', 'EditToken') assert kw['pipelineName'] in validpipelines, "Invalid pipelineName" kw.update({ 'signatureMethod': 'HmacSHA256', 'signatureVersion': '2', }) kw.setdefault('callerKey', self.aws_access_key_id) safestr = lambda x: x is not None and str(x) or '' safequote = lambda x: urllib.quote(safestr(x), safe='~') payload = sorted([(k, safequote(v)) for k, v in kw.items()]) encoded = lambda p: '&'.join([k + '=' + v for k, v in p]) canonical = '\n'.join(['GET', endpoint, base, encoded(payload)]) signature = self._auth_handler.sign_string(canonical) payload += [('signature', safequote(signature))] payload.sort() return 'https://{0}{1}?{2}'.format(endpoint, base, encoded(payload)) @needs_caller_reference @complex_amounts('TransactionAmount') @requires(['SenderTokenId', 'TransactionAmount.Value', 'TransactionAmount.CurrencyCode']) @api_action() def reserve(self, action, response, **kw): """ Reserve API is part of the Reserve and Settle API conjunction that serve the purpose of a pay where the authorization and settlement have a timing difference. """ return self.get_object(action, kw, response) @needs_caller_reference @complex_amounts('TransactionAmount') @requires(['SenderTokenId', 'TransactionAmount.Value', 'TransactionAmount.CurrencyCode']) @api_action() def pay(self, action, response, **kw): """ Allows calling applications to move money from a sender to a recipient. """ return self.get_object(action, kw, response) @requires(['TransactionId']) @api_action() def cancel(self, action, response, **kw): """ Cancels an ongoing transaction and puts it in cancelled state. """ return self.get_object(action, kw, response) @complex_amounts('TransactionAmount') @requires(['ReserveTransactionId', 'TransactionAmount.Value', 'TransactionAmount.CurrencyCode']) @api_action() def settle(self, action, response, **kw): """ The Settle API is used in conjunction with the Reserve API and is used to settle previously reserved transaction. 
""" return self.get_object(action, kw, response) @complex_amounts('RefundAmount') @requires(['TransactionId', 'RefundAmount.Value', 'CallerReference', 'RefundAmount.CurrencyCode']) @api_action() def refund(self, action, response, **kw): """ Refunds a previously completed transaction. """ return self.get_object(action, kw, response) @requires(['RecipientTokenId']) @api_action() def get_recipient_verification_status(self, action, response, **kw): """ Returns the recipient status. """ return self.get_object(action, kw, response) @requires(['CallerReference'], ['TokenId']) @api_action() def get_token_by_caller(self, action, response, **kw): """ Returns the details of a particular token installed by this calling application using the subway co-branded UI. """ return self.get_object(action, kw, response) @requires(['UrlEndPoint', 'HttpParameters']) @api_action() def verify_signature(self, action, response, **kw): """ Verify the signature that FPS sent in IPN or callback urls. """ return self.get_object(action, kw, response) @api_action() def get_tokens(self, action, response, **kw): """ Returns a list of tokens installed on the given account. """ return self.get_object(action, kw, response) @requires(['TokenId']) @api_action() def get_token_usage(self, action, response, **kw): """ Returns the usage of a token. """ return self.get_object(action, kw, response) @requires(['TokenId']) @api_action() def cancel_token(self, action, response, **kw): """ Cancels any token installed by the calling application on its own account. """ return self.get_object(action, kw, response) @needs_caller_reference @complex_amounts('FundingAmount') @requires(['PrepaidInstrumentId', 'FundingAmount.Value', 'SenderTokenId', 'FundingAmount.CurrencyCode']) @api_action() def fund_prepaid(self, action, response, **kw): """ Funds the prepaid balance on the given prepaid instrument. """ return self.get_object(action, kw, response) @requires(['CreditInstrumentId']) @api_action() def get_debt_balance(self, action, response, **kw): """ Returns the balance corresponding to the given credit instrument. """ return self.get_object(action, kw, response) @needs_caller_reference @complex_amounts('AdjustmentAmount') @requires(['CreditInstrumentId', 'AdjustmentAmount.Value', 'AdjustmentAmount.CurrencyCode']) @api_action() def write_off_debt(self, action, response, **kw): """ Allows a creditor to write off the debt balance accumulated partially or fully at any time. """ return self.get_object(action, kw, response) @requires(['SubscriptionId']) @api_action() def get_transactions_for_subscription(self, action, response, **kw): """ Returns the transactions for a given subscriptionID. """ return self.get_object(action, kw, response) @requires(['SubscriptionId']) @api_action() def get_subscription_details(self, action, response, **kw): """ Returns the details of Subscription for a given subscriptionID. """ return self.get_object(action, kw, response) @needs_caller_reference @complex_amounts('RefundAmount') @requires(['SubscriptionId']) @api_action() def cancel_subscription_and_refund(self, action, response, **kw): """ Cancels a subscription. """ message = "If you specify a RefundAmount, " \ "you must specify CallerReference." assert not 'RefundAmount.Value' in kw \ or 'CallerReference' in kw, message return self.get_object(action, kw, response) @requires(['TokenId']) @api_action() def get_payment_instruction(self, action, response, **kw): """ Gets the payment instruction of a token. 
""" return self.get_object(action, kw, response) boto-2.20.1/boto/fps/exception.py000066400000000000000000000210001225267101000166270ustar00rootroot00000000000000from boto.exception import BotoServerError class ResponseErrorFactory(BotoServerError): def __new__(cls, *args, **kw): error = BotoServerError(*args, **kw) newclass = globals().get(error.error_code, ResponseError) obj = newclass.__new__(newclass, *args, **kw) obj.__dict__.update(error.__dict__) return obj class ResponseError(BotoServerError): """Undefined response error. """ retry = False def __repr__(self): return '{0}({1}, {2},\n\t{3})'.format(self.__class__.__name__, self.status, self.reason, self.error_message) def __str__(self): return 'FPS Response Error: {0.status} {0.__class__.__name__} {1}\n' \ '{2}\n' \ '{0.error_message}'.format(self, self.retry and '(Retriable)' or '', self.__doc__.strip()) class RetriableResponseError(ResponseError): retry = True class AccessFailure(RetriableResponseError): """Account cannot be accessed. """ class AccountClosed(RetriableResponseError): """Account is not active. """ class AccountLimitsExceeded(RetriableResponseError): """The spending or receiving limit on the account is exceeded. """ class AmountOutOfRange(ResponseError): """The transaction amount is more than the allowed range. """ class AuthFailure(RetriableResponseError): """AWS was not able to validate the provided access credentials. """ class ConcurrentModification(RetriableResponseError): """A retriable error can happen when two processes try to modify the same data at the same time. """ class DuplicateRequest(ResponseError): """A different request associated with this caller reference already exists. """ class InactiveInstrument(ResponseError): """Payment instrument is inactive. """ class IncompatibleTokens(ResponseError): """The transaction could not be completed because the tokens have incompatible payment instructions. """ class InstrumentAccessDenied(ResponseError): """The external calling application is not the recipient for this postpaid or prepaid instrument. """ class InstrumentExpired(ResponseError): """The prepaid or the postpaid instrument has expired. """ class InsufficientBalance(RetriableResponseError): """The sender, caller, or recipient's account balance has insufficient funds to complete the transaction. """ class InternalError(RetriableResponseError): """A retriable error that happens due to some transient problem in the system. """ class InvalidAccountState(RetriableResponseError): """The account is either suspended or closed. """ class InvalidAccountState_Caller(RetriableResponseError): """The developer account cannot participate in the transaction. """ class InvalidAccountState_Recipient(RetriableResponseError): """Recipient account cannot participate in the transaction. """ class InvalidAccountState_Sender(RetriableResponseError): """Sender account cannot participate in the transaction. """ class InvalidCallerReference(ResponseError): """The Caller Reference does not have a token associated with it. """ class InvalidClientTokenId(ResponseError): """The AWS Access Key Id you provided does not exist in our records. """ class InvalidDateRange(ResponseError): """The end date specified is before the start date or the start date is in the future. """ class InvalidParams(ResponseError): """One or more parameters in the request is invalid. """ class InvalidPaymentInstrument(ResponseError): """The payment method used in the transaction is invalid. 
""" class InvalidPaymentMethod(ResponseError): """Specify correct payment method. """ class InvalidRecipientForCCTransaction(ResponseError): """This account cannot receive credit card payments. """ class InvalidSenderRoleForAccountType(ResponseError): """This token cannot be used for this operation. """ class InvalidTokenId(ResponseError): """You did not install the token that you are trying to cancel. """ class InvalidTokenId_Recipient(ResponseError): """The recipient token specified is either invalid or canceled. """ class InvalidTokenId_Sender(ResponseError): """The sender token specified is either invalid or canceled or the token is not active. """ class InvalidTokenType(ResponseError): """An invalid operation was performed on the token, for example, getting the token usage information on a single use token. """ class InvalidTransactionId(ResponseError): """The specified transaction could not be found or the caller did not execute the transaction or this is not a Pay or Reserve call. """ class InvalidTransactionState(ResponseError): """The transaction is not complete, or it has temporarily failed. """ class NotMarketplaceApp(RetriableResponseError): """This is not an marketplace application or the caller does not match either the sender or the recipient. """ class OriginalTransactionFailed(ResponseError): """The original transaction has failed. """ class OriginalTransactionIncomplete(RetriableResponseError): """The original transaction is still in progress. """ class PaymentInstrumentNotCC(ResponseError): """The payment method specified in the transaction is not a credit card. You can only use a credit card for this transaction. """ class PaymentMethodNotDefined(ResponseError): """Payment method is not defined in the transaction. """ class PrepaidFundingLimitExceeded(RetriableResponseError): """An attempt has been made to fund the prepaid instrument at a level greater than its recharge limit. """ class RefundAmountExceeded(ResponseError): """The refund amount is more than the refundable amount. """ class SameSenderAndRecipient(ResponseError): """The sender and receiver are identical, which is not allowed. """ class SameTokenIdUsedMultipleTimes(ResponseError): """This token is already used in earlier transactions. """ class SenderNotOriginalRecipient(ResponseError): """The sender in the refund transaction is not the recipient of the original transaction. """ class SettleAmountGreaterThanDebt(ResponseError): """The amount being settled or written off is greater than the current debt. """ class SettleAmountGreaterThanReserveAmount(ResponseError): """The amount being settled is greater than the reserved amount. """ class SignatureDoesNotMatch(ResponseError): """The request signature calculated by Amazon does not match the signature you provided. """ class TokenAccessDenied(ResponseError): """Permission to cancel the token is denied. """ class TokenNotActive(ResponseError): """The token is canceled. """ class TokenNotActive_Recipient(ResponseError): """The recipient token is canceled. """ class TokenNotActive_Sender(ResponseError): """The sender token is canceled. """ class TokenUsageError(ResponseError): """The token usage limit is exceeded. """ class TransactionDenied(ResponseError): """The transaction is not allowed. """ class TransactionFullyRefundedAlready(ResponseError): """The transaction has already been completely refunded. """ class TransactionTypeNotRefundable(ResponseError): """You cannot refund this transaction. 
""" class UnverifiedAccount_Recipient(ResponseError): """The recipient's account must have a verified bank account or a credit card before this transaction can be initiated. """ class UnverifiedAccount_Sender(ResponseError): """The sender's account must have a verified U.S. credit card or a verified U.S bank account before this transaction can be initiated. """ class UnverifiedBankAccount(ResponseError): """A verified bank account should be used for this transaction. """ class UnverifiedEmailAddress_Caller(ResponseError): """The caller account must have a verified email address. """ class UnverifiedEmailAddress_Recipient(ResponseError): """The recipient account must have a verified email address for receiving payments. """ class UnverifiedEmailAddress_Sender(ResponseError): """The sender account must have a verified email address for this payment. """ boto-2.20.1/boto/fps/response.py000066400000000000000000000141571225267101000165060ustar00rootroot00000000000000from decimal import Decimal def ResponseFactory(action): class FPSResponse(Response): _action = action _Result = globals().get(action + 'Result', ResponseElement) # due to nodes receiving their closing tags def endElement(self, name, value, connection): if name != action + 'Response': Response.endElement(self, name, value, connection) return FPSResponse class ResponseElement(object): def __init__(self, connection=None, name=None): if connection is not None: self._connection = connection self._name = name or self.__class__.__name__ @property def connection(self): return self._connection def __repr__(self): render = lambda pair: '{!s}: {!r}'.format(*pair) do_show = lambda pair: not pair[0].startswith('_') attrs = filter(do_show, self.__dict__.items()) return '{0}({1})'.format(self.__class__.__name__, ', '.join(map(render, attrs))) def startElement(self, name, attrs, connection): return None # due to nodes receiving their closing tags def endElement(self, name, value, connection): if name != self._name: setattr(self, name, value) class Response(ResponseElement): _action = 'Undefined' def startElement(self, name, attrs, connection): if name == 'ResponseMetadata': setattr(self, name, ResponseElement(name=name)) elif name == self._action + 'Result': setattr(self, name, self._Result(name=name)) else: return ResponseElement.startElement(self, name, attrs, connection) return getattr(self, name) class ComplexAmount(ResponseElement): def __repr__(self): return '{0} {1}'.format(self.CurrencyCode, self.Value) def __float__(self): return float(self.Value) def __str__(self): return str(self.Value) def startElement(self, name, attrs, connection): if name not in ('CurrencyCode', 'Value'): message = 'Unrecognized tag {0} in ComplexAmount'.format(name) raise AssertionError(message) return ResponseElement.startElement(self, name, attrs, connection) def endElement(self, name, value, connection): if name == 'Value': value = Decimal(value) ResponseElement.endElement(self, name, value, connection) class AmountCollection(ResponseElement): def startElement(self, name, attrs, connection): setattr(self, name, ComplexAmount(name=name)) return getattr(self, name) class AccountBalance(AmountCollection): def startElement(self, name, attrs, connection): if name == 'AvailableBalances': setattr(self, name, AmountCollection(name=name)) return getattr(self, name) return AmountCollection.startElement(self, name, attrs, connection) class GetAccountBalanceResult(ResponseElement): def startElement(self, name, attrs, connection): if name == 'AccountBalance': setattr(self, 
name, AccountBalance(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class GetTotalPrepaidLiabilityResult(ResponseElement): def startElement(self, name, attrs, connection): if name == 'OutstandingPrepaidLiability': setattr(self, name, AmountCollection(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class GetPrepaidBalanceResult(ResponseElement): def startElement(self, name, attrs, connection): if name == 'PrepaidBalance': setattr(self, name, AmountCollection(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class GetOutstandingDebtBalanceResult(ResponseElement): def startElement(self, name, attrs, connection): if name == 'OutstandingDebt': setattr(self, name, AmountCollection(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class TransactionPart(ResponseElement): def startElement(self, name, attrs, connection): if name == 'FeesPaid': setattr(self, name, ComplexAmount(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class Transaction(ResponseElement): def __init__(self, *args, **kw): self.TransactionPart = [] ResponseElement.__init__(self, *args, **kw) def startElement(self, name, attrs, connection): if name == 'TransactionPart': getattr(self, name).append(TransactionPart(name=name)) return getattr(self, name)[-1] if name in ('TransactionAmount', 'FPSFees', 'Balance'): setattr(self, name, ComplexAmount(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class GetAccountActivityResult(ResponseElement): def __init__(self, *args, **kw): self.Transaction = [] ResponseElement.__init__(self, *args, **kw) def startElement(self, name, attrs, connection): if name == 'Transaction': getattr(self, name).append(Transaction(name=name)) return getattr(self, name)[-1] return ResponseElement.startElement(self, name, attrs, connection) class GetTransactionResult(ResponseElement): def startElement(self, name, attrs, connection): if name == 'Transaction': setattr(self, name, Transaction(name=name)) return getattr(self, name) return ResponseElement.startElement(self, name, attrs, connection) class GetTokensResult(ResponseElement): def __init__(self, *args, **kw): self.Token = [] ResponseElement.__init__(self, *args, **kw) def startElement(self, name, attrs, connection): if name == 'Token': getattr(self, name).append(ResponseElement(name=name)) return getattr(self, name)[-1] return ResponseElement.startElement(self, name, attrs, connection) boto-2.20.1/boto/glacier/000077500000000000000000000000001225267101000151045ustar00rootroot00000000000000boto-2.20.1/boto/glacier/__init__.py000066400000000000000000000050271225267101000172210ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Amazon.com, Inc. or its affiliates.
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.ec2.regioninfo import RegionInfo def regions(): """ Get all available regions for the Amazon Glacier service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.glacier.layer2 import Layer2 return [RegionInfo(name='us-east-1', endpoint='glacier.us-east-1.amazonaws.com', connection_cls=Layer2), RegionInfo(name='us-west-1', endpoint='glacier.us-west-1.amazonaws.com', connection_cls=Layer2), RegionInfo(name='us-west-2', endpoint='glacier.us-west-2.amazonaws.com', connection_cls=Layer2), RegionInfo(name='ap-northeast-1', endpoint='glacier.ap-northeast-1.amazonaws.com', connection_cls=Layer2), RegionInfo(name='eu-west-1', endpoint='glacier.eu-west-1.amazonaws.com', connection_cls=Layer2), RegionInfo(name='ap-southeast-2', endpoint='glacier.ap-southeast-2.amazonaws.com', connection_cls=Layer2), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/glacier/concurrent.py000066400000000000000000000413511225267101000176440ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# import os import math import threading import hashlib import time import logging from Queue import Queue, Empty import binascii from .utils import DEFAULT_PART_SIZE, minimum_part_size, chunk_hashes, \ tree_hash, bytes_to_hex from .exceptions import UploadArchiveError, DownloadArchiveError, \ TreeHashDoesNotMatchError _END_SENTINEL = object() log = logging.getLogger('boto.glacier.concurrent') class ConcurrentTransferer(object): def __init__(self, part_size=DEFAULT_PART_SIZE, num_threads=10): self._part_size = part_size self._num_threads = num_threads self._threads = [] def _calculate_required_part_size(self, total_size): min_part_size_required = minimum_part_size(total_size) if self._part_size >= min_part_size_required: part_size = self._part_size else: part_size = min_part_size_required log.debug("The part size specified (%s) is smaller than " "the minimum required part size. Using a part " "size of: %s", self._part_size, part_size) total_parts = int(math.ceil(total_size / float(part_size))) return total_parts, part_size def _shutdown_threads(self): log.debug("Shutting down threads.") for thread in self._threads: thread.should_continue = False for thread in self._threads: thread.join() log.debug("Threads have exited.") def _add_work_items_to_queue(self, total_parts, worker_queue, part_size): log.debug("Adding work items to queue.") for i in xrange(total_parts): worker_queue.put((i, part_size)) for i in xrange(self._num_threads): worker_queue.put(_END_SENTINEL) class ConcurrentUploader(ConcurrentTransferer): """Concurrently upload an archive to glacier. This class uses a thread pool to concurrently upload an archive to glacier using the multipart upload API. The threadpool is completely managed by this class and is transparent to the users of this class. """ def __init__(self, api, vault_name, part_size=DEFAULT_PART_SIZE, num_threads=10): """ :type api: :class:`boto.glacier.layer1.Layer1` :param api: A layer1 glacier object. :type vault_name: str :param vault_name: The name of the vault. :type part_size: int :param part_size: The size, in bytes, of the chunks to use when uploading the archive parts. The part size must be a megabyte multiplied by a power of two. :type num_threads: int :param num_threads: The number of threads to spawn for the thread pool. The number of threads will control how many parts are concurrently uploaded. """ super(ConcurrentUploader, self).__init__(part_size, num_threads) self._api = api self._vault_name = vault_name def upload(self, filename, description=None): """Concurrently create an archive. The part_size value specified when the class was constructed will be used *unless* it is smaller than the minimum required part size needed for the size of the given file. In that case, the part size used will be the minimum part size required to properly upload the given file. :type filename: str :param filename: The filename to upload :type description: str :param description: The description of the archive. :rtype: str :return: The archive id of the newly created archive.
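        For example (an illustrative sketch; the vault name and file path
        are placeholders, and Layer1() assumes credentials are available
        from the boto config or environment):

            from boto.glacier.layer1 import Layer1

            api = Layer1()
            uploader = ConcurrentUploader(api, 'my-vault', num_threads=4)
            archive_id = uploader.upload('/tmp/backup.tar', 'nightly backup')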
""" total_size = os.stat(filename).st_size total_parts, part_size = self._calculate_required_part_size(total_size) hash_chunks = [None] * total_parts worker_queue = Queue() result_queue = Queue() response = self._api.initiate_multipart_upload(self._vault_name, part_size, description) upload_id = response['UploadId'] # The basic idea is to add the chunks (the offsets not the actual # contents) to a work queue, start up a thread pool, let the crank # through the items in the work queue, and then place their results # in a result queue which we use to complete the multipart upload. self._add_work_items_to_queue(total_parts, worker_queue, part_size) self._start_upload_threads(result_queue, upload_id, worker_queue, filename) try: self._wait_for_upload_threads(hash_chunks, result_queue, total_parts) except UploadArchiveError, e: log.debug("An error occurred while uploading an archive, " "aborting multipart upload.") self._api.abort_multipart_upload(self._vault_name, upload_id) raise e log.debug("Completing upload.") response = self._api.complete_multipart_upload( self._vault_name, upload_id, bytes_to_hex(tree_hash(hash_chunks)), total_size) log.debug("Upload finished.") return response['ArchiveId'] def _wait_for_upload_threads(self, hash_chunks, result_queue, total_parts): for _ in xrange(total_parts): result = result_queue.get() if isinstance(result, Exception): log.debug("An error was found in the result queue, terminating " "threads: %s", result) self._shutdown_threads() raise UploadArchiveError("An error occurred while uploading " "an archive: %s" % result) # Each unit of work returns the tree hash for the given part # number, which we use at the end to compute the tree hash of # the entire archive. part_number, tree_sha256 = result hash_chunks[part_number] = tree_sha256 self._shutdown_threads() def _start_upload_threads(self, result_queue, upload_id, worker_queue, filename): log.debug("Starting threads.") for _ in xrange(self._num_threads): thread = UploadWorkerThread(self._api, self._vault_name, filename, upload_id, worker_queue, result_queue) time.sleep(0.2) thread.start() self._threads.append(thread) class TransferThread(threading.Thread): def __init__(self, worker_queue, result_queue): super(TransferThread, self).__init__() self._worker_queue = worker_queue self._result_queue = result_queue # This value can be set externally by other objects # to indicate that the thread should be shut down. 
self.should_continue = True def run(self): while self.should_continue: try: work = self._worker_queue.get(timeout=1) except Empty: continue if work is _END_SENTINEL: self._cleanup() return result = self._process_chunk(work) self._result_queue.put(result) self._cleanup() def _process_chunk(self, work): pass def _cleanup(self): pass class UploadWorkerThread(TransferThread): def __init__(self, api, vault_name, filename, upload_id, worker_queue, result_queue, num_retries=5, time_between_retries=5, retry_exceptions=Exception): super(UploadWorkerThread, self).__init__(worker_queue, result_queue) self._api = api self._vault_name = vault_name self._filename = filename self._fileobj = open(filename, 'rb') self._upload_id = upload_id self._num_retries = num_retries self._time_between_retries = time_between_retries self._retry_exceptions = retry_exceptions def _process_chunk(self, work): result = None for i in xrange(self._num_retries + 1): try: result = self._upload_chunk(work) break except self._retry_exceptions, e: log.error("Exception caught uploading part number %s for " "vault %s, attempt: (%s / %s), filename: %s, " "exception: %s, msg: %s", work[0], self._vault_name, i + 1, self._num_retries + 1, self._filename, e.__class__, e) time.sleep(self._time_between_retries) result = e return result def _upload_chunk(self, work): part_number, part_size = work start_byte = part_number * part_size self._fileobj.seek(start_byte) contents = self._fileobj.read(part_size) linear_hash = hashlib.sha256(contents).hexdigest() tree_hash_bytes = tree_hash(chunk_hashes(contents)) byte_range = (start_byte, start_byte + len(contents) - 1) log.debug("Uploading chunk %s of size %s", part_number, part_size) response = self._api.upload_part(self._vault_name, self._upload_id, linear_hash, bytes_to_hex(tree_hash_bytes), byte_range, contents) # Reading the response allows the connection to be reused. response.read() return (part_number, tree_hash_bytes) def _cleanup(self): self._fileobj.close() class ConcurrentDownloader(ConcurrentTransferer): """ Concurrently download an archive from glacier. This class uses a thread pool to concurrently download an archive from glacier. The threadpool is completely managed by this class and is transparent to the users of this class. """ def __init__(self, job, part_size=DEFAULT_PART_SIZE, num_threads=10): """ :param job: A layer2 job object for an archive retrieval. :param part_size: The size, in bytes, of the chunks to use when downloading the archive parts. The part size must be a megabyte multiplied by a power of two. """ super(ConcurrentDownloader, self).__init__(part_size, num_threads) self._job = job def download(self, filename): """ Concurrently download an archive.
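        For example (an illustrative sketch; the vault name, job id, and
        output path are placeholders):

            from boto.glacier.layer2 import Layer2

            layer2 = Layer2()
            vault = layer2.get_vault('my-vault')
            job = vault.get_job('<job id>')
            if job.completed:
                ConcurrentDownloader(job, num_threads=4).download(
                    '/tmp/restored-archive.tar')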
:param filename: The filename to download the archive to :type filename: str """ total_size = self._job.archive_size total_parts, part_size = self._calculate_required_part_size(total_size) worker_queue = Queue() result_queue = Queue() self._add_work_items_to_queue(total_parts, worker_queue, part_size) self._start_download_threads(result_queue, worker_queue) try: self._wait_for_download_threads(filename, result_queue, total_parts) except DownloadArchiveError, e: log.debug("An error occurred while downloading an archive: %s", e) raise e log.debug("Download completed.") def _wait_for_download_threads(self, filename, result_queue, total_parts): """ Waits until the result_queue is filled with all the downloaded parts, which indicates that all part downloads have completed, and then saves the downloaded parts into filename. :param filename: The file the assembled archive is written to. :param result_queue: The queue the worker threads put their results on. :param total_parts: The total number of parts to wait for. """ hash_chunks = [None] * total_parts with open(filename, "wb") as f: for _ in xrange(total_parts): result = result_queue.get() if isinstance(result, Exception): log.debug("An error was found in the result queue, " "terminating threads: %s", result) self._shutdown_threads() raise DownloadArchiveError( "An error occurred while downloading " "an archive: %s" % result) part_number, part_size, actual_hash, data = result hash_chunks[part_number] = actual_hash start_byte = part_number * part_size f.seek(start_byte) f.write(data) f.flush() final_hash = bytes_to_hex(tree_hash(hash_chunks)) log.debug("Verifying final tree hash of archive, expecting: %s, " "actual: %s", self._job.sha256_treehash, final_hash) if self._job.sha256_treehash != final_hash: self._shutdown_threads() raise TreeHashDoesNotMatchError( "Tree hash for entire archive does not match, " "expected: %s, got: %s" % (self._job.sha256_treehash, final_hash)) self._shutdown_threads() def _start_download_threads(self, result_queue, worker_queue): log.debug("Starting threads.") for _ in xrange(self._num_threads): thread = DownloadWorkerThread(self._job, worker_queue, result_queue) time.sleep(0.2) thread.start() self._threads.append(thread) class DownloadWorkerThread(TransferThread): def __init__(self, job, worker_queue, result_queue, num_retries=5, time_between_retries=5, retry_exceptions=Exception): """ Individual download thread that downloads parts of the archive from Glacier. Parts to download are taken from the work queue, and each downloaded part is placed on the result queue. :param job: Glacier job object :param worker_queue: A queue of tuples which include the part_number and part_size :param result_queue: A queue of tuples which include the part_number, part_size, tree hash, and data of each downloaded part. """ super(DownloadWorkerThread, self).__init__(worker_queue, result_queue) self._job = job self._num_retries = num_retries self._time_between_retries = time_between_retries self._retry_exceptions = retry_exceptions def _process_chunk(self, work): """ Attempt to download a part of the archive from Glacier. The returned result is placed on the result_queue by the run() loop. :param work: A (part_number, part_size) tuple. """ result = None for _ in xrange(self._num_retries): try: result = self._download_chunk(work) break except self._retry_exceptions, e: log.error("Exception caught downloading part number %s for " "job %s", work[0], self._job,) time.sleep(self._time_between_retries) result = e return result def _download_chunk(self, work): """ Downloads a chunk of archive from Glacier.
Verifies the downloaded part against its tree hash and returns the part number, part size, binary tree hash, and data. :param work: A (part_number, part_size) tuple. """ part_number, part_size = work start_byte = part_number * part_size byte_range = (start_byte, start_byte + part_size - 1) log.debug("Downloading chunk %s of size %s", part_number, part_size) response = self._job.get_output(byte_range) data = response.read() actual_hash = bytes_to_hex(tree_hash(chunk_hashes(data))) if response['TreeHash'] != actual_hash: raise TreeHashDoesNotMatchError( "Tree hash for part number %s does not match, " "expected: %s, got: %s" % (part_number, response['TreeHash'], actual_hash)) return (part_number, part_size, binascii.unhexlify(actual_hash), data) boto-2.20.1/boto/glacier/exceptions.py000066400000000000000000000042231225267101000176400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.compat import json class UnexpectedHTTPResponseError(Exception): def __init__(self, expected_responses, response): self.status = response.status self.body = response.read() self.code = None try: body = json.loads(self.body) self.code = body["code"] msg = 'Expected %s, got ' % expected_responses msg += '(%d, code=%s, message=%s)' % (response.status, self.code, body["message"]) except Exception: msg = 'Expected %s, got (%d, %s)' % (expected_responses, response.status, self.body) super(UnexpectedHTTPResponseError, self).__init__(msg) class ArchiveError(Exception): pass class UploadArchiveError(ArchiveError): pass class DownloadArchiveError(ArchiveError): pass class TreeHashDoesNotMatchError(ArchiveError): pass boto-2.20.1/boto/glacier/job.py000066400000000000000000000155611225267101000162370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from __future__ import with_statement import math import socket from .exceptions import TreeHashDoesNotMatchError, DownloadArchiveError from .utils import tree_hash_from_str class Job(object): DefaultPartSize = 4 * 1024 * 1024 ResponseDataElements = (('Action', 'action', None), ('ArchiveId', 'archive_id', None), ('ArchiveSizeInBytes', 'archive_size', 0), ('Completed', 'completed', False), ('CompletionDate', 'completion_date', None), ('CreationDate', 'creation_date', None), ('InventorySizeInBytes', 'inventory_size', 0), ('JobDescription', 'description', None), ('JobId', 'id', None), ('SHA256TreeHash', 'sha256_treehash', None), ('SNSTopic', 'sns_topic', None), ('StatusCode', 'status_code', None), ('StatusMessage', 'status_message', None), ('VaultARN', 'arn', None)) def __init__(self, vault, response_data=None): self.vault = vault if response_data: for response_name, attr_name, default in self.ResponseDataElements: setattr(self, attr_name, response_data[response_name]) else: for response_name, attr_name, default in self.ResponseDataElements: setattr(self, attr_name, default) def __repr__(self): return 'Job(%s)' % self.arn def get_output(self, byte_range=None, validate_checksum=False): """ This operation downloads the output of the job. Depending on the job type you specified when you initiated the job, the output will be either the content of an archive or a vault inventory. You can download all the job output or download a portion of the output by specifying a byte range. In the case of an archive retrieval job, depending on the byte range you specify, Amazon Glacier returns the checksum for the portion of the data. You can compute the checksum on the client and verify that the values match to ensure the portion you downloaded is the correct data. :type byte_range: tuple :param range: A tuple of integer specifying the slice (in bytes) of the archive you want to receive :type validate_checksum: bool :param validate_checksum: Specify whether or not to validate the associate tree hash. If the response does not contain a TreeHash, then no checksum will be verified. """ response = self.vault.layer1.get_job_output(self.vault.name, self.id, byte_range) if validate_checksum and 'TreeHash' in response: data = response.read() actual_tree_hash = tree_hash_from_str(data) if response['TreeHash'] != actual_tree_hash: raise TreeHashDoesNotMatchError( "The calculated tree hash %s does not match the " "expected tree hash %s for the byte range %s" % ( actual_tree_hash, response['TreeHash'], byte_range)) return response def download_to_file(self, filename, chunk_size=DefaultPartSize, verify_hashes=True, retry_exceptions=(socket.error,)): """Download an archive to a file. :type filename: str :param filename: The name of the file where the archive contents will be saved. :type chunk_size: int :param chunk_size: The chunk size to use when downloading the archive. :type verify_hashes: bool :param verify_hashes: Indicates whether or not to verify the tree hashes for each downloaded chunk. 
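Example usage (a minimal sketch; ``vault`` is assumed to be a :class:`boto.glacier.vault.Vault`, and the job ID and output path are illustrative; the retrieval job must have completed before its output can be downloaded)::

    job = vault.get_job('JOB_ID')
    if job.completed:
        job.download_to_file('/tmp/archive.out')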
""" num_chunks = int(math.ceil(self.archive_size / float(chunk_size))) with open(filename, 'wb') as output_file: self._download_to_fileob(output_file, num_chunks, chunk_size, verify_hashes, retry_exceptions) def _download_to_fileob(self, fileobj, num_chunks, chunk_size, verify_hashes, retry_exceptions): for i in xrange(num_chunks): byte_range = ((i * chunk_size), ((i + 1) * chunk_size) - 1) data, expected_tree_hash = self._download_byte_range( byte_range, retry_exceptions) if verify_hashes: actual_tree_hash = tree_hash_from_str(data) if expected_tree_hash != actual_tree_hash: raise TreeHashDoesNotMatchError( "The calculated tree hash %s does not match the " "expected tree hash %s for the byte range %s" % ( actual_tree_hash, expected_tree_hash, byte_range)) fileobj.write(data) def _download_byte_range(self, byte_range, retry_exceptions): # You can occasionally get socket.errors when downloading # chunks from Glacier, so each chunk can be retried up # to 5 times. for _ in xrange(5): try: response = self.get_output(byte_range) data = response.read() expected_tree_hash = response['TreeHash'] return data, expected_tree_hash except retry_exceptions, e: continue else: raise DownloadArchiveError("There was an error downloading" "byte range %s: %s" % (byte_range, e)) boto-2.20.1/boto/glacier/layer1.py000066400000000000000000000642441225267101000166650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# import os import boto.glacier from boto.compat import json from boto.connection import AWSAuthConnection from .exceptions import UnexpectedHTTPResponseError from .response import GlacierResponse from .utils import ResettingFileSender class Layer1(AWSAuthConnection): Version = '2012-06-01' """Glacier API version.""" def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, account_id='-', is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, path='/', provider='aws', security_token=None, suppress_consec_slashes=True, region=None, region_name='us-east-1'): if not region: for reg in boto.glacier.regions(): if reg.name == region_name: region = reg break self.region = region self.account_id = account_id AWSAuthConnection.__init__(self, region.endpoint, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, debug, https_connection_factory, path, provider, security_token, suppress_consec_slashes) def _required_auth_capability(self): return ['hmac-v4'] def make_request(self, verb, resource, headers=None, data='', ok_responses=(200,), params=None, sender=None, response_headers=None): if headers is None: headers = {} headers['x-amz-glacier-version'] = self.Version uri = '/%s/%s' % (self.account_id, resource) response = AWSAuthConnection.make_request(self, verb, uri, params=params, headers=headers, sender=sender, data=data) if response.status in ok_responses: return GlacierResponse(response, response_headers) else: # create glacier-specific exceptions raise UnexpectedHTTPResponseError(ok_responses, response) # Vaults def list_vaults(self, limit=None, marker=None): """ This operation lists all vaults owned by the calling user’s account. The list returned in the response is ASCII-sorted by vault name. By default, this operation returns up to 1,000 items. If there are more vaults to list, the marker field in the response body contains the vault Amazon Resource Name (ARN) at which to continue the list with a new List Vaults request; otherwise, the marker field is null. In your next List Vaults request you set the marker parameter to the value Amazon Glacier returned in the responses to your previous List Vaults request. You can also limit the number of vaults returned in the response by specifying the limit parameter in the request. :type limit: int :param limit: The maximum number of items returned in the response. If you don't specify a value, the List Vaults operation returns up to 1,000 items. :type marker: str :param marker: A string used for pagination. marker specifies the vault ARN after which the listing of vaults should begin. (The vault specified by marker is not included in the returned list.) Get the marker value from a previous List Vaults response. You need to include the marker only if you are continuing the pagination of results started in a previous List Vaults request. Specifying an empty value ("") for the marker returns a list of vaults starting from the first vault. """ params = {} if limit: params['limit'] = limit if marker: params['marker'] = marker return self.make_request('GET', 'vaults', params=params) def describe_vault(self, vault_name): """ This operation returns information about a vault, including the vault Amazon Resource Name (ARN), the date the vault was created, the number of archives contained within the vault, and the total size of all the archives in the vault. 
The number of archives and their total size are as of the last vault inventory Amazon Glacier generated. Amazon Glacier generates vault inventories approximately daily. This means that if you add or remove an archive from a vault, and then immediately send a Describe Vault request, the response might not reflect the changes. :type vault_name: str :param vault_name: The name of the new vault """ uri = 'vaults/%s' % vault_name return self.make_request('GET', uri) def create_vault(self, vault_name): """ This operation creates a new vault with the specified name. The name of the vault must be unique within a region for an AWS account. You can create up to 1,000 vaults per account. For information on creating more vaults, go to the Amazon Glacier product detail page. You must use the following guidelines when naming a vault. Names can be between 1 and 255 characters long. Allowed characters are a–z, A–Z, 0–9, '_' (underscore), '-' (hyphen), and '.' (period). This operation is idempotent, you can send the same request multiple times and it has no further effect after the first time Amazon Glacier creates the specified vault. :type vault_name: str :param vault_name: The name of the new vault """ uri = 'vaults/%s' % vault_name return self.make_request('PUT', uri, ok_responses=(201,), response_headers=[('Location', 'Location')]) def delete_vault(self, vault_name): """ This operation deletes a vault. Amazon Glacier will delete a vault only if there are no archives in the vault as per the last inventory and there have been no writes to the vault since the last inventory. If either of these conditions is not satisfied, the vault deletion fails (that is, the vault is not removed) and Amazon Glacier returns an error. This operation is idempotent, you can send the same request multiple times and it has no further effect after the first time Amazon Glacier delete the specified vault. :type vault_name: str :param vault_name: The name of the new vault """ uri = 'vaults/%s' % vault_name return self.make_request('DELETE', uri, ok_responses=(204,)) def get_vault_notifications(self, vault_name): """ This operation retrieves the notification-configuration subresource set on the vault. :type vault_name: str :param vault_name: The name of the new vault """ uri = 'vaults/%s/notification-configuration' % vault_name return self.make_request('GET', uri) def set_vault_notifications(self, vault_name, notification_config): """ This operation retrieves the notification-configuration subresource set on the vault. :type vault_name: str :param vault_name: The name of the new vault :type notification_config: dict :param notification_config: A Python dictionary containing an SNS Topic and events for which you want Amazon Glacier to send notifications to the topic. Possible events are: * ArchiveRetrievalCompleted - occurs when a job that was initiated for an archive retrieval is completed. * InventoryRetrievalCompleted - occurs when a job that was initiated for an inventory retrieval is completed. The format of the dictionary is: {'SNSTopic': 'mytopic', 'Events': [event1,...]} """ uri = 'vaults/%s/notification-configuration' % vault_name json_config = json.dumps(notification_config) return self.make_request('PUT', uri, data=json_config, ok_responses=(204,)) def delete_vault_notifications(self, vault_name): """ This operation deletes the notification-configuration subresource set on the vault. 
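Example (a minimal sketch; ``layer1`` is assumed to be a connected ``Layer1`` instance and the vault name is illustrative)::

    layer1.delete_vault_notifications('my-vault')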
:type vault_name: str :param vault_name: The name of the new vault """ uri = 'vaults/%s/notification-configuration' % vault_name return self.make_request('DELETE', uri, ok_responses=(204,)) # Jobs def list_jobs(self, vault_name, completed=None, status_code=None, limit=None, marker=None): """ This operation lists jobs for a vault including jobs that are in-progress and jobs that have recently finished. :type vault_name: str :param vault_name: The name of the vault. :type completed: boolean :param completed: Specifies the state of the jobs to return. If a value of True is passed, only completed jobs will be returned. If a value of False is passed, only uncompleted jobs will be returned. If no value is passed, all jobs will be returned. :type status_code: string :param status_code: Specifies the type of job status to return. Valid values are: InProgress|Succeeded|Failed. If not specified, jobs with all status codes are returned. :type limit: int :param limit: The maximum number of items returned in the response. If you don't specify a value, the List Jobs operation returns up to 1,000 items. :type marker: str :param marker: An opaque string used for pagination. marker specifies the job at which the listing of jobs should begin. Get the marker value from a previous List Jobs response. You need only include the marker if you are continuing the pagination of results started in a previous List Jobs request. """ params = {} if limit: params['limit'] = limit if marker: params['marker'] = marker if status_code: params['statuscode'] = status_code if completed is not None: params['completed'] = 'true' if completed else 'false' uri = 'vaults/%s/jobs' % vault_name return self.make_request('GET', uri, params=params) def describe_job(self, vault_name, job_id): """ This operation returns information about a job you previously initiated, including the job initiation date, the user who initiated the job, the job status code/message and the Amazon Simple Notification Service (Amazon SNS) topic to notify after Amazon Glacier completes the job. :type vault_name: str :param vault_name: The name of the new vault :type job_id: str :param job_id: The ID of the job. """ uri = 'vaults/%s/jobs/%s' % (vault_name, job_id) return self.make_request('GET', uri, ok_responses=(200,)) def initiate_job(self, vault_name, job_data): """ This operation initiates a job of the specified type. Retrieving an archive or a vault inventory are asynchronous operations that require you to initiate a job. It is a two-step process: * Initiate a retrieval job. * After the job completes, download the bytes. The retrieval is executed asynchronously. When you initiate a retrieval job, Amazon Glacier creates a job and returns a job ID in the response. :type vault_name: str :param vault_name: The name of the new vault :type job_data: dict :param job_data: A Python dictionary containing the information about the requested job. The dictionary can contain the following attributes: * ArchiveId - The ID of the archive you want to retrieve. This field is required only if the Type is set to archive-retrieval. * Description - The optional description for the job. * Format - When initiating a job to retrieve a vault inventory, you can optionally add this parameter to specify the output format. Valid values are: CSV|JSON. * SNSTopic - The Amazon SNS topic ARN where Amazon Glacier sends a notification when the job is completed and the output is ready for you to download. * Type - The job type. 
Valid values are: archive-retrieval|inventory-retrieval * RetrievalByteRange - Optionally specify the range of bytes to retrieve. """ uri = 'vaults/%s/jobs' % vault_name response_headers = [('x-amz-job-id', u'JobId'), ('Location', u'Location')] json_job_data = json.dumps(job_data) return self.make_request('POST', uri, data=json_job_data, ok_responses=(202,), response_headers=response_headers) def get_job_output(self, vault_name, job_id, byte_range=None): """ This operation downloads the output of the job you initiated using Initiate a Job. Depending on the job type you specified when you initiated the job, the output will be either the content of an archive or a vault inventory. You can download all the job output or download a portion of the output by specifying a byte range. In the case of an archive retrieval job, depending on the byte range you specify, Amazon Glacier returns the checksum for the portion of the data. You can compute the checksum on the client and verify that the values match to ensure the portion you downloaded is the correct data. :type vault_name: str :param vault_name: The name of the vault :type job_id: str :param job_id: The ID of the job. :type byte_range: tuple :param byte_range: A tuple of integers specifying the slice (in bytes) of the archive you want to receive """ response_headers = [('x-amz-sha256-tree-hash', u'TreeHash'), ('Content-Range', u'ContentRange'), ('Content-Type', u'ContentType')] headers = None if byte_range: headers = {'Range': 'bytes=%d-%d' % byte_range} uri = 'vaults/%s/jobs/%s/output' % (vault_name, job_id) response = self.make_request('GET', uri, headers=headers, ok_responses=(200, 206), response_headers=response_headers) return response # Archives def upload_archive(self, vault_name, archive, linear_hash, tree_hash, description=None): """ This operation adds an archive to a vault. For a successful upload, your data is durably persisted. In response, Amazon Glacier returns the archive ID in the x-amz-archive-id header of the response. You should save the archive ID returned so that you can access the archive later. :type vault_name: str :param vault_name: The name of the vault :type archive: bytes :param archive: The data to upload. :type linear_hash: str :param linear_hash: The SHA256 checksum (a linear hash) of the payload. :type tree_hash: str :param tree_hash: The user-computed SHA256 tree hash of the payload. For more information on computing the tree hash, see http://goo.gl/u7chF. :type description: str :param description: An optional description of the archive. """ response_headers = [('x-amz-archive-id', u'ArchiveId'), ('Location', u'Location'), ('x-amz-sha256-tree-hash', u'TreeHash')] uri = 'vaults/%s/archives' % vault_name try: content_length = str(len(archive)) except (TypeError, AttributeError): # If a file like object is provided, try to retrieve # the file size via fstat.
content_length = str(os.fstat(archive.fileno()).st_size) headers = {'x-amz-content-sha256': linear_hash, 'x-amz-sha256-tree-hash': tree_hash, 'Content-Length': content_length} if description: headers['x-amz-archive-description'] = description if self._is_file_like(archive): sender = ResettingFileSender(archive) else: sender = None return self.make_request('POST', uri, headers=headers, sender=sender, data=archive, ok_responses=(201,), response_headers=response_headers) def _is_file_like(self, archive): return hasattr(archive, 'seek') and hasattr(archive, 'tell') def delete_archive(self, vault_name, archive_id): """ This operation deletes an archive from a vault. :type vault_name: str :param vault_name: The name of the new vault :type archive_id: str :param archive_id: The ID for the archive to be deleted. """ uri = 'vaults/%s/archives/%s' % (vault_name, archive_id) return self.make_request('DELETE', uri, ok_responses=(204,)) # Multipart def initiate_multipart_upload(self, vault_name, part_size, description=None): """ Initiate a multipart upload. Amazon Glacier creates a multipart upload resource and returns it's ID. You use this ID in subsequent multipart upload operations. :type vault_name: str :param vault_name: The name of the vault. :type description: str :param description: An optional description of the archive. :type part_size: int :param part_size: The size of each part except the last, in bytes. The part size must be a multiple of 1024 KB multiplied by a power of 2. The minimum allowable part size is 1MB and the maximum is 4GB. """ response_headers = [('x-amz-multipart-upload-id', u'UploadId'), ('Location', u'Location')] headers = {'x-amz-part-size': str(part_size)} if description: headers['x-amz-archive-description'] = description uri = 'vaults/%s/multipart-uploads' % vault_name response = self.make_request('POST', uri, headers=headers, ok_responses=(201,), response_headers=response_headers) return response def complete_multipart_upload(self, vault_name, upload_id, sha256_treehash, archive_size): """ Call this to inform Amazon Glacier that all of the archive parts have been uploaded and Amazon Glacier can now assemble the archive from the uploaded parts. :type vault_name: str :param vault_name: The name of the vault. :type upload_id: str :param upload_id: The unique ID associated with this upload operation. :type sha256_treehash: str :param sha256_treehash: The SHA256 tree hash of the entire archive. It is the tree hash of SHA256 tree hash of the individual parts. If the value you specify in the request does not match the SHA256 tree hash of the final assembled archive as computed by Amazon Glacier, Amazon Glacier returns an error and the request fails. :type archive_size: int :param archive_size: The total size, in bytes, of the entire archive. This value should be the sum of all the sizes of the individual parts that you uploaded. """ response_headers = [('x-amz-archive-id', u'ArchiveId'), ('Location', u'Location')] headers = {'x-amz-sha256-tree-hash': sha256_treehash, 'x-amz-archive-size': str(archive_size)} uri = 'vaults/%s/multipart-uploads/%s' % (vault_name, upload_id) response = self.make_request('POST', uri, headers=headers, ok_responses=(201,), response_headers=response_headers) return response def abort_multipart_upload(self, vault_name, upload_id): """ Call this to abort a multipart upload identified by the upload ID. :type vault_name: str :param vault_name: The name of the vault. 
:type upload_id: str :param upload_id: The unique ID associated with this upload operation. """ uri = 'vaults/%s/multipart-uploads/%s' % (vault_name, upload_id) return self.make_request('DELETE', uri, ok_responses=(204,)) def list_multipart_uploads(self, vault_name, limit=None, marker=None): """ Lists in-progress multipart uploads for the specified vault. :type vault_name: str :param vault_name: The name of the vault. :type limit: int :param limit: The maximum number of items returned in the response. If you don't specify a value, the operation returns up to 1,000 items. :type marker: str :param marker: An opaque string used for pagination. marker specifies the item at which the listing should begin. Get the marker value from a previous response. You need only include the marker if you are continuing the pagination of results started in a previous request. """ params = {} if limit: params['limit'] = limit if marker: params['marker'] = marker uri = 'vaults/%s/multipart-uploads' % vault_name return self.make_request('GET', uri, params=params) def list_parts(self, vault_name, upload_id, limit=None, marker=None): """ Lists the parts of an archive that have been uploaded in a specific multipart upload. :type vault_name: str :param vault_name: The name of the vault. :type upload_id: str :param upload_id: The unique ID associated with this upload operation. :type limit: int :param limit: The maximum number of items returned in the response. If you don't specify a value, the operation returns up to 1,000 items. :type marker: str :param marker: An opaque string used for pagination. marker specifies the item at which the listing should begin. Get the marker value from a previous response. You need only include the marker if you are continuing the pagination of results started in a previous request. """ params = {} if limit: params['limit'] = limit if marker: params['marker'] = marker uri = 'vaults/%s/multipart-uploads/%s' % (vault_name, upload_id) return self.make_request('GET', uri, params=params) def upload_part(self, vault_name, upload_id, linear_hash, tree_hash, byte_range, part_data): """ Uploads a part of an archive in a multipart upload. :type vault_name: str :param vault_name: The name of the vault. :type linear_hash: str :param linear_hash: The SHA256 checksum (a linear hash) of the payload. :type tree_hash: str :param tree_hash: The user-computed SHA256 tree hash of the payload. For more information on computing the tree hash, see http://goo.gl/u7chF. :type upload_id: str :param upload_id: The unique ID associated with this upload operation. :type byte_range: tuple of ints :param byte_range: Identifies the range of bytes in the assembled archive that will be uploaded in this part.
:type part_data: bytes :param part_data: The data to be uploaded for the part """ headers = {'x-amz-content-sha256': linear_hash, 'x-amz-sha256-tree-hash': tree_hash, 'Content-Range': 'bytes %d-%d/*' % byte_range} response_headers = [('x-amz-sha256-tree-hash', u'TreeHash')] uri = 'vaults/%s/multipart-uploads/%s' % (vault_name, upload_id) return self.make_request('PUT', uri, headers=headers, data=part_data, ok_responses=(204,), response_headers=response_headers) boto-2.20.1/boto/glacier/layer2.py000066400000000000000000000072111225267101000166550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from .layer1 import Layer1 from .vault import Vault class Layer2(object): """ Provides a more pythonic and friendly interface to Glacier based on Layer1 """ def __init__(self, *args, **kwargs): # Accept a passed in layer1, mainly to allow easier testing if "layer1" in kwargs: self.layer1 = kwargs["layer1"] else: self.layer1 = Layer1(*args, **kwargs) def create_vault(self, name): """Creates a vault. :type name: str :param name: The name of the vault :rtype: :class:`boto.glacier.vault.Vault` :return: A Vault object representing the vault. """ self.layer1.create_vault(name) return self.get_vault(name) def delete_vault(self, name): """Delete a vault. This operation deletes a vault. Amazon Glacier will delete a vault only if there are no archives in the vault as per the last inventory and there have been no writes to the vault since the last inventory. If either of these conditions is not satisfied, the vault deletion fails (that is, the vault is not removed) and Amazon Glacier returns an error. This operation is idempotent, you can send the same request multiple times and it has no further effect after the first time Amazon Glacier delete the specified vault. :type vault_name: str :param vault_name: The name of the vault to delete. """ return self.layer1.delete_vault(name) def get_vault(self, name): """ Get an object representing a named vault from Glacier. This operation does not check if the vault actually exists. :type name: str :param name: The name of the vault :rtype: :class:`boto.glacier.vault.Vault` :return: A Vault object representing the vault. """ response_data = self.layer1.describe_vault(name) return Vault(self.layer1, response_data) def list_vaults(self): """ Return a list of all vaults associated with the account ID. :rtype: List of :class:`boto.glacier.vault.Vault` :return: A list of Vault objects. 
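Example (a minimal sketch; the region name is illustrative and credentials are read from the usual boto configuration)::

    import boto.glacier
    layer2 = boto.glacier.connect_to_region('us-east-1')
    for vault in layer2.list_vaults():
        print vault.name, vault.size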
""" vaults = [] marker = None while True: response_data = self.layer1.list_vaults(marker=marker, limit=1000) vaults.extend([Vault(self.layer1, rd) for rd in response_data['VaultList']]) marker = response_data.get('Marker') if not marker: break return vaults boto-2.20.1/boto/glacier/response.py000066400000000000000000000042011225267101000173110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.compat import json class GlacierResponse(dict): """ Represents a response from Glacier layer1. It acts as a dictionary containing the combined keys received via JSON in the body (if supplied) and headers. """ def __init__(self, http_response, response_headers): self.http_response = http_response self.status = http_response.status self[u'RequestId'] = http_response.getheader('x-amzn-requestid') if response_headers: for header_name, item_name in response_headers: self[item_name] = http_response.getheader(header_name) if http_response.getheader('Content-Type') == 'application/json': body = json.loads(http_response.read()) self.update(body) size = http_response.getheader('Content-Length', None) if size is not None: self.size = size def read(self, amt=None): "Reads and returns the response body, or up to the next amt bytes." return self.http_response.read(amt) boto-2.20.1/boto/glacier/utils.py000066400000000000000000000132351225267101000166220ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import hashlib import math _MEGABYTE = 1024 * 1024 DEFAULT_PART_SIZE = 4 * _MEGABYTE MAXIMUM_NUMBER_OF_PARTS = 10000 def minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE): """Calculate the minimum part size needed for a multipart upload. Glacier allows a maximum of 10,000 parts per upload. It also states that the maximum archive size is 10,000 * 4 GB, which means the part size can range from 1MB to 4GB (provided it is one 1MB multiplied by a power of 2). This function will compute what the minimum part size must be in order to upload a file of size ``size_in_bytes``. It will first check if ``default_part_size`` is sufficient for a part size given the ``size_in_bytes``. If this is not the case, then the smallest part size than can accomodate a file of size ``size_in_bytes`` will be returned. If the file size is greater than the maximum allowed archive size of 10,000 * 4GB, a ``ValueError`` will be raised. """ # The default part size (4 MB) will be too small for a very large # archive, as there is a limit of 10,000 parts in a multipart upload. # This puts the maximum allowed archive size with the default part size # at 40,000 MB. We need to do a sanity check on the part size, and find # one that works if the default is too small. part_size = _MEGABYTE if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes: if size_in_bytes > (4096 * _MEGABYTE * 10000): raise ValueError("File size too large: %s" % size_in_bytes) min_part_size = size_in_bytes / 10000 power = 3 while part_size < min_part_size: part_size = math.ldexp(_MEGABYTE, power) power += 1 part_size = int(part_size) else: part_size = default_part_size return part_size def chunk_hashes(bytestring, chunk_size=_MEGABYTE): chunk_count = int(math.ceil(len(bytestring) / float(chunk_size))) hashes = [] for i in xrange(chunk_count): start = i * chunk_size end = (i + 1) * chunk_size hashes.append(hashlib.sha256(bytestring[start:end]).digest()) if not hashes: return [hashlib.sha256('').digest()] return hashes def tree_hash(fo): """ Given a hash of each 1MB chunk (from chunk_hashes) this will hash together adjacent hashes until it ends up with one big one. So a tree of hashes. """ hashes = [] hashes.extend(fo) while len(hashes) > 1: new_hashes = [] while True: if len(hashes) > 1: first = hashes.pop(0) second = hashes.pop(0) new_hashes.append(hashlib.sha256(first + second).digest()) elif len(hashes) == 1: only = hashes.pop(0) new_hashes.append(only) else: break hashes.extend(new_hashes) return hashes[0] def compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024): """Compute the linear and tree hash from a fileobj. This function will compute the linear/tree hash of a fileobj in a single pass through the fileobj. :param fileobj: A file like object. :param chunk_size: The size of the chunks to use for the tree hash. This is also the buffer size used to read from `fileobj`. :rtype: tuple :return: A tuple of (linear_hash, tree_hash). Both hashes are returned in hex. 
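Example (a minimal sketch; the filename is illustrative)::

    with open('archive.dat', 'rb') as fileobj:
        linear_hash, tree_hash = compute_hashes_from_fileobj(fileobj)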
""" linear_hash = hashlib.sha256() chunks = [] chunk = fileobj.read(chunk_size) while chunk: linear_hash.update(chunk) chunks.append(hashlib.sha256(chunk).digest()) chunk = fileobj.read(chunk_size) if not chunks: chunks = [hashlib.sha256('').digest()] return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks)) def bytes_to_hex(str_as_bytes): return ''.join(["%02x" % ord(x) for x in str_as_bytes]).strip() def tree_hash_from_str(str_as_bytes): """ :type str_as_bytes: str :param str_as_bytes: The string for which to compute the tree hash. :rtype: str :return: The computed tree hash, returned as hex. """ return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes))) class ResettingFileSender(object): def __init__(self, archive): self._archive = archive self._starting_offset = archive.tell() def __call__(self, connection, method, path, body, headers): try: connection.request(method, path, self._archive, headers) return connection.getresponse() finally: self._archive.seek(self._starting_offset) boto-2.20.1/boto/glacier/vault.py000066400000000000000000000375041225267101000166220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # Copyright (c) 2012 Robie Basak # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from __future__ import with_statement from .exceptions import UploadArchiveError from .job import Job from .writer import compute_hashes_from_fileobj, resume_file_upload, Writer from .concurrent import ConcurrentUploader from .utils import minimum_part_size, DEFAULT_PART_SIZE import os.path _MEGABYTE = 1024 * 1024 _GIGABYTE = 1024 * _MEGABYTE MAXIMUM_ARCHIVE_SIZE = 10000 * 4 * _GIGABYTE MAXIMUM_NUMBER_OF_PARTS = 10000 class Vault(object): DefaultPartSize = DEFAULT_PART_SIZE SingleOperationThreshold = 100 * _MEGABYTE ResponseDataElements = (('VaultName', 'name', None), ('VaultARN', 'arn', None), ('CreationDate', 'creation_date', None), ('LastInventoryDate', 'last_inventory_date', None), ('SizeInBytes', 'size', 0), ('NumberOfArchives', 'number_of_archives', 0)) def __init__(self, layer1, response_data=None): self.layer1 = layer1 if response_data: for response_name, attr_name, default in self.ResponseDataElements: value = response_data[response_name] if isinstance(value, unicode): value = value.encode('utf8') setattr(self, attr_name, value) else: for response_name, attr_name, default in self.ResponseDataElements: setattr(self, attr_name, default) def __repr__(self): return 'Vault("%s")' % self.arn def delete(self): """ Delete's this vault. WARNING! 
""" self.layer1.delete_vault(self.name) def upload_archive(self, filename, description=None): """ Adds an archive to a vault. For archives greater than 100MB the multipart upload will be used. :type file: str :param file: A filename to upload :type description: str :param description: An optional description for the archive. :rtype: str :return: The archive id of the newly created archive """ if os.path.getsize(filename) > self.SingleOperationThreshold: return self.create_archive_from_file(filename, description=description) return self._upload_archive_single_operation(filename, description) def _upload_archive_single_operation(self, filename, description): """ Adds an archive to a vault in a single operation. It's recommended for archives less than 100MB :type file: str :param file: A filename to upload :type description: str :param description: A description for the archive. :rtype: str :return: The archive id of the newly created archive """ with open(filename, 'rb') as fileobj: linear_hash, tree_hash = compute_hashes_from_fileobj(fileobj) fileobj.seek(0) response = self.layer1.upload_archive(self.name, fileobj, linear_hash, tree_hash, description) return response['ArchiveId'] def create_archive_writer(self, part_size=DefaultPartSize, description=None): """ Create a new archive and begin a multi-part upload to it. Returns a file-like object to which the data for the archive can be written. Once all the data is written the file-like object should be closed, you can then call the get_archive_id method on it to get the ID of the created archive. :type part_size: int :param part_size: The part size for the multipart upload. :type description: str :param description: An optional description for the archive. :rtype: :class:`boto.glacier.writer.Writer` :return: A Writer object that to which the archive data should be written. """ response = self.layer1.initiate_multipart_upload(self.name, part_size, description) return Writer(self, response['UploadId'], part_size=part_size) def create_archive_from_file(self, filename=None, file_obj=None, description=None, upload_id_callback=None): """ Create a new archive and upload the data from the given file or file-like object. :type filename: str :param filename: A filename to upload :type file_obj: file :param file_obj: A file-like object to upload :type description: str :param description: An optional description for the archive. :type upload_id_callback: function :param upload_id_callback: if set, call with the upload_id as the only parameter when it becomes known, to enable future calls to resume_archive_from_file in case resume is needed. 
:rtype: str :return: The archive id of the newly created archive """ part_size = self.DefaultPartSize if not file_obj: file_size = os.path.getsize(filename) try: part_size = minimum_part_size(file_size, part_size) except ValueError: raise UploadArchiveError("File size of %s bytes exceeds " "40,000 GB archive limit of Glacier." % file_size) file_obj = open(filename, "rb") writer = self.create_archive_writer( description=description, part_size=part_size) if upload_id_callback: upload_id_callback(writer.upload_id) while True: data = file_obj.read(part_size) if not data: break writer.write(data) writer.close() return writer.get_archive_id() @staticmethod def _range_string_to_part_index(range_string, part_size): start, inside_end = [int(value) for value in range_string.split('-')] end = inside_end + 1 length = end - start if length == part_size + 1: # Off-by-one bug in Amazon's Glacier implementation, # see: https://forums.aws.amazon.com/thread.jspa?threadID=106866 # Workaround: since part_size is too big by one byte, adjust it end -= 1 inside_end -= 1 length -= 1 assert not (start % part_size), ( "upload part start byte is not on a part boundary") assert (length <= part_size), "upload part is bigger than part size" return start // part_size def resume_archive_from_file(self, upload_id, filename=None, file_obj=None): """Resume upload of a file already part-uploaded to Glacier. The resumption of an upload where the part-uploaded section is empty is a valid degenerate case that this function can handle. One and only one of filename or file_obj must be specified. :type upload_id: str :param upload_id: existing Glacier upload id of upload being resumed. :type filename: str :param filename: file to open for resume :type file_obj: file :param file_obj: file-like object containing local data to resume. This must read from the start of the entire upload, not just from the point being resumed. Use file_obj.seek(0) to achieve this if necessary. :rtype: str :return: The archive id of the newly created archive """ part_list_response = self.list_all_parts(upload_id) part_size = part_list_response['PartSizeInBytes'] part_hash_map = {} for part_desc in part_list_response['Parts']: part_index = self._range_string_to_part_index( part_desc['RangeInBytes'], part_size) part_tree_hash = part_desc['SHA256TreeHash'].decode('hex') part_hash_map[part_index] = part_tree_hash if not file_obj: file_obj = open(filename, "rb") return resume_file_upload( self, upload_id, part_size, file_obj, part_hash_map) def concurrent_create_archive_from_file(self, filename, description, **kwargs): """ Create a new archive from a file and upload the given file. This is a convenience method around the :class:`boto.glacier.concurrent.ConcurrentUploader` class. This method will perform a multipart upload and upload the parts of the file concurrently. :type filename: str :param filename: A filename to upload :param kwargs: Additional kwargs to pass through to :py:class:`boto.glacier.concurrent.ConcurrentUploader`. You can pass any argument besides the ``api`` and ``vault_name`` param (these arguments are already passed to the ``ConcurrentUploader`` for you). :raises: `boto.glacier.exceptions.UploadArchiveError` if an error occurs during the upload process.
:rtype: str :return: The archive id of the newly created archive """ uploader = ConcurrentUploader(self.layer1, self.name, **kwargs) archive_id = uploader.upload(filename, description) return archive_id def retrieve_archive(self, archive_id, sns_topic=None, description=None): """ Initiate a archive retrieval job to download the data from an archive. You will need to wait for the notification from Amazon (via SNS) before you can actually download the data, this takes around 4 hours. :type archive_id: str :param archive_id: The id of the archive :type description: str :param description: An optional description for the job. :type sns_topic: str :param sns_topic: The Amazon SNS topic ARN where Amazon Glacier sends notification when the job is completed and the output is ready for you to download. :rtype: :class:`boto.glacier.job.Job` :return: A Job object representing the retrieval job. """ job_data = {'Type': 'archive-retrieval', 'ArchiveId': archive_id} if sns_topic is not None: job_data['SNSTopic'] = sns_topic if description is not None: job_data['Description'] = description response = self.layer1.initiate_job(self.name, job_data) return self.get_job(response['JobId']) def retrieve_inventory(self, sns_topic=None, description=None): """ Initiate a inventory retrieval job to list the items in the vault. You will need to wait for the notification from Amazon (via SNS) before you can actually download the data, this takes around 4 hours. :type description: str :param description: An optional description for the job. :type sns_topic: str :param sns_topic: The Amazon SNS topic ARN where Amazon Glacier sends notification when the job is completed and the output is ready for you to download. :rtype: str :return: The ID of the job """ job_data = {'Type': 'inventory-retrieval'} if sns_topic is not None: job_data['SNSTopic'] = sns_topic if description is not None: job_data['Description'] = description response = self.layer1.initiate_job(self.name, job_data) return response['JobId'] def retrieve_inventory_job(self, **kwargs): """ Identical to ``retrieve_inventory``, but returns a ``Job`` instance instead of just the job ID. :type description: str :param description: An optional description for the job. :type sns_topic: str :param sns_topic: The Amazon SNS topic ARN where Amazon Glacier sends notification when the job is completed and the output is ready for you to download. :rtype: :class:`boto.glacier.job.Job` :return: A Job object representing the retrieval job. """ job_id = self.retrieve_inventory(**kwargs) return self.get_job(job_id) def delete_archive(self, archive_id): """ This operation deletes an archive from the vault. :type archive_id: str :param archive_id: The ID for the archive to be deleted. """ return self.layer1.delete_archive(self.name, archive_id) def get_job(self, job_id): """ Get an object representing a job in progress. :type job_id: str :param job_id: The ID of the job :rtype: :class:`boto.glacier.job.Job` :return: A Job object representing the job. """ response_data = self.layer1.describe_job(self.name, job_id) return Job(self, response_data) def list_jobs(self, completed=None, status_code=None): """ Return a list of Job objects related to this vault. :type completed: boolean :param completed: Specifies the state of the jobs to return. If a value of True is passed, only completed jobs will be returned. If a value of False is passed, only uncompleted jobs will be returned. If no value is passed, all jobs will be returned. 
:type status_code: string :param status_code: Specifies the type of job status to return. Valid values are: InProgress|Succeeded|Failed. If not specified, jobs with all status codes are returned. :rtype: list of :class:`boto.glacier.job.Job` :return: A list of Job objects related to this vault. """ response_data = self.layer1.list_jobs(self.name, completed, status_code) return [Job(self, jd) for jd in response_data['JobList']] def list_all_parts(self, upload_id): """Automatically make and combine multiple calls to list_parts. Call list_parts as necessary, combining the results in case multiple calls were required to get data on all available parts. """ result = self.layer1.list_parts(self.name, upload_id) marker = result['Marker'] while marker: additional_result = self.layer1.list_parts( self.name, upload_id, marker=marker) result['Parts'].extend(additional_result['Parts']) marker = additional_result['Marker'] # The marker makes no sense in an unpaginated result, and clearing it # makes testing easier. This also has the nice property that the result # is a normal (but expanded) response. result['Marker'] = None return result boto-2.20.1/boto/glacier/writer.py000066400000000000000000000226621225267101000170020ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # Copyright (c) 2012 Robie Basak # Tree hash implementation from Aaron Brady bradya@gmail.com # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import hashlib from boto.glacier.utils import chunk_hashes, tree_hash, bytes_to_hex # This import is provided for backwards compatibility. This function is # now in boto.glacier.utils, but any existing code can still import # this directly from this module. from boto.glacier.utils import compute_hashes_from_fileobj _ONE_MEGABYTE = 1024 * 1024 class _Partitioner(object): """Convert variable-size writes into part-sized writes Call write(data) with variable sized data as needed to write all data. Call flush() after all data is written. This instance will call send_fn(part_data) as needed in part_size pieces, except for the final part which may be shorter than part_size. Make sure to call flush() to ensure that a short final part results in a final send_fn call. 
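Example (a minimal sketch; a tiny part size is used purely for illustration, real Glacier parts are at least 1 MB)::

    parts = []
    partitioner = _Partitioner(part_size=4, send_fn=parts.append)
    partitioner.write('abcdefg')
    partitioner.flush()
    # parts is now ['abcd', 'efg']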
""" def __init__(self, part_size, send_fn): self.part_size = part_size self.send_fn = send_fn self._buffer = [] self._buffer_size = 0 def write(self, data): if data == '': return self._buffer.append(data) self._buffer_size += len(data) while self._buffer_size > self.part_size: self._send_part() def _send_part(self): data = ''.join(self._buffer) # Put back any data remaining over the part size into the # buffer if len(data) > self.part_size: self._buffer = [data[self.part_size:]] self._buffer_size = len(self._buffer[0]) else: self._buffer = [] self._buffer_size = 0 # The part we will send part = data[:self.part_size] self.send_fn(part) def flush(self): if self._buffer_size > 0: self._send_part() class _Uploader(object): """Upload to a Glacier upload_id. Call upload_part for each part (in any order) and then close to complete the upload. """ def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE): self.vault = vault self.upload_id = upload_id self.part_size = part_size self.chunk_size = chunk_size self.archive_id = None self._uploaded_size = 0 self._tree_hashes = [] self.closed = False def _insert_tree_hash(self, index, raw_tree_hash): list_length = len(self._tree_hashes) if index >= list_length: self._tree_hashes.extend([None] * (list_length - index + 1)) self._tree_hashes[index] = raw_tree_hash def upload_part(self, part_index, part_data): """Upload a part to Glacier. :param part_index: part number where 0 is the first part :param part_data: data to upload corresponding to this part """ if self.closed: raise ValueError("I/O operation on closed file") # Create a request and sign it part_tree_hash = tree_hash(chunk_hashes(part_data, self.chunk_size)) self._insert_tree_hash(part_index, part_tree_hash) hex_tree_hash = bytes_to_hex(part_tree_hash) linear_hash = hashlib.sha256(part_data).hexdigest() start = self.part_size * part_index content_range = (start, (start + len(part_data)) - 1) response = self.vault.layer1.upload_part(self.vault.name, self.upload_id, linear_hash, hex_tree_hash, content_range, part_data) response.read() self._uploaded_size += len(part_data) def skip_part(self, part_index, part_tree_hash, part_length): """Skip uploading of a part. The final close call needs to calculate the tree hash and total size of all uploaded data, so this is the mechanism for resume functionality to provide it without actually uploading the data again. :param part_index: part number where 0 is the first part :param part_tree_hash: binary tree_hash of part being skipped :param part_length: length of part being skipped """ if self.closed: raise ValueError("I/O operation on closed file") self._insert_tree_hash(part_index, part_tree_hash) self._uploaded_size += part_length def close(self): if self.closed: return if None in self._tree_hashes: raise RuntimeError("Some parts were not uploaded.") # Complete the multiplart glacier upload hex_tree_hash = bytes_to_hex(tree_hash(self._tree_hashes)) response = self.vault.layer1.complete_multipart_upload( self.vault.name, self.upload_id, hex_tree_hash, self._uploaded_size) self.archive_id = response['ArchiveId'] self.closed = True def generate_parts_from_fobj(fobj, part_size): data = fobj.read(part_size) while data: yield data data = fobj.read(part_size) def resume_file_upload(vault, upload_id, part_size, fobj, part_hash_map, chunk_size=_ONE_MEGABYTE): """Resume upload of a file already part-uploaded to Glacier. The resumption of an upload where the part-uploaded section is empty is a valid degenerate case that this function can handle. 
In this case, part_hash_map should be an empty dict. :param vault: boto.glacier.vault.Vault object. :param upload_id: existing Glacier upload id of upload being resumed. :param part_size: part size of existing upload. :param fobj: file object containing local data to resume. This must read from the start of the entire upload, not just from the point being resumed. Use fobj.seek(0) to achieve this if necessary. :param part_hash_map: {part_index: part_tree_hash, ...} of data already uploaded. Each supplied part_tree_hash will be verified and the part re-uploaded if there is a mismatch. :param chunk_size: chunk size of tree hash calculation. This must be 1 MiB for Amazon. """ uploader = _Uploader(vault, upload_id, part_size, chunk_size) for part_index, part_data in enumerate( generate_parts_from_fobj(fobj, part_size)): part_tree_hash = tree_hash(chunk_hashes(part_data, chunk_size)) if (part_index not in part_hash_map or part_hash_map[part_index] != part_tree_hash): uploader.upload_part(part_index, part_data) else: uploader.skip_part(part_index, part_tree_hash, len(part_data)) uploader.close() return uploader.archive_id class Writer(object): """ Presents a file-like object for writing to a Amazon Glacier Archive. The data is written using the multi-part upload API. """ def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE): self.uploader = _Uploader(vault, upload_id, part_size, chunk_size) self.partitioner = _Partitioner(part_size, self._upload_part) self.closed = False self.next_part_index = 0 def write(self, data): if self.closed: raise ValueError("I/O operation on closed file") self.partitioner.write(data) def _upload_part(self, part_data): self.uploader.upload_part(self.next_part_index, part_data) self.next_part_index += 1 def close(self): if self.closed: return self.partitioner.flush() self.uploader.close() self.closed = True def get_archive_id(self): self.close() return self.uploader.archive_id @property def current_tree_hash(self): """ Returns the current tree hash for the data that's been written **so far**. Only once the writing is complete is the final tree hash returned. """ return tree_hash(self.uploader._tree_hashes) @property def current_uploaded_size(self): """ Returns the current uploaded size for the data that's been written **so far**. Only once the writing is complete is the final uploaded size returned. """ return self.uploader._uploaded_size @property def upload_id(self): return self.uploader.upload_id @property def vault(self): return self.uploader.vault boto-2.20.1/boto/gs/000077500000000000000000000000001225267101000141075ustar00rootroot00000000000000boto-2.20.1/boto/gs/__init__.py000077500000000000000000000020641225267101000162250ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/gs/acl.py000077500000000000000000000263011225267101000152250ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.gs.user import User from boto.exception import InvalidAclError ACCESS_CONTROL_LIST = 'AccessControlList' ALL_AUTHENTICATED_USERS = 'AllAuthenticatedUsers' ALL_USERS = 'AllUsers' DISPLAY_NAME = 'DisplayName' DOMAIN = 'Domain' EMAIL_ADDRESS = 'EmailAddress' ENTRY = 'Entry' ENTRIES = 'Entries' GROUP_BY_DOMAIN = 'GroupByDomain' GROUP_BY_EMAIL = 'GroupByEmail' GROUP_BY_ID = 'GroupById' ID = 'ID' NAME = 'Name' OWNER = 'Owner' PERMISSION = 'Permission' SCOPE = 'Scope' TYPE = 'type' USER_BY_EMAIL = 'UserByEmail' USER_BY_ID = 'UserById' CannedACLStrings = ['private', 'public-read', 'project-private', 'public-read-write', 'authenticated-read', 'bucket-owner-read', 'bucket-owner-full-control'] """A list of Google Cloud Storage predefined (canned) ACL strings.""" SupportedPermissions = ['READ', 'WRITE', 'FULL_CONTROL'] """A list of supported ACL permissions.""" class ACL(object): def __init__(self, parent=None): self.parent = parent self.entries = Entries(self) @property def acl(self): return self def __repr__(self): # Owner is optional in GS ACLs. if hasattr(self, 'owner'): entries_repr = ['Owner:%s' % self.owner.__repr__()] else: entries_repr = [''] acl_entries = self.entries if acl_entries: for e in acl_entries.entry_list: entries_repr.append(e.__repr__()) return '<%s>' % ', '.join(entries_repr) # Method with same signature as boto.s3.acl.ACL.add_email_grant(), to allow # polymorphic treatment at application layer. def add_email_grant(self, permission, email_address): entry = Entry(type=USER_BY_EMAIL, email_address=email_address, permission=permission) self.entries.entry_list.append(entry) # Method with same signature as boto.s3.acl.ACL.add_user_grant(), to allow # polymorphic treatment at application layer. 
def add_user_grant(self, permission, user_id): entry = Entry(permission=permission, type=USER_BY_ID, id=user_id) self.entries.entry_list.append(entry) def add_group_email_grant(self, permission, email_address): entry = Entry(type=GROUP_BY_EMAIL, email_address=email_address, permission=permission) self.entries.entry_list.append(entry) def add_group_grant(self, permission, group_id): entry = Entry(type=GROUP_BY_ID, id=group_id, permission=permission) self.entries.entry_list.append(entry) def startElement(self, name, attrs, connection): if name.lower() == OWNER.lower(): self.owner = User(self) return self.owner elif name.lower() == ENTRIES.lower(): self.entries = Entries(self) return self.entries else: return None def endElement(self, name, value, connection): if name.lower() == OWNER.lower(): pass elif name.lower() == ENTRIES.lower(): pass else: setattr(self, name, value) def to_xml(self): s = '<%s>' % ACCESS_CONTROL_LIST # Owner is optional in GS ACLs. if hasattr(self, 'owner'): s += self.owner.to_xml() acl_entries = self.entries if acl_entries: s += acl_entries.to_xml() s += '' % ACCESS_CONTROL_LIST return s class Entries(object): def __init__(self, parent=None): self.parent = parent # Entries is the class that represents the same-named XML # element. entry_list is the list within this class that holds the data. self.entry_list = [] def __repr__(self): entries_repr = [] for e in self.entry_list: entries_repr.append(e.__repr__()) return '' % ', '.join(entries_repr) def startElement(self, name, attrs, connection): if name.lower() == ENTRY.lower(): entry = Entry(self) self.entry_list.append(entry) return entry else: return None def endElement(self, name, value, connection): if name.lower() == ENTRY.lower(): pass else: setattr(self, name, value) def to_xml(self): if not self.entry_list: return '' s = '<%s>' % ENTRIES for entry in self.entry_list: s += entry.to_xml() s += '' % ENTRIES return s # Class that represents a single (Scope, Permission) entry in an ACL. class Entry(object): def __init__(self, scope=None, type=None, id=None, name=None, email_address=None, domain=None, permission=None): if not scope: scope = Scope(self, type, id, name, email_address, domain) self.scope = scope self.permission = permission def __repr__(self): return '<%s: %s>' % (self.scope.__repr__(), self.permission.__repr__()) def startElement(self, name, attrs, connection): if name.lower() == SCOPE.lower(): # The following if statement used to look like this: # if not TYPE in attrs: # which caused problems because older versions of the # AttributesImpl class in the xml.sax library neglected to include # a __contains__() method (which Python calls to implement the # 'in' operator). So when you use the in operator, like the if # statement above, Python invokes the __getiter__() method with # index 0, which raises an exception. More recent versions of # xml.sax include the __contains__() method, rendering the in # operator functional. The work-around here is to formulate the # if statement as below, which is the legal way to query # AttributesImpl for containment (and is also how the added # __contains__() method works). At one time gsutil disallowed # xmlplus-based parsers, until this more specific problem was # determined. 
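# --- Illustrative sketch (not part of boto): composing an ACL with the
# grant helpers defined above and serializing it. The email address and
# canonical user id are hypothetical placeholders.
example_acl = ACL()
example_acl.add_email_grant('READ', 'reader@example.com')
example_acl.add_user_grant('FULL_CONTROL', '1234567890abcdef')
print example_acl.to_xml()  # an <AccessControlList> document with two entries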
if TYPE not in attrs: raise InvalidAclError('Missing "%s" in "%s" part of ACL' % (TYPE, SCOPE)) self.scope = Scope(self, attrs[TYPE]) return self.scope elif name.lower() == PERMISSION.lower(): pass else: return None def endElement(self, name, value, connection): if name.lower() == SCOPE.lower(): pass elif name.lower() == PERMISSION.lower(): value = value.strip() if not value in SupportedPermissions: raise InvalidAclError('Invalid Permission "%s"' % value) self.permission = value else: setattr(self, name, value) def to_xml(self): s = '<%s>' % ENTRY s += self.scope.to_xml() s += '<%s>%s' % (PERMISSION, self.permission, PERMISSION) s += '' % ENTRY return s class Scope(object): # Map from Scope type.lower() to lower-cased list of allowed sub-elems. ALLOWED_SCOPE_TYPE_SUB_ELEMS = { ALL_AUTHENTICATED_USERS.lower() : [], ALL_USERS.lower() : [], GROUP_BY_DOMAIN.lower() : [DOMAIN.lower()], GROUP_BY_EMAIL.lower() : [ DISPLAY_NAME.lower(), EMAIL_ADDRESS.lower(), NAME.lower()], GROUP_BY_ID.lower() : [DISPLAY_NAME.lower(), ID.lower(), NAME.lower()], USER_BY_EMAIL.lower() : [ DISPLAY_NAME.lower(), EMAIL_ADDRESS.lower(), NAME.lower()], USER_BY_ID.lower() : [DISPLAY_NAME.lower(), ID.lower(), NAME.lower()] } def __init__(self, parent, type=None, id=None, name=None, email_address=None, domain=None): self.parent = parent self.type = type self.name = name self.id = id self.domain = domain self.email_address = email_address if self.type.lower() not in self.ALLOWED_SCOPE_TYPE_SUB_ELEMS: raise InvalidAclError('Invalid %s %s "%s" ' % (SCOPE, TYPE, self.type)) def __repr__(self): named_entity = None if self.id: named_entity = self.id elif self.email_address: named_entity = self.email_address elif self.domain: named_entity = self.domain if named_entity: return '<%s: %s>' % (self.type, named_entity) else: return '<%s>' % self.type def startElement(self, name, attrs, connection): if (not name.lower() in self.ALLOWED_SCOPE_TYPE_SUB_ELEMS[self.type.lower()]): raise InvalidAclError('Element "%s" not allowed in %s %s "%s" ' % (name, SCOPE, TYPE, self.type)) return None def endElement(self, name, value, connection): value = value.strip() if name.lower() == DOMAIN.lower(): self.domain = value elif name.lower() == EMAIL_ADDRESS.lower(): self.email_address = value elif name.lower() == ID.lower(): self.id = value elif name.lower() == NAME.lower(): self.name = value else: setattr(self, name, value) def to_xml(self): s = '<%s type="%s">' % (SCOPE, self.type) if (self.type.lower() == ALL_AUTHENTICATED_USERS.lower() or self.type.lower() == ALL_USERS.lower()): pass elif self.type.lower() == GROUP_BY_DOMAIN.lower(): s += '<%s>%s' % (DOMAIN, self.domain, DOMAIN) elif (self.type.lower() == GROUP_BY_EMAIL.lower() or self.type.lower() == USER_BY_EMAIL.lower()): s += '<%s>%s' % (EMAIL_ADDRESS, self.email_address, EMAIL_ADDRESS) if self.name: s += '<%s>%s' % (NAME, self.name, NAME) elif (self.type.lower() == GROUP_BY_ID.lower() or self.type.lower() == USER_BY_ID.lower()): s += '<%s>%s' % (ID, self.id, ID) if self.name: s += '<%s>%s' % (NAME, self.name, NAME) else: raise InvalidAclError('Invalid scope type "%s" ', self.type) s += '' % SCOPE return s boto-2.20.1/boto/gs/bucket.py000066400000000000000000001221441225267101000157420ustar00rootroot00000000000000# Copyright 2010 Google Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import re import urllib import xml.sax import boto from boto import handler from boto.resultset import ResultSet from boto.exception import GSResponseError from boto.exception import InvalidAclError from boto.gs.acl import ACL, CannedACLStrings from boto.gs.acl import SupportedPermissions as GSPermissions from boto.gs.bucketlistresultset import VersionedBucketListResultSet from boto.gs.cors import Cors from boto.gs.lifecycle import LifecycleConfig from boto.gs.key import Key as GSKey from boto.s3.acl import Policy from boto.s3.bucket import Bucket as S3Bucket from boto.utils import get_utf8_value # constants for http query args DEF_OBJ_ACL = 'defaultObjectAcl' STANDARD_ACL = 'acl' CORS_ARG = 'cors' LIFECYCLE_ARG = 'lifecycle' ERROR_DETAILS_REGEX = re.compile(r'
<Details>(?P<details>.*)</Details>
') class Bucket(S3Bucket): """Represents a Google Cloud Storage bucket.""" VersioningBody = ('\n' '%s' '') WebsiteBody = ('\n' '%s%s') WebsiteMainPageFragment = '%s' WebsiteErrorFragment = '%s' def __init__(self, connection=None, name=None, key_class=GSKey): super(Bucket, self).__init__(connection, name, key_class) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Name': self.name = value elif name == 'CreationDate': self.creation_date = value else: setattr(self, name, value) def get_key(self, key_name, headers=None, version_id=None, response_headers=None, generation=None): """Returns a Key instance for an object in this bucket. Note that this method uses a HEAD request to check for the existence of the key. :type key_name: string :param key_name: The name of the key to retrieve :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/06N3b for details. :type version_id: string :param version_id: Unused in this subclass. :type generation: int :param generation: A specific generation number to fetch the key at. If not specified, the latest generation is fetched. :rtype: :class:`boto.gs.key.Key` :returns: A Key object from this bucket. """ query_args_l = [] if generation: query_args_l.append('generation=%s' % generation) if response_headers: for rk, rv in response_headers.iteritems(): query_args_l.append('%s=%s' % (rk, urllib.quote(rv))) try: key, resp = self._get_key_internal(key_name, headers, query_args_l=query_args_l) except GSResponseError, e: if e.status == 403 and 'Forbidden' in e.reason: # If we failed getting an object, let the user know which object # failed rather than just returning a generic 403. e.reason = ("Access denied to 'gs://%s/%s'." % (self.name, key_name)) raise return key def copy_key(self, new_key_name, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False, encrypt_key=False, headers=None, query_args=None, src_generation=None): """Create a new key in the bucket by copying an existing key. :type new_key_name: string :param new_key_name: The name of the new key :type src_bucket_name: string :param src_bucket_name: The name of the source bucket :type src_key_name: string :param src_key_name: The name of the source key :type src_generation: int :param src_generation: The generation number of the source key to copy. If not specified, the latest generation is copied. :type metadata: dict :param metadata: Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key's metadata will be copied to the new key. :type version_id: string :param version_id: Unused in this subclass. :type storage_class: string :param storage_class: The storage class of the new key. By default, the new key will use the standard storage class. Possible values are: STANDARD | DURABLE_REDUCED_AVAILABILITY :type preserve_acl: bool :param preserve_acl: If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to GCS, one to retrieve the current ACL and one to set that ACL on the new object. 
If you don't care about the ACL (or if you have a default ACL set on the bucket), a value of False will be significantly more efficient. :type encrypt_key: bool :param encrypt_key: Included for compatibility with S3. This argument is ignored. :type headers: dict :param headers: A dictionary of header name/value pairs. :type query_args: string :param query_args: A string of additional querystring arguments to append to the request :rtype: :class:`boto.gs.key.Key` :returns: An instance of the newly created key object """ if src_generation: headers = headers or {} headers['x-goog-copy-source-generation'] = str(src_generation) return super(Bucket, self).copy_key( new_key_name, src_bucket_name, src_key_name, metadata=metadata, storage_class=storage_class, preserve_acl=preserve_acl, encrypt_key=encrypt_key, headers=headers, query_args=query_args) def list_versions(self, prefix='', delimiter='', marker='', generation_marker='', headers=None): """ List versioned objects within a bucket. This returns an instance of an VersionedBucketListResultSet that automatically handles all of the result paging, etc. from GCS. You just need to keep iterating until there are no more results. Called with no arguments, this will return an iterator object across all keys within the bucket. :type prefix: string :param prefix: allows you to limit the listing to a particular prefix. For example, if you call the method with prefix='/foo/' then the iterator will only cycle through the keys that begin with the string '/foo/'. :type delimiter: string :param delimiter: can be used in conjunction with the prefix to allow you to organize and browse your keys hierarchically. See: https://developers.google.com/storage/docs/reference-headers#delimiter for more details. :type marker: string :param marker: The "marker" of where you are in the result set :type generation_marker: string :param generation_marker: The "generation marker" of where you are in the result set. :type headers: dict :param headers: A dictionary of header name/value pairs. :rtype: :class:`boto.gs.bucketlistresultset.VersionedBucketListResultSet` :return: an instance of a BucketListResultSet that handles paging, etc. """ return VersionedBucketListResultSet(self, prefix, delimiter, marker, generation_marker, headers) def validate_get_all_versions_params(self, params): """ See documentation in boto/s3/bucket.py. """ self.validate_kwarg_names(params, ['version_id_marker', 'delimiter', 'marker', 'generation_marker', 'prefix', 'max_keys']) def delete_key(self, key_name, headers=None, version_id=None, mfa_token=None, generation=None): """ Deletes a key from the bucket. :type key_name: string :param key_name: The key name to delete :type headers: dict :param headers: A dictionary of header name/value pairs. :type version_id: string :param version_id: Unused in this subclass. :type mfa_token: tuple or list of strings :param mfa_token: Unused in this subclass. :type generation: int :param generation: The generation number of the key to delete. If not specified, the latest generation number will be deleted. :rtype: :class:`boto.gs.key.Key` :returns: A key object holding information on what was deleted. 
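# --- Illustrative sketch (not part of boto): version-aware key operations,
# assuming `bucket` is a boto.gs.bucket.Bucket on a versioned bucket. The
# object name and generation number are hypothetical.
for version in bucket.list_versions(prefix='logs/'):
    print version.name, version.generation
old = bucket.get_key('logs/app.log', generation=5)  # fetch an old generation
bucket.delete_key('logs/app.log', generation=5)     # delete only that generation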
""" query_args_l = [] if generation: query_args_l.append('generation=%s' % generation) self._delete_key_internal(key_name, headers=headers, version_id=version_id, mfa_token=mfa_token, query_args_l=query_args_l) def set_acl(self, acl_or_str, key_name='', headers=None, version_id=None, generation=None, if_generation=None, if_metageneration=None): """Sets or changes a bucket's or key's ACL. :type acl_or_str: string or :class:`boto.gs.acl.ACL` :param acl_or_str: A canned ACL string (see :data:`~.gs.acl.CannedACLStrings`) or an ACL object. :type key_name: string :param key_name: A key name within the bucket to set the ACL for. If not specified, the ACL for the bucket will be set. :type headers: dict :param headers: Additional headers to set during the request. :type version_id: string :param version_id: Unused in this subclass. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not specified, the current version is modified. :type if_generation: int :param if_generation: (optional) If set to a generation number, the acl will only be updated if its current generation number is this value. :type if_metageneration: int :param if_metageneration: (optional) If set to a metageneration number, the acl will only be updated if its current metageneration number is this value. """ if isinstance(acl_or_str, Policy): raise InvalidAclError('Attempt to set S3 Policy on GS ACL') elif isinstance(acl_or_str, ACL): self.set_xml_acl(acl_or_str.to_xml(), key_name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration) else: self.set_canned_acl(acl_or_str, key_name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration) def set_def_acl(self, acl_or_str, headers=None): """Sets or changes a bucket's default ACL. :type acl_or_str: string or :class:`boto.gs.acl.ACL` :param acl_or_str: A canned ACL string (see :data:`~.gs.acl.CannedACLStrings`) or an ACL object. :type headers: dict :param headers: Additional headers to set during the request. """ if isinstance(acl_or_str, Policy): raise InvalidAclError('Attempt to set S3 Policy on GS ACL') elif isinstance(acl_or_str, ACL): self.set_def_xml_acl(acl_or_str.to_xml(), headers=headers) else: self.set_def_canned_acl(acl_or_str, headers=headers) def _get_xml_acl_helper(self, key_name, headers, query_args): """Provides common functionality for get_xml_acl and _get_acl_helper.""" response = self.connection.make_request('GET', self.name, key_name, query_args=query_args, headers=headers) body = response.read() if response.status != 200: if response.status == 403: match = ERROR_DETAILS_REGEX.search(body) details = match.group('details') if match else None if details: details = (('
<Details>%s. Note that Full Control access' ' is required to access ACLs.</Details>
') % details) body = re.sub(ERROR_DETAILS_REGEX, details, body) raise self.connection.provider.storage_response_error( response.status, response.reason, body) return body def _get_acl_helper(self, key_name, headers, query_args): """Provides common functionality for get_acl and get_def_acl.""" body = self._get_xml_acl_helper(key_name, headers, query_args) acl = ACL(self) h = handler.XmlHandler(acl, self) xml.sax.parseString(body, h) return acl def get_acl(self, key_name='', headers=None, version_id=None, generation=None): """Returns the ACL of the bucket or an object in the bucket. :param str key_name: The name of the object to get the ACL for. If not specified, the ACL for the bucket will be returned. :param dict headers: Additional headers to set during the request. :type version_id: string :param version_id: Unused in this subclass. :param int generation: If specified, gets the ACL for a specific generation of a versioned object. If not specified, the current version is returned. This parameter is only valid when retrieving the ACL of an object, not a bucket. :rtype: :class:`.gs.acl.ACL` """ query_args = STANDARD_ACL if generation: query_args += '&generation=%s' % generation return self._get_acl_helper(key_name, headers, query_args) def get_xml_acl(self, key_name='', headers=None, version_id=None, generation=None): """Returns the ACL string of the bucket or an object in the bucket. :param str key_name: The name of the object to get the ACL for. If not specified, the ACL for the bucket will be returned. :param dict headers: Additional headers to set during the request. :type version_id: string :param version_id: Unused in this subclass. :param int generation: If specified, gets the ACL for a specific generation of a versioned object. If not specified, the current version is returned. This parameter is only valid when retrieving the ACL of an object, not a bucket. :rtype: str """ query_args = STANDARD_ACL if generation: query_args += '&generation=%s' % generation return self._get_xml_acl_helper(key_name, headers, query_args) def get_def_acl(self, headers=None): """Returns the bucket's default ACL. :param dict headers: Additional headers to set during the request. :rtype: :class:`.gs.acl.ACL` """ return self._get_acl_helper('', headers, DEF_OBJ_ACL) def _set_acl_helper(self, acl_or_str, key_name, headers, query_args, generation, if_generation, if_metageneration, canned=False): """Provides common functionality for set_acl, set_xml_acl, set_canned_acl, set_def_acl, set_def_xml_acl, and set_def_canned_acl().""" headers = headers or {} data = '' if canned: headers[self.connection.provider.acl_header] = acl_or_str else: data = acl_or_str if generation: query_args += '&generation=%s' % generation if if_metageneration is not None and if_generation is None: raise ValueError("Received if_metageneration argument with no " "if_generation argument. 
A metageneration has no " "meaning without a content generation.") if not key_name and (if_generation or if_metageneration): raise ValueError("Received if_generation or if_metageneration " "parameter while setting the ACL of a bucket.") if if_generation is not None: headers['x-goog-if-generation-match'] = str(if_generation) if if_metageneration is not None: headers['x-goog-if-metageneration-match'] = str(if_metageneration) response = self.connection.make_request( 'PUT', get_utf8_value(self.name), get_utf8_value(key_name), data=get_utf8_value(data), headers=headers, query_args=query_args) body = response.read() if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def set_xml_acl(self, acl_str, key_name='', headers=None, version_id=None, query_args='acl', generation=None, if_generation=None, if_metageneration=None): """Sets a bucket's or objects's ACL to an XML string. :type acl_str: string :param acl_str: A string containing the ACL XML. :type key_name: string :param key_name: A key name within the bucket to set the ACL for. If not specified, the ACL for the bucket will be set. :type headers: dict :param headers: Additional headers to set during the request. :type version_id: string :param version_id: Unused in this subclass. :type query_args: str :param query_args: The query parameters to pass with the request. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not specified, the current version is modified. :type if_generation: int :param if_generation: (optional) If set to a generation number, the acl will only be updated if its current generation number is this value. :type if_metageneration: int :param if_metageneration: (optional) If set to a metageneration number, the acl will only be updated if its current metageneration number is this value. """ return self._set_acl_helper(acl_str, key_name=key_name, headers=headers, query_args=query_args, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration) def set_canned_acl(self, acl_str, key_name='', headers=None, version_id=None, generation=None, if_generation=None, if_metageneration=None): """Sets a bucket's or objects's ACL using a predefined (canned) value. :type acl_str: string :param acl_str: A canned ACL string. See :data:`~.gs.acl.CannedACLStrings`. :type key_name: string :param key_name: A key name within the bucket to set the ACL for. If not specified, the ACL for the bucket will be set. :type headers: dict :param headers: Additional headers to set during the request. :type version_id: string :param version_id: Unused in this subclass. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not specified, the current version is modified. :type if_generation: int :param if_generation: (optional) If set to a generation number, the acl will only be updated if its current generation number is this value. :type if_metageneration: int :param if_metageneration: (optional) If set to a metageneration number, the acl will only be updated if its current metageneration number is this value. """ if acl_str not in CannedACLStrings: raise ValueError("Provided canned ACL string (%s) is not valid." 
% acl_str) query_args = STANDARD_ACL return self._set_acl_helper(acl_str, key_name, headers, query_args, generation, if_generation, if_metageneration, canned=True) def set_def_canned_acl(self, acl_str, headers=None): """Sets a bucket's default ACL using a predefined (canned) value. :type acl_str: string :param acl_str: A canned ACL string. See :data:`~.gs.acl.CannedACLStrings`. :type headers: dict :param headers: Additional headers to set during the request. """ if acl_str not in CannedACLStrings: raise ValueError("Provided canned ACL string (%s) is not valid." % acl_str) query_args = DEF_OBJ_ACL return self._set_acl_helper(acl_str, '', headers, query_args, generation=None, if_generation=None, if_metageneration=None, canned=True) def set_def_xml_acl(self, acl_str, headers=None): """Sets a bucket's default ACL to an XML string. :type acl_str: string :param acl_str: A string containing the ACL XML. :type headers: dict :param headers: Additional headers to set during the request. """ return self.set_xml_acl(acl_str, '', headers, query_args=DEF_OBJ_ACL) def get_cors(self, headers=None): """Returns a bucket's CORS XML document. :param dict headers: Additional headers to send with the request. :rtype: :class:`~.cors.Cors` """ response = self.connection.make_request('GET', self.name, query_args=CORS_ARG, headers=headers) body = response.read() if response.status == 200: # Success - parse XML and return Cors object. cors = Cors() h = handler.XmlHandler(cors, self) xml.sax.parseString(body, h) return cors else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def set_cors(self, cors, headers=None): """Sets a bucket's CORS XML document. :param str cors: A string containing the CORS XML. :param dict headers: Additional headers to send with the request. """ response = self.connection.make_request( 'PUT', get_utf8_value(self.name), data=get_utf8_value(cors), query_args=CORS_ARG, headers=headers) body = response.read() if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_storage_class(self): """ Returns the StorageClass for the bucket. :rtype: str :return: The StorageClass for the bucket. """ response = self.connection.make_request('GET', self.name, query_args='storageClass') body = response.read() if response.status == 200: rs = ResultSet(self) h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs.StorageClass else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) # Method with same signature as boto.s3.bucket.Bucket.add_email_grant(), # to allow polymorphic treatment at application layer. def add_email_grant(self, permission, email_address, recursive=False, headers=None): """ Convenience method that provides a quick way to add an email grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to GCS. :type permission: string :param permission: The permission being granted. Should be one of: (READ, WRITE, FULL_CONTROL). :type email_address: string :param email_address: The email address associated with the GS account your are granting the permission to. :type recursive: bool :param recursive: A boolean value to controls whether the call will apply the grant to all keys within the bucket or not. The default value is False. 
By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time! """ if permission not in GSPermissions: raise self.connection.provider.storage_permissions_error( 'Unknown Permission: %s' % permission) acl = self.get_acl(headers=headers) acl.add_email_grant(permission, email_address) self.set_acl(acl, headers=headers) if recursive: for key in self: key.add_email_grant(permission, email_address, headers=headers) # Method with same signature as boto.s3.bucket.Bucket.add_user_grant(), # to allow polymorphic treatment at application layer. def add_user_grant(self, permission, user_id, recursive=False, headers=None): """ Convenience method that provides a quick way to add a canonical user grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to GCS. :type permission: string :param permission: The permission being granted. Should be one of: (READ|WRITE|FULL_CONTROL) :type user_id: string :param user_id: The canonical user id associated with the GS account you are granting the permission to. :type recursive: bool :param recursive: A boolean value to controls whether the call will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time! """ if permission not in GSPermissions: raise self.connection.provider.storage_permissions_error( 'Unknown Permission: %s' % permission) acl = self.get_acl(headers=headers) acl.add_user_grant(permission, user_id) self.set_acl(acl, headers=headers) if recursive: for key in self: key.add_user_grant(permission, user_id, headers=headers) def add_group_email_grant(self, permission, email_address, recursive=False, headers=None): """ Convenience method that provides a quick way to add an email group grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to GCS. :type permission: string :param permission: The permission being granted. Should be one of: READ|WRITE|FULL_CONTROL See http://code.google.com/apis/storage/docs/developer-guide.html#authorization for more details on permissions. :type email_address: string :param email_address: The email address associated with the Google Group to which you are granting the permission. :type recursive: bool :param recursive: A boolean value to controls whether the call will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time! """ if permission not in GSPermissions: raise self.connection.provider.storage_permissions_error( 'Unknown Permission: %s' % permission) acl = self.get_acl(headers=headers) acl.add_group_email_grant(permission, email_address) self.set_acl(acl, headers=headers) if recursive: for key in self: key.add_group_email_grant(permission, email_address, headers=headers) # Method with same input signature as boto.s3.bucket.Bucket.list_grants() # (but returning different object type), to allow polymorphic treatment # at application layer. 
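# --- Illustrative sketch (not part of boto): applying the grant helpers
# above. The bucket name and addresses are hypothetical; recursive=True
# would walk and rewrite the ACL of every key, which can be slow.
import boto
conn = boto.connect_gs()
bucket = conn.get_bucket('example-bucket')
bucket.add_email_grant('READ', 'reader@example.com')
bucket.add_group_email_grant('READ', 'team@googlegroups.com')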
def list_grants(self, headers=None): """Returns the ACL entries applied to this bucket. :param dict headers: Additional headers to send with the request. :rtype: list containing :class:`~.gs.acl.Entry` objects. """ acl = self.get_acl(headers=headers) return acl.entries def disable_logging(self, headers=None): """Disable logging on this bucket. :param dict headers: Additional headers to send with the request. """ xml_str = '' self.set_subresource('logging', xml_str, headers=headers) def enable_logging(self, target_bucket, target_prefix=None, headers=None): """Enable logging on a bucket. :type target_bucket: bucket or string :param target_bucket: The bucket to log to. :type target_prefix: string :param target_prefix: The prefix which should be prepended to the generated log files written to the target_bucket. :param dict headers: Additional headers to send with the request. """ if isinstance(target_bucket, Bucket): target_bucket = target_bucket.name xml_str = '' xml_str = (xml_str + '%s' % target_bucket) if target_prefix: xml_str = (xml_str + '%s' % target_prefix) xml_str = xml_str + '' self.set_subresource('logging', xml_str, headers=headers) def get_logging_config_with_xml(self, headers=None): """Returns the current status of logging configuration on the bucket as unparsed XML. :param dict headers: Additional headers to send with the request. :rtype: 2-Tuple :returns: 2-tuple containing: 1) A dictionary containing the parsed XML response from GCS. The overall structure is: * Logging * LogObjectPrefix: Prefix that is prepended to log objects. * LogBucket: Target bucket for log objects. 2) Unparsed XML describing the bucket's logging configuration. """ response = self.connection.make_request('GET', self.name, query_args='logging', headers=headers) body = response.read() boto.log.debug(body) if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) e = boto.jsonresponse.Element() h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e, body def get_logging_config(self, headers=None): """Returns the current status of logging configuration on the bucket. :param dict headers: Additional headers to send with the request. :rtype: dict :returns: A dictionary containing the parsed XML response from GCS. The overall structure is: * Logging * LogObjectPrefix: Prefix that is prepended to log objects. * LogBucket: Target bucket for log objects. """ return self.get_logging_config_with_xml(headers)[0] def configure_website(self, main_page_suffix=None, error_key=None, headers=None): """Configure this bucket to act as a website :type main_page_suffix: str :param main_page_suffix: Suffix that is appended to a request that is for a "directory" on the website endpoint (e.g. if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html). The suffix must not be empty and must not include a slash character. This parameter is optional and the property is disabled if excluded. :type error_key: str :param error_key: The object key name to use when a 400 error occurs. This parameter is optional and the property is disabled if excluded. :param dict headers: Additional headers to send with the request. 
""" if main_page_suffix: main_page_frag = self.WebsiteMainPageFragment % main_page_suffix else: main_page_frag = '' if error_key: error_frag = self.WebsiteErrorFragment % error_key else: error_frag = '' body = self.WebsiteBody % (main_page_frag, error_frag) response = self.connection.make_request( 'PUT', get_utf8_value(self.name), data=get_utf8_value(body), query_args='websiteConfig', headers=headers) body = response.read() if response.status == 200: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_website_configuration(self, headers=None): """Returns the current status of website configuration on the bucket. :param dict headers: Additional headers to send with the request. :rtype: dict :returns: A dictionary containing the parsed XML response from GCS. The overall structure is: * WebsiteConfiguration * MainPageSuffix: suffix that is appended to request that is for a "directory" on the website endpoint. * NotFoundPage: name of an object to serve when site visitors encounter a 404. """ return self.get_website_configuration_with_xml(headers)[0] def get_website_configuration_with_xml(self, headers=None): """Returns the current status of website configuration on the bucket as unparsed XML. :param dict headers: Additional headers to send with the request. :rtype: 2-Tuple :returns: 2-tuple containing: 1) A dictionary containing the parsed XML response from GCS. The overall structure is: * WebsiteConfiguration * MainPageSuffix: suffix that is appended to request that is for a "directory" on the website endpoint. * NotFoundPage: name of an object to serve when site visitors encounter a 404 2) Unparsed XML describing the bucket's website configuration. """ response = self.connection.make_request('GET', self.name, query_args='websiteConfig', headers=headers) body = response.read() boto.log.debug(body) if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) e = boto.jsonresponse.Element() h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e, body def delete_website_configuration(self, headers=None): """Remove the website configuration from this bucket. :param dict headers: Additional headers to send with the request. """ self.configure_website(headers=headers) def get_versioning_status(self, headers=None): """Returns the current status of versioning configuration on the bucket. :rtype: bool """ response = self.connection.make_request('GET', self.name, query_args='versioning', headers=headers) body = response.read() boto.log.debug(body) if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) resp_json = boto.jsonresponse.Element() boto.jsonresponse.XmlHandler(resp_json, None).parse(body) resp_json = resp_json['VersioningConfiguration'] return ('Status' in resp_json) and (resp_json['Status'] == 'Enabled') def configure_versioning(self, enabled, headers=None): """Configure versioning for this bucket. :param bool enabled: If set to True, enables versioning on this bucket. If set to False, disables versioning. :param dict headers: Additional headers to send with the request. """ if enabled == True: req_body = self.VersioningBody % ('Enabled') else: req_body = self.VersioningBody % ('Suspended') self.set_subresource('versioning', req_body, headers=headers) def get_lifecycle_config(self, headers=None): """ Returns the current lifecycle configuration on the bucket. 
:rtype: :class:`boto.gs.lifecycle.LifecycleConfig` :returns: A LifecycleConfig object that describes all current lifecycle rules in effect for the bucket. """ response = self.connection.make_request('GET', self.name, query_args=LIFECYCLE_ARG, headers=headers) body = response.read() boto.log.debug(body) if response.status == 200: lifecycle_config = LifecycleConfig() h = handler.XmlHandler(lifecycle_config, self) xml.sax.parseString(body, h) return lifecycle_config else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def configure_lifecycle(self, lifecycle_config, headers=None): """ Configure lifecycle for this bucket. :type lifecycle_config: :class:`boto.gs.lifecycle.LifecycleConfig` :param lifecycle_config: The lifecycle configuration you want to configure for this bucket. """ xml = lifecycle_config.to_xml() response = self.connection.make_request( 'PUT', get_utf8_value(self.name), data=get_utf8_value(xml), query_args=LIFECYCLE_ARG, headers=headers) body = response.read() if response.status == 200: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) boto-2.20.1/boto/gs/bucketlistresultset.py000066400000000000000000000055431225267101000206140ustar00rootroot00000000000000# Copyright 2012 Google Inc. # Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. def versioned_bucket_lister(bucket, prefix='', delimiter='', marker='', generation_marker='', headers=None): """ A generator function for listing versioned objects. """ more_results = True k = None while more_results: rs = bucket.get_all_versions(prefix=prefix, marker=marker, generation_marker=generation_marker, delimiter=delimiter, headers=headers, max_keys=999) for k in rs: yield k marker = rs.next_marker generation_marker = rs.next_generation_marker more_results= rs.is_truncated class VersionedBucketListResultSet: """ A resultset for listing versions within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from GCS so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner. 
""" def __init__(self, bucket=None, prefix='', delimiter='', marker='', generation_marker='', headers=None): self.bucket = bucket self.prefix = prefix self.delimiter = delimiter self.marker = marker self.generation_marker = generation_marker self.headers = headers def __iter__(self): return versioned_bucket_lister(self.bucket, prefix=self.prefix, delimiter=self.delimiter, marker=self.marker, generation_marker=self.generation_marker, headers=self.headers) boto-2.20.1/boto/gs/connection.py000077500000000000000000000107741225267101000166340ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.gs.bucket import Bucket from boto.s3.connection import S3Connection from boto.s3.connection import SubdomainCallingFormat from boto.s3.connection import check_lowercase_bucketname from boto.utils import get_utf8_value class Location: DEFAULT = 'US' EU = 'EU' class GSConnection(S3Connection): DefaultHost = 'storage.googleapis.com' QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s' def __init__(self, gs_access_key_id=None, gs_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=DefaultHost, debug=0, https_connection_factory=None, calling_format=SubdomainCallingFormat(), path='/', suppress_consec_slashes=True): S3Connection.__init__(self, gs_access_key_id, gs_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, host, debug, https_connection_factory, calling_format, path, "google", Bucket, suppress_consec_slashes=suppress_consec_slashes) def create_bucket(self, bucket_name, headers=None, location=Location.DEFAULT, policy=None, storage_class='STANDARD'): """ Creates a new bucket. By default it's located in the USA. You can pass Location.EU to create bucket in the EU. You can also pass a LocationConstraint for where the bucket should be located, and a StorageClass describing how the data should be stored. :type bucket_name: string :param bucket_name: The name of the new bucket. :type headers: dict :param headers: Additional headers to pass along with the request to GCS. :type location: :class:`boto.gs.connection.Location` :param location: The location of the new bucket. :type policy: :class:`boto.gs.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in GCS. :type storage_class: string :param storage_class: Either 'STANDARD' or 'DURABLE_REDUCED_AVAILABILITY'. 
""" check_lowercase_bucketname(bucket_name) if policy: if headers: headers[self.provider.acl_header] = policy else: headers = {self.provider.acl_header : policy} if not location: location = Location.DEFAULT location_elem = ('%s' % location) if storage_class: storage_class_elem = ('%s' % storage_class) else: storage_class_elem = '' data = ('%s%s' % (location_elem, storage_class_elem)) response = self.make_request( 'PUT', get_utf8_value(bucket_name), headers=headers, data=get_utf8_value(data)) body = response.read() if response.status == 409: raise self.provider.storage_create_error( response.status, response.reason, body) if response.status == 200: return self.bucket_class(self, bucket_name) else: raise self.provider.storage_response_error( response.status, response.reason, body) boto-2.20.1/boto/gs/cors.py000077500000000000000000000170631225267101000154410ustar00rootroot00000000000000# Copyright 2012 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import types from boto.gs.user import User from boto.exception import InvalidCorsError from xml.sax import handler # Relevant tags for the CORS XML document. CORS_CONFIG = 'CorsConfig' CORS = 'Cors' ORIGINS = 'Origins' ORIGIN = 'Origin' METHODS = 'Methods' METHOD = 'Method' HEADERS = 'ResponseHeaders' HEADER = 'ResponseHeader' MAXAGESEC = 'MaxAgeSec' class Cors(handler.ContentHandler): """Encapsulates the CORS configuration XML document""" def __init__(self): # List of CORS elements found within a CorsConfig element. self.cors = [] # List of collections (e.g. Methods, ResponseHeaders, Origins) # found within a CORS element. We use a list of lists here # instead of a dictionary because the collections need to be # preserved in the order in which they appear in the input XML # document (and Python dictionary keys are inherently unordered). # The elements on this list are two element tuples of the form # (collection name, [list of collection contents]). self.collections = [] # Lists of elements within a collection. Again a list is needed to # preserve ordering but also because the same element may appear # multiple times within a collection. self.elements = [] # Dictionary mapping supported collection names to element types # which may be contained within each. self.legal_collections = { ORIGINS : [ORIGIN], METHODS : [METHOD], HEADERS : [HEADER], MAXAGESEC: [] } # List of supported element types within any collection, used for # checking validadity of a parsed element name. 
self.legal_elements = [ORIGIN, METHOD, HEADER] self.parse_level = 0 self.collection = None self.element = None def validateParseLevel(self, tag, level): """Verify parse level for a given tag.""" if self.parse_level != level: raise InvalidCorsError('Invalid tag %s at parse level %d: ' % (tag, self.parse_level)) def startElement(self, name, attrs, connection): """SAX XML logic for parsing new element found.""" if name == CORS_CONFIG: self.validateParseLevel(name, 0) self.parse_level += 1; elif name == CORS: self.validateParseLevel(name, 1) self.parse_level += 1; elif name in self.legal_collections: self.validateParseLevel(name, 2) self.parse_level += 1; self.collection = name elif name in self.legal_elements: self.validateParseLevel(name, 3) # Make sure this tag is found inside a collection tag. if self.collection is None: raise InvalidCorsError('Tag %s found outside collection' % name) # Make sure this tag is allowed for the current collection tag. if name not in self.legal_collections[self.collection]: raise InvalidCorsError('Tag %s not allowed in %s collection' % (name, self.collection)) self.element = name else: raise InvalidCorsError('Unsupported tag ' + name) def endElement(self, name, value, connection): """SAX XML logic for parsing new element found.""" if name == CORS_CONFIG: self.validateParseLevel(name, 1) self.parse_level -= 1; elif name == CORS: self.validateParseLevel(name, 2) self.parse_level -= 1; # Terminating a CORS element, save any collections we found # and re-initialize collections list. self.cors.append(self.collections) self.collections = [] elif name in self.legal_collections: self.validateParseLevel(name, 3) if name != self.collection: raise InvalidCorsError('Mismatched start and end tags (%s/%s)' % (self.collection, name)) self.parse_level -= 1; if not self.legal_collections[name]: # If this collection doesn't contain any sub-elements, store # a tuple of name and this tag's element value. self.collections.append((name, value.strip())) else: # Otherwise, we're terminating a collection of sub-elements, # so store a tuple of name and list of contained elements. self.collections.append((name, self.elements)) self.elements = [] self.collection = None elif name in self.legal_elements: self.validateParseLevel(name, 3) # Make sure this tag is found inside a collection tag. if self.collection is None: raise InvalidCorsError('Tag %s found outside collection' % name) # Make sure this end tag is allowed for the current collection tag. if name not in self.legal_collections[self.collection]: raise InvalidCorsError('Tag %s not allowed in %s collection' % (name, self.collection)) if name != self.element: raise InvalidCorsError('Mismatched start and end tags (%s/%s)' % (self.element, name)) # Terminating an element tag, add it to the list of elements # for the current collection. self.elements.append((name, value.strip())) self.element = None else: raise InvalidCorsError('Unsupported end tag ' + name) def to_xml(self): """Convert CORS object into XML string representation.""" s = '<' + CORS_CONFIG + '>' for collections in self.cors: s += '<' + CORS + '>' for (collection, elements_or_value) in collections: assert collection is not None s += '<' + collection + '>' # If collection elements has type string, append atomic value, # otherwise, append sequence of values in named tags. 
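# --- Illustrative sketch (not part of boto): a CORS document in the shape
# this class parses, applied through the bucket helpers defined earlier.
# The origin is a placeholder and `bucket` is a hypothetical Bucket object.
cors_xml = ('<CorsConfig><Cors>'
            '<Origins><Origin>http://example.com</Origin></Origins>'
            '<Methods><Method>GET</Method><Method>HEAD</Method></Methods>'
            '<MaxAgeSec>3600</MaxAgeSec>'
            '</Cors></CorsConfig>')
bucket.set_cors(cors_xml)  # PUT the raw XML
print bucket.get_cors().to_xml()  # round-trips through this Cors class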
if isinstance(elements_or_value, types.StringTypes): s += elements_or_value else: for (name, value) in elements_or_value: assert name is not None assert value is not None s += '<' + name + '>' + value + '' s += '' s += '' s += '' return s boto-2.20.1/boto/gs/key.py000066400000000000000000001226231225267101000152570ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import base64 import binascii import os import re import StringIO from boto.exception import BotoClientError from boto.s3.key import Key as S3Key from boto.s3.keyfile import KeyFile from boto.utils import compute_hash from boto.utils import get_utf8_value class Key(S3Key): """ Represents a key (object) in a GS bucket. :ivar bucket: The parent :class:`boto.gs.bucket.Bucket`. :ivar name: The name of this Key object. :ivar metadata: A dictionary containing user metadata that you wish to store with the object or that has been retrieved from an existing object. :ivar cache_control: The value of the `Cache-Control` HTTP header. :ivar content_type: The value of the `Content-Type` HTTP header. :ivar content_encoding: The value of the `Content-Encoding` HTTP header. :ivar content_disposition: The value of the `Content-Disposition` HTTP header. :ivar content_language: The value of the `Content-Language` HTTP header. :ivar etag: The `etag` associated with this object. :ivar last_modified: The string timestamp representing the last time this object was modified in GS. :ivar owner: The ID of the owner of this object. :ivar storage_class: The storage class of the object. Currently, one of: STANDARD | DURABLE_REDUCED_AVAILABILITY. :ivar md5: The MD5 hash of the contents of the object. :ivar size: The size, in bytes, of the object. :ivar generation: The generation number of the object. :ivar metageneration: The generation number of the object metadata. :ivar encrypted: Whether the object is encrypted while at rest on the server. :ivar cloud_hashes: Dictionary of checksums as supplied by the storage provider. 
""" def __init__(self, bucket=None, name=None, generation=None): super(Key, self).__init__(bucket=bucket, name=name) self.generation = generation self.meta_generation = None self.cloud_hashes = {} self.component_count = None def __repr__(self): if self.generation and self.metageneration: ver_str = '#%s.%s' % (self.generation, self.metageneration) else: ver_str = '' if self.bucket: return '' % (self.bucket.name, self.name, ver_str) else: return '' % (self.name, ver_str) def endElement(self, name, value, connection): if name == 'Key': self.name = value elif name == 'ETag': self.etag = value elif name == 'IsLatest': if value == 'true': self.is_latest = True else: self.is_latest = False elif name == 'LastModified': self.last_modified = value elif name == 'Size': self.size = int(value) elif name == 'StorageClass': self.storage_class = value elif name == 'Owner': pass elif name == 'VersionId': self.version_id = value elif name == 'Generation': self.generation = value elif name == 'MetaGeneration': self.metageneration = value else: setattr(self, name, value) def handle_version_headers(self, resp, force=False): self.metageneration = resp.getheader('x-goog-metageneration', None) self.generation = resp.getheader('x-goog-generation', None) def handle_addl_headers(self, headers): for key, value in headers: if key == 'x-goog-hash': for hash_pair in value.split(','): alg, b64_digest = hash_pair.strip().split('=', 1) self.cloud_hashes[alg] = binascii.a2b_base64(b64_digest) elif key == 'x-goog-component-count': self.component_count = int(value) elif key == 'x-goog-generation': self.generation = value # Use x-goog-stored-content-encoding and # x-goog-stored-content-length to indicate original content length # and encoding, which are transcoding-invariant (so are preferable # over using content-encoding and size headers). elif key == 'x-goog-stored-content-encoding': self.content_encoding = value elif key == 'x-goog-stored-content-length': self.size = int(value) def open_read(self, headers=None, query_args='', override_num_retries=None, response_headers=None): """ Open this key for reading :type headers: dict :param headers: Headers to pass in the web request :type query_args: string :param query_args: Arguments to pass in the query string (ie, 'torrent') :type override_num_retries: int :param override_num_retries: If not None will override configured num_retries parameter for underlying GET. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. """ # For GCS we need to include the object generation in the query args. # The rest of the processing is handled in the parent class. 
if self.generation: if query_args: query_args += '&' query_args += 'generation=%s' % self.generation super(Key, self).open_read(headers=headers, query_args=query_args, override_num_retries=override_num_retries, response_headers=response_headers) def get_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None, hash_algs=None): query_args = None if self.generation: query_args = ['generation=%s' % self.generation] self._get_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb, override_num_retries=override_num_retries, response_headers=response_headers, hash_algs=hash_algs, query_args=query_args) def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None, hash_algs=None): """ Retrieve an object from GCS using the name of the Key object as the key in GCS. Write the contents of the object to the file pointed to by 'fp'. :type fp: File-like object :param fp: The file to which the object's contents will be written. :type headers: dict :param headers: additional HTTP headers that will be sent with the GET request. :type cb: function :param cb: a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from GCS and the second representing the size of the to be transmitted object. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type torrent: bool :param torrent: If True, returns the contents of a torrent file as a string. :type res_download_handler: ResumableDownloadHandler :param res_download_handler: If provided, this handler will perform the download. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/sMkcC for details. """ if self.bucket != None: if res_download_handler: res_download_handler.get_file(self, fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, hash_algs=hash_algs) else: self.get_file(fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, response_headers=response_headers, hash_algs=hash_algs) def compute_hash(self, fp, algorithm, size=None): """ :type fp: file :param fp: File pointer to the file to hash. The file pointer will be reset to the same position before the method returns. :type algorithm: zero-argument constructor for hash objects that implements update() and digest() (e.g. hashlib.md5) :type size: int :param size: (optional) The maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where the file is being split in place into different parts. Fewer bytes may be available. """ hex_digest, b64_digest, data_size = compute_hash( fp, size=size, hash_algorithm=algorithm) # The internal implementation of compute_hash() needs to return the # data size, but we don't want to return that value to the external # caller because it changes the class interface (i.e. it might # break some code), so we consume the third tuple value here and # return the remainder of the tuple to the caller, thereby preserving # the existing interface.
self.size = data_size return (hex_digest, b64_digest) def send_file(self, fp, headers=None, cb=None, num_cb=10, query_args=None, chunked_transfer=False, size=None, hash_algs=None): """ Upload a file to GCS. :type fp: file :param fp: The file pointer to upload. The file pointer must point at the offset from which you wish to upload. i.e. if uploading the full file, it should point at the start of the file. Normally when a file is opened for reading, the fp will point at the first byte. See the bytes parameter below for more info. :type headers: dict :param headers: The headers to pass along with the PUT request :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. Providing a negative integer will cause your callback to be called with each buffer read. :type query_args: string :param query_args: Arguments to pass in the query string. :type chunked_transfer: boolean :param chunked_transfer: (optional) If true, we use chunked Transfer-Encoding. :type size: int :param size: (optional) The maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where you are splitting the file up into different ranges to be uploaded. If not specified, the default behaviour is to read all bytes from the file pointer. Fewer bytes may be available. :type hash_algs: dictionary :param hash_algs: (optional) Dictionary of hash algorithms and corresponding hashing class that implements update() and digest(). Defaults to {'md5': hashlib.md5}. """ self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb, query_args=query_args, chunked_transfer=chunked_transfer, size=size, hash_algs=hash_algs) def delete(self, headers=None): return self.bucket.delete_key(self.name, version_id=self.version_id, generation=self.generation, headers=headers) def add_email_grant(self, permission, email_address): """ Convenience method that provides a quick way to add an email grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to GS. :type permission: string :param permission: The permission being granted. Should be one of: READ|FULL_CONTROL See http://code.google.com/apis/storage/docs/developer-guide.html#authorization for more details on permissions. :type email_address: string :param email_address: The email address associated with the Google account to which you are granting the permission. """ acl = self.get_acl() acl.add_email_grant(permission, email_address) self.set_acl(acl) def add_user_grant(self, permission, user_id): """ Convenience method that provides a quick way to add a canonical user grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to GS. :type permission: string :param permission: The permission being granted. Should be one of: READ|FULL_CONTROL See http://code.google.com/apis/storage/docs/developer-guide.html#authorization for more details on permissions. :type user_id: string :param user_id: The canonical user id associated with the GS account to which you are granting the permission.
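Example (the canonical user id shown is hypothetical)::

    key.add_user_grant('READ', '00b4903a9721...')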
""" acl = self.get_acl() acl.add_user_grant(permission, user_id) self.set_acl(acl) def add_group_email_grant(self, permission, email_address, headers=None): """ Convenience method that provides a quick way to add an email group grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to GS. :type permission: string :param permission: The permission being granted. Should be one of: READ|FULL_CONTROL See http://code.google.com/apis/storage/docs/developer-guide.html#authorization for more details on permissions. :type email_address: string :param email_address: The email address associated with the Google Group to which you are granting the permission. """ acl = self.get_acl(headers=headers) acl.add_group_email_grant(permission, email_address) self.set_acl(acl, headers=headers) def add_group_grant(self, permission, group_id): """ Convenience method that provides a quick way to add a canonical group grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to GS. :type permission: string :param permission: The permission being granted. Should be one of: READ|FULL_CONTROL See http://code.google.com/apis/storage/docs/developer-guide.html#authorization for more details on permissions. :type group_id: string :param group_id: The canonical group id associated with the Google Groups account you are granting the permission to. """ acl = self.get_acl() acl.add_group_grant(permission, group_id) self.set_acl(acl) def set_contents_from_file(self, fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, res_upload_handler=None, size=None, rewind=False, if_generation=None): """ Store an object in GS using the name of the Key object as the key in GS and the contents of the file pointed to by 'fp' as the contents. :type fp: file :param fp: the file whose contents are to be uploaded :type headers: dict :param headers: additional HTTP headers to be sent with the PUT request. :type replace: bool :param replace: If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won't overwrite it. The default value is True which will overwrite the object. :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS and the second representing the total number of bytes that need to be transmitted. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.gs.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in GS. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. :param md5: If you need to compute the MD5 for any reason prior to upload, it's silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed. 
:type res_upload_handler: ResumableUploadHandler :param res_upload_handler: If provided, this handler will perform the upload. :type size: int :param size: (optional) The maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where you are splitting the file up into different ranges to be uploaded. If not specified, the default behaviour is to read all bytes from the file pointer. Fewer bytes may be available. Notes: 1. The "size" parameter currently cannot be used when a resumable upload handler is given but is still useful for uploading part of a file as implemented by the parent class. 2. At present Google Cloud Storage does not support multipart uploads. :type rewind: bool :param rewind: (optional) If True, the file pointer (fp) will be rewound to the start before any bytes are read from it. The default behaviour is False which reads from the current position of the file pointer (fp). :type if_generation: int :param if_generation: (optional) If set to a generation number, the object will only be written to if its current generation number is this value. If set to the value 0, the object will only be written if it doesn't already exist. :rtype: int :return: The number of bytes written to the key. TODO: At some point we should refactor the Bucket and Key classes, to move functionality common to all providers into a parent class, and provider-specific functionality into subclasses (rather than just overriding/sharing code the way it currently works). """ provider = self.bucket.connection.provider if res_upload_handler and size: # could use size instead of file_length if provided but... raise BotoClientError( '"size" param not supported for resumable uploads.') headers = headers or {} if policy: headers[provider.acl_header] = policy if rewind: # caller requests reading from beginning of fp. fp.seek(0, os.SEEK_SET) else: # The following seek/tell/seek logic is intended # to detect applications using the older interface to # set_contents_from_file(), which automatically rewound the # file each time the Key was reused. This changed with commit # 14ee2d03f4665fe20d19a85286f78d39d924237e, to support uploads # split into multiple parts and uploaded in parallel, and at # the time of that commit this check was added because otherwise # older programs would get a success status and upload an empty # object. Unfortunately, it's very inefficient for fp's implemented # by KeyFile (used, for example, by gsutil when copying between # providers). So, we skip the check for the KeyFile case. # TODO: At some point consider removing this seek/tell/seek # logic, after enough time has passed that it's unlikely any # programs remain that assume the older auto-rewind interface. if not isinstance(fp, KeyFile): spos = fp.tell() fp.seek(0, os.SEEK_END) if fp.tell() == spos: fp.seek(0, os.SEEK_SET) if fp.tell() != spos: # Raise an exception as this is likely a programming # error whereby there is data before the fp but nothing # after it. fp.seek(spos) raise AttributeError('fp is at EOF. Use rewind option ' 'or seek() to data start.') # seek back to the correct position. fp.seek(spos) if hasattr(fp, 'name'): self.path = fp.name if self.bucket != None: if isinstance(fp, KeyFile): # Avoid EOF seek for KeyFile case as it's very inefficient. key = fp.getkey() size = key.size - fp.tell() self.size = size # At present both GCS and S3 use MD5 for the etag for # non-multipart-uploaded objects.
# If the etag is 32 hex # chars use it as an MD5, to avoid having to read the file # twice while transferring. if (re.match('^"[a-fA-F0-9]{32}"$', key.etag)): etag = key.etag.strip('"') md5 = (etag, base64.b64encode(binascii.unhexlify(etag))) if size: self.size = size else: # If md5 is provided, still need the size, so # calculate it based on bytes to end of content spos = fp.tell() fp.seek(0, os.SEEK_END) self.size = fp.tell() - spos fp.seek(spos) size = self.size if md5 == None: md5 = self.compute_md5(fp, size) self.md5 = md5[0] self.base64md5 = md5[1] if self.name == None: self.name = self.md5 if not replace: if self.bucket.lookup(self.name): return if if_generation is not None: headers['x-goog-if-generation-match'] = str(if_generation) if res_upload_handler: res_upload_handler.send_file(self, fp, headers, cb, num_cb) else: # Not a resumable transfer so use basic send_file mechanism. self.send_file(fp, headers, cb, num_cb, size=size) def set_contents_from_filename(self, filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=None, res_upload_handler=None, if_generation=None): """ Store an object in GS using the name of the Key object as the key in GS and the contents of the file named by 'filename'. See set_contents_from_file method for details about the parameters. :type filename: string :param filename: The name of the file that you want to put onto GS :type headers: dict :param headers: Additional headers to pass along with the request to GS. :type replace: bool :param replace: If True, replaces the contents of the file if it already exists. :type cb: function :param cb: (optional) a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS and the second representing the total number of bytes that need to be transmitted. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.gs.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in GS. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. :param md5: If you need to compute the MD5 for any reason prior to upload, it's silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed. :type res_upload_handler: ResumableUploadHandler :param res_upload_handler: If provided, this handler will perform the upload. :type if_generation: int :param if_generation: (optional) If set to a generation number, the object will only be written to if its current generation number is this value. If set to the value 0, the object will only be written if it doesn't already exist. """ # Clear out any previously computed hashes, since we are setting the # content.
self.local_hashes = {} with open(filename, 'rb') as fp: self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, res_upload_handler, if_generation=if_generation) def set_contents_from_string(self, s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, if_generation=None): """ Store an object in GCS using the name of the Key object as the key in GCS and the string 's' as the contents. See set_contents_from_file method for details about the parameters. :type headers: dict :param headers: Additional headers to pass along with the request to GCS. :type replace: bool :param replace: If True, replaces the contents of the file if it already exists. :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GCS and the second representing the size of the to be transmitted object. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.gs.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in GCS. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. :param md5: If you need to compute the MD5 for any reason prior to upload, it's silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed. :type if_generation: int :param if_generation: (optional) If set to a generation number, the object will only be written to if its current generation number is this value. If set to the value 0, the object will only be written if it doesn't already exist. """ # Clear out any previously computed md5 hashes, since we are setting the content. self.md5 = None self.base64md5 = None fp = StringIO.StringIO(get_utf8_value(s)) r = self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, if_generation=if_generation) fp.close() return r def set_contents_from_stream(self, *args, **kwargs): """ Store an object using the name of the Key object as the key in the cloud and the contents of the data stream pointed to by 'fp' as the contents. The stream object is not seekable and total size is not known. This has the implication that we can't specify the Content-Length and Content-MD5 in the header. So for huge uploads, the delay in calculating MD5 is avoided but with a penalty of inability to verify the integrity of the uploaded data. :type fp: file :param fp: the file whose contents are to be uploaded :type headers: dict :param headers: additional HTTP headers to be sent with the PUT request. :type replace: bool :param replace: If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won't overwrite it. The default value is True which will overwrite the object. :type cb: function :param cb: a callback function that will be called to report progress on the upload.
The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS and the second representing the total number of bytes that need to be transmitted. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.gs.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in GS. :type size: int :param size: (optional) The maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where you are splitting the file up into different ranges to be uploaded. If not specified, the default behaviour is to read all bytes from the file pointer. Fewer bytes may be available. :type if_generation: int :param if_generation: (optional) If set to a generation number, the object will only be written to if its current generation number is this value. If set to the value 0, the object will only be written if it doesn't already exist. """ if_generation = kwargs.pop('if_generation', None) if if_generation is not None: headers = kwargs.get('headers', {}) headers['x-goog-if-generation-match'] = str(if_generation) kwargs['headers'] = headers super(Key, self).set_contents_from_stream(*args, **kwargs) def set_acl(self, acl_or_str, headers=None, generation=None, if_generation=None, if_metageneration=None): """Sets the ACL for this object. :type acl_or_str: string or :class:`boto.gs.acl.ACL` :param acl_or_str: A canned ACL string (see :data:`~.gs.acl.CannedACLStrings`) or an ACL object. :type headers: dict :param headers: Additional headers to set during the request. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not specified, the current version is modified. :type if_generation: int :param if_generation: (optional) If set to a generation number, the acl will only be updated if its current generation number is this value. :type if_metageneration: int :param if_metageneration: (optional) If set to a metageneration number, the acl will only be updated if its current metageneration number is this value. """ if self.bucket != None: self.bucket.set_acl(acl_or_str, self.name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration) def get_acl(self, headers=None, generation=None): """Returns the ACL of this object. :param dict headers: Additional headers to set during the request. :param int generation: If specified, gets the ACL for a specific generation of a versioned object. If not specified, the current version is returned. :rtype: :class:`.gs.acl.ACL` """ if self.bucket != None: return self.bucket.get_acl(self.name, headers=headers, generation=generation) def get_xml_acl(self, headers=None, generation=None): """Returns the ACL string of this object. :param dict headers: Additional headers to set during the request. :param int generation: If specified, gets the ACL for a specific generation of a versioned object. If not specified, the current version is returned. :rtype: str """ if self.bucket != None: return self.bucket.get_xml_acl(self.name, headers=headers, generation=generation) def set_xml_acl(self, acl_str, headers=None, generation=None, if_generation=None, if_metageneration=None): """Sets this object's ACL to an XML string.
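For example (an illustrative round-trip; the XML is whatever get_xml_acl returned)::

    acl_xml = key.get_xml_acl()
    key.set_xml_acl(acl_xml)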
:type acl_str: string :param acl_str: A string containing the ACL XML. :type headers: dict :param headers: Additional headers to set during the request. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not specified, the current version is modified. :type if_generation: int :param if_generation: (optional) If set to a generation number, the acl will only be updated if its current generation number is this value. :type if_metageneration: int :param if_metageneration: (optional) If set to a metageneration number, the acl will only be updated if its current metageneration number is this value. """ if self.bucket != None: return self.bucket.set_xml_acl(acl_str, self.name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration) def set_canned_acl(self, acl_str, headers=None, generation=None, if_generation=None, if_metageneration=None): """Sets this object's ACL using a predefined (canned) value. :type acl_str: string :param acl_str: A canned ACL string. See :data:`~.gs.acl.CannedACLStrings`. :type headers: dict :param headers: Additional headers to set during the request. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not specified, the current version is modified. :type if_generation: int :param if_generation: (optional) If set to a generation number, the acl will only be updated if its current generation number is this value. :type if_metageneration: int :param if_metageneration: (optional) If set to a metageneration number, the acl will only be updated if its current metageneration number is this value. """ if self.bucket != None: return self.bucket.set_canned_acl( acl_str, self.name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration ) def compose(self, components, content_type=None, headers=None): """Create a new object from a sequence of existing objects. The content of the object representing this Key will be the concatenation of the given object sequence. For more detail, visit https://developers.google.com/storage/docs/composite-objects :type components: list of Keys :param components: List of gs.Keys representing the component objects :type content_type: (optional) string :param content_type: Content type for the new composite object. """ compose_req = [] for key in components: if key.bucket.name != self.bucket.name: raise BotoClientError( 'GCS does not support inter-bucket composing') generation_tag = '' if key.generation: generation_tag = ('<Generation>%s</Generation>' % str(key.generation)) compose_req.append('<Component><Name>%s</Name>%s</Component>' % (key.name, generation_tag)) compose_req_xml = ('<ComposeRequest>%s</ComposeRequest>' % ''.join(compose_req)) headers = headers or {} if content_type: headers['Content-Type'] = content_type resp = self.bucket.connection.make_request( 'PUT', get_utf8_value(self.bucket.name), get_utf8_value(self.name), headers=headers, query_args='compose', data=get_utf8_value(compose_req_xml)) if resp.status < 200 or resp.status > 299: raise self.bucket.connection.provider.storage_response_error( resp.status, resp.reason, resp.read()) # Return the generation so that the result URI can be built with this # for automatic parallel uploads. return resp.getheader('x-goog-generation') boto-2.20.1/boto/gs/lifecycle.py000077500000000000000000000215761225267101000164330ustar00rootroot00000000000000# Copyright 2013 Google Inc.
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.exception import InvalidLifecycleConfigError # Relevant tags for the lifecycle configuration XML document. LIFECYCLE_CONFIG = 'LifecycleConfiguration' RULE = 'Rule' ACTION = 'Action' DELETE = 'Delete' CONDITION = 'Condition' AGE = 'Age' CREATED_BEFORE = 'CreatedBefore' NUM_NEWER_VERSIONS = 'NumberOfNewerVersions' IS_LIVE = 'IsLive' # List of all action elements. LEGAL_ACTIONS = [DELETE] # List of all action parameter elements. LEGAL_ACTION_PARAMS = [] # List of all condition elements. LEGAL_CONDITIONS = [AGE, CREATED_BEFORE, NUM_NEWER_VERSIONS, IS_LIVE] # Dictionary mapping actions to supported action parameters for each action. LEGAL_ACTION_ACTION_PARAMS = { DELETE: [], } class Rule(object): """ A lifecycle rule for a bucket. :ivar action: Action to be taken. :ivar action_params: A dictionary of action specific parameters. Each item in the dictionary represents the name and value of an action parameter. :ivar conditions: A dictionary of conditions that specify when the action should be taken. Each item in the dictionary represents the name and value of a condition. """ def __init__(self, action=None, action_params=None, conditions=None): self.action = action self.action_params = action_params or {} self.conditions = conditions or {} # Name of the current enclosing tag (used to validate the schema). self.current_tag = RULE def validateStartTag(self, tag, parent): """Verify parent of the start tag.""" if self.current_tag != parent: raise InvalidLifecycleConfigError( 'Invalid tag %s found inside %s tag' % (tag, self.current_tag)) def validateEndTag(self, tag): """Verify end tag against the start tag.""" if tag != self.current_tag: raise InvalidLifecycleConfigError( 'Mismatched start and end tags (%s/%s)' % (self.current_tag, tag)) def startElement(self, name, attrs, connection): if name == ACTION: self.validateStartTag(name, RULE) elif name in LEGAL_ACTIONS: self.validateStartTag(name, ACTION) # Verify there is only one action tag in the rule. if self.action is not None: raise InvalidLifecycleConfigError( 'Only one action tag is allowed in each rule') self.action = name elif name in LEGAL_ACTION_PARAMS: # Make sure this tag is found in an action tag. if self.current_tag not in LEGAL_ACTIONS: raise InvalidLifecycleConfigError( 'Tag %s found outside of action' % name) # Make sure this tag is allowed for the current action tag. 
if name not in LEGAL_ACTION_ACTION_PARAMS[self.action]: raise InvalidLifecycleConfigError( 'Tag %s not allowed in action %s' % (name, self.action)) elif name == CONDITION: self.validateStartTag(name, RULE) elif name in LEGAL_CONDITIONS: self.validateStartTag(name, CONDITION) # Verify there are no duplicate conditions. if name in self.conditions: raise InvalidLifecycleConfigError( 'Found duplicate conditions %s' % name) else: raise InvalidLifecycleConfigError('Unsupported tag ' + name) self.current_tag = name def endElement(self, name, value, connection): self.validateEndTag(name) if name == RULE: # We have to validate the rule after it is fully populated because # the action and condition elements could be in any order. self.validate() elif name == ACTION: self.current_tag = RULE elif name in LEGAL_ACTIONS: self.current_tag = ACTION elif name in LEGAL_ACTION_PARAMS: self.current_tag = self.action # Add the action parameter name and value to the dictionary. self.action_params[name] = value.strip() elif name == CONDITION: self.current_tag = RULE elif name in LEGAL_CONDITIONS: self.current_tag = CONDITION # Add the condition name and value to the dictionary. self.conditions[name] = value.strip() else: raise InvalidLifecycleConfigError('Unsupported end tag ' + name) def validate(self): """Validate the rule.""" if not self.action: raise InvalidLifecycleConfigError( 'No action was specified in the rule') if not self.conditions: raise InvalidLifecycleConfigError( 'No condition was specified for action %s' % self.action) def to_xml(self): """Convert the rule into XML string representation.""" s = '<' + RULE + '>' s += '<' + ACTION + '>' if self.action_params: s += '<' + self.action + '>' for param in LEGAL_ACTION_PARAMS: if param in self.action_params: s += ('<' + param + '>' + self.action_params[param] + '</' + param + '>') s += '</' + self.action + '>' else: s += '<' + self.action + '/>' s += '</' + ACTION + '>' s += '<' + CONDITION + '>' for condition in LEGAL_CONDITIONS: if condition in self.conditions: s += ('<' + condition + '>' + self.conditions[condition] + '</' + condition + '>') s += '</' + CONDITION + '>' s += '</' + RULE + '>' return s class LifecycleConfig(list): """ A container of rules associated with a lifecycle configuration. """ def __init__(self): # Track if root tag has been seen. self.has_root_tag = False def startElement(self, name, attrs, connection): if name == LIFECYCLE_CONFIG: if self.has_root_tag: raise InvalidLifecycleConfigError( 'Only one root tag is allowed in the XML') self.has_root_tag = True elif name == RULE: if not self.has_root_tag: raise InvalidLifecycleConfigError('Invalid root tag ' + name) rule = Rule() self.append(rule) return rule else: raise InvalidLifecycleConfigError('Unsupported tag ' + name) def endElement(self, name, value, connection): if name == LIFECYCLE_CONFIG: pass else: raise InvalidLifecycleConfigError('Unsupported end tag ' + name) def to_xml(self): """Convert LifecycleConfig object into XML string representation.""" s = '<?xml version="1.0" encoding="UTF-8"?>' s += '<' + LIFECYCLE_CONFIG + '>' for rule in self: s += rule.to_xml() s += '</' + LIFECYCLE_CONFIG + '>' return s def add_rule(self, action, action_params, conditions): """ Add a rule to this Lifecycle configuration. This only adds the rule to the local copy. To install the new rule(s) on the bucket, you need to pass this Lifecycle config object to the configure_lifecycle method of the Bucket object. :type action: str :param action: Action to be taken. :type action_params: dict :param action_params: A dictionary of action specific parameters. Each item in the dictionary represents the name and value of an action parameter.
:type conditions: dict :param conditions: A dictionary of conditions that specify when the action should be taken. Each item in the dictionary represents the name and value of a condition. """ rule = Rule(action, action_params, conditions) self.append(rule) boto-2.20.1/boto/gs/resumable_upload_handler.py000066400000000000000000000753011225267101000215070ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import errno import httplib import os import random import re import socket import time import urlparse from boto import config, UserAgent from boto.connection import AWSAuthConnection from boto.exception import InvalidUriError from boto.exception import ResumableTransferDisposition from boto.exception import ResumableUploadException from boto.s3.keyfile import KeyFile try: from hashlib import md5 except ImportError: from md5 import md5 """ Handler for Google Cloud Storage resumable uploads. See http://code.google.com/apis/storage/docs/developer-guide.html#resumable for details. Resumable uploads will retry failed uploads, resuming at the byte count completed by the last upload attempt. If too many retries happen with no progress (per configurable num_retries param), the upload will be aborted in the current process. The caller can optionally specify a tracker_file_name param in the ResumableUploadHandler constructor. If you do this, that file will save the state needed to allow retrying later, in a separate process (e.g., in a later run of gsutil). """ class ResumableUploadHandler(object): BUFFER_SIZE = 8192 RETRYABLE_EXCEPTIONS = (httplib.HTTPException, IOError, socket.error, socket.gaierror) # (start, end) response indicating server has nothing (upload protocol uses # inclusive numbering). SERVER_HAS_NOTHING = (0, -1) def __init__(self, tracker_file_name=None, num_retries=None): """ Constructor. Instantiate once for each uploaded file. :type tracker_file_name: string :param tracker_file_name: optional file name to save tracker URI. If supplied and the current process fails the upload, it can be retried in a new process. If called with an existing file containing a valid tracker URI, we'll resume the upload from this URI; else we'll start a new resumable upload (and write the URI to this tracker file). :type num_retries: int :param num_retries: the number of times we'll re-try a resumable upload making no progress. (Count resets every time we get progress, so upload can span many more than this number of retries.) 
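Example (an illustrative sketch; the local file, tracker path, and bucket name are hypothetical)::

    import boto
    from boto.gs.resumable_upload_handler import ResumableUploadHandler

    handler = ResumableUploadHandler(
        tracker_file_name='/tmp/my_tracker', num_retries=6)
    dst = boto.storage_uri('my-bucket/big-object', 'gs').new_key()
    with open('big-file', 'rb') as fp:
        dst.set_contents_from_file(fp, res_upload_handler=handler)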
""" self.tracker_file_name = tracker_file_name self.num_retries = num_retries self.server_has_bytes = 0 # Byte count at last server check. self.tracker_uri = None if tracker_file_name: self._load_tracker_uri_from_file() # Save upload_start_point in instance state so caller can find how # much was transferred by this ResumableUploadHandler (across retries). self.upload_start_point = None def _load_tracker_uri_from_file(self): f = None try: f = open(self.tracker_file_name, 'r') uri = f.readline().strip() self._set_tracker_uri(uri) except IOError, e: # Ignore non-existent file (happens first time an upload # is attempted on a file), but warn user for other errors. if e.errno != errno.ENOENT: # Will restart because self.tracker_uri == None. print('Couldn\'t read URI tracker file (%s): %s. Restarting ' 'upload from scratch.' % (self.tracker_file_name, e.strerror)) except InvalidUriError, e: # Warn user, but proceed (will restart because # self.tracker_uri == None). print('Invalid tracker URI (%s) found in URI tracker file ' '(%s). Restarting upload from scratch.' % (uri, self.tracker_file_name)) finally: if f: f.close() def _save_tracker_uri_to_file(self): """ Saves URI to tracker file if one was passed to constructor. """ if not self.tracker_file_name: return f = None try: f = open(self.tracker_file_name, 'w') f.write(self.tracker_uri) except IOError, e: raise ResumableUploadException( 'Couldn\'t write URI tracker file (%s): %s.\nThis can happen' 'if you\'re using an incorrectly configured upload tool\n' '(e.g., gsutil configured to save tracker files to an ' 'unwritable directory)' % (self.tracker_file_name, e.strerror), ResumableTransferDisposition.ABORT) finally: if f: f.close() def _set_tracker_uri(self, uri): """ Called when we start a new resumable upload or get a new tracker URI for the upload. Saves URI and resets upload state. Raises InvalidUriError if URI is syntactically invalid. """ parse_result = urlparse.urlparse(uri) if (parse_result.scheme.lower() not in ['http', 'https'] or not parse_result.netloc): raise InvalidUriError('Invalid tracker URI (%s)' % uri) self.tracker_uri = uri self.tracker_uri_host = parse_result.netloc self.tracker_uri_path = '%s?%s' % ( parse_result.path, parse_result.query) self.server_has_bytes = 0 def get_tracker_uri(self): """ Returns upload tracker URI, or None if the upload has not yet started. """ return self.tracker_uri def get_upload_id(self): """ Returns the upload ID for the resumable upload, or None if the upload has not yet started. """ # We extract the upload_id from the tracker uri. We could retrieve the # upload_id from the headers in the response but this only works for # the case where we get the tracker uri from the service. In the case # where we get the tracker from the tracking file we need to do this # logic anyway. delim = '?upload_id=' if self.tracker_uri and delim in self.tracker_uri: return self.tracker_uri[self.tracker_uri.index(delim) + len(delim):] else: return None def _remove_tracker_file(self): if (self.tracker_file_name and os.path.exists(self.tracker_file_name)): os.unlink(self.tracker_file_name) def _build_content_range_header(self, range_spec='*', length_spec='*'): return 'bytes %s/%s' % (range_spec, length_spec) def _query_server_state(self, conn, file_length): """ Queries server to find out state of given upload. Note that this method really just makes special case use of the fact that the upload server always returns the current start/end state whenever a PUT doesn't complete. 
Returns HTTP response from sending request. Raises ResumableUploadException if problem querying server. """ # Send an empty PUT so that server replies with this resumable # transfer's state. put_headers = {} put_headers['Content-Range'] = ( self._build_content_range_header('*', file_length)) put_headers['Content-Length'] = '0' return AWSAuthConnection.make_request(conn, 'PUT', path=self.tracker_uri_path, auth_path=self.tracker_uri_path, headers=put_headers, host=self.tracker_uri_host) def _query_server_pos(self, conn, file_length): """ Queries server to find out what bytes it currently has. Returns (server_start, server_end), where the values are inclusive. For example, (0, 2) would mean that the server has bytes 0, 1, *and* 2. Raises ResumableUploadException if problem querying server. """ resp = self._query_server_state(conn, file_length) if resp.status == 200: # To handle the boundary condition where the server has the complete # file, we return (server_start, file_length-1). That way the # calling code can always simply read up through server_end. (If we # didn't handle this boundary condition here, the caller would have # to check whether server_end == file_length and read one fewer byte # in that case.) return (0, file_length - 1) # Completed upload. if resp.status != 308: # This means the server didn't have any state for the given # upload ID, which can happen (for example) if the caller saved # the tracker URI to a file and then tried to restart the transfer # after that upload ID has gone stale. In that case we need to # start a new transfer (and the caller will then save the new # tracker URI to the tracker file). raise ResumableUploadException( 'Got non-308 response (%s) from server state query' % resp.status, ResumableTransferDisposition.START_OVER) got_valid_response = False range_spec = resp.getheader('range') if range_spec: # Parse 'bytes=<from>-<to>' range_spec. m = re.search('bytes=(\d+)-(\d+)', range_spec) if m: server_start = long(m.group(1)) server_end = long(m.group(2)) got_valid_response = True else: # No Range header, which means the server does not yet have # any bytes. Note that the Range header uses inclusive 'from' # and 'to' values. Since Range 0-0 would mean that the server # has byte 0, omitting the Range header is used to indicate that # the server doesn't have any bytes. return self.SERVER_HAS_NOTHING if not got_valid_response: raise ResumableUploadException( 'Couldn\'t parse upload server state query response (%s)' % str(resp.getheaders()), ResumableTransferDisposition.START_OVER) if conn.debug >= 1: print 'Server has: Range: %d - %d.' % (server_start, server_end) return (server_start, server_end) def _start_new_resumable_upload(self, key, headers=None): """ Starts a new resumable upload. Raises ResumableUploadException if any errors occur. """ conn = key.bucket.connection if conn.debug >= 1: print 'Starting new resumable upload.' self.server_has_bytes = 0 # Start a new resumable upload by sending a POST request with an # empty body and the "X-Goog-Resumable: start" header. Include any # caller-provided headers (e.g., Content-Type) EXCEPT Content-Length # (and raise an exception if they tried to pass one, since it's # a semantic error to specify it at this point, and if we were to # include one now it would cause the server to expect that many # bytes; the POST doesn't include the actual file bytes. We set # the Content-Length in the subsequent PUT, based on the uploaded # file size.
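# An illustrative sketch of the initiation request (host and object name
# are hypothetical; the actual header name comes from
# conn.provider.resumable_upload_header):
#   POST /big-object HTTP/1.1
#   Host: my-bucket.storage.googleapis.com
#   X-Goog-Resumable: start
# A 200/201 response returns the tracker URI in its Location header.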
post_headers = {} for k in headers: if k.lower() == 'content-length': raise ResumableUploadException( 'Attempt to specify Content-Length header (disallowed)', ResumableTransferDisposition.ABORT) post_headers[k] = headers[k] post_headers[conn.provider.resumable_upload_header] = 'start' resp = conn.make_request( 'POST', key.bucket.name, key.name, post_headers) # Get tracker URI from response 'Location' header. body = resp.read() # Check for various status conditions. if resp.status in [500, 503]: # Retry status 500 and 503 errors after a delay. raise ResumableUploadException( 'Got status %d from attempt to start resumable upload. ' 'Will wait/retry' % resp.status, ResumableTransferDisposition.WAIT_BEFORE_RETRY) elif resp.status != 200 and resp.status != 201: raise ResumableUploadException( 'Got status %d from attempt to start resumable upload. ' 'Aborting' % resp.status, ResumableTransferDisposition.ABORT) # Else we got 200 or 201 response code, indicating the resumable # upload was created. tracker_uri = resp.getheader('Location') if not tracker_uri: raise ResumableUploadException( 'No resumable tracker URI found in resumable initiation ' 'POST response (%s)' % body, ResumableTransferDisposition.WAIT_BEFORE_RETRY) self._set_tracker_uri(tracker_uri) self._save_tracker_uri_to_file() def _upload_file_bytes(self, conn, http_conn, fp, file_length, total_bytes_uploaded, cb, num_cb, headers): """ Makes one attempt to upload file bytes, using an existing resumable upload connection. Returns (etag, generation, metageneration) from server upon success. Raises ResumableUploadException if any problems occur. """ buf = fp.read(self.BUFFER_SIZE) if cb: # The cb_count represents the number of full buffers to send between # cb executions. if num_cb > 2: cb_count = file_length / self.BUFFER_SIZE / (num_cb-2) elif num_cb < 0: cb_count = -1 else: cb_count = 0 i = 0 cb(total_bytes_uploaded, file_length) # Build resumable upload headers for the transfer. Don't send a # Content-Range header if the file is 0 bytes long, because the # resumable upload protocol uses an *inclusive* end-range (so, sending # 'bytes 0-0/1' would actually mean you're sending a 1-byte file). if not headers: put_headers = {} else: put_headers = headers.copy() if file_length: if total_bytes_uploaded == file_length: range_header = self._build_content_range_header( '*', file_length) else: range_header = self._build_content_range_header( '%d-%d' % (total_bytes_uploaded, file_length - 1), file_length) put_headers['Content-Range'] = range_header # Set Content-Length to the total bytes we'll send with this PUT. put_headers['Content-Length'] = str(file_length - total_bytes_uploaded) http_request = AWSAuthConnection.build_base_http_request( conn, 'PUT', path=self.tracker_uri_path, auth_path=None, headers=put_headers, host=self.tracker_uri_host) http_conn.putrequest('PUT', http_request.path) for k in put_headers: http_conn.putheader(k, put_headers[k]) http_conn.endheaders() # Turn off debug on http connection so upload content isn't included # in debug stream. 
http_conn.set_debuglevel(0) while buf: http_conn.send(buf) for alg in self.digesters: self.digesters[alg].update(buf) total_bytes_uploaded += len(buf) if cb: i += 1 if i == cb_count or cb_count == -1: cb(total_bytes_uploaded, file_length) i = 0 buf = fp.read(self.BUFFER_SIZE) http_conn.set_debuglevel(conn.debug) if cb: cb(total_bytes_uploaded, file_length) if total_bytes_uploaded != file_length: # Abort (and delete the tracker file) so if the user retries # they'll start a new resumable upload rather than potentially # attempting to pick back up later where we left off. raise ResumableUploadException( 'File changed during upload: EOF at %d bytes of %d byte file.' % (total_bytes_uploaded, file_length), ResumableTransferDisposition.ABORT) resp = http_conn.getresponse() # Restore http connection debug level. http_conn.set_debuglevel(conn.debug) if resp.status == 200: # Success. return (resp.getheader('etag'), resp.getheader('x-goog-generation'), resp.getheader('x-goog-metageneration')) # Retry timeout (408) and status 500 and 503 errors after a delay. elif resp.status in [408, 500, 503]: disposition = ResumableTransferDisposition.WAIT_BEFORE_RETRY else: # Catch all for any other error codes. disposition = ResumableTransferDisposition.ABORT raise ResumableUploadException('Got response code %d while attempting ' 'upload (%s)' % (resp.status, resp.reason), disposition) def _attempt_resumable_upload(self, key, fp, file_length, headers, cb, num_cb): """ Attempts a resumable upload. Returns (etag, generation, metageneration) from server upon success. Raises ResumableUploadException if any problems occur. """ (server_start, server_end) = self.SERVER_HAS_NOTHING conn = key.bucket.connection if self.tracker_uri: # Try to resume existing resumable upload. try: (server_start, server_end) = ( self._query_server_pos(conn, file_length)) self.server_has_bytes = server_start if server_end: # If the server already has some of the content, we need to # update the digesters with the bytes that have already been # uploaded to ensure we get a complete hash in the end. print 'Catching up hash digest(s) for resumed upload' fp.seek(0) # Read local file's bytes through position server has. For # example, if server has (0, 3) we want to read 3-0+1=4 bytes. bytes_to_go = server_end + 1 while bytes_to_go: chunk = fp.read(min(key.BufferSize, bytes_to_go)) if not chunk: raise ResumableUploadException( 'Hit end of file during resumable upload hash ' 'catchup. This should not happen under\n' 'normal circumstances, as it indicates the ' 'server has more bytes of this transfer\nthan' ' the current file size. Restarting upload.', ResumableTransferDisposition.START_OVER) for alg in self.digesters: self.digesters[alg].update(chunk) bytes_to_go -= len(chunk) if conn.debug >= 1: print 'Resuming transfer.' except ResumableUploadException, e: if conn.debug >= 1: print 'Unable to resume transfer (%s).' % e.message self._start_new_resumable_upload(key, headers) else: self._start_new_resumable_upload(key, headers) # upload_start_point allows the code that instantiated the # ResumableUploadHandler to find out the point from which it started # uploading (e.g., so it can correctly compute throughput). 
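# For example (illustrative): if a previous attempt left bytes 0-999 on
# the server, server_end is 999, upload_start_point records 999, and this
# attempt resumes from byte 1000 (total_bytes_uploaded below).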
if self.upload_start_point is None: self.upload_start_point = server_end total_bytes_uploaded = server_end + 1 # Corner case: Don't attempt to seek if we've already uploaded the # entire file, because if the file is a stream (e.g., the KeyFile # wrapper around input key when copying between providers), attempting # to seek to the end of file would result in an InvalidRange error. if file_length < total_bytes_uploaded: fp.seek(total_bytes_uploaded) conn = key.bucket.connection # Get a new HTTP connection (vs conn.get_http_connection(), which reuses # pool connections) because httplib requires a new HTTP connection per # transaction. (Without this, calling http_conn.getresponse() would get # "ResponseNotReady".) http_conn = conn.new_http_connection(self.tracker_uri_host, conn.port, conn.is_secure) http_conn.set_debuglevel(conn.debug) # Make sure to close http_conn at end so if a local file read # failure occurs partway through server will terminate current upload # and can report that progress on next attempt. try: return self._upload_file_bytes(conn, http_conn, fp, file_length, total_bytes_uploaded, cb, num_cb, headers) except (ResumableUploadException, socket.error): resp = self._query_server_state(conn, file_length) if resp.status == 400: raise ResumableUploadException('Got 400 response from server ' 'state query after failed resumable upload attempt. This ' 'can happen for various reasons, including specifying an ' 'invalid request (e.g., an invalid canned ACL) or if the ' 'file size changed between upload attempts', ResumableTransferDisposition.ABORT) else: raise finally: http_conn.close() def _check_final_md5(self, key, etag): """ Checks that etag from server agrees with md5 computed before upload. This is important, since the upload could have spanned a number of hours and multiple processes (e.g., gsutil runs), and the user could change some of the file and not realize they have inconsistent data. """ if key.bucket.connection.debug >= 1: print 'Checking md5 against etag.' if key.md5 != etag.strip('"\''): # Call key.open_read() before attempting to delete the # (incorrect-content) key, so we perform that request on a # different HTTP connection. This is needed because httplib # will return a "Response not ready" error if you try to perform # a second transaction on the connection. key.open_read() key.close() key.delete() raise ResumableUploadException( 'File changed during upload: md5 signature doesn\'t match etag ' '(incorrect uploaded object deleted)', ResumableTransferDisposition.ABORT) def handle_resumable_upload_exception(self, e, debug): if (e.disposition == ResumableTransferDisposition.ABORT_CUR_PROCESS): if debug >= 1: print('Caught non-retryable ResumableUploadException (%s); ' 'aborting but retaining tracker file' % e.message) raise elif (e.disposition == ResumableTransferDisposition.ABORT): if debug >= 1: print('Caught non-retryable ResumableUploadException (%s); ' 'aborting and removing tracker file' % e.message) self._remove_tracker_file() raise else: if debug >= 1: print('Caught ResumableUploadException (%s) - will retry' % e.message) def track_progress_less_iterations(self, server_had_bytes_before_attempt, roll_back_md5=True, debug=0): # At this point we had a re-tryable failure; see if we made progress. if self.server_has_bytes > server_had_bytes_before_attempt: self.progress_less_iterations = 0 # If progress, reset counter.
        else:
            self.progress_less_iterations += 1
            if roll_back_md5:
                # Rollback any potential hash updates, as we did not
                # make any progress in this iteration.
                self.digesters = self.digesters_before_attempt

        if self.progress_less_iterations > self.num_retries:
            # Don't retry any longer in the current process.
            raise ResumableUploadException(
                'Too many resumable upload attempts failed without '
                'progress. You might try this upload again later',
                ResumableTransferDisposition.ABORT_CUR_PROCESS)

        # Use binary exponential backoff to desynchronize client requests.
        sleep_time_secs = random.random() * (2**self.progress_less_iterations)
        if debug >= 1:
            print ('Got retryable failure (%d progress-less in a row).\n'
                   'Sleeping %3.1f seconds before re-trying' %
                   (self.progress_less_iterations, sleep_time_secs))
        time.sleep(sleep_time_secs)

    def send_file(self, key, fp, headers, cb=None, num_cb=10, hash_algs=None):
        """
        Upload a file to a key into a bucket on GS, using GS resumable upload
        protocol.

        :type key: :class:`boto.s3.key.Key` or subclass
        :param key: The Key object to which data is to be uploaded

        :type fp: file-like object
        :param fp: The file pointer to upload

        :type headers: dict
        :param headers: The headers to pass along with the PUT request

        :type cb: function
        :param cb: a callback function that will be called to report progress
            on the upload. The callback should accept two integer parameters,
            the first representing the number of bytes that have been
            successfully transmitted to GS, and the second representing the
            total number of bytes that need to be transmitted.

        :type num_cb: int
        :param num_cb: (optional) If a callback is specified with the cb
            parameter, this parameter determines the granularity of the
            callback by defining the maximum number of times the callback
            will be called during the file transfer. Providing a negative
            integer will cause your callback to be called with each buffer
            read.

        :type hash_algs: dictionary
        :param hash_algs: (optional) Dictionary mapping hash algorithm
            descriptions to corresponding state-ful hashing objects that
            implement update(), digest(), and copy() (e.g. hashlib.md5()).
            Defaults to {'md5': md5()}.

        Raises ResumableUploadException if a problem occurs during
        the transfer.
        """

        if not headers:
            headers = {}
        # If Content-Type header is present and set to None, remove it.
        # This is gsutil's way of asking boto to refrain from auto-generating
        # that header.
        CT = 'Content-Type'
        if CT in headers and headers[CT] is None:
            del headers[CT]

        headers['User-Agent'] = UserAgent

        # Determine file size different ways for case where fp is actually a
        # wrapper around a Key vs an actual file.
        if isinstance(fp, KeyFile):
            file_length = fp.getkey().size
        else:
            fp.seek(0, os.SEEK_END)
            file_length = fp.tell()
            fp.seek(0)

        debug = key.bucket.connection.debug

        # Compute the MD5 checksum on the fly.
        if hash_algs is None:
            hash_algs = {'md5': md5}
        self.digesters = dict(
            (alg, hash_algs[alg]()) for alg in hash_algs or {})

        # Use num-retries from constructor if one was provided; else check
        # for a value specified in the boto config file; else default to 6.
        if self.num_retries is None:
            self.num_retries = config.getint('Boto', 'num_retries', 6)
        self.progress_less_iterations = 0

        while True:  # Retry as long as we're making progress.
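            # Each pass through this loop makes one complete resumable upload
            # attempt. Retryable failures fall through to
            # track_progress_less_iterations(), which sleeps for
            # random.random() * 2**n seconds (n = consecutive progress-less
            # attempts) and then re-enters the loop.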
            server_had_bytes_before_attempt = self.server_has_bytes
            self.digesters_before_attempt = dict(
                (alg, self.digesters[alg].copy())
                for alg in self.digesters)
            try:
                # Save generation and metageneration in class state so caller
                # can find these values, for use in preconditions of future
                # operations on the uploaded object.
                (etag, self.generation, self.metageneration) = (
                    self._attempt_resumable_upload(key, fp, file_length,
                                                   headers, cb, num_cb))

                # Get the final digests for the uploaded content.
                for alg in self.digesters:
                    key.local_hashes[alg] = self.digesters[alg].digest()

                # Upload succeeded, so remove the tracker file (if have one).
                self._remove_tracker_file()
                self._check_final_md5(key, etag)
                key.generation = self.generation
                if debug >= 1:
                    print 'Resumable upload complete.'
                return
            except self.RETRYABLE_EXCEPTIONS, e:
                if debug >= 1:
                    print('Caught exception (%s)' % e.__repr__())
                if isinstance(e, IOError) and e.errno == errno.EPIPE:
                    # Broken pipe error causes httplib to immediately
                    # close the socket (http://bugs.python.org/issue5542),
                    # so we need to close the connection before we resume
                    # the upload (which will cause a new connection to be
                    # opened the next time an HTTP request is sent).
                    key.bucket.connection.connection.close()
            except ResumableUploadException, e:
                self.handle_resumable_upload_exception(e, debug)

            self.track_progress_less_iterations(server_had_bytes_before_attempt,
                                                True, debug)
boto-2.20.1/boto/gs/user.py000077500000000000000000000036231225267101000154460ustar00rootroot00000000000000# Copyright 2010 Google Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
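# A minimal usage sketch for the User class defined below (illustrative
# values; in practice a User is built for you while ACL XML is parsed
# through boto.handler.XmlHandler):
#
#   import xml.sax
#   from boto.gs.user import User
#   from boto.handler import XmlHandler
#
#   owner = User()
#   xml.sax.parseString('<Owner><ID>123</ID><Name>jane</Name></Owner>',
#                       XmlHandler(owner, None))
#   owner.to_xml()  # -> '<Owner><ID>123</ID><Name>jane</Name></Owner>'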
class User:
    def __init__(self, parent=None, id='', name=''):
        if parent:
            parent.owner = self
        self.type = None
        self.id = id
        self.name = name

    def __repr__(self):
        return self.id

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Name':
            self.name = value
        elif name == 'ID':
            self.id = value
        else:
            setattr(self, name, value)

    def to_xml(self, element_name='Owner'):
        if self.type:
            s = '<%s type="%s">' % (element_name, self.type)
        else:
            s = '<%s>' % element_name
        s += '<ID>%s</ID>' % self.id
        if self.name:
            s += '<Name>%s</Name>' % self.name
        s += '</%s>' % element_name
        return s
boto-2.20.1/boto/handler.py000066400000000000000000000045011225267101000154650ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

import StringIO
import xml.sax


class XmlHandler(xml.sax.ContentHandler):

    def __init__(self, root_node, connection):
        self.connection = connection
        self.nodes = [('root', root_node)]
        self.current_text = ''

    def startElement(self, name, attrs):
        self.current_text = ''
        new_node = self.nodes[-1][1].startElement(name, attrs, self.connection)
        if new_node != None:
            self.nodes.append((name, new_node))

    def endElement(self, name):
        self.nodes[-1][1].endElement(name, self.current_text, self.connection)
        if self.nodes[-1][0] == name:
            if hasattr(self.nodes[-1][1], 'endNode'):
                self.nodes[-1][1].endNode(self.connection)
            self.nodes.pop()
        self.current_text = ''

    def characters(self, content):
        self.current_text += content


class XmlHandlerWrapper(object):
    def __init__(self, root_node, connection):
        self.handler = XmlHandler(root_node, connection)
        self.parser = xml.sax.make_parser()
        self.parser.setContentHandler(self.handler)
        self.parser.setFeature(xml.sax.handler.feature_external_ges, 0)

    def parseString(self, content):
        return self.parser.parse(StringIO.StringIO(content))
boto-2.20.1/boto/https_connection.py000066400000000000000000000106201225267101000174300ustar00rootroot00000000000000# Copyright 2007,2011 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This file is derived from
# http://googleappengine.googlecode.com/svn-history/r136/trunk/python/google/appengine/tools/https_wrapper.py

"""Extensions to allow HTTPS requests with SSL certificate validation."""

import httplib
import re
import socket
import ssl

import boto


class InvalidCertificateException(httplib.HTTPException):
    """Raised when a certificate is provided with an invalid hostname."""

    def __init__(self, host, cert, reason):
        """Constructor.

        Args:
          host: The hostname the connection was made to.
          cert: The SSL certificate (as a dictionary) the host returned.
          reason: A human-readable description of why the certificate is
              considered invalid.
        """
        httplib.HTTPException.__init__(self)
        self.host = host
        self.cert = cert
        self.reason = reason

    def __str__(self):
        return ('Host %s returned an invalid certificate (%s): %s' %
                (self.host, self.reason, self.cert))


def GetValidHostsForCert(cert):
    """Returns a list of valid host globs for an SSL certificate.

    Args:
      cert: A dictionary representing an SSL certificate.
    Returns:
      list: A list of valid host globs.
    """
    if 'subjectAltName' in cert:
        return [x[1] for x in cert['subjectAltName'] if x[0].lower() == 'dns']
    else:
        return [x[0][1] for x in cert['subject']
                if x[0][0].lower() == 'commonname']


def ValidateCertificateHostname(cert, hostname):
    """Validates that a given hostname is valid for an SSL certificate.

    Args:
      cert: A dictionary representing an SSL certificate.
      hostname: The hostname to test.
    Returns:
      bool: Whether or not the hostname is valid for this certificate.
    """
    hosts = GetValidHostsForCert(cert)
    boto.log.debug(
        "validating server certificate: hostname=%s, certificate hosts=%s",
        hostname, hosts)
    for host in hosts:
        host_re = host.replace('.', '\.').replace('*', '[^.]*')
        if re.search('^%s$' % (host_re,), hostname, re.I):
            return True
    return False


class CertValidatingHTTPSConnection(httplib.HTTPConnection):
    """An HTTPConnection that connects over SSL and validates certificates."""

    default_port = httplib.HTTPS_PORT

    def __init__(self, host, port=default_port, key_file=None, cert_file=None,
                 ca_certs=None, strict=None, **kwargs):
        """Constructor.

        Args:
          host: The hostname. Can be in 'host:port' form.
          port: The port. Defaults to 443.
          key_file: A file containing the client's private key
          cert_file: A file containing the client's certificates
          ca_certs: A file containing a set of concatenated certificate
              authority certs for validating the server against.
          strict: When true, causes BadStatusLine to be raised if the status
              line can't be parsed as a valid HTTP/1.0 or 1.1 status line.
        """
        httplib.HTTPConnection.__init__(self, host, port, strict, **kwargs)
        self.key_file = key_file
        self.cert_file = cert_file
        self.ca_certs = ca_certs

    def connect(self):
        "Connect to a host on a given (SSL) port."
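        # Validation happens in three steps: open a plain TCP socket, wrap it
        # with ssl.wrap_socket() using CERT_REQUIRED so the server must
        # present a certificate chaining to one of ca_certs, then check the
        # hostname against that certificate ourselves (Python 2's ssl module
        # verifies the chain but not the hostname).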
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if hasattr(self, "timeout") and self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
            sock.settimeout(self.timeout)
        sock.connect((self.host, self.port))
        boto.log.debug("wrapping ssl socket; CA certificate file=%s",
                       self.ca_certs)
        self.sock = ssl.wrap_socket(sock, keyfile=self.key_file,
                                    certfile=self.cert_file,
                                    cert_reqs=ssl.CERT_REQUIRED,
                                    ca_certs=self.ca_certs)
        cert = self.sock.getpeercert()
        # Strip any port suffix before comparing against the certificate.
        # (maxsplit must be 1 here; a maxsplit of 0 performs no split at all
        # and would leave the port attached to the hostname.)
        hostname = self.host.split(':', 1)[0]
        if not ValidateCertificateHostname(cert, hostname):
            raise InvalidCertificateException(hostname, cert,
                                              'remote hostname "%s" does not '
                                              'match certificate' % hostname)
boto-2.20.1/boto/iam/000077500000000000000000000000001225267101000142445ustar00rootroot00000000000000boto-2.20.1/boto/iam/__init__.py000066400000000000000000000056451225267101000163660ustar00rootroot00000000000000# Copyright (c) 2010-2011 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

# this is here for backward compatibility
# originally, the IAMConnection class was defined here
from connection import IAMConnection
from boto.regioninfo import RegionInfo


class IAMRegionInfo(RegionInfo):

    def connect(self, **kw_params):
        """
        Connect to this Region's endpoint.

        Returns a connection object pointing to the endpoint associated
        with this region. You may pass any of the arguments accepted by
        the connection class's constructor as keyword arguments and they
        will be passed along to the connection object.

        :rtype: Connection object
        :return: The connection to this region's endpoint
        """
        if self.connection_cls:
            return self.connection_cls(host=self.endpoint, **kw_params)


def regions():
    """
    Get all available regions for the IAM service.

    :rtype: list
    :return: A list of :class:`boto.regioninfo.RegionInfo` instances
    """
    return [IAMRegionInfo(name='universal',
                          endpoint='iam.amazonaws.com',
                          connection_cls=IAMConnection),
            IAMRegionInfo(name='us-gov-west-1',
                          endpoint='iam.us-gov.amazonaws.com',
                          connection_cls=IAMConnection)
            ]


def connect_to_region(region_name, **kw_params):
    """
    Given a valid region name, return a
    :class:`boto.iam.connection.IAMConnection`.

    :type region_name: str
    :param region_name: The name of the region to connect to.
:rtype: :class:`boto.iam.connection.IAMConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/iam/connection.py000066400000000000000000001506341225267101000167660ustar00rootroot00000000000000# Copyright (c) 2010-2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010-2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto import boto.jsonresponse from boto.compat import json from boto.resultset import ResultSet from boto.iam.summarymap import SummaryMap from boto.connection import AWSQueryConnection ASSUME_ROLE_POLICY_DOCUMENT = json.dumps({ 'Statement': [{'Principal': {'Service': ['ec2.amazonaws.com']}, 'Effect': 'Allow', 'Action': ['sts:AssumeRole']}]}) class IAMConnection(AWSQueryConnection): APIVersion = '2010-05-08' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='iam.amazonaws.com', debug=0, https_connection_factory=None, path='/', security_token=None, validate_certs=True): AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, host, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) def _required_auth_capability(self): #return ['iam'] return ['hmac-v4'] def get_response(self, action, params, path='/', parent=None, verb='POST', list_marker='Set'): """ Utility method to handle calls to IAM and parsing of responses. """ if not parent: parent = self response = self.make_request(action, params, path, verb) body = response.read() boto.log.debug(body) if response.status == 200: if body: e = boto.jsonresponse.Element(list_marker=list_marker, pythonize_name=True) h = boto.jsonresponse.XmlHandler(e, parent) h.parse(body) return e else: # Support empty responses, e.g. deleting a SAML provider # according to the official documentation. return {} else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) # # Group methods # def get_all_groups(self, path_prefix='/', marker=None, max_items=None): """ List the groups that have the specified path prefix. :type path_prefix: string :param path_prefix: If provided, only groups whose paths match the provided prefix will be returned. 
        :type marker: string
        :param marker: Use this only when paginating results and only
            in follow-up request after you've received a response
            where the results are truncated. Set this to the value of
            the Marker element in the response you just received.

        :type max_items: int
        :param max_items: Use this only when paginating results to indicate
            the maximum number of groups you want in the response.
        """
        params = {}
        if path_prefix:
            params['PathPrefix'] = path_prefix
        if marker:
            params['Marker'] = marker
        if max_items:
            params['MaxItems'] = max_items
        return self.get_response('ListGroups', params,
                                 list_marker='Groups')

    def get_group(self, group_name, marker=None, max_items=None):
        """
        Return a list of users that are in the specified group.

        :type group_name: string
        :param group_name: The name of the group whose information should
            be returned.

        :type marker: string
        :param marker: Use this only when paginating results and only
            in follow-up request after you've received a response
            where the results are truncated. Set this to the value of
            the Marker element in the response you just received.

        :type max_items: int
        :param max_items: Use this only when paginating results to indicate
            the maximum number of groups you want in the response.
        """
        params = {'GroupName': group_name}
        if marker:
            params['Marker'] = marker
        if max_items:
            params['MaxItems'] = max_items
        return self.get_response('GetGroup', params, list_marker='Users')

    def create_group(self, group_name, path='/'):
        """
        Create a group.

        :type group_name: string
        :param group_name: The name of the new group

        :type path: string
        :param path: The path to the group (Optional). Defaults to /.
        """
        params = {'GroupName': group_name,
                  'Path': path}
        return self.get_response('CreateGroup', params)

    def delete_group(self, group_name):
        """
        Delete a group. The group must not contain any Users or
        have any attached policies.

        :type group_name: string
        :param group_name: The name of the group to delete.
        """
        params = {'GroupName': group_name}
        return self.get_response('DeleteGroup', params)

    def update_group(self, group_name, new_group_name=None, new_path=None):
        """
        Updates name and/or path of the specified group.

        :type group_name: string
        :param group_name: The name of the group

        :type new_group_name: string
        :param new_group_name: If provided, the name of the group will be
            changed to this name.

        :type new_path: string
        :param new_path: If provided, the path of the group will be
            changed to this path.
        """
        params = {'GroupName': group_name}
        if new_group_name:
            params['NewGroupName'] = new_group_name
        if new_path:
            params['NewPath'] = new_path
        return self.get_response('UpdateGroup', params)

    def add_user_to_group(self, group_name, user_name):
        """
        Add a user to a group.

        :type group_name: string
        :param group_name: The name of the group

        :type user_name: string
        :param user_name: The user to be added to the group.
        """
        params = {'GroupName': group_name,
                  'UserName': user_name}
        return self.get_response('AddUserToGroup', params)

    def remove_user_from_group(self, group_name, user_name):
        """
        Remove a user from a group.

        :type group_name: string
        :param group_name: The name of the group

        :type user_name: string
        :param user_name: The user to remove from the group.
        """
        params = {'GroupName': group_name,
                  'UserName': user_name}
        return self.get_response('RemoveUserFromGroup', params)

    def put_group_policy(self, group_name, policy_name, policy_json):
        """
        Adds or updates the specified policy document for the specified group.

        :type group_name: string
        :param group_name: The name of the group the policy is associated
            with.
        :type policy_name: string
        :param policy_name: The name of the policy document.

        :type policy_json: string
        :param policy_json: The policy document.
        """
        params = {'GroupName': group_name,
                  'PolicyName': policy_name,
                  'PolicyDocument': policy_json}
        return self.get_response('PutGroupPolicy', params, verb='POST')

    def get_all_group_policies(self, group_name, marker=None, max_items=None):
        """
        List the names of the policies associated with the specified group.

        :type group_name: string
        :param group_name: The name of the group the policy is associated
            with.

        :type marker: string
        :param marker: Use this only when paginating results and only
            in follow-up request after you've received a response
            where the results are truncated. Set this to the value of
            the Marker element in the response you just received.

        :type max_items: int
        :param max_items: Use this only when paginating results to indicate
            the maximum number of groups you want in the response.
        """
        params = {'GroupName': group_name}
        if marker:
            params['Marker'] = marker
        if max_items:
            params['MaxItems'] = max_items
        return self.get_response('ListGroupPolicies', params,
                                 list_marker='PolicyNames')

    def get_group_policy(self, group_name, policy_name):
        """
        Retrieves the specified policy document for the specified group.

        :type group_name: string
        :param group_name: The name of the group the policy is associated
            with.

        :type policy_name: string
        :param policy_name: The name of the policy document to get.
        """
        params = {'GroupName': group_name,
                  'PolicyName': policy_name}
        return self.get_response('GetGroupPolicy', params, verb='POST')

    def delete_group_policy(self, group_name, policy_name):
        """
        Deletes the specified policy document for the specified group.

        :type group_name: string
        :param group_name: The name of the group the policy is associated
            with.

        :type policy_name: string
        :param policy_name: The name of the policy document to delete.
        """
        params = {'GroupName': group_name,
                  'PolicyName': policy_name}
        return self.get_response('DeleteGroupPolicy', params, verb='POST')

    def get_all_users(self, path_prefix='/', marker=None, max_items=None):
        """
        List the users that have the specified path prefix.

        :type path_prefix: string
        :param path_prefix: If provided, only users whose paths match the
            provided prefix will be returned.

        :type marker: string
        :param marker: Use this only when paginating results and only
            in follow-up request after you've received a response
            where the results are truncated. Set this to the value of
            the Marker element in the response you just received.

        :type max_items: int
        :param max_items: Use this only when paginating results to indicate
            the maximum number of groups you want in the response.
        """
        params = {'PathPrefix': path_prefix}
        if marker:
            params['Marker'] = marker
        if max_items:
            params['MaxItems'] = max_items
        return self.get_response('ListUsers', params, list_marker='Users')

    #
    # User methods
    #

    def create_user(self, user_name, path='/'):
        """
        Create a user.

        :type user_name: string
        :param user_name: The name of the new user

        :type path: string
        :param path: The path in which the user will be created.
            Defaults to /.
        """
        params = {'UserName': user_name,
                  'Path': path}
        return self.get_response('CreateUser', params)

    def delete_user(self, user_name):
        """
        Delete a user including the user's path, GUID and ARN.

        :type user_name: string
        :param user_name: The name of the user to delete.
""" params = {'UserName': user_name} return self.get_response('DeleteUser', params) def get_user(self, user_name=None): """ Retrieve information about the specified user. If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request. :type user_name: string :param user_name: The name of the user to retrieve. If not specified, defaults to user making request. """ params = {} if user_name: params['UserName'] = user_name return self.get_response('GetUser', params) def update_user(self, user_name, new_user_name=None, new_path=None): """ Updates name and/or path of the specified user. :type user_name: string :param user_name: The name of the user :type new_user_name: string :param new_user_name: If provided, the username of the user will be changed to this username. :type new_path: string :param new_path: If provided, the path of the user will be changed to this path. """ params = {'UserName': user_name} if new_user_name: params['NewUserName'] = new_user_name if new_path: params['NewPath'] = new_path return self.get_response('UpdateUser', params) def get_all_user_policies(self, user_name, marker=None, max_items=None): """ List the names of the policies associated with the specified user. :type user_name: string :param user_name: The name of the user the policy is associated with. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ params = {'UserName': user_name} if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items return self.get_response('ListUserPolicies', params, list_marker='PolicyNames') def put_user_policy(self, user_name, policy_name, policy_json): """ Adds or updates the specified policy document for the specified user. :type user_name: string :param user_name: The name of the user the policy is associated with. :type policy_name: string :param policy_name: The policy document to get. :type policy_json: string :param policy_json: The policy document. """ params = {'UserName': user_name, 'PolicyName': policy_name, 'PolicyDocument': policy_json} return self.get_response('PutUserPolicy', params, verb='POST') def get_user_policy(self, user_name, policy_name): """ Retrieves the specified policy document for the specified user. :type user_name: string :param user_name: The name of the user the policy is associated with. :type policy_name: string :param policy_name: The policy document to get. """ params = {'UserName': user_name, 'PolicyName': policy_name} return self.get_response('GetUserPolicy', params, verb='POST') def delete_user_policy(self, user_name, policy_name): """ Deletes the specified policy document for the specified user. :type user_name: string :param user_name: The name of the user the policy is associated with. :type policy_name: string :param policy_name: The policy document to delete. """ params = {'UserName': user_name, 'PolicyName': policy_name} return self.get_response('DeleteUserPolicy', params, verb='POST') def get_groups_for_user(self, user_name, marker=None, max_items=None): """ List the groups that a specified user belongs to. :type user_name: string :param user_name: The name of the user to list groups for. 
:type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ params = {'UserName': user_name} if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items return self.get_response('ListGroupsForUser', params, list_marker='Groups') # # Access Keys # def get_all_access_keys(self, user_name, marker=None, max_items=None): """ Get all access keys associated with an account. :type user_name: string :param user_name: The username of the user :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ params = {'UserName': user_name} if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items return self.get_response('ListAccessKeys', params, list_marker='AccessKeyMetadata') def create_access_key(self, user_name=None): """ Create a new AWS Secret Access Key and corresponding AWS Access Key ID for the specified user. The default status for new keys is Active If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request. :type user_name: string :param user_name: The username of the user """ params = {'UserName': user_name} return self.get_response('CreateAccessKey', params) def update_access_key(self, access_key_id, status, user_name=None): """ Changes the status of the specified access key from Active to Inactive or vice versa. This action can be used to disable a user's key as part of a key rotation workflow. If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request. :type access_key_id: string :param access_key_id: The ID of the access key. :type status: string :param status: Either Active or Inactive. :type user_name: string :param user_name: The username of user (optional). """ params = {'AccessKeyId': access_key_id, 'Status': status} if user_name: params['UserName'] = user_name return self.get_response('UpdateAccessKey', params) def delete_access_key(self, access_key_id, user_name=None): """ Delete an access key associated with a user. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type access_key_id: string :param access_key_id: The ID of the access key to be deleted. :type user_name: string :param user_name: The username of the user """ params = {'AccessKeyId': access_key_id} if user_name: params['UserName'] = user_name return self.get_response('DeleteAccessKey', params) # # Signing Certificates # def get_all_signing_certs(self, marker=None, max_items=None, user_name=None): """ Get all signing certificates associated with an account. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. 
:type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. :type user_name: string :param user_name: The username of the user """ params = {} if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items if user_name: params['UserName'] = user_name return self.get_response('ListSigningCertificates', params, list_marker='Certificates') def update_signing_cert(self, cert_id, status, user_name=None): """ Change the status of the specified signing certificate from Active to Inactive or vice versa. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type cert_id: string :param cert_id: The ID of the signing certificate :type status: string :param status: Either Active or Inactive. :type user_name: string :param user_name: The username of the user """ params = {'CertificateId': cert_id, 'Status': status} if user_name: params['UserName'] = user_name return self.get_response('UpdateSigningCertificate', params) def upload_signing_cert(self, cert_body, user_name=None): """ Uploads an X.509 signing certificate and associates it with the specified user. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type cert_body: string :param cert_body: The body of the signing certificate. :type user_name: string :param user_name: The username of the user """ params = {'CertificateBody': cert_body} if user_name: params['UserName'] = user_name return self.get_response('UploadSigningCertificate', params, verb='POST') def delete_signing_cert(self, cert_id, user_name=None): """ Delete a signing certificate associated with a user. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type user_name: string :param user_name: The username of the user :type cert_id: string :param cert_id: The ID of the certificate. """ params = {'CertificateId': cert_id} if user_name: params['UserName'] = user_name return self.get_response('DeleteSigningCertificate', params) # # Server Certificates # def list_server_certs(self, path_prefix='/', marker=None, max_items=None): """ Lists the server certificates that have the specified path prefix. If none exist, the action returns an empty list. :type path_prefix: string :param path_prefix: If provided, only certificates whose paths match the provided prefix will be returned. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ params = {} if path_prefix: params['PathPrefix'] = path_prefix if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items return self.get_response('ListServerCertificates', params, list_marker='ServerCertificateMetadataList') # Preserves backwards compatibility. # TODO: Look into deprecating this eventually? 
get_all_server_certs = list_server_certs def update_server_cert(self, cert_name, new_cert_name=None, new_path=None): """ Updates the name and/or the path of the specified server certificate. :type cert_name: string :param cert_name: The name of the server certificate that you want to update. :type new_cert_name: string :param new_cert_name: The new name for the server certificate. Include this only if you are updating the server certificate's name. :type new_path: string :param new_path: If provided, the path of the certificate will be changed to this path. """ params = {'ServerCertificateName': cert_name} if new_cert_name: params['NewServerCertificateName'] = new_cert_name if new_path: params['NewPath'] = new_path return self.get_response('UpdateServerCertificate', params) def upload_server_cert(self, cert_name, cert_body, private_key, cert_chain=None, path=None): """ Uploads a server certificate entity for the AWS Account. The server certificate entity includes a public key certificate, a private key, and an optional certificate chain, which should all be PEM-encoded. :type cert_name: string :param cert_name: The name for the server certificate. Do not include the path in this value. :type cert_body: string :param cert_body: The contents of the public key certificate in PEM-encoded format. :type private_key: string :param private_key: The contents of the private key in PEM-encoded format. :type cert_chain: string :param cert_chain: The contents of the certificate chain. This is typically a concatenation of the PEM-encoded public key certificates of the chain. :type path: string :param path: The path for the server certificate. """ params = {'ServerCertificateName': cert_name, 'CertificateBody': cert_body, 'PrivateKey': private_key} if cert_chain: params['CertificateChain'] = cert_chain if path: params['Path'] = path return self.get_response('UploadServerCertificate', params, verb='POST') def get_server_certificate(self, cert_name): """ Retrieves information about the specified server certificate. :type cert_name: string :param cert_name: The name of the server certificate you want to retrieve information about. """ params = {'ServerCertificateName': cert_name} return self.get_response('GetServerCertificate', params) def delete_server_cert(self, cert_name): """ Delete the specified server certificate. :type cert_name: string :param cert_name: The name of the server certificate you want to delete. """ params = {'ServerCertificateName': cert_name} return self.get_response('DeleteServerCertificate', params) # # MFA Devices # def get_all_mfa_devices(self, user_name, marker=None, max_items=None): """ Get all MFA devices associated with an account. :type user_name: string :param user_name: The username of the user :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ params = {'UserName': user_name} if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items return self.get_response('ListMFADevices', params, list_marker='MFADevices') def enable_mfa_device(self, user_name, serial_number, auth_code_1, auth_code_2): """ Enables the specified MFA device and associates it with the specified user. 
        :type user_name: string
        :param user_name: The username of the user

        :type serial_number: string
        :param serial_number: The serial number which uniquely identifies
            the MFA device.

        :type auth_code_1: string
        :param auth_code_1: An authentication code emitted by the device.

        :type auth_code_2: string
        :param auth_code_2: A subsequent authentication code emitted
            by the device.
        """
        params = {'UserName': user_name,
                  'SerialNumber': serial_number,
                  'AuthenticationCode1': auth_code_1,
                  'AuthenticationCode2': auth_code_2}
        return self.get_response('EnableMFADevice', params)

    def deactivate_mfa_device(self, user_name, serial_number):
        """
        Deactivates the specified MFA device and removes it from
        association with the user.

        :type user_name: string
        :param user_name: The username of the user

        :type serial_number: string
        :param serial_number: The serial number which uniquely identifies
            the MFA device.
        """
        params = {'UserName': user_name,
                  'SerialNumber': serial_number}
        return self.get_response('DeactivateMFADevice', params)

    def resync_mfa_device(self, user_name, serial_number,
                          auth_code_1, auth_code_2):
        """
        Synchronizes the specified MFA device with the AWS servers.

        :type user_name: string
        :param user_name: The username of the user

        :type serial_number: string
        :param serial_number: The serial number which uniquely identifies
            the MFA device.

        :type auth_code_1: string
        :param auth_code_1: An authentication code emitted by the device.

        :type auth_code_2: string
        :param auth_code_2: A subsequent authentication code emitted
            by the device.
        """
        params = {'UserName': user_name,
                  'SerialNumber': serial_number,
                  'AuthenticationCode1': auth_code_1,
                  'AuthenticationCode2': auth_code_2}
        return self.get_response('ResyncMFADevice', params)

    #
    # Login Profiles
    #

    def get_login_profiles(self, user_name):
        """
        Retrieves the login profile for the specified user.

        :type user_name: string
        :param user_name: The username of the user
        """
        params = {'UserName': user_name}
        return self.get_response('GetLoginProfile', params)

    def create_login_profile(self, user_name, password):
        """
        Creates a login profile for the specified user, giving the user the
        ability to access AWS services and the AWS Management Console.

        :type user_name: string
        :param user_name: The name of the user

        :type password: string
        :param password: The new password for the user
        """
        params = {'UserName': user_name,
                  'Password': password}
        return self.get_response('CreateLoginProfile', params)

    def delete_login_profile(self, user_name):
        """
        Deletes the login profile associated with the specified user.

        :type user_name: string
        :param user_name: The name of the user whose login profile should
            be deleted.
        """
        params = {'UserName': user_name}
        return self.get_response('DeleteLoginProfile', params)

    def update_login_profile(self, user_name, password):
        """
        Resets the password associated with the user's login profile.

        :type user_name: string
        :param user_name: The name of the user

        :type password: string
        :param password: The new password for the user
        """
        params = {'UserName': user_name,
                  'Password': password}
        return self.get_response('UpdateLoginProfile', params)

    def create_account_alias(self, alias):
        """
        Creates a new alias for the AWS account.

        For more information on account id aliases, please see
        http://goo.gl/ToB7G

        :type alias: string
        :param alias: The alias to attach to the account.
        """
        params = {'AccountAlias': alias}
        return self.get_response('CreateAccountAlias', params)

    def delete_account_alias(self, alias):
        """
        Deletes an alias for the AWS account.
        For more information on account id aliases, please see
        http://goo.gl/ToB7G

        :type alias: string
        :param alias: The alias to remove from the account.
        """
        params = {'AccountAlias': alias}
        return self.get_response('DeleteAccountAlias', params)

    def get_account_alias(self):
        """
        Get the alias for the current account.

        This is referred to in the docs as list_account_aliases,
        but it seems you can only have one account alias currently.

        For more information on account id aliases, please see
        http://goo.gl/ToB7G
        """
        return self.get_response('ListAccountAliases', {},
                                 list_marker='AccountAliases')

    def get_signin_url(self, service='ec2'):
        """
        Get the URL where IAM users can use their login profile to sign in
        to this account's console.

        :type service: string
        :param service: Default service to go to in the console.
        """
        alias = self.get_account_alias()

        if not alias:
            raise Exception('No alias associated with this account. '
                            'Please use iam.create_account_alias() first.')

        if self.host == 'iam.us-gov.amazonaws.com':
            return "https://%s.signin.amazonaws-us-gov.com/console/%s" % (
                alias, service)
        else:
            return "https://%s.signin.aws.amazon.com/console/%s" % (
                alias, service)

    def get_account_summary(self):
        """
        Get account-level information about entity usage and IAM quotas
        for the current account, returned as a
        :class:`boto.iam.summarymap.SummaryMap`.
        """
        return self.get_object('GetAccountSummary', {}, SummaryMap)

    #
    # IAM Roles
    #

    def add_role_to_instance_profile(self, instance_profile_name, role_name):
        """
        Adds the specified role to the specified instance profile.

        :type instance_profile_name: string
        :param instance_profile_name: Name of the instance profile to update.

        :type role_name: string
        :param role_name: Name of the role to add.
        """
        return self.get_response('AddRoleToInstanceProfile',
                                 {'InstanceProfileName': instance_profile_name,
                                  'RoleName': role_name})

    def create_instance_profile(self, instance_profile_name, path=None):
        """
        Creates a new instance profile.

        :type instance_profile_name: string
        :param instance_profile_name: Name of the instance profile to create.

        :type path: string
        :param path: The path to the instance profile.
        """
        params = {'InstanceProfileName': instance_profile_name}
        if path is not None:
            params['Path'] = path
        return self.get_response('CreateInstanceProfile', params)

    def create_role(self, role_name, assume_role_policy_document=None,
                    path=None):
        """
        Creates a new role for your AWS account.

        The default policy grants permission to an EC2 instance to assume
        the role. The policy is URL-encoded according to RFC 3986.
        Currently, only EC2 instances can assume roles.

        :type role_name: string
        :param role_name: Name of the role to create.

        :type assume_role_policy_document: string
        :param assume_role_policy_document: The policy that grants an entity
            permission to assume the role.

        :type path: string
        :param path: The path to the role.
        """
        params = {'RoleName': role_name}
        if assume_role_policy_document is None:
            # This is the only valid assume_role_policy_document currently,
            # so this is used as a default value if no
            # assume_role_policy_document is provided.
            params['AssumeRolePolicyDocument'] = ASSUME_ROLE_POLICY_DOCUMENT
        else:
            params['AssumeRolePolicyDocument'] = assume_role_policy_document
        if path is not None:
            params['Path'] = path
        return self.get_response('CreateRole', params)

    def delete_instance_profile(self, instance_profile_name):
        """
        Deletes the specified instance profile.
The instance profile must not have an associated role. :type instance_profile_name: string :param instance_profile_name: Name of the instance profile to delete. """ return self.get_response( 'DeleteInstanceProfile', {'InstanceProfileName': instance_profile_name}) def delete_role(self, role_name): """ Deletes the specified role. The role must not have any policies attached. :type role_name: string :param role_name: Name of the role to delete. """ return self.get_response('DeleteRole', {'RoleName': role_name}) def delete_role_policy(self, role_name, policy_name): """ Deletes the specified policy associated with the specified role. :type role_name: string :param role_name: Name of the role associated with the policy. :type policy_name: string :param policy_name: Name of the policy to delete. """ return self.get_response( 'DeleteRolePolicy', {'RoleName': role_name, 'PolicyName': policy_name}) def get_instance_profile(self, instance_profile_name): """ Retrieves information about the specified instance profile, including the instance profile's path, GUID, ARN, and role. :type instance_profile_name: string :param instance_profile_name: Name of the instance profile to get information about. """ return self.get_response('GetInstanceProfile', {'InstanceProfileName': instance_profile_name}) def get_role(self, role_name): """ Retrieves information about the specified role, including the role's path, GUID, ARN, and the policy granting permission to EC2 to assume the role. :type role_name: string :param role_name: Name of the role associated with the policy. """ return self.get_response('GetRole', {'RoleName': role_name}) def get_role_policy(self, role_name, policy_name): """ Retrieves the specified policy document for the specified role. :type role_name: string :param role_name: Name of the role associated with the policy. :type policy_name: string :param policy_name: Name of the policy to get. """ return self.get_response('GetRolePolicy', {'RoleName': role_name, 'PolicyName': policy_name}) def list_instance_profiles(self, path_prefix=None, marker=None, max_items=None): """ Lists the instance profiles that have the specified path prefix. If there are none, the action returns an empty list. :type path_prefix: string :param path_prefix: The path prefix for filtering the results. For example: /application_abc/component_xyz/, which would get all instance profiles whose path starts with /application_abc/component_xyz/. :type marker: string :param marker: Use this parameter only when paginating results, and only in a subsequent request after you've received a response where the results are truncated. Set it to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this parameter only when paginating results to indicate the maximum number of user names you want in the response. """ params = {} if path_prefix is not None: params['PathPrefix'] = path_prefix if marker is not None: params['Marker'] = marker if max_items is not None: params['MaxItems'] = max_items return self.get_response('ListInstanceProfiles', params, list_marker='InstanceProfiles') def list_instance_profiles_for_role(self, role_name, marker=None, max_items=None): """ Lists the instance profiles that have the specified associated role. If there are none, the action returns an empty list. :type role_name: string :param role_name: The name of the role to list instance profiles for. 
:type marker: string :param marker: Use this parameter only when paginating results, and only in a subsequent request after you've received a response where the results are truncated. Set it to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this parameter only when paginating results to indicate the maximum number of user names you want in the response. """ params = {'RoleName': role_name} if marker is not None: params['Marker'] = marker if max_items is not None: params['MaxItems'] = max_items return self.get_response('ListInstanceProfilesForRole', params, list_marker='InstanceProfiles') def list_role_policies(self, role_name, marker=None, max_items=None): """ Lists the names of the policies associated with the specified role. If there are none, the action returns an empty list. :type role_name: string :param role_name: The name of the role to list policies for. :type marker: string :param marker: Use this parameter only when paginating results, and only in a subsequent request after you've received a response where the results are truncated. Set it to the value of the marker element in the response you just received. :type max_items: int :param max_items: Use this parameter only when paginating results to indicate the maximum number of user names you want in the response. """ params = {'RoleName': role_name} if marker is not None: params['Marker'] = marker if max_items is not None: params['MaxItems'] = max_items return self.get_response('ListRolePolicies', params, list_marker='PolicyNames') def list_roles(self, path_prefix=None, marker=None, max_items=None): """ Lists the roles that have the specified path prefix. If there are none, the action returns an empty list. :type path_prefix: string :param path_prefix: The path prefix for filtering the results. :type marker: string :param marker: Use this parameter only when paginating results, and only in a subsequent request after you've received a response where the results are truncated. Set it to the value of the marker element in the response you just received. :type max_items: int :param max_items: Use this parameter only when paginating results to indicate the maximum number of user names you want in the response. """ params = {} if path_prefix is not None: params['PathPrefix'] = path_prefix if marker is not None: params['Marker'] = marker if max_items is not None: params['MaxItems'] = max_items return self.get_response('ListRoles', params, list_marker='Roles') def put_role_policy(self, role_name, policy_name, policy_document): """ Adds (or updates) a policy document associated with the specified role. :type role_name: string :param role_name: Name of the role to associate the policy with. :type policy_name: string :param policy_name: Name of the policy document. :type policy_document: string :param policy_document: The policy document. """ return self.get_response('PutRolePolicy', {'RoleName': role_name, 'PolicyName': policy_name, 'PolicyDocument': policy_document}) def remove_role_from_instance_profile(self, instance_profile_name, role_name): """ Removes the specified role from the specified instance profile. :type instance_profile_name: string :param instance_profile_name: Name of the instance profile to update. :type role_name: string :param role_name: Name of the role to remove. 
""" return self.get_response('RemoveRoleFromInstanceProfile', {'InstanceProfileName': instance_profile_name, 'RoleName': role_name}) def update_assume_role_policy(self, role_name, policy_document): """ Updates the policy that grants an entity permission to assume a role. Currently, only an Amazon EC2 instance can assume a role. :type role_name: string :param role_name: Name of the role to update. :type policy_document: string :param policy_document: The policy that grants an entity permission to assume the role. """ return self.get_response('UpdateAssumeRolePolicy', {'RoleName': role_name, 'PolicyDocument': policy_document}) def create_saml_provider(self, saml_metadata_document, name): """ Creates an IAM entity to describe an identity provider (IdP) that supports SAML 2.0. The SAML provider that you create with this operation can be used as a principal in a role's trust policy to establish a trust relationship between AWS and a SAML identity provider. You can create an IAM role that supports Web-based single sign-on (SSO) to the AWS Management Console or one that supports API access to AWS. When you create the SAML provider, you upload an a SAML metadata document that you get from your IdP and that includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization's IdP. This operation requires `Signature Version 4`_. For more information, see `Giving Console Access Using SAML`_ and `Creating Temporary Security Credentials for SAML Federation`_ in the Using Temporary Credentials guide. :type saml_metadata_document: string :param saml_metadata_document: An XML document generated by an identity provider (IdP) that supports SAML 2.0. The document includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization's IdP. For more information, see `Creating Temporary Security Credentials for SAML Federation`_ in the Using Temporary Security Credentials guide. :type name: string :param name: The name of the provider to create. """ params = { 'SAMLMetadataDocument': saml_metadata_document, 'Name': name, } return self.get_response('CreateSAMLProvider', params) def list_saml_providers(self): """ Lists the SAML providers in the account. This operation requires `Signature Version 4`_. """ return self.get_response('ListSAMLProviders', {}) def get_saml_provider(self, saml_provider_arn): """ Returns the SAML provider metadocument that was uploaded when the provider was created or updated. This operation requires `Signature Version 4`_. :type saml_provider_arn: string :param saml_provider_arn: The Amazon Resource Name (ARN) of the SAML provider to get information about. """ params = {'SAMLProviderArn': saml_provider_arn } return self.get_response('GetSAMLProvider', params) def update_saml_provider(self, saml_provider_arn, saml_metadata_document): """ Updates the metadata document for an existing SAML provider. This operation requires `Signature Version 4`_. :type saml_provider_arn: string :param saml_provider_arn: The Amazon Resource Name (ARN) of the SAML provider to update. 
:type saml_metadata_document: string :param saml_metadata_document: An XML document generated by an identity provider (IdP) that supports SAML 2.0. The document includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization's IdP. """ params = { 'SAMLMetadataDocument': saml_metadata_document, 'SAMLProviderArn': saml_provider_arn, } return self.get_response('UpdateSAMLProvider', params) def delete_saml_provider(self, saml_provider_arn): """ Deletes a SAML provider. Deleting the provider does not update any roles that reference the SAML provider as a principal in their trust policies. Any attempt to assume a role that references a SAML provider that has been deleted will fail. This operation requires `Signature Version 4`_. :type saml_provider_arn: string :param saml_provider_arn: The Amazon Resource Name (ARN) of the SAML provider to delete. """ params = {'SAMLProviderArn': saml_provider_arn } return self.get_response('DeleteSAMLProvider', params) boto-2.20.1/boto/iam/summarymap.py000066400000000000000000000031741225267101000170160ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class SummaryMap(dict): def __init__(self, parent=None): self.parent = parent dict.__init__(self) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'key': self._name = value elif name == 'value': try: self[self._name] = int(value) except ValueError: self[self._name] = value else: setattr(self, name, value) boto-2.20.1/boto/jsonresponse.py000066400000000000000000000135001225267101000165770ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import xml.sax import utils class XmlHandler(xml.sax.ContentHandler): def __init__(self, root_node, connection): self.connection = connection self.nodes = [('root', root_node)] self.current_text = '' def startElement(self, name, attrs): self.current_text = '' t = self.nodes[-1][1].startElement(name, attrs, self.connection) if t != None: if isinstance(t, tuple): self.nodes.append(t) else: self.nodes.append((name, t)) def endElement(self, name): self.nodes[-1][1].endElement(name, self.current_text, self.connection) if self.nodes[-1][0] == name: self.nodes.pop() self.current_text = '' def characters(self, content): self.current_text += content def parse(self, s): xml.sax.parseString(s, self) class Element(dict): def __init__(self, connection=None, element_name=None, stack=None, parent=None, list_marker=('Set',), item_marker=('member', 'item'), pythonize_name=False): dict.__init__(self) self.connection = connection self.element_name = element_name self.list_marker = utils.mklist(list_marker) self.item_marker = utils.mklist(item_marker) if stack is None: self.stack = [] else: self.stack = stack self.pythonize_name = pythonize_name self.parent = parent def __getattr__(self, key): if key in self: return self[key] for k in self: e = self[k] if isinstance(e, Element): try: return getattr(e, key) except AttributeError: pass raise AttributeError def get_name(self, name): if self.pythonize_name: name = utils.pythonize_name(name) return name def startElement(self, name, attrs, connection): self.stack.append(name) for lm in self.list_marker: if name.endswith(lm): l = ListElement(self.connection, name, self.list_marker, self.item_marker, self.pythonize_name) self[self.get_name(name)] = l return l if len(self.stack) > 0: element_name = self.stack[-1] e = Element(self.connection, element_name, self.stack, self, self.list_marker, self.item_marker, self.pythonize_name) self[self.get_name(element_name)] = e return (element_name, e) else: return None def endElement(self, name, value, connection): if len(self.stack) > 0: self.stack.pop() value = value.strip() if value: if isinstance(self.parent, Element): self.parent[self.get_name(name)] = value elif isinstance(self.parent, ListElement): self.parent.append(value) class ListElement(list): def __init__(self, connection=None, element_name=None, list_marker=['Set'], item_marker=('member', 'item'), pythonize_name=False): list.__init__(self) self.connection = connection self.element_name = element_name self.list_marker = list_marker 
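        # As in Element, list_marker holds the element-name suffixes (such as
        # 'Set') that mark a nested list, and item_marker holds the element
        # names (such as 'member' or 'item') that mark individual list entries.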
self.item_marker = item_marker self.pythonize_name = pythonize_name def get_name(self, name): if self.pythonize_name: name = utils.pythonize_name(name) return name def startElement(self, name, attrs, connection): for lm in self.list_marker: if name.endswith(lm): l = ListElement(self.connection, name, self.list_marker, self.item_marker, self.pythonize_name) setattr(self, self.get_name(name), l) return l if name in self.item_marker: e = Element(self.connection, name, parent=self, list_marker=self.list_marker, item_marker=self.item_marker, pythonize_name=self.pythonize_name) self.append(e) return e else: return None def endElement(self, name, value, connection): if name == self.element_name: if len(self) > 0: empty = [] for e in self: if isinstance(e, Element): if len(e) == 0: empty.append(e) for e in empty: self.remove(e) else: setattr(self, self.get_name(name), value) boto-2.20.1/boto/kinesis/000077500000000000000000000000001225267101000151435ustar00rootroot00000000000000boto-2.20.1/boto/kinesis/__init__.py000066400000000000000000000033301225267101000172530ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the Amazon Kinesis service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.kinesis.layer1 import KinesisConnection return [RegionInfo(name='us-east-1', endpoint='kinesis.us-east-1.amazonaws.com', connection_cls=KinesisConnection), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/kinesis/exceptions.py000066400000000000000000000031131225267101000176740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.exception import BotoServerError class ProvisionedThroughputExceededException(BotoServerError): pass class LimitExceededException(BotoServerError): pass class ExpiredIteratorException(BotoServerError): pass class ResourceInUseException(BotoServerError): pass class ResourceNotFoundException(BotoServerError): pass class InvalidArgumentException(BotoServerError): pass class SubscriptionRequiredException(BotoServerError): pass boto-2.20.1/boto/kinesis/layer1.py000066400000000000000000001002631225267101000167140ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # try: import json except ImportError: import simplejson as json import base64 import boto from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.exception import JSONResponseError from boto.kinesis import exceptions class KinesisConnection(AWSQueryConnection): """ Amazon Kinesis Service API Reference Amazon Kinesis is a managed service that scales elastically for real time processing of streaming big data. 
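
    A minimal usage sketch (assumes boto credentials are configured; the
    stream name and shard count are illustrative)::

        from boto.kinesis.layer1 import KinesisConnection

        conn = KinesisConnection()
        conn.create_stream('my-stream', 2)
        print conn.describe_stream('my-stream')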
""" APIVersion = "2013-12-02" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "kinesis.us-east-1.amazonaws.com" ServiceName = "Kinesis" TargetPrefix = "Kinesis_20131202" ResponseError = JSONResponseError _faults = { "ProvisionedThroughputExceededException": exceptions.ProvisionedThroughputExceededException, "LimitExceededException": exceptions.LimitExceededException, "ExpiredIteratorException": exceptions.ExpiredIteratorException, "ResourceInUseException": exceptions.ResourceInUseException, "ResourceNotFoundException": exceptions.ResourceNotFoundException, "InvalidArgumentException": exceptions.InvalidArgumentException, "SubscriptionRequiredException": exceptions.SubscriptionRequiredException } def __init__(self, **kwargs): region = kwargs.pop('region', None) if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) if 'host' not in kwargs: kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def create_stream(self, stream_name, shard_count): """ This operation adds a new Amazon Kinesis stream to your AWS account. A stream captures and transports data records that are continuously emitted from different data sources or producers . Scale-out within an Amazon Kinesis stream is explicitly supported by means of shards, which are uniquely identified groups of data records in an Amazon Kinesis stream. You specify and control the number of shards that a stream is composed of. Each shard can support up to 5 read transactions per second up to a maximum total of 2 MB of data read per second. Each shard can support up to 1000 write transactions per second up to a maximum total of 1 MB data written per second. You can add shards to a stream if the amount of data input increases and you can remove shards if the amount of data input decreases. The stream name identifies the stream. The name is scoped to the AWS account used by the application. It is also scoped by region. That is, two streams in two different accounts can have the same name, and two streams in the same account, but in two different regions, can have the same name. `CreateStream` is an asynchronous operation. Upon receiving a `CreateStream` request, Amazon Kinesis immediately returns and sets the stream status to CREATING. After the stream is created, Amazon Kinesis sets the stream status to ACTIVE. You should perform read and write operations only on an ACTIVE stream. You receive a `LimitExceededException` when making a `CreateStream` request if you try to do one of the following: + Have more than five streams in the CREATING state at any point in time. + Create more shards than are authorized for your account. **Note:** The default limit for an AWS account is two shards per stream. If you need to create a stream with more than two shards, contact AWS Support to increase the limit on your account. You can use the `DescribeStream` operation to check the stream status, which is returned in `StreamStatus`. `CreateStream` has a limit of 5 transactions per second per account. :type stream_name: string :param stream_name: A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by region. That is, two streams in two different AWS accounts can have the same name, and two streams in the same AWS account, but in two different regions, can have the same name. 
:type shard_count: integer :param shard_count: The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput. **Note:** The default limit for an AWS account is two shards per stream. If you need to create a stream with more than two shards, contact AWS Support to increase the limit on your account. """ params = { 'StreamName': stream_name, 'ShardCount': shard_count, } return self.make_request(action='CreateStream', body=json.dumps(params)) def delete_stream(self, stream_name): """ This operation deletes a stream and all of its shards and data. You must shut down any applications that are operating on the stream before you delete the stream. If an application attempts to operate on a deleted stream, it will receive the exception `ResourceNotFoundException`. If the stream is in the ACTIVE state, you can delete it. After a `DeleteStream` request, the specified stream is in the DELETING state until Amazon Kinesis completes the deletion. **Note:** Amazon Kinesis might continue to accept data read and write operations, such as PutRecord and GetRecords, on a stream in the DELETING state until the stream deletion is complete. When you delete a stream, any shards in that stream are also deleted. You can use the DescribeStream operation to check the state of the stream, which is returned in `StreamStatus`. `DeleteStream` has a limit of 5 transactions per second per account. :type stream_name: string :param stream_name: The name of the stream to delete. """ params = {'StreamName': stream_name, } return self.make_request(action='DeleteStream', body=json.dumps(params)) def describe_stream(self, stream_name, limit=None, exclusive_start_shard_id=None): """ This operation returns the following information about the stream: the current status of the stream, the stream Amazon Resource Name (ARN), and an array of shard objects that comprise the stream. For each shard object there is information about the hash key and sequence number ranges that the shard spans, and the IDs of any earlier shards that played a role in a MergeShards or SplitShard operation that created the shard. A sequence number is the identifier associated with every record ingested in the Amazon Kinesis stream. The sequence number is assigned by the Amazon Kinesis service when a record is put into the stream. You can limit the number of returned shards using the `Limit` parameter. The number of shards in a stream may be too large to return from a single call to `DescribeStream`. You can detect this by using the `HasMoreShards` flag in the returned output. `HasMoreShards` is set to `True` when there is more data available. If there are more shards available, you can request more shards by using the shard ID of the last shard returned by the `DescribeStream` request, in the `ExclusiveStartShardId` parameter in a subsequent request to `DescribeStream`. `DescribeStream` is a paginated operation. `DescribeStream` has a limit of 10 transactions per second per account. :type stream_name: string :param stream_name: The name of the stream to describe. :type limit: integer :param limit: The maximum number of shards to return. :type exclusive_start_shard_id: string :param exclusive_start_shard_id: The shard ID of the shard to start with for the stream description.
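
        A hypothetical pagination sketch (``conn`` is a KinesisConnection;
        the stream name is illustrative)::

            shards = []
            start_shard_id = None
            while True:
                result = conn.describe_stream(
                    'my-stream', exclusive_start_shard_id=start_shard_id)
                description = result['StreamDescription']
                shards.extend(description['Shards'])
                if not description['HasMoreShards']:
                    break
                start_shard_id = shards[-1]['ShardId']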
""" params = {'StreamName': stream_name, } if limit is not None: params['Limit'] = limit if exclusive_start_shard_id is not None: params['ExclusiveStartShardId'] = exclusive_start_shard_id return self.make_request(action='DescribeStream', body=json.dumps(params)) def get_records(self, shard_iterator, limit=None, b64_decode=True): """ This operation returns one or more data records from a shard. A `GetRecords` operation request can retrieve up to 10 MB of data. You specify a shard iterator for the shard that you want to read data from in the `ShardIterator` parameter. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in the shard. For more information about the shard iterator, see GetShardIterator. `GetRecords` may return a partial result if the response size limit is exceeded. You will get an error, but not a partial result if the shard's provisioned throughput is exceeded, the shard iterator has expired, or an internal processing failure has occurred. Clients can request a smaller amount of data by specifying a maximum number of returned records using the `Limit` parameter. The `Limit` parameter can be set to an integer value of up to 10,000. If you set the value to an integer greater than 10,000, you will receive `InvalidArgumentException`. A new shard iterator is returned by every `GetRecords` request in `NextShardIterator`, which you use in the `ShardIterator` parameter of the next `GetRecords` request. When you repeatedly read from an Amazon Kinesis stream use a GetShardIterator request to get the first shard iterator to use in your first `GetRecords` request and then use the shard iterator returned in `NextShardIterator` for subsequent reads. `GetRecords` can return `null` for the `NextShardIterator` to reflect that the shard has been closed and that the requested shard iterator would never have returned more data. If no items can be processed because of insufficient provisioned throughput on the shard involved in the request, `GetRecords` throws `ProvisionedThroughputExceededException`. :type shard_iterator: string :param shard_iterator: The position in the shard from which you want to start sequentially reading data records. :type limit: integer :param limit: The maximum number of records to return, which can be set to a value of up to 10,000. :type b64_decode: boolean :param b64_decode: Decode the Base64-encoded ``Data`` field of records. """ params = {'ShardIterator': shard_iterator, } if limit is not None: params['Limit'] = limit response = self.make_request(action='GetRecords', body=json.dumps(params)) # Base64 decode the data if b64_decode: for record in response.get('Records', []): record['Data'] = base64.b64decode(record['Data']) return response def get_shard_iterator(self, stream_name, shard_id, shard_iterator_type, starting_sequence_number=None): """ This operation returns a shard iterator in `ShardIterator`. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard. A sequence number is the identifier associated with every record ingested in the Amazon Kinesis stream. The sequence number is assigned by the Amazon Kinesis service when a record is put into the stream. You must specify the shard iterator type in the `GetShardIterator` request. 
For example, you can set the `ShardIteratorType` parameter to read exactly from the position denoted by a specific sequence number by using the AT_SEQUENCE_NUMBER shard iterator type, or right after the sequence number by using the AFTER_SEQUENCE_NUMBER shard iterator type, using sequence numbers returned by earlier PutRecord, GetRecords or DescribeStream requests. You can specify the shard iterator type TRIM_HORIZON in the request to cause `ShardIterator` to point to the last untrimmed record in the shard in the system, which is the oldest data record in the shard. Or you can point to just after the most recent record in the shard, by using the shard iterator type LATEST, so that you always read the most recent data in the shard. **Note:** Each shard iterator expires five minutes after it is returned to the requester. When you repeatedly read from an Amazon Kinesis stream use a GetShardIterator request to get the first shard iterator to use in your first `GetRecords` request and then use the shard iterator returned by the `GetRecords` request in `NextShardIterator` for subsequent reads. A new shard iterator is returned by every `GetRecords` request in `NextShardIterator`, which you use in the `ShardIterator` parameter of the next `GetRecords` request. If a `GetShardIterator` request is made too often, you will receive a `ProvisionedThroughputExceededException`. For more information about throughput limits, see the `Amazon Kinesis Developer Guide`_. `GetShardIterator` can return `null` for its `ShardIterator` to indicate that the shard has been closed and that the requested iterator will return no more data. A shard can be closed by a SplitShard or MergeShards operation. `GetShardIterator` has a limit of 5 transactions per second per account per shard. :type stream_name: string :param stream_name: The name of the stream. :type shard_id: string :param shard_id: The shard ID of the shard to get the iterator for. :type shard_iterator_type: string :param shard_iterator_type: Determines how the shard iterator is used to start reading data records from the shard. The following are the valid shard iterator types: + AT_SEQUENCE_NUMBER - Start reading exactly from the position denoted by a specific sequence number. + AFTER_SEQUENCE_NUMBER - Start reading right after the position denoted by a specific sequence number. + TRIM_HORIZON - Start reading at the last untrimmed record in the shard in the system, which is the oldest data record in the shard. + LATEST - Start reading just after the most recent record in the shard, so that you always read the most recent data in the shard. :type starting_sequence_number: string :param starting_sequence_number: The sequence number of the data record in the shard from which to start reading.
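
        A hypothetical read loop (``conn`` is a KinesisConnection; the
        stream and shard IDs are illustrative)::

            result = conn.get_shard_iterator('my-stream',
                                             'shardId-000000000000',
                                             'TRIM_HORIZON')
            iterator = result['ShardIterator']
            while iterator is not None:
                batch = conn.get_records(iterator, limit=100)
                for record in batch['Records']:
                    print record['Data']
                iterator = batch['NextShardIterator']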
If you do not specify a value for the `Limit` parameter, Amazon Kinesis uses the default limit, which is currently 10. You can detect if there are more streams available to list by using the `HasMoreStreams` flag from the returned output. If there are more streams available, you can request more streams by using the name of the last stream returned by the `ListStreams` request in the `ExclusiveStartStreamName` parameter in a subsequent request to `ListStreams`. The group of stream names returned by the subsequent request is then added to the list. You can continue this process until all the stream names have been collected in the list. `ListStreams` has a limit of 5 transactions per second per account. :type limit: integer :param limit: The maximum number of streams to list. :type exclusive_start_stream_name: string :param exclusive_start_stream_name: The name of the stream to start the list with. """ params = {} if limit is not None: params['Limit'] = limit if exclusive_start_stream_name is not None: params['ExclusiveStartStreamName'] = exclusive_start_stream_name return self.make_request(action='ListStreams', body=json.dumps(params)) def merge_shards(self, stream_name, shard_to_merge, adjacent_shard_to_merge): """ This operation merges two adjacent shards in a stream and combines them into a single shard to reduce the stream's capacity to ingest and transport data. Two shards are considered adjacent if the union of the hash key ranges for the two shards form a contiguous set with no gaps. For example, if you have two shards, one with a hash key range of 276...381 and the other with a hash key range of 382...454, then you could merge these two shards into a single shard that would have a hash key range of 276...454. After the merge, the single child shard receives data for all hash key values covered by the two parent shards. `MergeShards` is called when there is a need to reduce the overall capacity of a stream because of excess capacity that is not being used. The operation requires that you specify the shard to be merged and the adjacent shard for a given stream. For more information about merging shards, see the `Amazon Kinesis Developer Guide`_. If the stream is in the ACTIVE state, you can call `MergeShards`. If a stream is in the CREATING, UPDATING, or DELETING state, then Amazon Kinesis returns a `ResourceInUseException`. If the specified stream does not exist, Amazon Kinesis returns a `ResourceNotFoundException`. You can use the DescribeStream operation to check the state of the stream, which is returned in `StreamStatus`. `MergeShards` is an asynchronous operation. Upon receiving a `MergeShards` request, Amazon Kinesis immediately returns a response and sets the `StreamStatus` to UPDATING. After the operation is completed, Amazon Kinesis sets the `StreamStatus` to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state. You use the DescribeStream operation to determine the shard IDs that are specified in the `MergeShards` request. If you try to operate on too many streams in parallel using CreateStream, DeleteStream, `MergeShards` or SplitShard, you will receive a `LimitExceededException`. `MergeShards` has a limit of 5 transactions per second per account. :type stream_name: string :param stream_name: The name of the stream for the merge. :type shard_to_merge: string :param shard_to_merge: The shard ID of the shard to combine with the adjacent shard for the merge.
:type adjacent_shard_to_merge: string :param adjacent_shard_to_merge: The shard ID of the adjacent shard for the merge. """ params = { 'StreamName': stream_name, 'ShardToMerge': shard_to_merge, 'AdjacentShardToMerge': adjacent_shard_to_merge, } return self.make_request(action='MergeShards', body=json.dumps(params)) def put_record(self, stream_name, data, partition_key, explicit_hash_key=None, sequence_number_for_ordering=None, exclusive_minimum_sequence_number=None, b64_encode=True): """ This operation puts a data record into an Amazon Kinesis stream from a producer. This operation must be called to send data from the producer into the Amazon Kinesis stream for real-time ingestion and subsequent processing. The `PutRecord` operation requires the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself. The data blob could be a segment from a log file, geographic/location data, website clickstream data, or any other data type. The partition key is used to distribute data across shards. Amazon Kinesis segregates the data records that belong to a data stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to. Partition keys are Unicode strings, with a maximum length limit of 256 bytes. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the `ExplicitHashKey` parameter. For more information, see the `Amazon Kinesis Developer Guide`_. `PutRecord` returns the shard ID of where the data record was placed and the sequence number that was assigned to the data record. The `SequenceNumberForOrdering` sets the initial sequence number for the partition key. Later `PutRecord` requests to the same partition key (from the same client) will automatically increase from `SequenceNumberForOrdering`, ensuring strict sequential ordering. If a `PutRecord` request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, `PutRecord` throws `ProvisionedThroughputExceededException`. Data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream. :type stream_name: string :param stream_name: The name of the stream to put the data record into. :type data: blob :param data: The data blob to put into the record, which will be Base64 encoded. The maximum size of the data blob is 50 kilobytes (KB). Set `b64_encode` to disable automatic Base64 encoding. :type partition_key: string :param partition_key: Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream. :type explicit_hash_key: string :param explicit_hash_key: The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash. 
:type sequence_number_for_ordering: string :param sequence_number_for_ordering: The sequence number to use as the initial number for the partition key. Subsequent calls to `PutRecord` from the same client and for the same partition key will increase from the `SequenceNumberForOrdering` value. :type b64_encode: boolean :param b64_encode: Whether to Base64 encode `data`. Can be set to ``False`` if `data` is already encoded to prevent double encoding. """ params = { 'StreamName': stream_name, 'Data': data, 'PartitionKey': partition_key, } if explicit_hash_key is not None: params['ExplicitHashKey'] = explicit_hash_key if sequence_number_for_ordering is not None: params['SequenceNumberForOrdering'] = sequence_number_for_ordering if b64_encode: params['Data'] = base64.b64encode(params['Data']) return self.make_request(action='PutRecord', body=json.dumps(params)) def split_shard(self, stream_name, shard_to_split, new_starting_hash_key): """ This operation splits a shard into two new shards in the stream, to increase the stream's capacity to ingest and transport data. `SplitShard` is called when there is a need to increase the overall capacity of a stream because of an expected increase in the volume of data records being ingested. `SplitShard` can also be used when a given shard appears to be approaching its maximum utilization, for example, when the set of producers sending data into the specific shard are suddenly sending more than previously anticipated. You can also call the `SplitShard` operation to increase stream capacity, so that more Amazon Kinesis applications can simultaneously read data from the stream for real-time processing. The `SplitShard` operation requires that you specify the shard to be split and the new hash key, which is the position in the shard where the shard gets split in two. In many cases, the new hash key might simply be the average of the beginning and ending hash key, but it can be any hash key value in the range being mapped into the shard. For more information about splitting shards, see the `Amazon Kinesis Developer Guide`_. You can use the DescribeStream operation to determine the shard ID and hash key values for the `ShardToSplit` and `NewStartingHashKey` parameters that are specified in the `SplitShard` request. `SplitShard` is an asynchronous operation. Upon receiving a `SplitShard` request, Amazon Kinesis immediately returns a response and sets the stream status to UPDATING. After the operation is completed, Amazon Kinesis sets the stream status to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state. You can use `DescribeStream` to check the status of the stream, which is returned in `StreamStatus`. If the stream is in the ACTIVE state, you can call `SplitShard`. If a stream is in the CREATING, UPDATING, or DELETING state, then Amazon Kinesis returns a `ResourceInUseException`. If the specified stream does not exist, Amazon Kinesis returns a `ResourceNotFoundException`. If you try to create more shards than are authorized for your account, you receive a `LimitExceededException`. **Note:** The default limit for an AWS account is two shards per stream. If you need to create a stream with more than two shards, contact AWS Support to increase the limit on your account. If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you will receive a `LimitExceededException`. `SplitShard` has a limit of 5 transactions per second per account.
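
        A hypothetical midpoint split (``conn`` is a KinesisConnection and
        ``shard`` is a shard dictionary taken from a prior DescribeStream
        response; the stream name is illustrative)::

            lo = int(shard['HashKeyRange']['StartingHashKey'])
            hi = int(shard['HashKeyRange']['EndingHashKey'])
            conn.split_shard('my-stream', shard['ShardId'],
                             str((lo + hi) // 2))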
:type stream_name: string :param stream_name: The name of the stream for the shard split. :type shard_to_split: string :param shard_to_split: The shard ID of the shard to split. :type new_starting_hash_key: string :param new_starting_hash_key: A hash key value for the starting hash key of one of the child shards created by the split. The hash key range for a given shard constitutes a set of ordered contiguous positive integers. The value for `NewStartingHashKey` must be in the range of hash keys being mapped into the shard. The `NewStartingHashKey` hash key value and all higher hash key values in hash key range are distributed to one of the child shards. All the lower hash key values in the range are distributed to the other child shard. """ params = { 'StreamName': stream_name, 'ShardToSplit': shard_to_split, 'NewStartingHashKey': new_starting_hash_key, } return self.make_request(action='SplitShard', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response.getheaders()) boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/manage/000077500000000000000000000000001225267101000147265ustar00rootroot00000000000000boto-2.20.1/boto/manage/__init__.py000066400000000000000000000021241225267101000170360ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# boto-2.20.1/boto/manage/cmdshell.py000066400000000000000000000206131225267101000170750ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.mashups.interactive import interactive_shell import boto import os import time import shutil import StringIO import paramiko import socket import subprocess class SSHClient(object): def __init__(self, server, host_key_file='~/.ssh/known_hosts', uname='root', timeout=None, ssh_pwd=None): self.server = server self.host_key_file = host_key_file self.uname = uname self._timeout = timeout self._pkey = paramiko.RSAKey.from_private_key_file(server.ssh_key_file, password=ssh_pwd) self._ssh_client = paramiko.SSHClient() self._ssh_client.load_system_host_keys() self._ssh_client.load_host_keys(os.path.expanduser(host_key_file)) self._ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) self.connect() def connect(self, num_retries=5): retry = 0 while retry < num_retries: try: self._ssh_client.connect(self.server.hostname, username=self.uname, pkey=self._pkey, timeout=self._timeout) return except socket.error, (value, message): if value in (51, 61, 111): print 'SSH Connection refused, will retry in 5 seconds' time.sleep(5) retry += 1 else: raise except paramiko.BadHostKeyException: print "%s has an entry in ~/.ssh/known_hosts and it doesn't match" % self.server.hostname print 'Edit that file to remove the entry and then hit return to try again' raw_input('Hit Enter when ready') retry += 1 except EOFError: print 'Unexpected Error from SSH Connection, retry in 5 seconds' time.sleep(5) retry += 1 print 'Could not establish SSH connection' def open_sftp(self): return self._ssh_client.open_sftp() def get_file(self, src, dst): sftp_client = self.open_sftp() sftp_client.get(src, dst) def put_file(self, src, dst): sftp_client = self.open_sftp() sftp_client.put(src, dst) def open(self, filename, mode='r', bufsize=-1): """ Open a file on the remote system and return a file-like object. """ sftp_client = self.open_sftp() return sftp_client.open(filename, mode, bufsize) def listdir(self, path): sftp_client = self.open_sftp() return sftp_client.listdir(path) def isdir(self, path): status = self.run('[ -d %s ] || echo "FALSE"' % path) if status[1].startswith('FALSE'): return 0 return 1 def exists(self, path): status = self.run('[ -a %s ] || echo "FALSE"' % path) if status[1].startswith('FALSE'): return 0 return 1 def shell(self): """ Start an interactive shell session on the remote host. 
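
        A hypothetical session (assumes ``server`` is a reachable
        :class:`boto.manage.server.Server` instance)::

            client = SSHClient(server)
            client.shell()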
""" channel = self._ssh_client.invoke_shell() interactive_shell(channel) def run(self, command): """ Execute a command on the remote host. Return a tuple containing an integer status and a two strings, the first containing stdout and the second containing stderr from the command. """ boto.log.debug('running:%s on %s' % (command, self.server.instance_id)) status = 0 try: t = self._ssh_client.exec_command(command) except paramiko.SSHException: status = 1 std_out = t[1].read() std_err = t[2].read() t[0].close() t[1].close() t[2].close() boto.log.debug('stdout: %s' % std_out) boto.log.debug('stderr: %s' % std_err) return (status, std_out, std_err) def run_pty(self, command): """ Execute a command on the remote host with a pseudo-terminal. Returns a string containing the output of the command. """ boto.log.debug('running:%s on %s' % (command, self.server.instance_id)) channel = self._ssh_client.get_transport().open_session() channel.get_pty() channel.exec_command(command) return channel def close(self): transport = self._ssh_client.get_transport() transport.close() self.server.reset_cmdshell() class LocalClient(object): def __init__(self, server, host_key_file=None, uname='root'): self.server = server self.host_key_file = host_key_file self.uname = uname def get_file(self, src, dst): shutil.copyfile(src, dst) def put_file(self, src, dst): shutil.copyfile(src, dst) def listdir(self, path): return os.listdir(path) def isdir(self, path): return os.path.isdir(path) def exists(self, path): return os.path.exists(path) def shell(self): raise NotImplementedError('shell not supported with LocalClient') def run(self): boto.log.info('running:%s' % self.command) log_fp = StringIO.StringIO() process = subprocess.Popen(self.command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) while process.poll() == None: time.sleep(1) t = process.communicate() log_fp.write(t[0]) log_fp.write(t[1]) boto.log.info(log_fp.getvalue()) boto.log.info('output: %s' % log_fp.getvalue()) return (process.returncode, log_fp.getvalue()) def close(self): pass class FakeServer(object): """ A little class to fake out SSHClient (which is expecting a :class`boto.manage.server.Server` instance. This allows us to """ def __init__(self, instance, ssh_key_file): self.instance = instance self.ssh_key_file = ssh_key_file self.hostname = instance.dns_name self.instance_id = self.instance.id def start(server): instance_id = boto.config.get('Instance', 'instance-id', None) if instance_id == server.instance_id: return LocalClient(server) else: return SSHClient(server) def sshclient_from_instance(instance, ssh_key_file, host_key_file='~/.ssh/known_hosts', user_name='root', ssh_pwd=None): """ Create and return an SSHClient object given an instance object. :type instance: :class`boto.ec2.instance.Instance` object :param instance: The instance object. :type ssh_key_file: str :param ssh_key_file: A path to the private key file used to log into instance. :type host_key_file: str :param host_key_file: A path to the known_hosts file used by the SSH client. Defaults to ~/.ssh/known_hosts :type user_name: str :param user_name: The username to use when logging into the instance. Defaults to root. :type ssh_pwd: str :param ssh_pwd: The passphrase, if any, associated with private key. 
""" s = FakeServer(instance, ssh_key_file) return SSHClient(s, host_key_file, user_name, ssh_pwd) boto-2.20.1/boto/manage/propget.py000066400000000000000000000047021225267101000167630ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. def get(prop, choices=None): prompt = prop.verbose_name if not prompt: prompt = prop.name if choices: if callable(choices): choices = choices() else: choices = prop.get_choices() valid = False while not valid: if choices: min = 1 max = len(choices) for i in range(min, max+1): value = choices[i-1] if isinstance(value, tuple): value = value[0] print '[%d] %s' % (i, value) value = raw_input('%s [%d-%d]: ' % (prompt, min, max)) try: int_value = int(value) value = choices[int_value-1] if isinstance(value, tuple): value = value[1] valid = True except ValueError: print '%s is not a valid choice' % value except IndexError: print '%s is not within the range[%d-%d]' % (min, max) else: value = raw_input('%s: ' % prompt) try: value = prop.validate(value) if prop.empty(value) and prop.required: print 'A value is required' else: valid = True except: print 'Invalid value: %s' % value return value boto-2.20.1/boto/manage/server.py000066400000000000000000000527641225267101000166240ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" High-level abstraction of an EC2 server """ from __future__ import with_statement import boto.ec2 from boto.mashups.iobject import IObject from boto.pyami.config import BotoConfigPath, Config from boto.sdb.db.model import Model from boto.sdb.db.property import StringProperty, IntegerProperty, BooleanProperty, CalculatedProperty from boto.manage import propget from boto.ec2.zone import Zone from boto.ec2.keypair import KeyPair import os, time, StringIO from contextlib import closing from boto.exception import EC2ResponseError InstanceTypes = ['m1.small', 'm1.large', 'm1.xlarge', 'c1.medium', 'c1.xlarge', 'm2.2xlarge', 'm2.4xlarge'] class Bundler(object): def __init__(self, server, uname='root'): from boto.manage.cmdshell import SSHClient self.server = server self.uname = uname self.ssh_client = SSHClient(server, uname=uname) def copy_x509(self, key_file, cert_file): print '\tcopying cert and pk over to /mnt directory on server' self.ssh_client.open_sftp() path, name = os.path.split(key_file) self.remote_key_file = '/mnt/%s' % name self.ssh_client.put_file(key_file, self.remote_key_file) path, name = os.path.split(cert_file) self.remote_cert_file = '/mnt/%s' % name self.ssh_client.put_file(cert_file, self.remote_cert_file) print '...complete!' def bundle_image(self, prefix, size, ssh_key): command = "" if self.uname != 'root': command = "sudo " command += 'ec2-bundle-vol ' command += '-c %s -k %s ' % (self.remote_cert_file, self.remote_key_file) command += '-u %s ' % self.server._reservation.owner_id command += '-p %s ' % prefix command += '-s %d ' % size command += '-d /mnt ' if self.server.instance_type == 'm1.small' or self.server.instance_type == 'c1.medium': command += '-r i386' else: command += '-r x86_64' return command def upload_bundle(self, bucket, prefix, ssh_key): command = "" if self.uname != 'root': command = "sudo " command += 'ec2-upload-bundle ' command += '-m /mnt/%s.manifest.xml ' % prefix command += '-b %s ' % bucket command += '-a %s ' % self.server.ec2.aws_access_key_id command += '-s %s ' % self.server.ec2.aws_secret_access_key return command def bundle(self, bucket=None, prefix=None, key_file=None, cert_file=None, size=None, ssh_key=None, fp=None, clear_history=True): iobject = IObject() if not bucket: bucket = iobject.get_string('Name of S3 bucket') if not prefix: prefix = iobject.get_string('Prefix for AMI file') if not key_file: key_file = iobject.get_filename('Path to RSA private key file') if not cert_file: cert_file = iobject.get_filename('Path to RSA public cert file') if not size: size = iobject.get_int('Size (in MB) of bundled image') if not ssh_key: ssh_key = self.server.get_ssh_key_file() self.copy_x509(key_file, cert_file) if not fp: fp = StringIO.StringIO() fp.write('sudo mv %s /mnt/boto.cfg; ' % BotoConfigPath) fp.write('mv ~/.ssh/authorized_keys /mnt/authorized_keys; ') if clear_history: fp.write('history -c; ') fp.write(self.bundle_image(prefix, size, ssh_key)) fp.write('; ') fp.write(self.upload_bundle(bucket, prefix, ssh_key)) fp.write('; ') fp.write('sudo mv /mnt/boto.cfg %s; ' % BotoConfigPath) fp.write('mv /mnt/authorized_keys ~/.ssh/authorized_keys') command = fp.getvalue() print 'running the following command on the remote server:' print command t = self.ssh_client.run(command) print '\t%s' % t[0] print '\t%s' % t[1] print '...complete!' print 'registering image...' 
self.image_id = self.server.ec2.register_image(name=prefix, image_location='%s/%s.manifest.xml' % (bucket, prefix)) return self.image_id class CommandLineGetter(object): def get_ami_list(self): my_amis = [] for ami in self.ec2.get_all_images(): # hack alert, need a better way to do this! if ami.location.find('pyami') >= 0: my_amis.append((ami.location, ami)) return my_amis def get_region(self, params): region = params.get('region', None) if isinstance(region, str) or isinstance(region, unicode): region = boto.ec2.get_region(region) params['region'] = region if not region: prop = self.cls.find_property('region_name') params['region'] = propget.get(prop, choices=boto.ec2.regions) self.ec2 = params['region'].connect() def get_name(self, params): if not params.get('name', None): prop = self.cls.find_property('name') params['name'] = propget.get(prop) def get_description(self, params): if not params.get('description', None): prop = self.cls.find_property('description') params['description'] = propget.get(prop) def get_instance_type(self, params): if not params.get('instance_type', None): prop = StringProperty(name='instance_type', verbose_name='Instance Type', choices=InstanceTypes) params['instance_type'] = propget.get(prop) def get_quantity(self, params): if not params.get('quantity', None): prop = IntegerProperty(name='quantity', verbose_name='Number of Instances') params['quantity'] = propget.get(prop) def get_zone(self, params): if not params.get('zone', None): prop = StringProperty(name='zone', verbose_name='EC2 Availability Zone', choices=self.ec2.get_all_zones) params['zone'] = propget.get(prop) def get_ami_id(self, params): valid = False while not valid: ami = params.get('ami', None) if not ami: prop = StringProperty(name='ami', verbose_name='AMI') ami = propget.get(prop) try: rs = self.ec2.get_all_images([ami]) if len(rs) == 1: valid = True params['ami'] = rs[0] except EC2ResponseError: pass def get_group(self, params): group = params.get('group', None) if isinstance(group, str) or isinstance(group, unicode): group_list = self.ec2.get_all_security_groups() for g in group_list: if g.name == group: group = g params['group'] = g if not group: prop = StringProperty(name='group', verbose_name='EC2 Security Group', choices=self.ec2.get_all_security_groups) params['group'] = propget.get(prop) def get_key(self, params): keypair = params.get('keypair', None) if isinstance(keypair, str) or isinstance(keypair, unicode): key_list = self.ec2.get_all_key_pairs() for k in key_list: if k.name == keypair: keypair = k.name params['keypair'] = k.name if not keypair: prop = StringProperty(name='keypair', verbose_name='EC2 KeyPair', choices=self.ec2.get_all_key_pairs) params['keypair'] = propget.get(prop).name def get(self, cls, params): self.cls = cls self.get_region(params) self.ec2 = params['region'].connect() self.get_name(params) self.get_description(params) self.get_instance_type(params) self.get_zone(params) self.get_quantity(params) self.get_ami_id(params) self.get_group(params) self.get_key(params) class Server(Model): # # The properties of this object consists of real properties for data that # is not already stored in EC2 somewhere (e.g. name, description) plus # calculated properties for all of the properties that are already in # EC2 (e.g. hostname, security groups, etc.) 
# name = StringProperty(unique=True, verbose_name="Name") description = StringProperty(verbose_name="Description") region_name = StringProperty(verbose_name="EC2 Region Name") instance_id = StringProperty(verbose_name="EC2 Instance ID") elastic_ip = StringProperty(verbose_name="EC2 Elastic IP Address") production = BooleanProperty(verbose_name="Is This Server Production", default=False) ami_id = CalculatedProperty(verbose_name="AMI ID", calculated_type=str, use_method=True) zone = CalculatedProperty(verbose_name="Availability Zone Name", calculated_type=str, use_method=True) hostname = CalculatedProperty(verbose_name="Public DNS Name", calculated_type=str, use_method=True) private_hostname = CalculatedProperty(verbose_name="Private DNS Name", calculated_type=str, use_method=True) groups = CalculatedProperty(verbose_name="Security Groups", calculated_type=list, use_method=True) security_group = CalculatedProperty(verbose_name="Primary Security Group Name", calculated_type=str, use_method=True) key_name = CalculatedProperty(verbose_name="Key Name", calculated_type=str, use_method=True) instance_type = CalculatedProperty(verbose_name="Instance Type", calculated_type=str, use_method=True) status = CalculatedProperty(verbose_name="Current Status", calculated_type=str, use_method=True) launch_time = CalculatedProperty(verbose_name="Server Launch Time", calculated_type=str, use_method=True) console_output = CalculatedProperty(verbose_name="Console Output", calculated_type=file, use_method=True) packages = [] plugins = [] @classmethod def add_credentials(cls, cfg, aws_access_key_id, aws_secret_access_key): if not cfg.has_section('Credentials'): cfg.add_section('Credentials') cfg.set('Credentials', 'aws_access_key_id', aws_access_key_id) cfg.set('Credentials', 'aws_secret_access_key', aws_secret_access_key) if not cfg.has_section('DB_Server'): cfg.add_section('DB_Server') cfg.set('DB_Server', 'db_type', 'SimpleDB') cfg.set('DB_Server', 'db_name', cls._manager.domain.name) @classmethod def create(cls, config_file=None, logical_volume = None, cfg = None, **params): """ Create a new instance based on the specified configuration file or the specified configuration and the passed in parameters. If the config_file argument is not None, the configuration is read from there. Otherwise, the cfg argument is used. The config file may include other config files with a #import reference. The included config files must reside in the same directory as the specified file. The logical_volume argument, if supplied, will be used to get the current physical volume ID and use that as an override of the value specified in the config file. This may be useful for debugging purposes when you want to debug with a production config file but a test Volume. The dictionary argument may be used to override any EC2 configuration values in the config file. 
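For illustration, a config file passed to this method might carry an
EC2 section like the following (the values shown are placeholders, not
a fixed schema):

    [EC2]
    ami = ami-12345678
    instance_type = m1.small

Options found in that section are copied into the params dictionary
only when they are not already present; CommandLineGetter then prompts
interactively for anything still missing.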
""" if config_file: cfg = Config(path=config_file) if cfg.has_section('EC2'): # include any EC2 configuration values that aren't specified in params: for option in cfg.options('EC2'): if option not in params: params[option] = cfg.get('EC2', option) getter = CommandLineGetter() getter.get(cls, params) region = params.get('region') ec2 = region.connect() cls.add_credentials(cfg, ec2.aws_access_key_id, ec2.aws_secret_access_key) ami = params.get('ami') kp = params.get('keypair') group = params.get('group') zone = params.get('zone') # deal with possibly passed in logical volume: if logical_volume != None: cfg.set('EBS', 'logical_volume_name', logical_volume.name) cfg_fp = StringIO.StringIO() cfg.write(cfg_fp) # deal with the possibility that zone and/or keypair are strings read from the config file: if isinstance(zone, Zone): zone = zone.name if isinstance(kp, KeyPair): kp = kp.name reservation = ami.run(min_count=1, max_count=params.get('quantity', 1), key_name=kp, security_groups=[group], instance_type=params.get('instance_type'), placement = zone, user_data = cfg_fp.getvalue()) l = [] i = 0 elastic_ip = params.get('elastic_ip') instances = reservation.instances if elastic_ip != None and instances.__len__() > 0: instance = instances[0] print 'Waiting for instance to start so we can set its elastic IP address...' # Sometimes we get a message from ec2 that says that the instance does not exist. # Hopefully the following delay will giv eec2 enough time to get to a stable state: time.sleep(5) while instance.update() != 'running': time.sleep(1) instance.use_ip(elastic_ip) print 'set the elastic IP of the first instance to %s' % elastic_ip for instance in instances: s = cls() s.ec2 = ec2 s.name = params.get('name') + '' if i==0 else str(i) s.description = params.get('description') s.region_name = region.name s.instance_id = instance.id if elastic_ip and i == 0: s.elastic_ip = elastic_ip s.put() l.append(s) i += 1 return l @classmethod def create_from_instance_id(cls, instance_id, name, description=''): regions = boto.ec2.regions() for region in regions: ec2 = region.connect() try: rs = ec2.get_all_reservations([instance_id]) except: rs = [] if len(rs) == 1: s = cls() s.ec2 = ec2 s.name = name s.description = description s.region_name = region.name s.instance_id = instance_id s._reservation = rs[0] for instance in s._reservation.instances: if instance.id == instance_id: s._instance = instance s.put() return s return None @classmethod def create_from_current_instances(cls): servers = [] regions = boto.ec2.regions() for region in regions: ec2 = region.connect() rs = ec2.get_all_reservations() for reservation in rs: for instance in reservation.instances: try: Server.find(instance_id=instance.id).next() boto.log.info('Server for %s already exists' % instance.id) except StopIteration: s = cls() s.ec2 = ec2 s.name = instance.id s.region_name = region.name s.instance_id = instance.id s._reservation = reservation s.put() servers.append(s) return servers def __init__(self, id=None, **kw): Model.__init__(self, id, **kw) self.ssh_key_file = None self.ec2 = None self._cmdshell = None self._reservation = None self._instance = None self._setup_ec2() def _setup_ec2(self): if self.ec2 and self._instance and self._reservation: return if self.id: if self.region_name: for region in boto.ec2.regions(): if region.name == self.region_name: self.ec2 = region.connect() if self.instance_id and not self._instance: try: rs = self.ec2.get_all_reservations([self.instance_id]) if len(rs) >= 1: for instance in rs[0].instances: 
if instance.id == self.instance_id: self._reservation = rs[0] self._instance = instance except EC2ResponseError: pass def _status(self): status = '' if self._instance: self._instance.update() status = self._instance.state return status def _hostname(self): hostname = '' if self._instance: hostname = self._instance.public_dns_name return hostname def _private_hostname(self): hostname = '' if self._instance: hostname = self._instance.private_dns_name return hostname def _instance_type(self): it = '' if self._instance: it = self._instance.instance_type return it def _launch_time(self): lt = '' if self._instance: lt = self._instance.launch_time return lt def _console_output(self): co = '' if self._instance: co = self._instance.get_console_output() return co def _groups(self): gn = [] if self._reservation: gn = self._reservation.groups return gn def _security_group(self): groups = self._groups() if len(groups) >= 1: return groups[0].id return "" def _zone(self): zone = None if self._instance: zone = self._instance.placement return zone def _key_name(self): kn = None if self._instance: kn = self._instance.key_name return kn def put(self): Model.put(self) self._setup_ec2() def delete(self): if self.production: raise ValueError("Can't delete a production server") #self.stop() Model.delete(self) def stop(self): if self.production: raise ValueError("Can't delete a production server") if self._instance: self._instance.stop() def terminate(self): if self.production: raise ValueError("Can't delete a production server") if self._instance: self._instance.terminate() def reboot(self): if self._instance: self._instance.reboot() def wait(self): while self.status != 'running': time.sleep(5) def get_ssh_key_file(self): if not self.ssh_key_file: ssh_dir = os.path.expanduser('~/.ssh') if os.path.isdir(ssh_dir): ssh_file = os.path.join(ssh_dir, '%s.pem' % self.key_name) if os.path.isfile(ssh_file): self.ssh_key_file = ssh_file if not self.ssh_key_file: iobject = IObject() self.ssh_key_file = iobject.get_filename('Path to OpenSSH Key file') return self.ssh_key_file def get_cmdshell(self): if not self._cmdshell: import cmdshell self.get_ssh_key_file() self._cmdshell = cmdshell.start(self) return self._cmdshell def reset_cmdshell(self): self._cmdshell = None def run(self, command): with closing(self.get_cmdshell()) as cmd: status = cmd.run(command) return status def get_bundler(self, uname='root'): self.get_ssh_key_file() return Bundler(self, uname) def get_ssh_client(self, uname='root', ssh_pwd=None): from boto.manage.cmdshell import SSHClient self.get_ssh_key_file() return SSHClient(self, uname=uname, ssh_pwd=ssh_pwd) def install(self, pkg): return self.run('apt-get -y install %s' % pkg) boto-2.20.1/boto/manage/task.py000066400000000000000000000152471225267101000162530ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import boto from boto.sdb.db.property import StringProperty, DateTimeProperty, IntegerProperty from boto.sdb.db.model import Model import datetime, subprocess, StringIO, time def check_hour(val): if val == '*': return if int(val) < 0 or int(val) > 23: raise ValueError class Task(Model): """ A scheduled, repeating task that can be executed by any participating servers. The scheduling is similar to cron jobs. Each task has an hour attribute. The allowable values for hour are [0-23|*]. To keep the operation reasonably efficient and not cause excessive polling, the minimum granularity of a Task is hourly. Some examples: hour='*' - the task would be executed each hour hour='3' - the task would be executed at 3AM GMT each day. """ name = StringProperty() hour = StringProperty(required=True, validator=check_hour, default='*') command = StringProperty(required=True) last_executed = DateTimeProperty() last_status = IntegerProperty() last_output = StringProperty() message_id = StringProperty() @classmethod def start_all(cls, queue_name): for task in cls.all(): task.start(queue_name) def __init__(self, id=None, **kw): Model.__init__(self, id, **kw) self.hourly = self.hour == '*' self.daily = self.hour != '*' self.now = datetime.datetime.utcnow() def check(self): """ Determine how long until the next scheduled time for a Task. Returns the number of seconds until the next scheduled time or zero if the task needs to be run immediately. If it's an hourly task and it's never been run, run it now. If it's a daily task and it's never been run and the hour is right, run it now. 
""" boto.log.info('checking Task[%s]-now=%s, last=%s' % (self.name, self.now, self.last_executed)) if self.hourly and not self.last_executed: return 0 if self.daily and not self.last_executed: if int(self.hour) == self.now.hour: return 0 else: return max( (int(self.hour)-self.now.hour), (self.now.hour-int(self.hour)) )*60*60 delta = self.now - self.last_executed if self.hourly: if delta.seconds >= 60*60: return 0 else: return 60*60 - delta.seconds else: if int(self.hour) == self.now.hour: if delta.days >= 1: return 0 else: return 82800 # 23 hours, just to be safe else: return max( (int(self.hour)-self.now.hour), (self.now.hour-int(self.hour)) )*60*60 def _run(self, msg, vtimeout): boto.log.info('Task[%s] - running:%s' % (self.name, self.command)) log_fp = StringIO.StringIO() process = subprocess.Popen(self.command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) nsecs = 5 current_timeout = vtimeout while process.poll() == None: boto.log.info('nsecs=%s, timeout=%s' % (nsecs, current_timeout)) if nsecs >= current_timeout: current_timeout += vtimeout boto.log.info('Task[%s] - setting timeout to %d seconds' % (self.name, current_timeout)) if msg: msg.change_visibility(current_timeout) time.sleep(5) nsecs += 5 t = process.communicate() log_fp.write(t[0]) log_fp.write(t[1]) boto.log.info('Task[%s] - output: %s' % (self.name, log_fp.getvalue())) self.last_executed = self.now self.last_status = process.returncode self.last_output = log_fp.getvalue()[0:1023] def run(self, msg, vtimeout=60): delay = self.check() boto.log.info('Task[%s] - delay=%s seconds' % (self.name, delay)) if delay == 0: self._run(msg, vtimeout) queue = msg.queue new_msg = queue.new_message(self.id) new_msg = queue.write(new_msg) self.message_id = new_msg.id self.put() boto.log.info('Task[%s] - new message id=%s' % (self.name, new_msg.id)) msg.delete() boto.log.info('Task[%s] - deleted message %s' % (self.name, msg.id)) else: boto.log.info('new_vtimeout: %d' % delay) msg.change_visibility(delay) def start(self, queue_name): boto.log.info('Task[%s] - starting with queue: %s' % (self.name, queue_name)) queue = boto.lookup('sqs', queue_name) msg = queue.new_message(self.id) msg = queue.write(msg) self.message_id = msg.id self.put() boto.log.info('Task[%s] - start successful' % self.name) class TaskPoller(object): def __init__(self, queue_name): self.sqs = boto.connect_sqs() self.queue = self.sqs.lookup(queue_name) def poll(self, wait=60, vtimeout=60): while True: m = self.queue.read(vtimeout) if m: task = Task.get_by_id(m.get_body()) if task: if not task.message_id or m.id == task.message_id: boto.log.info('Task[%s] - read message %s' % (task.name, m.id)) task.run(m, vtimeout) else: boto.log.info('Task[%s] - found extraneous message, ignoring' % task.name) else: time.sleep(wait) boto-2.20.1/boto/manage/test_manage.py000066400000000000000000000014301225267101000175650ustar00rootroot00000000000000from boto.manage.server import Server from boto.manage.volume import Volume import time print '--> Creating New Volume' volume = Volume.create() print volume print '--> Creating New Server' server_list = Server.create() server = server_list[0] print server print '----> Waiting for Server to start up' while server.status != 'running': print '*' time.sleep(10) print '----> Server is running' print '--> Run "df -k" on Server' status = server.run('df -k') print status[1] print '--> Now run volume.make_ready to make the volume ready to use on server' volume.make_ready(server) print '--> Run "df -k" on Server' 
status = server.run('df -k') print status[1] print '--> Do an "ls -al" on the new filesystem' status = server.run('ls -al %s' % volume.mount_point) print status[1] boto-2.20.1/boto/manage/volume.py000066400000000000000000000377031225267101000166210ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from __future__ import with_statement from boto.sdb.db.model import Model from boto.sdb.db.property import StringProperty, IntegerProperty, ListProperty, ReferenceProperty, CalculatedProperty from boto.manage.server import Server from boto.manage import propget import boto.utils import boto.ec2 import time import traceback from contextlib import closing import datetime class CommandLineGetter(object): def get_region(self, params): if not params.get('region', None): prop = self.cls.find_property('region_name') params['region'] = propget.get(prop, choices=boto.ec2.regions) def get_zone(self, params): if not params.get('zone', None): prop = StringProperty(name='zone', verbose_name='EC2 Availability Zone', choices=self.ec2.get_all_zones) params['zone'] = propget.get(prop) def get_name(self, params): if not params.get('name', None): prop = self.cls.find_property('name') params['name'] = propget.get(prop) def get_size(self, params): if not params.get('size', None): prop = IntegerProperty(name='size', verbose_name='Size (GB)') params['size'] = propget.get(prop) def get_mount_point(self, params): if not params.get('mount_point', None): prop = self.cls.find_property('mount_point') params['mount_point'] = propget.get(prop) def get_device(self, params): if not params.get('device', None): prop = self.cls.find_property('device') params['device'] = propget.get(prop) def get(self, cls, params): self.cls = cls self.get_region(params) self.ec2 = params['region'].connect() self.get_zone(params) self.get_name(params) self.get_size(params) self.get_mount_point(params) self.get_device(params) class Volume(Model): name = StringProperty(required=True, unique=True, verbose_name='Name') region_name = StringProperty(required=True, verbose_name='EC2 Region') zone_name = StringProperty(required=True, verbose_name='EC2 Zone') mount_point = StringProperty(verbose_name='Mount Point') device = StringProperty(verbose_name="Device Name", default='/dev/sdp') volume_id = StringProperty(required=True) past_volume_ids = ListProperty(item_type=str) server = ReferenceProperty(Server, collection_name='volumes', verbose_name='Server Attached To') volume_state = 
CalculatedProperty(verbose_name="Volume State", calculated_type=str, use_method=True) attachment_state = CalculatedProperty(verbose_name="Attachment State", calculated_type=str, use_method=True) size = CalculatedProperty(verbose_name="Size (GB)", calculated_type=int, use_method=True) @classmethod def create(cls, **params): getter = CommandLineGetter() getter.get(cls, params) region = params.get('region') ec2 = region.connect() zone = params.get('zone') size = params.get('size') ebs_volume = ec2.create_volume(size, zone.name) v = cls() v.ec2 = ec2 v.volume_id = ebs_volume.id v.name = params.get('name') v.mount_point = params.get('mount_point') v.device = params.get('device') v.region_name = region.name v.zone_name = zone.name v.put() return v @classmethod def create_from_volume_id(cls, region_name, volume_id, name): vol = None ec2 = boto.ec2.connect_to_region(region_name) rs = ec2.get_all_volumes([volume_id]) if len(rs) == 1: v = rs[0] vol = cls() vol.volume_id = v.id vol.name = name vol.region_name = v.region.name vol.zone_name = v.zone vol.put() return vol def create_from_latest_snapshot(self, name, size=None): snapshot = self.get_snapshots()[-1] return self.create_from_snapshot(name, snapshot, size) def create_from_snapshot(self, name, snapshot, size=None): if size < self.size: size = self.size ec2 = self.get_ec2_connection() if self.zone_name == None or self.zone_name == '': # deal with the migration case where the zone is not set in the logical volume: current_volume = ec2.get_all_volumes([self.volume_id])[0] self.zone_name = current_volume.zone ebs_volume = ec2.create_volume(size, self.zone_name, snapshot) v = Volume() v.ec2 = self.ec2 v.volume_id = ebs_volume.id v.name = name v.mount_point = self.mount_point v.device = self.device v.region_name = self.region_name v.zone_name = self.zone_name v.put() return v def get_ec2_connection(self): if self.server: return self.server.ec2 if not hasattr(self, 'ec2') or self.ec2 == None: self.ec2 = boto.ec2.connect_to_region(self.region_name) return self.ec2 def _volume_state(self): ec2 = self.get_ec2_connection() rs = ec2.get_all_volumes([self.volume_id]) return rs[0].volume_state() def _attachment_state(self): ec2 = self.get_ec2_connection() rs = ec2.get_all_volumes([self.volume_id]) return rs[0].attachment_state() def _size(self): if not hasattr(self, '__size'): ec2 = self.get_ec2_connection() rs = ec2.get_all_volumes([self.volume_id]) self.__size = rs[0].size return self.__size def install_xfs(self): if self.server: self.server.install('xfsprogs xfsdump') def get_snapshots(self): """ Returns a list of all completed snapshots for this volume ID. 
""" ec2 = self.get_ec2_connection() rs = ec2.get_all_snapshots() all_vols = [self.volume_id] + self.past_volume_ids snaps = [] for snapshot in rs: if snapshot.volume_id in all_vols: if snapshot.progress == '100%': snapshot.date = boto.utils.parse_ts(snapshot.start_time) snapshot.keep = True snaps.append(snapshot) snaps.sort(cmp=lambda x, y: cmp(x.date, y.date)) return snaps def attach(self, server=None): if self.attachment_state == 'attached': print 'already attached' return None if server: self.server = server self.put() ec2 = self.get_ec2_connection() ec2.attach_volume(self.volume_id, self.server.instance_id, self.device) def detach(self, force=False): state = self.attachment_state if state == 'available' or state == None or state == 'detaching': print 'already detached' return None ec2 = self.get_ec2_connection() ec2.detach_volume(self.volume_id, self.server.instance_id, self.device, force) self.server = None self.put() def checkfs(self, use_cmd=None): if self.server == None: raise ValueError('server attribute must be set to run this command') # detemine state of file system on volume, only works if attached if use_cmd: cmd = use_cmd else: cmd = self.server.get_cmdshell() status = cmd.run('xfs_check %s' % self.device) if not use_cmd: cmd.close() if status[1].startswith('bad superblock magic number 0'): return False return True def wait(self): if self.server == None: raise ValueError('server attribute must be set to run this command') with closing(self.server.get_cmdshell()) as cmd: # wait for the volume device to appear cmd = self.server.get_cmdshell() while not cmd.exists(self.device): boto.log.info('%s still does not exist, waiting 10 seconds' % self.device) time.sleep(10) def format(self): if self.server == None: raise ValueError('server attribute must be set to run this command') status = None with closing(self.server.get_cmdshell()) as cmd: if not self.checkfs(cmd): boto.log.info('make_fs...') status = cmd.run('mkfs -t xfs %s' % self.device) return status def mount(self): if self.server == None: raise ValueError('server attribute must be set to run this command') boto.log.info('handle_mount_point') with closing(self.server.get_cmdshell()) as cmd: cmd = self.server.get_cmdshell() if not cmd.isdir(self.mount_point): boto.log.info('making directory') # mount directory doesn't exist so create it cmd.run("mkdir %s" % self.mount_point) else: boto.log.info('directory exists already') status = cmd.run('mount -l') lines = status[1].split('\n') for line in lines: t = line.split() if t and t[2] == self.mount_point: # something is already mounted at the mount point # unmount that and mount it as /tmp if t[0] != self.device: cmd.run('umount %s' % self.mount_point) cmd.run('mount %s /tmp' % t[0]) cmd.run('chmod 777 /tmp') break # Mount up our new EBS volume onto mount_point cmd.run("mount %s %s" % (self.device, self.mount_point)) cmd.run('xfs_growfs %s' % self.mount_point) def make_ready(self, server): self.server = server self.put() self.install_xfs() self.attach() self.wait() self.format() self.mount() def freeze(self): if self.server: return self.server.run("/usr/sbin/xfs_freeze -f %s" % self.mount_point) def unfreeze(self): if self.server: return self.server.run("/usr/sbin/xfs_freeze -u %s" % self.mount_point) def snapshot(self): # if this volume is attached to a server # we need to freeze the XFS file system try: self.freeze() if self.server == None: snapshot = self.get_ec2_connection().create_snapshot(self.volume_id) else: snapshot = self.server.ec2.create_snapshot(self.volume_id) 
boto.log.info('Snapshot of Volume %s created: %s' % (self.name, snapshot)) except Exception: boto.log.info('Snapshot error') boto.log.info(traceback.format_exc()) finally: status = self.unfreeze() return status def get_snapshot_range(self, snaps, start_date=None, end_date=None): l = [] for snap in snaps: if start_date and end_date: if snap.date >= start_date and snap.date <= end_date: l.append(snap) elif start_date: if snap.date >= start_date: l.append(snap) elif end_date: if snap.date <= end_date: l.append(snap) else: l.append(snap) return l def trim_snapshots(self, delete=False): """ Trim the number of snapshots for this volume. This method always keeps the oldest snapshot. It then uses the parameters passed in to determine how many others should be kept. The algorithm is to keep all snapshots from the current day. Then it will keep the first snapshot of the day for the previous seven days. Then, it will keep the first snapshot of the week for the previous four weeks. After than, it will keep the first snapshot of the month for as many months as there are. """ snaps = self.get_snapshots() # Always keep the oldest and the newest if len(snaps) <= 2: return snaps snaps = snaps[1:-1] now = datetime.datetime.now(snaps[0].date.tzinfo) midnight = datetime.datetime(year=now.year, month=now.month, day=now.day, tzinfo=now.tzinfo) # Keep the first snapshot from each day of the previous week one_week = datetime.timedelta(days=7, seconds=60*60) print midnight-one_week, midnight previous_week = self.get_snapshot_range(snaps, midnight-one_week, midnight) print previous_week if not previous_week: return snaps current_day = None for snap in previous_week: if current_day and current_day == snap.date.day: snap.keep = False else: current_day = snap.date.day # Get ourselves onto the next full week boundary if previous_week: week_boundary = previous_week[0].date if week_boundary.weekday() != 0: delta = datetime.timedelta(days=week_boundary.weekday()) week_boundary = week_boundary - delta # Keep one within this partial week partial_week = self.get_snapshot_range(snaps, week_boundary, previous_week[0].date) if len(partial_week) > 1: for snap in partial_week[1:]: snap.keep = False # Keep the first snapshot of each week for the previous 4 weeks for i in range(0, 4): weeks_worth = self.get_snapshot_range(snaps, week_boundary-one_week, week_boundary) if len(weeks_worth) > 1: for snap in weeks_worth[1:]: snap.keep = False week_boundary = week_boundary - one_week # Now look through all remaining snaps and keep one per month remainder = self.get_snapshot_range(snaps, end_date=week_boundary) current_month = None for snap in remainder: if current_month and current_month == snap.date.month: snap.keep = False else: current_month = snap.date.month if delete: for snap in snaps: if not snap.keep: boto.log.info('Deleting %s(%s) for %s' % (snap, snap.date, self.name)) snap.delete() return snaps def grow(self, size): pass def copy(self, snapshot): pass def get_snapshot_from_date(self, date): pass def delete(self, delete_ebs_volume=False): if delete_ebs_volume: self.detach() ec2 = self.get_ec2_connection() ec2.delete_volume(self.volume_id) Model.delete(self) def archive(self): # snapshot volume, trim snaps, delete volume-id pass boto-2.20.1/boto/mashups/000077500000000000000000000000001225267101000151565ustar00rootroot00000000000000boto-2.20.1/boto/mashups/__init__.py000066400000000000000000000021241225267101000172660ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby 
granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/mashups/interactive.py000066400000000000000000000052611225267101000200510ustar00rootroot00000000000000# Copyright (C) 2003-2007 Robey Pointer # # This file is part of paramiko. # # Paramiko is free software; you can redistribute it and/or modify it under the # terms of the GNU Lesser General Public License as published by the Free # Software Foundation; either version 2.1 of the License, or (at your option) # any later version. # # Paramiko is distrubuted in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR # A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more # details. # # You should have received a copy of the GNU Lesser General Public License # along with Paramiko; if not, write to the Free Software Foundation, Inc., # 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. import socket import sys # windows does not have termios... try: import termios import tty has_termios = True except ImportError: has_termios = False def interactive_shell(chan): if has_termios: posix_shell(chan) else: windows_shell(chan) def posix_shell(chan): import select oldtty = termios.tcgetattr(sys.stdin) try: tty.setraw(sys.stdin.fileno()) tty.setcbreak(sys.stdin.fileno()) chan.settimeout(0.0) while True: r, w, e = select.select([chan, sys.stdin], [], []) if chan in r: try: x = chan.recv(1024) if len(x) == 0: print '\r\n*** EOF\r\n', break sys.stdout.write(x) sys.stdout.flush() except socket.timeout: pass if sys.stdin in r: x = sys.stdin.read(1) if len(x) == 0: break chan.send(x) finally: termios.tcsetattr(sys.stdin, termios.TCSADRAIN, oldtty) # thanks to Mike Looijmans for this code def windows_shell(chan): import threading sys.stdout.write("Line-buffered terminal emulation. 
Press F6 or ^Z to send EOF.\r\n\r\n") def writeall(sock): while True: data = sock.recv(256) if not data: sys.stdout.write('\r\n*** EOF ***\r\n\r\n') sys.stdout.flush() break sys.stdout.write(data) sys.stdout.flush() writer = threading.Thread(target=writeall, args=(chan,)) writer.start() try: while True: d = sys.stdin.read(1) if not d: break chan.send(d) except EOFError: # user hit ^Z or F6 pass boto-2.20.1/boto/mashups/iobject.py000066400000000000000000000101271225267101000171500ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import os def int_val_fn(v): try: int(v) return True except: return False class IObject(object): def choose_from_list(self, item_list, search_str='', prompt='Enter Selection'): if not item_list: print 'No Choices Available' return choice = None while not choice: n = 1 choices = [] for item in item_list: if isinstance(item, basestring): print '[%d] %s' % (n, item) choices.append(item) n += 1 else: obj, id, desc = item if desc: if desc.find(search_str) >= 0: print '[%d] %s - %s' % (n, id, desc) choices.append(obj) n += 1 else: if id.find(search_str) >= 0: print '[%d] %s' % (n, id) choices.append(obj) n += 1 if choices: val = raw_input('%s[1-%d]: ' % (prompt, len(choices))) if val.startswith('/'): search_str = val[1:] else: try: int_val = int(val) if int_val == 0: return None choice = choices[int_val-1] except ValueError: print '%s is not a valid choice' % val except IndexError: print '%s is not within the range[1-%d]' % (val, len(choices)) else: print "No objects matched your pattern" search_str = '' return choice def get_string(self, prompt, validation_fn=None): okay = False while not okay: val = raw_input('%s: ' % prompt) if validation_fn: okay = validation_fn(val) if not okay: print 'Invalid value: %s' % val else: okay = True return val def get_filename(self, prompt): okay = False val = '' while not okay: val = raw_input('%s: %s' % (prompt, val)) val = os.path.expanduser(val) if os.path.isfile(val): okay = True elif os.path.isdir(val): path = val val = self.choose_from_list(os.listdir(path)) if val: val = os.path.join(path, val) okay = True else: val = '' else: print 'Invalid value: %s' % val val = '' return val def get_int(self, prompt): s = self.get_string(prompt, int_val_fn) return int(s) boto-2.20.1/boto/mashups/order.py000066400000000000000000000166141225267101000166530ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of 
charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ High-level abstraction of an EC2 order for servers """ import boto import boto.ec2 from boto.mashups.server import Server, ServerSet from boto.mashups.iobject import IObject from boto.pyami.config import Config from boto.sdb.persist import get_domain, set_domain import time, StringIO InstanceTypes = ['m1.small', 'm1.large', 'm1.xlarge', 'c1.medium', 'c1.xlarge'] class Item(IObject): def __init__(self): self.region = None self.name = None self.instance_type = None self.quantity = 0 self.zone = None self.ami = None self.groups = [] self.key = None self.ec2 = None self.config = None def set_userdata(self, key, value): self.userdata[key] = value def get_userdata(self, key): return self.userdata[key] def set_region(self, region=None): if region: self.region = region else: l = [(r, r.name, r.endpoint) for r in boto.ec2.regions()] self.region = self.choose_from_list(l, prompt='Choose Region') def set_name(self, name=None): if name: self.name = name else: self.name = self.get_string('Name') def set_instance_type(self, instance_type=None): if instance_type: self.instance_type = instance_type else: self.instance_type = self.choose_from_list(InstanceTypes, 'Instance Type') def set_quantity(self, n=0): if n > 0: self.quantity = n else: self.quantity = self.get_int('Quantity') def set_zone(self, zone=None): if zone: self.zone = zone else: l = [(z, z.name, z.state) for z in self.ec2.get_all_zones()] self.zone = self.choose_from_list(l, prompt='Choose Availability Zone') def set_ami(self, ami=None): if ami: self.ami = ami else: l = [(a, a.id, a.location) for a in self.ec2.get_all_images()] self.ami = self.choose_from_list(l, prompt='Choose AMI') def add_group(self, group=None): if group: self.groups.append(group) else: l = [(s, s.name, s.description) for s in self.ec2.get_all_security_groups()] self.groups.append(self.choose_from_list(l, prompt='Choose Security Group')) def set_key(self, key=None): if key: self.key = key else: l = [(k, k.name, '') for k in self.ec2.get_all_key_pairs()] self.key = self.choose_from_list(l, prompt='Choose Keypair') def update_config(self): if not self.config.has_section('Credentials'): self.config.add_section('Credentials') self.config.set('Credentials', 'aws_access_key_id', self.ec2.aws_access_key_id) self.config.set('Credentials', 'aws_secret_access_key', self.ec2.aws_secret_access_key) if not self.config.has_section('Pyami'): self.config.add_section('Pyami') sdb_domain = get_domain() if sdb_domain: self.config.set('Pyami', 'server_sdb_domain', sdb_domain) self.config.set('Pyami', 
'server_sdb_name', self.name) def set_config(self, config_path=None): if not config_path: config_path = self.get_filename('Specify Config file') self.config = Config(path=config_path) def get_userdata_string(self): s = StringIO.StringIO() self.config.write(s) return s.getvalue() def enter(self, **params): self.region = params.get('region', self.region) if not self.region: self.set_region() self.ec2 = self.region.connect() self.name = params.get('name', self.name) if not self.name: self.set_name() self.instance_type = params.get('instance_type', self.instance_type) if not self.instance_type: self.set_instance_type() self.zone = params.get('zone', self.zone) if not self.zone: self.set_zone() self.quantity = params.get('quantity', self.quantity) if not self.quantity: self.set_quantity() self.ami = params.get('ami', self.ami) if not self.ami: self.set_ami() self.groups = params.get('groups', self.groups) if not self.groups: self.add_group() self.key = params.get('key', self.key) if not self.key: self.set_key() self.config = params.get('config', self.config) if not self.config: self.set_config() self.update_config() class Order(IObject): def __init__(self): self.items = [] self.reservation = None def add_item(self, **params): item = Item() item.enter(**params) self.items.append(item) def display(self): print 'This Order consists of the following items' print print 'QTY\tNAME\tTYPE\nAMI\t\tGroups\t\t\tKeyPair' for item in self.items: print '%s\t%s\t%s\t%s\t%s\t%s' % (item.quantity, item.name, item.instance_type, item.ami.id, item.groups, item.key.name) def place(self, block=True): if get_domain() == None: print 'SDB Persistence Domain not set' domain_name = self.get_string('Specify SDB Domain') set_domain(domain_name) s = ServerSet() for item in self.items: r = item.ami.run(min_count=1, max_count=item.quantity, key_name=item.key.name, user_data=item.get_userdata_string(), security_groups=item.groups, instance_type=item.instance_type, placement=item.zone.name) if block: states = [i.state for i in r.instances] if states.count('running') != len(states): print states time.sleep(15) states = [i.update() for i in r.instances] for i in r.instances: server = Server() server.name = item.name server.instance_id = i.id server.reservation = r server.save() s.append(server) if len(s) == 1: return s[0] else: return s boto-2.20.1/boto/mashups/server.py000066400000000000000000000333361225267101000170460ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ High-level abstraction of an EC2 server """ import boto import boto.utils from boto.mashups.iobject import IObject from boto.pyami.config import Config, BotoConfigPath from boto.mashups.interactive import interactive_shell from boto.sdb.db.model import Model from boto.sdb.db.property import StringProperty import os import StringIO class ServerSet(list): def __getattr__(self, name): results = [] is_callable = False for server in self: try: val = getattr(server, name) if callable(val): is_callable = True results.append(val) except: results.append(None) if is_callable: self.map_list = results return self.map return results def map(self, *args): results = [] for fn in self.map_list: results.append(fn(*args)) return results class Server(Model): @property def ec2(self): if self._ec2 is None: self._ec2 = boto.connect_ec2() return self._ec2 @classmethod def Inventory(cls): """ Returns a list of Server instances, one for each Server object persisted in the db """ l = ServerSet() rs = cls.find() for server in rs: l.append(server) return l @classmethod def Register(cls, name, instance_id, description=''): s = cls() s.name = name s.instance_id = instance_id s.description = description s.save() return s def __init__(self, id=None, **kw): Model.__init__(self, id, **kw) self._reservation = None self._instance = None self._ssh_client = None self._pkey = None self._config = None self._ec2 = None name = StringProperty(unique=True, verbose_name="Name") instance_id = StringProperty(verbose_name="Instance ID") config_uri = StringProperty() ami_id = StringProperty(verbose_name="AMI ID") zone = StringProperty(verbose_name="Availability Zone") security_group = StringProperty(verbose_name="Security Group", default="default") key_name = StringProperty(verbose_name="Key Name") elastic_ip = StringProperty(verbose_name="Elastic IP") instance_type = StringProperty(verbose_name="Instance Type") description = StringProperty(verbose_name="Description") log = StringProperty() def setReadOnly(self, value): raise AttributeError def getInstance(self): if not self._instance: if self.instance_id: try: rs = self.ec2.get_all_reservations([self.instance_id]) except: return None if len(rs) > 0: self._reservation = rs[0] self._instance = self._reservation.instances[0] return self._instance instance = property(getInstance, setReadOnly, None, 'The Instance for the server') def getAMI(self): if self.instance: return self.instance.image_id ami = property(getAMI, setReadOnly, None, 'The AMI for the server') def getStatus(self): if self.instance: self.instance.update() return self.instance.state status = property(getStatus, setReadOnly, None, 'The status of the server') def getHostname(self): if self.instance: return self.instance.public_dns_name hostname = property(getHostname, setReadOnly, None, 'The public DNS name of the server') def getPrivateHostname(self): if self.instance: return self.instance.private_dns_name private_hostname = property(getPrivateHostname, setReadOnly, None, 'The private DNS name of the server') def getLaunchTime(self): if self.instance: return self.instance.launch_time launch_time = property(getLaunchTime, setReadOnly, None, 'The time the Server was started') def getConsoleOutput(self): if self.instance: return self.instance.get_console_output() console_output = 
property(getConsoleOutput, setReadOnly, None, 'Retrieve the console output for server') def getGroups(self): if self._reservation: return self._reservation.groups else: return None groups = property(getGroups, setReadOnly, None, 'The Security Groups controlling access to this server') def getConfig(self): if not self._config: remote_file = BotoConfigPath local_file = '%s.ini' % self.instance.id self.get_file(remote_file, local_file) self._config = Config(local_file) return self._config def setConfig(self, config): local_file = '%s.ini' % self.instance.id fp = open(local_file) config.write(fp) fp.close() self.put_file(local_file, BotoConfigPath) self._config = config config = property(getConfig, setConfig, None, 'The instance data for this server') def set_config(self, config): """ Set SDB based config """ self._config = config self._config.dump_to_sdb("botoConfigs", self.id) def load_config(self): self._config = Config(do_load=False) self._config.load_from_sdb("botoConfigs", self.id) def stop(self): if self.instance: self.instance.stop() def start(self): self.stop() ec2 = boto.connect_ec2() ami = ec2.get_all_images(image_ids = [str(self.ami_id)])[0] groups = ec2.get_all_security_groups(groupnames=[str(self.security_group)]) if not self._config: self.load_config() if not self._config.has_section("Credentials"): self._config.add_section("Credentials") self._config.set("Credentials", "aws_access_key_id", ec2.aws_access_key_id) self._config.set("Credentials", "aws_secret_access_key", ec2.aws_secret_access_key) if not self._config.has_section("Pyami"): self._config.add_section("Pyami") if self._manager.domain: self._config.set('Pyami', 'server_sdb_domain', self._manager.domain.name) self._config.set("Pyami", 'server_sdb_name', self.name) cfg = StringIO.StringIO() self._config.write(cfg) cfg = cfg.getvalue() r = ami.run(min_count=1, max_count=1, key_name=self.key_name, security_groups = groups, instance_type = self.instance_type, placement = self.zone, user_data = cfg) i = r.instances[0] self.instance_id = i.id self.put() if self.elastic_ip: ec2.associate_address(self.instance_id, self.elastic_ip) def reboot(self): if self.instance: self.instance.reboot() def get_ssh_client(self, key_file=None, host_key_file='~/.ssh/known_hosts', uname='root'): import paramiko if not self.instance: print 'No instance yet!' return if not self._ssh_client: if not key_file: iobject = IObject() key_file = iobject.get_filename('Path to OpenSSH Key file') self._pkey = paramiko.RSAKey.from_private_key_file(key_file) self._ssh_client = paramiko.SSHClient() self._ssh_client.load_system_host_keys() self._ssh_client.load_host_keys(os.path.expanduser(host_key_file)) self._ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) self._ssh_client.connect(self.instance.public_dns_name, username=uname, pkey=self._pkey) return self._ssh_client def get_file(self, remotepath, localpath): ssh_client = self.get_ssh_client() sftp_client = ssh_client.open_sftp() sftp_client.get(remotepath, localpath) def put_file(self, localpath, remotepath): ssh_client = self.get_ssh_client() sftp_client = ssh_client.open_sftp() sftp_client.put(localpath, remotepath) def listdir(self, remotepath): ssh_client = self.get_ssh_client() sftp_client = ssh_client.open_sftp() return sftp_client.listdir(remotepath) def shell(self, key_file=None): ssh_client = self.get_ssh_client(key_file) channel = ssh_client.invoke_shell() interactive_shell(channel) def bundle_image(self, prefix, key_file, cert_file, size): print 'bundling image...' 
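# The steps below parallel boto.manage.server.Bundler: copy the X.509 cert
# and private key to /mnt over SFTP, remove the boto config file so
# credentials are not baked into the image, then run ec2-bundle-vol on the
# instance itself.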
print '\tcopying cert and pk over to /mnt directory on server' ssh_client = self.get_ssh_client() sftp_client = ssh_client.open_sftp() path, name = os.path.split(key_file) remote_key_file = '/mnt/%s' % name self.put_file(key_file, remote_key_file) path, name = os.path.split(cert_file) remote_cert_file = '/mnt/%s' % name self.put_file(cert_file, remote_cert_file) print '\tdeleting %s' % BotoConfigPath # delete the metadata.ini file if it exists try: sftp_client.remove(BotoConfigPath) except: pass command = 'sudo ec2-bundle-vol ' command += '-c %s -k %s ' % (remote_cert_file, remote_key_file) command += '-u %s ' % self._reservation.owner_id command += '-p %s ' % prefix command += '-s %d ' % size command += '-d /mnt ' if self.instance.instance_type == 'm1.small' or self.instance_type == 'c1.medium': command += '-r i386' else: command += '-r x86_64' print '\t%s' % command t = ssh_client.exec_command(command) response = t[1].read() print '\t%s' % response print '\t%s' % t[2].read() print '...complete!' def upload_bundle(self, bucket, prefix): print 'uploading bundle...' command = 'ec2-upload-bundle ' command += '-m /mnt/%s.manifest.xml ' % prefix command += '-b %s ' % bucket command += '-a %s ' % self.ec2.aws_access_key_id command += '-s %s ' % self.ec2.aws_secret_access_key print '\t%s' % command ssh_client = self.get_ssh_client() t = ssh_client.exec_command(command) response = t[1].read() print '\t%s' % response print '\t%s' % t[2].read() print '...complete!' def create_image(self, bucket=None, prefix=None, key_file=None, cert_file=None, size=None): iobject = IObject() if not bucket: bucket = iobject.get_string('Name of S3 bucket') if not prefix: prefix = iobject.get_string('Prefix for AMI file') if not key_file: key_file = iobject.get_filename('Path to RSA private key file') if not cert_file: cert_file = iobject.get_filename('Path to RSA public cert file') if not size: size = iobject.get_int('Size (in MB) of bundled image') self.bundle_image(prefix, key_file, cert_file, size) self.upload_bundle(bucket, prefix) print 'registering image...' self.image_id = self.ec2.register_image('%s/%s.manifest.xml' % (bucket, prefix)) return self.image_id def attach_volume(self, volume, device="/dev/sdp"): """ Attach an EBS volume to this server :param volume: EBS Volume to attach :type volume: boto.ec2.volume.Volume :param device: Device to attach to (default to /dev/sdp) :type device: string """ if hasattr(volume, "id"): volume_id = volume.id else: volume_id = volume return self.ec2.attach_volume(volume_id=volume_id, instance_id=self.instance_id, device=device) def detach_volume(self, volume): """ Detach an EBS volume from this server :param volume: EBS Volume to detach :type volume: boto.ec2.volume.Volume """ if hasattr(volume, "id"): volume_id = volume.id else: volume_id = volume return self.ec2.detach_volume(volume_id=volume_id, instance_id=self.instance_id) def install_package(self, package_name): print 'installing %s...' % package_name command = 'yum -y install %s' % package_name print '\t%s' % command ssh_client = self.get_ssh_client() t = ssh_client.exec_command(command) response = t[1].read() print '\t%s' % response print '\t%s' % t[2].read() print '...complete!' 
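# A minimal usage sketch for attach_volume/detach_volume above (the volume id
# and device are hypothetical; assumes at least one Server has been saved):
#
#     from boto.mashups.server import Server
#
#     server = Server.Inventory()[0]
#     server.attach_volume('vol-12345678', device='/dev/sdh')
#     # ... use the volume, then ...
#     server.detach_volume('vol-12345678')
#
# Both helpers accept either a boto.ec2.volume.Volume instance or a plain
# volume id string; an object is reduced to its 'id' attribute when present.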
boto-2.20.1/boto/mturk/000077500000000000000000000000001225267101000146405ustar00rootroot00000000000000boto-2.20.1/boto/mturk/__init__.py000066400000000000000000000021241225267101000167500ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/mturk/connection.py000066400000000000000000001205551225267101000173610ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
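# A quick usage sketch of this module (credentials elided; whether the
# sandbox endpoint is used is controlled by the [MTurk] 'sandbox' boto
# config option handled in MTurkConnection.__init__ below):
#
#     from boto.mturk.connection import MTurkConnection
#     from boto.mturk.price import Price
#     from boto.mturk.question import ExternalQuestion
#
#     mtc = MTurkConnection(aws_access_key_id='...',
#                           aws_secret_access_key='...')
#     print mtc.get_account_balance()
#     q = ExternalQuestion(external_url='https://example.com/hit',
#                          frame_height=600)
#     mtc.create_hit(question=q, title='Example HIT',
#                    description='Answer one question',
#                    keywords=['example'], reward=Price(amount=0.05),
#                    max_assignments=3)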
import xml.sax import datetime import itertools from boto import handler from boto import config from boto.mturk.price import Price import boto.mturk.notification from boto.connection import AWSQueryConnection from boto.exception import EC2ResponseError from boto.resultset import ResultSet from boto.mturk.question import QuestionForm, ExternalQuestion, HTMLQuestion class MTurkRequestError(EC2ResponseError): "Error for MTurk Requests" # todo: subclass from an abstract parent of EC2ResponseError class MTurkConnection(AWSQueryConnection): APIVersion = '2012-03-25' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=None, debug=0, https_connection_factory=None): if not host: if config.has_option('MTurk', 'sandbox') and config.get('MTurk', 'sandbox') == 'True': host = 'mechanicalturk.sandbox.amazonaws.com' else: host = 'mechanicalturk.amazonaws.com' self.debug = debug AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, host, debug, https_connection_factory) def _required_auth_capability(self): return ['mturk'] def get_account_balance(self): """ """ params = {} return self._process_request('GetAccountBalance', params, [('AvailableBalance', Price), ('OnHoldBalance', Price)]) def register_hit_type(self, title, description, reward, duration, keywords=None, approval_delay=None, qual_req=None): """ Register a new HIT Type title, description are strings reward is a Price object duration can be a timedelta, or an object castable to an int """ params = dict( Title=title, Description=description, AssignmentDurationInSeconds=self.duration_as_seconds(duration), ) params.update(MTurkConnection.get_price_as_price(reward).get_as_params('Reward')) if keywords: params['Keywords'] = self.get_keywords_as_string(keywords) if approval_delay is not None: d = self.duration_as_seconds(approval_delay) params['AutoApprovalDelayInSeconds'] = d if qual_req is not None: params.update(qual_req.get_as_params()) return self._process_request('RegisterHITType', params, [('HITTypeId', HITTypeId)]) def set_email_notification(self, hit_type, email, event_types=None): """ Performs a SetHITTypeNotification operation to set email notification for a specified HIT type """ return self._set_notification(hit_type, 'Email', email, 'SetHITTypeNotification', event_types) def set_rest_notification(self, hit_type, url, event_types=None): """ Performs a SetHITTypeNotification operation to set REST notification for a specified HIT type """ return self._set_notification(hit_type, 'REST', url, 'SetHITTypeNotification', event_types) def set_sqs_notification(self, hit_type, queue_url, event_types=None): """ Performs a SetHITTypeNotification operation so set SQS notification for a specified HIT type. 
Queue URL is of form: https://queue.amazonaws.com// and can be found when looking at the details for a Queue in the AWS Console """ return self._set_notification(hit_type, "SQS", queue_url, 'SetHITTypeNotification', event_types) def send_test_event_notification(self, hit_type, url, event_types=None, test_event_type='Ping'): """ Performs a SendTestEventNotification operation with REST notification for a specified HIT type """ return self._set_notification(hit_type, 'REST', url, 'SendTestEventNotification', event_types, test_event_type) def _set_notification(self, hit_type, transport, destination, request_type, event_types=None, test_event_type=None): """ Common operation to set notification or send a test event notification for a specified HIT type """ params = {'HITTypeId': hit_type} # from the Developer Guide: # The 'Active' parameter is optional. If omitted, the active status of # the HIT type's notification specification is unchanged. All HIT types # begin with their notification specifications in the "inactive" status. notification_params = {'Destination': destination, 'Transport': transport, 'Version': boto.mturk.notification.NotificationMessage.NOTIFICATION_VERSION, 'Active': True, } # add specific event types if required if event_types: self.build_list_params(notification_params, event_types, 'EventType') # Set up dict of 'Notification.1.Transport' etc. values notification_rest_params = {} num = 1 for key in notification_params: notification_rest_params['Notification.%d.%s' % (num, key)] = notification_params[key] # Update main params dict params.update(notification_rest_params) # If test notification, specify the notification type to be tested if test_event_type: params.update({'TestEventType': test_event_type}) # Execute operation return self._process_request(request_type, params) def create_hit(self, hit_type=None, question=None, hit_layout=None, lifetime=datetime.timedelta(days=7), max_assignments=1, title=None, description=None, keywords=None, reward=None, duration=datetime.timedelta(days=7), approval_delay=None, annotation=None, questions=None, qualifications=None, layout_params=None, response_groups=None): """ Creates a new HIT. 
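# Usage sketch (illustrative, continuing the connection sketch above):
# attaching a REST notification endpoint to the HIT type and sending a test
# Ping event. _set_notification() above flattens the destination/transport/
# version fields into Notification.1.* parameters. The URL is a placeholder
# and the event type names are assumed from the MTurk Developer Guide.
mtc.set_rest_notification(hit_type_id, 'https://example.com/mturk/notify',
                          event_types=['AssignmentSubmitted', 'HITExpired'])
mtc.send_test_event_notification(hit_type_id,
                                 'https://example.com/mturk/notify')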
Returns a ResultSet See: http://docs.amazonwebservices.com/AWSMechTurk/2012-03-25/AWSMturkAPI/ApiReference_CreateHITOperation.html """ # Handle basic required arguments and set up params dict params = {'LifetimeInSeconds': self.duration_as_seconds(lifetime), 'MaxAssignments': max_assignments, } # handle single or multiple questions or layouts neither = question is None and questions is None if hit_layout is None: both = question is not None and questions is not None if neither or both: raise ValueError("Must specify question (single Question instance) or questions (list or QuestionForm instance), but not both") if question: questions = [question] question_param = QuestionForm(questions) if isinstance(question, QuestionForm): question_param = question elif isinstance(question, ExternalQuestion): question_param = question elif isinstance(question, HTMLQuestion): question_param = question params['Question'] = question_param.get_as_xml() else: if not neither: raise ValueError("Must not specify question (single Question instance) or questions (list or QuestionForm instance) when specifying hit_layout") params['HITLayoutId'] = hit_layout if layout_params: params.update(layout_params.get_as_params()) # if hit type specified then add it # else add the additional required parameters if hit_type: params['HITTypeId'] = hit_type else: # Handle keywords final_keywords = MTurkConnection.get_keywords_as_string(keywords) # Handle price argument final_price = MTurkConnection.get_price_as_price(reward) final_duration = self.duration_as_seconds(duration) additional_params = dict( Title=title, Description=description, Keywords=final_keywords, AssignmentDurationInSeconds=final_duration, ) additional_params.update(final_price.get_as_params('Reward')) if approval_delay is not None: d = self.duration_as_seconds(approval_delay) additional_params['AutoApprovalDelayInSeconds'] = d # add these params to the others params.update(additional_params) # add the annotation if specified if annotation is not None: params['RequesterAnnotation'] = annotation # Add the Qualifications if specified if qualifications is not None: params.update(qualifications.get_as_params()) # Handle optional response groups argument if response_groups: self.build_list_params(params, response_groups, 'ResponseGroup') # Submit return self._process_request('CreateHIT', params, [('HIT', HIT)]) def change_hit_type_of_hit(self, hit_id, hit_type): """ Change the HIT type of an existing HIT. Note that the reward associated with the new HIT type must match the reward of the current HIT type in order for the operation to be valid. :type hit_id: str :type hit_type: str """ params = {'HITId': hit_id, 'HITTypeId': hit_type} return self._process_request('ChangeHITTypeOfHIT', params) def get_reviewable_hits(self, hit_type=None, status='Reviewable', sort_by='Expiration', sort_direction='Ascending', page_size=10, page_number=1): """ Retrieve the HITs that have a status of Reviewable, or HITs that have a status of Reviewing, and that belong to the Requester calling the operation. """ params = {'Status': status, 'SortProperty': sort_by, 'SortDirection': sort_direction, 'PageSize': page_size, 'PageNumber': page_number} # Handle optional hit_type argument if hit_type is not None: params.update({'HITTypeId': hit_type}) return self._process_request('GetReviewableHITs', params, [('HIT', HIT)]) @staticmethod def _get_pages(page_size, total_records): """ Given a page size (records per page) and a total number of records, return the page numbers to be retrieved. 
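# Usage sketch (illustrative, continuing the sketches above): creating a HIT
# against the registered HIT type with an ExternalQuestion. Per create_hit()
# above, exactly one of question / questions (or hit_layout) may be supplied.
# The URL is a placeholder.
from boto.mturk.question import ExternalQuestion

eq = ExternalQuestion(external_url='https://example.com/task',
                      frame_height=600)
create_rs = mtc.create_hit(hit_type=hit_type_id,
                           question=eq,
                           max_assignments=3,
                           lifetime=datetime.timedelta(days=3),
                           annotation='batch-1')
hit = create_rs[0]   # HIT object; attributes follow the Developer Guide names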
""" pages = total_records / page_size + bool(total_records % page_size) return range(1, pages + 1) def get_all_hits(self): """ Return all of a Requester's HITs Despite what search_hits says, it does not return all hits, but instead returns a page of hits. This method will pull the hits from the server 100 at a time, but will yield the results iteratively, so subsequent requests are made on demand. """ page_size = 100 search_rs = self.search_hits(page_size=page_size) total_records = int(search_rs.TotalNumResults) get_page_hits = lambda page: self.search_hits(page_size=page_size, page_number=page) page_nums = self._get_pages(page_size, total_records) hit_sets = itertools.imap(get_page_hits, page_nums) return itertools.chain.from_iterable(hit_sets) def search_hits(self, sort_by='CreationTime', sort_direction='Ascending', page_size=10, page_number=1, response_groups=None): """ Return a page of a Requester's HITs, on behalf of the Requester. The operation returns HITs of any status, except for HITs that have been disposed with the DisposeHIT operation. Note: The SearchHITs operation does not accept any search parameters that filter the results. """ params = {'SortProperty': sort_by, 'SortDirection': sort_direction, 'PageSize': page_size, 'PageNumber': page_number} # Handle optional response groups argument if response_groups: self.build_list_params(params, response_groups, 'ResponseGroup') return self._process_request('SearchHITs', params, [('HIT', HIT)]) def get_assignment(self, assignment_id, response_groups=None): """ Retrieves an assignment using the assignment's ID. Requesters can only retrieve their own assignments, and only assignments whose related HIT has not been disposed. The returned ResultSet will have the following attributes: Request This element is present only if the Request ResponseGroup is specified. Assignment The assignment. The response includes one Assignment object. HIT The HIT associated with this assignment. The response includes one HIT object. """ params = {'AssignmentId': assignment_id} # Handle optional response groups argument if response_groups: self.build_list_params(params, response_groups, 'ResponseGroup') return self._process_request('GetAssignment', params, [('Assignment', Assignment), ('HIT', HIT)]) def get_assignments(self, hit_id, status=None, sort_by='SubmitTime', sort_direction='Ascending', page_size=10, page_number=1, response_groups=None): """ Retrieves completed assignments for a HIT. Use this operation to retrieve the results for a HIT. The returned ResultSet will have the following attributes: NumResults The number of assignments on the page in the filtered results list, equivalent to the number of assignments being returned by this call. A non-negative integer PageNumber The number of the page in the filtered results list being returned. A positive integer TotalNumResults The total number of HITs in the filtered results list based on this call. 
A non-negative integer The ResultSet will contain zero or more Assignment objects """ params = {'HITId': hit_id, 'SortProperty': sort_by, 'SortDirection': sort_direction, 'PageSize': page_size, 'PageNumber': page_number} if status is not None: params['AssignmentStatus'] = status # Handle optional response groups argument if response_groups: self.build_list_params(params, response_groups, 'ResponseGroup') return self._process_request('GetAssignmentsForHIT', params, [('Assignment', Assignment)]) def approve_assignment(self, assignment_id, feedback=None): """ """ params = {'AssignmentId': assignment_id} if feedback: params['RequesterFeedback'] = feedback return self._process_request('ApproveAssignment', params) def reject_assignment(self, assignment_id, feedback=None): """ """ params = {'AssignmentId': assignment_id} if feedback: params['RequesterFeedback'] = feedback return self._process_request('RejectAssignment', params) def approve_rejected_assignment(self, assignment_id, feedback=None): """ """ params = {'AssignmentId': assignment_id} if feedback: params['RequesterFeedback'] = feedback return self._process_request('ApproveRejectedAssignment', params) def get_hit(self, hit_id, response_groups=None): """ """ params = {'HITId': hit_id} # Handle optional response groups argument if response_groups: self.build_list_params(params, response_groups, 'ResponseGroup') return self._process_request('GetHIT', params, [('HIT', HIT)]) def set_reviewing(self, hit_id, revert=None): """ Update a HIT with a status of Reviewable to have a status of Reviewing, or reverts a Reviewing HIT back to the Reviewable status. Only HITs with a status of Reviewable can be updated with a status of Reviewing. Similarly, only Reviewing HITs can be reverted back to a status of Reviewable. """ params = {'HITId': hit_id} if revert: params['Revert'] = revert return self._process_request('SetHITAsReviewing', params) def disable_hit(self, hit_id, response_groups=None): """ Remove a HIT from the Mechanical Turk marketplace, approves all submitted assignments that have not already been approved or rejected, and disposes of the HIT and all assignment data. Assignments for the HIT that have already been submitted, but not yet approved or rejected, will be automatically approved. Assignments in progress at the time of the call to DisableHIT will be approved once the assignments are submitted. You will be charged for approval of these assignments. DisableHIT completely disposes of the HIT and all submitted assignment data. Assignment results data cannot be retrieved for a HIT that has been disposed. It is not possible to re-enable a HIT once it has been disabled. To make the work from a disabled HIT available again, create a new HIT. """ params = {'HITId': hit_id} # Handle optional response groups argument if response_groups: self.build_list_params(params, response_groups, 'ResponseGroup') return self._process_request('DisableHIT', params) def dispose_hit(self, hit_id): """ Dispose of a HIT that is no longer needed. Only HITs in the "reviewable" state, with all submitted assignments approved or rejected, can be disposed. A Requester can call GetReviewableHITs to determine which HITs are reviewable, then call GetAssignmentsForHIT to retrieve the assignments. Disposing of a HIT removes the HIT from the results of a call to GetReviewableHITs. """ params = {'HITId': hit_id} return self._process_request('DisposeHIT', params) def expire_hit(self, hit_id): """ Expire a HIT that is no longer needed. 
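# Usage sketch (illustrative, continuing the sketches above): the usual review
# loop -- pull submitted assignments for a HIT with get_assignments() above,
# then approve or reject each one. looks_good() is a hypothetical scoring
# helper, not part of boto.
for asg in mtc.get_assignments(hit.HITId, status='Submitted'):
    if looks_good(asg):
        mtc.approve_assignment(asg.AssignmentId, feedback='Thanks!')
    else:
        mtc.reject_assignment(asg.AssignmentId, feedback='Incomplete work.')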
The effect is identical to the HIT expiring on its own. The HIT no longer appears on the Mechanical Turk web site, and no new Workers are allowed to accept the HIT. Workers who have accepted the HIT prior to expiration are allowed to complete it or return it, or allow the assignment duration to elapse (abandon the HIT). Once all remaining assignments have been submitted, the expired HIT becomes"reviewable", and will be returned by a call to GetReviewableHITs. """ params = {'HITId': hit_id} return self._process_request('ForceExpireHIT', params) def extend_hit(self, hit_id, assignments_increment=None, expiration_increment=None): """ Increase the maximum number of assignments, or extend the expiration date, of an existing HIT. NOTE: If a HIT has a status of Reviewable and the HIT is extended to make it Available, the HIT will not be returned by GetReviewableHITs, and its submitted assignments will not be returned by GetAssignmentsForHIT, until the HIT is Reviewable again. Assignment auto-approval will still happen on its original schedule, even if the HIT has been extended. Be sure to retrieve and approve (or reject) submitted assignments before extending the HIT, if so desired. """ # must provide assignment *or* expiration increment if (assignments_increment is None and expiration_increment is None) or \ (assignments_increment is not None and expiration_increment is not None): raise ValueError("Must specify either assignments_increment or expiration_increment, but not both") params = {'HITId': hit_id} if assignments_increment: params['MaxAssignmentsIncrement'] = assignments_increment if expiration_increment: params['ExpirationIncrementInSeconds'] = expiration_increment return self._process_request('ExtendHIT', params) def get_help(self, about, help_type='Operation'): """ Return information about the Mechanical Turk Service operations and response group NOTE - this is basically useless as it just returns the URL of the documentation help_type: either 'Operation' or 'ResponseGroup' """ params = {'About': about, 'HelpType': help_type} return self._process_request('Help', params) def grant_bonus(self, worker_id, assignment_id, bonus_price, reason): """ Issues a payment of money from your account to a Worker. To be eligible for a bonus, the Worker must have submitted results for one of your HITs, and have had those results approved or rejected. This payment happens separately from the reward you pay to the Worker when you approve the Worker's assignment. The Bonus must be passed in as an instance of the Price object. """ params = bonus_price.get_as_params('BonusAmount', 1) params['WorkerId'] = worker_id params['AssignmentId'] = assignment_id params['Reason'] = reason return self._process_request('GrantBonus', params) def block_worker(self, worker_id, reason): """ Block a worker from working on my tasks. """ params = {'WorkerId': worker_id, 'Reason': reason} return self._process_request('BlockWorker', params) def unblock_worker(self, worker_id, reason): """ Unblock a worker from working on my tasks. """ params = {'WorkerId': worker_id, 'Reason': reason} return self._process_request('UnblockWorker', params) def notify_workers(self, worker_ids, subject, message_text): """ Send a text message to workers. 
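# Usage sketch (illustrative, continuing the sketches above): extend_hit()
# takes exactly one of assignments_increment or expiration_increment, and
# grant_bonus() requires the amount as a Price instance. The worker and
# assignment IDs are placeholders.
mtc.extend_hit(hit.HITId, assignments_increment=5)
mtc.grant_bonus(worker_id='<worker-id>', assignment_id='<assignment-id>',
                bonus_price=Price(0.25), reason='Great accuracy on batch-1')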
""" params = {'Subject': subject, 'MessageText': message_text} self.build_list_params(params, worker_ids, 'WorkerId') return self._process_request('NotifyWorkers', params) def create_qualification_type(self, name, description, status, keywords=None, retry_delay=None, test=None, answer_key=None, answer_key_xml=None, test_duration=None, auto_granted=False, auto_granted_value=1): """ Create a new Qualification Type. name: This will be visible to workers and must be unique for a given requester. description: description shown to workers. Max 2000 characters. status: 'Active' or 'Inactive' keywords: list of keyword strings or comma separated string. Max length of 1000 characters when concatenated with commas. retry_delay: number of seconds after requesting a qualification the worker must wait before they can ask again. If not specified, workers can only request this qualification once. test: a QuestionForm answer_key: an XML string of your answer key, for automatically scored qualification tests. (Consider implementing an AnswerKey class for this to support.) test_duration: the number of seconds a worker has to complete the test. auto_granted: if True, requests for the Qualification are granted immediately. Can't coexist with a test. auto_granted_value: auto_granted qualifications are given this value. """ params = {'Name': name, 'Description': description, 'QualificationTypeStatus': status, } if retry_delay is not None: params['RetryDelayInSeconds'] = retry_delay if test is not None: assert(isinstance(test, QuestionForm)) assert(test_duration is not None) params['Test'] = test.get_as_xml() if test_duration is not None: params['TestDurationInSeconds'] = test_duration if answer_key is not None: if isinstance(answer_key, basestring): params['AnswerKey'] = answer_key # xml else: raise TypeError # Eventually someone will write an AnswerKey class. 
if auto_granted: assert(test is None) params['AutoGranted'] = True params['AutoGrantedValue'] = auto_granted_value if keywords: params['Keywords'] = self.get_keywords_as_string(keywords) return self._process_request('CreateQualificationType', params, [('QualificationType', QualificationType)]) def get_qualification_type(self, qualification_type_id): params = {'QualificationTypeId': qualification_type_id } return self._process_request('GetQualificationType', params, [('QualificationType', QualificationType)]) def get_all_qualifications_for_qual_type(self, qualification_type_id): page_size = 100 search_qual = self.get_qualifications_for_qualification_type(qualification_type_id) total_records = int(search_qual.TotalNumResults) get_page_quals = lambda page: self.get_qualifications_for_qualification_type(qualification_type_id = qualification_type_id, page_size=page_size, page_number = page) page_nums = self._get_pages(page_size, total_records) qual_sets = itertools.imap(get_page_quals, page_nums) return itertools.chain.from_iterable(qual_sets) def get_qualifications_for_qualification_type(self, qualification_type_id, page_size=100, page_number = 1): params = {'QualificationTypeId': qualification_type_id, 'PageSize': page_size, 'PageNumber': page_number} return self._process_request('GetQualificationsForQualificationType', params, [('Qualification', Qualification)]) def update_qualification_type(self, qualification_type_id, description=None, status=None, retry_delay=None, test=None, answer_key=None, test_duration=None, auto_granted=None, auto_granted_value=None): params = {'QualificationTypeId': qualification_type_id} if description is not None: params['Description'] = description if status is not None: params['QualificationTypeStatus'] = status if retry_delay is not None: params['RetryDelayInSeconds'] = retry_delay if test is not None: assert(isinstance(test, QuestionForm)) params['Test'] = test.get_as_xml() if test_duration is not None: params['TestDurationInSeconds'] = test_duration if answer_key is not None: if isinstance(answer_key, basestring): params['AnswerKey'] = answer_key # xml else: raise TypeError # Eventually someone will write an AnswerKey class. 
if auto_granted is not None: params['AutoGranted'] = auto_granted if auto_granted_value is not None: params['AutoGrantedValue'] = auto_granted_value return self._process_request('UpdateQualificationType', params, [('QualificationType', QualificationType)]) def dispose_qualification_type(self, qualification_type_id): """TODO: Document.""" params = {'QualificationTypeId': qualification_type_id} return self._process_request('DisposeQualificationType', params) def search_qualification_types(self, query=None, sort_by='Name', sort_direction='Ascending', page_size=10, page_number=1, must_be_requestable=True, must_be_owned_by_caller=True): """TODO: Document.""" params = {'Query': query, 'SortProperty': sort_by, 'SortDirection': sort_direction, 'PageSize': page_size, 'PageNumber': page_number, 'MustBeRequestable': must_be_requestable, 'MustBeOwnedByCaller': must_be_owned_by_caller} return self._process_request('SearchQualificationTypes', params, [('QualificationType', QualificationType)]) def get_qualification_requests(self, qualification_type_id, sort_by='Expiration', sort_direction='Ascending', page_size=10, page_number=1): """TODO: Document.""" params = {'QualificationTypeId': qualification_type_id, 'SortProperty': sort_by, 'SortDirection': sort_direction, 'PageSize': page_size, 'PageNumber': page_number} return self._process_request('GetQualificationRequests', params, [('QualificationRequest', QualificationRequest)]) def grant_qualification(self, qualification_request_id, integer_value=1): """TODO: Document.""" params = {'QualificationRequestId': qualification_request_id, 'IntegerValue': integer_value} return self._process_request('GrantQualification', params) def revoke_qualification(self, subject_id, qualification_type_id, reason=None): """TODO: Document.""" params = {'SubjectId': subject_id, 'QualificationTypeId': qualification_type_id, 'Reason': reason} return self._process_request('RevokeQualification', params) def assign_qualification(self, qualification_type_id, worker_id, value=1, send_notification=True): params = {'QualificationTypeId': qualification_type_id, 'WorkerId' : worker_id, 'IntegerValue' : value, 'SendNotification' : send_notification} return self._process_request('AssignQualification', params) def get_qualification_score(self, qualification_type_id, worker_id): """TODO: Document.""" params = {'QualificationTypeId' : qualification_type_id, 'SubjectId' : worker_id} return self._process_request('GetQualificationScore', params, [('Qualification', Qualification)]) def update_qualification_score(self, qualification_type_id, worker_id, value): """TODO: Document.""" params = {'QualificationTypeId' : qualification_type_id, 'SubjectId' : worker_id, 'IntegerValue' : value} return self._process_request('UpdateQualificationScore', params) def _process_request(self, request_type, params, marker_elems=None): """ Helper to process the xml response from AWS """ params['Operation'] = request_type response = self.make_request(None, params, verb='POST') return self._process_response(response, marker_elems) def _process_response(self, response, marker_elems=None): """ Helper to process the xml response from AWS """ body = response.read() if self.debug == 2: print body if '' not in body: rs = ResultSet(marker_elems) h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs else: raise MTurkRequestError(response.status, response.reason, body) @staticmethod def get_keywords_as_string(keywords): """ Returns a comma+space-separated string of keywords from either a list or a string """ 
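# Usage sketch (illustrative, continuing the sketches above): creating an
# auto-granted qualification type and assigning it to a worker. Per
# create_qualification_type() above, auto_granted cannot be combined with a
# test, and passing a test requires test_duration.
qt_rs = mtc.create_qualification_type(
    name='batch-1 trusted workers',
    description='Granted to workers vetted on earlier batches.',
    status='Active',
    auto_granted=True,
    auto_granted_value=100)
qual_type_id = qt_rs[0].QualificationTypeId
mtc.assign_qualification(qual_type_id, worker_id='<worker-id>', value=100)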
if isinstance(keywords, list): keywords = ', '.join(keywords) if isinstance(keywords, str): final_keywords = keywords elif isinstance(keywords, unicode): final_keywords = keywords.encode('utf-8') elif keywords is None: final_keywords = "" else: raise TypeError("keywords argument must be a string or a list of strings; got a %s" % type(keywords)) return final_keywords @staticmethod def get_price_as_price(reward): """ Returns a Price data structure from either a float or a Price """ if isinstance(reward, Price): final_price = reward else: final_price = Price(reward) return final_price @staticmethod def duration_as_seconds(duration): if isinstance(duration, datetime.timedelta): duration = duration.days * 86400 + duration.seconds try: duration = int(duration) except TypeError: raise TypeError("Duration must be a timedelta or int-castable, got %s" % type(duration)) return duration class BaseAutoResultElement: """ Base class to automatically add attributes when parsing XML """ def __init__(self, connection): pass def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): setattr(self, name, value) class HIT(BaseAutoResultElement): """ Class to extract a HIT structure from a response (used in ResultSet) Will have attributes named as per the Developer Guide, e.g. HITId, HITTypeId, CreationTime """ # property helper to determine if HIT has expired def _has_expired(self): """ Has this HIT expired yet? """ expired = False if hasattr(self, 'Expiration'): now = datetime.datetime.utcnow() expiration = datetime.datetime.strptime(self.Expiration, '%Y-%m-%dT%H:%M:%SZ') expired = (now >= expiration) else: raise ValueError("ERROR: Request for expired property, but no Expiration in HIT!") return expired # are we there yet? expired = property(_has_expired) class HITTypeId(BaseAutoResultElement): """ Class to extract an HITTypeId structure from a response """ pass class Qualification(BaseAutoResultElement): """ Class to extract an Qualification structure from a response (used in ResultSet) Will have attributes named as per the Developer Guide such as QualificationTypeId, IntegerValue. Does not seem to contain GrantTime. """ pass class QualificationType(BaseAutoResultElement): """ Class to extract an QualificationType structure from a response (used in ResultSet) Will have attributes named as per the Developer Guide, e.g. QualificationTypeId, CreationTime, Name, etc """ pass class QualificationRequest(BaseAutoResultElement): """ Class to extract an QualificationRequest structure from a response (used in ResultSet) Will have attributes named as per the Developer Guide, e.g. QualificationRequestId, QualificationTypeId, SubjectId, etc """ def __init__(self, connection): BaseAutoResultElement.__init__(self, connection) self.answers = [] def endElement(self, name, value, connection): # the answer consists of embedded XML, so it needs to be parsed independantly if name == 'Answer': answer_rs = ResultSet([('Answer', QuestionFormAnswer)]) h = handler.XmlHandler(answer_rs, connection) value = connection.get_utf8_value(value) xml.sax.parseString(value, h) self.answers.append(answer_rs) else: BaseAutoResultElement.endElement(self, name, value, connection) class Assignment(BaseAutoResultElement): """ Class to extract an Assignment structure from a response (used in ResultSet) Will have attributes named as per the Developer Guide, e.g. 
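# Behavior sketch (illustrative): duration_as_seconds() above normalizes
# timedeltas and int-castable values, and the HIT.expired property parses the
# Expiration timestamp and compares it with the current UTC time (raising
# ValueError when the HIT carries no Expiration attribute).
assert MTurkConnection.duration_as_seconds(datetime.timedelta(days=1)) == 86400
assert MTurkConnection.duration_as_seconds('3600') == 3600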
AssignmentId, WorkerId, HITId, Answer, etc
    """

    def __init__(self, connection):
        BaseAutoResultElement.__init__(self, connection)
        self.answers = []

    def endElement(self, name, value, connection):
        # the answer consists of embedded XML, so it needs to be parsed independently
        if name == 'Answer':
            answer_rs = ResultSet([('Answer', QuestionFormAnswer)])
            h = handler.XmlHandler(answer_rs, connection)
            value = connection.get_utf8_value(value)
            xml.sax.parseString(value, h)
            self.answers.append(answer_rs)
        else:
            BaseAutoResultElement.endElement(self, name, value, connection)


class QuestionFormAnswer(BaseAutoResultElement):
    """
    Class to extract Answers from inside the embedded XML
    QuestionFormAnswers element inside the Answer element which is part of
    the Assignment and QualificationRequest structures

    A QuestionFormAnswers element contains an Answer element for each
    question in the HIT or Qualification test for which the Worker provided
    an answer. Each Answer contains a QuestionIdentifier element whose value
    corresponds to the QuestionIdentifier of a Question in the QuestionForm.
    See the QuestionForm data structure for more information about questions
    and answer specifications.

    If the question expects a free-text answer, the Answer element contains a
    FreeText element. This element contains the Worker's answer.

    *NOTE* - currently really only supports free-text and selection answers
    """

    def __init__(self, connection):
        BaseAutoResultElement.__init__(self, connection)
        self.fields = []
        self.qid = None

    def endElement(self, name, value, connection):
        if name == 'QuestionIdentifier':
            self.qid = value
        elif name in ['FreeText', 'SelectionIdentifier', 'OtherSelectionText'] and self.qid:
            self.fields.append(value)
boto-2.20.1/boto/mturk/layoutparam.py000066400000000000000000000037721225267101000175610ustar00rootroot00000000000000# Copyright (c) 2008 Chris Moyer http://coredumped.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
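# Usage sketch (illustrative; belongs with the Assignment/QuestionFormAnswer
# classes just above, and continues the connection sketches earlier): each
# entry in Assignment.answers is a ResultSet of QuestionFormAnswer objects
# parsed from the embedded answer XML, and only free-text and selection
# values are collected into .fields.
for asg in mtc.get_assignments(hit.HITId):
    for answer in asg.answers[0]:
        print answer.qid, answer.fields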
class LayoutParameters: def __init__(self, layoutParameters=None): if layoutParameters == None: layoutParameters = [] self.layoutParameters = layoutParameters def add(self, req): self.layoutParameters.append(req) def get_as_params(self): params = {} assert(len(self.layoutParameters) <= 25) for n, layoutParameter in enumerate(self.layoutParameters): kv = layoutParameter.get_as_params() for key in kv: params['HITLayoutParameter.%s.%s' % ((n+1), key) ] = kv[key] return params class LayoutParameter(object): """ Representation of a single HIT layout parameter """ def __init__(self, name, value): self.name = name self.value = value def get_as_params(self): params = { "Name": self.name, "Value": self.value, } return params boto-2.20.1/boto/mturk/notification.py000066400000000000000000000101221225267101000176740ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Provides NotificationMessage and Event classes, with utility methods, for implementations of the Mechanical Turk Notification API. """ import hmac try: from hashlib import sha1 as sha except ImportError: import sha import base64 import re class NotificationMessage: NOTIFICATION_WSDL = "http://mechanicalturk.amazonaws.com/AWSMechanicalTurk/2006-05-05/AWSMechanicalTurkRequesterNotification.wsdl" NOTIFICATION_VERSION = '2006-05-05' SERVICE_NAME = "AWSMechanicalTurkRequesterNotification" OPERATION_NAME = "Notify" EVENT_PATTERN = r"Event\.(?P\d+)\.(?P\w+)" EVENT_RE = re.compile(EVENT_PATTERN) def __init__(self, d): """ Constructor; expects parameter d to be a dict of string parameters from a REST transport notification message """ self.signature = d['Signature'] # vH6ZbE0NhkF/hfNyxz2OgmzXYKs= self.timestamp = d['Timestamp'] # 2006-05-23T23:22:30Z self.version = d['Version'] # 2006-05-05 assert d['method'] == NotificationMessage.OPERATION_NAME, "Method should be '%s'" % NotificationMessage.OPERATION_NAME # Build Events self.events = [] events_dict = {} if 'Event' in d: # TurboGears surprised me by 'doing the right thing' and making { 'Event': { '1': { 'EventType': ... } } } etc. events_dict = d['Event'] else: for k in d: v = d[k] if k.startswith('Event.'): ed = NotificationMessage.EVENT_RE.search(k).groupdict() n = int(ed['n']) param = str(ed['param']) if n not in events_dict: events_dict[n] = {} events_dict[n][param] = v for n in events_dict: self.events.append(Event(events_dict[n])) def verify(self, secret_key): """ Verifies the authenticity of a notification message. 
TODO: This is doing a form of authentication and this functionality should really be merged with the pluggable authentication mechanism at some point. """ verification_input = NotificationMessage.SERVICE_NAME verification_input += NotificationMessage.OPERATION_NAME verification_input += self.timestamp h = hmac.new(key=secret_key, digestmod=sha) h.update(verification_input) signature_calc = base64.b64encode(h.digest()) return self.signature == signature_calc class Event: def __init__(self, d): self.event_type = d['EventType'] self.event_time_str = d['EventTime'] self.hit_type = d['HITTypeId'] self.hit_id = d['HITId'] if 'AssignmentId' in d: # Not present in all event types self.assignment_id = d['AssignmentId'] #TODO: build self.event_time datetime from string self.event_time_str def __repr__(self): return "" % (self.event_type, self.hit_id) boto-2.20.1/boto/mturk/price.py000066400000000000000000000036501225267101000163200ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Price: def __init__(self, amount=0.0, currency_code='USD'): self.amount = amount self.currency_code = currency_code self.formatted_price = '' def __repr__(self): if self.formatted_price: return self.formatted_price else: return str(self.amount) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Amount': self.amount = float(value) elif name == 'CurrencyCode': self.currency_code = value elif name == 'FormattedPrice': self.formatted_price = value def get_as_params(self, label, ord=1): return {'%s.%d.Amount'%(label, ord) : str(self.amount), '%s.%d.CurrencyCode'%(label, ord) : self.currency_code} boto-2.20.1/boto/mturk/qualification.py000066400000000000000000000151521225267101000200460ustar00rootroot00000000000000# Copyright (c) 2008 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
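# Behavior sketch (illustrative): Price above renders itself into the indexed
# REST parameters that operations such as CreateHIT and GrantBonus expect.
from boto.mturk.price import Price
p = Price(0.25)
assert p.get_as_params('Reward') == {'Reward.1.Amount': '0.25',
                                     'Reward.1.CurrencyCode': 'USD'}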
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Qualifications: def __init__(self, requirements=None): if requirements == None: requirements = [] self.requirements = requirements def add(self, req): self.requirements.append(req) def get_as_params(self): params = {} assert(len(self.requirements) <= 10) for n, req in enumerate(self.requirements): reqparams = req.get_as_params() for rp in reqparams: params['QualificationRequirement.%s.%s' % ((n+1), rp) ] = reqparams[rp] return params class Requirement(object): """ Representation of a single requirement """ def __init__(self, qualification_type_id, comparator, integer_value=None, required_to_preview=False): self.qualification_type_id = qualification_type_id self.comparator = comparator self.integer_value = integer_value self.required_to_preview = required_to_preview def get_as_params(self): params = { "QualificationTypeId": self.qualification_type_id, "Comparator": self.comparator, } if self.comparator != 'Exists' and self.integer_value is not None: params['IntegerValue'] = self.integer_value if self.required_to_preview: params['RequiredToPreview'] = "true" return params class PercentAssignmentsSubmittedRequirement(Requirement): """ The percentage of assignments the Worker has submitted, over all assignments the Worker has accepted. The value is an integer between 0 and 100. """ def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="00000000000000000000", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) class PercentAssignmentsAbandonedRequirement(Requirement): """ The percentage of assignments the Worker has abandoned (allowed the deadline to elapse), over all assignments the Worker has accepted. The value is an integer between 0 and 100. """ def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="00000000000000000070", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) class PercentAssignmentsReturnedRequirement(Requirement): """ The percentage of assignments the Worker has returned, over all assignments the Worker has accepted. The value is an integer between 0 and 100. """ def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="000000000000000000E0", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) class PercentAssignmentsApprovedRequirement(Requirement): """ The percentage of assignments the Worker has submitted that were subsequently approved by the Requester, over all assignments the Worker has submitted. The value is an integer between 0 and 100. 
""" def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="000000000000000000L0", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) class PercentAssignmentsRejectedRequirement(Requirement): """ The percentage of assignments the Worker has submitted that were subsequently rejected by the Requester, over all assignments the Worker has submitted. The value is an integer between 0 and 100. """ def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="000000000000000000S0", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) class NumberHitsApprovedRequirement(Requirement): """ Specifies the total number of HITs submitted by a Worker that have been approved. The value is an integer greater than or equal to 0. """ def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="00000000000000000040", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) class LocaleRequirement(Requirement): """ A Qualification requirement based on the Worker's location. The Worker's location is specified by the Worker to Mechanical Turk when the Worker creates his account. """ def __init__(self, comparator, locale, required_to_preview=False): Requirement.__init__(self, qualification_type_id="00000000000000000071", comparator=comparator, integer_value=None, required_to_preview=required_to_preview) self.locale = locale def get_as_params(self): params = { "QualificationTypeId": self.qualification_type_id, "Comparator": self.comparator, 'LocaleValue.Country': self.locale, } if self.required_to_preview: params['RequiredToPreview'] = "true" return params class AdultRequirement(Requirement): """ Requires workers to acknowledge that they are over 18 and that they agree to work on potentially offensive content. The value type is boolean, 1 (required), 0 (not required, the default). """ def __init__(self, comparator, integer_value, required_to_preview=False): Requirement.__init__(self, qualification_type_id="00000000000000000060", comparator=comparator, integer_value=integer_value, required_to_preview=required_to_preview) boto-2.20.1/boto/mturk/question.py000066400000000000000000000374161225267101000170740ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import xml.sax.saxutils class Question(object): template = "%(items)s" def __init__(self, identifier, content, answer_spec, is_required=False, display_name=None): # copy all of the parameters into object attributes self.__dict__.update(vars()) del self.self def get_as_params(self, label='Question'): return {label: self.get_as_xml()} def get_as_xml(self): items = [ SimpleField('QuestionIdentifier', self.identifier), SimpleField('IsRequired', str(self.is_required).lower()), self.content, self.answer_spec, ] if self.display_name is not None: items.insert(1, SimpleField('DisplayName', self.display_name)) items = ''.join(item.get_as_xml() for item in items) return self.template % vars() try: from lxml import etree class ValidatingXML(object): def validate(self): import urllib2 schema_src_file = urllib2.urlopen(self.schema_url) schema_doc = etree.parse(schema_src_file) schema = etree.XMLSchema(schema_doc) doc = etree.fromstring(self.get_as_xml()) schema.assertValid(doc) except ImportError: class ValidatingXML(object): def validate(self): pass class ExternalQuestion(ValidatingXML): """ An object for constructing an External Question. """ schema_url = "http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd" template = '%%(external_url)s%%(frame_height)s' % vars() def __init__(self, external_url, frame_height): self.external_url = xml.sax.saxutils.escape( external_url ) self.frame_height = frame_height def get_as_params(self, label='ExternalQuestion'): return {label: self.get_as_xml()} def get_as_xml(self): return self.template % vars(self) class XMLTemplate: def get_as_xml(self): return self.template % vars(self) class SimpleField(object, XMLTemplate): """ A Simple name/value pair that can be easily rendered as XML. 
>>> SimpleField('Text', 'A text string').get_as_xml() 'A text string' """ template = '<%(field)s>%(value)s' def __init__(self, field, value): self.field = field self.value = value class Binary(object, XMLTemplate): template = """%(type)s%(subtype)s%(url)s%(alt_text)s""" def __init__(self, type, subtype, url, alt_text): self.__dict__.update(vars()) del self.self class List(list): """A bulleted list suitable for OrderedContent or Overview content""" def get_as_xml(self): items = ''.join('%s' % item for item in self) return '%s' % items class Application(object): template = "<%(class_)s>%(content)s" parameter_template = "%(name)s%(value)s" def __init__(self, width, height, **parameters): self.width = width self.height = height self.parameters = parameters def get_inner_content(self, content): content.append_field('Width', self.width) content.append_field('Height', self.height) for name, value in self.parameters.items(): value = self.parameter_template % vars() content.append_field('ApplicationParameter', value) def get_as_xml(self): content = OrderedContent() self.get_inner_content(content) content = content.get_as_xml() class_ = self.__class__.__name__ return self.template % vars() class HTMLQuestion(ValidatingXML): schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd' template = '%%(html_form)s]]>%%(frame_height)s' % vars() def __init__(self, html_form, frame_height): self.html_form = html_form self.frame_height = frame_height def get_as_params(self, label="HTMLQuestion"): return {label: self.get_as_xml()} def get_as_xml(self): return self.template % vars(self) class JavaApplet(Application): def __init__(self, path, filename, *args, **kwargs): self.path = path self.filename = filename super(JavaApplet, self).__init__(*args, **kwargs) def get_inner_content(self, content): content = OrderedContent() content.append_field('AppletPath', self.path) content.append_field('AppletFilename', self.filename) super(JavaApplet, self).get_inner_content(content) class Flash(Application): def __init__(self, url, *args, **kwargs): self.url = url super(Flash, self).__init__(*args, **kwargs) def get_inner_content(self, content): content = OrderedContent() content.append_field('FlashMovieURL', self.url) super(Flash, self).get_inner_content(content) class FormattedContent(object, XMLTemplate): schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/FormattedContentXHTMLSubset.xsd' template = '' def __init__(self, content): self.content = content class OrderedContent(list): def append_field(self, field, value): self.append(SimpleField(field, value)) def get_as_xml(self): return ''.join(item.get_as_xml() for item in self) class Overview(OrderedContent): template = '%(content)s' def get_as_params(self, label='Overview'): return {label: self.get_as_xml()} def get_as_xml(self): content = super(Overview, self).get_as_xml() return self.template % vars() class QuestionForm(ValidatingXML, list): """ From the AMT API docs: The top-most element of the QuestionForm data structure is a QuestionForm element. This element contains optional Overview elements and one or more Question elements. There can be any number of these two element types listed in any order. The following example structure has an Overview element and a Question element followed by a second Overview element and Question element--all within the same QuestionForm. :: [...] [...] [...] [...] [...] 
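# Usage sketch (illustrative): HTMLQuestion above wraps a complete HTML form
# in CDATA and can be handed to create_hit() as the question argument, just
# like ExternalQuestion. The markup and submit URL below are assumptions, not
# taken from this module.
from boto.mturk.question import HTMLQuestion

html = ('<!DOCTYPE html><html><body>'
        '<form action="https://www.mturk.com/mturk/externalSubmit"'
        ' method="post"><input type="text" name="answer"/>'
        '<input type="submit"/></form></body></html>')
hq = HTMLQuestion(html_form=html, frame_height=400)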
QuestionForm is implemented as a list, so to construct a QuestionForm, simply append Questions and Overviews (with at least one Question). """ schema_url = "http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd" xml_template = """%%(items)s""" % vars() def is_valid(self): return ( any(isinstance(item, Question) for item in self) and all(isinstance(item, (Question, Overview)) for item in self) ) def get_as_xml(self): assert self.is_valid(), "QuestionForm contains invalid elements" items = ''.join(item.get_as_xml() for item in self) return self.xml_template % vars() class QuestionContent(OrderedContent): template = '%(content)s' def get_as_xml(self): content = super(QuestionContent, self).get_as_xml() return self.template % vars() class AnswerSpecification(object): template = '%(spec)s' def __init__(self, spec): self.spec = spec def get_as_xml(self): spec = self.spec.get_as_xml() return self.template % vars() class Constraints(OrderedContent): template = '%(content)s' def get_as_xml(self): content = super(Constraints, self).get_as_xml() return self.template % vars() class Constraint(object): def get_attributes(self): pairs = zip(self.attribute_names, self.attribute_values) attrs = ' '.join( '%s="%d"' % (name, value) for (name, value) in pairs if value is not None ) return attrs def get_as_xml(self): attrs = self.get_attributes() return self.template % vars() class NumericConstraint(Constraint): attribute_names = 'minValue', 'maxValue' template = '' def __init__(self, min_value=None, max_value=None): self.attribute_values = min_value, max_value class LengthConstraint(Constraint): attribute_names = 'minLength', 'maxLength' template = '' def __init__(self, min_length=None, max_length=None): self.attribute_values = min_length, max_length class RegExConstraint(Constraint): attribute_names = 'regex', 'errorText', 'flags' template = '' def __init__(self, pattern, error_text=None, flags=None): self.attribute_values = pattern, error_text, flags def get_attributes(self): pairs = zip(self.attribute_names, self.attribute_values) attrs = ' '.join( '%s="%s"' % (name, value) for (name, value) in pairs if value is not None ) return attrs class NumberOfLinesSuggestion(object): template = '%(num_lines)s' def __init__(self, num_lines=1): self.num_lines = num_lines def get_as_xml(self): num_lines = self.num_lines return self.template % vars() class FreeTextAnswer(object): template = '%(items)s' def __init__(self, default=None, constraints=None, num_lines=None): self.default = default if constraints is None: self.constraints = Constraints() else: self.constraints = Constraints(constraints) self.num_lines = num_lines def get_as_xml(self): items = [self.constraints] if self.default: items.append(SimpleField('DefaultText', self.default)) if self.num_lines: items.append(NumberOfLinesSuggestion(self.num_lines)) items = ''.join(item.get_as_xml() for item in items) return self.template % vars() class FileUploadAnswer(object): template = """%(max_bytes)d%(min_bytes)d""" def __init__(self, min_bytes, max_bytes): assert 0 <= min_bytes <= max_bytes <= 2 * 10 ** 9 self.min_bytes = min_bytes self.max_bytes = max_bytes def get_as_xml(self): return self.template % vars(self) class SelectionAnswer(object): """ A class to generate SelectionAnswer XML data structures. Does not yet implement Binary selection options. 
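# Usage sketch (illustrative): assembling a QuestionForm by appending Overview
# and Question items, per the docstring above. A Question couples a
# QuestionContent with an AnswerSpecification.
from boto.mturk.question import (QuestionForm, Overview, Question,
                                 QuestionContent, AnswerSpecification,
                                 FreeTextAnswer)

overview = Overview()
overview.append_field('Title', 'Describe the image')
content = QuestionContent()
content.append_field('Title', 'What do you see in this picture?')
question = Question(identifier='description',
                    content=content,
                    answer_spec=AnswerSpecification(FreeTextAnswer()),
                    is_required=True)
form = QuestionForm([overview, question])
form.validate()   # schema validation is a no-op unless lxml is installed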
""" SELECTIONANSWER_XML_TEMPLATE = """%s%s%s""" # % (count_xml, style_xml, selections_xml) SELECTION_XML_TEMPLATE = """%s%s""" # (identifier, value_xml) SELECTION_VALUE_XML_TEMPLATE = """<%s>%s""" # (type, value, type) STYLE_XML_TEMPLATE = """%s""" # (style) MIN_SELECTION_COUNT_XML_TEMPLATE = """%s""" # count MAX_SELECTION_COUNT_XML_TEMPLATE = """%s""" # count ACCEPTED_STYLES = ['radiobutton', 'dropdown', 'checkbox', 'list', 'combobox', 'multichooser'] OTHER_SELECTION_ELEMENT_NAME = 'OtherSelection' def __init__(self, min=1, max=1, style=None, selections=None, type='text', other=False): if style is not None: if style in SelectionAnswer.ACCEPTED_STYLES: self.style_suggestion = style else: raise ValueError("style '%s' not recognized; should be one of %s" % (style, ', '.join(SelectionAnswer.ACCEPTED_STYLES))) else: self.style_suggestion = None if selections is None: raise ValueError("SelectionAnswer.__init__(): selections must be a non-empty list of (content, identifier) tuples") else: self.selections = selections self.min_selections = min self.max_selections = max assert len(selections) >= self.min_selections, "# of selections is less than minimum of %d" % self.min_selections #assert len(selections) <= self.max_selections, "# of selections exceeds maximum of %d" % self.max_selections self.type = type self.other = other def get_as_xml(self): if self.type == 'text': TYPE_TAG = "Text" elif self.type == 'binary': TYPE_TAG = "Binary" else: raise ValueError("illegal type: %s; must be either 'text' or 'binary'" % str(self.type)) # build list of elements selections_xml = "" for tpl in self.selections: value_xml = SelectionAnswer.SELECTION_VALUE_XML_TEMPLATE % (TYPE_TAG, tpl[0], TYPE_TAG) selection_xml = SelectionAnswer.SELECTION_XML_TEMPLATE % (tpl[1], value_xml) selections_xml += selection_xml if self.other: # add OtherSelection element as xml if available if hasattr(self.other, 'get_as_xml'): assert isinstance(self.other, FreeTextAnswer), 'OtherSelection can only be a FreeTextAnswer' selections_xml += self.other.get_as_xml().replace('FreeTextAnswer', 'OtherSelection') else: selections_xml += "" if self.style_suggestion is not None: style_xml = SelectionAnswer.STYLE_XML_TEMPLATE % self.style_suggestion else: style_xml = "" if self.style_suggestion != 'radiobutton': count_xml = SelectionAnswer.MIN_SELECTION_COUNT_XML_TEMPLATE %self.min_selections count_xml += SelectionAnswer.MAX_SELECTION_COUNT_XML_TEMPLATE %self.max_selections else: count_xml = "" ret = SelectionAnswer.SELECTIONANSWER_XML_TEMPLATE % (count_xml, style_xml, selections_xml) # return XML return ret boto-2.20.1/boto/mws/000077500000000000000000000000001225267101000143045ustar00rootroot00000000000000boto-2.20.1/boto/mws/__init__.py000066400000000000000000000021151225267101000164140ustar00rootroot00000000000000# Copyright (c) 2008, Chris Moyer http://coredumped.org # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/mws/connection.py000066400000000000000000001034431225267101000170220ustar00rootroot00000000000000# Copyright (c) 2012 Andy Davidoff http://www.disruptek.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import xml.sax import hashlib import base64 import string from boto.connection import AWSQueryConnection from boto.mws.exception import ResponseErrorFactory from boto.mws.response import ResponseFactory, ResponseElement from boto.handler import XmlHandler import boto.mws.response __all__ = ['MWSConnection'] api_version_path = { 'Feeds': ('2009-01-01', 'Merchant', '/'), 'Reports': ('2009-01-01', 'Merchant', '/'), 'Orders': ('2011-01-01', 'SellerId', '/Orders/2011-01-01'), 'Products': ('2011-10-01', 'SellerId', '/Products/2011-10-01'), 'Sellers': ('2011-07-01', 'SellerId', '/Sellers/2011-07-01'), 'Inbound': ('2010-10-01', 'SellerId', '/FulfillmentInboundShipment/2010-10-01'), 'Outbound': ('2010-10-01', 'SellerId', '/FulfillmentOutboundShipment/2010-10-01'), 'Inventory': ('2010-10-01', 'SellerId', '/FulfillmentInventory/2010-10-01'), } content_md5 = lambda c: base64.encodestring(hashlib.md5(c).digest()).strip() decorated_attrs = ('action', 'response', 'section', 'quota', 'restore', 'version') api_call_map = {} def add_attrs_from(func, to): for attr in decorated_attrs: setattr(to, attr, getattr(func, attr, None)) return to def structured_lists(*fields): def decorator(func): def wrapper(self, *args, **kw): for key, acc in [f.split('.') for f in fields]: if key in kw: newkey = key + '.' + acc + (acc and '.' 
or '') for i in range(len(kw[key])): kw[newkey + str(i + 1)] = kw[key][i] kw.pop(key) return func(self, *args, **kw) wrapper.__doc__ = "{0}\nLists: {1}".format(func.__doc__, ', '.join(fields)) return add_attrs_from(func, to=wrapper) return decorator def http_body(field): def decorator(func): def wrapper(*args, **kw): if filter(lambda x: not x in kw, (field, 'content_type')): message = "{0} requires {1} and content_type arguments for " \ "building HTTP body".format(func.action, field) raise KeyError(message) kw['body'] = kw.pop(field) kw['headers'] = { 'Content-Type': kw.pop('content_type'), 'Content-MD5': content_md5(kw['body']), } return func(*args, **kw) wrapper.__doc__ = "{0}\nRequired HTTP Body: " \ "{1}".format(func.__doc__, field) return add_attrs_from(func, to=wrapper) return decorator def destructure_object(value, into={}, prefix=''): if isinstance(value, ResponseElement): for name, attr in value.__dict__.items(): if name.startswith('_'): continue destructure_object(attr, into=into, prefix=prefix + '.' + name) elif filter(lambda x: isinstance(value, x), (list, set, tuple)): for index, element in [(prefix + '.' + str(i + 1), value[i]) for i in range(len(value))]: destructure_object(element, into=into, prefix=index) elif isinstance(value, bool): into[prefix] = str(value).lower() else: into[prefix] = value def structured_objects(*fields): def decorator(func): def wrapper(*args, **kw): for field in filter(kw.has_key, fields): destructure_object(kw.pop(field), into=kw, prefix=field) return func(*args, **kw) wrapper.__doc__ = "{0}\nObjects: {1}".format(func.__doc__, ', '.join(fields)) return add_attrs_from(func, to=wrapper) return decorator def requires(*groups): def decorator(func): def wrapper(*args, **kw): hasgroup = lambda x: len(x) == len(filter(kw.has_key, x)) if 1 != len(filter(hasgroup, groups)): message = ' OR '.join(['+'.join(g) for g in groups]) message = "{0} requires {1} argument(s)" \ "".format(func.action, message) raise KeyError(message) return func(*args, **kw) message = ' OR '.join(['+'.join(g) for g in groups]) wrapper.__doc__ = "{0}\nRequired: {1}".format(func.__doc__, message) return add_attrs_from(func, to=wrapper) return decorator def exclusive(*groups): def decorator(func): def wrapper(*args, **kw): hasgroup = lambda x: len(x) == len(filter(kw.has_key, x)) if len(filter(hasgroup, groups)) not in (0, 1): message = ' OR '.join(['+'.join(g) for g in groups]) message = "{0} requires either {1}" \ "".format(func.action, message) raise KeyError(message) return func(*args, **kw) message = ' OR '.join(['+'.join(g) for g in groups]) wrapper.__doc__ = "{0}\nEither: {1}".format(func.__doc__, message) return add_attrs_from(func, to=wrapper) return decorator def dependent(field, *groups): def decorator(func): def wrapper(*args, **kw): hasgroup = lambda x: len(x) == len(filter(kw.has_key, x)) if field in kw and 1 > len(filter(hasgroup, groups)): message = ' OR '.join(['+'.join(g) for g in groups]) message = "{0} argument {1} requires {2}" \ "".format(func.action, field, message) raise KeyError(message) return func(*args, **kw) message = ' OR '.join(['+'.join(g) for g in groups]) wrapper.__doc__ = "{0}\n{1} requires: {2}".format(func.__doc__, field, message) return add_attrs_from(func, to=wrapper) return decorator def requires_some_of(*fields): def decorator(func): def wrapper(*args, **kw): if not filter(kw.has_key, fields): message = "{0} requires at least one of {1} argument(s)" \ "".format(func.action, ', '.join(fields)) raise KeyError(message) return func(*args, **kw) 
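# For example (marketplace IDs are hypothetical), the structured_lists
# decorator defined above rewrites keyword arguments before the wrapped
# API call runs:
#
#   {'MarketplaceIdList': ['ATVPDKIKX0DER', 'A1F83G8C2ARO7P']}
#
# becomes
#
#   {'MarketplaceIdList.Id.1': 'ATVPDKIKX0DER',
#    'MarketplaceIdList.Id.2': 'A1F83G8C2ARO7P'}
#
# while requires_some_of raises KeyError unless at least one of its named
# arguments is present in the call.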
wrapper.__doc__ = "{0}\nSome Required: {1}".format(func.__doc__, ', '.join(fields)) return add_attrs_from(func, to=wrapper) return decorator def boolean_arguments(*fields): def decorator(func): def wrapper(*args, **kw): for field in filter(lambda x: isinstance(kw.get(x), bool), fields): kw[field] = str(kw[field]).lower() return func(*args, **kw) wrapper.__doc__ = "{0}\nBooleans: {1}".format(func.__doc__, ', '.join(fields)) return add_attrs_from(func, to=wrapper) return decorator def api_action(section, quota, restore, *api): def decorator(func, quota=int(quota), restore=float(restore)): version, accesskey, path = api_version_path[section] action = ''.join(api or map(str.capitalize, func.func_name.split('_'))) if hasattr(boto.mws.response, action + 'Response'): response = getattr(boto.mws.response, action + 'Response') else: response = ResponseFactory(action) response._action = action def wrapper(self, *args, **kw): kw.setdefault(accesskey, getattr(self, accesskey, None)) if kw[accesskey] is None: message = "{0} requires {1} argument. Set the " \ "MWSConnection.{2} attribute?" \ "".format(action, accesskey, accesskey) raise KeyError(message) kw['Action'] = action kw['Version'] = version return func(self, path, response, *args, **kw) for attr in decorated_attrs: setattr(wrapper, attr, locals().get(attr)) wrapper.__doc__ = "MWS {0}/{1} API call; quota={2} restore={3:.2f}\n" \ "{4}".format(action, version, quota, restore, func.__doc__) api_call_map[action] = func.func_name return wrapper return decorator class MWSConnection(AWSQueryConnection): ResponseError = ResponseErrorFactory def __init__(self, *args, **kw): kw.setdefault('host', 'mws.amazonservices.com') self.Merchant = kw.pop('Merchant', None) or kw.get('SellerId') self.SellerId = kw.pop('SellerId', None) or self.Merchant AWSQueryConnection.__init__(self, *args, **kw) def _required_auth_capability(self): return ['mws'] def post_request(self, path, params, cls, body='', headers={}, isXML=True): """Make a POST request, optionally with a content body, and return the response, optionally as raw text. Modelled off of the inherited get_object/make_request flow. """ request = self.build_base_http_request('POST', path, None, data=body, params=params, headers=headers, host=self.host) response = self._mexe(request, override_num_retries=None) body = response.read() boto.log.debug(body) if not body: boto.log.error('Null body %s' % body) raise self.ResponseError(response.status, response.reason, body) if response.status != 200: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) if not isXML: digest = response.getheader('Content-MD5') assert content_md5(body) == digest return body return self._parse_response(cls, body) def _parse_response(self, cls, body): obj = cls(self) h = XmlHandler(obj, self) xml.sax.parseString(body, h) return obj def method_for(self, name): """Return the MWS API method referred to in the argument. The named method can be in CamelCase or underlined_lower_case. This is the complement to MWSConnection.any_call.action """ action = '_' in name and string.capwords(name, '_') or name if action in api_call_map: return getattr(self, api_call_map[action]) return None def iter_call(self, call, *args, **kw): """Pass a call name as the first argument and a generator is returned for the initial response and any continuation call responses made using the NextToken. 
""" method = self.method_for(call) assert method, 'No call named "{0}"'.format(call) return self.iter_response(method(*args, **kw)) def iter_response(self, response): """Pass a call's response as the initial argument and a generator is returned for the initial response and any continuation call responses made using the NextToken. """ yield response more = self.method_for(response._action + 'ByNextToken') while more and response._result.HasNext == 'true': response = more(NextToken=response._result.NextToken) yield response @boolean_arguments('PurgeAndReplace') @http_body('FeedContent') @structured_lists('MarketplaceIdList.Id') @requires(['FeedType']) @api_action('Feeds', 15, 120) def submit_feed(self, path, response, headers={}, body='', **kw): """Uploads a feed for processing by Amazon MWS. """ return self.post_request(path, kw, response, body=body, headers=headers) @structured_lists('FeedSubmissionIdList.Id', 'FeedTypeList.Type', 'FeedProcessingStatusList.Status') @api_action('Feeds', 10, 45) def get_feed_submission_list(self, path, response, **kw): """Returns a list of all feed submissions submitted in the previous 90 days. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Feeds', 0, 0) def get_feed_submission_list_by_next_token(self, path, response, **kw): """Returns a list of feed submissions using the NextToken parameter. """ return self.post_request(path, kw, response) @structured_lists('FeedTypeList.Type', 'FeedProcessingStatusList.Status') @api_action('Feeds', 10, 45) def get_feed_submission_count(self, path, response, **kw): """Returns a count of the feeds submitted in the previous 90 days. """ return self.post_request(path, kw, response) @structured_lists('FeedSubmissionIdList.Id', 'FeedTypeList.Type') @api_action('Feeds', 10, 45) def cancel_feed_submissions(self, path, response, **kw): """Cancels one or more feed submissions and returns a count of the feed submissions that were canceled. """ return self.post_request(path, kw, response) @requires(['FeedSubmissionId']) @api_action('Feeds', 15, 60) def get_feed_submission_result(self, path, response, **kw): """Returns the feed processing report. """ return self.post_request(path, kw, response, isXML=False) def get_service_status(self, **kw): """Instruct the user on how to get service status. """ sections = ', '.join(map(str.lower, api_version_path.keys())) message = "Use {0}.get_(section)_service_status(), " \ "where (section) is one of the following: " \ "{1}".format(self.__class__.__name__, sections) raise AttributeError(message) @structured_lists('MarketplaceIdList.Id') @boolean_arguments('ReportOptions=ShowSalesChannel') @requires(['ReportType']) @api_action('Reports', 15, 60) def request_report(self, path, response, **kw): """Creates a report request and submits the request to Amazon MWS. """ return self.post_request(path, kw, response) @structured_lists('ReportRequestIdList.Id', 'ReportTypeList.Type', 'ReportProcessingStatusList.Status') @api_action('Reports', 10, 45) def get_report_request_list(self, path, response, **kw): """Returns a list of report requests that you can use to get the ReportRequestId for a report. 
""" return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Reports', 0, 0) def get_report_request_list_by_next_token(self, path, response, **kw): """Returns a list of report requests using the NextToken, which was supplied by a previous request to either GetReportRequestListByNextToken or GetReportRequestList, where the value of HasNext was true in that previous request. """ return self.post_request(path, kw, response) @structured_lists('ReportTypeList.Type', 'ReportProcessingStatusList.Status') @api_action('Reports', 10, 45) def get_report_request_count(self, path, response, **kw): """Returns a count of report requests that have been submitted to Amazon MWS for processing. """ return self.post_request(path, kw, response) @api_action('Reports', 10, 45) def cancel_report_requests(self, path, response, **kw): """Cancel one or more report requests, returning the count of the canceled report requests and the report request information. """ return self.post_request(path, kw, response) @boolean_arguments('Acknowledged') @structured_lists('ReportRequestIdList.Id', 'ReportTypeList.Type') @api_action('Reports', 10, 60) def get_report_list(self, path, response, **kw): """Returns a list of reports that were created in the previous 90 days that match the query parameters. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Reports', 0, 0) def get_report_list_by_next_token(self, path, response, **kw): """Returns a list of reports using the NextToken, which was supplied by a previous request to either GetReportListByNextToken or GetReportList, where the value of HasNext was true in the previous call. """ return self.post_request(path, kw, response) @boolean_arguments('Acknowledged') @structured_lists('ReportTypeList.Type') @api_action('Reports', 10, 45) def get_report_count(self, path, response, **kw): """Returns a count of the reports, created in the previous 90 days, with a status of _DONE_ and that are available for download. """ return self.post_request(path, kw, response) @requires(['ReportId']) @api_action('Reports', 15, 60) def get_report(self, path, response, **kw): """Returns the contents of a report. """ return self.post_request(path, kw, response, isXML=False) @requires(['ReportType', 'Schedule']) @api_action('Reports', 10, 45) def manage_report_schedule(self, path, response, **kw): """Creates, updates, or deletes a report request schedule for a specified report type. """ return self.post_request(path, kw, response) @structured_lists('ReportTypeList.Type') @api_action('Reports', 10, 45) def get_report_schedule_list(self, path, response, **kw): """Returns a list of order report requests that are scheduled to be submitted to Amazon MWS for processing. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Reports', 0, 0) def get_report_schedule_list_by_next_token(self, path, response, **kw): """Returns a list of report requests using the NextToken, which was supplied by a previous request to either GetReportScheduleListByNextToken or GetReportScheduleList, where the value of HasNext was true in that previous request. """ return self.post_request(path, kw, response) @structured_lists('ReportTypeList.Type') @api_action('Reports', 10, 45) def get_report_schedule_count(self, path, response, **kw): """Returns a count of order report requests that are scheduled to be submitted to Amazon MWS. 
""" return self.post_request(path, kw, response) @boolean_arguments('Acknowledged') @requires(['ReportIdList']) @structured_lists('ReportIdList.Id') @api_action('Reports', 10, 45) def update_report_acknowledgements(self, path, response, **kw): """Updates the acknowledged status of one or more reports. """ return self.post_request(path, kw, response) @requires(['ShipFromAddress', 'InboundShipmentPlanRequestItems']) @structured_objects('ShipFromAddress', 'InboundShipmentPlanRequestItems') @api_action('Inbound', 30, 0.5) def create_inbound_shipment_plan(self, path, response, **kw): """Returns the information required to create an inbound shipment. """ return self.post_request(path, kw, response) @requires(['ShipmentId', 'InboundShipmentHeader', 'InboundShipmentItems']) @structured_objects('InboundShipmentHeader', 'InboundShipmentItems') @api_action('Inbound', 30, 0.5) def create_inbound_shipment(self, path, response, **kw): """Creates an inbound shipment. """ return self.post_request(path, kw, response) @requires(['ShipmentId']) @structured_objects('InboundShipmentHeader', 'InboundShipmentItems') @api_action('Inbound', 30, 0.5) def update_inbound_shipment(self, path, response, **kw): """Updates an existing inbound shipment. Amazon documentation is ambiguous as to whether the InboundShipmentHeader and InboundShipmentItems arguments are required. """ return self.post_request(path, kw, response) @requires_some_of('ShipmentIdList', 'ShipmentStatusList') @structured_lists('ShipmentIdList.Id', 'ShipmentStatusList.Status') @api_action('Inbound', 30, 0.5) def list_inbound_shipments(self, path, response, **kw): """Returns a list of inbound shipments based on criteria that you specify. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Inbound', 30, 0.5) def list_inbound_shipments_by_next_token(self, path, response, **kw): """Returns the next page of inbound shipments using the NextToken parameter. """ return self.post_request(path, kw, response) @requires(['ShipmentId'], ['LastUpdatedAfter', 'LastUpdatedBefore']) @api_action('Inbound', 30, 0.5) def list_inbound_shipment_items(self, path, response, **kw): """Returns a list of items in a specified inbound shipment, or a list of items that were updated within a specified time frame. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Inbound', 30, 0.5) def list_inbound_shipment_items_by_next_token(self, path, response, **kw): """Returns the next page of inbound shipment items using the NextToken parameter. """ return self.post_request(path, kw, response) @api_action('Inbound', 2, 300, 'GetServiceStatus') def get_inbound_service_status(self, path, response, **kw): """Returns the operational status of the Fulfillment Inbound Shipment API section. """ return self.post_request(path, kw, response) @requires(['SellerSkus'], ['QueryStartDateTime']) @structured_lists('SellerSkus.member') @api_action('Inventory', 30, 0.5) def list_inventory_supply(self, path, response, **kw): """Returns information about the availability of a seller's inventory. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Inventory', 30, 0.5) def list_inventory_supply_by_next_token(self, path, response, **kw): """Returns the next page of information about the availability of a seller's inventory using the NextToken parameter. 
""" return self.post_request(path, kw, response) @api_action('Inventory', 2, 300, 'GetServiceStatus') def get_inventory_service_status(self, path, response, **kw): """Returns the operational status of the Fulfillment Inventory API section. """ return self.post_request(path, kw, response) @requires(['PackageNumber']) @api_action('Outbound', 30, 0.5) def get_package_tracking_details(self, path, response, **kw): """Returns delivery tracking information for a package in an outbound shipment for a Multi-Channel Fulfillment order. """ return self.post_request(path, kw, response) @structured_objects('Address', 'Items') @requires(['Address', 'Items']) @api_action('Outbound', 30, 0.5) def get_fulfillment_preview(self, path, response, **kw): """Returns a list of fulfillment order previews based on items and shipping speed categories that you specify. """ return self.post_request(path, kw, response) @structured_objects('DestinationAddress', 'Items') @requires(['SellerFulfillmentOrderId', 'DisplayableOrderId', 'ShippingSpeedCategory', 'DisplayableOrderDateTime', 'DestinationAddress', 'DisplayableOrderComment', 'Items']) @api_action('Outbound', 30, 0.5) def create_fulfillment_order(self, path, response, **kw): """Requests that Amazon ship items from the seller's inventory to a destination address. """ return self.post_request(path, kw, response) @requires(['SellerFulfillmentOrderId']) @api_action('Outbound', 30, 0.5) def get_fulfillment_order(self, path, response, **kw): """Returns a fulfillment order based on a specified SellerFulfillmentOrderId. """ return self.post_request(path, kw, response) @api_action('Outbound', 30, 0.5) def list_all_fulfillment_orders(self, path, response, **kw): """Returns a list of fulfillment orders fulfilled after (or at) a specified date or by fulfillment method. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Outbound', 30, 0.5) def list_all_fulfillment_orders_by_next_token(self, path, response, **kw): """Returns the next page of inbound shipment items using the NextToken parameter. """ return self.post_request(path, kw, response) @requires(['SellerFulfillmentOrderId']) @api_action('Outbound', 30, 0.5) def cancel_fulfillment_order(self, path, response, **kw): """Requests that Amazon stop attempting to fulfill an existing fulfillment order. """ return self.post_request(path, kw, response) @api_action('Outbound', 2, 300, 'GetServiceStatus') def get_outbound_service_status(self, path, response, **kw): """Returns the operational status of the Fulfillment Outbound API section. """ return self.post_request(path, kw, response) @requires(['CreatedAfter'], ['LastUpdatedAfter']) @exclusive(['CreatedAfter'], ['LastUpdatedAfter']) @dependent('CreatedBefore', ['CreatedAfter']) @exclusive(['LastUpdatedAfter'], ['BuyerEmail'], ['SellerOrderId']) @dependent('LastUpdatedBefore', ['LastUpdatedAfter']) @exclusive(['CreatedAfter'], ['LastUpdatedBefore']) @requires(['MarketplaceId']) @structured_objects('OrderTotal', 'ShippingAddress', 'PaymentExecutionDetail') @structured_lists('MarketplaceId.Id', 'OrderStatus.Status', 'FulfillmentChannel.Channel', 'PaymentMethod.') @api_action('Orders', 6, 60) def list_orders(self, path, response, **kw): """Returns a list of orders created or updated during a time frame that you specify. 
""" toggle = set(('FulfillmentChannel.Channel.1', 'OrderStatus.Status.1', 'PaymentMethod.1', 'LastUpdatedAfter', 'LastUpdatedBefore')) for do, dont in { 'BuyerEmail': toggle.union(['SellerOrderId']), 'SellerOrderId': toggle.union(['BuyerEmail']), }.items(): if do in kw and filter(kw.has_key, dont): message = "Don't include {0} when specifying " \ "{1}".format(' or '.join(dont), do) raise AssertionError(message) return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Orders', 6, 60) def list_orders_by_next_token(self, path, response, **kw): """Returns the next page of orders using the NextToken value that was returned by your previous request to either ListOrders or ListOrdersByNextToken. """ return self.post_request(path, kw, response) @requires(['AmazonOrderId']) @structured_lists('AmazonOrderId.Id') @api_action('Orders', 6, 60) def get_order(self, path, response, **kw): """Returns an order for each AmazonOrderId that you specify. """ return self.post_request(path, kw, response) @requires(['AmazonOrderId']) @api_action('Orders', 30, 2) def list_order_items(self, path, response, **kw): """Returns order item information for an AmazonOrderId that you specify. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Orders', 30, 2) def list_order_items_by_next_token(self, path, response, **kw): """Returns the next page of order items using the NextToken value that was returned by your previous request to either ListOrderItems or ListOrderItemsByNextToken. """ return self.post_request(path, kw, response) @api_action('Orders', 2, 300, 'GetServiceStatus') def get_orders_service_status(self, path, response, **kw): """Returns the operational status of the Orders API section. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'Query']) @api_action('Products', 20, 20) def list_matching_products(self, path, response, **kw): """Returns a list of products and their attributes, ordered by relevancy, based on a search query that you specify. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'ASINList']) @structured_lists('ASINList.ASIN') @api_action('Products', 20, 20) def get_matching_product(self, path, response, **kw): """Returns a list of products and their attributes, based on a list of ASIN values that you specify. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'IdType', 'IdList']) @structured_lists('IdList.Id') @api_action('Products', 20, 20) def get_matching_product_for_id(self, path, response, **kw): """Returns a list of products and their attributes, based on a list of Product IDs that you specify. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'SellerSKUList']) @structured_lists('SellerSKUList.SellerSKU') @api_action('Products', 20, 10, 'GetCompetitivePricingForSKU') def get_competitive_pricing_for_sku(self, path, response, **kw): """Returns the current competitive pricing of a product, based on the SellerSKUs and MarketplaceId that you specify. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'ASINList']) @structured_lists('ASINList.ASIN') @api_action('Products', 20, 10, 'GetCompetitivePricingForASIN') def get_competitive_pricing_for_asin(self, path, response, **kw): """Returns the current competitive pricing of a product, based on the ASINs and MarketplaceId that you specify. 
""" return self.post_request(path, kw, response) @requires(['MarketplaceId', 'SellerSKUList']) @structured_lists('SellerSKUList.SellerSKU') @api_action('Products', 20, 5, 'GetLowestOfferListingsForSKU') def get_lowest_offer_listings_for_sku(self, path, response, **kw): """Returns the lowest price offer listings for a specific product by item condition and SellerSKUs. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'ASINList']) @structured_lists('ASINList.ASIN') @api_action('Products', 20, 5, 'GetLowestOfferListingsForASIN') def get_lowest_offer_listings_for_asin(self, path, response, **kw): """Returns the lowest price offer listings for a specific product by item condition and ASINs. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'SellerSKU']) @api_action('Products', 20, 20, 'GetProductCategoriesForSKU') def get_product_categories_for_sku(self, path, response, **kw): """Returns the product categories that a SellerSKU belongs to. """ return self.post_request(path, kw, response) @requires(['MarketplaceId', 'ASIN']) @api_action('Products', 20, 20, 'GetProductCategoriesForASIN') def get_product_categories_for_asin(self, path, response, **kw): """Returns the product categories that an ASIN belongs to. """ return self.post_request(path, kw, response) @api_action('Products', 2, 300, 'GetServiceStatus') def get_products_service_status(self, path, response, **kw): """Returns the operational status of the Products API section. """ return self.post_request(path, kw, response) @api_action('Sellers', 15, 60) def list_marketplace_participations(self, path, response, **kw): """Returns a list of marketplaces that the seller submitting the request can sell in, and a list of participations that include seller-specific information in that marketplace. """ return self.post_request(path, kw, response) @requires(['NextToken']) @api_action('Sellers', 15, 60) def list_marketplace_participations_by_next_token(self, path, response, **kw): """Returns the next page of marketplaces and participations using the NextToken value that was returned by your previous request to either ListMarketplaceParticipations or ListMarketplaceParticipationsByNextToken. """ return self.post_request(path, kw, response) boto-2.20.1/boto/mws/exception.py000066400000000000000000000047631225267101000166660ustar00rootroot00000000000000# Copyright (c) 2012 Andy Davidoff http://www.disruptek.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
from boto.exception import BotoServerError class ResponseErrorFactory(BotoServerError): def __new__(cls, *args, **kw): error = BotoServerError(*args, **kw) try: newclass = globals()[error.error_code] except KeyError: newclass = ResponseError obj = newclass.__new__(newclass, *args, **kw) obj.__dict__.update(error.__dict__) return obj class ResponseError(BotoServerError): """ Undefined response error. """ retry = False def __repr__(self): return '{0}({1}, {2},\n\t{3})'.format(self.__class__.__name__, self.status, self.reason, self.error_message) def __str__(self): return 'MWS Response Error: {0.status} {0.__class__.__name__} {1}\n' \ '{2}\n' \ '{0.error_message}'.format(self, self.retry and '(Retriable)' or '', self.__doc__.strip()) class RetriableResponseError(ResponseError): retry = True class InvalidParameterValue(ResponseError): """ One or more parameter values in the request is invalid. """ class InvalidParameter(ResponseError): """ One or more parameters in the request is invalid. """ class InvalidAddress(ResponseError): """ Invalid address. """ boto-2.20.1/boto/mws/response.py000066400000000000000000000463661225267101000165330ustar00rootroot00000000000000# Copyright (c) 2012 Andy Davidoff http://www.disruptek.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
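# A sketch of the declarative parsing model defined below (the XML and class
# are hypothetical): a ResponseElement subclass declares the shape it expects
# and the SAX handler fills it in, so
#
#   class ItemsResult(ResponseElement):
#       Items = Element(Item=ElementList())
#
# parses <ItemsResult><Items><Item>..</Item><Item>..</Item></Items></ItemsResult>
# into result.Items.Item, a Python list with one entry per <Item> element.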
from decimal import Decimal class ComplexType(dict): _value = 'Value' def __repr__(self): return '{0}{1}'.format(getattr(self, self._value, None), self.copy()) def __str__(self): return str(getattr(self, self._value, '')) class DeclarativeType(object): def __init__(self, _hint=None, **kw): self._value = None if _hint is not None: self._hint = _hint return class JITResponse(ResponseElement): pass self._hint = JITResponse self._hint.__name__ = 'JIT_{0}/{1}'.format(self.__class__.__name__, hex(id(self._hint))[2:]) for name, value in kw.items(): setattr(self._hint, name, value) def __repr__(self): parent = getattr(self, '_parent', None) return '<{0}_{1}/{2}_{3}>'.format(self.__class__.__name__, parent and parent._name or '?', getattr(self, '_name', '?'), hex(id(self.__class__))) def setup(self, parent, name, *args, **kw): self._parent = parent self._name = name self._clone = self.__class__(_hint=self._hint) self._clone._parent = parent self._clone._name = name setattr(self._parent, self._name, self._clone) def start(self, *args, **kw): raise NotImplementedError def end(self, *args, **kw): raise NotImplementedError def teardown(self, *args, **kw): setattr(self._parent, self._name, self._value) class Element(DeclarativeType): def start(self, *args, **kw): self._value = self._hint(parent=self._parent, **kw) return self._value def end(self, *args, **kw): pass class SimpleList(DeclarativeType): def __init__(self, *args, **kw): DeclarativeType.__init__(self, *args, **kw) self._value = [] def start(self, *args, **kw): return None def end(self, name, value, *args, **kw): self._value.append(value) class ElementList(SimpleList): def start(self, *args, **kw): value = self._hint(parent=self._parent, **kw) self._value.append(value) return value def end(self, *args, **kw): pass class MemberList(Element): def __init__(self, _member=None, _hint=None, *args, **kw): message = 'Invalid `member` specification in {0}'.format(self.__class__.__name__) assert 'member' not in kw, message if _member is None: if _hint is None: Element.__init__(self, *args, member=ElementList(**kw)) else: Element.__init__(self, _hint=_hint) else: if _hint is None: if issubclass(_member, DeclarativeType): member = _member(**kw) else: member = ElementList(_member, **kw) Element.__init__(self, *args, member=member) else: message = 'Nonsensical {0} hint {1!r}'.format(self.__class__.__name__, _hint) raise AssertionError(message) def teardown(self, *args, **kw): if self._value is None: self._value = [] else: if isinstance(self._value.member, DeclarativeType): self._value.member = [] self._value = self._value.member Element.teardown(self, *args, **kw) def ResponseFactory(action, force=None): result = force or globals().get(action + 'Result', ResponseElement) class MWSResponse(Response): _name = action + 'Response' setattr(MWSResponse, action + 'Result', Element(result)) return MWSResponse def strip_namespace(func): def wrapper(self, name, *args, **kw): if self._namespace is not None: if name.startswith(self._namespace + ':'): name = name[len(self._namespace + ':'):] return func(self, name, *args, **kw) return wrapper class ResponseElement(dict): _override = {} _name = None _namespace = None def __init__(self, connection=None, name=None, parent=None, attrs=None): if parent is not None and self._namespace is None: self._namespace = parent._namespace if connection is not None: self._connection = connection self._name = name or self._name or self.__class__.__name__ self._declared('setup', attrs=attrs) dict.__init__(self, attrs and attrs.copy() or {}) def
_declared(self, op, **kw): def inherit(obj): result = {} for cls in getattr(obj, '__bases__', ()): result.update(inherit(cls)) result.update(obj.__dict__) return result scope = inherit(self.__class__) scope.update(self.__dict__) declared = lambda attr: isinstance(attr[1], DeclarativeType) for name, node in filter(declared, scope.items()): getattr(node, op)(self, name, parentname=self._name, **kw) @property def connection(self): return self._connection def __repr__(self): render = lambda pair: '{0!s}: {1!r}'.format(*pair) do_show = lambda pair: not pair[0].startswith('_') attrs = filter(do_show, self.__dict__.items()) name = self.__class__.__name__ if name.startswith('JIT_'): name = '^{0}^'.format(self._name or '') elif name == 'MWSResponse': name = '^{0}^'.format(self._name or name) return '{0}{1!r}({2})'.format( name, self.copy(), ', '.join(map(render, attrs))) def _type_for(self, name, attrs): return self._override.get(name, globals().get(name, ResponseElement)) @strip_namespace def startElement(self, name, attrs, connection): attribute = getattr(self, name, None) if isinstance(attribute, DeclarativeType): return attribute.start(name=name, attrs=attrs, connection=connection) elif attrs.getLength(): setattr(self, name, ComplexType(attrs.copy())) else: return None @strip_namespace def endElement(self, name, value, connection): attribute = getattr(self, name, None) if name == self._name: self._declared('teardown') elif isinstance(attribute, DeclarativeType): attribute.end(name=name, value=value, connection=connection) elif isinstance(attribute, ComplexType): setattr(attribute, attribute._value, value) else: setattr(self, name, value) class Response(ResponseElement): ResponseMetadata = Element() @strip_namespace def startElement(self, name, attrs, connection): if name == self._name: self.update(attrs) else: return ResponseElement.startElement(self, name, attrs, connection) @property def _result(self): return getattr(self, self._action + 'Result', None) @property def _action(self): return (self._name or self.__class__.__name__)[:-len('Response')] class ResponseResultList(Response): _ResultClass = ResponseElement def __init__(self, *args, **kw): setattr(self, self._action + 'Result', ElementList(self._ResultClass)) Response.__init__(self, *args, **kw) class FeedSubmissionInfo(ResponseElement): pass class SubmitFeedResult(ResponseElement): FeedSubmissionInfo = Element(FeedSubmissionInfo) class GetFeedSubmissionListResult(ResponseElement): FeedSubmissionInfo = ElementList(FeedSubmissionInfo) class GetFeedSubmissionListByNextTokenResult(GetFeedSubmissionListResult): pass class GetFeedSubmissionCountResult(ResponseElement): pass class CancelFeedSubmissionsResult(GetFeedSubmissionListResult): pass class GetServiceStatusResult(ResponseElement): Messages = Element(Messages=ElementList()) class ReportRequestInfo(ResponseElement): pass class RequestReportResult(ResponseElement): ReportRequestInfo = Element() class GetReportRequestListResult(RequestReportResult): ReportRequestInfo = ElementList() class GetReportRequestListByNextTokenResult(GetReportRequestListResult): pass class CancelReportRequestsResult(RequestReportResult): pass class GetReportListResult(ResponseElement): ReportInfo = ElementList() class GetReportListByNextTokenResult(GetReportListResult): pass class ManageReportScheduleResult(ResponseElement): ReportSchedule = Element() class GetReportScheduleListResult(ManageReportScheduleResult): pass class GetReportScheduleListByNextTokenResult(GetReportScheduleListResult): pass class 
UpdateReportAcknowledgementsResult(GetReportListResult): pass class CreateInboundShipmentPlanResult(ResponseElement): InboundShipmentPlans = MemberList(ShipToAddress=Element(), Items=MemberList()) class ListInboundShipmentsResult(ResponseElement): ShipmentData = MemberList(ShipFromAddress=Element()) class ListInboundShipmentsByNextTokenResult(ListInboundShipmentsResult): pass class ListInboundShipmentItemsResult(ResponseElement): ItemData = MemberList() class ListInboundShipmentItemsByNextTokenResult(ListInboundShipmentItemsResult): pass class ListInventorySupplyResult(ResponseElement): InventorySupplyList = MemberList( EarliestAvailability=Element(), SupplyDetail=MemberList( EarliestAvailableToPick=Element(), LatestAvailableToPick=Element(), ) ) class ListInventorySupplyByNextTokenResult(ListInventorySupplyResult): pass class ComplexAmount(ResponseElement): _amount = 'Value' def __repr__(self): return '{0} {1}'.format(self.CurrencyCode, getattr(self, self._amount)) def __float__(self): return float(getattr(self, self._amount)) def __str__(self): return str(getattr(self, self._amount)) @strip_namespace def startElement(self, name, attrs, connection): if name not in ('CurrencyCode', self._amount): message = 'Unrecognized tag {0} in ComplexAmount'.format(name) raise AssertionError(message) return ResponseElement.startElement(self, name, attrs, connection) @strip_namespace def endElement(self, name, value, connection): if name == self._amount: value = Decimal(value) ResponseElement.endElement(self, name, value, connection) class ComplexMoney(ComplexAmount): _amount = 'Amount' class ComplexWeight(ResponseElement): def __repr__(self): return '{0} {1}'.format(self.Value, self.Unit) def __float__(self): return float(self.Value) def __str__(self): return str(self.Value) @strip_namespace def startElement(self, name, attrs, connection): if name not in ('Unit', 'Value'): message = 'Unrecognized tag {0} in ComplexWeight'.format(name) raise AssertionError(message) return ResponseElement.startElement(self, name, attrs, connection) @strip_namespace def endElement(self, name, value, connection): if name == 'Value': value = Decimal(value) ResponseElement.endElement(self, name, value, connection) class Dimension(ComplexType): _value = 'Value' class ComplexDimensions(ResponseElement): _dimensions = ('Height', 'Length', 'Width', 'Weight') def __repr__(self): values = [getattr(self, key, None) for key in self._dimensions] values = filter(None, values) return 'x'.join(map('{0.Value:0.2f}{0[Units]}'.format, values)) @strip_namespace def startElement(self, name, attrs, connection): if name not in self._dimensions: message = 'Unrecognized tag {0} in ComplexDimensions'.format(name) raise AssertionError(message) setattr(self, name, Dimension(attrs.copy())) @strip_namespace def endElement(self, name, value, connection): if name in self._dimensions: value = Decimal(value or '0') ResponseElement.endElement(self, name, value, connection) class FulfillmentPreviewItem(ResponseElement): EstimatedShippingWeight = Element(ComplexWeight) class FulfillmentPreview(ResponseElement): EstimatedShippingWeight = Element(ComplexWeight) EstimatedFees = MemberList(Amount=Element(ComplexAmount)) UnfulfillablePreviewItems = MemberList(FulfillmentPreviewItem) FulfillmentPreviewShipments = MemberList( FulfillmentPreviewItems=MemberList(FulfillmentPreviewItem), ) class GetFulfillmentPreviewResult(ResponseElement): FulfillmentPreviews = MemberList(FulfillmentPreview) class FulfillmentOrder(ResponseElement): DestinationAddress = Element() 
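# MWS wraps each address in a <member> element; MemberList(SimpleList) below unwraps them, so NotificationEmailList parses to a plain Python list of e-mail strings.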
NotificationEmailList = MemberList(SimpleList) class GetFulfillmentOrderResult(ResponseElement): FulfillmentOrder = Element(FulfillmentOrder) FulfillmentShipment = MemberList( FulfillmentShipmentItem=MemberList(), FulfillmentShipmentPackage=MemberList(), ) FulfillmentOrderItem = MemberList() class ListAllFulfillmentOrdersResult(ResponseElement): FulfillmentOrders = MemberList(FulfillmentOrder) class ListAllFulfillmentOrdersByNextTokenResult(ListAllFulfillmentOrdersResult): pass class GetPackageTrackingDetailsResult(ResponseElement): ShipToAddress = Element() TrackingEvents = MemberList(EventAddress=Element()) class Image(ResponseElement): pass class AttributeSet(ResponseElement): ItemDimensions = Element(ComplexDimensions) ListPrice = Element(ComplexMoney) PackageDimensions = Element(ComplexDimensions) SmallImage = Element(Image) class ItemAttributes(AttributeSet): Languages = Element(Language=ElementList()) def __init__(self, *args, **kw): names = ('Actor', 'Artist', 'Author', 'Creator', 'Director', 'Feature', 'Format', 'GemType', 'MaterialType', 'MediaType', 'OperatingSystem', 'Platform') for name in names: setattr(self, name, SimpleList()) AttributeSet.__init__(self, *args, **kw) class VariationRelationship(ResponseElement): Identifiers = Element(MarketplaceASIN=Element(), SKUIdentifier=Element()) GemType = SimpleList() MaterialType = SimpleList() OperatingSystem = SimpleList() class Price(ResponseElement): LandedPrice = Element(ComplexMoney) ListingPrice = Element(ComplexMoney) Shipping = Element(ComplexMoney) class CompetitivePrice(ResponseElement): Price = Element(Price) class CompetitivePriceList(ResponseElement): CompetitivePrice = ElementList(CompetitivePrice) class CompetitivePricing(ResponseElement): CompetitivePrices = Element(CompetitivePriceList) NumberOfOfferListings = SimpleList() TradeInValue = Element(ComplexMoney) class SalesRank(ResponseElement): pass class LowestOfferListing(ResponseElement): Qualifiers = Element(ShippingTime=Element()) Price = Element(Price) class Product(ResponseElement): _namespace = 'ns2' Identifiers = Element(MarketplaceASIN=Element(), SKUIdentifier=Element()) AttributeSets = Element( ItemAttributes=ElementList(ItemAttributes), ) Relationships = Element( VariationParent=ElementList(VariationRelationship), ) CompetitivePricing = ElementList(CompetitivePricing) SalesRankings = Element( SalesRank=ElementList(SalesRank), ) LowestOfferListings = Element( LowestOfferListing=ElementList(LowestOfferListing), ) class ListMatchingProductsResult(ResponseElement): Products = Element(Product=ElementList(Product)) class ProductsBulkOperationResult(ResponseElement): Product = Element(Product) Error = Element() class ProductsBulkOperationResponse(ResponseResultList): _ResultClass = ProductsBulkOperationResult class GetMatchingProductResponse(ProductsBulkOperationResponse): pass class GetMatchingProductForIdResult(ListMatchingProductsResult): pass class GetMatchingProductForIdResponse(ResponseResultList): _ResultClass = GetMatchingProductForIdResult class GetCompetitivePricingForSKUResponse(ProductsBulkOperationResponse): pass class GetCompetitivePricingForASINResponse(ProductsBulkOperationResponse): pass class GetLowestOfferListingsForSKUResponse(ProductsBulkOperationResponse): pass class GetLowestOfferListingsForASINResponse(ProductsBulkOperationResponse): pass class ProductCategory(ResponseElement): def __init__(self, *args, **kw): setattr(self, 'Parent', Element(ProductCategory)) ResponseElement.__init__(self, *args, **kw) class 
GetProductCategoriesResult(ResponseElement): Self = Element(ProductCategory) class GetProductCategoriesForSKUResult(GetProductCategoriesResult): pass class GetProductCategoriesForASINResult(GetProductCategoriesResult): pass class Order(ResponseElement): OrderTotal = Element(ComplexMoney) ShippingAddress = Element() PaymentExecutionDetail = Element( PaymentExecutionDetailItem=ElementList( PaymentExecutionDetailItem=Element( Payment=Element(ComplexMoney) ) ) ) class ListOrdersResult(ResponseElement): Orders = Element(Order=ElementList(Order)) class ListOrdersByNextTokenResult(ListOrdersResult): pass class GetOrderResult(ListOrdersResult): pass class OrderItem(ResponseElement): ItemPrice = Element(ComplexMoney) ShippingPrice = Element(ComplexMoney) GiftWrapPrice = Element(ComplexMoney) ItemTax = Element(ComplexMoney) ShippingTax = Element(ComplexMoney) GiftWrapTax = Element(ComplexMoney) ShippingDiscount = Element(ComplexMoney) PromotionDiscount = Element(ComplexMoney) PromotionIds = SimpleList() CODFee = Element(ComplexMoney) CODFeeDiscount = Element(ComplexMoney) class ListOrderItemsResult(ResponseElement): OrderItems = Element(OrderItem=ElementList(OrderItem)) class ListMarketplaceParticipationsResult(ResponseElement): ListParticipations = Element(Participation=ElementList()) ListMarketplaces = Element(Marketplace=ElementList()) class ListMarketplaceParticipationsByNextTokenResult(ListMarketplaceParticipationsResult): pass boto-2.20.1/boto/opsworks/000077500000000000000000000000001225267101000153655ustar00rootroot00000000000000boto-2.20.1/boto/opsworks/__init__.py000066400000000000000000000000001225267101000174640ustar00rootroot00000000000000boto-2.20.1/boto/opsworks/exceptions.py000066400000000000000000000024101225267101000201150ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.exception import JSONResponseError class ResourceNotFoundException(JSONResponseError): pass class ValidationException(JSONResponseError): pass boto-2.20.1/boto/opsworks/layer1.py000066400000000000000000002566051225267101000171520ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import json import boto from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.exception import JSONResponseError from boto.opsworks import exceptions class OpsWorksConnection(AWSQueryConnection): """ AWS OpsWorks Welcome to the AWS OpsWorks API Reference . This guide provides descriptions, syntax, and usage examples about AWS OpsWorks actions and data types, including common parameters and error codes. AWS OpsWorks is an application management service that provides an integrated experience for overseeing the complete application lifecycle. For information about this product, go to the `AWS OpsWorks`_ details page. **Endpoints** AWS OpsWorks supports only one endpoint, opsworks.us- east-1.amazonaws.com (HTTPS), so you must connect to that endpoint. You can then use the API to direct AWS OpsWorks to create stacks in any AWS Region. **Chef Version** When you call CreateStack, CloneStack, or UpdateStack we recommend you use the `ConfigurationManager` parameter to specify the Chef version, 0.9 or 11.4. The default value is currently 0.9. However, we expect to change the default value to 11.4 in September 2013. """ APIVersion = "2013-02-18" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "opsworks.us-east-1.amazonaws.com" ServiceName = "OpsWorks" TargetPrefix = "OpsWorks_20130218" ResponseError = JSONResponseError _faults = { "ResourceNotFoundException": exceptions.ResourceNotFoundException, "ValidationException": exceptions.ValidationException, } def __init__(self, **kwargs): region = kwargs.get('region') if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def assign_volume(self, volume_id, instance_id=None): """ Assigns one of the stack's registered Amazon EBS volumes to a specified instance. The volume must first be registered with the stack by calling RegisterVolume. For more information, see ``_. :type volume_id: string :param volume_id: The volume ID. :type instance_id: string :param instance_id: The instance ID. 
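        A sketch (IDs are hypothetical; the volume must already have been
        registered with the stack via RegisterVolume)::

            from boto.opsworks.layer1 import OpsWorksConnection
            conn = OpsWorksConnection()
            conn.assign_volume('my-volume-id', instance_id='my-instance-id')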
""" params = {'VolumeId': volume_id, } if instance_id is not None: params['InstanceId'] = instance_id return self.make_request(action='AssignVolume', body=json.dumps(params)) def associate_elastic_ip(self, elastic_ip, instance_id=None): """ Associates one of the stack's registered Elastic IP addresses with a specified instance. The address must first be registered with the stack by calling RegisterElasticIp. For more information, see ``_. :type elastic_ip: string :param elastic_ip: The Elastic IP address. :type instance_id: string :param instance_id: The instance ID. """ params = {'ElasticIp': elastic_ip, } if instance_id is not None: params['InstanceId'] = instance_id return self.make_request(action='AssociateElasticIp', body=json.dumps(params)) def attach_elastic_load_balancer(self, elastic_load_balancer_name, layer_id): """ Attaches an Elastic Load Balancing load balancer to a specified layer. You must create the Elastic Load Balancing instance separately, by using the Elastic Load Balancing console, API, or CLI. For more information, see ` Elastic Load Balancing Developer Guide`_. :type elastic_load_balancer_name: string :param elastic_load_balancer_name: The Elastic Load Balancing instance's name. :type layer_id: string :param layer_id: The ID of the layer that the Elastic Load Balancing instance is to be attached to. """ params = { 'ElasticLoadBalancerName': elastic_load_balancer_name, 'LayerId': layer_id, } return self.make_request(action='AttachElasticLoadBalancer', body=json.dumps(params)) def clone_stack(self, source_stack_id, service_role_arn, name=None, region=None, vpc_id=None, attributes=None, default_instance_profile_arn=None, default_os=None, hostname_theme=None, default_availability_zone=None, default_subnet_id=None, custom_json=None, configuration_manager=None, use_custom_cookbooks=None, custom_cookbooks_source=None, default_ssh_key_name=None, clone_permissions=None, clone_app_ids=None, default_root_device_type=None): """ Creates a clone of a specified stack. For more information, see `Clone a Stack`_. :type source_stack_id: string :param source_stack_id: The source stack ID. :type name: string :param name: The cloned stack name. :type region: string :param region: The cloned stack AWS region, such as "us-east-1". For more information about AWS regions, see `Regions and Endpoints`_. :type vpc_id: string :param vpc_id: The ID of the VPC that the cloned stack is to be launched into. It must be in the specified region. All instances will be launched into this VPC, and you cannot change the ID later. + If your account supports EC2 Classic, the default value is no VPC. + If your account does not support EC2 Classic, the default value is the default VPC for the specified region. If the VPC ID corresponds to a default VPC and you have specified either the `DefaultAvailabilityZone` or the `DefaultSubnetId` parameter only, AWS OpsWorks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks sets these parameters to the first valid Availability Zone for the specified region and the corresponding default VPC subnet ID, respectively. If you specify a nondefault VPC ID, note the following: + It must belong to a VPC in your account that is in the specified region. + You must specify a value for `DefaultSubnetId`. For more information on how to use AWS OpsWorks with a VPC, see `Running a Stack in a VPC`_. For more information on default VPC and EC2 Classic, see `Supported Platforms`_. 
:type attributes: map :param attributes: A list of stack attributes and values as key/value pairs to be added to the cloned stack. :type service_role_arn: string :param service_role_arn: The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks to work with AWS resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an existing IAM role. If you create a stack by using the AWS OpsWorks console, it creates the role for you. You can obtain an existing stack's IAM ARN programmatically by calling DescribePermissions. For more information about IAM ARNs, see `Using Identifiers`_. You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the source stack's service role ARN, if you prefer, but you must do so explicitly. :type default_instance_profile_arn: string :param default_instance_profile_arn: The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances. For more information about IAM ARNs, see `Using Identifiers`_. :type default_os: string :param default_os: The cloned stack's default operating system, which must be set to `Amazon Linux` or `Ubuntu 12.04 LTS`. The default option is `Amazon Linux`. :type hostname_theme: string :param hostname_theme: The stack's host name theme, with spaces replaced by underscores. The theme is used to generate host names for the stack's instances. By default, `HostnameTheme` is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are: + Baked_Goods + Clouds + European_Cities + Fruits + Greek_Deities + Legendary_Creatures_from_Japan + Planets_and_Moons + Roman_Deities + Scottish_Islands + US_Cities + Wild_Cats To obtain a generated host name, call `GetHostNameSuggestion`, which returns a host name based on the current theme. :type default_availability_zone: string :param default_availability_zone: The cloned stack's default Availability Zone, which must be in the specified region. For more information, see `Regions and Endpoints`_. If you also specify a value for `DefaultSubnetId`, the subnet must be in the same zone. For more information, see the `VpcId` parameter description. :type default_subnet_id: string :param default_subnet_id: The stack's default subnet ID. All instances will be launched into this subnet unless you specify otherwise when you create the instance. If you also specify a value for `DefaultAvailabilityZone`, the subnet must be in the same zone. For information on default values and when this parameter is required, see the `VpcId` parameter description. :type custom_json: string :param custom_json: A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '"'.: `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"` For more information on custom JSON, see `Use Custom JSON to Modify the Stack Configuration JSON`_ :type configuration_manager: dict :param configuration_manager: The configuration manager. When you clone a stack we recommend that you use the configuration manager to specify the Chef version, 0.9 or 11.4. The default value is currently 0.9. However, we expect to change the default value to 11.4 in September 2013. :type use_custom_cookbooks: boolean :param use_custom_cookbooks: Whether to use custom cookbooks.
:type custom_cookbooks_source: dict :param custom_cookbooks_source: Contains the information required to retrieve an app or cookbook from a repository. For more information, see `Creating Apps`_ or `Custom Recipes and Cookbooks`_. :type default_ssh_key_name: string :param default_ssh_key_name: A default SSH key for the stack instances. You can override this value when you create or update an instance. :type clone_permissions: boolean :param clone_permissions: Whether to clone the source stack's permissions. :type clone_app_ids: list :param clone_app_ids: A list of source stack app IDs to be included in the cloned stack. :type default_root_device_type: string :param default_root_device_type: The default root device type. This value is used by default for all instances in the cloned stack, but you can override it when you create an instance. For more information, see `Storage for the Root Device`_. """ params = { 'SourceStackId': source_stack_id, 'ServiceRoleArn': service_role_arn, } if name is not None: params['Name'] = name if region is not None: params['Region'] = region if vpc_id is not None: params['VpcId'] = vpc_id if attributes is not None: params['Attributes'] = attributes if default_instance_profile_arn is not None: params['DefaultInstanceProfileArn'] = default_instance_profile_arn if default_os is not None: params['DefaultOs'] = default_os if hostname_theme is not None: params['HostnameTheme'] = hostname_theme if default_availability_zone is not None: params['DefaultAvailabilityZone'] = default_availability_zone if default_subnet_id is not None: params['DefaultSubnetId'] = default_subnet_id if custom_json is not None: params['CustomJson'] = custom_json if configuration_manager is not None: params['ConfigurationManager'] = configuration_manager if use_custom_cookbooks is not None: params['UseCustomCookbooks'] = use_custom_cookbooks if custom_cookbooks_source is not None: params['CustomCookbooksSource'] = custom_cookbooks_source if default_ssh_key_name is not None: params['DefaultSshKeyName'] = default_ssh_key_name if clone_permissions is not None: params['ClonePermissions'] = clone_permissions if clone_app_ids is not None: params['CloneAppIds'] = clone_app_ids if default_root_device_type is not None: params['DefaultRootDeviceType'] = default_root_device_type return self.make_request(action='CloneStack', body=json.dumps(params)) def create_app(self, stack_id, name, type, shortname=None, description=None, app_source=None, domains=None, enable_ssl=None, ssl_configuration=None, attributes=None): """ Creates an app for a specified stack. For more information, see `Creating Apps`_. :type stack_id: string :param stack_id: The stack ID. :type shortname: string :param shortname: The app's short name. :type name: string :param name: The app name. :type description: string :param description: A description of the app. :type type: string :param type: The app type. Each supported type is associated with a particular layer. For example, PHP applications are associated with a PHP layer. AWS OpsWorks deploys an application to those instances that are members of the corresponding layer. :type app_source: dict :param app_source: A `Source` object that specifies the app repository. :type domains: list :param domains: The app virtual host settings, with multiple domains separated by commas. For example: `'www.example.com, example.com'` :type enable_ssl: boolean :param enable_ssl: Whether to enable SSL for the app. 
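# Editor's note: a hedged sketch of clone_stack (defined above); not part of
# the original source. The stack ID and role ARN are placeholders, and the
# 'StackId' key in the decoded response follows the AWS CloneStack API.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
result = conn.clone_stack(
    source_stack_id='my-source-stack-id',
    service_role_arn='arn:aws:iam::111122223333:role/aws-opsworks-service-role',
    name='my-cloned-stack',
    clone_permissions=True)
print result['StackId']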
:type ssl_configuration: dict :param ssl_configuration: An `SslConfiguration` object with the SSL configuration. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes bag. """ params = {'StackId': stack_id, 'Name': name, 'Type': type, } if shortname is not None: params['Shortname'] = shortname if description is not None: params['Description'] = description if app_source is not None: params['AppSource'] = app_source if domains is not None: params['Domains'] = domains if enable_ssl is not None: params['EnableSsl'] = enable_ssl if ssl_configuration is not None: params['SslConfiguration'] = ssl_configuration if attributes is not None: params['Attributes'] = attributes return self.make_request(action='CreateApp', body=json.dumps(params)) def create_deployment(self, stack_id, command, app_id=None, instance_ids=None, comment=None, custom_json=None): """ Deploys a stack or app. + App deployment generates a `deploy` event, which runs the associated recipes and passes them a JSON stack configuration object that includes information about the app. + Stack deployment runs the `deploy` recipes but does not raise an event. For more information, see `Deploying Apps`_ and `Run Stack Commands`_. :type stack_id: string :param stack_id: The stack ID. :type app_id: string :param app_id: The app ID. This parameter is required for app deployments, but not for other deployment commands. :type instance_ids: list :param instance_ids: The instance IDs for the deployment targets. :type command: dict :param command: A `DeploymentCommand` object that specifies the deployment command and any associated arguments. :type comment: string :param comment: A user-defined comment. :type custom_json: string :param custom_json: A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '"'.: `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"` For more information on custom JSON, see `Use Custom JSON to Modify the Stack Configuration JSON`_. """ params = {'StackId': stack_id, 'Command': command, } if app_id is not None: params['AppId'] = app_id if instance_ids is not None: params['InstanceIds'] = instance_ids if comment is not None: params['Comment'] = comment if custom_json is not None: params['CustomJson'] = custom_json return self.make_request(action='CreateDeployment', body=json.dumps(params)) def create_instance(self, stack_id, layer_ids, instance_type, auto_scaling_type=None, hostname=None, os=None, ami_id=None, ssh_key_name=None, availability_zone=None, subnet_id=None, architecture=None, root_device_type=None, install_updates_on_boot=None): """ Creates an instance in a specified stack. For more information, see `Adding an Instance to a Layer`_. :type stack_id: string :param stack_id: The stack ID. :type layer_ids: list :param layer_ids: An array that contains the instance layer IDs. :type instance_type: string :param instance_type: The instance type. AWS OpsWorks supports all instance types except Cluster Compute, Cluster GPU, and High Memory Cluster. For more information, see `Instance Families and Types`_. The parameter values that you use to specify the various types are in the API Name column of the Available Instance Types table. 
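# Editor's note: a hedged sketch combining create_app and create_deployment
# (both defined above); not part of the original source. The IDs and repository
# URL are placeholders; the {'Name': 'deploy'} command shape follows the AWS
# DeploymentCommand type, and the response keys follow the AWS API responses.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
app = conn.create_app('my-stack-id', 'myapp', 'php',
                      app_source={'Type': 'git',
                                  'Url': 'git://github.com/example/myapp.git'})
deployment = conn.create_deployment('my-stack-id', {'Name': 'deploy'},
                                    app_id=app['AppId'])
print deployment['DeploymentId']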
:type auto_scaling_type: string :param auto_scaling_type: The instance auto scaling type, which has three possible values: + **AlwaysRunning**: A 24/7 instance, which is not affected by auto scaling. + **TimeBasedAutoScaling**: A time-based auto scaling instance, which is started and stopped based on a specified schedule. To specify the schedule, call SetTimeBasedAutoScaling. + **LoadBasedAutoScaling**: A load-based auto scaling instance, which is started and stopped based on load metrics. To use load-based auto scaling, you must enable it for the instance layer and configure the thresholds by calling SetLoadBasedAutoScaling. :type hostname: string :param hostname: The instance host name. :type os: string :param os: The instance operating system, which must be set to one of the following. + Standard operating systems: `Amazon Linux` or `Ubuntu 12.04 LTS` + Custom AMIs: `Custom` The default option is `Amazon Linux`. If you set this parameter to `Custom`, you must use the CreateInstance action's AmiId parameter to specify the custom AMI that you want to use. For more information on the standard operating systems, see `Operating Systems`_. For more information on how to use custom AMIs with OpsWorks, see `Using Custom AMIs`_. :type ami_id: string :param ami_id: A custom AMI ID to be used to create the instance. The AMI should be based on one of the standard AWS OpsWorks AMIs: Amazon Linux or Ubuntu 12.04 LTS. For more information, see `Instances`_. :type ssh_key_name: string :param ssh_key_name: The instance SSH key name. :type availability_zone: string :param availability_zone: The instance Availability Zone. For more information, see `Regions and Endpoints`_. :type subnet_id: string :param subnet_id: The ID of the instance's subnet. If the stack is running in a VPC, you can use this parameter to override the stack's default subnet ID value and direct AWS OpsWorks to launch the instance in a different subnet. :type architecture: string :param architecture: The instance architecture. Instance types do not necessarily support both architectures. For a list of the architectures that are supported by the different instance types, see `Instance Families and Types`_. :type root_device_type: string :param root_device_type: The instance root device type. For more information, see `Storage for the Root Device`_. :type install_updates_on_boot: boolean :param install_updates_on_boot: Whether to install operating system and package updates when the instance boots. The default value is `True`. To control when updates are installed, set this value to `False`. You must then update your instances manually by using CreateDeployment to run the `update_dependencies` stack command or manually running `yum` (Amazon Linux) or `apt-get` (Ubuntu) on the instances. We strongly recommend using the default value of `True`, to ensure that your instances have the latest security updates.
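# Editor's note: a hedged sketch of the manual-update path described above
# (install_updates_on_boot=False); not part of the original source. It runs
# the update_dependencies stack command via create_deployment; the stack ID is
# a placeholder and the command shape follows the AWS DeploymentCommand type.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
conn.create_deployment('my-stack-id', {'Name': 'update_dependencies'})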
""" params = { 'StackId': stack_id, 'LayerIds': layer_ids, 'InstanceType': instance_type, } if auto_scaling_type is not None: params['AutoScalingType'] = auto_scaling_type if hostname is not None: params['Hostname'] = hostname if os is not None: params['Os'] = os if ami_id is not None: params['AmiId'] = ami_id if ssh_key_name is not None: params['SshKeyName'] = ssh_key_name if availability_zone is not None: params['AvailabilityZone'] = availability_zone if subnet_id is not None: params['SubnetId'] = subnet_id if architecture is not None: params['Architecture'] = architecture if root_device_type is not None: params['RootDeviceType'] = root_device_type if install_updates_on_boot is not None: params['InstallUpdatesOnBoot'] = install_updates_on_boot return self.make_request(action='CreateInstance', body=json.dumps(params)) def create_layer(self, stack_id, type, name, shortname, attributes=None, custom_instance_profile_arn=None, custom_security_group_ids=None, packages=None, volume_configurations=None, enable_auto_healing=None, auto_assign_elastic_ips=None, auto_assign_public_ips=None, custom_recipes=None, install_updates_on_boot=None): """ Creates a layer. For more information, see `How to Create a Layer`_. You should use **CreateLayer** for noncustom layer types such as PHP App Server only if the stack does not have an existing layer of that type. A stack can have at most one instance of each noncustom layer; if you attempt to create a second instance, **CreateLayer** fails. A stack can have an arbitrary number of custom layers, so you can call **CreateLayer** as many times as you like for that layer type. :type stack_id: string :param stack_id: The layer stack ID. :type type: string :param type: The layer type. A stack cannot have more than one layer of the same type. This parameter must be set to one of the following: + lb: An HAProxy layer + web: A Static Web Server layer + rails-app: A Rails App Server layer + php-app: A PHP App Server layer + nodejs-app: A Node.js App Server layer + memcached: A Memcached layer + db-master: A MySQL layer + monitoring-master: A Ganglia layer + custom: A custom layer :type name: string :param name: The layer name, which is used by the console. :type shortname: string :param shortname: The layer short name, which is used internally by AWS OpsWorks and by Chef recipes. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters, which are limited to the alphanumeric characters, '-', '_', and '.'. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes bag. :type custom_instance_profile_arn: string :param custom_instance_profile_arn: The ARN of an IAM profile that to be used for the layer's EC2 instances. For more information about IAM ARNs, see `Using Identifiers`_. :type custom_security_group_ids: list :param custom_security_group_ids: An array containing the layer custom security group IDs. :type packages: list :param packages: An array of `Package` objects that describe the layer packages. :type volume_configurations: list :param volume_configurations: A `VolumeConfigurations` object that describes the layer Amazon EBS volumes. :type enable_auto_healing: boolean :param enable_auto_healing: Whether to disable auto healing for the layer. :type auto_assign_elastic_ips: boolean :param auto_assign_elastic_ips: Whether to automatically assign an `Elastic IP address`_ to the layer's instances. 
For more information, see `How to Edit a Layer`_. :type auto_assign_public_ips: boolean :param auto_assign_public_ips: For stacks that are running in a VPC, whether to automatically assign a public IP address to the layer's instances. For more information, see `How to Edit a Layer`_. :type custom_recipes: dict :param custom_recipes: A `LayerCustomRecipes` object that specifies the layer custom recipes. :type install_updates_on_boot: boolean :param install_updates_on_boot: Whether to install operating system and package updates when the instance boots. The default value is `True`. To control when updates are installed, set this value to `False`. You must then update your instances manually by using CreateDeployment to run the `update_dependencies` stack command or manually running `yum` (Amazon Linux) or `apt-get` (Ubuntu) on the instances. We strongly recommend using the default value of `True`, to ensure that your instances have the latest security updates. """ params = { 'StackId': stack_id, 'Type': type, 'Name': name, 'Shortname': shortname, } if attributes is not None: params['Attributes'] = attributes if custom_instance_profile_arn is not None: params['CustomInstanceProfileArn'] = custom_instance_profile_arn if custom_security_group_ids is not None: params['CustomSecurityGroupIds'] = custom_security_group_ids if packages is not None: params['Packages'] = packages if volume_configurations is not None: params['VolumeConfigurations'] = volume_configurations if enable_auto_healing is not None: params['EnableAutoHealing'] = enable_auto_healing if auto_assign_elastic_ips is not None: params['AutoAssignElasticIps'] = auto_assign_elastic_ips if auto_assign_public_ips is not None: params['AutoAssignPublicIps'] = auto_assign_public_ips if custom_recipes is not None: params['CustomRecipes'] = custom_recipes if install_updates_on_boot is not None: params['InstallUpdatesOnBoot'] = install_updates_on_boot return self.make_request(action='CreateLayer', body=json.dumps(params)) def create_stack(self, name, region, service_role_arn, default_instance_profile_arn, vpc_id=None, attributes=None, default_os=None, hostname_theme=None, default_availability_zone=None, default_subnet_id=None, custom_json=None, configuration_manager=None, use_custom_cookbooks=None, custom_cookbooks_source=None, default_ssh_key_name=None, default_root_device_type=None): """ Creates a new stack. For more information, see `Create a New Stack`_. :type name: string :param name: The stack name. :type region: string :param region: The stack AWS region, such as "us-east-1". For more information about Amazon regions, see `Regions and Endpoints`_. :type vpc_id: string :param vpc_id: The ID of the VPC that the stack is to be launched into. It must be in the specified region. All instances will be launched into this VPC, and you cannot change the ID later. + If your account supports EC2 Classic, the default value is no VPC. + If your account does not support EC2 Classic, the default value is the default VPC for the specified region. If the VPC ID corresponds to a default VPC and you have specified either the `DefaultAvailabilityZone` or the `DefaultSubnetId` parameter only, AWS OpsWorks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks sets these parameters to the first valid Availability Zone for the specified region and the corresponding default VPC subnet ID, respectively. 
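# Editor's note: a hedged sketch of create_layer (defined above); not part of
# the original source. The stack ID is a placeholder; 'php-app' is one of the
# layer types listed in the docstring, and the 'LayerId' response key follows
# the AWS CreateLayer API.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
layer = conn.create_layer('my-stack-id', 'php-app', 'PHP App Server',
                          'php-app', enable_auto_healing=True)
print layer['LayerId']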
If you specify a nondefault VPC ID, note the following: + It must belong to a VPC in your account that is in the specified region. + You must specify a value for `DefaultSubnetId`. For more information on how to use AWS OpsWorks with a VPC, see `Running a Stack in a VPC`_. For more information on default VPC and EC2 Classic, see `Supported Platforms`_. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes bag. :type service_role_arn: string :param service_role_arn: The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks to work with AWS resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an existing IAM role. For more information about IAM ARNs, see `Using Identifiers`_. :type default_instance_profile_arn: string :param default_instance_profile_arn: The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances. For more information about IAM ARNs, see `Using Identifiers`_. :type default_os: string :param default_os: The stack's default operating system, which must be set to `Amazon Linux` or `Ubuntu 12.04 LTS`. The default option is `Amazon Linux`. :type hostname_theme: string :param hostname_theme: The stack's host name theme, with spaces replaced by underscores. The theme is used to generate host names for the stack's instances. By default, `HostnameTheme` is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are: + Baked_Goods + Clouds + European_Cities + Fruits + Greek_Deities + Legendary_Creatures_from_Japan + Planets_and_Moons + Roman_Deities + Scottish_Islands + US_Cities + Wild_Cats To obtain a generated host name, call `GetHostNameSuggestion`, which returns a host name based on the current theme. :type default_availability_zone: string :param default_availability_zone: The stack's default Availability Zone, which must be in the specified region. For more information, see `Regions and Endpoints`_. If you also specify a value for `DefaultSubnetId`, the subnet must be in the same zone. For more information, see the `VpcId` parameter description. :type default_subnet_id: string :param default_subnet_id: The stack's default subnet ID. All instances will be launched into this subnet unless you specify otherwise when you create the instance. If you also specify a value for `DefaultAvailabilityZone`, the subnet must be in that zone. For information on default values and when this parameter is required, see the `VpcId` parameter description. :type custom_json: string :param custom_json: A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '"'.: `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"` For more information on custom JSON, see `Use Custom JSON to Modify the Stack Configuration JSON`_. :type configuration_manager: dict :param configuration_manager: The configuration manager. When you create a stack, we recommend that you use the configuration manager to specify the Chef version, 0.9 or 11.4. The default value is currently 0.9. However, we expect to change the default value to 11.4 in September 2013. :type use_custom_cookbooks: boolean :param use_custom_cookbooks: Whether the stack uses custom cookbooks.
:type custom_cookbooks_source: dict :param custom_cookbooks_source: Contains the information required to retrieve an app or cookbook from a repository. For more information, see `Creating Apps`_ or `Custom Recipes and Cookbooks`_. :type default_ssh_key_name: string :param default_ssh_key_name: A default SSH key for the stack instances. You can override this value when you create or update an instance. :type default_root_device_type: string :param default_root_device_type: The default root device type. This value is used by default for all instances in the stack, but you can override it when you create an instance. For more information, see `Storage for the Root Device`_. """ params = { 'Name': name, 'Region': region, 'ServiceRoleArn': service_role_arn, 'DefaultInstanceProfileArn': default_instance_profile_arn, } if vpc_id is not None: params['VpcId'] = vpc_id if attributes is not None: params['Attributes'] = attributes if default_os is not None: params['DefaultOs'] = default_os if hostname_theme is not None: params['HostnameTheme'] = hostname_theme if default_availability_zone is not None: params['DefaultAvailabilityZone'] = default_availability_zone if default_subnet_id is not None: params['DefaultSubnetId'] = default_subnet_id if custom_json is not None: params['CustomJson'] = custom_json if configuration_manager is not None: params['ConfigurationManager'] = configuration_manager if use_custom_cookbooks is not None: params['UseCustomCookbooks'] = use_custom_cookbooks if custom_cookbooks_source is not None: params['CustomCookbooksSource'] = custom_cookbooks_source if default_ssh_key_name is not None: params['DefaultSshKeyName'] = default_ssh_key_name if default_root_device_type is not None: params['DefaultRootDeviceType'] = default_root_device_type return self.make_request(action='CreateStack', body=json.dumps(params)) def create_user_profile(self, iam_user_arn, ssh_username=None, ssh_public_key=None): """ Creates a new user profile. :type iam_user_arn: string :param iam_user_arn: The user's IAM ARN. :type ssh_username: string :param ssh_username: The user's SSH user name. :type ssh_public_key: string :param ssh_public_key: The user's public SSH key. """ params = {'IamUserArn': iam_user_arn, } if ssh_username is not None: params['SshUsername'] = ssh_username if ssh_public_key is not None: params['SshPublicKey'] = ssh_public_key return self.make_request(action='CreateUserProfile', body=json.dumps(params)) def delete_app(self, app_id): """ Deletes a specified app. :type app_id: string :param app_id: The app ID. """ params = {'AppId': app_id, } return self.make_request(action='DeleteApp', body=json.dumps(params)) def delete_instance(self, instance_id, delete_elastic_ip=None, delete_volumes=None): """ Deletes a specified instance. You must stop an instance before you can delete it. For more information, see `Deleting Instances`_. :type instance_id: string :param instance_id: The instance ID. :type delete_elastic_ip: boolean :param delete_elastic_ip: Whether to delete the instance Elastic IP address. :type delete_volumes: boolean :param delete_volumes: Whether to delete the instance Amazon EBS volumes. """ params = {'InstanceId': instance_id, } if delete_elastic_ip is not None: params['DeleteElasticIp'] = delete_elastic_ip if delete_volumes is not None: params['DeleteVolumes'] = delete_volumes return self.make_request(action='DeleteInstance', body=json.dumps(params)) def delete_layer(self, layer_id): """ Deletes a specified layer.
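# Editor's note: a hedged sketch of create_stack (defined above); not part of
# the original source. The ARNs are placeholders; the 'StackId' response key
# follows the AWS CreateStack API.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
stack = conn.create_stack(
    'my-stack', 'us-east-1',
    'arn:aws:iam::111122223333:role/aws-opsworks-service-role',
    'arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role')
print stack['StackId']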
You must first stop and then delete all associated instances. For more information, see `How to Delete a Layer`_. :type layer_id: string :param layer_id: The layer ID. """ params = {'LayerId': layer_id, } return self.make_request(action='DeleteLayer', body=json.dumps(params)) def delete_stack(self, stack_id): """ Deletes a specified stack. You must first delete all instances, layers, and apps. For more information, see `Shut Down a Stack`_. :type stack_id: string :param stack_id: The stack ID. """ params = {'StackId': stack_id, } return self.make_request(action='DeleteStack', body=json.dumps(params)) def delete_user_profile(self, iam_user_arn): """ Deletes a user profile. :type iam_user_arn: string :param iam_user_arn: The user's IAM ARN. """ params = {'IamUserArn': iam_user_arn, } return self.make_request(action='DeleteUserProfile', body=json.dumps(params)) def deregister_elastic_ip(self, elastic_ip): """ Deregisters a specified Elastic IP address. The address can then be registered by another stack. For more information, see ``_. :type elastic_ip: string :param elastic_ip: The Elastic IP address. """ params = {'ElasticIp': elastic_ip, } return self.make_request(action='DeregisterElasticIp', body=json.dumps(params)) def deregister_volume(self, volume_id): """ Deregisters an Amazon EBS volume. The volume can then be registered by another stack. For more information, see ``_. :type volume_id: string :param volume_id: The volume ID. """ params = {'VolumeId': volume_id, } return self.make_request(action='DeregisterVolume', body=json.dumps(params)) def describe_apps(self, stack_id=None, app_ids=None): """ Requests a description of a specified set of apps. You must specify at least one of the parameters. :type stack_id: string :param stack_id: The app stack ID. If you use this parameter, `DescribeApps` returns a description of the apps in the specified stack. :type app_ids: list :param app_ids: An array of app IDs for the apps to be described. If you use this parameter, `DescribeApps` returns a description of the specified apps. Otherwise, it returns a description of every app. """ params = {} if stack_id is not None: params['StackId'] = stack_id if app_ids is not None: params['AppIds'] = app_ids return self.make_request(action='DescribeApps', body=json.dumps(params)) def describe_commands(self, deployment_id=None, instance_id=None, command_ids=None): """ Describes the results of specified commands. You must specify at least one of the parameters. :type deployment_id: string :param deployment_id: The deployment ID. If you include this parameter, `DescribeCommands` returns a description of the commands associated with the specified deployment. :type instance_id: string :param instance_id: The instance ID. If you include this parameter, `DescribeCommands` returns a description of the commands associated with the specified instance. :type command_ids: list :param command_ids: An array of command IDs. If you include this parameter, `DescribeCommands` returns a description of the specified commands. Otherwise, it returns a description of every command. """ params = {} if deployment_id is not None: params['DeploymentId'] = deployment_id if instance_id is not None: params['InstanceId'] = instance_id if command_ids is not None: params['CommandIds'] = command_ids return self.make_request(action='DescribeCommands', body=json.dumps(params)) def describe_deployments(self, stack_id=None, app_id=None, deployment_ids=None): """ Requests a description of a specified set of deployments. 
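# Editor's note: a hedged sketch of the describe_* calls above; not part of
# the original source. The stack ID is a placeholder; the 'Apps' response key
# follows the AWS DescribeApps API.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
for app in conn.describe_apps(stack_id='my-stack-id')['Apps']:
    print app['AppId'], app['Name']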
You must specify at least one of the parameters. :type stack_id: string :param stack_id: The stack ID. If you include this parameter, `DescribeDeployments` returns a description of the commands associated with the specified stack. :type app_id: string :param app_id: The app ID. If you include this parameter, `DescribeDeployments` returns a description of the commands associated with the specified app. :type deployment_ids: list :param deployment_ids: An array of deployment IDs to be described. If you include this parameter, `DescribeDeployments` returns a description of the specified deployments. Otherwise, it returns a description of every deployment. """ params = {} if stack_id is not None: params['StackId'] = stack_id if app_id is not None: params['AppId'] = app_id if deployment_ids is not None: params['DeploymentIds'] = deployment_ids return self.make_request(action='DescribeDeployments', body=json.dumps(params)) def describe_elastic_ips(self, instance_id=None, stack_id=None, ips=None): """ Describes `Elastic IP addresses`_. You must specify at least one of the parameters. :type instance_id: string :param instance_id: The instance ID. If you include this parameter, `DescribeElasticIps` returns a description of the Elastic IP addresses associated with the specified instance. :type stack_id: string :param stack_id: A stack ID. If you include this parameter, `DescribeElasticIps` returns a description of the Elastic IP addresses that are registered with the specified stack. :type ips: list :param ips: An array of Elastic IP addresses to be described. If you include this parameter, `DescribeElasticIps` returns a description of the specified Elastic IP addresses. Otherwise, it returns a description of every Elastic IP address. """ params = {} if instance_id is not None: params['InstanceId'] = instance_id if stack_id is not None: params['StackId'] = stack_id if ips is not None: params['Ips'] = ips return self.make_request(action='DescribeElasticIps', body=json.dumps(params)) def describe_elastic_load_balancers(self, stack_id=None, layer_ids=None): """ Describes a stack's Elastic Load Balancing instances. You must specify at least one of the parameters. :type stack_id: string :param stack_id: A stack ID. The action describes the stack's Elastic Load Balancing instances. :type layer_ids: list :param layer_ids: A list of layer IDs. The action describes the Elastic Load Balancing instances for the specified layers. """ params = {} if stack_id is not None: params['StackId'] = stack_id if layer_ids is not None: params['LayerIds'] = layer_ids return self.make_request(action='DescribeElasticLoadBalancers', body=json.dumps(params)) def describe_instances(self, stack_id=None, layer_id=None, instance_ids=None): """ Requests a description of a set of instances. You must specify at least one of the parameters. :type stack_id: string :param stack_id: A stack ID. If you use this parameter, `DescribeInstances` returns descriptions of the instances associated with the specified stack. :type layer_id: string :param layer_id: A layer ID. If you use this parameter, `DescribeInstances` returns descriptions of the instances associated with the specified layer. :type instance_ids: list :param instance_ids: An array of instance IDs to be described. If you use this parameter, `DescribeInstances` returns a description of the specified instances. Otherwise, it returns a description of every instance. 
""" params = {} if stack_id is not None: params['StackId'] = stack_id if layer_id is not None: params['LayerId'] = layer_id if instance_ids is not None: params['InstanceIds'] = instance_ids return self.make_request(action='DescribeInstances', body=json.dumps(params)) def describe_layers(self, stack_id=None, layer_ids=None): """ Requests a description of one or more layers in a specified stack. You must specify at least one of the parameters. :type stack_id: string :param stack_id: The stack ID. :type layer_ids: list :param layer_ids: An array of layer IDs that specify the layers to be described. If you omit this parameter, `DescribeLayers` returns a description of every layer in the specified stack. """ params = {} if stack_id is not None: params['StackId'] = stack_id if layer_ids is not None: params['LayerIds'] = layer_ids return self.make_request(action='DescribeLayers', body=json.dumps(params)) def describe_load_based_auto_scaling(self, layer_ids): """ Describes load-based auto scaling configurations for specified layers. You must specify at least one of the parameters. :type layer_ids: list :param layer_ids: An array of layer IDs. """ params = {'LayerIds': layer_ids, } return self.make_request(action='DescribeLoadBasedAutoScaling', body=json.dumps(params)) def describe_permissions(self, iam_user_arn, stack_id): """ Describes the permissions for a specified stack. :type iam_user_arn: string :param iam_user_arn: The user's IAM ARN. For more information about IAM ARNs, see `Using Identifiers`_. :type stack_id: string :param stack_id: The stack ID. """ params = {'IamUserArn': iam_user_arn, 'StackId': stack_id, } return self.make_request(action='DescribePermissions', body=json.dumps(params)) def describe_raid_arrays(self, instance_id=None, raid_array_ids=None): """ Describe an instance's RAID arrays. You must specify at least one of the parameters. :type instance_id: string :param instance_id: The instance ID. If you use this parameter, `DescribeRaidArrays` returns descriptions of the RAID arrays associated with the specified instance. :type raid_array_ids: list :param raid_array_ids: An array of RAID array IDs. If you use this parameter, `DescribeRaidArrays` returns descriptions of the specified arrays. Otherwise, it returns a description of every array. """ params = {} if instance_id is not None: params['InstanceId'] = instance_id if raid_array_ids is not None: params['RaidArrayIds'] = raid_array_ids return self.make_request(action='DescribeRaidArrays', body=json.dumps(params)) def describe_service_errors(self, stack_id=None, instance_id=None, service_error_ids=None): """ Describes AWS OpsWorks service errors. :type stack_id: string :param stack_id: The stack ID. If you use this parameter, `DescribeServiceErrors` returns descriptions of the errors associated with the specified stack. :type instance_id: string :param instance_id: The instance ID. If you use this parameter, `DescribeServiceErrors` returns descriptions of the errors associated with the specified instance. :type service_error_ids: list :param service_error_ids: An array of service error IDs. If you use this parameter, `DescribeServiceErrors` returns descriptions of the specified errors. Otherwise, it returns a description of every error. 
""" params = {} if stack_id is not None: params['StackId'] = stack_id if instance_id is not None: params['InstanceId'] = instance_id if service_error_ids is not None: params['ServiceErrorIds'] = service_error_ids return self.make_request(action='DescribeServiceErrors', body=json.dumps(params)) def describe_stacks(self, stack_ids=None): """ Requests a description of one or more stacks. :type stack_ids: list :param stack_ids: An array of stack IDs that specify the stacks to be described. If you omit this parameter, `DescribeStacks` returns a description of every stack. """ params = {} if stack_ids is not None: params['StackIds'] = stack_ids return self.make_request(action='DescribeStacks', body=json.dumps(params)) def describe_time_based_auto_scaling(self, instance_ids): """ Describes time-based auto scaling configurations for specified instances. You must specify at least one of the parameters. :type instance_ids: list :param instance_ids: An array of instance IDs. """ params = {'InstanceIds': instance_ids, } return self.make_request(action='DescribeTimeBasedAutoScaling', body=json.dumps(params)) def describe_user_profiles(self, iam_user_arns): """ Describe specified users. :type iam_user_arns: list :param iam_user_arns: An array of IAM user ARNs that identify the users to be described. """ params = {'IamUserArns': iam_user_arns, } return self.make_request(action='DescribeUserProfiles', body=json.dumps(params)) def describe_volumes(self, instance_id=None, stack_id=None, raid_array_id=None, volume_ids=None): """ Describes an instance's Amazon EBS volumes. You must specify at least one of the parameters. :type instance_id: string :param instance_id: The instance ID. If you use this parameter, `DescribeVolumes` returns descriptions of the volumes associated with the specified instance. :type stack_id: string :param stack_id: A stack ID. The action describes the stack's registered Amazon EBS volumes. :type raid_array_id: string :param raid_array_id: The RAID array ID. If you use this parameter, `DescribeVolumes` returns descriptions of the volumes associated with the specified RAID array. :type volume_ids: list :param volume_ids: Am array of volume IDs. If you use this parameter, `DescribeVolumes` returns descriptions of the specified volumes. Otherwise, it returns a description of every volume. """ params = {} if instance_id is not None: params['InstanceId'] = instance_id if stack_id is not None: params['StackId'] = stack_id if raid_array_id is not None: params['RaidArrayId'] = raid_array_id if volume_ids is not None: params['VolumeIds'] = volume_ids return self.make_request(action='DescribeVolumes', body=json.dumps(params)) def detach_elastic_load_balancer(self, elastic_load_balancer_name, layer_id): """ Detaches a specified Elastic Load Balancing instance from its layer. :type elastic_load_balancer_name: string :param elastic_load_balancer_name: The Elastic Load Balancing instance's name. :type layer_id: string :param layer_id: The ID of the layer that the Elastic Load Balancing instance is attached to. """ params = { 'ElasticLoadBalancerName': elastic_load_balancer_name, 'LayerId': layer_id, } return self.make_request(action='DetachElasticLoadBalancer', body=json.dumps(params)) def disassociate_elastic_ip(self, elastic_ip): """ Disassociates an Elastic IP address from its instance. The address remains registered with the stack. For more information, see ``_. :type elastic_ip: string :param elastic_ip: The Elastic IP address. 
""" params = {'ElasticIp': elastic_ip, } return self.make_request(action='DisassociateElasticIp', body=json.dumps(params)) def get_hostname_suggestion(self, layer_id): """ Gets a generated host name for the specified layer, based on the current host name theme. :type layer_id: string :param layer_id: The layer ID. """ params = {'LayerId': layer_id, } return self.make_request(action='GetHostnameSuggestion', body=json.dumps(params)) def reboot_instance(self, instance_id): """ Reboots a specified instance. For more information, see `Starting, Stopping, and Rebooting Instances`_. :type instance_id: string :param instance_id: The instance ID. """ params = {'InstanceId': instance_id, } return self.make_request(action='RebootInstance', body=json.dumps(params)) def register_elastic_ip(self, elastic_ip, stack_id): """ Registers an Elastic IP address with a specified stack. An address can be registered with only one stack at a time. If the address is already registered, you must first deregister it by calling DeregisterElasticIp. For more information, see ``_. :type elastic_ip: string :param elastic_ip: The Elastic IP address. :type stack_id: string :param stack_id: The stack ID. """ params = {'ElasticIp': elastic_ip, 'StackId': stack_id, } return self.make_request(action='RegisterElasticIp', body=json.dumps(params)) def register_volume(self, stack_id, ec_2_volume_id=None): """ Registers an Amazon EBS volume with a specified stack. A volume can be registered with only one stack at a time. If the volume is already registered, you must first deregister it by calling DeregisterVolume. For more information, see ``_. :type ec_2_volume_id: string :param ec_2_volume_id: The Amazon EBS volume ID. :type stack_id: string :param stack_id: The stack ID. """ params = {'StackId': stack_id, } if ec_2_volume_id is not None: params['Ec2VolumeId'] = ec_2_volume_id return self.make_request(action='RegisterVolume', body=json.dumps(params)) def set_load_based_auto_scaling(self, layer_id, enable=None, up_scaling=None, down_scaling=None): """ Specify the load-based auto scaling configuration for a specified layer. For more information, see `Managing Load with Time-based and Load-based Instances`_. To use load-based auto scaling, you must create a set of load- based auto scaling instances. Load-based auto scaling operates only on the instances from that set, so you must ensure that you have created enough instances to handle the maximum anticipated load. :type layer_id: string :param layer_id: The layer ID. :type enable: boolean :param enable: Enables load-based auto scaling for the layer. :type up_scaling: dict :param up_scaling: An `AutoScalingThresholds` object with the upscaling threshold configuration. If the load exceeds these thresholds for a specified amount of time, AWS OpsWorks starts a specified number of instances. :type down_scaling: dict :param down_scaling: An `AutoScalingThresholds` object with the downscaling threshold configuration. If the load falls below these thresholds for a specified amount of time, AWS OpsWorks stops a specified number of instances. """ params = {'LayerId': layer_id, } if enable is not None: params['Enable'] = enable if up_scaling is not None: params['UpScaling'] = up_scaling if down_scaling is not None: params['DownScaling'] = down_scaling return self.make_request(action='SetLoadBasedAutoScaling', body=json.dumps(params)) def set_permission(self, stack_id, iam_user_arn, allow_ssh=None, allow_sudo=None): """ Specifies a stack's permissions. 
For more information, see `Security and Permissions`_. :type stack_id: string :param stack_id: The stack ID. :type iam_user_arn: string :param iam_user_arn: The user's IAM ARN. :type allow_ssh: boolean :param allow_ssh: The user is allowed to use SSH to communicate with the instance. :type allow_sudo: boolean :param allow_sudo: The user is allowed to use **sudo** to elevate privileges. """ params = {'StackId': stack_id, 'IamUserArn': iam_user_arn, } if allow_ssh is not None: params['AllowSsh'] = allow_ssh if allow_sudo is not None: params['AllowSudo'] = allow_sudo return self.make_request(action='SetPermission', body=json.dumps(params)) def set_time_based_auto_scaling(self, instance_id, auto_scaling_schedule=None): """ Specify the time-based auto scaling configuration for a specified instance. For more information, see `Managing Load with Time-based and Load-based Instances`_. :type instance_id: string :param instance_id: The instance ID. :type auto_scaling_schedule: dict :param auto_scaling_schedule: An `AutoScalingSchedule` with the instance schedule. """ params = {'InstanceId': instance_id, } if auto_scaling_schedule is not None: params['AutoScalingSchedule'] = auto_scaling_schedule return self.make_request(action='SetTimeBasedAutoScaling', body=json.dumps(params)) def start_instance(self, instance_id): """ Starts a specified instance. For more information, see `Starting, Stopping, and Rebooting Instances`_. :type instance_id: string :param instance_id: The instance ID. """ params = {'InstanceId': instance_id, } return self.make_request(action='StartInstance', body=json.dumps(params)) def start_stack(self, stack_id): """ Starts a stack's instances. :type stack_id: string :param stack_id: The stack ID. """ params = {'StackId': stack_id, } return self.make_request(action='StartStack', body=json.dumps(params)) def stop_instance(self, instance_id): """ Stops a specified instance. When you stop a standard instance, the data disappears and must be reinstalled when you restart the instance. You can stop an Amazon EBS-backed instance without losing data. For more information, see `Starting, Stopping, and Rebooting Instances`_. :type instance_id: string :param instance_id: The instance ID. """ params = {'InstanceId': instance_id, } return self.make_request(action='StopInstance', body=json.dumps(params)) def stop_stack(self, stack_id): """ Stops a specified stack. :type stack_id: string :param stack_id: The stack ID. """ params = {'StackId': stack_id, } return self.make_request(action='StopStack', body=json.dumps(params)) def unassign_volume(self, volume_id): """ Unassigns an assigned Amazon EBS volume. The volume remains registered with the stack. For more information, see ``_. :type volume_id: string :param volume_id: The volume ID. """ params = {'VolumeId': volume_id, } return self.make_request(action='UnassignVolume', body=json.dumps(params)) def update_app(self, app_id, name=None, description=None, type=None, app_source=None, domains=None, enable_ssl=None, ssl_configuration=None, attributes=None): """ Updates a specified app. :type app_id: string :param app_id: The app ID. :type name: string :param name: The app name. :type description: string :param description: A description of the app. :type type: string :param type: The app type. :type app_source: dict :param app_source: A `Source` object that specifies the app repository. :type domains: list :param domains: The app's virtual host settings, with multiple domains separated by commas.
For example: `'www.example.com, example.com'` :type enable_ssl: boolean :param enable_ssl: Whether SSL is enabled for the app. :type ssl_configuration: dict :param ssl_configuration: An `SslConfiguration` object with the SSL configuration. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes bag. """ params = {'AppId': app_id, } if name is not None: params['Name'] = name if description is not None: params['Description'] = description if type is not None: params['Type'] = type if app_source is not None: params['AppSource'] = app_source if domains is not None: params['Domains'] = domains if enable_ssl is not None: params['EnableSsl'] = enable_ssl if ssl_configuration is not None: params['SslConfiguration'] = ssl_configuration if attributes is not None: params['Attributes'] = attributes return self.make_request(action='UpdateApp', body=json.dumps(params)) def update_elastic_ip(self, elastic_ip, name=None): """ Updates a registered Elastic IP address's name. For more information, see ``_. :type elastic_ip: string :param elastic_ip: The address. :type name: string :param name: The new name. """ params = {'ElasticIp': elastic_ip, } if name is not None: params['Name'] = name return self.make_request(action='UpdateElasticIp', body=json.dumps(params)) def update_instance(self, instance_id, layer_ids=None, instance_type=None, auto_scaling_type=None, hostname=None, os=None, ami_id=None, ssh_key_name=None, architecture=None, install_updates_on_boot=None): """ Updates a specified instance. :type instance_id: string :param instance_id: The instance ID. :type layer_ids: list :param layer_ids: The instance's layer IDs. :type instance_type: string :param instance_type: The instance type. AWS OpsWorks supports all instance types except Cluster Compute, Cluster GPU, and High Memory Cluster. For more information, see `Instance Families and Types`_. The parameter values that you use to specify the various types are in the API Name column of the Available Instance Types table. :type auto_scaling_type: string :param auto_scaling_type: The instance's auto scaling type, which has three possible values: + **AlwaysRunning**: A 24/7 instance, which is not affected by auto scaling. + **TimeBasedAutoScaling**: A time-based auto scaling instance, which is started and stopped based on a specified schedule. + **LoadBasedAutoScaling**: A load-based auto scaling instance, which is started and stopped based on load metrics. :type hostname: string :param hostname: The instance host name. :type os: string :param os: The instance operating system, which must be set to one of the following. + Standard operating systems: `Amazon Linux` or `Ubuntu 12.04 LTS` + Custom AMIs: `Custom` The default option is `Amazon Linux`. If you set this parameter to `Custom`, you must use the CreateInstance action's AmiId parameter to specify the custom AMI that you want to use. For more information on the standard operating systems, see `Operating Systems`_. For more information on how to use custom AMIs with OpsWorks, see `Using Custom AMIs`_. :type ami_id: string :param ami_id: A custom AMI ID to be used to create the instance. The AMI should be based on one of the standard AWS OpsWorks AMIs: Amazon Linux or Ubuntu 12.04 LTS. For more information, see `Instances`_. :type ssh_key_name: string :param ssh_key_name: The instance SSH key name. :type architecture: string :param architecture: The instance architecture. Instance types do not necessarily support both architectures.
For a list of the architectures that are supported by the different instance types, see `Instance Families and Types`_. :type install_updates_on_boot: boolean :param install_updates_on_boot: Whether to install operating system and package updates when the instance boots. The default value is `True`. To control when updates are installed, set this value to `False`. You must then update your instances manually by using CreateDeployment to run the `update_dependencies` stack command or manually running `yum` (Amazon Linux) or `apt-get` (Ubuntu) on the instances. We strongly recommend using the default value of `True`, to ensure that your instances have the latest security updates. """ params = {'InstanceId': instance_id, } if layer_ids is not None: params['LayerIds'] = layer_ids if instance_type is not None: params['InstanceType'] = instance_type if auto_scaling_type is not None: params['AutoScalingType'] = auto_scaling_type if hostname is not None: params['Hostname'] = hostname if os is not None: params['Os'] = os if ami_id is not None: params['AmiId'] = ami_id if ssh_key_name is not None: params['SshKeyName'] = ssh_key_name if architecture is not None: params['Architecture'] = architecture if install_updates_on_boot is not None: params['InstallUpdatesOnBoot'] = install_updates_on_boot return self.make_request(action='UpdateInstance', body=json.dumps(params)) def update_layer(self, layer_id, name=None, shortname=None, attributes=None, custom_instance_profile_arn=None, custom_security_group_ids=None, packages=None, volume_configurations=None, enable_auto_healing=None, auto_assign_elastic_ips=None, auto_assign_public_ips=None, custom_recipes=None, install_updates_on_boot=None): """ Updates a specified layer. :type layer_id: string :param layer_id: The layer ID. :type name: string :param name: The layer name, which is used by the console. :type shortname: string :param shortname: The layer short name, which is used internally by AWS OpsWorks and by Chef. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters and must be in the following format: /\A[a-z0-9\-\_\.]+\Z/. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes bag. :type custom_instance_profile_arn: string :param custom_instance_profile_arn: The ARN of an IAM profile to be used for all of the layer's EC2 instances. For more information about IAM ARNs, see `Using Identifiers`_. :type custom_security_group_ids: list :param custom_security_group_ids: An array containing the layer's custom security group IDs. :type packages: list :param packages: An array of `Package` objects that describe the layer's packages. :type volume_configurations: list :param volume_configurations: A `VolumeConfigurations` object that describes the layer's Amazon EBS volumes. :type enable_auto_healing: boolean :param enable_auto_healing: Whether to enable auto healing for the layer. :type auto_assign_elastic_ips: boolean :param auto_assign_elastic_ips: Whether to automatically assign an `Elastic IP address`_ to the layer's instances. For more information, see `How to Edit a Layer`_. :type auto_assign_public_ips: boolean :param auto_assign_public_ips: For stacks that are running in a VPC, whether to automatically assign a public IP address to the layer's instances. For more information, see `How to Edit a Layer`_.
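# Editor's note: a hedged sketch of update_instance (defined above); not part
# of the original source. The instance ID is a placeholder; valid instance
# type names are covered by the `Instance Families and Types`_ reference in
# the docstring.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
conn.update_instance('my-instance-id', instance_type='m1.small',
                     install_updates_on_boot=True)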
:type custom_recipes: dict :param custom_recipes: A `LayerCustomRecipes` object that specifies the layer's custom recipes. :type install_updates_on_boot: boolean :param install_updates_on_boot: Whether to install operating system and package updates when the instance boots. The default value is `True`. To control when updates are installed, set this value to `False`. You must then update your instances manually by using CreateDeployment to run the `update_dependencies` stack command or manually running `yum` (Amazon Linux) or `apt-get` (Ubuntu) on the instances. We strongly recommend using the default value of `True`, to ensure that your instances have the latest security updates. """ params = {'LayerId': layer_id, } if name is not None: params['Name'] = name if shortname is not None: params['Shortname'] = shortname if attributes is not None: params['Attributes'] = attributes if custom_instance_profile_arn is not None: params['CustomInstanceProfileArn'] = custom_instance_profile_arn if custom_security_group_ids is not None: params['CustomSecurityGroupIds'] = custom_security_group_ids if packages is not None: params['Packages'] = packages if volume_configurations is not None: params['VolumeConfigurations'] = volume_configurations if enable_auto_healing is not None: params['EnableAutoHealing'] = enable_auto_healing if auto_assign_elastic_ips is not None: params['AutoAssignElasticIps'] = auto_assign_elastic_ips if auto_assign_public_ips is not None: params['AutoAssignPublicIps'] = auto_assign_public_ips if custom_recipes is not None: params['CustomRecipes'] = custom_recipes if install_updates_on_boot is not None: params['InstallUpdatesOnBoot'] = install_updates_on_boot return self.make_request(action='UpdateLayer', body=json.dumps(params)) def update_stack(self, stack_id, name=None, attributes=None, service_role_arn=None, default_instance_profile_arn=None, default_os=None, hostname_theme=None, default_availability_zone=None, default_subnet_id=None, custom_json=None, configuration_manager=None, use_custom_cookbooks=None, custom_cookbooks_source=None, default_ssh_key_name=None, default_root_device_type=None): """ Updates a specified stack. :type stack_id: string :param stack_id: The stack ID. :type name: string :param name: The stack's new name. :type attributes: map :param attributes: One or more user-defined key/value pairs to be added to the stack attributes bag. :type service_role_arn: string :param service_role_arn: The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks to work with AWS resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an existing IAM role. For more information about IAM ARNs, see `Using Identifiers`_. You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the stack's current service role ARN, if you prefer, but you must do so explicitly. :type default_instance_profile_arn: string :param default_instance_profile_arn: The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances. For more information about IAM ARNs, see `Using Identifiers`_. :type default_os: string :param default_os: The stack's default operating system, which must be set to `Amazon Linux` or `Ubuntu 12.04 LTS`. The default option is `Amazon Linux`. :type hostname_theme: string :param hostname_theme: The stack's new host name theme, with spaces replaced by underscores. The theme is used to generate host names for the stack's instances.
By default, `HostnameTheme` is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are: + Baked_Goods + Clouds + European_Cities + Fruits + Greek_Deities + Legendary_Creatures_from_Japan + Planets_and_Moons + Roman_Deities + Scottish_Islands + US_Cities + Wild_Cats To obtain a generated host name, call `GetHostNameSuggestion`, which returns a host name based on the current theme. :type default_availability_zone: string :param default_availability_zone: The stack's default Availability Zone, which must be in the specified region. For more information, see `Regions and Endpoints`_. If you also specify a value for `DefaultSubnetId`, the subnet must be in the same zone. For more information, see CreateStack. :type default_subnet_id: string :param default_subnet_id: The stack's default subnet ID. All instances will be launched into this subnet unless you specify otherwise when you create the instance. If you also specify a value for `DefaultAvailabilityZone`, the subnet must be in that zone. For more information, see CreateStack. :type custom_json: string :param custom_json: A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '"'.: `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"` For more information on custom JSON, see `Use Custom JSON to Modify the Stack Configuration JSON`_. :type configuration_manager: dict :param configuration_manager: The configuration manager. When you update a stack, you can optionally use the configuration manager to specify the Chef version, 0.9 or 11.4. If you omit this parameter, AWS OpsWorks does not change the Chef version. :type use_custom_cookbooks: boolean :param use_custom_cookbooks: Whether the stack uses custom cookbooks. :type custom_cookbooks_source: dict :param custom_cookbooks_source: Contains the information required to retrieve an app or cookbook from a repository. For more information, see `Creating Apps`_ or `Custom Recipes and Cookbooks`_. :type default_ssh_key_name: string :param default_ssh_key_name: A default SSH key for the stack instances. You can override this value when you create or update an instance. :type default_root_device_type: string :param default_root_device_type: The default root device type. This value is used by default for all instances in the stack, but you can override it when you create an instance. For more information, see `Storage for the Root Device`_.
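# Editor's note: a hedged sketch of update_stack together with
# get_hostname_suggestion (defined earlier); not part of the original source.
# The IDs are placeholders, 'Planets_and_Moons' is one of the documented
# themes, and the 'Hostname' response key is assumed to follow the AWS
# GetHostnameSuggestion API.
from boto.opsworks.layer1 import OpsWorksConnection

conn = OpsWorksConnection()
conn.update_stack('my-stack-id', hostname_theme='Planets_and_Moons')
print conn.get_hostname_suggestion('my-layer-id')['Hostname']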
""" params = {'StackId': stack_id, } if name is not None: params['Name'] = name if attributes is not None: params['Attributes'] = attributes if service_role_arn is not None: params['ServiceRoleArn'] = service_role_arn if default_instance_profile_arn is not None: params['DefaultInstanceProfileArn'] = default_instance_profile_arn if default_os is not None: params['DefaultOs'] = default_os if hostname_theme is not None: params['HostnameTheme'] = hostname_theme if default_availability_zone is not None: params['DefaultAvailabilityZone'] = default_availability_zone if default_subnet_id is not None: params['DefaultSubnetId'] = default_subnet_id if custom_json is not None: params['CustomJson'] = custom_json if configuration_manager is not None: params['ConfigurationManager'] = configuration_manager if use_custom_cookbooks is not None: params['UseCustomCookbooks'] = use_custom_cookbooks if custom_cookbooks_source is not None: params['CustomCookbooksSource'] = custom_cookbooks_source if default_ssh_key_name is not None: params['DefaultSshKeyName'] = default_ssh_key_name if default_root_device_type is not None: params['DefaultRootDeviceType'] = default_root_device_type return self.make_request(action='UpdateStack', body=json.dumps(params)) def update_user_profile(self, iam_user_arn, ssh_username=None, ssh_public_key=None): """ Updates a specified user profile. :type iam_user_arn: string :param iam_user_arn: The user IAM ARN. :type ssh_username: string :param ssh_username: The user's new SSH user name. :type ssh_public_key: string :param ssh_public_key: The user's new SSH public key. """ params = {'IamUserArn': iam_user_arn, } if ssh_username is not None: params['SshUsername'] = ssh_username if ssh_public_key is not None: params['SshPublicKey'] = ssh_public_key return self.make_request(action='UpdateUserProfile', body=json.dumps(params)) def update_volume(self, volume_id, name=None, mount_point=None): """ Updates an Amazon EBS volume's name or mount point. For more information, see ``_. :type volume_id: string :param volume_id: The volume ID. :type name: string :param name: The new name. :type mount_point: string :param mount_point: The new mount point. """ params = {'VolumeId': volume_id, } if name is not None: params['Name'] = name if mount_point is not None: params['MountPoint'] = mount_point return self.make_request(action='UpdateVolume', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/plugin.py000066400000000000000000000052071225267101000153520ustar00rootroot00000000000000# Copyright 2010 Google Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Implements plugin related api. To define a new plugin just subclass Plugin, like this. class AuthPlugin(Plugin): pass Then start creating subclasses of your new plugin. class MyFancyAuth(AuthPlugin): capability = ['sign', 'vmac'] The actual interface is duck typed. """ import glob import imp, os.path class Plugin(object): """Base class for all plugins.""" capability = [] @classmethod def is_capable(cls, requested_capability): """Returns true if the requested capability is supported by this plugin """ for c in requested_capability: if not c in cls.capability: return False return True def get_plugin(cls, requested_capability=None): if not requested_capability: requested_capability = [] result = [] for handler in cls.__subclasses__(): if handler.is_capable(requested_capability): result.append(handler) return result def _import_module(filename): (path, name) = os.path.split(filename) (name, ext) = os.path.splitext(name) (file, filename, data) = imp.find_module(name, [path]) try: return imp.load_module(name, file, filename, data) finally: if file: file.close() _plugin_loaded = False def load_plugins(config): global _plugin_loaded if _plugin_loaded: return _plugin_loaded = True if not config.has_option('Plugin', 'plugin_directory'): return directory = config.get('Plugin', 'plugin_directory') for file in glob.glob(os.path.join(directory, '*.py')): _import_module(file) boto-2.20.1/boto/provider.py000066400000000000000000000361471225267101000157150ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright 2010 Google Inc. # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2011, Nexenta Systems Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ This class encapsulates the provider-specific header differences. """ import os from datetime import datetime import boto from boto import config from boto.gs.acl import ACL from boto.gs.acl import CannedACLStrings as CannedGSACLStrings from boto.s3.acl import CannedACLStrings as CannedS3ACLStrings from boto.s3.acl import Policy HEADER_PREFIX_KEY = 'header_prefix' METADATA_PREFIX_KEY = 'metadata_prefix' AWS_HEADER_PREFIX = 'x-amz-' GOOG_HEADER_PREFIX = 'x-goog-' ACL_HEADER_KEY = 'acl-header' AUTH_HEADER_KEY = 'auth-header' COPY_SOURCE_HEADER_KEY = 'copy-source-header' COPY_SOURCE_VERSION_ID_HEADER_KEY = 'copy-source-version-id-header' COPY_SOURCE_RANGE_HEADER_KEY = 'copy-source-range-header' DELETE_MARKER_HEADER_KEY = 'delete-marker-header' DATE_HEADER_KEY = 'date-header' METADATA_DIRECTIVE_HEADER_KEY = 'metadata-directive-header' RESUMABLE_UPLOAD_HEADER_KEY = 'resumable-upload-header' SECURITY_TOKEN_HEADER_KEY = 'security-token-header' STORAGE_CLASS_HEADER_KEY = 'storage-class' MFA_HEADER_KEY = 'mfa-header' SERVER_SIDE_ENCRYPTION_KEY = 'server-side-encryption-header' VERSION_ID_HEADER_KEY = 'version-id-header' STORAGE_COPY_ERROR = 'StorageCopyError' STORAGE_CREATE_ERROR = 'StorageCreateError' STORAGE_DATA_ERROR = 'StorageDataError' STORAGE_PERMISSIONS_ERROR = 'StoragePermissionsError' STORAGE_RESPONSE_ERROR = 'StorageResponseError' class Provider(object): CredentialMap = { 'aws': ('aws_access_key_id', 'aws_secret_access_key'), 'google': ('gs_access_key_id', 'gs_secret_access_key'), } AclClassMap = { 'aws': Policy, 'google': ACL } CannedAclsMap = { 'aws': CannedS3ACLStrings, 'google': CannedGSACLStrings } HostKeyMap = { 'aws': 's3', 'google': 'gs' } ChunkedTransferSupport = { 'aws': False, 'google': True } MetadataServiceSupport = { 'aws': True, 'google': False } # If you update this map please make sure to put "None" for the # right-hand-side for any headers that don't apply to a provider, rather # than simply leaving that header out (which would cause KeyErrors). 
HeaderInfoMap = { 'aws': { HEADER_PREFIX_KEY: AWS_HEADER_PREFIX, METADATA_PREFIX_KEY: AWS_HEADER_PREFIX + 'meta-', ACL_HEADER_KEY: AWS_HEADER_PREFIX + 'acl', AUTH_HEADER_KEY: 'AWS', COPY_SOURCE_HEADER_KEY: AWS_HEADER_PREFIX + 'copy-source', COPY_SOURCE_VERSION_ID_HEADER_KEY: AWS_HEADER_PREFIX + 'copy-source-version-id', COPY_SOURCE_RANGE_HEADER_KEY: AWS_HEADER_PREFIX + 'copy-source-range', DATE_HEADER_KEY: AWS_HEADER_PREFIX + 'date', DELETE_MARKER_HEADER_KEY: AWS_HEADER_PREFIX + 'delete-marker', METADATA_DIRECTIVE_HEADER_KEY: AWS_HEADER_PREFIX + 'metadata-directive', RESUMABLE_UPLOAD_HEADER_KEY: None, SECURITY_TOKEN_HEADER_KEY: AWS_HEADER_PREFIX + 'security-token', SERVER_SIDE_ENCRYPTION_KEY: AWS_HEADER_PREFIX + 'server-side-encryption', VERSION_ID_HEADER_KEY: AWS_HEADER_PREFIX + 'version-id', STORAGE_CLASS_HEADER_KEY: AWS_HEADER_PREFIX + 'storage-class', MFA_HEADER_KEY: AWS_HEADER_PREFIX + 'mfa', }, 'google': { HEADER_PREFIX_KEY: GOOG_HEADER_PREFIX, METADATA_PREFIX_KEY: GOOG_HEADER_PREFIX + 'meta-', ACL_HEADER_KEY: GOOG_HEADER_PREFIX + 'acl', AUTH_HEADER_KEY: 'GOOG1', COPY_SOURCE_HEADER_KEY: GOOG_HEADER_PREFIX + 'copy-source', COPY_SOURCE_VERSION_ID_HEADER_KEY: GOOG_HEADER_PREFIX + 'copy-source-version-id', COPY_SOURCE_RANGE_HEADER_KEY: None, DATE_HEADER_KEY: GOOG_HEADER_PREFIX + 'date', DELETE_MARKER_HEADER_KEY: GOOG_HEADER_PREFIX + 'delete-marker', METADATA_DIRECTIVE_HEADER_KEY: GOOG_HEADER_PREFIX + 'metadata-directive', RESUMABLE_UPLOAD_HEADER_KEY: GOOG_HEADER_PREFIX + 'resumable', SECURITY_TOKEN_HEADER_KEY: GOOG_HEADER_PREFIX + 'security-token', SERVER_SIDE_ENCRYPTION_KEY: None, # Note that this version header is not to be confused with # the Google Cloud Storage 'x-goog-api-version' header. VERSION_ID_HEADER_KEY: GOOG_HEADER_PREFIX + 'version-id', STORAGE_CLASS_HEADER_KEY: None, MFA_HEADER_KEY: None, } } ErrorMap = { 'aws': { STORAGE_COPY_ERROR: boto.exception.S3CopyError, STORAGE_CREATE_ERROR: boto.exception.S3CreateError, STORAGE_DATA_ERROR: boto.exception.S3DataError, STORAGE_PERMISSIONS_ERROR: boto.exception.S3PermissionsError, STORAGE_RESPONSE_ERROR: boto.exception.S3ResponseError, }, 'google': { STORAGE_COPY_ERROR: boto.exception.GSCopyError, STORAGE_CREATE_ERROR: boto.exception.GSCreateError, STORAGE_DATA_ERROR: boto.exception.GSDataError, STORAGE_PERMISSIONS_ERROR: boto.exception.GSPermissionsError, STORAGE_RESPONSE_ERROR: boto.exception.GSResponseError, } } def __init__(self, name, access_key=None, secret_key=None, security_token=None): self.host = None self.port = None self.host_header = None self.access_key = access_key self.secret_key = secret_key self.security_token = security_token self.name = name self.acl_class = self.AclClassMap[self.name] self.canned_acls = self.CannedAclsMap[self.name] self._credential_expiry_time = None self.get_credentials(access_key, secret_key) self.configure_headers() self.configure_errors() # Allow config file to override default host and port. 
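        # For example, a boto config file might contain (illustrative
        # values; any reachable S3-compatible endpoint would work):
        #
        #     [Credentials]
        #     s3_host = storage.example.internal
        #     s3_port = 8480
        #
        # which would point the 'aws' provider's S3 requests at a
        # private S3-compatible service.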
host_opt_name = '%s_host' % self.HostKeyMap[self.name] if config.has_option('Credentials', host_opt_name): self.host = config.get('Credentials', host_opt_name) port_opt_name = '%s_port' % self.HostKeyMap[self.name] if config.has_option('Credentials', port_opt_name): self.port = config.getint('Credentials', port_opt_name) host_header_opt_name = '%s_host_header' % self.HostKeyMap[self.name] if config.has_option('Credentials', host_header_opt_name): self.host_header = config.get('Credentials', host_header_opt_name) def get_access_key(self): if self._credentials_need_refresh(): self._populate_keys_from_metadata_server() return self._access_key def set_access_key(self, value): self._access_key = value access_key = property(get_access_key, set_access_key) def get_secret_key(self): if self._credentials_need_refresh(): self._populate_keys_from_metadata_server() return self._secret_key def set_secret_key(self, value): self._secret_key = value secret_key = property(get_secret_key, set_secret_key) def get_security_token(self): if self._credentials_need_refresh(): self._populate_keys_from_metadata_server() return self._security_token def set_security_token(self, value): self._security_token = value security_token = property(get_security_token, set_security_token) def _credentials_need_refresh(self): if self._credential_expiry_time is None: return False else: # The credentials should be refreshed if they're going to expire # in less than 5 minutes. delta = self._credential_expiry_time - datetime.utcnow() # python2.6 does not have timedelta.total_seconds() so we have # to calculate this ourselves. This is straight from the # datetime docs. seconds_left = ( (delta.microseconds + (delta.seconds + delta.days * 24 * 3600) * 10**6) / 10**6) if seconds_left < (5 * 60): boto.log.debug("Credentials need to be refreshed.") return True else: return False def get_credentials(self, access_key=None, secret_key=None): access_key_name, secret_key_name = self.CredentialMap[self.name] if access_key is not None: self.access_key = access_key boto.log.debug("Using access key provided by client.") elif access_key_name.upper() in os.environ: self.access_key = os.environ[access_key_name.upper()] boto.log.debug("Using access key found in environment variable.") elif config.has_option('Credentials', access_key_name): self.access_key = config.get('Credentials', access_key_name) boto.log.debug("Using access key found in config file.") if secret_key is not None: self.secret_key = secret_key boto.log.debug("Using secret key provided by client.") elif secret_key_name.upper() in os.environ: self.secret_key = os.environ[secret_key_name.upper()] boto.log.debug("Using secret key found in environment variable.") elif config.has_option('Credentials', secret_key_name): self.secret_key = config.get('Credentials', secret_key_name) boto.log.debug("Using secret key found in config file.") elif config.has_option('Credentials', 'keyring'): keyring_name = config.get('Credentials', 'keyring') try: import keyring except ImportError: boto.log.error("The keyring module could not be imported. 
" "For keyring support, install the keyring " "module.") raise self.secret_key = keyring.get_password( keyring_name, self.access_key) boto.log.debug("Using secret key found in keyring.") if ((self._access_key is None or self._secret_key is None) and self.MetadataServiceSupport[self.name]): self._populate_keys_from_metadata_server() self._secret_key = self._convert_key_to_str(self._secret_key) def _populate_keys_from_metadata_server(self): # get_instance_metadata is imported here because of a circular # dependency. boto.log.debug("Retrieving credentials from metadata server.") from boto.utils import get_instance_metadata timeout = config.getfloat('Boto', 'metadata_service_timeout', 1.0) attempts = config.getint('Boto', 'metadata_service_num_attempts', 1) # The num_retries arg is actually the total number of attempts made, # so the config options is named *_num_attempts to make this more # clear to users. metadata = get_instance_metadata( timeout=timeout, num_retries=attempts, data='meta-data/iam/security-credentials/') if metadata: # I'm assuming there's only one role on the instance profile. security = metadata.values()[0] self._access_key = security['AccessKeyId'] self._secret_key = self._convert_key_to_str(security['SecretAccessKey']) self._security_token = security['Token'] expires_at = security['Expiration'] self._credential_expiry_time = datetime.strptime( expires_at, "%Y-%m-%dT%H:%M:%SZ") boto.log.debug("Retrieved credentials will expire in %s at: %s", self._credential_expiry_time - datetime.now(), expires_at) def _convert_key_to_str(self, key): if isinstance(key, unicode): # the secret key must be bytes and not unicode to work # properly with hmac.new (see http://bugs.python.org/issue5285) return str(key) return key def configure_headers(self): header_info_map = self.HeaderInfoMap[self.name] self.metadata_prefix = header_info_map[METADATA_PREFIX_KEY] self.header_prefix = header_info_map[HEADER_PREFIX_KEY] self.acl_header = header_info_map[ACL_HEADER_KEY] self.auth_header = header_info_map[AUTH_HEADER_KEY] self.copy_source_header = header_info_map[COPY_SOURCE_HEADER_KEY] self.copy_source_version_id = header_info_map[ COPY_SOURCE_VERSION_ID_HEADER_KEY] self.copy_source_range_header = header_info_map[ COPY_SOURCE_RANGE_HEADER_KEY] self.date_header = header_info_map[DATE_HEADER_KEY] self.delete_marker = header_info_map[DELETE_MARKER_HEADER_KEY] self.metadata_directive_header = ( header_info_map[METADATA_DIRECTIVE_HEADER_KEY]) self.security_token_header = header_info_map[SECURITY_TOKEN_HEADER_KEY] self.resumable_upload_header = ( header_info_map[RESUMABLE_UPLOAD_HEADER_KEY]) self.server_side_encryption_header = header_info_map[SERVER_SIDE_ENCRYPTION_KEY] self.storage_class_header = header_info_map[STORAGE_CLASS_HEADER_KEY] self.version_id = header_info_map[VERSION_ID_HEADER_KEY] self.mfa_header = header_info_map[MFA_HEADER_KEY] def configure_errors(self): error_map = self.ErrorMap[self.name] self.storage_copy_error = error_map[STORAGE_COPY_ERROR] self.storage_create_error = error_map[STORAGE_CREATE_ERROR] self.storage_data_error = error_map[STORAGE_DATA_ERROR] self.storage_permissions_error = error_map[STORAGE_PERMISSIONS_ERROR] self.storage_response_error = error_map[STORAGE_RESPONSE_ERROR] def get_provider_name(self): return self.HostKeyMap[self.name] def supports_chunked_transfer(self): return self.ChunkedTransferSupport[self.name] # Static utility method for getting default Provider. 
def get_default(): return Provider('aws') boto-2.20.1/boto/pyami/000077500000000000000000000000001225267101000146155ustar00rootroot00000000000000boto-2.20.1/boto/pyami/__init__.py000066400000000000000000000021231225267101000167240ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/pyami/bootstrap.py000066400000000000000000000131531225267101000172070ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import os import boto from boto.utils import get_instance_metadata, get_instance_userdata from boto.pyami.config import Config, BotoConfigPath from boto.pyami.scriptbase import ScriptBase import time class Bootstrap(ScriptBase): """ The Bootstrap class is instantiated and run as part of the PyAMI instance initialization process. The methods in this class will be run from the rc.local script of the instance and will be run as the root user. The main purpose of this class is to make sure the boto distribution on the instance is the one required. 
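
    The [Boto] section of the instance's boto config drives this; for
    example (illustrative values -- load_boto() below accepts 'svn:...'
    and 'git:...' forms, or a package name to be passed to easy_install)::

        [Boto]
        boto_location = /usr/local/boto
        boto_update = git:master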
""" def __init__(self): self.working_dir = '/mnt/pyami' self.write_metadata() ScriptBase.__init__(self) def write_metadata(self): fp = open(os.path.expanduser(BotoConfigPath), 'w') fp.write('[Instance]\n') inst_data = get_instance_metadata() for key in inst_data: fp.write('%s = %s\n' % (key, inst_data[key])) user_data = get_instance_userdata() fp.write('\n%s\n' % user_data) fp.write('[Pyami]\n') fp.write('working_dir = %s\n' % self.working_dir) fp.close() # This file has the AWS credentials, should we lock it down? # os.chmod(BotoConfigPath, stat.S_IREAD | stat.S_IWRITE) # now that we have written the file, read it into a pyami Config object boto.config = Config() boto.init_logging() def create_working_dir(self): boto.log.info('Working directory: %s' % self.working_dir) if not os.path.exists(self.working_dir): os.mkdir(self.working_dir) def load_boto(self): update = boto.config.get('Boto', 'boto_update', 'svn:HEAD') if update.startswith('svn'): if update.find(':') >= 0: method, version = update.split(':') version = '-r%s' % version else: version = '-rHEAD' location = boto.config.get('Boto', 'boto_location', '/usr/local/boto') self.run('svn update %s %s' % (version, location)) elif update.startswith('git'): location = boto.config.get('Boto', 'boto_location', '/usr/share/python-support/python-boto/boto') num_remaining_attempts = 10 while num_remaining_attempts > 0: num_remaining_attempts -= 1 try: self.run('git pull', cwd=location) num_remaining_attempts = 0 except Exception, e: boto.log.info('git pull attempt failed with the following exception. Trying again in a bit. %s', e) time.sleep(2) if update.find(':') >= 0: method, version = update.split(':') else: version = 'master' self.run('git checkout %s' % version, cwd=location) else: # first remove the symlink needed when running from subversion self.run('rm /usr/local/lib/python2.5/site-packages/boto') self.run('easy_install %s' % update) def fetch_s3_file(self, s3_file): try: from boto.utils import fetch_file f = fetch_file(s3_file) path = os.path.join(self.working_dir, s3_file.split("/")[-1]) open(path, "w").write(f.read()) except: boto.log.exception('Problem Retrieving file: %s' % s3_file) path = None return path def load_packages(self): package_str = boto.config.get('Pyami', 'packages') if package_str: packages = package_str.split(',') for package in packages: package = package.strip() if package.startswith('s3:'): package = self.fetch_s3_file(package) if package: # if the "package" is really a .py file, it doesn't have to # be installed, just being in the working dir is enough if not package.endswith('.py'): self.run('easy_install -Z %s' % package, exit_on_error=False) def main(self): self.create_working_dir() self.load_boto() self.load_packages() self.notify('Bootstrap Completed for %s' % boto.config.get_instance('instance-id')) if __name__ == "__main__": # because bootstrap starts before any logging configuration can be loaded from # the boto config files, we will manually enable logging to /var/log/boto.log boto.set_file_logger('bootstrap', '/var/log/boto.log') bs = Bootstrap() bs.main() boto-2.20.1/boto/pyami/config.py000066400000000000000000000202661225267101000164420ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without 
limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import StringIO, os, re
import warnings
import ConfigParser
import boto

# If running in Google App Engine there is no "user" and
# os.path.expanduser() will fail. Attempt to detect this case and use a
# no-op expanduser function in this case.
try:
    os.path.expanduser('~')
    expanduser = os.path.expanduser
except (AttributeError, ImportError):
    # This is probably running on App Engine.
    expanduser = (lambda x: x)

# By default we use two locations for the boto configurations,
# /etc/boto.cfg and ~/.boto (which works on Windows and Unix).
BotoConfigPath = '/etc/boto.cfg'
BotoConfigLocations = [BotoConfigPath]
UserConfigPath = os.path.join(expanduser('~'), '.boto')
BotoConfigLocations.append(UserConfigPath)

# If there's a BOTO_CONFIG variable set, we load ONLY
# that variable
if 'BOTO_CONFIG' in os.environ:
    BotoConfigLocations = [expanduser(os.environ['BOTO_CONFIG'])]

# If there's a BOTO_PATH variable set, we use anything there
# as the current configuration locations, split with colons
elif 'BOTO_PATH' in os.environ:
    BotoConfigLocations = []
    for path in os.environ['BOTO_PATH'].split(":"):
        BotoConfigLocations.append(expanduser(path))


class Config(ConfigParser.SafeConfigParser):

    def __init__(self, path=None, fp=None, do_load=True):
        ConfigParser.SafeConfigParser.__init__(self, {'working_dir': '/mnt/pyami',
                                                      'debug': '0'})
        if do_load:
            if path:
                self.load_from_path(path)
            elif fp:
                self.readfp(fp)
            else:
                self.read(BotoConfigLocations)
            if "AWS_CREDENTIAL_FILE" in os.environ:
                full_path = expanduser(os.environ['AWS_CREDENTIAL_FILE'])
                try:
                    self.load_credential_file(full_path)
                except IOError:
                    warnings.warn('Unable to load AWS_CREDENTIAL_FILE (%s)' % full_path)

    def load_credential_file(self, path):
        """Load a credential file as set up by the Java utilities"""
        c_data = StringIO.StringIO()
        c_data.write("[Credentials]\n")
        for line in open(path, "r").readlines():
            c_data.write(line.replace("AWSAccessKeyId", "aws_access_key_id").replace("AWSSecretKey", "aws_secret_access_key"))
        c_data.seek(0)
        self.readfp(c_data)

    def load_from_path(self, path):
        file = open(path)
        for line in file.readlines():
            match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line)
            if match:
                extended_file = match.group(1)
                (dir, file) = os.path.split(path)
                self.load_from_path(os.path.join(dir, extended_file))
        self.read(path)

    def save_option(self, path, section, option, value):
        """
        Write the specified Section.Option to the config file specified by path.
        Replace any previous value. If the path doesn't exist, create it.
        Also add the option to the in-memory config.
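
        Example (illustrative path and values)::

            config.save_option('/etc/boto.cfg', 'Boto', 'debug', '2')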
""" config = ConfigParser.SafeConfigParser() config.read(path) if not config.has_section(section): config.add_section(section) config.set(section, option, value) fp = open(path, 'w') config.write(fp) fp.close() if not self.has_section(section): self.add_section(section) self.set(section, option, value) def save_user_option(self, section, option, value): self.save_option(UserConfigPath, section, option, value) def save_system_option(self, section, option, value): self.save_option(BotoConfigPath, section, option, value) def get_instance(self, name, default=None): try: val = self.get('Instance', name) except: val = default return val def get_user(self, name, default=None): try: val = self.get('User', name) except: val = default return val def getint_user(self, name, default=0): try: val = self.getint('User', name) except: val = default return val def get_value(self, section, name, default=None): return self.get(section, name, default) def get(self, section, name, default=None): try: val = ConfigParser.SafeConfigParser.get(self, section, name) except: val = default return val def getint(self, section, name, default=0): try: val = ConfigParser.SafeConfigParser.getint(self, section, name) except: val = int(default) return val def getfloat(self, section, name, default=0.0): try: val = ConfigParser.SafeConfigParser.getfloat(self, section, name) except: val = float(default) return val def getbool(self, section, name, default=False): if self.has_option(section, name): val = self.get(section, name) if val.lower() == 'true': val = True else: val = False else: val = default return val def setbool(self, section, name, value): if value: self.set(section, name, 'true') else: self.set(section, name, 'false') def dump(self): s = StringIO.StringIO() self.write(s) print s.getvalue() def dump_safe(self, fp=None): if not fp: fp = StringIO.StringIO() for section in self.sections(): fp.write('[%s]\n' % section) for option in self.options(section): if option == 'aws_secret_access_key': fp.write('%s = xxxxxxxxxxxxxxxxxx\n' % option) else: fp.write('%s = %s\n' % (option, self.get(section, option))) def dump_to_sdb(self, domain_name, item_name): from boto.compat import json sdb = boto.connect_sdb() domain = sdb.lookup(domain_name) if not domain: domain = sdb.create_domain(domain_name) item = domain.new_item(item_name) item.active = False for section in self.sections(): d = {} for option in self.options(section): d[option] = self.get(section, option) item[section] = json.dumps(d) item.save() def load_from_sdb(self, domain_name, item_name): from boto.compat import json sdb = boto.connect_sdb() domain = sdb.lookup(domain_name) item = domain.get_item(item_name) for section in item.keys(): if not self.has_section(section): self.add_section(section) d = json.loads(item[section]) for attr_name in d.keys(): attr_value = d[attr_name] if attr_value == None: attr_value = 'None' if isinstance(attr_value, bool): self.setbool(section, attr_name, attr_value) else: self.set(section, attr_name, attr_value) boto-2.20.1/boto/pyami/copybot.cfg000066400000000000000000000032531225267101000167600ustar00rootroot00000000000000# # Your AWS Credentials # [Credentials] aws_access_key_id = aws_secret_access_key = # # If you want to use a separate set of credentials when writing # to the destination bucket, put them here #dest_aws_access_key_id = #dest_aws_secret_access_key = # # Fill out this section if you want emails from CopyBot # when it starts and stops # [Notification] #smtp_host = #smtp_user = #smtp_pass = #smtp_from = #smtp_to = # # If 
you leave this section as is, it will automatically
# update boto from subversion upon start up.
# If you don't want that to happen, comment this out
#
[Boto]
boto_location = /usr/local/boto
boto_update = svn:HEAD

#
# This tells the Pyami code in boto what scripts
# to run during startup
#
[Pyami]
scripts = boto.pyami.copybot.CopyBot

#
# Source bucket and Destination Bucket, obviously.
# If the Destination bucket does not exist, it will
# attempt to create it.
# If exit_on_completion is false, the instance
# will keep running after the copy operation is
# complete, which might be handy for debugging.
# If copy_acls is false, the ACLs will not be
# copied with the objects to the new bucket.
# If replace_dst is false, copybot will only store
# the source file in the dest if that file does not
# already exist. If it's true, it will replace it
# even if it does exist.
#
[CopyBot]
src_bucket =
dst_bucket =
exit_on_completion = true
copy_acls = true
replace_dst = true
boto-2.20.1/boto/pyami/copybot.py000066400000000000000000000102611225267101000166470ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import boto
from boto.pyami.scriptbase import ScriptBase
import os, StringIO

class CopyBot(ScriptBase):

    def __init__(self):
        ScriptBase.__init__(self)
        self.wdir = boto.config.get('Pyami', 'working_dir')
        self.log_file = '%s.log' % self.instance_id
        self.log_path = os.path.join(self.wdir, self.log_file)
        boto.set_file_logger(self.name, self.log_path)
        self.src_name = boto.config.get(self.name, 'src_bucket')
        self.dst_name = boto.config.get(self.name, 'dst_bucket')
        self.replace = boto.config.getbool(self.name, 'replace_dst', True)
        s3 = boto.connect_s3()
        self.src = s3.lookup(self.src_name)
        if not self.src:
            boto.log.error('Source bucket does not exist: %s' % self.src_name)
        dest_access_key = boto.config.get(self.name, 'dest_aws_access_key_id', None)
        if dest_access_key:
            dest_secret_key = boto.config.get(self.name, 'dest_aws_secret_access_key', None)
            # boto has no top-level connect(); connect_s3 builds the S3
            # connection with the optional destination credentials.
            s3 = boto.connect_s3(dest_access_key, dest_secret_key)
        self.dst = s3.lookup(self.dst_name)
        if not self.dst:
            self.dst = s3.create_bucket(self.dst_name)

    def copy_bucket_acl(self):
        if boto.config.get(self.name, 'copy_acls', True):
            acl = self.src.get_xml_acl()
            self.dst.set_xml_acl(acl)

    def copy_key_acl(self, src, dst):
        if boto.config.get(self.name, 'copy_acls', True):
            acl = src.get_xml_acl()
            dst.set_xml_acl(acl)

    def copy_keys(self):
        boto.log.info('src=%s' % self.src.name)
        boto.log.info('dst=%s' % self.dst.name)
        try:
            for key in self.src:
                if not self.replace:
                    exists = self.dst.lookup(key.name)
                    if exists:
                        boto.log.info('key=%s already exists in %s, skipping' % (key.name, self.dst.name))
                        continue
                boto.log.info('copying %d bytes from key=%s' % (key.size, key.name))
                prefix, base = os.path.split(key.name)
                path = os.path.join(self.wdir, base)
                key.get_contents_to_filename(path)
                new_key = self.dst.new_key(key.name)
                new_key.set_contents_from_filename(path)
                self.copy_key_acl(key, new_key)
                os.unlink(path)
        except:
            boto.log.exception('Error copying key: %s' % key.name)

    def copy_log(self):
        key = self.dst.new_key(self.log_file)
        key.set_contents_from_filename(self.log_path)

    def main(self):
        fp = StringIO.StringIO()
        boto.config.dump_safe(fp)
        self.notify('%s (%s) Starting' % (self.name, self.instance_id), fp.getvalue())
        if self.src and self.dst:
            self.copy_keys()
        if self.dst:
            self.copy_log()
        self.notify('%s (%s) Stopping' % (self.name, self.instance_id),
                    'Copy Operation Complete')
        if boto.config.getbool(self.name, 'exit_on_completion', True):
            ec2 = boto.connect_ec2()
            ec2.terminate_instances([self.instance_id])
boto-2.20.1/boto/pyami/helloworld.py000066400000000000000000000023371225267101000173470ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.pyami.scriptbase import ScriptBase class HelloWorld(ScriptBase): def main(self): self.log('Hello World!!!') boto-2.20.1/boto/pyami/installers/000077500000000000000000000000001225267101000167755ustar00rootroot00000000000000boto-2.20.1/boto/pyami/installers/__init__.py000066400000000000000000000040001225267101000211000ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.pyami.scriptbase import ScriptBase class Installer(ScriptBase): """ Abstract base class for installers """ def add_cron(self, name, minute, hour, mday, month, wday, who, command, env=None): """ Add an entry to the system crontab. """ raise NotImplementedError def add_init_script(self, file): """ Add this file to the init.d directory """ def add_env(self, key, value): """ Add an environemnt variable """ raise NotImplementedError def stop(self, service_name): """ Stop a service. """ raise NotImplementedError def start(self, service_name): """ Start a service. """ raise NotImplementedError def install(self): """ Do whatever is necessary to "install" the package. """ raise NotImplementedError boto-2.20.1/boto/pyami/installers/ubuntu/000077500000000000000000000000001225267101000203175ustar00rootroot00000000000000boto-2.20.1/boto/pyami/installers/ubuntu/__init__.py000066400000000000000000000021301225267101000224240ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/pyami/installers/ubuntu/apache.py000066400000000000000000000036111225267101000221130ustar00rootroot00000000000000# Copyright (c) 2008 Chris Moyer http://coredumped.org # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.pyami.installers.ubuntu.installer import Installer class Apache(Installer): """ Install apache2, mod_python, and libapache2-svn """ def install(self): self.run("apt-get update") self.run('apt-get -y install apache2', notify=True, exit_on_error=True) self.run('apt-get -y install libapache2-mod-python', notify=True, exit_on_error=True) self.run('a2enmod rewrite', notify=True, exit_on_error=True) self.run('a2enmod ssl', notify=True, exit_on_error=True) self.run('a2enmod proxy', notify=True, exit_on_error=True) self.run('a2enmod proxy_ajp', notify=True, exit_on_error=True) # Hard reboot the apache2 server to enable these module self.stop("apache2") self.start("apache2") def main(self): self.install() boto-2.20.1/boto/pyami/installers/ubuntu/ebs.py000066400000000000000000000233241225267101000214460ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # """ Automated installer to attach, format and mount an EBS volume. 
This installer assumes that you want the volume formatted as an XFS
file system. To drive this installer, you need the following section
in the boto config passed to the new instance. You also need to
install dateutil by listing python-dateutil in the list of packages
to be installed in the Pyami section of your boto config file.

If there is already a device mounted at the specified mount point,
the installer assumes that it is the ephemeral drive and unmounts
it, remounts it as /tmp and chmods it to 777.

Config file section::

    [EBS]
    volume_id =
    logical_volume_name =
    device =
    mount_point =

"""
import boto
from boto.manage.volume import Volume
from boto.exception import EC2ResponseError
import os, time
from boto.pyami.installers.ubuntu.installer import Installer
from string import Template

BackupScriptTemplate = """#!/usr/bin/env python
# Backup EBS volume
import boto
from boto.pyami.scriptbase import ScriptBase
import traceback

class Backup(ScriptBase):

    def main(self):
        try:
            ec2 = boto.connect_ec2()
            self.run("/usr/sbin/xfs_freeze -f ${mount_point}", exit_on_error = True)
            snapshot = ec2.create_snapshot('${volume_id}')
            boto.log.info("Snapshot created: %s " % snapshot)
        except Exception, e:
            self.notify(subject="${instance_id} Backup Failed", body=traceback.format_exc())
        finally:
            self.run("/usr/sbin/xfs_freeze -u ${mount_point}")

if __name__ == "__main__":
    b = Backup()
    b.main()
"""

BackupCleanupScript = """#!/usr/bin/env python
import boto
from boto.manage.volume import Volume

# Cleans Backups of EBS volumes
for v in Volume.all():
    v.trim_snapshots(True)
"""

TagBasedBackupCleanupScript = """#!/usr/bin/env python
import boto

# Cleans Backups of EBS volumes
ec2 = boto.connect_ec2()
ec2.trim_snapshots()
"""

class EBSInstaller(Installer):
    """
    Set up the EBS stuff
    """

    def __init__(self, config_file=None):
        Installer.__init__(self, config_file)
        self.instance_id = boto.config.get('Instance', 'instance-id')
        self.device = boto.config.get('EBS', 'device', '/dev/sdp')
        self.volume_id = boto.config.get('EBS', 'volume_id')
        self.logical_volume_name = boto.config.get('EBS', 'logical_volume_name')
        self.mount_point = boto.config.get('EBS', 'mount_point', '/ebs')

    def attach(self):
        ec2 = boto.connect_ec2()
        if self.logical_volume_name:
            # if a logical volume was specified, override the specified volume_id
            # (if there was one) with the current AWS volume for the logical volume:
            logical_volume = Volume.find(name = self.logical_volume_name).next()
            self.volume_id = logical_volume._volume_id
        volume = ec2.get_all_volumes([self.volume_id])[0]
        # wait for the volume to be available. The volume may still be being created
        # from a snapshot.
        while volume.update() != 'available':
            boto.log.info('Volume %s not yet available. Current status = %s.' % (volume.id, volume.status))
            time.sleep(5)
        instance = ec2.get_only_instances([self.instance_id])[0]
        attempt_attach = True
        while attempt_attach:
            try:
                ec2.attach_volume(self.volume_id, self.instance_id, self.device)
                attempt_attach = False
            except EC2ResponseError, e:
                if e.error_code == 'IncorrectState':
                    # if there's an EC2ResponseError with the code set to IncorrectState, delay a bit for ec2
                    # to realize the instance is running, then try again. Otherwise, raise the error:
                    boto.log.info('Attempt to attach the EBS volume %s to this instance (%s) returned %s. Trying again in a bit.'
% (self.volume_id, self.instance_id, e.errors)) time.sleep(2) else: raise e boto.log.info('Attached volume %s to instance %s as device %s' % (self.volume_id, self.instance_id, self.device)) # now wait for the volume device to appear while not os.path.exists(self.device): boto.log.info('%s still does not exist, waiting 2 seconds' % self.device) time.sleep(2) def make_fs(self): boto.log.info('make_fs...') has_fs = self.run('fsck %s' % self.device) if has_fs != 0: self.run('mkfs -t xfs %s' % self.device) def create_backup_script(self): t = Template(BackupScriptTemplate) s = t.substitute(volume_id=self.volume_id, instance_id=self.instance_id, mount_point=self.mount_point) fp = open('/usr/local/bin/ebs_backup', 'w') fp.write(s) fp.close() self.run('chmod +x /usr/local/bin/ebs_backup') def create_backup_cleanup_script(self, use_tag_based_cleanup = False): fp = open('/usr/local/bin/ebs_backup_cleanup', 'w') if use_tag_based_cleanup: fp.write(TagBasedBackupCleanupScript) else: fp.write(BackupCleanupScript) fp.close() self.run('chmod +x /usr/local/bin/ebs_backup_cleanup') def handle_mount_point(self): boto.log.info('handle_mount_point') if not os.path.isdir(self.mount_point): boto.log.info('making directory') # mount directory doesn't exist so create it self.run("mkdir %s" % self.mount_point) else: boto.log.info('directory exists already') self.run('mount -l') lines = self.last_command.output.split('\n') for line in lines: t = line.split() if t and t[2] == self.mount_point: # something is already mounted at the mount point # unmount that and mount it as /tmp if t[0] != self.device: self.run('umount %s' % self.mount_point) self.run('mount %s /tmp' % t[0]) break self.run('chmod 777 /tmp') # Mount up our new EBS volume onto mount_point self.run("mount %s %s" % (self.device, self.mount_point)) self.run('xfs_growfs %s' % self.mount_point) def update_fstab(self): f = open("/etc/fstab", "a") f.write('%s\t%s\txfs\tdefaults 0 0\n' % (self.device, self.mount_point)) f.close() def install(self): # First, find and attach the volume self.attach() # Install the xfs tools self.run('apt-get -y install xfsprogs xfsdump') # Check to see if the filesystem was created or not self.make_fs() # create the /ebs directory for mounting self.handle_mount_point() # create the backup script self.create_backup_script() # Set up the backup script minute = boto.config.get('EBS', 'backup_cron_minute', '0') hour = boto.config.get('EBS', 'backup_cron_hour', '4,16') self.add_cron("ebs_backup", "/usr/local/bin/ebs_backup", minute=minute, hour=hour) # Set up the backup cleanup script minute = boto.config.get('EBS', 'backup_cleanup_cron_minute') hour = boto.config.get('EBS', 'backup_cleanup_cron_hour') if (minute != None) and (hour != None): # Snapshot clean up can either be done via the manage module, or via the new tag based # snapshot code, if the snapshots have been tagged with the name of the associated # volume. 
Check for the presence of the new configuration flag, and use the
            # appropriate cleanup method / script:
            use_tag_based_cleanup = boto.config.has_option('EBS', 'use_tag_based_snapshot_cleanup')
            self.create_backup_cleanup_script(use_tag_based_cleanup)
            self.add_cron("ebs_backup_cleanup", "/usr/local/bin/ebs_backup_cleanup", minute=minute, hour=hour)

        # Set up the fstab
        self.update_fstab()

    def main(self):
        if not os.path.exists(self.device):
            self.install()
        else:
            boto.log.info("Device %s is already attached, skipping EBS Installer" % self.device)
boto-2.20.1/boto/pyami/installers/ubuntu/installer.py000066400000000000000000000067331225267101000226760ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import boto.pyami.installers
import os
import os.path
import stat
import boto
import random
from pwd import getpwnam

class Installer(boto.pyami.installers.Installer):
    """
    Base Installer class for Ubuntu-based AMIs
    """

    def add_cron(self, name, command, minute="*", hour="*", mday="*", month="*", wday="*", who="root", env=None):
        """
        Write a file to /etc/cron.d to schedule a command
            env is a dict containing environment variables you want to set in the file
            name will be used as the name of the file
        """
        if minute == 'random':
            minute = str(random.randrange(60))
        if hour == 'random':
            hour = str(random.randrange(24))
        fp = open('/etc/cron.d/%s' % name, "w")
        if env:
            for key, value in env.items():
                fp.write('%s=%s\n' % (key, value))
        fp.write('%s %s %s %s %s %s %s\n' % (minute, hour, mday, month, wday, who, command))
        fp.close()

    def add_init_script(self, file, name):
        """
        Add this file to the init.d directory
        """
        f_path = os.path.join("/etc/init.d", name)
        f = open(f_path, "w")
        f.write(file)
        f.close()
        os.chmod(f_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
        self.run("/usr/sbin/update-rc.d %s defaults" % name)

    def add_env(self, key, value):
        """
        Add an environment variable
        For Ubuntu, the best place is /etc/environment. Values placed here do
        not need to be exported.
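
        Example (illustrative key and value)::

            self.add_env('TRAC_ENV', '/mnt/trac')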
""" boto.log.info('Adding env variable: %s=%s' % (key, value)) if not os.path.exists("/etc/environment.orig"): self.run('cp /etc/environment /etc/environment.orig', notify=False, exit_on_error=False) fp = open('/etc/environment', 'a') fp.write('\n%s="%s"' % (key, value)) fp.close() os.environ[key] = value def stop(self, service_name): self.run('/etc/init.d/%s stop' % service_name) def start(self, service_name): self.run('/etc/init.d/%s start' % service_name) def create_user(self, user): """ Create a user on the local system """ self.run("useradd -m %s" % user) usr = getpwnam(user) return usr def install(self): """ This is the only method you need to override """ raise NotImplementedError boto-2.20.1/boto/pyami/installers/ubuntu/mysql.py000066400000000000000000000114141225267101000220370ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # """ This installer will install mysql-server on an Ubuntu machine. In addition to the normal installation done by apt-get, it will also configure the new MySQL server to store it's data files in a different location. By default, this is /mnt but that can be configured in the [MySQL] section of the boto config file passed to the instance. """ from boto.pyami.installers.ubuntu.installer import Installer import os import boto from boto.utils import ShellCommand from ConfigParser import SafeConfigParser import time ConfigSection = """ [MySQL] root_password = data_dir = """ class MySQL(Installer): def install(self): self.run('apt-get update') self.run('apt-get -y install mysql-server', notify=True, exit_on_error=True) # def set_root_password(self, password=None): # if not password: # password = boto.config.get('MySQL', 'root_password') # if password: # self.run('mysqladmin -u root password %s' % password) # return password def change_data_dir(self, password=None): data_dir = boto.config.get('MySQL', 'data_dir', '/mnt') fresh_install = False; is_mysql_running_command = ShellCommand('mysqladmin ping') # exit status 0 if mysql is running is_mysql_running_command.run() if is_mysql_running_command.getStatus() == 0: # mysql is running. This is the state apt-get will leave it in. If it isn't running, # that means mysql was already installed on the AMI and there's no need to stop it, # saving 40 seconds on instance startup. 
            time.sleep(10)  # trying to stop mysql immediately after installing it fails
            # We need to wait until mysql creates the root account before we kill it
            # or bad things will happen
            i = 0
            while self.run("echo 'quit' | mysql -u root") != 0 and i < 5:
                time.sleep(5)
                i = i + 1
            self.run('/etc/init.d/mysql stop')
            self.run("pkill -9 mysql")

        mysql_path = os.path.join(data_dir, 'mysql')
        if not os.path.exists(mysql_path):
            self.run('mkdir %s' % mysql_path)
            fresh_install = True
        self.run('chown -R mysql:mysql %s' % mysql_path)
        fp = open('/etc/mysql/conf.d/use_mnt.cnf', 'w')
        fp.write('# created by pyami\n')
        fp.write('# use the %s volume for data\n' % data_dir)
        fp.write('[mysqld]\n')
        fp.write('datadir = %s\n' % mysql_path)
        fp.write('log_bin = %s\n' % os.path.join(mysql_path, 'mysql-bin.log'))
        fp.close()
        if fresh_install:
            self.run('cp -pr /var/lib/mysql/* %s/' % mysql_path)
            self.start('mysql')
        else:
            # get the password ubuntu expects to use:
            config_parser = SafeConfigParser()
            config_parser.read('/etc/mysql/debian.cnf')
            password = config_parser.get('client', 'password')
            # start the mysql daemon, then mysql with the required grant statement piped into it:
            self.start('mysql')
            time.sleep(10)  # time for mysql to start
            grant_command = "echo \"GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '%s' WITH GRANT OPTION;\" | mysql" % password
            while self.run(grant_command) != 0:
                time.sleep(5)
            # leave mysqld running

    def main(self):
        self.install()
        # change_data_dir runs 'mysql -u root' which assumes there is no
        # mysql password, and changing that is too ugly to be worth it:
        #self.set_root_password()
        self.change_data_dir()
boto-2.20.1/boto/pyami/installers/ubuntu/trac.py000066400000000000000000000141771225267101000216320ustar00rootroot00000000000000
# Copyright (c) 2008 Chris Moyer http://coredumped.org
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
from boto.pyami.installers.ubuntu.installer import Installer
import boto
import os

class Trac(Installer):
    """
    Install Trac and DAV-SVN
    Sets up a Vhost pointing to [Trac]->home
    Using the config parameter [Trac]->hostname
    Sets up a trac environment for every directory found under [Trac]->data_dir

    [Trac]
    name = My Foo Server
    hostname = trac.foo.com
    home = /mnt/sites/trac
    data_dir = /mnt/trac
    svn_dir = /mnt/subversion
    server_admin = root@foo.com
    sdb_auth_domain = users
    # Optional
    SSLCertificateFile = /mnt/ssl/foo.crt
    SSLCertificateKeyFile = /mnt/ssl/foo.key
    SSLCertificateChainFile = /mnt/ssl/FooCA.crt
    """

    def install(self):
        self.run('apt-get -y install trac', notify=True, exit_on_error=True)
        self.run('apt-get -y install libapache2-svn', notify=True, exit_on_error=True)
        self.run("a2enmod ssl")
        self.run("a2enmod mod_python")
        self.run("a2enmod dav_svn")
        self.run("a2enmod rewrite")
        # Make sure that boto.log is writable by everyone so that subversion post-commit hooks can
        # write to it.
        self.run("touch /var/log/boto.log")
        self.run("chmod a+w /var/log/boto.log")

    def setup_vhost(self):
        domain = boto.config.get("Trac", "hostname").strip()
        if domain:
            domain_info = domain.split('.')
            cnf = open("/etc/apache2/sites-available/%s" % domain_info[0], "w")
            cnf.write("NameVirtualHost *:80\n")
            if boto.config.get("Trac", "SSLCertificateFile"):
                cnf.write("NameVirtualHost *:443\n\n")
                cnf.write("<VirtualHost *:80>\n")
                cnf.write("\tServerAdmin %s\n" % boto.config.get("Trac", "server_admin").strip())
                cnf.write("\tServerName %s\n" % domain)
                cnf.write("\tRewriteEngine On\n")
                cnf.write("\tRewriteRule ^(.*)$ https://%s$1\n" % domain)
                cnf.write("</VirtualHost>\n\n")
                cnf.write("<VirtualHost *:443>\n")
            else:
                cnf.write("<VirtualHost *:80>\n")
            cnf.write("\tServerAdmin %s\n" % boto.config.get("Trac", "server_admin").strip())
            cnf.write("\tServerName %s\n" % domain)
            cnf.write("\tDocumentRoot %s\n" % boto.config.get("Trac", "home").strip())
            cnf.write("\t<Directory %s>\n" % boto.config.get("Trac", "home").strip())
            cnf.write("\t\tOptions FollowSymLinks Indexes MultiViews\n")
            cnf.write("\t\tAllowOverride All\n")
            cnf.write("\t\tOrder allow,deny\n")
            cnf.write("\t\tallow from all\n")
            cnf.write("\t</Directory>\n")
            cnf.write("\t<Location />\n")
            cnf.write("\t\tAuthType Basic\n")
            cnf.write("\t\tAuthName \"%s\"\n" % boto.config.get("Trac", "name"))
            cnf.write("\t\tRequire valid-user\n")
            cnf.write("\t\tAuthUserFile /mnt/apache/passwd/passwords\n")
            cnf.write("\t</Location>\n")

            data_dir = boto.config.get("Trac", "data_dir")
            for env in os.listdir(data_dir):
                if(env[0] != "."):
                    cnf.write("\t<Location /trac/%s>\n" % env)
                    cnf.write("\t\tSetHandler mod_python\n")
                    cnf.write("\t\tPythonInterpreter main_interpreter\n")
                    cnf.write("\t\tPythonHandler trac.web.modpython_frontend\n")
                    cnf.write("\t\tPythonOption TracEnv %s/%s\n" % (data_dir, env))
                    cnf.write("\t\tPythonOption TracUriRoot /trac/%s\n" % env)
                    cnf.write("\t</Location>\n")

            svn_dir = boto.config.get("Trac", "svn_dir")
            for env in os.listdir(svn_dir):
                if(env[0] != "."):
                    cnf.write("\t<Location /svn/%s>\n" % env)
                    cnf.write("\t\tDAV svn\n")
                    cnf.write("\t\tSVNPath %s/%s\n" % (svn_dir, env))
                    cnf.write("\t</Location>\n")

            cnf.write("\tErrorLog /var/log/apache2/error.log\n")
            cnf.write("\tLogLevel warn\n")
            cnf.write("\tCustomLog /var/log/apache2/access.log combined\n")
            cnf.write("\tServerSignature On\n")
            SSLCertificateFile = boto.config.get("Trac", "SSLCertificateFile")
            if SSLCertificateFile:
                cnf.write("\tSSLEngine On\n")
                cnf.write("\tSSLCertificateFile %s\n" % SSLCertificateFile)

                SSLCertificateKeyFile = boto.config.get("Trac", "SSLCertificateKeyFile")
                if SSLCertificateKeyFile:
                    cnf.write("\tSSLCertificateKeyFile %s\n" % SSLCertificateKeyFile)

                SSLCertificateChainFile = boto.config.get("Trac", "SSLCertificateChainFile")
                if SSLCertificateChainFile:
                    cnf.write("\tSSLCertificateChainFile %s\n" % SSLCertificateChainFile)
            cnf.write("</VirtualHost>\n")
            cnf.close()
            self.run("a2ensite %s" % domain_info[0])
            self.run("/etc/init.d/apache2 force-reload")

    def main(self):
        self.install()
        self.setup_vhost()
boto-2.20.1/boto/pyami/launch_ami.py000077500000000000000000000166401225267101000173010ustar00rootroot00000000000000
#!/usr/bin/env python
# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import getopt
import sys
import imp
import time
import boto

usage_string = """
SYNOPSIS
    launch_ami.py -a ami_id [-b script_bucket] [-s script_name]
                  [-m module] [-c class_name] [-r]
                  [-g group] [-k key_name] [-n num_instances]
                  [-w] [extra_data]
    Where:
        ami_id - the id of the AMI you wish to launch
        module - The name of the Python module containing the class you
                 want to run when the instance is started.  If you use this
                 option the Python module must already be stored on the
                 instance in a location that is on the Python path.
        script_name - The name of a local Python module that you would like
                      to have copied to S3 and then run on the instance
                      when it is started.  The specified module must be
                      import'able (i.e. in your local Python path).  It
                      will then be copied to the specified bucket in S3
                      (see the -b option).  Once the new instance(s)
                      start up the script will be copied from S3 and then
                      run locally on the instance.
        class_name - The name of the class to be instantiated within the
                     module or script file specified.
        script_bucket - the name of the bucket in which the script will be
                        stored
        group - the name of the security group the instance will run in
        key_name - the name of the keypair to use when launching the AMI
        num_instances - how many instances of the AMI to launch (default 1)
        input_queue_name - Name of SQS to read input messages from
        output_queue_name - Name of SQS to write output messages to
        extra_data - additional name-value pairs that will be passed as
                     userdata to the newly launched instance.  These should
                     be of the form "name=value"
        The -r option reloads the Python module to S3 without launching
        another instance.  This can be useful during debugging to allow
        you to test a new version of your script without shutting down
        your instance and starting up another one.
        The -w option tells the script to run synchronously, meaning to
        wait until the instance is actually up and running.  It then prints
        the IP address and internal and external DNS names before exiting.
""" def usage(): print usage_string sys.exit() def main(): try: opts, args = getopt.getopt(sys.argv[1:], 'a:b:c:g:hi:k:m:n:o:rs:w', ['ami', 'bucket', 'class', 'group', 'help', 'inputqueue', 'keypair', 'module', 'numinstances', 'outputqueue', 'reload', 'script_name', 'wait']) except: usage() params = {'module_name' : None, 'script_name' : None, 'class_name' : None, 'script_bucket' : None, 'group' : 'default', 'keypair' : None, 'ami' : None, 'num_instances' : 1, 'input_queue_name' : None, 'output_queue_name' : None} reload = None wait = None for o, a in opts: if o in ('-a', '--ami'): params['ami'] = a if o in ('-b', '--bucket'): params['script_bucket'] = a if o in ('-c', '--class'): params['class_name'] = a if o in ('-g', '--group'): params['group'] = a if o in ('-h', '--help'): usage() if o in ('-i', '--inputqueue'): params['input_queue_name'] = a if o in ('-k', '--keypair'): params['keypair'] = a if o in ('-m', '--module'): params['module_name'] = a if o in ('-n', '--num_instances'): params['num_instances'] = int(a) if o in ('-o', '--outputqueue'): params['output_queue_name'] = a if o in ('-r', '--reload'): reload = True if o in ('-s', '--script'): params['script_name'] = a if o in ('-w', '--wait'): wait = True # check required fields required = ['ami'] for pname in required: if not params.get(pname, None): print '%s is required' % pname usage() if params['script_name']: # first copy the desired module file to S3 bucket if reload: print 'Reloading module %s to S3' % params['script_name'] else: print 'Copying module %s to S3' % params['script_name'] l = imp.find_module(params['script_name']) c = boto.connect_s3() bucket = c.get_bucket(params['script_bucket']) key = bucket.new_key(params['script_name']+'.py') key.set_contents_from_file(l[0]) params['script_md5'] = key.md5 # we have everything we need, now build userdata string l = [] for k, v in params.items(): if v: l.append('%s=%s' % (k, v)) c = boto.connect_ec2() l.append('aws_access_key_id=%s' % c.aws_access_key_id) l.append('aws_secret_access_key=%s' % c.aws_secret_access_key) for kv in args: l.append(kv) s = '|'.join(l) if not reload: rs = c.get_all_images([params['ami']]) img = rs[0] r = img.run(user_data=s, key_name=params['keypair'], security_groups=[params['group']], max_count=params.get('num_instances', 1)) print 'AMI: %s - %s (Started)' % (params['ami'], img.location) print 'Reservation %s contains the following instances:' % r.id for i in r.instances: print '\t%s' % i.id if wait: running = False while not running: time.sleep(30) [i.update() for i in r.instances] status = [i.state for i in r.instances] print status if status.count('running') == len(r.instances): running = True for i in r.instances: print 'Instance: %s' % i.ami_launch_index print 'Public DNS Name: %s' % i.public_dns_name print 'Private DNS Name: %s' % i.private_dns_name if __name__ == "__main__": main() boto-2.20.1/boto/pyami/scriptbase.py000066400000000000000000000026161225267101000173330ustar00rootroot00000000000000import os import sys from boto.utils import ShellCommand, get_ts import boto import boto.utils class ScriptBase: def __init__(self, config_file=None): self.instance_id = boto.config.get('Instance', 'instance-id', 'default') self.name = self.__class__.__name__ self.ts = get_ts() if config_file: boto.config.read(config_file) def notify(self, subject, body=''): boto.utils.notify(subject, body) def mkdir(self, path): if not os.path.isdir(path): try: os.mkdir(path) except: boto.log.error('Error creating directory: %s' % path) def umount(self, path): if 
os.path.ismount(path): self.run('umount %s' % path) def run(self, command, notify=True, exit_on_error=False, cwd=None): self.last_command = ShellCommand(command, cwd=cwd) if self.last_command.status != 0: boto.log.error('Error running command: "%s". Output: "%s"' % (command, self.last_command.output)) if notify: self.notify('Error encountered', \ 'Error running the following command:\n\t%s\n\nCommand output:\n\t%s' % \ (command, self.last_command.output)) if exit_on_error: sys.exit(-1) return self.last_command.status def main(self): pass boto-2.20.1/boto/pyami/startup.py000066400000000000000000000046471225267101000167040ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import sys import boto from boto.utils import find_class from boto import config from boto.pyami.scriptbase import ScriptBase class Startup(ScriptBase): def run_scripts(self): scripts = config.get('Pyami', 'scripts') if scripts: for script in scripts.split(','): script = script.strip(" ") try: pos = script.rfind('.') if pos > 0: mod_name = script[0:pos] cls_name = script[pos+1:] cls = find_class(mod_name, cls_name) boto.log.info('Running Script: %s' % script) s = cls() s.main() else: boto.log.warning('Trouble parsing script: %s' % script) except Exception, e: boto.log.exception('Problem Running Script: %s. Startup process halting.' 
% script) raise e def main(self): self.run_scripts() self.notify('Startup Completed for %s' % config.get('Instance', 'instance-id')) if __name__ == "__main__": if not config.has_section('loggers'): boto.set_file_logger('startup', '/var/log/boto.log') sys.path.append(config.get('Pyami', 'working_dir')) su = Startup() su.main() boto-2.20.1/boto/rds/000077500000000000000000000000001225267101000142665ustar00rootroot00000000000000boto-2.20.1/boto/rds/__init__.py000066400000000000000000001777131225267101000164170ustar00rootroot00000000000000# Copyright (c) 2009-2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import urllib from boto.connection import AWSQueryConnection from boto.rds.dbinstance import DBInstance from boto.rds.dbsecuritygroup import DBSecurityGroup from boto.rds.optiongroup import OptionGroup, OptionGroupOption from boto.rds.parametergroup import ParameterGroup from boto.rds.dbsnapshot import DBSnapshot from boto.rds.event import Event from boto.rds.regioninfo import RDSRegionInfo from boto.rds.dbsubnetgroup import DBSubnetGroup from boto.rds.vpcsecuritygroupmembership import VPCSecurityGroupMembership def regions(): """ Get all available regions for the RDS service. :rtype: list :return: A list of :class:`boto.rds.regioninfo.RDSRegionInfo` """ return [RDSRegionInfo(name='us-east-1', endpoint='rds.amazonaws.com'), RDSRegionInfo(name='us-gov-west-1', endpoint='rds.us-gov-west-1.amazonaws.com'), RDSRegionInfo(name='eu-west-1', endpoint='rds.eu-west-1.amazonaws.com'), RDSRegionInfo(name='us-west-1', endpoint='rds.us-west-1.amazonaws.com'), RDSRegionInfo(name='us-west-2', endpoint='rds.us-west-2.amazonaws.com'), RDSRegionInfo(name='sa-east-1', endpoint='rds.sa-east-1.amazonaws.com'), RDSRegionInfo(name='ap-northeast-1', endpoint='rds.ap-northeast-1.amazonaws.com'), RDSRegionInfo(name='ap-southeast-1', endpoint='rds.ap-southeast-1.amazonaws.com'), RDSRegionInfo(name='ap-southeast-2', endpoint='rds.ap-southeast-2.amazonaws.com'), ] def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.rds.RDSConnection`. Any additional parameters after the region_name are passed on to the connect method of the region object. :type: str :param region_name: The name of the region to connect to. 
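    A minimal usage sketch (the region name here is just an example)::

        import boto.rds
        conn = boto.rds.connect_to_region('us-west-2')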
:rtype: :class:`boto.rds.RDSConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None #boto.set_stream_logger('rds') class RDSConnection(AWSQueryConnection): DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'rds.amazonaws.com' APIVersion = '2013-05-15' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): if not region: region = RDSRegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['hmac-v4'] # DB Instance methods def get_all_dbinstances(self, instance_id=None, max_records=None, marker=None): """ Retrieve all the DBInstances in your account. :type instance_id: str :param instance_id: DB Instance identifier. If supplied, only information this instance will be returned. Otherwise, info about all DB Instances will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.rds.dbinstance.DBInstance` """ params = {} if instance_id: params['DBInstanceIdentifier'] = instance_id if max_records: params['MaxRecords'] = max_records if marker: params['Marker'] = marker return self.get_list('DescribeDBInstances', params, [('DBInstance', DBInstance)]) def create_dbinstance(self, id, allocated_storage, instance_class, master_username, master_password, port=3306, engine='MySQL5.1', db_name=None, param_group=None, security_groups=None, availability_zone=None, preferred_maintenance_window=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, engine_version=None, auto_minor_version_upgrade=True, character_set_name = None, db_subnet_group_name = None, license_model = None, option_group_name = None, iops=None, vpc_security_groups=None, ): # API version: 2012-09-17 # Parameter notes: # ================= # id should be db_instance_identifier according to API docs but has been left # id for backwards compatibility # # security_groups should be db_security_groups according to API docs but has been left # security_groups for backwards compatibility # # master_password should be master_user_password according to API docs but has been left # master_password for backwards compatibility # # instance_class should be db_instance_class according to API docs but has been left # instance_class for backwards compatibility """ Create a new DBInstance. :type id: str :param id: Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens :type allocated_storage: int :param allocated_storage: Initially allocated storage size, in GBs. Valid values are depending on the engine value. 
            * MySQL = 5--1024
            * oracle-se1 = 10--1024
            * oracle-se = 10--1024
            * oracle-ee = 10--1024
            * sqlserver-ee = 200--1024
            * sqlserver-se = 200--1024
            * sqlserver-ex = 30--1024
            * sqlserver-web = 30--1024

        :type instance_class: str
        :param instance_class: The compute and memory capacity of
            the DBInstance. Valid values are:

            * db.m1.small
            * db.m1.large
            * db.m1.xlarge
            * db.m2.xlarge
            * db.m2.2xlarge
            * db.m2.4xlarge

        :type engine: str
        :param engine: Name of database engine. Defaults to MySQL but can be:

            * MySQL
            * oracle-se1
            * oracle-se
            * oracle-ee
            * sqlserver-ee
            * sqlserver-se
            * sqlserver-ex
            * sqlserver-web

        :type master_username: str
        :param master_username: Name of master user for the DBInstance.

            * MySQL must be:
              - 1--16 alphanumeric characters
              - first character must be a letter
              - cannot be a reserved MySQL word
            * Oracle must be:
              - 1--30 alphanumeric characters
              - first character must be a letter
              - cannot be a reserved Oracle word
            * SQL Server must be:
              - 1--128 alphanumeric characters
              - first character must be a letter
              - cannot be a reserved SQL Server word

        :type master_password: str
        :param master_password: Password of master user for the DBInstance.

            * MySQL must be 8--41 alphanumeric characters
            * Oracle must be 8--30 alphanumeric characters
            * SQL Server must be 8--128 alphanumeric characters.

        :type port: int
        :param port: Port number on which database accepts connections.
            Valid values [1115-65535].

            * MySQL defaults to 3306
            * Oracle defaults to 1521
            * SQL Server defaults to 1433 and _cannot_ be 1434 or 3389

        :type db_name: str
        :param db_name:
            * MySQL: Name of a database to create when the DBInstance
              is created. Default is to create no databases.
              Must contain 1--64 alphanumeric characters and cannot
              be a reserved MySQL word.
            * Oracle: The Oracle System ID (SID) of the created DB instances.
              Default is ORCL. Cannot be longer than 8 characters.
            * SQL Server: Not applicable and must be None.

        :type param_group: str or ParameterGroup object
        :param param_group: Name of DBParameterGroup or ParameterGroup
            instance to associate with this DBInstance.  If no groups are
            specified no parameter groups will be used.

        :type security_groups: list of str or list of DBSecurityGroup objects
        :param security_groups: List of names of DBSecurityGroup to
            authorize on this DBInstance.

        :type availability_zone: str
        :param availability_zone: Name of the availability zone to place
            DBInstance into.

        :type preferred_maintenance_window: str
        :param preferred_maintenance_window: The weekly time range (in UTC)
            during which maintenance can occur. Default is
            Sun:05:00-Sun:09:00

        :type backup_retention_period: int
        :param backup_retention_period: The number of days for which
            automated backups are retained. Setting this to zero disables
            automated backups.

        :type preferred_backup_window: str
        :param preferred_backup_window: The daily time range during which
            automated backups are created (if enabled). Must be in
            hh24:mi-hh24:mi format (UTC).

        :type multi_az: bool
        :param multi_az: If True, specifies the DB Instance will be
            deployed in multiple availability zones. For Microsoft SQL
            Server, must be set to false. You cannot set the AvailabilityZone
            parameter if the MultiAZ parameter is set to true.

        :type engine_version: str
        :param engine_version: The version number of the database engine to
            use.
            * MySQL format example: 5.1.42
            * Oracle format example: 11.2.0.2.v2
            * SQL Server format example: 10.50.2789.0.v1

        :type auto_minor_version_upgrade: bool
        :param auto_minor_version_upgrade: Indicates that minor engine
            upgrades will be applied automatically to the Read Replica
            during the maintenance window. Default is True.

        :type character_set_name: str
        :param character_set_name: For supported engines, indicates
            that the DB Instance should be associated with the specified
            CharacterSet.

        :type db_subnet_group_name: str
        :param db_subnet_group_name: A DB Subnet Group to associate with this
            DB Instance.  If there is no DB Subnet Group, then it is a
            non-VPC DB instance.

        :type license_model: str
        :param license_model: License model information for this DB Instance.
            Valid values are:

            - license-included
            - bring-your-own-license
            - general-public-license

            Not all license types are supported on all engines.

        :type option_group_name: str
        :param option_group_name: Indicates that the DB Instance should be
            associated with the specified option group.

        :type iops: int
        :param iops: The amount of IOPS (input/output operations per second)
            to be provisioned for the DB Instance. Can be modified at a later
            date. Must scale linearly. For every 1000 IOPS provisioned, you
            must allocate 100 GB of storage space. This scales up to
            1 TB / 10 000 IOPS for MySQL and Oracle. MSSQL is limited to
            700 GB / 7 000 IOPS. If you specify a value, it must be at least
            1000 IOPS and you must allocate 100 GB of storage.

        :type vpc_security_groups: list of str or a VPCSecurityGroupMembership object
        :param vpc_security_groups: List of VPC security group ids or a list
            of VPCSecurityGroupMembership objects this DBInstance should be a
            member of

        :rtype: :class:`boto.rds.dbinstance.DBInstance`
        :return: The new db instance.
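        A minimal usage sketch (identifiers and credentials below are
        placeholders, not real values)::

            conn = boto.rds.connect_to_region('us-east-1')
            db = conn.create_dbinstance(id='mydb-instance',
                                        allocated_storage=10,
                                        instance_class='db.m1.small',
                                        master_username='admin',
                                        master_password='notarealpassword')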
""" # boto argument alignment with AWS API parameter names: # ===================================================== # arg => AWS parameter # allocated_storage => AllocatedStorage # auto_minor_version_update => AutoMinorVersionUpgrade # availability_zone => AvailabilityZone # backup_retention_period => BackupRetentionPeriod # character_set_name => CharacterSetName # db_instance_class => DBInstanceClass # db_instance_identifier => DBInstanceIdentifier # db_name => DBName # db_parameter_group_name => DBParameterGroupName # db_security_groups => DBSecurityGroups.member.N # db_subnet_group_name => DBSubnetGroupName # engine => Engine # engine_version => EngineVersion # license_model => LicenseModel # master_username => MasterUsername # master_user_password => MasterUserPassword # multi_az => MultiAZ # option_group_name => OptionGroupName # port => Port # preferred_backup_window => PreferredBackupWindow # preferred_maintenance_window => PreferredMaintenanceWindow # vpc_security_groups => VpcSecurityGroupIds.member.N params = { 'AllocatedStorage': allocated_storage, 'AutoMinorVersionUpgrade': str(auto_minor_version_upgrade).lower() if auto_minor_version_upgrade else None, 'AvailabilityZone': availability_zone, 'BackupRetentionPeriod': backup_retention_period, 'CharacterSetName': character_set_name, 'DBInstanceClass': instance_class, 'DBInstanceIdentifier': id, 'DBName': db_name, 'DBParameterGroupName': (param_group.name if isinstance(param_group, ParameterGroup) else param_group), 'DBSubnetGroupName': db_subnet_group_name, 'Engine': engine, 'EngineVersion': engine_version, 'Iops': iops, 'LicenseModel': license_model, 'MasterUsername': master_username, 'MasterUserPassword': master_password, 'MultiAZ': str(multi_az).lower() if multi_az else None, 'OptionGroupName': option_group_name, 'Port': port, 'PreferredBackupWindow': preferred_backup_window, 'PreferredMaintenanceWindow': preferred_maintenance_window, } if security_groups: l = [] for group in security_groups: if isinstance(group, DBSecurityGroup): l.append(group.name) else: l.append(group) self.build_list_params(params, l, 'DBSecurityGroups.member') if vpc_security_groups: l = [] for vpc_grp in vpc_security_groups: if isinstance(vpc_grp, VPCSecurityGroupMembership): l.append(vpc_grp.vpc_group) else: l.append(vpc_grp) self.build_list_params(params, l, 'VpcSecurityGroupIds.member') # Remove any params set to None for k, v in params.items(): if v is None: del(params[k]) return self.get_object('CreateDBInstance', params, DBInstance) def create_dbinstance_read_replica(self, id, source_id, instance_class=None, port=3306, availability_zone=None, auto_minor_version_upgrade=None): """ Create a new DBInstance Read Replica. :type id: str :param id: Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens :type source_id: str :param source_id: Unique identifier for the DB Instance for which this DB Instance will act as a Read Replica. :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Default is to inherit from the source DB Instance. Valid values are: * db.m1.small * db.m1.large * db.m1.xlarge * db.m2.xlarge * db.m2.2xlarge * db.m2.4xlarge :type port: int :param port: Port number on which database accepts connections. Default is to inherit from source DB Instance. Valid values [1115-65535]. Defaults to 3306. 
:type availability_zone: str :param availability_zone: Name of the availability zone to place DBInstance into. :type auto_minor_version_upgrade: bool :param auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the Read Replica during the maintenance window. Default is to inherit this value from the source DB Instance. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The new db instance. """ params = {'DBInstanceIdentifier': id, 'SourceDBInstanceIdentifier': source_id} if instance_class: params['DBInstanceClass'] = instance_class if port: params['Port'] = port if availability_zone: params['AvailabilityZone'] = availability_zone if auto_minor_version_upgrade is not None: if auto_minor_version_upgrade is True: params['AutoMinorVersionUpgrade'] = 'true' else: params['AutoMinorVersionUpgrade'] = 'false' return self.get_object('CreateDBInstanceReadReplica', params, DBInstance) def promote_read_replica(self, id, backup_retention_period=None, preferred_backup_window=None): """ Promote a Read Replica to a standalone DB Instance. :type id: str :param id: Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens :type backup_retention_period: int :param backup_retention_period: The number of days for which automated backups are retained. Setting this to zero disables automated backups. :type preferred_backup_window: str :param preferred_backup_window: The daily time range during which automated backups are created (if enabled). Must be in h24:mi-hh24:mi format (UTC). :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The new db instance. """ params = {'DBInstanceIdentifier': id} if backup_retention_period is not None: params['BackupRetentionPeriod'] = backup_retention_period if preferred_backup_window: params['PreferredBackupWindow'] = preferred_backup_window return self.get_object('PromoteReadReplica', params, DBInstance) def modify_dbinstance(self, id, param_group=None, security_groups=None, preferred_maintenance_window=None, master_password=None, allocated_storage=None, instance_class=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, apply_immediately=False, iops=None, vpc_security_groups=None, new_instance_id=None, ): """ Modify an existing DBInstance. :type id: str :param id: Unique identifier for the new instance. :type param_group: str or ParameterGroup object :param param_group: Name of DBParameterGroup or ParameterGroup instance to associate with this DBInstance. If no groups are specified no parameter groups will be used. :type security_groups: list of str or list of DBSecurityGroup objects :param security_groups: List of names of DBSecurityGroup to authorize on this DBInstance. :type preferred_maintenance_window: str :param preferred_maintenance_window: The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00 :type master_password: str :param master_password: Password of master user for the DBInstance. Must be 4-15 alphanumeric characters. :type allocated_storage: int :param allocated_storage: The new allocated storage size, in GBs. Valid values are [5-1024] :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Changes will be applied at next maintenance window unless apply_immediately is True. 
Valid values are: * db.m1.small * db.m1.large * db.m1.xlarge * db.m2.xlarge * db.m2.2xlarge * db.m2.4xlarge :type apply_immediately: bool :param apply_immediately: If true, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window. :type backup_retention_period: int :param backup_retention_period: The number of days for which automated backups are retained. Setting this to zero disables automated backups. :type preferred_backup_window: str :param preferred_backup_window: The daily time range during which automated backups are created (if enabled). Must be in h24:mi-hh24:mi format (UTC). :type multi_az: bool :param multi_az: If True, specifies the DB Instance will be deployed in multiple availability zones. :type iops: int :param iops: The amount of IOPS (input/output operations per second) to Provisioned for the DB Instance. Can be modified at a later date. Must scale linearly. For every 1000 IOPS provision, you must allocated 100 GB of storage space. This scales up to 1 TB / 10 000 IOPS for MySQL and Oracle. MSSQL is limited to 700 GB / 7 000 IOPS. If you specify a value, it must be at least 1000 IOPS and you must allocate 100 GB of storage. :type vpc_security_groups: list of str or a VPCSecurityGroupMembership object :param vpc_security_groups: List of VPC security group ids or a VPCSecurityGroupMembership object this DBInstance should be a member of :type new_instance_id: str :param new_instance_id: New name to rename the DBInstance to. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The modified db instance. """ params = {'DBInstanceIdentifier': id} if param_group: params['DBParameterGroupName'] = (param_group.name if isinstance(param_group, ParameterGroup) else param_group) if security_groups: l = [] for group in security_groups: if isinstance(group, DBSecurityGroup): l.append(group.name) else: l.append(group) self.build_list_params(params, l, 'DBSecurityGroups.member') if vpc_security_groups: l = [] for vpc_grp in vpc_security_groups: if isinstance(vpc_grp, VPCSecurityGroupMembership): l.append(vpc_grp.vpc_group) else: l.append(vpc_grp) self.build_list_params(params, l, 'VpcSecurityGroupIds.member') if preferred_maintenance_window: params['PreferredMaintenanceWindow'] = preferred_maintenance_window if master_password: params['MasterUserPassword'] = master_password if allocated_storage: params['AllocatedStorage'] = allocated_storage if instance_class: params['DBInstanceClass'] = instance_class if backup_retention_period is not None: params['BackupRetentionPeriod'] = backup_retention_period if preferred_backup_window: params['PreferredBackupWindow'] = preferred_backup_window if multi_az: params['MultiAZ'] = 'true' if apply_immediately: params['ApplyImmediately'] = 'true' if iops: params['Iops'] = iops if new_instance_id: params['NewDBInstanceIdentifier'] = new_instance_id return self.get_object('ModifyDBInstance', params, DBInstance) def delete_dbinstance(self, id, skip_final_snapshot=False, final_snapshot_id=''): """ Delete an existing DBInstance. :type id: str :param id: Unique identifier for the new instance. :type skip_final_snapshot: bool :param skip_final_snapshot: This parameter determines whether a final db snapshot is created before the instance is deleted. If True, no snapshot is created. If False, a snapshot is created before deleting the instance. :type final_snapshot_id: str :param final_snapshot_id: If a final snapshot is requested, this is the identifier used for that snapshot. 
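        A minimal usage sketch (identifiers below are placeholders)::

            conn.delete_dbinstance('mydb-instance',
                                   final_snapshot_id='mydb-final-snapshot')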
:rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The deleted db instance. """ params = {'DBInstanceIdentifier': id} if skip_final_snapshot: params['SkipFinalSnapshot'] = 'true' else: params['SkipFinalSnapshot'] = 'false' params['FinalDBSnapshotIdentifier'] = final_snapshot_id return self.get_object('DeleteDBInstance', params, DBInstance) def reboot_dbinstance(self, id): """ Reboot DBInstance. :type id: str :param id: Unique identifier of the instance. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The rebooting db instance. """ params = {'DBInstanceIdentifier': id} return self.get_object('RebootDBInstance', params, DBInstance) # DBParameterGroup methods def get_all_dbparameter_groups(self, groupname=None, max_records=None, marker=None): """ Get all parameter groups associated with your account in a region. :type groupname: str :param groupname: The name of the DBParameter group to retrieve. If not provided, all DBParameter groups will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.ec2.parametergroup.ParameterGroup` """ params = {} if groupname: params['DBParameterGroupName'] = groupname if max_records: params['MaxRecords'] = max_records if marker: params['Marker'] = marker return self.get_list('DescribeDBParameterGroups', params, [('DBParameterGroup', ParameterGroup)]) def get_all_dbparameters(self, groupname, source=None, max_records=None, marker=None): """ Get all parameters associated with a ParameterGroup :type groupname: str :param groupname: The name of the DBParameter group to retrieve. :type source: str :param source: Specifies which parameters to return. If not specified, all parameters will be returned. Valid values are: user|system|engine-default :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: :class:`boto.ec2.parametergroup.ParameterGroup` :return: The ParameterGroup """ params = {'DBParameterGroupName': groupname} if source: params['Source'] = source if max_records: params['MaxRecords'] = max_records if marker: params['Marker'] = marker pg = self.get_object('DescribeDBParameters', params, ParameterGroup) pg.name = groupname return pg def create_parameter_group(self, name, engine='MySQL5.1', description=''): """ Create a new dbparameter group for your account. :type name: string :param name: The name of the new dbparameter group :type engine: str :param engine: Name of database engine. :type description: string :param description: The description of the new dbparameter group :rtype: :class:`boto.rds.parametergroup.ParameterGroup` :return: The newly created ParameterGroup """ params = {'DBParameterGroupName': name, 'DBParameterGroupFamily': engine, 'Description': description} return self.get_object('CreateDBParameterGroup', params, ParameterGroup) def modify_parameter_group(self, name, parameters=None): """ Modify a ParameterGroup for your account. 
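        A minimal usage sketch (assumes ``my_params`` is a list of
        :class:`boto.rds.parametergroup.Parameter` objects that have
        already been fetched and modified)::

            conn.modify_parameter_group('my-param-group', my_params)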
        :type name: string
        :param name: The name of the ParameterGroup to modify

        :type parameters: list of :class:`boto.rds.parametergroup.Parameter`
        :param parameters: The new parameters

        :rtype: :class:`boto.rds.parametergroup.ParameterGroup`
        :return: The newly created ParameterGroup
        """
        params = {'DBParameterGroupName': name}
        for i in range(0, len(parameters)):
            parameter = parameters[i]
            parameter.merge(params, i+1)
        return self.get_list('ModifyDBParameterGroup', params,
                             ParameterGroup, verb='POST')

    def reset_parameter_group(self, name, reset_all_params=False,
                              parameters=None):
        """
        Resets some or all of the parameters of a ParameterGroup to the
        default value

        :type name: string
        :param name: The name of the ParameterGroup to reset

        :type reset_all_params: bool
        :param reset_all_params: If True, all parameters in the group are
            reset and the parameters argument is ignored.

        :type parameters: list of :class:`boto.rds.parametergroup.Parameter`
        :param parameters: The parameters to reset.  If not supplied,
            all parameters will be reset.
        """
        params = {'DBParameterGroupName': name}
        if reset_all_params:
            params['ResetAllParameters'] = 'true'
        else:
            params['ResetAllParameters'] = 'false'
        if parameters:
            for i in range(0, len(parameters)):
                parameter = parameters[i]
                parameter.merge(params, i+1)
        return self.get_status('ResetDBParameterGroup', params)

    def delete_parameter_group(self, name):
        """
        Delete a ParameterGroup from your account.

        :type name: string
        :param name: The name of the ParameterGroup to delete
        """
        params = {'DBParameterGroupName': name}
        return self.get_status('DeleteDBParameterGroup', params)

    # DBSecurityGroup methods

    def get_all_dbsecurity_groups(self, groupname=None, max_records=None,
                                  marker=None):
        """
        Get all security groups associated with your account in a region.

        :type groupname: str
        :param groupname: The name of the security group to retrieve.  If
            not provided, all security groups will be returned.

        :type max_records: int
        :param max_records: The maximum number of records to be returned.
            If more results are available, a MoreToken will be returned in
            the response that can be used to retrieve additional records.
            Default is 100.

        :type marker: str
        :param marker: The marker provided by a previous request.

        :rtype: list
        :return: A list of :class:`boto.rds.dbsecuritygroup.DBSecurityGroup`
        """
        params = {}
        if groupname:
            params['DBSecurityGroupName'] = groupname
        if max_records:
            params['MaxRecords'] = max_records
        if marker:
            params['Marker'] = marker
        return self.get_list('DescribeDBSecurityGroups', params,
                             [('DBSecurityGroup', DBSecurityGroup)])

    def create_dbsecurity_group(self, name, description=None):
        """
        Create a new security group for your account.
        This will create the security group within the region you
        are currently connected to.

        :type name: string
        :param name: The name of the new security group

        :type description: string
        :param description: The description of the new security group

        :rtype: :class:`boto.rds.dbsecuritygroup.DBSecurityGroup`
        :return: The newly created DBSecurityGroup
        """
        params = {'DBSecurityGroupName': name}
        if description:
            params['DBSecurityGroupDescription'] = description
        group = self.get_object('CreateDBSecurityGroup', params,
                                DBSecurityGroup)
        group.name = name
        group.description = description
        return group

    def delete_dbsecurity_group(self, name):
        """
        Delete a DBSecurityGroup from your account.
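        A minimal usage sketch (the group name is a placeholder)::

            conn.delete_dbsecurity_group('my-old-security-group')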
:type key_name: string :param key_name: The name of the DBSecurityGroup to delete """ params = {'DBSecurityGroupName': name} return self.get_status('DeleteDBSecurityGroup', params) def authorize_dbsecurity_group(self, group_name, cidr_ip=None, ec2_security_group_name=None, ec2_security_group_owner_id=None): """ Add a new rule to an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR a CIDR block but not both. :type group_name: string :param group_name: The name of the security group you are adding the rule to. :type ec2_security_group_name: string :param ec2_security_group_name: The name of the EC2 security group you are granting access to. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The ID of the owner of the EC2 security group you are granting access to. :type cidr_ip: string :param cidr_ip: The CIDR block you are providing access to. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing :rtype: bool :return: True if successful. """ params = {'DBSecurityGroupName': group_name} if ec2_security_group_name: params['EC2SecurityGroupName'] = ec2_security_group_name if ec2_security_group_owner_id: params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id if cidr_ip: params['CIDRIP'] = urllib.quote(cidr_ip) return self.get_object('AuthorizeDBSecurityGroupIngress', params, DBSecurityGroup) def revoke_dbsecurity_group(self, group_name, ec2_security_group_name=None, ec2_security_group_owner_id=None, cidr_ip=None): """ Remove an existing rule from an existing security group. You need to pass in either ec2_security_group_name and ec2_security_group_owner_id OR a CIDR block. :type group_name: string :param group_name: The name of the security group you are removing the rule from. :type ec2_security_group_name: string :param ec2_security_group_name: The name of the EC2 security group from which you are removing access. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The ID of the owner of the EC2 security from which you are removing access. :type cidr_ip: string :param cidr_ip: The CIDR block from which you are removing access. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing :rtype: bool :return: True if successful. """ params = {'DBSecurityGroupName': group_name} if ec2_security_group_name: params['EC2SecurityGroupName'] = ec2_security_group_name if ec2_security_group_owner_id: params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id if cidr_ip: params['CIDRIP'] = cidr_ip return self.get_object('RevokeDBSecurityGroupIngress', params, DBSecurityGroup) # For backwards compatibility. This method was improperly named # in previous versions. I have renamed it to match the others. revoke_security_group = revoke_dbsecurity_group # DBSnapshot methods def get_all_dbsnapshots(self, snapshot_id=None, instance_id=None, max_records=None, marker=None): """ Get information about DB Snapshots. :type snapshot_id: str :param snapshot_id: The unique identifier of an RDS snapshot. If not provided, all RDS snapshots will be returned. :type instance_id: str :param instance_id: The identifier of a DBInstance. If provided, only the DBSnapshots related to that instance will be returned. If not provided, all RDS snapshots will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. 
:type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.rds.dbsnapshot.DBSnapshot` """ params = {} if snapshot_id: params['DBSnapshotIdentifier'] = snapshot_id if instance_id: params['DBInstanceIdentifier'] = instance_id if max_records: params['MaxRecords'] = max_records if marker: params['Marker'] = marker return self.get_list('DescribeDBSnapshots', params, [('DBSnapshot', DBSnapshot)]) def create_dbsnapshot(self, snapshot_id, dbinstance_id): """ Create a new DB snapshot. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :type dbinstance_id: string :param dbinstance_id: The source identifier for the RDS instance from which the snapshot is created. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot """ params = {'DBSnapshotIdentifier': snapshot_id, 'DBInstanceIdentifier': dbinstance_id} return self.get_object('CreateDBSnapshot', params, DBSnapshot) def copy_dbsnapshot(self, source_snapshot_id, target_snapshot_id): """ Copies the specified DBSnapshot. :type source_snapshot_id: string :param source_snapshot_id: The identifier for the source DB snapshot. :type target_snapshot_id: string :param target_snapshot_id: The identifier for the copied snapshot. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot. """ params = {'SourceDBSnapshotIdentifier': source_snapshot_id, 'TargetDBSnapshotIdentifier': target_snapshot_id} return self.get_object('CopyDBSnapshot', params, DBSnapshot) def delete_dbsnapshot(self, identifier): """ Delete a DBSnapshot :type identifier: string :param identifier: The identifier of the DBSnapshot to delete """ params = {'DBSnapshotIdentifier': identifier} return self.get_object('DeleteDBSnapshot', params, DBSnapshot) def restore_dbinstance_from_dbsnapshot(self, identifier, instance_id, instance_class, port=None, availability_zone=None, multi_az=None, auto_minor_version_upgrade=None, db_subnet_group_name=None): """ Create a new DBInstance from a DB snapshot. :type identifier: string :param identifier: The identifier for the DBSnapshot :type instance_id: string :param instance_id: The source identifier for the RDS instance from which the snapshot is created. :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Valid values are: db.m1.small | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge :type port: int :param port: Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306. :type availability_zone: str :param availability_zone: Name of the availability zone to place DBInstance into. :type multi_az: bool :param multi_az: If True, specifies the DB Instance will be deployed in multiple availability zones. Default is the API default. :type auto_minor_version_upgrade: bool :param auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the Read Replica during the maintenance window. Default is the API default. :type db_subnet_group_name: str :param db_subnet_group_name: A DB Subnet Group to associate with this DB Instance. If there is no DB Subnet Group, then it is a non-VPC DB instance. 
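        A minimal usage sketch (identifiers below are placeholders)::

            db = conn.restore_dbinstance_from_dbsnapshot(
                'mydb-final-snapshot', 'mydb-restored', 'db.m1.small')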
        :rtype: :class:`boto.rds.dbinstance.DBInstance`
        :return: The newly created DBInstance
        """
        params = {'DBSnapshotIdentifier': identifier,
                  'DBInstanceIdentifier': instance_id,
                  'DBInstanceClass': instance_class}
        if port:
            params['Port'] = port
        if availability_zone:
            params['AvailabilityZone'] = availability_zone
        if multi_az is not None:
            params['MultiAZ'] = str(multi_az).lower()
        if auto_minor_version_upgrade is not None:
            params['AutoMinorVersionUpgrade'] = str(auto_minor_version_upgrade).lower()
        if db_subnet_group_name is not None:
            params['DBSubnetGroupName'] = db_subnet_group_name
        return self.get_object('RestoreDBInstanceFromDBSnapshot',
                               params, DBInstance)

    def restore_dbinstance_from_point_in_time(self, source_instance_id,
                                              target_instance_id,
                                              use_latest=False,
                                              restore_time=None,
                                              dbinstance_class=None,
                                              port=None,
                                              availability_zone=None,
                                              db_subnet_group_name=None):
        """
        Create a new DBInstance from a point in time.

        :type source_instance_id: string
        :param source_instance_id: The identifier for the source DBInstance.

        :type target_instance_id: string
        :param target_instance_id: The identifier of the new DBInstance.

        :type use_latest: bool
        :param use_latest: If True, the latest snapshot available will
            be used.

        :type restore_time: datetime
        :param restore_time: The date and time to restore from.  Only
            used if use_latest is False.

        :type dbinstance_class: str
        :param dbinstance_class: The compute and memory capacity of the
            DBInstance.  Valid values are:
            db.m1.small | db.m1.large | db.m1.xlarge |
            db.m2.2xlarge | db.m2.4xlarge

        :type port: int
        :param port: Port number on which database accepts connections.
            Valid values [1115-65535].  Defaults to 3306.

        :type availability_zone: str
        :param availability_zone: Name of the availability zone to place
            DBInstance into.

        :type db_subnet_group_name: str
        :param db_subnet_group_name: A DB Subnet Group to associate with this
            DB Instance.  If there is no DB Subnet Group, then it is a
            non-VPC DB instance.

        :rtype: :class:`boto.rds.dbinstance.DBInstance`
        :return: The newly created DBInstance
        """
        params = {'SourceDBInstanceIdentifier': source_instance_id,
                  'TargetDBInstanceIdentifier': target_instance_id}
        if use_latest:
            params['UseLatestRestorableTime'] = 'true'
        elif restore_time:
            params['RestoreTime'] = restore_time.isoformat()
        if dbinstance_class:
            params['DBInstanceClass'] = dbinstance_class
        if port:
            params['Port'] = port
        if availability_zone:
            params['AvailabilityZone'] = availability_zone
        if db_subnet_group_name is not None:
            params['DBSubnetGroupName'] = db_subnet_group_name
        return self.get_object('RestoreDBInstanceToPointInTime',
                               params, DBInstance)

    # Events

    def get_all_events(self, source_identifier=None, source_type=None,
                       start_time=None, end_time=None,
                       max_records=None, marker=None):
        """
        Get information about events related to your DBInstances,
        DBSecurityGroups and DBParameterGroups.

        :type source_identifier: str
        :param source_identifier: If supplied, the events returned will be
            limited to those that apply to the identified source.  The value
            of this parameter depends on the value of source_type.  If neither
            parameter is specified, all events in the time span will be
            returned.

        :type source_type: str
        :param source_type: Specifies how the source_identifier should
            be interpreted.  Valid values are:
            db-instance | db-security-group | db-parameter-group | db-snapshot

        :type start_time: datetime
        :param start_time: The beginning of the time interval for events.
            If not supplied, all available events will be returned.
        :type end_time: datetime
        :param end_time: The ending of the time interval for events.
            If not supplied, all available events will be returned.

        :type max_records: int
        :param max_records: The maximum number of records to be returned.
            If more results are available, a MoreToken will be returned in
            the response that can be used to retrieve additional records.
            Default is 100.

        :type marker: str
        :param marker: The marker provided by a previous request.

        :rtype: list
        :return: A list of :class:`boto.rds.event.Event`
        """
        params = {}
        if source_identifier and source_type:
            params['SourceIdentifier'] = source_identifier
            params['SourceType'] = source_type
        if start_time:
            params['StartTime'] = start_time.isoformat()
        if end_time:
            params['EndTime'] = end_time.isoformat()
        if max_records:
            params['MaxRecords'] = max_records
        if marker:
            params['Marker'] = marker
        return self.get_list('DescribeEvents', params, [('Event', Event)])

    def create_db_subnet_group(self, name, desc, subnet_ids):
        """
        Create a new Database Subnet Group.

        :type name: string
        :param name: The identifier for the db_subnet_group

        :type desc: string
        :param desc: A description of the db_subnet_group

        :type subnet_ids: list
        :param subnet_ids: A list of the subnet identifiers to include in the
            db_subnet_group

        :rtype: :class:`boto.rds.dbsubnetgroup.DBSubnetGroup`
        :return: the created db_subnet_group
        """
        params = {'DBSubnetGroupName': name,
                  'DBSubnetGroupDescription': desc}
        self.build_list_params(params, subnet_ids, 'SubnetIds.member')
        return self.get_object('CreateDBSubnetGroup', params, DBSubnetGroup)

    def delete_db_subnet_group(self, name):
        """
        Delete a Database Subnet Group.

        :type name: string
        :param name: The identifier of the db_subnet_group to delete

        :rtype: :class:`boto.rds.dbsubnetgroup.DBSubnetGroup`
        :return: The deleted db_subnet_group.
        """
        params = {'DBSubnetGroupName': name}
        return self.get_object('DeleteDBSubnetGroup', params, DBSubnetGroup)

    def get_all_db_subnet_groups(self, name=None, max_records=None,
                                 marker=None):
        """
        Retrieve all the DBSubnetGroups in your account.

        :type name: str
        :param name: DBSubnetGroup name.  If supplied, only information about
            this DBSubnetGroup will be returned.  Otherwise, info about all
            DBSubnetGroups will be returned.

        :type max_records: int
        :param max_records: The maximum number of records to be returned.
            If more results are available, a Token will be returned in
            the response that can be used to retrieve additional records.
            Default is 100.

        :type marker: str
        :param marker: The marker provided by a previous request.

        :rtype: list
        :return: A list of :class:`boto.rds.dbsubnetgroup.DBSubnetGroup`
        """
        params = dict()
        if name is not None:
            params['DBSubnetGroupName'] = name
        if max_records is not None:
            params['MaxRecords'] = max_records
        if marker is not None:
            params['Marker'] = marker

        return self.get_list('DescribeDBSubnetGroups', params,
                             [('DBSubnetGroup', DBSubnetGroup)])

    def modify_db_subnet_group(self, name, description=None, subnet_ids=None):
        """
        Modify an existing Database Subnet Group.
:type name: string :param name: The identifier of the db_subnet_group to modify :type description: str :param description: The new description of the db_subnet_group :type subnet_ids: list :param subnet_ids: The new list of subnet identifiers to include in the db_subnet_group :rtype: :class:`boto.rds.dbsubnetgroup.DBSubnetGroup` :return: The modified db_subnet_group """ params = {'DBSubnetGroupName': name} if description is not None: params['DBSubnetGroupDescription'] = description if subnet_ids is not None: self.build_list_params(params, subnet_ids, 'SubnetIds.member') return self.get_object('ModifyDBSubnetGroup', params, DBSubnetGroup) def create_option_group(self, name, engine_name, major_engine_version, description=None): """ Create a new option group for your account. This will create the option group within the region you are currently connected to. :type name: string :param name: The name of the new option group :type engine_name: string :param engine_name: Specifies the name of the engine that this option group should be associated with. :type major_engine_version: string :param major_engine_version: Specifies the major version of the engine that this option group should be associated with. :type description: string :param description: The description of the new option group :rtype: :class:`boto.rds.optiongroup.OptionGroup` :return: The newly created OptionGroup """ params = { 'OptionGroupName': name, 'EngineName': engine_name, 'MajorEngineVersion': major_engine_version, 'OptionGroupDescription': description, } group = self.get_object('CreateOptionGroup', params, OptionGroup) group.name = name group.engine_name = engine_name group.major_engine_version = major_engine_version group.description = description return group def delete_option_group(self, name): """ Delete an OptionGroup from your account. :type name: string :param name: The name of the OptionGroup to delete """ params = {'OptionGroupName': name} return self.get_status('DeleteOptionGroup', params) def describe_option_groups(self, name=None, engine_name=None, major_engine_version=None, max_records=100, marker=None): """ Describes the available option groups. :type name: str :param name: The name of the option group to describe. Cannot be supplied together with engine_name or major_engine_version. :type engine_name: str :param engine_name: Filters the list of option groups to only include groups associated with a specific database engine. :type major_engine_version: str :param major_engine_version: Filters the list of option groups to only include groups associated with a specific database engine version. If specified, then engine_name must also be specified. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a marker will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.rds.optiongroup.OptionGroup` """ params = {} if name: params['OptionGroupName'] = name elif engine_name and major_engine_version: params['EngineName'] = engine_name params['MajorEngineVersion'] = major_engine_version if max_records: params['MaxRecords'] = int(max_records) if marker: params['Marker'] = marker return self.get_list('DescribeOptionGroups', params, [ ('OptionGroup', OptionGroup) ]) def describe_option_group_options(self, engine_name=None, major_engine_version=None, max_records=100, marker=None): """ Describes the available option group options. 
:type engine_name: str :param engine_name: Filters the list of option groups to only include groups associated with a specific database engine. :type major_engine_version: str :param major_engine_version: Filters the list of option groups to only include groups associated with a specific database engine version. If specified, then engine_name must also be specified. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a marker will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.rds.optiongroup.OptionGroupOption` """ params = {} if engine_name and major_engine_version: params['EngineName'] = engine_name params['MajorEngineVersion'] = major_engine_version if max_records: params['MaxRecords'] = int(max_records) if marker: params['Marker'] = marker return self.get_list('DescribeOptionGroupOptions', params, [ ('OptionGroupOptions', OptionGroupOption) ]) boto-2.20.1/boto/rds/dbinstance.py000066400000000000000000000417261225267101000167620ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.rds.dbsecuritygroup import DBSecurityGroup from boto.rds.parametergroup import ParameterGroup from boto.rds.statusinfo import StatusInfo from boto.rds.dbsubnetgroup import DBSubnetGroup from boto.rds.vpcsecuritygroupmembership import VPCSecurityGroupMembership from boto.resultset import ResultSet class DBInstance(object): """ Represents a RDS DBInstance Properties reference available from the AWS documentation at http://goo.gl/sC2Kn :ivar connection: connection :ivar id: The name and identifier of the DBInstance :ivar create_time: The date and time of creation :ivar engine: The database engine being used :ivar status: The status of the database in a string. e.g. "available" :ivar allocated_storage: The size of the disk in gigabytes (int). :ivar auto_minor_version_upgrade: Indicates that minor version patches are applied automatically. :ivar endpoint: A tuple that describes the hostname and port of the instance. This is only available when the database is in status "available". :ivar instance_class: Contains the name of the compute and memory capacity class of the DB Instance. :ivar master_username: The username that is set as master username at creation time. 
:ivar parameter_groups: Provides the list of DB Parameter Groups applied to this DB Instance. :ivar security_groups: Provides List of DB Security Group elements containing only DBSecurityGroup.Name and DBSecurityGroup.Status subelements. :ivar availability_zone: Specifies the name of the Availability Zone the DB Instance is located in. :ivar backup_retention_period: Specifies the number of days for which automatic DB Snapshots are retained. :ivar preferred_backup_window: Specifies the daily time range during which automated backups are created if automated backups are enabled, as determined by the backup_retention_period. :ivar preferred_maintenance_window: Specifies the weekly time range (in UTC) during which system maintenance can occur. (string) :ivar latest_restorable_time: Specifies the latest time to which a database can be restored with point-in-time restore. (string) :ivar multi_az: Boolean that specifies if the DB Instance is a Multi-AZ deployment. :ivar iops: The current number of provisioned IOPS for the DB Instance. Can be None if this is a standard instance. :ivar vpc_security_groups: List of VPC Security Group Membership elements containing only VpcSecurityGroupMembership.VpcSecurityGroupId and VpcSecurityGroupMembership.Status subelements. :ivar pending_modified_values: Specifies that changes to the DB Instance are pending. This element is only included when changes are pending. Specific changes are identified by subelements. :ivar read_replica_dbinstance_identifiers: List of read replicas associated with this DB instance. :ivar status_infos: The status of a Read Replica. If the instance is not a read replica, this will be blank. :ivar character_set_name: If present, specifies the name of the character set that this instance is associated with. :ivar subnet_group: Specifies information on the subnet group associated with the DB instance, including the name, description, and subnets in the subnet group. :ivar engine_version: Indicates the database engine version. :ivar license_model: License model information for this DB instance. 
""" def __init__(self, connection=None, id=None): self.connection = connection self.id = id self.create_time = None self.engine = None self.status = None self.allocated_storage = None self.auto_minor_version_upgrade = None self.endpoint = None self.instance_class = None self.master_username = None self.parameter_groups = [] self.security_groups = [] self.read_replica_dbinstance_identifiers = [] self.availability_zone = None self.backup_retention_period = None self.preferred_backup_window = None self.preferred_maintenance_window = None self.latest_restorable_time = None self.multi_az = False self.iops = None self.vpc_security_groups = None self.pending_modified_values = None self._in_endpoint = False self._port = None self._address = None self.status_infos = None self.character_set_name = None self.subnet_group = None self.engine_version = None self.license_model = None def __repr__(self): return 'DBInstance:%s' % self.id def startElement(self, name, attrs, connection): if name == 'Endpoint': self._in_endpoint = True elif name == 'DBParameterGroups': self.parameter_groups = ResultSet([('DBParameterGroup', ParameterGroup)]) return self.parameter_groups elif name == 'DBSecurityGroups': self.security_groups = ResultSet([('DBSecurityGroup', DBSecurityGroup)]) return self.security_groups elif name == 'VpcSecurityGroups': self.vpc_security_groups = ResultSet([('VpcSecurityGroupMembership', VPCSecurityGroupMembership)]) return self.vpc_security_groups elif name == 'PendingModifiedValues': self.pending_modified_values = PendingModifiedValues() return self.pending_modified_values elif name == 'ReadReplicaDBInstanceIdentifiers': self.read_replica_dbinstance_identifiers = \ ReadReplicaDBInstanceIdentifiers() return self.read_replica_dbinstance_identifiers elif name == 'StatusInfos': self.status_infos = ResultSet([ ('DBInstanceStatusInfo', StatusInfo) ]) return self.status_infos elif name == 'DBSubnetGroup': self.subnet_group = DBSubnetGroup() return self.subnet_group return None def endElement(self, name, value, connection): if name == 'DBInstanceIdentifier': self.id = value elif name == 'DBInstanceStatus': self.status = value elif name == 'InstanceCreateTime': self.create_time = value elif name == 'Engine': self.engine = value elif name == 'DBInstanceStatus': self.status = value elif name == 'AllocatedStorage': self.allocated_storage = int(value) elif name == 'AutoMinorVersionUpgrade': self.auto_minor_version_upgrade = value.lower() == 'true' elif name == 'DBInstanceClass': self.instance_class = value elif name == 'MasterUsername': self.master_username = value elif name == 'Port': if self._in_endpoint: self._port = int(value) elif name == 'Address': if self._in_endpoint: self._address = value elif name == 'Endpoint': self.endpoint = (self._address, self._port) self._in_endpoint = False elif name == 'AvailabilityZone': self.availability_zone = value elif name == 'BackupRetentionPeriod': self.backup_retention_period = int(value) elif name == 'LatestRestorableTime': self.latest_restorable_time = value elif name == 'PreferredMaintenanceWindow': self.preferred_maintenance_window = value elif name == 'PreferredBackupWindow': self.preferred_backup_window = value elif name == 'MultiAZ': if value.lower() == 'true': self.multi_az = True elif name == 'Iops': self.iops = int(value) elif name == 'CharacterSetName': self.character_set_name = value elif name == 'EngineVersion': self.engine_version = value elif name == 'LicenseModel': self.license_model = value else: setattr(self, name, value) @property def 
security_group(self): """ Provide backward compatibility for previous security_group attribute. """ if len(self.security_groups) > 0: return self.security_groups[-1] else: return None @property def parameter_group(self): """ Provide backward compatibility for previous parameter_group attribute. """ if len(self.parameter_groups) > 0: return self.parameter_groups[-1] else: return None def snapshot(self, snapshot_id): """ Create a new DB snapshot of this DBInstance. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot """ return self.connection.create_dbsnapshot(snapshot_id, self.id) def reboot(self): """ Reboot this DBInstance :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The rebooting DBInstance """ return self.connection.reboot_dbinstance(self.id) def update(self, validate=False): """ Update the DB instance's status information by making a call to fetch the current instance attributes from the service. :type validate: bool :param validate: By default, if RDS returns no data about the instance the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from RDS. """ rs = self.connection.get_all_dbinstances(self.id) if len(rs) > 0: for i in rs: if i.id == self.id: self.__dict__.update(i.__dict__) elif validate: raise ValueError('%s is not a valid Instance ID' % self.id) return self.status def stop(self, skip_final_snapshot=False, final_snapshot_id=''): """ Delete this DBInstance. :type skip_final_snapshot: bool :param skip_final_snapshot: This parameter determines whether a final db snapshot is created before the instance is deleted. If True, no snapshot is created. If False, a snapshot is created before deleting the instance. :type final_snapshot_id: str :param final_snapshot_id: If a final snapshot is requested, this is the identifier used for that snapshot. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The deleted db instance. """ return self.connection.delete_dbinstance(self.id, skip_final_snapshot, final_snapshot_id) def modify(self, param_group=None, security_groups=None, preferred_maintenance_window=None, master_password=None, allocated_storage=None, instance_class=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, iops=None, vpc_security_groups=None, apply_immediately=False, new_instance_id=None): """ Modify this DBInstance. :type param_group: str :param param_group: Name of DBParameterGroup to associate with this DBInstance. :type security_groups: list of str or list of DBSecurityGroup objects :param security_groups: List of names of DBSecurityGroup to authorize on this DBInstance. :type preferred_maintenance_window: str :param preferred_maintenance_window: The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00 :type master_password: str :param master_password: Password of master user for the DBInstance. Must be 4-15 alphanumeric characters. :type allocated_storage: int :param allocated_storage: The new allocated storage size, in GBs. Valid values are [5-1024] :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Changes will be applied at next maintenance window unless apply_immediately is True. 
Valid values are: * db.m1.small * db.m1.large * db.m1.xlarge * db.m2.xlarge * db.m2.2xlarge * db.m2.4xlarge :type apply_immediately: bool :param apply_immediately: If true, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window. :type new_instance_id: str :param new_instance_id: The new DB instance identifier. :type backup_retention_period: int :param backup_retention_period: The number of days for which automated backups are retained. Setting this to zero disables automated backups. :type preferred_backup_window: str :param preferred_backup_window: The daily time range during which automated backups are created (if enabled). Must be in hh24:mi-hh24:mi format (UTC). :type multi_az: bool :param multi_az: If True, specifies the DB Instance will be deployed in multiple availability zones. :type iops: int :param iops: The amount of IOPS (input/output operations per second) to provision for the DB Instance. Can be modified at a later date. Must scale linearly. For every 1000 IOPS provisioned, you must allocate 100 GB of storage space. This scales up to 1 TB / 10 000 IOPS for MySQL and Oracle. MSSQL is limited to 700 GB / 7 000 IOPS. If you specify a value, it must be at least 1000 IOPS and you must allocate 100 GB of storage. :type vpc_security_groups: list :param vpc_security_groups: List of VPCSecurityGroupMembership that this DBInstance is a member of. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The modified db instance. """ return self.connection.modify_dbinstance(self.id, param_group, security_groups, preferred_maintenance_window, master_password, allocated_storage, instance_class, backup_retention_period, preferred_backup_window, multi_az, apply_immediately, iops, vpc_security_groups, new_instance_id) class PendingModifiedValues(dict): def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name != 'PendingModifiedValues': self[name] = value class ReadReplicaDBInstanceIdentifiers(list): def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'ReadReplicaDBInstanceIdentifier': self.append(value) boto-2.20.1/boto/rds/dbsecuritygroup.py000066400000000000000000000147731225267101000201000ustar00rootroot00000000000000# Copyright (c) 2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
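# The classes below are typically reached through an RDS connection rather
# than constructed directly. A minimal usage sketch (hedged: the region,
# group name, description and CIDR block are hypothetical example values,
# not anything defined in this module):
#
#     import boto.rds
#
#     conn = boto.rds.connect_to_region('us-east-1')
#     group = conn.create_dbsecurity_group('web-rds',
#                                          'RDS access for the web tier')
#     group.authorize(cidr_ip='10.0.0.0/24')   # open a CIDR block
#     group.revoke(cidr_ip='10.0.0.0/24')      # and close it again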
""" Represents an DBSecurityGroup """ from boto.ec2.securitygroup import SecurityGroup class DBSecurityGroup(object): """ Represents an RDS database security group Properties reference available from the AWS documentation at http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/API_DeleteDBSecurityGroup.html :ivar Status: The current status of the security group. Possible values are [ active, ? ]. Reference documentation lacks specifics of possibilities :ivar connection: :py:class:`boto.rds.RDSConnection` associated with the current object :ivar description: The description of the security group :ivar ec2_groups: List of :py:class:`EC2 Security Group ` objects that this security group PERMITS :ivar ip_ranges: List of :py:class:`boto.rds.dbsecuritygroup.IPRange` objects (containing CIDR addresses) that this security group PERMITS :ivar name: Name of the security group :ivar owner_id: ID of the owner of the security group. Can be 'None' """ def __init__(self, connection=None, owner_id=None, name=None, description=None): self.connection = connection self.owner_id = owner_id self.name = name self.description = description self.ec2_groups = [] self.ip_ranges = [] def __repr__(self): return 'DBSecurityGroup:%s' % self.name def startElement(self, name, attrs, connection): if name == 'IPRange': cidr = IPRange(self) self.ip_ranges.append(cidr) return cidr elif name == 'EC2SecurityGroup': ec2_grp = EC2SecurityGroup(self) self.ec2_groups.append(ec2_grp) return ec2_grp else: return None def endElement(self, name, value, connection): if name == 'OwnerId': self.owner_id = value elif name == 'DBSecurityGroupName': self.name = value elif name == 'DBSecurityGroupDescription': self.description = value elif name == 'IPRanges': pass else: setattr(self, name, value) def delete(self): return self.connection.delete_dbsecurity_group(self.name) def authorize(self, cidr_ip=None, ec2_group=None): """ Add a new rule to this DBSecurity group. You need to pass in either a CIDR block to authorize or and EC2 SecurityGroup. :type cidr_ip: string :param cidr_ip: A valid CIDR IP range to authorize :type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup` :param ec2_group: An EC2 security group to authorize :rtype: bool :return: True if successful. """ if isinstance(ec2_group, SecurityGroup): group_name = ec2_group.name group_owner_id = ec2_group.owner_id else: group_name = None group_owner_id = None return self.connection.authorize_dbsecurity_group(self.name, cidr_ip, group_name, group_owner_id) def revoke(self, cidr_ip=None, ec2_group=None): """ Revoke access to a CIDR range or EC2 SecurityGroup. You need to pass in either a CIDR block or an EC2 SecurityGroup from which to revoke access. :type cidr_ip: string :param cidr_ip: A valid CIDR IP range to revoke :type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup` :param ec2_group: An EC2 security group to revoke :rtype: bool :return: True if successful. 
""" if isinstance(ec2_group, SecurityGroup): group_name = ec2_group.name group_owner_id = ec2_group.owner_id return self.connection.revoke_dbsecurity_group( self.name, ec2_security_group_name=group_name, ec2_security_group_owner_id=group_owner_id) # Revoking by CIDR IP range return self.connection.revoke_dbsecurity_group( self.name, cidr_ip=cidr_ip) class IPRange(object): """ Describes a CIDR address range for use in a DBSecurityGroup :ivar cidr_ip: IP Address range """ def __init__(self, parent=None): self.parent = parent self.cidr_ip = None self.status = None def __repr__(self): return 'IPRange:%s' % self.cidr_ip def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'CIDRIP': self.cidr_ip = value elif name == 'Status': self.status = value else: setattr(self, name, value) class EC2SecurityGroup(object): """ Describes an EC2 security group for use in a DBSecurityGroup """ def __init__(self, parent=None): self.parent = parent self.name = None self.owner_id = None def __repr__(self): return 'EC2SecurityGroup:%s' % self.name def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'EC2SecurityGroupName': self.name = value elif name == 'EC2SecurityGroupOwnerId': self.owner_id = value else: setattr(self, name, value) boto-2.20.1/boto/rds/dbsnapshot.py000066400000000000000000000143761225267101000170200ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
class DBSnapshot(object): """ Represents a RDS DB Snapshot Properties reference available from the AWS documentation at http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/API_DBSnapshot.html :ivar engine_version: Specifies the version of the database engine :ivar license_model: License model information for the restored DB instance :ivar allocated_storage: Specifies the allocated storage size in gigabytes (GB) :ivar availability_zone: Specifies the name of the Availability Zone the DB Instance was located in at the time of the DB Snapshot :ivar connection: boto.rds.RDSConnection associated with the current object :ivar engine: Specifies the name of the database engine :ivar id: Specifies the identifier for the DB Snapshot (DBSnapshotIdentifier) :ivar instance_create_time: Specifies the time (UTC) when the DB Instance, from which the snapshot was taken, was created :ivar instance_id: Specifies the DBInstanceIdentifier of the DB Instance this DB Snapshot was created from (DBInstanceIdentifier) :ivar master_username: Provides the master username for the DB Instance :ivar port: Specifies the port that the database engine was listening on at the time of the snapshot :ivar snapshot_create_time: Provides the time (UTC) when the snapshot was taken :ivar status: Specifies the status of this DB Snapshot. Possible values are [ available, backing-up, creating, deleted, deleting, failed, modifying, rebooting, resetting-master-credentials ] :ivar iops: Specifies the Provisioned IOPS (I/O operations per second) value of the DB instance at the time of the snapshot. :ivar option_group_name: Provides the option group name for the DB snapshot. :ivar percent_progress: The percentage of the estimated data that has been transferred. :ivar snapshot_type: Provides the type of the DB snapshot. :ivar source_region: The region that the DB snapshot was created in or copied from. :ivar vpc_id: Provides the Vpc Id associated with the DB snapshot. 
""" def __init__(self, connection=None, id=None): self.connection = connection self.id = id self.engine = None self.engine_version = None self.snapshot_create_time = None self.instance_create_time = None self.port = None self.status = None self.availability_zone = None self.master_username = None self.allocated_storage = None self.instance_id = None self.availability_zone = None self.license_model = None self.iops = None self.option_group_name = None self.percent_progress = None self.snapshot_type = None self.source_region = None self.vpc_id = None def __repr__(self): return 'DBSnapshot:%s' % self.id def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Engine': self.engine = value elif name == 'EngineVersion': self.engine_version = value elif name == 'InstanceCreateTime': self.instance_create_time = value elif name == 'SnapshotCreateTime': self.snapshot_create_time = value elif name == 'DBInstanceIdentifier': self.instance_id = value elif name == 'DBSnapshotIdentifier': self.id = value elif name == 'Port': self.port = int(value) elif name == 'Status': self.status = value elif name == 'AvailabilityZone': self.availability_zone = value elif name == 'MasterUsername': self.master_username = value elif name == 'AllocatedStorage': self.allocated_storage = int(value) elif name == 'SnapshotTime': self.time = value elif name == 'LicenseModel': self.license_model = value elif name == 'Iops': self.iops = int(value) elif name == 'OptionGroupName': self.option_group_name = value elif name == 'PercentProgress': self.percent_progress = int(value) elif name == 'SnapshotType': self.snapshot_type = value elif name == 'SourceRegion': self.source_region = value elif name == 'VpcId': self.vpc_id = value else: setattr(self, name, value) def update(self, validate=False): """ Update the DB snapshot's status information by making a call to fetch the current snapshot attributes from the service. :type validate: bool :param validate: By default, if EC2 returns no data about the instance the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2. """ rs = self.connection.get_all_dbsnapshots(self.id) if len(rs) > 0: for i in rs: if i.id == self.id: self.__dict__.update(i.__dict__) elif validate: raise ValueError('%s is not a valid Snapshot ID' % self.id) return self.status boto-2.20.1/boto/rds/dbsubnetgroup.py000066400000000000000000000054051225267101000175270ustar00rootroot00000000000000# Copyright (c) 2013 Franc Carter - franc.carter@gmail.com # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a DBSubnetGroup """ class DBSubnetGroup(object): """ Represents an RDS database subnet group Properties reference available from the AWS documentation at http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/API_DeleteDBSubnetGroup.html :ivar status: The current status of the subnet group. Possible values include 'active'; the reference documentation does not enumerate all possible values :ivar connection: boto.rds.RDSConnection associated with the current object :ivar description: The description of the subnet group :ivar subnet_ids: List of subnet identifiers in the group :ivar name: Name of the subnet group :ivar vpc_id: The ID of the VPC the subnets are inside """ def __init__(self, connection=None, name=None, description=None, subnet_ids=None): self.connection = connection self.name = name self.description = description if subnet_ids is not None: self.subnet_ids = subnet_ids else: self.subnet_ids = [] self.vpc_id = None self.status = None def __repr__(self): return 'DBSubnetGroup:%s' % self.name def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'SubnetIdentifier': self.subnet_ids.append(value) elif name == 'DBSubnetGroupName': self.name = value elif name == 'DBSubnetGroupDescription': self.description = value elif name == 'VpcId': self.vpc_id = value elif name == 'SubnetGroupStatus': self.status = value else: setattr(self, name, value) boto-2.20.1/boto/rds/event.py000066400000000000000000000035241225267101000157650ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
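# Event objects are returned by RDSConnection.get_all_events. A minimal
# usage sketch (hedged: assumes an existing boto.rds.RDSConnection
# ``conn``; the source identifier is a hypothetical instance name):
#
#     from datetime import datetime, timedelta
#
#     start = datetime.utcnow() - timedelta(hours=24)
#     events = conn.get_all_events(source_identifier='mydb',
#                                  source_type='db-instance',
#                                  start_time=start)
#     for event in events:
#         print event.date, event.source_identifier, event.message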
class Event(object): def __init__(self, connection=None): self.connection = connection self.message = None self.source_identifier = None self.source_type = None self.engine = None self.date = None def __repr__(self): return '"%s"' % self.message def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'SourceIdentifier': self.source_identifier = value elif name == 'SourceType': self.source_type = value elif name == 'Message': self.message = value elif name == 'Date': self.date = value else: setattr(self, name, value) boto-2.20.1/boto/rds/optiongroup.py000066400000000000000000000364611225267101000172370ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an OptionGroup """ from boto.rds.dbsecuritygroup import DBSecurityGroup from boto.resultset import ResultSet class OptionGroup(object): """ Represents an RDS option group Properties reference available from the AWS documentation at http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_OptionGroup.html :ivar connection: :py:class:`boto.rds.RDSConnection` associated with the current object :ivar name: Name of the option group :ivar description: The description of the option group :ivar engine_name: The name of the database engine to use :ivar major_engine_version: The major version number of the engine to use :ivar allow_both_vpc_and_nonvpc: Indicates whether this option group can be applied to both VPC and non-VPC instances. The value ``True`` indicates the option group can be applied to both VPC and non-VPC instances. :ivar vpc_id: If AllowsVpcAndNonVpcInstanceMemberships is 'false', this field is blank. If AllowsVpcAndNonVpcInstanceMemberships is ``True`` and this field is blank, then this option group can be applied to both VPC and non-VPC instances. If this field contains a value, then this option group can only be applied to instances that are in the VPC indicated by this field. 
:ivar options: The list of :py:class:`boto.rds.optiongroup.Option` objects associated with the group """ def __init__(self, connection=None, name=None, engine_name=None, major_engine_version=None, description=None, allow_both_vpc_and_nonvpc=False, vpc_id=None): self.connection = connection self.name = name self.engine_name = engine_name self.major_engine_version = major_engine_version self.description = description self.allow_both_vpc_and_nonvpc = allow_both_vpc_and_nonvpc self.vpc_id = vpc_id self.options = [] def __repr__(self): return 'OptionGroup:%s' % self.name def startElement(self, name, attrs, connection): if name == 'Options': self.options = ResultSet([ ('Options', Option) ]) return self.options else: return None def endElement(self, name, value, connection): if name == 'OptionGroupName': self.name = value elif name == 'EngineName': self.engine_name = value elif name == 'MajorEngineVersion': self.major_engine_version = value elif name == 'OptionGroupDescription': self.description = value elif name == 'AllowsVpcAndNonVpcInstanceMemberships': if value.lower() == 'true': self.allow_both_vpc_and_nonvpc = True else: self.allow_both_vpc_and_nonvpc = False elif name == 'VpcId': self.vpc_id = value else: setattr(self, name, value) def delete(self): return self.connection.delete_option_group(self.name) class Option(object): """ Describes an Option for use in an OptionGroup :ivar name: The name of the option :ivar description: The description of the option. :ivar permanent: Indicate if this option is permanent. :ivar persistent: Indicate if this option is persistent. :ivar port: If required, the port configured for this option to use. :ivar settings: The option settings for this option. :ivar db_security_groups: If the option requires access to a port, then this DB Security Group allows access to the port. :ivar vpc_security_groups: If the option requires access to a port, then this VPC Security Group allows access to the port. 
""" def __init__(self, name=None, description=None, permanent=False, persistent=False, port=None, settings=None, db_security_groups=None, vpc_security_groups=None): self.name = name self.description = description self.permanent = permanent self.persistent = persistent self.port = port self.settings = settings self.db_security_groups = db_security_groups self.vpc_security_groups = vpc_security_groups if self.settings is None: self.settings = [] if self.db_security_groups is None: self.db_security_groups = [] if self.vpc_security_groups is None: self.vpc_security_groups = [] def __repr__(self): return 'Option:%s' % self.name def startElement(self, name, attrs, connection): if name == 'OptionSettings': self.settings = ResultSet([ ('OptionSettings', OptionSetting) ]) elif name == 'DBSecurityGroupMemberships': self.db_security_groups = ResultSet([ ('DBSecurityGroupMemberships', DBSecurityGroup) ]) elif name == 'VpcSecurityGroupMemberships': self.vpc_security_groups = ResultSet([ ('VpcSecurityGroupMemberships', VpcSecurityGroup) ]) else: return None def endElement(self, name, value, connection): if name == 'OptionName': self.name = value elif name == 'OptionDescription': self.description = value elif name == 'Permanent': if value.lower() == 'true': self.permenant = True else: self.permenant = False elif name == 'Persistent': if value.lower() == 'true': self.persistent = True else: self.persistent = False elif name == 'Port': self.port = int(value) else: setattr(self, name, value) class OptionSetting(object): """ Describes a OptionSetting for use in an Option :ivar name: The name of the option that has settings that you can set. :ivar description: The description of the option setting. :ivar value: The current value of the option setting. :ivar default_value: The default value of the option setting. :ivar allowed_values: The allowed values of the option setting. :ivar data_type: The data type of the option setting. :ivar apply_type: The DB engine specific parameter type. :ivar is_modifiable: A Boolean value that, when true, indicates the option setting can be modified from the default. :ivar is_collection: Indicates if the option setting is part of a collection. 
""" def __init__(self, name=None, description=None, value=None, default_value=False, allowed_values=None, data_type=None, apply_type=None, is_modifiable=False, is_collection=False): self.name = name self.description = description self.value = value self.default_value = default_value self.allowed_values = allowed_values self.data_type = data_type self.apply_type = apply_type self.is_modifiable = is_modifiable self.is_collection = is_collection def __repr__(self): return 'OptionSetting:%s' % self.name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Name': self.name = value elif name == 'Description': self.description = value elif name == 'Value': self.value = value elif name == 'DefaultValue': self.default_value = value elif name == 'AllowedValues': self.allowed_values = value elif name == 'DataType': self.data_type = value elif name == 'ApplyType': self.apply_type = value elif name == 'IsModifiable': if value.lower() == 'true': self.is_modifiable = True else: self.is_modifiable = False elif name == 'IsCollection': if value.lower() == 'true': self.is_collection = True else: self.is_collection = False else: setattr(self, name, value) class VpcSecurityGroup(object): """ Describes a VPC security group for use in a OptionGroup """ def __init__(self, vpc_id=None, status=None): self.vpc_id = vpc_id self.status = status def __repr__(self): return 'VpcSecurityGroup:%s' % self.vpc_id def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'VpcSecurityGroupId': self.vpc_id = value elif name == 'Status': self.status = value else: setattr(self, name, value) class OptionGroupOption(object): """ Describes a OptionGroupOption for use in an OptionGroup :ivar name: The name of the option :ivar description: The description of the option. :ivar engine_name: Engine name that this option can be applied to. :ivar major_engine_version: Indicates the major engine version that the option is available for. :ivar min_minor_engine_version: The minimum required engine version for the option to be applied. :ivar permanent: Indicate if this option is permanent. :ivar persistent: Indicate if this option is persistent. :ivar port_required: Specifies whether the option requires a port. :ivar default_port: If the option requires a port, specifies the default port for the option. :ivar settings: The option settings for this option. :ivar depends_on: List of all options that are prerequisites for this option. 
""" def __init__(self, name=None, description=None, engine_name=None, major_engine_version=None, min_minor_engine_version=None, permanent=False, persistent=False, port_required=False, default_port=None, settings=None, depends_on=None): self.name = name self.description = description self.engine_name = engine_name self.major_engine_version = major_engine_version self.min_minor_engine_version = min_minor_engine_version self.permanent = permanent self.persistent = persistent self.port_required = port_required self.default_port = default_port self.settings = settings self.depends_on = depends_on if self.settings is None: self.settings = [] if self.depends_on is None: self.depends_on = [] def __repr__(self): return 'OptionGroupOption:%s' % self.name def startElement(self, name, attrs, connection): if name == 'OptionGroupOptionSettings': self.settings = ResultSet([ ('OptionGroupOptionSettings', OptionGroupOptionSetting) ]) elif name == 'OptionsDependedOn': self.depends_on = [] else: return None def endElement(self, name, value, connection): if name == 'Name': self.name = value elif name == 'Description': self.description = value elif name == 'EngineName': self.engine_name = value elif name == 'MajorEngineVersion': self.major_engine_version = value elif name == 'MinimumRequiredMinorEngineVersion': self.min_minor_engine_version = value elif name == 'Permanent': if value.lower() == 'true': self.permenant = True else: self.permenant = False elif name == 'Persistent': if value.lower() == 'true': self.persistent = True else: self.persistent = False elif name == 'PortRequired': if value.lower() == 'true': self.port_required = True else: self.port_required = False elif name == 'DefaultPort': self.default_port = int(value) else: setattr(self, name, value) class OptionGroupOptionSetting(object): """ Describes a OptionGroupOptionSetting for use in an OptionGroupOption. :ivar name: The name of the option that has settings that you can set. :ivar description: The description of the option setting. :ivar value: The current value of the option setting. :ivar default_value: The default value of the option setting. :ivar allowed_values: The allowed values of the option setting. :ivar data_type: The data type of the option setting. :ivar apply_type: The DB engine specific parameter type. :ivar is_modifiable: A Boolean value that, when true, indicates the option setting can be modified from the default. :ivar is_collection: Indicates if the option setting is part of a collection. 
""" def __init__(self, name=None, description=None, default_value=False, allowed_values=None, apply_type=None, is_modifiable=False): self.name = name self.description = description self.default_value = default_value self.allowed_values = allowed_values self.apply_type = apply_type self.is_modifiable = is_modifiable def __repr__(self): return 'OptionGroupOptionSetting:%s' % self.name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'SettingName': self.name = value elif name == 'SettingDescription': self.description = value elif name == 'DefaultValue': self.default_value = value elif name == 'AllowedValues': self.allowed_values = value elif name == 'ApplyType': self.apply_type = value elif name == 'IsModifiable': if value.lower() == 'true': self.is_modifiable = True else: self.is_modifiable = False else: setattr(self, name, value) boto-2.20.1/boto/rds/parametergroup.py000066400000000000000000000157021225267101000177020ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
class ParameterGroup(dict): def __init__(self, connection=None): dict.__init__(self) self.connection = connection self.name = None self.description = None self.engine = None self._current_param = None def __repr__(self): return 'ParameterGroup:%s' % self.name def startElement(self, name, attrs, connection): if name == 'Parameter': if self._current_param: self[self._current_param.name] = self._current_param self._current_param = Parameter(self) return self._current_param def endElement(self, name, value, connection): if name == 'DBParameterGroupName': self.name = value elif name == 'Description': self.description = value elif name == 'Engine': self.engine = value else: setattr(self, name, value) def modifiable(self): mod = [] for key in self: p = self[key] if p.is_modifiable: mod.append(p) return mod def get_params(self): pg = self.connection.get_all_dbparameters(self.name) self.update(pg) def add_param(self, name, value, apply_method): param = Parameter() param.name = name param.value = value param.apply_method = apply_method self[name] = param class Parameter(object): """ Represents a RDS Parameter """ ValidTypes = {'integer': int, 'string': str, 'boolean': bool} ValidSources = ['user', 'system', 'engine-default'] ValidApplyTypes = ['static', 'dynamic'] ValidApplyMethods = ['immediate', 'pending-reboot'] def __init__(self, group=None, name=None): self.group = group self.name = name self._value = None self.type = 'string' self.source = None self.is_modifiable = True self.description = None self.apply_method = None self.apply_type = None self.allowed_values = None def __repr__(self): return 'Parameter:%s' % self.name def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'ParameterName': self.name = value elif name == 'ParameterValue': self._value = value elif name == 'DataType': if value in self.ValidTypes: self.type = value elif name == 'Source': if value in self.ValidSources: self.source = value elif name == 'IsModifiable': if value.lower() == 'true': self.is_modifiable = True else: self.is_modifiable = False elif name == 'Description': self.description = value elif name == 'ApplyType': if value in self.ValidApplyTypes: self.apply_type = value elif name == 'AllowedValues': self.allowed_values = value else: setattr(self, name, value) def merge(self, d, i): prefix = 'Parameters.member.%d.' 
% i if self.name: d[prefix+'ParameterName'] = self.name if self._value is not None: d[prefix+'ParameterValue'] = self._value if self.apply_method: d[prefix+'ApplyMethod'] = self.apply_method def _set_string_value(self, value): if not isinstance(value, (str, unicode)): raise ValueError('value must be of type str') if self.allowed_values: choices = self.allowed_values.split(',') if value not in choices: raise ValueError('value must be in %s' % self.allowed_values) self._value = value def _set_integer_value(self, value): if isinstance(value, str) or isinstance(value, unicode): value = int(value) if isinstance(value, int) or isinstance(value, long): if self.allowed_values: min, max = self.allowed_values.split('-') if value < int(min) or value > int(max): raise ValueError('range is %s' % self.allowed_values) self._value = value else: raise ValueError('value must be integer') def _set_boolean_value(self, value): if isinstance(value, bool): self._value = value elif isinstance(value, str) or isinstance(value, unicode): if value.lower() == 'true': self._value = True else: self._value = False else: raise ValueError('value must be boolean') def set_value(self, value): if self.type == 'string': self._set_string_value(value) elif self.type == 'integer': self._set_integer_value(value) elif self.type == 'boolean': self._set_boolean_value(value) else: raise TypeError('unknown type (%s)' % self.type) def get_value(self): if self._value is None: return self._value if self.type == 'string': return self._value elif self.type == 'integer': if not isinstance(self._value, int) and not isinstance(self._value, long): self._set_integer_value(self._value) return self._value elif self.type == 'boolean': if not isinstance(self._value, bool): self._set_boolean_value(self._value) return self._value else: raise TypeError('unknown type (%s)' % self.type) value = property(get_value, set_value, 'The value of the parameter') def apply(self, immediate=False): if immediate: self.apply_method = 'immediate' else: self.apply_method = 'pending-reboot' self.group.connection.modify_parameter_group(self.group.name, [self]) boto-2.20.1/boto/rds/regioninfo.py000066400000000000000000000026721225267101000170060ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
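# A minimal usage sketch for this module (hedged: the region name is an
# example; both forms below rely only on the public boto.rds helpers):
#
#     import boto.rds
#
#     # Look a region up by name ...
#     conn = boto.rds.connect_to_region('us-west-2')
#
#     # ... or connect through one of the RDSRegionInfo objects that
#     # boto.rds.regions() returns.
#     region = [r for r in boto.rds.regions() if r.name == 'us-west-2'][0]
#     conn = region.connect()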
# from boto.regioninfo import RegionInfo class RDSRegionInfo(RegionInfo): def __init__(self, connection=None, name=None, endpoint=None): from boto.rds import RDSConnection RegionInfo.__init__(self, connection, name, endpoint, RDSConnection) boto-2.20.1/boto/rds/statusinfo.py000066400000000000000000000037331225267101000170450ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class StatusInfo(object): """ Describes a status message. """ def __init__(self, status_type=None, normal=None, status=None, message=None): self.status_type = status_type self.normal = normal self.status = status self.message = message def __repr__(self): return 'StatusInfo:%s' % self.message def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'StatusType': self.status_type = value elif name == 'Normal': if value.lower() == 'true': self.normal = True else: self.normal = False elif name == 'Status': self.status = value elif name == 'Message': self.message = value else: setattr(self, name, value) boto-2.20.1/boto/rds/vpcsecuritygroupmembership.py000066400000000000000000000060731225267101000223570ustar00rootroot00000000000000# Copyright (c) 2013 Anthony Tonns http://www.corsis.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
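# A minimal inspection sketch (hedged: assumes ``db`` is a DBInstance
# retrieved from a VPC-based deployment, e.g. via
# conn.get_all_dbinstances('mydb')[0]):
#
#     for membership in db.vpc_security_groups or []:
#         print membership.vpc_group, membership.status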
""" Represents a VPCSecurityGroupMembership """ class VPCSecurityGroupMembership(object): """ Represents VPC Security Group that this RDS database is a member of Properties reference available from the AWS documentation at http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/\ API_VpcSecurityGroupMembership.html Example:: pri = "sg-abcdefgh" sec = "sg-hgfedcba" # Create with list of str db = c.create_dbinstance(... vpc_security_groups=[pri], ... ) # Modify with list of str db.modify(... vpc_security_groups=[pri,sec], ... ) # Create with objects memberships = [] membership = VPCSecurityGroupMembership() membership.vpc_group = pri memberships.append(membership) db = c.create_dbinstance(... vpc_security_groups=memberships, ... ) # Modify with objects memberships = d.vpc_security_groups membership = VPCSecurityGroupMembership() membership.vpc_group = sec memberships.append(membership) db.modify(... vpc_security_groups=memberships, ... ) :ivar connection: :py:class:`boto.rds.RDSConnection` associated with the current object :ivar vpc_group: This id of the VPC security group :ivar status: Status of the VPC security group membership ` objects that this RDS Instance is a member of """ def __init__(self, connection=None, status=None, vpc_group=None): self.connection = connection self.status = status self.vpc_group = vpc_group def __repr__(self): return 'VPCSecurityGroupMembership:%s' % self.vpc_group def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'VpcSecurityGroupId': self.vpc_group = value elif name == 'Status': self.status = value else: setattr(self, name, value) boto-2.20.1/boto/redshift/000077500000000000000000000000001225267101000153065ustar00rootroot00000000000000boto-2.20.1/boto/redshift/__init__.py000066400000000000000000000046771225267101000174350ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the AWS Redshift service. 
:rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.redshift.layer1 import RedshiftConnection cls = RedshiftConnection return [ RegionInfo(name='us-east-1', endpoint='redshift.us-east-1.amazonaws.com', connection_cls=cls), RegionInfo(name='us-west-2', endpoint='redshift.us-west-2.amazonaws.com', connection_cls=cls), RegionInfo(name='eu-west-1', endpoint='redshift.eu-west-1.amazonaws.com', connection_cls=cls), RegionInfo(name='ap-northeast-1', endpoint='redshift.ap-northeast-1.amazonaws.com', connection_cls=cls), RegionInfo(name='ap-southeast-1', endpoint='redshift.ap-southeast-1.amazonaws.com', connection_cls=cls), RegionInfo(name='ap-southeast-2', endpoint='redshift.ap-southeast-2.amazonaws.com', connection_cls=cls), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/redshift/exceptions.py000066400000000000000000000200501225267101000200360ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
#
from boto.exception import JSONResponseError


class ClusterNotFoundFault(JSONResponseError):
    pass


class InvalidClusterSnapshotStateFault(JSONResponseError):
    pass


class ClusterSnapshotNotFoundFault(JSONResponseError):
    pass


class ClusterSecurityGroupQuotaExceededFault(JSONResponseError):
    pass


class ReservedNodeOfferingNotFoundFault(JSONResponseError):
    pass


class InvalidSubnet(JSONResponseError):
    pass


class ClusterSubnetGroupQuotaExceededFault(JSONResponseError):
    pass


class InvalidClusterStateFault(JSONResponseError):
    pass


class InvalidClusterParameterGroupStateFault(JSONResponseError):
    pass


class ClusterParameterGroupAlreadyExistsFault(JSONResponseError):
    pass


class InvalidClusterSecurityGroupStateFault(JSONResponseError):
    pass


class InvalidRestoreFault(JSONResponseError):
    pass


class AuthorizationNotFoundFault(JSONResponseError):
    pass


class ResizeNotFoundFault(JSONResponseError):
    pass


class NumberOfNodesQuotaExceededFault(JSONResponseError):
    pass


class ClusterSnapshotAlreadyExistsFault(JSONResponseError):
    pass


class AuthorizationQuotaExceededFault(JSONResponseError):
    pass


class AuthorizationAlreadyExistsFault(JSONResponseError):
    pass


class ClusterSnapshotQuotaExceededFault(JSONResponseError):
    pass


class ReservedNodeNotFoundFault(JSONResponseError):
    pass


class ReservedNodeAlreadyExistsFault(JSONResponseError):
    pass


class ClusterSecurityGroupAlreadyExistsFault(JSONResponseError):
    pass


class ClusterParameterGroupNotFoundFault(JSONResponseError):
    pass


class ReservedNodeQuotaExceededFault(JSONResponseError):
    pass


class ClusterQuotaExceededFault(JSONResponseError):
    pass


class ClusterSubnetQuotaExceededFault(JSONResponseError):
    pass


class UnsupportedOptionFault(JSONResponseError):
    pass


class InvalidVPCNetworkStateFault(JSONResponseError):
    pass


class ClusterSecurityGroupNotFoundFault(JSONResponseError):
    pass


class InvalidClusterSubnetGroupStateFault(JSONResponseError):
    pass


class ClusterSubnetGroupAlreadyExistsFault(JSONResponseError):
    pass


class NumberOfNodesPerClusterLimitExceededFault(JSONResponseError):
    pass


class ClusterSubnetGroupNotFoundFault(JSONResponseError):
    pass


class ClusterParameterGroupQuotaExceededFault(JSONResponseError):
    pass


class ClusterAlreadyExistsFault(JSONResponseError):
    pass


class InsufficientClusterCapacityFault(JSONResponseError):
    pass


class InvalidClusterSubnetStateFault(JSONResponseError):
    pass


class SubnetAlreadyInUse(JSONResponseError):
    pass


class InvalidParameterCombinationFault(JSONResponseError):
    pass


class AccessToSnapshotDeniedFault(JSONResponseError):
    pass


class UnauthorizedOperationFault(JSONResponseError):
    pass


class SnapshotCopyAlreadyDisabled(JSONResponseError):
    pass


class ClusterNotFound(JSONResponseError):
    pass


class UnknownSnapshotCopyRegion(JSONResponseError):
    pass


class InvalidClusterSubnetState(JSONResponseError):
    pass


class ReservedNodeQuotaExceeded(JSONResponseError):
    pass


class InvalidClusterState(JSONResponseError):
    pass


class HsmClientCertificateQuotaExceeded(JSONResponseError):
    pass


class SubscriptionCategoryNotFound(JSONResponseError):
    pass


class HsmClientCertificateNotFound(JSONResponseError):
    pass


class SubscriptionEventIdNotFound(JSONResponseError):
    pass


class ClusterSecurityGroupAlreadyExists(JSONResponseError):
    pass


class HsmConfigurationAlreadyExists(JSONResponseError):
    pass


class NumberOfNodesQuotaExceeded(JSONResponseError):
    pass


class ReservedNodeOfferingNotFound(JSONResponseError):
    pass


class BucketNotFound(JSONResponseError):
    pass


class InsufficientClusterCapacity(JSONResponseError):
pass class InvalidRestore(JSONResponseError): pass class UnauthorizedOperation(JSONResponseError): pass class ClusterQuotaExceeded(JSONResponseError): pass class InvalidVPCNetworkState(JSONResponseError): pass class ClusterSnapshotNotFound(JSONResponseError): pass class AuthorizationQuotaExceeded(JSONResponseError): pass class InvalidHsmClientCertificateState(JSONResponseError): pass class SNSTopicArnNotFound(JSONResponseError): pass class ResizeNotFound(JSONResponseError): pass class ClusterSubnetGroupNotFound(JSONResponseError): pass class SNSNoAuthorization(JSONResponseError): pass class ClusterSnapshotQuotaExceeded(JSONResponseError): pass class AccessToSnapshotDenied(JSONResponseError): pass class InvalidClusterSecurityGroupState(JSONResponseError): pass class NumberOfNodesPerClusterLimitExceeded(JSONResponseError): pass class ClusterSubnetQuotaExceeded(JSONResponseError): pass class SNSInvalidTopic(JSONResponseError): pass class ClusterSecurityGroupNotFound(JSONResponseError): pass class InvalidElasticIp(JSONResponseError): pass class InvalidClusterParameterGroupState(JSONResponseError): pass class InvalidHsmConfigurationState(JSONResponseError): pass class ClusterAlreadyExists(JSONResponseError): pass class HsmConfigurationQuotaExceeded(JSONResponseError): pass class ClusterSnapshotAlreadyExists(JSONResponseError): pass class SubscriptionSeverityNotFound(JSONResponseError): pass class SourceNotFound(JSONResponseError): pass class ReservedNodeAlreadyExists(JSONResponseError): pass class ClusterSubnetGroupQuotaExceeded(JSONResponseError): pass class ClusterParameterGroupNotFound(JSONResponseError): pass class InvalidS3BucketName(JSONResponseError): pass class InvalidS3KeyPrefix(JSONResponseError): pass class SubscriptionAlreadyExist(JSONResponseError): pass class HsmConfigurationNotFound(JSONResponseError): pass class AuthorizationNotFound(JSONResponseError): pass class ClusterSecurityGroupQuotaExceeded(JSONResponseError): pass class EventSubscriptionQuotaExceeded(JSONResponseError): pass class AuthorizationAlreadyExists(JSONResponseError): pass class InvalidClusterSnapshotState(JSONResponseError): pass class ClusterParameterGroupQuotaExceeded(JSONResponseError): pass class SnapshotCopyDisabled(JSONResponseError): pass class ClusterSubnetGroupAlreadyExists(JSONResponseError): pass class ReservedNodeNotFound(JSONResponseError): pass class HsmClientCertificateAlreadyExists(JSONResponseError): pass class InvalidClusterSubnetGroupState(JSONResponseError): pass class SubscriptionNotFound(JSONResponseError): pass class InsufficientS3BucketPolicy(JSONResponseError): pass class ClusterParameterGroupAlreadyExists(JSONResponseError): pass class UnsupportedOption(JSONResponseError): pass class CopyToRegionDisabled(JSONResponseError): pass class SnapshotCopyAlreadyEnabled(JSONResponseError): pass class IncompatibleOrderableOptions(JSONResponseError): pass boto-2.20.1/boto/redshift/layer1.py000066400000000000000000003751221225267101000170670ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import json
import boto
from boto.connection import AWSQueryConnection
from boto.regioninfo import RegionInfo
from boto.exception import JSONResponseError
from boto.redshift import exceptions


class RedshiftConnection(AWSQueryConnection):
    """
    Amazon Redshift **Overview**

    This is an interface reference for Amazon Redshift. It contains
    documentation for one of the programming or command line interfaces
    you can use to manage Amazon Redshift clusters. Note that Amazon
    Redshift is asynchronous, which means that some interfaces may
    require techniques, such as polling or asynchronous callback
    handlers, to determine when a command has been applied. In this
    reference, the parameter descriptions indicate whether a change is
    applied immediately, on the next instance reboot, or during the next
    maintenance window. For a summary of the Amazon Redshift cluster
    management interfaces, go to
    `Using the Amazon Redshift Management Interfaces`_.

    Amazon Redshift manages all the work of setting up, operating, and
    scaling a data warehouse: provisioning capacity, monitoring and
    backing up the cluster, and applying patches and upgrades to the
    Amazon Redshift engine. You can focus on using your data to acquire
    new insights for your business and customers.

    If you are a first-time user of Amazon Redshift, we recommend that
    you begin by reading the `Amazon Redshift Getting Started Guide`_.

    If you are a database developer, the `Amazon Redshift Database
    Developer Guide`_ explains how to design, build, query, and maintain
    the databases that make up your data warehouse.
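
    Example (an illustrative sketch, not part of the original docs; the
    region name and credentials are placeholders)::

        import boto.redshift
        # connect_to_region is defined in boto.redshift.__init__
        conn = boto.redshift.connect_to_region(
            'us-east-1',
            aws_access_key_id='<access key>',
            aws_secret_access_key='<secret key>')
        response = conn.describe_clusters()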
""" APIVersion = "2012-12-01" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "redshift.us-east-1.amazonaws.com" ResponseError = JSONResponseError _faults = { "SnapshotCopyAlreadyDisabled": exceptions.SnapshotCopyAlreadyDisabled, "ClusterNotFound": exceptions.ClusterNotFound, "UnknownSnapshotCopyRegion": exceptions.UnknownSnapshotCopyRegion, "InvalidClusterSubnetState": exceptions.InvalidClusterSubnetState, "InvalidSubnet": exceptions.InvalidSubnet, "ReservedNodeQuotaExceeded": exceptions.ReservedNodeQuotaExceeded, "InvalidClusterState": exceptions.InvalidClusterState, "HsmClientCertificateQuotaExceeded": exceptions.HsmClientCertificateQuotaExceeded, "SubscriptionCategoryNotFound": exceptions.SubscriptionCategoryNotFound, "HsmClientCertificateNotFound": exceptions.HsmClientCertificateNotFound, "SubscriptionEventIdNotFound": exceptions.SubscriptionEventIdNotFound, "ClusterSecurityGroupAlreadyExists": exceptions.ClusterSecurityGroupAlreadyExists, "HsmConfigurationAlreadyExists": exceptions.HsmConfigurationAlreadyExists, "NumberOfNodesQuotaExceeded": exceptions.NumberOfNodesQuotaExceeded, "ReservedNodeOfferingNotFound": exceptions.ReservedNodeOfferingNotFound, "BucketNotFound": exceptions.BucketNotFound, "InsufficientClusterCapacity": exceptions.InsufficientClusterCapacity, "InvalidRestore": exceptions.InvalidRestore, "UnauthorizedOperation": exceptions.UnauthorizedOperation, "ClusterQuotaExceeded": exceptions.ClusterQuotaExceeded, "InvalidVPCNetworkState": exceptions.InvalidVPCNetworkState, "ClusterSnapshotNotFound": exceptions.ClusterSnapshotNotFound, "AuthorizationQuotaExceeded": exceptions.AuthorizationQuotaExceeded, "InvalidHsmClientCertificateState": exceptions.InvalidHsmClientCertificateState, "SNSTopicArnNotFound": exceptions.SNSTopicArnNotFound, "ResizeNotFound": exceptions.ResizeNotFound, "ClusterSubnetGroupNotFound": exceptions.ClusterSubnetGroupNotFound, "SNSNoAuthorization": exceptions.SNSNoAuthorization, "ClusterSnapshotQuotaExceeded": exceptions.ClusterSnapshotQuotaExceeded, "AccessToSnapshotDenied": exceptions.AccessToSnapshotDenied, "InvalidClusterSecurityGroupState": exceptions.InvalidClusterSecurityGroupState, "NumberOfNodesPerClusterLimitExceeded": exceptions.NumberOfNodesPerClusterLimitExceeded, "ClusterSubnetQuotaExceeded": exceptions.ClusterSubnetQuotaExceeded, "SNSInvalidTopic": exceptions.SNSInvalidTopic, "ClusterSecurityGroupNotFound": exceptions.ClusterSecurityGroupNotFound, "InvalidElasticIp": exceptions.InvalidElasticIp, "InvalidClusterParameterGroupState": exceptions.InvalidClusterParameterGroupState, "InvalidHsmConfigurationState": exceptions.InvalidHsmConfigurationState, "ClusterAlreadyExists": exceptions.ClusterAlreadyExists, "HsmConfigurationQuotaExceeded": exceptions.HsmConfigurationQuotaExceeded, "ClusterSnapshotAlreadyExists": exceptions.ClusterSnapshotAlreadyExists, "SubscriptionSeverityNotFound": exceptions.SubscriptionSeverityNotFound, "SourceNotFound": exceptions.SourceNotFound, "ReservedNodeAlreadyExists": exceptions.ReservedNodeAlreadyExists, "ClusterSubnetGroupQuotaExceeded": exceptions.ClusterSubnetGroupQuotaExceeded, "ClusterParameterGroupNotFound": exceptions.ClusterParameterGroupNotFound, "InvalidS3BucketName": exceptions.InvalidS3BucketName, "InvalidS3KeyPrefix": exceptions.InvalidS3KeyPrefix, "SubscriptionAlreadyExist": exceptions.SubscriptionAlreadyExist, "HsmConfigurationNotFound": exceptions.HsmConfigurationNotFound, "AuthorizationNotFound": exceptions.AuthorizationNotFound, "ClusterSecurityGroupQuotaExceeded": 
            exceptions.ClusterSecurityGroupQuotaExceeded,
        "SubnetAlreadyInUse": exceptions.SubnetAlreadyInUse,
        "EventSubscriptionQuotaExceeded":
            exceptions.EventSubscriptionQuotaExceeded,
        "AuthorizationAlreadyExists": exceptions.AuthorizationAlreadyExists,
        "InvalidClusterSnapshotState":
            exceptions.InvalidClusterSnapshotState,
        "ClusterParameterGroupQuotaExceeded":
            exceptions.ClusterParameterGroupQuotaExceeded,
        "SnapshotCopyDisabled": exceptions.SnapshotCopyDisabled,
        "ClusterSubnetGroupAlreadyExists":
            exceptions.ClusterSubnetGroupAlreadyExists,
        "ReservedNodeNotFound": exceptions.ReservedNodeNotFound,
        "HsmClientCertificateAlreadyExists":
            exceptions.HsmClientCertificateAlreadyExists,
        "InvalidClusterSubnetGroupState":
            exceptions.InvalidClusterSubnetGroupState,
        "SubscriptionNotFound": exceptions.SubscriptionNotFound,
        "InsufficientS3BucketPolicy": exceptions.InsufficientS3BucketPolicy,
        "ClusterParameterGroupAlreadyExists":
            exceptions.ClusterParameterGroupAlreadyExists,
        "UnsupportedOption": exceptions.UnsupportedOption,
        "CopyToRegionDisabled": exceptions.CopyToRegionDisabled,
        "SnapshotCopyAlreadyEnabled": exceptions.SnapshotCopyAlreadyEnabled,
        "IncompatibleOrderableOptions":
            exceptions.IncompatibleOrderableOptions,
    }

    def __init__(self, **kwargs):
        region = kwargs.pop('region', None)
        if not region:
            region = RegionInfo(self, self.DefaultRegionName,
                                self.DefaultRegionEndpoint)
        if 'host' not in kwargs:
            kwargs['host'] = region.endpoint
        AWSQueryConnection.__init__(self, **kwargs)
        self.region = region

    def _required_auth_capability(self):
        return ['hmac-v4']

    def authorize_cluster_security_group_ingress(
            self, cluster_security_group_name, cidrip=None,
            ec2_security_group_name=None,
            ec2_security_group_owner_id=None):
        """
        Adds an inbound (ingress) rule to an Amazon Redshift security
        group. Depending on whether the application accessing your
        cluster is running on the Internet or an EC2 instance, you can
        authorize inbound access to either a Classless Interdomain
        Routing (CIDR) IP address range or an EC2 security group. You can
        add as many as 20 ingress rules to an Amazon Redshift security
        group.

        The EC2 security group must be defined in the AWS region where
        the cluster resides. For an overview of CIDR blocks, see the
        Wikipedia article on `Classless Inter-Domain Routing`_.

        You must also associate the security group with a cluster so that
        clients running on these IP addresses or the EC2 instance are
        authorized to connect to the cluster. For information about
        managing security groups, go to `Working with Security Groups`_
        in the Amazon Redshift Management Guide.

        :type cluster_security_group_name: string
        :param cluster_security_group_name: The name of the security group
            to which the ingress rule is added.

        :type cidrip: string
        :param cidrip: The IP range to be added to the Amazon Redshift
            security group.

        :type ec2_security_group_name: string
        :param ec2_security_group_name: The EC2 security group to be added
            to the Amazon Redshift security group.

        :type ec2_security_group_owner_id: string
        :param ec2_security_group_owner_id: The AWS account number of the
            owner of the security group specified by the
            EC2SecurityGroupName parameter. The AWS Access Key ID is not
            an acceptable value.
Example: `111122223333` """ params = { 'ClusterSecurityGroupName': cluster_security_group_name, } if cidrip is not None: params['CIDRIP'] = cidrip if ec2_security_group_name is not None: params['EC2SecurityGroupName'] = ec2_security_group_name if ec2_security_group_owner_id is not None: params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id return self._make_request( action='AuthorizeClusterSecurityGroupIngress', verb='POST', path='/', params=params) def authorize_snapshot_access(self, snapshot_identifier, account_with_restore_access, snapshot_cluster_identifier=None): """ Authorizes the specified AWS customer account to restore the specified snapshot. For more information about working with snapshots, go to `Amazon Redshift Snapshots`_ in the Amazon Redshift Management Guide . :type snapshot_identifier: string :param snapshot_identifier: The identifier of the snapshot the account is authorized to restore. :type snapshot_cluster_identifier: string :param snapshot_cluster_identifier: The identifier of the cluster the snapshot was created from. This parameter is required if your IAM user has a policy containing a snapshot resource element that specifies anything other than * for the cluster name. :type account_with_restore_access: string :param account_with_restore_access: The identifier of the AWS customer account authorized to restore the specified snapshot. """ params = { 'SnapshotIdentifier': snapshot_identifier, 'AccountWithRestoreAccess': account_with_restore_access, } if snapshot_cluster_identifier is not None: params['SnapshotClusterIdentifier'] = snapshot_cluster_identifier return self._make_request( action='AuthorizeSnapshotAccess', verb='POST', path='/', params=params) def copy_cluster_snapshot(self, source_snapshot_identifier, target_snapshot_identifier, source_snapshot_cluster_identifier=None): """ Copies the specified automated cluster snapshot to a new manual cluster snapshot. The source must be an automated snapshot and it must be in the available state. When you delete a cluster, Amazon Redshift deletes any automated snapshots of the cluster. Also, when the retention period of the snapshot expires, Amazon Redshift automatically deletes it. If you want to keep an automated snapshot for a longer period, you can make a manual copy of the snapshot. Manual snapshots are retained until you delete them. For more information about working with snapshots, go to `Amazon Redshift Snapshots`_ in the Amazon Redshift Management Guide . :type source_snapshot_identifier: string :param source_snapshot_identifier: The identifier for the source snapshot. Constraints: + Must be the identifier for a valid automated snapshot whose state is "available". :type source_snapshot_cluster_identifier: string :param source_snapshot_cluster_identifier: The identifier of the cluster the source snapshot was created from. This parameter is required if your IAM user has a policy containing a snapshot resource element that specifies anything other than * for the cluster name. Constraints: + Must be the identifier for a valid cluster. :type target_snapshot_identifier: string :param target_snapshot_identifier: The identifier given to the new manual snapshot. Constraints: + Cannot be null, empty, or blank. + Must contain from 1 to 255 alphanumeric characters or hyphens. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. + Must be unique for the AWS account that is making the request. 
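
        Example (an illustrative sketch; the snapshot identifiers below
        are placeholders, not real resources)::

            conn = boto.redshift.connect_to_region('us-east-1')
            # copy an automated snapshot to a manual one we control
            conn.copy_cluster_snapshot(
                source_snapshot_identifier='rs:my-cluster-2013-12-01',
                target_snapshot_identifier='my-manual-copy')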
""" params = { 'SourceSnapshotIdentifier': source_snapshot_identifier, 'TargetSnapshotIdentifier': target_snapshot_identifier, } if source_snapshot_cluster_identifier is not None: params['SourceSnapshotClusterIdentifier'] = source_snapshot_cluster_identifier return self._make_request( action='CopyClusterSnapshot', verb='POST', path='/', params=params) def create_cluster(self, cluster_identifier, node_type, master_username, master_user_password, db_name=None, cluster_type=None, cluster_security_groups=None, vpc_security_group_ids=None, cluster_subnet_group_name=None, availability_zone=None, preferred_maintenance_window=None, cluster_parameter_group_name=None, automated_snapshot_retention_period=None, port=None, cluster_version=None, allow_version_upgrade=None, number_of_nodes=None, publicly_accessible=None, encrypted=None, hsm_client_certificate_identifier=None, hsm_configuration_identifier=None, elastic_ip=None): """ Creates a new cluster. To create the cluster in virtual private cloud (VPC), you must provide cluster subnet group name. If you don't provide a cluster subnet group name or the cluster security group parameter, Amazon Redshift creates a non-VPC cluster, it associates the default cluster security group with the cluster. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide . :type db_name: string :param db_name: The name of the first database to be created when the cluster is created. To create additional databases after the cluster is created, connect to the cluster with a SQL client and use SQL commands to create a database. For more information, go to `Create a Database`_ in the Amazon Redshift Database Developer Guide. Default: `dev` Constraints: + Must contain 1 to 64 alphanumeric characters. + Must contain only lowercase letters. + Cannot be a word that is reserved by the service. A list of reserved words can be found in `Reserved Words`_ in the Amazon Redshift Database Developer Guide. :type cluster_identifier: string :param cluster_identifier: A unique identifier for the cluster. You use this identifier to refer to the cluster for any subsequent cluster operations such as deleting or modifying. The identifier also appears in the Amazon Redshift console. Constraints: + Must contain from 1 to 63 alphanumeric characters or hyphens. + Alphabetic characters must be lowercase. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. + Must be unique for all clusters within an AWS account. Example: `myexamplecluster` :type cluster_type: string :param cluster_type: The type of the cluster. When cluster type is specified as + `single-node`, the **NumberOfNodes** parameter is not required. + `multi-node`, the **NumberOfNodes** parameter is required. Valid Values: `multi-node` | `single-node` Default: `multi-node` :type node_type: string :param node_type: The node type to be provisioned for the cluster. For information about node types, go to ` Working with Clusters`_ in the Amazon Redshift Management Guide . Valid Values: `dw.hs1.xlarge` | `dw.hs1.8xlarge`. :type master_username: string :param master_username: The user name associated with the master user account for the cluster that is being created. Constraints: + Must be 1 - 128 alphanumeric characters. + First character must be a letter. + Cannot be a reserved word. A list of reserved words can be found in `Reserved Words`_ in the Amazon Redshift Database Developer Guide. 
:type master_user_password: string :param master_user_password: The password associated with the master user account for the cluster that is being created. Constraints: + Must be between 8 and 64 characters in length. + Must contain at least one uppercase letter. + Must contain at least one lowercase letter. + Must contain one number. + Can be any printable ASCII character (ASCII code 33 to 126) except ' (single quote), " (double quote), \, /, @, or space. :type cluster_security_groups: list :param cluster_security_groups: A list of security groups to be associated with this cluster. Default: The default cluster security group for Amazon Redshift. :type vpc_security_group_ids: list :param vpc_security_group_ids: A list of Virtual Private Cloud (VPC) security groups to be associated with the cluster. Default: The default VPC security group is associated with the cluster. :type cluster_subnet_group_name: string :param cluster_subnet_group_name: The name of a cluster subnet group to be associated with this cluster. If this parameter is not provided the resulting cluster will be deployed outside virtual private cloud (VPC). :type availability_zone: string :param availability_zone: The EC2 Availability Zone (AZ) in which you want Amazon Redshift to provision the cluster. For example, if you have several EC2 instances running in a specific Availability Zone, then you might want the cluster to be provisioned in the same zone in order to decrease network latency. Default: A random, system-chosen Availability Zone in the region that is specified by the endpoint. Example: `us-east-1d` Constraint: The specified Availability Zone must be in the same region as the current endpoint. :type preferred_maintenance_window: string :param preferred_maintenance_window: The weekly time range (in UTC) during which automated cluster maintenance can occur. Format: `ddd:hh24:mi-ddd:hh24:mi` Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. The following list shows the time blocks for each region from which the default maintenance windows are assigned. + **US-East (Northern Virginia) Region:** 03:00-11:00 UTC + **US-West (Oregon) Region** 06:00-14:00 UTC Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun Constraints: Minimum 30-minute window. :type cluster_parameter_group_name: string :param cluster_parameter_group_name: The name of the parameter group to be associated with this cluster. Default: The default Amazon Redshift cluster parameter group. For information about the default parameter group, go to `Working with Amazon Redshift Parameter Groups`_ Constraints: + Must be 1 to 255 alphanumeric characters or hyphens. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. :type automated_snapshot_retention_period: integer :param automated_snapshot_retention_period: The number of days that automated snapshots are retained. If the value is 0, automated snapshots are disabled. Even if automated snapshots are disabled, you can still create manual snapshots when you want with CreateClusterSnapshot. Default: `1` Constraints: Must be a value from 0 to 35. :type port: integer :param port: The port number on which the cluster accepts incoming connections. The cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections. 
Default: `5439` Valid Values: `1150-65535` :type cluster_version: string :param cluster_version: The version of the Amazon Redshift engine software that you want to deploy on the cluster. The version selected runs on all the nodes in the cluster. Constraints: Only version 1.0 is currently available. Example: `1.0` :type allow_version_upgrade: boolean :param allow_version_upgrade: If `True`, upgrades can be applied during the maintenance window to the Amazon Redshift engine that is running on the cluster. When a new version of the Amazon Redshift engine is released, you can request that the service automatically apply upgrades during the maintenance window to the Amazon Redshift engine that is running on your cluster. Default: `True` :type number_of_nodes: integer :param number_of_nodes: The number of compute nodes in the cluster. This parameter is required when the **ClusterType** parameter is specified as `multi-node`. For information about determining how many nodes you need, go to ` Working with Clusters`_ in the Amazon Redshift Management Guide . If you don't specify this parameter, you get a single-node cluster. When requesting a multi-node cluster, you must specify the number of nodes that you want in the cluster. Default: `1` Constraints: Value must be at least 1 and no more than 100. :type publicly_accessible: boolean :param publicly_accessible: If `True`, the cluster can be accessed from a public network. :type encrypted: boolean :param encrypted: If `True`, the data in cluster is encrypted at rest. Default: false :type hsm_client_certificate_identifier: string :param hsm_client_certificate_identifier: Specifies the name of the HSM client certificate the Amazon Redshift cluster uses to retrieve the data encryption keys stored in an HSM. :type hsm_configuration_identifier: string :param hsm_configuration_identifier: Specifies the name of the HSM configuration that contains the information the Amazon Redshift cluster can use to retrieve and store keys in an HSM. :type elastic_ip: string :param elastic_ip: The Elastic IP (EIP) address for the cluster. Constraints: The cluster must be provisioned in EC2-VPC and publicly- accessible through an Internet gateway. For more information about provisioning clusters in EC2-VPC, go to `Supported Platforms to Launch Your Cluster`_ in the Amazon Redshift Management Guide. 
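
        Example (an illustrative sketch; the identifiers and password are
        placeholders)::

            conn = boto.redshift.connect_to_region('us-east-1')
            # a single-node cluster needs no NumberOfNodes value
            conn.create_cluster('my-cluster', 'dw.hs1.xlarge',
                                'master', 'TopSecret99',
                                cluster_type='single-node')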
""" params = { 'ClusterIdentifier': cluster_identifier, 'NodeType': node_type, 'MasterUsername': master_username, 'MasterUserPassword': master_user_password, } if db_name is not None: params['DBName'] = db_name if cluster_type is not None: params['ClusterType'] = cluster_type if cluster_security_groups is not None: self.build_list_params(params, cluster_security_groups, 'ClusterSecurityGroups.member') if vpc_security_group_ids is not None: self.build_list_params(params, vpc_security_group_ids, 'VpcSecurityGroupIds.member') if cluster_subnet_group_name is not None: params['ClusterSubnetGroupName'] = cluster_subnet_group_name if availability_zone is not None: params['AvailabilityZone'] = availability_zone if preferred_maintenance_window is not None: params['PreferredMaintenanceWindow'] = preferred_maintenance_window if cluster_parameter_group_name is not None: params['ClusterParameterGroupName'] = cluster_parameter_group_name if automated_snapshot_retention_period is not None: params['AutomatedSnapshotRetentionPeriod'] = automated_snapshot_retention_period if port is not None: params['Port'] = port if cluster_version is not None: params['ClusterVersion'] = cluster_version if allow_version_upgrade is not None: params['AllowVersionUpgrade'] = str( allow_version_upgrade).lower() if number_of_nodes is not None: params['NumberOfNodes'] = number_of_nodes if publicly_accessible is not None: params['PubliclyAccessible'] = str( publicly_accessible).lower() if encrypted is not None: params['Encrypted'] = str( encrypted).lower() if hsm_client_certificate_identifier is not None: params['HsmClientCertificateIdentifier'] = hsm_client_certificate_identifier if hsm_configuration_identifier is not None: params['HsmConfigurationIdentifier'] = hsm_configuration_identifier if elastic_ip is not None: params['ElasticIp'] = elastic_ip return self._make_request( action='CreateCluster', verb='POST', path='/', params=params) def create_cluster_parameter_group(self, parameter_group_name, parameter_group_family, description): """ Creates an Amazon Redshift parameter group. Creating parameter groups is independent of creating clusters. You can associate a cluster with a parameter group when you create the cluster. You can also associate an existing cluster with a parameter group after the cluster is created by using ModifyCluster. Parameters in the parameter group define specific behavior that applies to the databases you create on the cluster. For more information about managing parameter groups, go to `Amazon Redshift Parameter Groups`_ in the Amazon Redshift Management Guide . :type parameter_group_name: string :param parameter_group_name: The name of the cluster parameter group. Constraints: + Must be 1 to 255 alphanumeric characters or hyphens + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. + Must be unique withing your AWS account. This value is stored as a lower-case string. :type parameter_group_family: string :param parameter_group_family: The Amazon Redshift engine version to which the cluster parameter group applies. The cluster engine version determines the set of parameters. To get a list of valid parameter group family names, you can call DescribeClusterParameterGroups. By default, Amazon Redshift returns a list of all the parameter groups that are owned by your AWS account, including the default parameter groups for each Amazon Redshift engine version. 
        The parameter group family names associated with the default
            parameter groups provide you the valid values. For example, a
            valid family name is "redshift-1.0".

        :type description: string
        :param description: A description of the parameter group.
        """
        params = {
            'ParameterGroupName': parameter_group_name,
            'ParameterGroupFamily': parameter_group_family,
            'Description': description,
        }
        return self._make_request(
            action='CreateClusterParameterGroup',
            verb='POST',
            path='/', params=params)

    def create_cluster_security_group(self, cluster_security_group_name,
                                      description):
        """
        Creates a new Amazon Redshift security group. You use security
        groups to control access to non-VPC clusters.

        For information about managing security groups, go to `Amazon
        Redshift Cluster Security Groups`_ in the Amazon Redshift
        Management Guide.

        :type cluster_security_group_name: string
        :param cluster_security_group_name: The name for the security
            group. Amazon Redshift stores the value as a lowercase string.
        Constraints:

        + Must contain no more than 255 alphanumeric characters or hyphens.
        + Must not be "Default".
        + Must be unique for all security groups that are created by your
          AWS account.

        Example: `examplesecuritygroup`

        :type description: string
        :param description: A description for the security group.
        """
        params = {
            'ClusterSecurityGroupName': cluster_security_group_name,
            'Description': description,
        }
        return self._make_request(
            action='CreateClusterSecurityGroup',
            verb='POST',
            path='/', params=params)

    def create_cluster_snapshot(self, snapshot_identifier,
                                cluster_identifier):
        """
        Creates a manual snapshot of the specified cluster. The cluster
        must be in the "available" state.

        For more information about working with snapshots, go to `Amazon
        Redshift Snapshots`_ in the Amazon Redshift Management Guide.

        :type snapshot_identifier: string
        :param snapshot_identifier: A unique identifier for the snapshot
            that you are requesting. This identifier must be unique for
            all snapshots within the AWS account.
        Constraints:

        + Cannot be null, empty, or blank
        + Must contain from 1 to 255 alphanumeric characters or hyphens
        + First character must be a letter
        + Cannot end with a hyphen or contain two consecutive hyphens

        Example: `my-snapshot-id`

        :type cluster_identifier: string
        :param cluster_identifier: The cluster identifier for which you
            want a snapshot.
        """
        params = {
            'SnapshotIdentifier': snapshot_identifier,
            'ClusterIdentifier': cluster_identifier,
        }
        return self._make_request(
            action='CreateClusterSnapshot',
            verb='POST',
            path='/', params=params)

    def create_cluster_subnet_group(self, cluster_subnet_group_name,
                                    description, subnet_ids):
        """
        Creates a new Amazon Redshift subnet group. You must provide a
        list of one or more subnets in your existing Amazon Virtual
        Private Cloud (Amazon VPC) when creating an Amazon Redshift
        subnet group.

        For information about subnet groups, go to `Amazon Redshift
        Cluster Subnet Groups`_ in the Amazon Redshift Management Guide.

        :type cluster_subnet_group_name: string
        :param cluster_subnet_group_name: The name for the subnet group.
            Amazon Redshift stores the value as a lowercase string.
        Constraints:

        + Must contain no more than 255 alphanumeric characters or hyphens.
        + Must not be "Default".
        + Must be unique for all subnet groups that are created by your
          AWS account.

        Example: `examplesubnetgroup`

        :type description: string
        :param description: A description for the subnet group.

        :type subnet_ids: list
        :param subnet_ids: An array of VPC subnet IDs. A maximum of 20
            subnets can be modified in a single request.
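
        Example (an illustrative sketch; the subnet ID is a placeholder)::

            conn = boto.redshift.connect_to_region('us-east-1')
            conn.create_cluster_subnet_group(
                'examplesubnetgroup',
                'Subnets for my Redshift clusters',
                ['subnet-12345678'])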
""" params = { 'ClusterSubnetGroupName': cluster_subnet_group_name, 'Description': description, } self.build_list_params(params, subnet_ids, 'SubnetIds.member') return self._make_request( action='CreateClusterSubnetGroup', verb='POST', path='/', params=params) def create_event_subscription(self, subscription_name, sns_topic_arn, source_type=None, source_ids=None, event_categories=None, severity=None, enabled=None): """ Creates an Amazon Redshift event notification subscription. This action requires an ARN (Amazon Resource Name) of an Amazon SNS topic created by either the Amazon Redshift console, the Amazon SNS console, or the Amazon SNS API. To obtain an ARN with Amazon SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the SNS console. You can specify the source type, and lists of Amazon Redshift source IDs, event categories, and event severities. Notifications will be sent for all events you want that match those criteria. For example, you can specify source type = cluster, source ID = my-cluster-1 and mycluster2, event categories = Availability, Backup, and severity = ERROR. The subsription will only send notifications for those ERROR events in the Availability and Backup categores for the specified clusters. If you specify both the source type and source IDs, such as source type = cluster and source identifier = my-cluster-1, notifiactions will be sent for all the cluster events for my- cluster-1. If you specify a source type but do not specify a source identifier, you will receive notice of the events for the objects of that type in your AWS account. If you do not specify either the SourceType nor the SourceIdentifier, you will be notified of events generated from all Amazon Redshift sources belonging to your AWS account. You must specify a source type if you specify a source ID. :type subscription_name: string :param subscription_name: The name of the event subscription to be created. Constraints: + Cannot be null, empty, or blank. + Must contain from 1 to 255 alphanumeric characters or hyphens. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. :type sns_topic_arn: string :param sns_topic_arn: The Amazon Resource Name (ARN) of the Amazon SNS topic used to transmit the event notifications. The ARN is created by Amazon SNS when you create a topic and subscribe to it. :type source_type: string :param source_type: The type of source that will be generating the events. For example, if you want to be notified of events generated by a cluster, you would set this parameter to cluster. If this value is not specified, events are returned for all Amazon Redshift objects in your AWS account. You must specify a source type in order to specify source IDs. Valid values: cluster, cluster-parameter-group, cluster-security-group, and cluster-snapshot. :type source_ids: list :param source_ids: A list of one or more identifiers of Amazon Redshift source objects. All of the objects must be of the same type as was specified in the source type parameter. The event subscription will return only events generated by the specified objects. If not specified, then events are returned for all objects within the source type specified. Example: my-cluster-1, my-cluster-2 Example: my-snapshot-20131010 :type event_categories: list :param event_categories: Specifies the Amazon Redshift event categories to be published by the event notification subscription. 
        Values: Configuration, Management, Monitoring, Security

        :type severity: string
        :param severity: Specifies the Amazon Redshift event severity to be
            published by the event notification subscription.
        Values: ERROR, INFO

        :type enabled: boolean
        :param enabled: A Boolean value; set to `True` to activate the
            subscription, set to `False` to create the subscription but
            not activate it.
        """
        params = {
            'SubscriptionName': subscription_name,
            'SnsTopicArn': sns_topic_arn,
        }
        if source_type is not None:
            params['SourceType'] = source_type
        if source_ids is not None:
            self.build_list_params(params, source_ids, 'SourceIds.member')
        if event_categories is not None:
            self.build_list_params(params,
                                   event_categories,
                                   'EventCategories.member')
        if severity is not None:
            params['Severity'] = severity
        if enabled is not None:
            params['Enabled'] = str(enabled).lower()
        return self._make_request(
            action='CreateEventSubscription',
            verb='POST',
            path='/', params=params)

    def create_hsm_client_certificate(self,
                                      hsm_client_certificate_identifier):
        """
        Creates an HSM client certificate that an Amazon Redshift cluster
        will use to connect to the client's HSM in order to store and
        retrieve the keys used to encrypt the cluster databases.

        The command returns a public key, which you must store in the
        HSM. After creating the HSM certificate, you must create an
        Amazon Redshift HSM configuration that provides a cluster the
        information needed to store and retrieve database encryption keys
        in the HSM. For more information, go to aLinkToHSMTopic in the
        Amazon Redshift Management Guide.

        :type hsm_client_certificate_identifier: string
        :param hsm_client_certificate_identifier: The identifier to be
            assigned to the new HSM client certificate that the cluster
            will use to connect to the HSM to retrieve the database
            encryption keys.
        """
        params = {
            'HsmClientCertificateIdentifier': hsm_client_certificate_identifier,
        }
        return self._make_request(
            action='CreateHsmClientCertificate',
            verb='POST',
            path='/', params=params)

    def create_hsm_configuration(self, hsm_configuration_identifier,
                                 description, hsm_ip_address,
                                 hsm_partition_name, hsm_partition_password,
                                 hsm_server_public_certificate):
        """
        Creates an HSM configuration that contains the information
        required by an Amazon Redshift cluster to store and retrieve
        database encryption keys in a Hardware Security Module (HSM).
        After creating the HSM configuration, you can specify it as a
        parameter when creating a cluster. The cluster will then store
        its encryption keys in the HSM.

        Before creating an HSM configuration, you must have first created
        an HSM client certificate. For more information, go to
        aLinkToHSMTopic in the Amazon Redshift Management Guide.

        :type hsm_configuration_identifier: string
        :param hsm_configuration_identifier: The identifier to be assigned
            to the new Amazon Redshift HSM configuration.

        :type description: string
        :param description: A text description of the HSM configuration to
            be created.

        :type hsm_ip_address: string
        :param hsm_ip_address: The IP address that the Amazon Redshift
            cluster must use to access the HSM.

        :type hsm_partition_name: string
        :param hsm_partition_name: The name of the partition in the HSM
            where the Amazon Redshift clusters will store their database
            encryption keys.

        :type hsm_partition_password: string
        :param hsm_partition_password: The password required to access the
            HSM partition.
:type hsm_server_public_certificate: string :param hsm_server_public_certificate: The public key used to access the HSM client certificate, which was created by calling the Amazon Redshift create HSM certificate command. """ params = { 'HsmConfigurationIdentifier': hsm_configuration_identifier, 'Description': description, 'HsmIpAddress': hsm_ip_address, 'HsmPartitionName': hsm_partition_name, 'HsmPartitionPassword': hsm_partition_password, 'HsmServerPublicCertificate': hsm_server_public_certificate, } return self._make_request( action='CreateHsmConfiguration', verb='POST', path='/', params=params) def delete_cluster(self, cluster_identifier, skip_final_cluster_snapshot=None, final_cluster_snapshot_identifier=None): """ Deletes a previously provisioned cluster. A successful response from the web service indicates that the request was received correctly. If a final cluster snapshot is requested the status of the cluster will be "final-snapshot" while the snapshot is being taken, then it's "deleting" once Amazon Redshift begins deleting the cluster. Use DescribeClusters to monitor the status of the deletion. The delete operation cannot be canceled or reverted once submitted. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide . :type cluster_identifier: string :param cluster_identifier: The identifier of the cluster to be deleted. Constraints: + Must contain lowercase characters. + Must contain from 1 to 63 alphanumeric characters or hyphens. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. :type skip_final_cluster_snapshot: boolean :param skip_final_cluster_snapshot: Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If `True`, a final cluster snapshot is not created. If `False`, a final cluster snapshot is created before the cluster is deleted. The FinalClusterSnapshotIdentifier parameter must be specified if SkipFinalClusterSnapshot is `False`. Default: `False` :type final_cluster_snapshot_identifier: string :param final_cluster_snapshot_identifier: The identifier of the final snapshot that is to be created immediately before deleting the cluster. If this parameter is provided, SkipFinalClusterSnapshot must be `False`. Constraints: + Must be 1 to 255 alphanumeric characters. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. """ params = {'ClusterIdentifier': cluster_identifier, } if skip_final_cluster_snapshot is not None: params['SkipFinalClusterSnapshot'] = str( skip_final_cluster_snapshot).lower() if final_cluster_snapshot_identifier is not None: params['FinalClusterSnapshotIdentifier'] = final_cluster_snapshot_identifier return self._make_request( action='DeleteCluster', verb='POST', path='/', params=params) def delete_cluster_parameter_group(self, parameter_group_name): """ Deletes a specified Amazon Redshift parameter group. You cannot delete a parameter group if it is associated with a cluster. :type parameter_group_name: string :param parameter_group_name: The name of the parameter group to be deleted. Constraints: + Must be the name of an existing cluster parameter group. + Cannot delete a default cluster parameter group. 
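
        Example (an illustrative sketch; the group name is a placeholder
        and must not be a default parameter group)::

            conn = boto.redshift.connect_to_region('us-east-1')
            conn.delete_cluster_parameter_group('myparametergroup')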
""" params = {'ParameterGroupName': parameter_group_name, } return self._make_request( action='DeleteClusterParameterGroup', verb='POST', path='/', params=params) def delete_cluster_security_group(self, cluster_security_group_name): """ Deletes an Amazon Redshift security group. You cannot delete a security group that is associated with any clusters. You cannot delete the default security group. For information about managing security groups, go to`Amazon Redshift Cluster Security Groups`_ in the Amazon Redshift Management Guide . :type cluster_security_group_name: string :param cluster_security_group_name: The name of the cluster security group to be deleted. """ params = { 'ClusterSecurityGroupName': cluster_security_group_name, } return self._make_request( action='DeleteClusterSecurityGroup', verb='POST', path='/', params=params) def delete_cluster_snapshot(self, snapshot_identifier, snapshot_cluster_identifier=None): """ Deletes the specified manual snapshot. The snapshot must be in the "available" state, with no other users authorized to access the snapshot. Unlike automated snapshots, manual snapshots are retained even after you delete your cluster. Amazon Redshift does not delete your manual snapshots. You must delete manual snapshot explicitly to avoid getting charged. If other accounts are authorized to access the snapshot, you must revoke all of the authorizations before you can delete the snapshot. :type snapshot_identifier: string :param snapshot_identifier: The unique identifier of the manual snapshot to be deleted. Constraints: Must be the name of an existing snapshot that is in the `available` state. :type snapshot_cluster_identifier: string :param snapshot_cluster_identifier: The unique identifier of the cluster the snapshot was created from. This parameter is required if your IAM user has a policy containing a snapshot resource element that specifies anything other than * for the cluster name. Constraints: Must be the name of valid cluster. """ params = {'SnapshotIdentifier': snapshot_identifier, } if snapshot_cluster_identifier is not None: params['SnapshotClusterIdentifier'] = snapshot_cluster_identifier return self._make_request( action='DeleteClusterSnapshot', verb='POST', path='/', params=params) def delete_cluster_subnet_group(self, cluster_subnet_group_name): """ Deletes the specified cluster subnet group. :type cluster_subnet_group_name: string :param cluster_subnet_group_name: The name of the cluster subnet group name to be deleted. """ params = { 'ClusterSubnetGroupName': cluster_subnet_group_name, } return self._make_request( action='DeleteClusterSubnetGroup', verb='POST', path='/', params=params) def delete_event_subscription(self, subscription_name): """ Deletes an Amazon Redshift event notification subscription. :type subscription_name: string :param subscription_name: The name of the Amazon Redshift event notification subscription to be deleted. """ params = {'SubscriptionName': subscription_name, } return self._make_request( action='DeleteEventSubscription', verb='POST', path='/', params=params) def delete_hsm_client_certificate(self, hsm_client_certificate_identifier): """ Deletes the specified HSM client certificate. :type hsm_client_certificate_identifier: string :param hsm_client_certificate_identifier: The identifier of the HSM client certificate to be deleted. 
""" params = { 'HsmClientCertificateIdentifier': hsm_client_certificate_identifier, } return self._make_request( action='DeleteHsmClientCertificate', verb='POST', path='/', params=params) def delete_hsm_configuration(self, hsm_configuration_identifier): """ Deletes the specified Amazon Redshift HSM configuration. :type hsm_configuration_identifier: string :param hsm_configuration_identifier: The identifier of the Amazon Redshift HSM configuration to be deleted. """ params = { 'HsmConfigurationIdentifier': hsm_configuration_identifier, } return self._make_request( action='DeleteHsmConfiguration', verb='POST', path='/', params=params) def describe_cluster_parameter_groups(self, parameter_group_name=None, max_records=None, marker=None): """ Returns a list of Amazon Redshift parameter groups, including parameter groups you created and the default parameter group. For each parameter group, the response includes the parameter group name, description, and parameter group family name. You can optionally specify a name to retrieve the description of a specific parameter group. For more information about managing parameter groups, go to `Amazon Redshift Parameter Groups`_ in the Amazon Redshift Management Guide . :type parameter_group_name: string :param parameter_group_name: The name of a specific parameter group for which to return details. By default, details about all parameter groups and the default parameter group are returned. :type max_records: integer :param max_records: The maximum number of parameter group records to include in the response. If more records exist than the specified `MaxRecords` value, the response includes a marker that you can use in a subsequent DescribeClusterParameterGroups request to retrieve the next set of records. Default: `100` Constraints: Value must be at least 20 and no more than 100. :type marker: string :param marker: An optional marker returned by a previous DescribeClusterParameterGroups request to indicate the first parameter group that the current request will return. """ params = {} if parameter_group_name is not None: params['ParameterGroupName'] = parameter_group_name if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeClusterParameterGroups', verb='POST', path='/', params=params) def describe_cluster_parameters(self, parameter_group_name, source=None, max_records=None, marker=None): """ Returns a detailed list of parameters contained within the specified Amazon Redshift parameter group. For each parameter the response includes information such as parameter name, description, data type, value, whether the parameter value is modifiable, and so on. You can specify source filter to retrieve parameters of only specific type. For example, to retrieve parameters that were modified by a user action such as from ModifyClusterParameterGroup, you can specify source equal to user . For more information about managing parameter groups, go to `Amazon Redshift Parameter Groups`_ in the Amazon Redshift Management Guide . :type parameter_group_name: string :param parameter_group_name: The name of a cluster parameter group for which to return details. :type source: string :param source: The parameter types to return. Specify `user` to show parameters that are different form the default. Similarly, specify `engine-default` to show parameters that are the same as the default parameter group. Default: All parameter types returned. 
        Valid Values: `user` | `engine-default`

        :type max_records: integer
        :param max_records: The maximum number of records to include in the
            response. If more records exist than the specified
            `MaxRecords` value, the response includes a marker that you
            can specify in your subsequent request to retrieve the
            remaining results.
        Default: `100`

        Constraints: Value must be at least 20 and no more than 100.

        :type marker: string
        :param marker: An optional marker returned from a previous
            **DescribeClusterParameters** request. If this parameter is
            specified, the response includes only records beyond the
            specified marker, up to the value specified by `MaxRecords`.
        """
        params = {'ParameterGroupName': parameter_group_name, }
        if source is not None:
            params['Source'] = source
        if max_records is not None:
            params['MaxRecords'] = max_records
        if marker is not None:
            params['Marker'] = marker
        return self._make_request(
            action='DescribeClusterParameters',
            verb='POST',
            path='/', params=params)

    def describe_cluster_security_groups(self,
                                         cluster_security_group_name=None,
                                         max_records=None, marker=None):
        """
        Returns information about Amazon Redshift security groups. If the
        name of a security group is specified, the response will contain
        information only about that security group.

        For information about managing security groups, go to `Amazon
        Redshift Cluster Security Groups`_ in the Amazon Redshift
        Management Guide.

        :type cluster_security_group_name: string
        :param cluster_security_group_name: The name of a cluster security
            group for which you are requesting details. You can specify
            either the **Marker** parameter or a
            **ClusterSecurityGroupName** parameter, but not both.
        Example: `securitygroup1`

        :type max_records: integer
        :param max_records: The maximum number of records to be included in
            the response. If more records exist than the specified
            `MaxRecords` value, a marker is included in the response,
            which you can use in a subsequent
            DescribeClusterSecurityGroups request.
        Default: `100`

        Constraints: Value must be at least 20 and no more than 100.

        :type marker: string
        :param marker: An optional marker returned by a previous
            DescribeClusterSecurityGroups request to indicate the first
            security group that the current request will return. You can
            specify either the **Marker** parameter or a
            **ClusterSecurityGroupName** parameter, but not both.
        """
        params = {}
        if cluster_security_group_name is not None:
            params['ClusterSecurityGroupName'] = cluster_security_group_name
        if max_records is not None:
            params['MaxRecords'] = max_records
        if marker is not None:
            params['Marker'] = marker
        return self._make_request(
            action='DescribeClusterSecurityGroups',
            verb='POST',
            path='/', params=params)

    def describe_cluster_snapshots(self, cluster_identifier=None,
                                   snapshot_identifier=None,
                                   snapshot_type=None, start_time=None,
                                   end_time=None, max_records=None,
                                   marker=None, owner_account=None):
        """
        Returns one or more snapshot objects, which contain metadata
        about your cluster snapshots. By default, this operation returns
        information about all snapshots of all clusters that are owned by
        your AWS customer account. No information is returned for
        snapshots owned by inactive AWS customer accounts.

        :type cluster_identifier: string
        :param cluster_identifier: The identifier of the cluster for which
            information about snapshots is requested.

        :type snapshot_identifier: string
        :param snapshot_identifier: The snapshot identifier of the snapshot
            about which to return information.
:type snapshot_type: string :param snapshot_type: The type of snapshots for which you are requesting information. By default, snapshots of all types are returned. Valid Values: `automated` | `manual` :type start_time: timestamp :param start_time: A value that requests only snapshots created at or after the specified time. The time value is specified in ISO 8601 format. For more information about ISO 8601, go to the `ISO8601 Wikipedia page.`_ Example: `2012-07-16T18:00:00Z` :type end_time: timestamp :param end_time: A time value that requests only snapshots created at or before the specified time. The time value is specified in ISO 8601 format. For more information about ISO 8601, go to the `ISO8601 Wikipedia page.`_ Example: `2012-07-16T18:00:00Z` :type max_records: integer :param max_records: The maximum number of snapshot records to include in the response. If more records exist than the specified `MaxRecords` value, the response returns a marker that you can use in a subsequent DescribeClusterSnapshots request in order to retrieve the next set of snapshot records. Default: `100` Constraints: Must be at least 20 and no more than 100. :type marker: string :param marker: An optional marker returned by a previous DescribeClusterSnapshots request to indicate the first snapshot that the request will return. :type owner_account: string :param owner_account: The AWS customer account used to create or copy the snapshot. Use this field to filter the results to snapshots owned by a particular account. To describe snapshots you own, either specify your AWS customer account, or do not specify the parameter. """ params = {} if cluster_identifier is not None: params['ClusterIdentifier'] = cluster_identifier if snapshot_identifier is not None: params['SnapshotIdentifier'] = snapshot_identifier if snapshot_type is not None: params['SnapshotType'] = snapshot_type if start_time is not None: params['StartTime'] = start_time if end_time is not None: params['EndTime'] = end_time if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker if owner_account is not None: params['OwnerAccount'] = owner_account return self._make_request( action='DescribeClusterSnapshots', verb='POST', path='/', params=params) def describe_cluster_subnet_groups(self, cluster_subnet_group_name=None, max_records=None, marker=None): """ Returns one or more cluster subnet group objects, which contain metadata about your cluster subnet groups. By default, this operation returns information about all cluster subnet groups that are defined in your AWS account. :type cluster_subnet_group_name: string :param cluster_subnet_group_name: The name of the cluster subnet group for which information is requested. :type max_records: integer :param max_records: The maximum number of cluster subnet group records to include in the response. If more records exist than the specified `MaxRecords` value, the response returns a marker that you can use in a subsequent DescribeClusterSubnetGroups request in order to retrieve the next set of cluster subnet group records. Default: 100 Constraints: Must be at least 20 and no more than 100. :type marker: string :param marker: An optional marker returned by a previous DescribeClusterSubnetGroups request to indicate the first cluster subnet group that the current request will return.
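As a usage sketch, every Describe* call in this class pages its results the same way; a minimal pagination loop might look like the following (hedged: the ``boto.redshift.connect_to_region`` helper and the JSON envelope keys -- ``DescribeClusterSubnetGroupsResponse`` and its nested ``...Result`` -- reflect how this JSON client typically unwraps responses, and should be treated as illustrative assumptions)::

    import boto.redshift
    conn = boto.redshift.connect_to_region('us-east-1')
    marker = None
    while True:
        response = conn.describe_cluster_subnet_groups(max_records=20,
                                                       marker=marker)
        result = response['DescribeClusterSubnetGroupsResponse']
        result = result['DescribeClusterSubnetGroupsResult']
        for group in result['ClusterSubnetGroups']:
            print group['ClusterSubnetGroupName']
        # A missing Marker means there are no further pages.
        marker = result.get('Marker')
        if marker is None:
            break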
""" params = {} if cluster_subnet_group_name is not None: params['ClusterSubnetGroupName'] = cluster_subnet_group_name if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeClusterSubnetGroups', verb='POST', path='/', params=params) def describe_cluster_versions(self, cluster_version=None, cluster_parameter_group_family=None, max_records=None, marker=None): """ Returns descriptions of the available Amazon Redshift cluster versions. You can call this operation even before creating any clusters to learn more about the Amazon Redshift versions. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide :type cluster_version: string :param cluster_version: The specific cluster version to return. Example: `1.0` :type cluster_parameter_group_family: string :param cluster_parameter_group_family: The name of a specific cluster parameter group family to return details for. Constraints: + Must be 1 to 255 alphanumeric characters + First character must be a letter + Cannot end with a hyphen or contain two consecutive hyphens :type max_records: integer :param max_records: The maximum number of records to include in the response. If more than the `MaxRecords` value is available, a marker is included in the response so that the following results can be retrieved. Default: `100` Constraints: Value must be at least 20 and no more than 100. :type marker: string :param marker: The marker returned from a previous request. If this parameter is specified, the response includes records beyond the marker only, up to `MaxRecords`. """ params = {} if cluster_version is not None: params['ClusterVersion'] = cluster_version if cluster_parameter_group_family is not None: params['ClusterParameterGroupFamily'] = cluster_parameter_group_family if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeClusterVersions', verb='POST', path='/', params=params) def describe_clusters(self, cluster_identifier=None, max_records=None, marker=None): """ Returns properties of provisioned clusters including general cluster properties, cluster database properties, maintenance and backup properties, and security and access properties. This operation supports pagination. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide . :type cluster_identifier: string :param cluster_identifier: The unique identifier of a cluster whose properties you are requesting. This parameter isn't case sensitive. The default is that all clusters defined for an account are returned. :type max_records: integer :param max_records: The maximum number of records that the response can include. If more records exist than the specified `MaxRecords` value, a `marker` is included in the response that can be used in a new **DescribeClusters** request to continue listing results. Default: `100` Constraints: Value must be at least 20 and no more than 100. :type marker: string :param marker: An optional marker returned by a previous **DescribeClusters** request to indicate the first cluster that the current **DescribeClusters** request will return. You can specify either a **Marker** parameter or a **ClusterIdentifier** parameter in a **DescribeClusters** request, but not both. 
""" params = {} if cluster_identifier is not None: params['ClusterIdentifier'] = cluster_identifier if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeClusters', verb='POST', path='/', params=params) def describe_default_cluster_parameters(self, parameter_group_family, max_records=None, marker=None): """ Returns a list of parameter settings for the specified parameter group family. For more information about managing parameter groups, go to `Amazon Redshift Parameter Groups`_ in the Amazon Redshift Management Guide . :type parameter_group_family: string :param parameter_group_family: The name of the cluster parameter group family. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: Value must be at least 20 and no more than 100. :type marker: string :param marker: An optional marker returned from a previous **DescribeDefaultClusterParameters** request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`. """ params = {'ParameterGroupFamily': parameter_group_family, } if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeDefaultClusterParameters', verb='POST', path='/', params=params) def describe_event_categories(self, source_type=None): """ Displays a list of event categories for all event source types, or for a specified source type. For a list of the event categories and source types, go to `Amazon Redshift Event Notifications`_. :type source_type: string :param source_type: The source type, such as cluster or parameter group, to which the described event categories apply. Valid values: cluster, snapshot, parameter group, and security group. """ params = {} if source_type is not None: params['SourceType'] = source_type return self._make_request( action='DescribeEventCategories', verb='POST', path='/', params=params) def describe_event_subscriptions(self, subscription_name=None, max_records=None, marker=None): """ Lists descriptions of all the Amazon Redshift event notifications subscription for a customer account. If you specify a subscription name, lists the description for that subscription. :type subscription_name: string :param subscription_name: The name of the Amazon Redshift event notification subscription to be described. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: minimum 20, maximum 100 :type marker: string :param marker: An optional pagination token provided by a previous DescribeOrderableClusterOptions request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords. 
""" params = {} if subscription_name is not None: params['SubscriptionName'] = subscription_name if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeEventSubscriptions', verb='POST', path='/', params=params) def describe_events(self, source_identifier=None, source_type=None, start_time=None, end_time=None, duration=None, max_records=None, marker=None): """ Returns events related to clusters, security groups, snapshots, and parameter groups for the past 14 days. Events specific to a particular cluster, security group, snapshot or parameter group can be obtained by providing the name as a parameter. By default, the past hour of events are returned. :type source_identifier: string :param source_identifier: The identifier of the event source for which events will be returned. If this parameter is not specified, then all sources are included in the response. Constraints: If SourceIdentifier is supplied, SourceType must also be provided. + Specify a cluster identifier when SourceType is `cluster`. + Specify a cluster security group name when SourceType is `cluster- security-group`. + Specify a cluster parameter group name when SourceType is `cluster- parameter-group`. + Specify a cluster snapshot identifier when SourceType is `cluster- snapshot`. :type source_type: string :param source_type: The event source to retrieve events for. If no value is specified, all events are returned. Constraints: If SourceType is supplied, SourceIdentifier must also be provided. + Specify `cluster` when SourceIdentifier is a cluster identifier. + Specify `cluster-security-group` when SourceIdentifier is a cluster security group name. + Specify `cluster-parameter-group` when SourceIdentifier is a cluster parameter group name. + Specify `cluster-snapshot` when SourceIdentifier is a cluster snapshot identifier. :type start_time: timestamp :param start_time: The beginning of the time interval to retrieve events for, specified in ISO 8601 format. For more information about ISO 8601, go to the `ISO8601 Wikipedia page.`_ Example: `2009-07-08T18:00Z` :type end_time: timestamp :param end_time: The end of the time interval for which to retrieve events, specified in ISO 8601 format. For more information about ISO 8601, go to the `ISO8601 Wikipedia page.`_ Example: `2009-07-08T18:00Z` :type duration: integer :param duration: The number of minutes prior to the time of the request for which to retrieve events. For example, if the request is sent at 18:00 and you specify a duration of 60, then only events which have occurred after 17:00 will be returned. Default: `60` :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: Value must be at least 20 and no more than 100. :type marker: string :param marker: An optional marker returned from a previous **DescribeEvents** request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`. 
""" params = {} if source_identifier is not None: params['SourceIdentifier'] = source_identifier if source_type is not None: params['SourceType'] = source_type if start_time is not None: params['StartTime'] = start_time if end_time is not None: params['EndTime'] = end_time if duration is not None: params['Duration'] = duration if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeEvents', verb='POST', path='/', params=params) def describe_hsm_client_certificates(self, hsm_client_certificate_identifier=None, max_records=None, marker=None): """ Returns information about the specified HSM client certificate. If no certificate ID is specified, returns information about all the HSM certificates owned by your AWS customer account. :type hsm_client_certificate_identifier: string :param hsm_client_certificate_identifier: The identifier of a specific HSM client certificate for which you want information. If no identifier is specified, information is returned for all HSM client certificates associated with Amazon Redshift clusters owned by your AWS customer account. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: minimum 20, maximum 100. :type marker: string :param marker: An optional marker returned from a previous **DescribeOrderableClusterOptions** request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`. """ params = {} if hsm_client_certificate_identifier is not None: params['HsmClientCertificateIdentifier'] = hsm_client_certificate_identifier if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeHsmClientCertificates', verb='POST', path='/', params=params) def describe_hsm_configurations(self, hsm_configuration_identifier=None, max_records=None, marker=None): """ Returns information about the specified Amazon Redshift HSM configuration. If no configuration ID is specified, returns information about all the HSM configurations owned by your AWS customer account. :type hsm_configuration_identifier: string :param hsm_configuration_identifier: The identifier of a specific Amazon Redshift HSM configuration to be described. If no identifier is specified, information is returned for all HSM configurations owned by your AWS customer account. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: minimum 20, maximum 100. :type marker: string :param marker: An optional marker returned from a previous **DescribeOrderableClusterOptions** request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`. 
""" params = {} if hsm_configuration_identifier is not None: params['HsmConfigurationIdentifier'] = hsm_configuration_identifier if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeHsmConfigurations', verb='POST', path='/', params=params) def describe_logging_status(self, cluster_identifier): """ Describes whether information, such as queries and connection attempts, is being logged for the specified Amazon Redshift cluster. :type cluster_identifier: string :param cluster_identifier: The identifier of the cluster to get the logging status from. Example: `examplecluster` """ params = {'ClusterIdentifier': cluster_identifier, } return self._make_request( action='DescribeLoggingStatus', verb='POST', path='/', params=params) def describe_orderable_cluster_options(self, cluster_version=None, node_type=None, max_records=None, marker=None): """ Returns a list of orderable cluster options. Before you create a new cluster you can use this operation to find what options are available, such as the EC2 Availability Zones (AZ) in the specific AWS region that you can specify, and the node types you can request. The node types differ by available storage, memory, CPU and price. With the cost involved you might want to obtain a list of cluster options in the specific region and specify values when creating a cluster. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide :type cluster_version: string :param cluster_version: The version filter value. Specify this parameter to show only the available offerings matching the specified version. Default: All versions. Constraints: Must be one of the version returned from DescribeClusterVersions. :type node_type: string :param node_type: The node type filter value. Specify this parameter to show only the available offerings matching the specified node type. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: minimum 20, maximum 100. :type marker: string :param marker: An optional marker returned from a previous **DescribeOrderableClusterOptions** request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`. """ params = {} if cluster_version is not None: params['ClusterVersion'] = cluster_version if node_type is not None: params['NodeType'] = node_type if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeOrderableClusterOptions', verb='POST', path='/', params=params) def describe_reserved_node_offerings(self, reserved_node_offering_id=None, max_records=None, marker=None): """ Returns a list of the available reserved node offerings by Amazon Redshift with their descriptions including the node type, the fixed and recurring costs of reserving the node and duration the node will be reserved for you. These descriptions help you determine which reserve node offering you want to purchase. You then use the unique offering ID in you call to PurchaseReservedNodeOffering to reserve one or more nodes for your Amazon Redshift cluster. 
For more information about managing parameter groups, go to `Purchasing Reserved Nodes`_ in the Amazon Redshift Management Guide . :type reserved_node_offering_id: string :param reserved_node_offering_id: The unique identifier for the offering. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: minimum 20, maximum 100. :type marker: string :param marker: An optional marker returned by a previous DescribeReservedNodeOfferings request to indicate the first offering that the request will return. You can specify either a **Marker** parameter or a **ClusterIdentifier** parameter in a DescribeClusters request, but not both. """ params = {} if reserved_node_offering_id is not None: params['ReservedNodeOfferingId'] = reserved_node_offering_id if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeReservedNodeOfferings', verb='POST', path='/', params=params) def describe_reserved_nodes(self, reserved_node_id=None, max_records=None, marker=None): """ Returns the descriptions of the reserved nodes. :type reserved_node_id: string :param reserved_node_id: Identifier for the node reservation. :type max_records: integer :param max_records: The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a marker is included in the response so that the remaining results may be retrieved. Default: `100` Constraints: minimum 20, maximum 100. :type marker: string :param marker: An optional marker returned by a previous DescribeReservedNodes request to indicate the first parameter group that the current request will return. """ params = {} if reserved_node_id is not None: params['ReservedNodeId'] = reserved_node_id if max_records is not None: params['MaxRecords'] = max_records if marker is not None: params['Marker'] = marker return self._make_request( action='DescribeReservedNodes', verb='POST', path='/', params=params) def describe_resize(self, cluster_identifier): """ Returns information about the last resize operation for the specified cluster. If no resize operation has ever been initiated for the specified cluster, a `HTTP 404` error is returned. If a resize operation was initiated and completed, the status of the resize remains as `SUCCEEDED` until the next resize. A resize operation can be requested using ModifyCluster and specifying a different number or type of nodes for the cluster. :type cluster_identifier: string :param cluster_identifier: The unique identifier of a cluster whose resize progress you are requesting. This parameter isn't case- sensitive. By default, resize operations for all clusters defined for an AWS account are returned. """ params = {'ClusterIdentifier': cluster_identifier, } return self._make_request( action='DescribeResize', verb='POST', path='/', params=params) def disable_logging(self, cluster_identifier): """ Stops logging information, such as queries and connection attempts, for the specified Amazon Redshift cluster. :type cluster_identifier: string :param cluster_identifier: The identifier of the cluster on which logging is to be stopped. 
Example: `examplecluster` """ params = {'ClusterIdentifier': cluster_identifier, } return self._make_request( action='DisableLogging', verb='POST', path='/', params=params) def disable_snapshot_copy(self, cluster_identifier): """ Disables the automatic copying of snapshots from one region to another region for a specified cluster. :type cluster_identifier: string :param cluster_identifier: The unique identifier of the source cluster that you want to disable copying of snapshots to a destination region. Constraints: Must be the valid name of an existing cluster that has cross-region snapshot copy enabled. """ params = {'ClusterIdentifier': cluster_identifier, } return self._make_request( action='DisableSnapshotCopy', verb='POST', path='/', params=params) def enable_logging(self, cluster_identifier, bucket_name, s3_key_prefix=None): """ Starts logging information, such as queries and connection attempts, for the specified Amazon Redshift cluster. :type cluster_identifier: string :param cluster_identifier: The identifier of the cluster on which logging is to be started. Example: `examplecluster` :type bucket_name: string :param bucket_name: The name of an existing S3 bucket where the log files are to be stored. Constraints: + Must be in the same region as the cluster + The cluster must have read bucket and put object permissions :type s3_key_prefix: string :param s3_key_prefix: The prefix applied to the log file names. Constraints: + Cannot exceed 512 characters + Cannot contain spaces( ), double quotes ("), single quotes ('), a backslash (\), or control characters. The hexadecimal codes for invalid characters are: + x00 to x20 + x22 + x27 + x5c + x7f or larger """ params = { 'ClusterIdentifier': cluster_identifier, 'BucketName': bucket_name, } if s3_key_prefix is not None: params['S3KeyPrefix'] = s3_key_prefix return self._make_request( action='EnableLogging', verb='POST', path='/', params=params) def enable_snapshot_copy(self, cluster_identifier, destination_region, retention_period=None): """ Enables the automatic copy of snapshots from one region to another region for a specified cluster. :type cluster_identifier: string :param cluster_identifier: The unique identifier of the source cluster to copy snapshots from. Constraints: Must be the valid name of an existing cluster that does not already have cross-region snapshot copy enabled. :type destination_region: string :param destination_region: The destination region that you want to copy snapshots to. Constraints: Must be the name of a valid region. For more information, see `Regions and Endpoints`_ in the Amazon Web Services General Reference. :type retention_period: integer :param retention_period: The number of days to retain automated snapshots in the destination region after they are copied from the source region. Default: 7. Constraints: Must be at least 1 and no more than 35. 
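A hedged sketch (cluster name and destination region are illustrative; the call simply maps these arguments onto the query parameters documented above)::

    # Copy automated snapshots to us-west-2 and keep them 14 days
    # (retention must be between 1 and 35).
    conn.enable_snapshot_copy('examplecluster', 'us-west-2',
                              retention_period=14)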
""" params = { 'ClusterIdentifier': cluster_identifier, 'DestinationRegion': destination_region, } if retention_period is not None: params['RetentionPeriod'] = retention_period return self._make_request( action='EnableSnapshotCopy', verb='POST', path='/', params=params) def modify_cluster(self, cluster_identifier, cluster_type=None, node_type=None, number_of_nodes=None, cluster_security_groups=None, vpc_security_group_ids=None, master_user_password=None, cluster_parameter_group_name=None, automated_snapshot_retention_period=None, preferred_maintenance_window=None, cluster_version=None, allow_version_upgrade=None, hsm_client_certificate_identifier=None, hsm_configuration_identifier=None): """ Modifies the settings for a cluster. For example, you can add another security or parameter group, update the preferred maintenance window, or change the master user password. Resetting a cluster password or modifying the security groups associated with a cluster do not need a reboot. However, modifying parameter group requires a reboot for parameters to take effect. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide You can also change node type and the number of nodes to scale up or down the cluster. When resizing a cluster, you must specify both the number of nodes and the node type even if one of the parameters does not change. If you specify the same number of nodes and node type that are already configured for the cluster, an error is returned. :type cluster_identifier: string :param cluster_identifier: The unique identifier of the cluster to be modified. Example: `examplecluster` :type cluster_type: string :param cluster_type: The new cluster type. When you submit your cluster resize request, your existing cluster goes into a read-only mode. After Amazon Redshift provisions a new cluster based on your resize requirements, there will be outage for a period while the old cluster is deleted and your connection is switched to the new cluster. You can use DescribeResize to track the progress of the resize request. Valid Values: ` multi-node | single-node ` :type node_type: string :param node_type: The new node type of the cluster. If you specify a new node type, you must also specify the number of nodes parameter also. When you submit your request to resize a cluster, Amazon Redshift sets access permissions for the cluster to read-only. After Amazon Redshift provisions a new cluster according to your resize requirements, there will be a temporary outage while the old cluster is deleted and your connection is switched to the new cluster. When the new connection is complete, the original access permissions for the cluster are restored. You can use the DescribeResize to track the progress of the resize request. Valid Values: ` dw.hs1.xlarge` | `dw.hs1.8xlarge` :type number_of_nodes: integer :param number_of_nodes: The new number of nodes of the cluster. If you specify a new number of nodes, you must also specify the node type parameter also. When you submit your request to resize a cluster, Amazon Redshift sets access permissions for the cluster to read-only. After Amazon Redshift provisions a new cluster according to your resize requirements, there will be a temporary outage while the old cluster is deleted and your connection is switched to the new cluster. When the new connection is complete, the original access permissions for the cluster are restored. You can use DescribeResize to track the progress of the resize request. 
Valid Values: Integer greater than `0`. :type cluster_security_groups: list :param cluster_security_groups: A list of cluster security groups to be authorized on this cluster. This change is asynchronously applied as soon as possible. Security groups currently associated with the cluster and not in the list of groups to apply, will be revoked from the cluster. Constraints: + Must be 1 to 255 alphanumeric characters or hyphens + First character must be a letter + Cannot end with a hyphen or contain two consecutive hyphens :type vpc_security_group_ids: list :param vpc_security_group_ids: A list of Virtual Private Cloud (VPC) security groups to be associated with the cluster. :type master_user_password: string :param master_user_password: The new password for the cluster master user. This change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the `MasterUserPassword` element exists in the `PendingModifiedValues` element of the operation response. Operations never return the password, so this operation provides a way to regain access to the master user account for a cluster if the password is lost. Default: Uses existing setting. Constraints: + Must be between 8 and 64 characters in length. + Must contain at least one uppercase letter. + Must contain at least one lowercase letter. + Must contain one number. + Can be any printable ASCII character (ASCII code 33 to 126) except ' (single quote), " (double quote), \, /, @, or space. :type cluster_parameter_group_name: string :param cluster_parameter_group_name: The name of the cluster parameter group to apply to this cluster. This change is applied only after the cluster is rebooted. To reboot a cluster use RebootCluster. Default: Uses existing setting. Constraints: The cluster parameter group must be in the same parameter group family that matches the cluster version. :type automated_snapshot_retention_period: integer :param automated_snapshot_retention_period: The number of days that automated snapshots are retained. If the value is 0, automated snapshots are disabled. Even if automated snapshots are disabled, you can still create manual snapshots when you want with CreateClusterSnapshot. If you decrease the automated snapshot retention period from its current value, existing automated snapshots which fall outside of the new retention period will be immediately deleted. Default: Uses existing setting. Constraints: Must be a value from 0 to 35. :type preferred_maintenance_window: string :param preferred_maintenance_window: The weekly time range (in UTC) during which system maintenance can occur, if necessary. If system maintenance is necessary during the window, it may result in an outage. This maintenance window change is made immediately. If the new maintenance window indicates the current time, there must be at least 120 minutes between the current time and end of the window in order to ensure that pending changes are applied. Default: Uses existing setting. Format: ddd:hh24:mi-ddd:hh24:mi, for example `wed:07:30-wed:08:00`. Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun Constraints: Must be at least 30 minutes. :type cluster_version: string :param cluster_version: The new version number of the Amazon Redshift engine to upgrade to. For major version upgrades, if a non-default cluster parameter group is currently in use, a new cluster parameter group in the cluster parameter group family for the new version must be specified. 
The new cluster parameter group can be the default for that cluster parameter group family. For more information about managing parameter groups, go to `Amazon Redshift Parameter Groups`_ in the Amazon Redshift Management Guide . Example: `1.0` :type allow_version_upgrade: boolean :param allow_version_upgrade: If `True`, upgrades will be applied automatically to the cluster during the maintenance window. Default: `False` :type hsm_client_certificate_identifier: string :param hsm_client_certificate_identifier: Specifies the name of the HSM client certificate the Amazon Redshift cluster uses to retrieve the data encryption keys stored in an HSM. :type hsm_configuration_identifier: string :param hsm_configuration_identifier: Specifies the name of the HSM configuration that contains the information the Amazon Redshift cluster can use to retrieve and store keys in an HSM. """ params = {'ClusterIdentifier': cluster_identifier, } if cluster_type is not None: params['ClusterType'] = cluster_type if node_type is not None: params['NodeType'] = node_type if number_of_nodes is not None: params['NumberOfNodes'] = number_of_nodes if cluster_security_groups is not None: self.build_list_params(params, cluster_security_groups, 'ClusterSecurityGroups.member') if vpc_security_group_ids is not None: self.build_list_params(params, vpc_security_group_ids, 'VpcSecurityGroupIds.member') if master_user_password is not None: params['MasterUserPassword'] = master_user_password if cluster_parameter_group_name is not None: params['ClusterParameterGroupName'] = cluster_parameter_group_name if automated_snapshot_retention_period is not None: params['AutomatedSnapshotRetentionPeriod'] = automated_snapshot_retention_period if preferred_maintenance_window is not None: params['PreferredMaintenanceWindow'] = preferred_maintenance_window if cluster_version is not None: params['ClusterVersion'] = cluster_version if allow_version_upgrade is not None: params['AllowVersionUpgrade'] = str( allow_version_upgrade).lower() if hsm_client_certificate_identifier is not None: params['HsmClientCertificateIdentifier'] = hsm_client_certificate_identifier if hsm_configuration_identifier is not None: params['HsmConfigurationIdentifier'] = hsm_configuration_identifier return self._make_request( action='ModifyCluster', verb='POST', path='/', params=params) def modify_cluster_parameter_group(self, parameter_group_name, parameters): """ Modifies the parameters of a parameter group. For more information about managing parameter groups, go to `Amazon Redshift Parameter Groups`_ in the Amazon Redshift Management Guide . :type parameter_group_name: string :param parameter_group_name: The name of the parameter group to be modified. :type parameters: list :param parameters: An array of parameters to be modified. A maximum of 20 parameters can be modified in a single request. For each parameter to be modified, you must supply at least the parameter name and parameter value; other name-value pairs of the parameter are optional. 
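A sketch of the expected ``parameters`` shape (hedged: this assumes the inherited ``build_complex_list_params`` helper pairs each tuple positionally against the field names listed in the code below, so a two-element tuple supplies just ParameterName and ParameterValue; the parameter names are illustrative Redshift settings)::

    conn.modify_cluster_parameter_group(
        'myclusterparametergroup',
        [('statement_timeout', '20000'),
         ('extra_float_digits', '2')])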
""" params = {'ParameterGroupName': parameter_group_name, } self.build_complex_list_params( params, parameters, 'Parameters.member', ('ParameterName', 'ParameterValue', 'Description', 'Source', 'DataType', 'AllowedValues', 'IsModifiable', 'MinimumEngineVersion')) return self._make_request( action='ModifyClusterParameterGroup', verb='POST', path='/', params=params) def modify_cluster_subnet_group(self, cluster_subnet_group_name, subnet_ids, description=None): """ Modifies a cluster subnet group to include the specified list of VPC subnets. The operation replaces the existing list of subnets with the new list of subnets. :type cluster_subnet_group_name: string :param cluster_subnet_group_name: The name of the subnet group to be modified. :type description: string :param description: A text description of the subnet group to be modified. :type subnet_ids: list :param subnet_ids: An array of VPC subnet IDs. A maximum of 20 subnets can be modified in a single request. """ params = { 'ClusterSubnetGroupName': cluster_subnet_group_name, } self.build_list_params(params, subnet_ids, 'SubnetIds.member') if description is not None: params['Description'] = description return self._make_request( action='ModifyClusterSubnetGroup', verb='POST', path='/', params=params) def modify_event_subscription(self, subscription_name, sns_topic_arn=None, source_type=None, source_ids=None, event_categories=None, severity=None, enabled=None): """ Modifies an existing Amazon Redshift event notification subscription. :type subscription_name: string :param subscription_name: The name of the modified Amazon Redshift event notification subscription. :type sns_topic_arn: string :param sns_topic_arn: The Amazon Resource Name (ARN) of the SNS topic to be used by the event notification subscription. :type source_type: string :param source_type: The type of source that will be generating the events. For example, if you want to be notified of events generated by a cluster, you would set this parameter to cluster. If this value is not specified, events are returned for all Amazon Redshift objects in your AWS account. You must specify a source type in order to specify source IDs. Valid values: cluster, cluster-parameter-group, cluster-security-group, and cluster-snapshot. :type source_ids: list :param source_ids: A list of one or more identifiers of Amazon Redshift source objects. All of the objects must be of the same type as was specified in the source type parameter. The event subscription will return only events generated by the specified objects. If not specified, then events are returned for all objects within the source type specified. Example: my-cluster-1, my-cluster-2 Example: my-snapshot-20131010 :type event_categories: list :param event_categories: Specifies the Amazon Redshift event categories to be published by the event notification subscription. Values: Configuration, Management, Monitoring, Security :type severity: string :param severity: Specifies the Amazon Redshift event severity to be published by the event notification subscription. Values: ERROR, INFO :type enabled: boolean :param enabled: A Boolean value indicating if the subscription is enabled. 
`True` indicates the subscription is enabled """ params = {'SubscriptionName': subscription_name, } if sns_topic_arn is not None: params['SnsTopicArn'] = sns_topic_arn if source_type is not None: params['SourceType'] = source_type if source_ids is not None: self.build_list_params(params, source_ids, 'SourceIds.member') if event_categories is not None: self.build_list_params(params, event_categories, 'EventCategories.member') if severity is not None: params['Severity'] = severity if enabled is not None: params['Enabled'] = str( enabled).lower() return self._make_request( action='ModifyEventSubscription', verb='POST', path='/', params=params) def modify_snapshot_copy_retention_period(self, cluster_identifier, retention_period): """ Modifies the number of days to retain automated snapshots in the destination region after they are copied from the source region. :type cluster_identifier: string :param cluster_identifier: The unique identifier of the cluster for which you want to change the retention period for automated snapshots that are copied to a destination region. Constraints: Must be the valid name of an existing cluster that has cross-region snapshot copy enabled. :type retention_period: integer :param retention_period: The number of days to retain automated snapshots in the destination region after they are copied from the source region. If you decrease the retention period for automated snapshots that are copied to a destination region, Amazon Redshift will delete any existing automated snapshots that were copied to the destination region and that fall outside of the new retention period. Constraints: Must be at least 1 and no more than 35. """ params = { 'ClusterIdentifier': cluster_identifier, 'RetentionPeriod': retention_period, } return self._make_request( action='ModifySnapshotCopyRetentionPeriod', verb='POST', path='/', params=params) def purchase_reserved_node_offering(self, reserved_node_offering_id, node_count=None): """ Allows you to purchase reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one of the offerings. You can call the DescribeReservedNodeOfferings API to obtain the available reserved node offerings. You can call this API by providing a specific reserved node offering and the number of nodes you want to reserve. For more information about managing parameter groups, go to `Purchasing Reserved Nodes`_ in the Amazon Redshift Management Guide . :type reserved_node_offering_id: string :param reserved_node_offering_id: The unique identifier of the reserved node offering you want to purchase. :type node_count: integer :param node_count: The number of reserved nodes you want to purchase. Default: `1` """ params = { 'ReservedNodeOfferingId': reserved_node_offering_id, } if node_count is not None: params['NodeCount'] = node_count return self._make_request( action='PurchaseReservedNodeOffering', verb='POST', path='/', params=params) def reboot_cluster(self, cluster_identifier): """ Reboots a cluster. This action is taken as soon as possible. It results in a momentary outage to the cluster, during which the cluster status is set to `rebooting`. A cluster event is created when the reboot is completed. Any pending cluster modifications (see ModifyCluster) are applied at this reboot. For more information about managing clusters, go to `Amazon Redshift Clusters`_ in the Amazon Redshift Management Guide :type cluster_identifier: string :param cluster_identifier: The cluster identifier. 
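For example, to apply a pending parameter group change (the identifier is illustrative)::

    conn.reboot_cluster('examplecluster')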
""" params = {'ClusterIdentifier': cluster_identifier, } return self._make_request( action='RebootCluster', verb='POST', path='/', params=params) def reset_cluster_parameter_group(self, parameter_group_name, reset_all_parameters=None, parameters=None): """ Sets one or more parameters of the specified parameter group to their default values and sets the source values of the parameters to "engine-default". To reset the entire parameter group specify the ResetAllParameters parameter. For parameter changes to take effect you must reboot any associated clusters. :type parameter_group_name: string :param parameter_group_name: The name of the cluster parameter group to be reset. :type reset_all_parameters: boolean :param reset_all_parameters: If `True`, all parameters in the specified parameter group will be reset to their default values. Default: `True` :type parameters: list :param parameters: An array of names of parameters to be reset. If ResetAllParameters option is not used, then at least one parameter name must be supplied. Constraints: A maximum of 20 parameters can be reset in a single request. """ params = {'ParameterGroupName': parameter_group_name, } if reset_all_parameters is not None: params['ResetAllParameters'] = str( reset_all_parameters).lower() if parameters is not None: self.build_complex_list_params( params, parameters, 'Parameters.member', ('ParameterName', 'ParameterValue', 'Description', 'Source', 'DataType', 'AllowedValues', 'IsModifiable', 'MinimumEngineVersion')) return self._make_request( action='ResetClusterParameterGroup', verb='POST', path='/', params=params) def restore_from_cluster_snapshot(self, cluster_identifier, snapshot_identifier, snapshot_cluster_identifier=None, port=None, availability_zone=None, allow_version_upgrade=None, cluster_subnet_group_name=None, publicly_accessible=None, owner_account=None, hsm_client_certificate_identifier=None, hsm_configuration_identifier=None, elastic_ip=None): """ Creates a new cluster from a snapshot. Amazon Redshift creates the resulting cluster with the same configuration as the original cluster from which the snapshot was created, except that the new cluster is created with the default cluster security and parameter group. After Amazon Redshift creates the cluster you can use the ModifyCluster API to associate a different security group and different parameter group with the restored cluster. If a snapshot is taken of a cluster in VPC, you can restore it only in VPC. In this case, you must provide a cluster subnet group where you want the cluster restored. If snapshot is taken of a cluster outside VPC, then you can restore it only outside VPC. For more information about working with snapshots, go to `Amazon Redshift Snapshots`_ in the Amazon Redshift Management Guide . :type cluster_identifier: string :param cluster_identifier: The identifier of the cluster that will be created from restoring the snapshot. Constraints: + Must contain from 1 to 63 alphanumeric characters or hyphens. + Alphabetic characters must be lowercase. + First character must be a letter. + Cannot end with a hyphen or contain two consecutive hyphens. + Must be unique for all clusters within an AWS account. :type snapshot_identifier: string :param snapshot_identifier: The name of the snapshot from which to create the new cluster. This parameter isn't case sensitive. Example: `my-snapshot-id` :type snapshot_cluster_identifier: string :param snapshot_cluster_identifier: The name of the cluster the source snapshot was created from. 
This parameter is required if your IAM user has a policy containing a snapshot resource element that specifies anything other than * for the cluster name. :type port: integer :param port: The port number on which the cluster accepts connections. Default: The same port as the original cluster. Constraints: Must be between `1115` and `65535`. :type availability_zone: string :param availability_zone: The Amazon EC2 Availability Zone in which to restore the cluster. Default: A random, system-chosen Availability Zone. Example: `us-east-1a` :type allow_version_upgrade: boolean :param allow_version_upgrade: If `True`, upgrades can be applied during the maintenance window to the Amazon Redshift engine that is running on the cluster. Default: `True` :type cluster_subnet_group_name: string :param cluster_subnet_group_name: The name of the subnet group where you want the cluster restored. A snapshot of a cluster in VPC can be restored only in VPC. Therefore, you must provide the subnet group name where you want the cluster restored. :type publicly_accessible: boolean :param publicly_accessible: If `True`, the cluster can be accessed from a public network. :type owner_account: string :param owner_account: The AWS customer account used to create or copy the snapshot. Required if you are restoring a snapshot you do not own, optional if you own the snapshot. :type hsm_client_certificate_identifier: string :param hsm_client_certificate_identifier: Specifies the name of the HSM client certificate the Amazon Redshift cluster uses to retrieve the data encryption keys stored in an HSM. :type hsm_configuration_identifier: string :param hsm_configuration_identifier: Specifies the name of the HSM configuration that contains the information the Amazon Redshift cluster can use to retrieve and store keys in an HSM. :type elastic_ip: string :param elastic_ip: The elastic IP (EIP) address for the cluster. """ params = { 'ClusterIdentifier': cluster_identifier, 'SnapshotIdentifier': snapshot_identifier, } if snapshot_cluster_identifier is not None: params['SnapshotClusterIdentifier'] = snapshot_cluster_identifier if port is not None: params['Port'] = port if availability_zone is not None: params['AvailabilityZone'] = availability_zone if allow_version_upgrade is not None: params['AllowVersionUpgrade'] = str( allow_version_upgrade).lower() if cluster_subnet_group_name is not None: params['ClusterSubnetGroupName'] = cluster_subnet_group_name if publicly_accessible is not None: params['PubliclyAccessible'] = str( publicly_accessible).lower() if owner_account is not None: params['OwnerAccount'] = owner_account if hsm_client_certificate_identifier is not None: params['HsmClientCertificateIdentifier'] = hsm_client_certificate_identifier if hsm_configuration_identifier is not None: params['HsmConfigurationIdentifier'] = hsm_configuration_identifier if elastic_ip is not None: params['ElasticIp'] = elastic_ip return self._make_request( action='RestoreFromClusterSnapshot', verb='POST', path='/', params=params) def revoke_cluster_security_group_ingress(self, cluster_security_group_name, cidrip=None, ec2_security_group_name=None, ec2_security_group_owner_id=None): """ Revokes an ingress rule in an Amazon Redshift security group for a previously authorized IP range or Amazon EC2 security group. To add an ingress rule, see AuthorizeClusterSecurityGroupIngress. For information about managing security groups, go to `Amazon Redshift Cluster Security Groups`_ in the Amazon Redshift Management Guide .
:type cluster_security_group_name: string :param cluster_security_group_name: The name of the security Group from which to revoke the ingress rule. :type cidrip: string :param cidrip: The IP range for which to revoke access. This range must be a valid Classless Inter-Domain Routing (CIDR) block of IP addresses. If `CIDRIP` is specified, `EC2SecurityGroupName` and `EC2SecurityGroupOwnerId` cannot be provided. :type ec2_security_group_name: string :param ec2_security_group_name: The name of the EC2 Security Group whose access is to be revoked. If `EC2SecurityGroupName` is specified, `EC2SecurityGroupOwnerId` must also be provided and `CIDRIP` cannot be provided. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The AWS account number of the owner of the security group specified in the `EC2SecurityGroupName` parameter. The AWS access key ID is not an acceptable value. If `EC2SecurityGroupOwnerId` is specified, `EC2SecurityGroupName` must also be provided. and `CIDRIP` cannot be provided. Example: `111122223333` """ params = { 'ClusterSecurityGroupName': cluster_security_group_name, } if cidrip is not None: params['CIDRIP'] = cidrip if ec2_security_group_name is not None: params['EC2SecurityGroupName'] = ec2_security_group_name if ec2_security_group_owner_id is not None: params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id return self._make_request( action='RevokeClusterSecurityGroupIngress', verb='POST', path='/', params=params) def revoke_snapshot_access(self, snapshot_identifier, account_with_restore_access, snapshot_cluster_identifier=None): """ Removes the ability of the specified AWS customer account to restore the specified snapshot. If the account is currently restoring the snapshot, the restore will run to completion. For more information about working with snapshots, go to `Amazon Redshift Snapshots`_ in the Amazon Redshift Management Guide . :type snapshot_identifier: string :param snapshot_identifier: The identifier of the snapshot that the account can no longer access. :type snapshot_cluster_identifier: string :param snapshot_cluster_identifier: The identifier of the cluster the snapshot was created from. This parameter is required if your IAM user has a policy containing a snapshot resource element that specifies anything other than * for the cluster name. :type account_with_restore_access: string :param account_with_restore_access: The identifier of the AWS customer account that can no longer restore the specified snapshot. """ params = { 'SnapshotIdentifier': snapshot_identifier, 'AccountWithRestoreAccess': account_with_restore_access, } if snapshot_cluster_identifier is not None: params['SnapshotClusterIdentifier'] = snapshot_cluster_identifier return self._make_request( action='RevokeSnapshotAccess', verb='POST', path='/', params=params) def rotate_encryption_key(self, cluster_identifier): """ Rotates the encryption keys for a cluster. :type cluster_identifier: string :param cluster_identifier: The unique identifier of the cluster that you want to rotate the encryption keys for. Constraints: Must be the name of valid cluster that has encryption enabled. 
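For example (the cluster name is illustrative; as with the other methods here, a service fault surfaces as the exception class that ``_make_request`` maps from the error code)::

    conn.rotate_encryption_key('examplecluster')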
""" params = {'ClusterIdentifier': cluster_identifier, } return self._make_request( action='RotateEncryptionKey', verb='POST', path='/', params=params) def _make_request(self, action, verb, path, params): params['ContentType'] = 'JSON' response = self.make_request(action=action, verb='POST', path='/', params=params) body = response.read() boto.log.debug(body) if response.status == 200: return json.loads(body) else: json_body = json.loads(body) fault_name = json_body.get('Error', {}).get('Code', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/regioninfo.py000066400000000000000000000046111225267101000162110ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class RegionInfo(object): """ Represents an AWS Region """ def __init__(self, connection=None, name=None, endpoint=None, connection_cls=None): self.connection = connection self.name = name self.endpoint = endpoint self.connection_cls = connection_cls def __repr__(self): return 'RegionInfo:%s' % self.name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'regionName': self.name = value elif name == 'regionEndpoint': self.endpoint = value else: setattr(self, name, value) def connect(self, **kw_params): """ Connect to this Region's endpoint. Returns an connection object pointing to the endpoint associated with this region. You may pass any of the arguments accepted by the connection class's constructor as keyword arguments and they will be passed along to the connection object. 
:rtype: Connection object :return: The connection to this regions endpoint """ if self.connection_cls: return self.connection_cls(region=self, **kw_params) boto-2.20.1/boto/resultset.py000066400000000000000000000142601225267101000161050ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.s3.user import User class ResultSet(list): """ The ResultSet is used to pass results back from the Amazon services to the client. It is light wrapper around Python's :py:class:`list` class, with some additional methods for parsing XML results from AWS. Because I don't really want any dependencies on external libraries, I'm using the standard SAX parser that comes with Python. The good news is that it's quite fast and efficient but it makes some things rather difficult. You can pass in, as the marker_elem parameter, a list of tuples. Each tuple contains a string as the first element which represents the XML element that the resultset needs to be on the lookout for and a Python class as the second element of the tuple. Each time the specified element is found in the XML, a new instance of the class will be created and popped onto the stack. :ivar str next_token: A hash used to assist in paging through very long result sets. In most cases, passing this value to certain methods will give you another 'page' of results. """ def __init__(self, marker_elem=None): list.__init__(self) if isinstance(marker_elem, list): self.markers = marker_elem else: self.markers = [] self.marker = None self.key_marker = None self.next_marker = None # avail when delimiter used self.next_key_marker = None self.next_upload_id_marker = None self.next_version_id_marker = None self.next_generation_marker= None self.version_id_marker = None self.is_truncated = False self.next_token = None self.status = True def startElement(self, name, attrs, connection): for t in self.markers: if name == t[0]: obj = t[1](connection) self.append(obj) return obj if name == 'Owner': # Makes owner available for get_service and # perhaps other lists where not handled by # another element. 
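# Returning the new User object makes it the current SAX target, so the nested owner sub-elements (e.g. ID, DisplayName) are delivered to it by subsequent parse events.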
self.owner = User() return self.owner return None def to_boolean(self, value, true_value='true'): if value == true_value: return True else: return False def endElement(self, name, value, connection): if name == 'IsTruncated': self.is_truncated = self.to_boolean(value) elif name == 'Marker': self.marker = value elif name == 'KeyMarker': self.key_marker = value elif name == 'NextMarker': self.next_marker = value elif name == 'NextKeyMarker': self.next_key_marker = value elif name == 'VersionIdMarker': self.version_id_marker = value elif name == 'NextVersionIdMarker': self.next_version_id_marker = value elif name == 'NextGenerationMarker': self.next_generation_marker = value elif name == 'UploadIdMarker': self.upload_id_marker = value elif name == 'NextUploadIdMarker': self.next_upload_id_marker = value elif name == 'Bucket': self.bucket = value elif name == 'MaxUploads': self.max_uploads = int(value) elif name == 'MaxItems': self.max_items = int(value) elif name == 'Prefix': self.prefix = value elif name == 'return': self.status = self.to_boolean(value) elif name == 'StatusCode': self.status = self.to_boolean(value, 'Success') elif name == 'ItemName': self.append(value) elif name == 'NextToken': self.next_token = value elif name == 'BoxUsage': try: connection.box_usage += float(value) except: pass elif name == 'IsValid': self.status = self.to_boolean(value, 'True') else: setattr(self, name, value) class BooleanResult(object): def __init__(self, marker_elem=None): self.status = True self.request_id = None self.box_usage = None def __repr__(self): if self.status: return 'True' else: return 'False' def __nonzero__(self): return self.status def startElement(self, name, attrs, connection): return None def to_boolean(self, value, true_value='true'): if value == true_value: return True else: return False def endElement(self, name, value, connection): if name == 'return': self.status = self.to_boolean(value) elif name == 'StatusCode': self.status = self.to_boolean(value, 'Success') elif name == 'IsValid': self.status = self.to_boolean(value, 'True') elif name == 'RequestId': self.request_id = value elif name == 'requestId': self.request_id = value elif name == 'BoxUsage': self.box_usage = value else: setattr(self, name, value) boto-2.20.1/boto/roboto/000077500000000000000000000000001225267101000150025ustar00rootroot00000000000000boto-2.20.1/boto/roboto/__init__.py000066400000000000000000000000021225267101000171030ustar00rootroot00000000000000# boto-2.20.1/boto/roboto/awsqueryrequest.py000066400000000000000000000443361225267101000206550ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import sys import os import boto import optparse import copy import boto.exception import boto.roboto.awsqueryservice import bdb import traceback try: import epdb as debugger except ImportError: import pdb as debugger def boto_except_hook(debugger_flag, debug_flag): def excepthook(typ, value, tb): if typ is bdb.BdbQuit: sys.exit(1) sys.excepthook = sys.__excepthook__ if debugger_flag and sys.stdout.isatty() and sys.stdin.isatty(): if debugger.__name__ == 'epdb': debugger.post_mortem(tb, typ, value) else: debugger.post_mortem(tb) elif debug_flag: print traceback.print_tb(tb) sys.exit(1) else: print value sys.exit(1) return excepthook class Line(object): def __init__(self, fmt, data, label): self.fmt = fmt self.data = data self.label = label self.line = '%s\t' % label self.printed = False def append(self, datum): self.line += '%s\t' % datum def print_it(self): if not self.printed: print self.line self.printed = True class RequiredParamError(boto.exception.BotoClientError): def __init__(self, required): self.required = required s = 'Required parameters are missing: %s' % self.required boto.exception.BotoClientError.__init__(self, s) class EncoderError(boto.exception.BotoClientError): def __init__(self, error_msg): s = 'Error encoding value (%s)' % error_msg boto.exception.BotoClientError.__init__(self, s) class FilterError(boto.exception.BotoClientError): def __init__(self, filters): self.filters = filters s = 'Unknown filters: %s' % self.filters boto.exception.BotoClientError.__init__(self, s) class Encoder: @classmethod def encode(cls, p, rp, v, label=None): if p.name.startswith('_'): return try: mthd = getattr(cls, 'encode_'+p.ptype) mthd(p, rp, v, label) except AttributeError: raise EncoderError('Unknown type: %s' % p.ptype) @classmethod def encode_string(cls, p, rp, v, l): if l: label = l else: label = p.name rp[label] = v encode_file = encode_string encode_enum = encode_string @classmethod def encode_integer(cls, p, rp, v, l): if l: label = l else: label = p.name rp[label] = '%d' % v @classmethod def encode_boolean(cls, p, rp, v, l): if l: label = l else: label = p.name if v: v = 'true' else: v = 'false' rp[label] = v @classmethod def encode_datetime(cls, p, rp, v, l): if l: label = l else: label = p.name rp[label] = v @classmethod def encode_array(cls, p, rp, v, l): v = boto.utils.mklist(v) if l: label = l else: label = p.name label = label + '.%d' for i, value in enumerate(v): rp[label%(i+1)] = value class AWSQueryRequest(object): ServiceClass = None Description = '' Params = [] Args = [] Filters = [] Response = {} CLITypeMap = {'string' : 'string', 'integer' : 'int', 'int' : 'int', 'enum' : 'choice', 'datetime' : 'string', 'dateTime' : 'string', 'file' : 'string', 'boolean' : None} @classmethod def name(cls): return cls.__name__ def __init__(self, **args): self.args = args self.parser = None self.cli_options = None self.cli_args = None self.cli_output_format = None self.connection = None self.list_markers = [] self.item_markers = [] self.request_params = {} self.connection_args = None def __repr__(self): return self.name() def get_connection(self, **args): if self.connection is None: self.connection = self.ServiceClass(**args) return self.connection @property def status(self): retval = None if self.http_response is not None: retval = 
self.http_response.status return retval @property def reason(self): retval = None if self.http_response is not None: retval = self.http_response.reason return retval @property def request_id(self): retval = None if self.aws_response is not None: retval = getattr(self.aws_response, 'requestId') return retval def process_filters(self): filters = self.args.get('filters', []) filter_names = [f['name'] for f in self.Filters] unknown_filters = [f for f in filters if f not in filter_names] if unknown_filters: raise FilterError('Unknown filters: %s' % unknown_filters) for i, filter in enumerate(self.Filters): name = filter['name'] if name in filters: self.request_params['Filter.%d.Name' % (i+1)] = name for j, value in enumerate(boto.utils.mklist(filters[name])): Encoder.encode(filter, self.request_params, value, 'Filter.%d.Value.%d' % (i+1, j+1)) def process_args(self, **args): """ Responsible for walking through Params defined for the request and: * Matching them with keyword parameters passed to the request constructor or via the command line. * Checking to see if all required parameters have been specified and raising an exception, if not. * Encoding each value into the set of request parameters that will be sent in the request to the AWS service. """ self.args.update(args) self.connection_args = copy.copy(self.args) if 'debug' in self.args and self.args['debug'] >= 2: boto.set_stream_logger(self.name()) required = [p.name for p in self.Params+self.Args if not p.optional] for param in self.Params+self.Args: if param.long_name: python_name = param.long_name.replace('-', '_') else: python_name = boto.utils.pythonize_name(param.name, '_') value = None if python_name in self.args: value = self.args[python_name] if value is None: value = param.default if value is not None: if param.name in required: required.remove(param.name) if param.request_param: if param.encoder: param.encoder(param, self.request_params, value) else: Encoder.encode(param, self.request_params, value) if python_name in self.args: del self.connection_args[python_name] if required: l = [] for p in self.Params+self.Args: if p.name in required: if p.short_name and p.long_name: l.append('(%s, %s)' % (p.optparse_short_name, p.optparse_long_name)) elif p.short_name: l.append('(%s)' % p.optparse_short_name) else: l.append('(%s)' % p.optparse_long_name) raise RequiredParamError(','.join(l)) boto.log.debug('request_params: %s' % self.request_params) self.process_markers(self.Response) def process_markers(self, fmt, prev_name=None): if fmt and fmt['type'] == 'object': for prop in fmt['properties']: self.process_markers(prop, fmt['name']) elif fmt and fmt['type'] == 'array': self.list_markers.append(prev_name) self.item_markers.append(fmt['name']) def send(self, verb='GET', **args): self.process_args(**args) self.process_filters() conn = self.get_connection(**self.connection_args) self.http_response = conn.make_request(self.name(), self.request_params, verb=verb) self.body = self.http_response.read() boto.log.debug(self.body) if self.http_response.status == 200: self.aws_response = boto.jsonresponse.Element(list_marker=self.list_markers, item_marker=self.item_markers) h = boto.jsonresponse.XmlHandler(self.aws_response, self) h.parse(self.body) return self.aws_response else: boto.log.error('%s %s' % (self.http_response.status, self.http_response.reason)) boto.log.error('%s' % self.body) raise conn.ResponseError(self.http_response.status, self.http_response.reason, self.body) def add_standard_options(self): group = 
optparse.OptionGroup(self.parser, 'Standard Options') # add standard options that all commands get group.add_option('-D', '--debug', action='store_true', help='Turn on all debugging output') group.add_option('--debugger', action='store_true', default=False, help='Enable interactive debugger on error') group.add_option('-U', '--url', action='store', help='Override service URL with value provided') group.add_option('--region', action='store', help='Name of the region to connect to') group.add_option('-I', '--access-key-id', action='store', help='Override access key value') group.add_option('-S', '--secret-key', action='store', help='Override secret key value') group.add_option('--version', action='store_true', help='Display version string') if self.Filters: group.add_option('--help-filters', action='store_true', help='Display list of available filters') group.add_option('--filter', action='append', metavar=' name=value', help='A filter for limiting the results') self.parser.add_option_group(group) def process_standard_options(self, options, args, d): if hasattr(options, 'help_filters') and options.help_filters: print 'Available filters:' for filter in self.Filters: print '%s\t%s' % (filter.name, filter.doc) sys.exit(0) if options.debug: self.args['debug'] = 2 if options.url: self.args['url'] = options.url if options.region: self.args['region'] = options.region if options.access_key_id: self.args['aws_access_key_id'] = options.access_key_id if options.secret_key: self.args['aws_secret_access_key'] = options.secret_key if options.version: # TODO - Where should the version # come from? print 'version x.xx' exit(0) sys.excepthook = boto_except_hook(options.debugger, options.debug) def get_usage(self): s = 'usage: %prog [options] ' l = [ a.long_name for a in self.Args ] s += ' '.join(l) for a in self.Args: if a.doc: s += '\n\n\t%s - %s' % (a.long_name, a.doc) return s def build_cli_parser(self): self.parser = optparse.OptionParser(description=self.Description, usage=self.get_usage()) self.add_standard_options() for param in self.Params: ptype = action = choices = None if param.ptype in self.CLITypeMap: ptype = self.CLITypeMap[param.ptype] action = 'store' if param.ptype == 'boolean': action = 'store_true' elif param.ptype == 'array': if len(param.items) == 1: ptype = param.items[0]['type'] action = 'append' elif param.cardinality != 1: action = 'append' if ptype or action == 'store_true': if param.short_name: self.parser.add_option(param.optparse_short_name, param.optparse_long_name, action=action, type=ptype, choices=param.choices, help=param.doc) elif param.long_name: self.parser.add_option(param.optparse_long_name, action=action, type=ptype, choices=param.choices, help=param.doc) def do_cli(self): if not self.parser: self.build_cli_parser() self.cli_options, self.cli_args = self.parser.parse_args() d = {} self.process_standard_options(self.cli_options, self.cli_args, d) for param in self.Params: if param.long_name: p_name = param.long_name.replace('-', '_') else: p_name = boto.utils.pythonize_name(param.name) value = getattr(self.cli_options, p_name) if param.ptype == 'file' and value: if value == '-': value = sys.stdin.read() else: path = os.path.expanduser(value) path = os.path.expandvars(path) if os.path.isfile(path): fp = open(path) value = fp.read() fp.close() else: self.parser.error('Unable to read file: %s' % path) d[p_name] = value for arg in self.Args: if arg.long_name: p_name = arg.long_name.replace('-', '_') else: p_name = boto.utils.pythonize_name(arg.name) value = None
if arg.cardinality == 1: if len(self.cli_args) >= 1: value = self.cli_args[0] else: value = self.cli_args d[p_name] = value self.args.update(d) if hasattr(self.cli_options, 'filter') and self.cli_options.filter: d = {} for filter in self.cli_options.filter: name, value = filter.split('=') d[name] = value if 'filters' in self.args: self.args['filters'].update(d) else: self.args['filters'] = d try: response = self.main() self.cli_formatter(response) except RequiredParamError, e: print e sys.exit(1) except self.ServiceClass.ResponseError, err: print 'Error(%s): %s' % (err.error_code, err.error_message) sys.exit(1) except boto.roboto.awsqueryservice.NoCredentialsError, err: print 'Unable to find credentials.' sys.exit(1) except Exception, e: print e sys.exit(1) def _generic_cli_formatter(self, fmt, data, label=''): if fmt['type'] == 'object': for prop in fmt['properties']: if 'name' in fmt: if fmt['name'] in data: data = data[fmt['name']] if fmt['name'] in self.list_markers: label = fmt['name'] if label[-1] == 's': label = label[0:-1] label = label.upper() self._generic_cli_formatter(prop, data, label) elif fmt['type'] == 'array': for item in data: line = Line(fmt, item, label) if isinstance(item, dict): for field_name in item: line.append(item[field_name]) elif isinstance(item, basestring): line.append(item) line.print_it() def cli_formatter(self, data): """ This method is responsible for formatting the output for the command line interface. The default behavior is to call the generic CLI formatter which attempts to print something reasonable. If you want specific formatting, you should override this method and do your own thing. :type data: dict :param data: The data returned by AWS. """ if data: self._generic_cli_formatter(self.Response, data) boto-2.20.1/boto/roboto/awsqueryservice.py000066400000000000000000000105221225267101000206150ustar00rootroot00000000000000import os import urlparse import boto import boto.connection import boto.jsonresponse import boto.exception import awsqueryrequest class NoCredentialsError(boto.exception.BotoClientError): def __init__(self): s = 'Unable to find credentials' boto.exception.BotoClientError.__init__(self, s) class AWSQueryService(boto.connection.AWSQueryConnection): Name = '' Description = '' APIVersion = '' Authentication = 'sign-v2' Path = '/' Port = 443 Provider = 'aws' EnvURL = 'AWS_URL' Regions = [] def __init__(self, **args): self.args = args self.check_for_credential_file() self.check_for_env_url() if 'host' not in self.args: if self.Regions: region_name = self.args.get('region_name', self.Regions[0]['name']) for region in self.Regions: if region['name'] == region_name: self.args['host'] = region['endpoint'] if 'path' not in self.args: self.args['path'] = self.Path if 'port' not in self.args: self.args['port'] = self.Port try: boto.connection.AWSQueryConnection.__init__(self, **self.args) self.aws_response = None except boto.exception.NoAuthHandlerFound: raise NoCredentialsError() def check_for_credential_file(self): """ Checks for the existence of an AWS credential file. If the environment variable AWS_CREDENTIAL_FILE is set and points to a file, that file will be read and searched for credentials. Note that if credentials have been explicitly passed into the class constructor, those values always take precedence.
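The file is expected to contain simple ``name=value`` pairs, one per line, for example (illustrative values only)::

    AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
    AWSSecretKey=wJalrXUtnFE/K7MDENG/bPxRfiCYEXAMPLEKEY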
""" if 'AWS_CREDENTIAL_FILE' in os.environ: path = os.environ['AWS_CREDENTIAL_FILE'] path = os.path.expanduser(path) path = os.path.expandvars(path) if os.path.isfile(path): fp = open(path) lines = fp.readlines() fp.close() for line in lines: if line[0] != '#': if '=' in line: name, value = line.split('=', 1) if name.strip() == 'AWSAccessKeyId': if 'aws_access_key_id' not in self.args: value = value.strip() self.args['aws_access_key_id'] = value elif name.strip() == 'AWSSecretKey': if 'aws_secret_access_key' not in self.args: value = value.strip() self.args['aws_secret_access_key'] = value else: print 'Warning: unable to read AWS_CREDENTIAL_FILE' def check_for_env_url(self): """ First checks to see if a url argument was explicitly passed in. If so, that will be used. If not, it checks for the existence of the environment variable specified in ENV_URL. If this is set, it should contain a fully qualified URL to the service you want to use. Note that any values passed explicitly to the class constructor will take precedence. """ url = self.args.get('url', None) if url: del self.args['url'] if not url and self.EnvURL in os.environ: url = os.environ[self.EnvURL] if url: rslt = urlparse.urlparse(url) if 'is_secure' not in self.args: if rslt.scheme == 'https': self.args['is_secure'] = True else: self.args['is_secure'] = False host = rslt.netloc port = None l = host.split(':') if len(l) > 1: host = l[0] port = int(l[1]) if 'host' not in self.args: self.args['host'] = host if port and 'port' not in self.args: self.args['port'] = port if rslt.path and 'path' not in self.args: self.args['path'] = rslt.path def _required_auth_capability(self): return [self.Authentication] boto-2.20.1/boto/roboto/param.py000066400000000000000000000106651225267101000164640ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import os class Converter(object): @classmethod def convert_string(cls, param, value): # TODO: could do length validation, etc. 
here if not isinstance(value, basestring): raise ValueError return value @classmethod def convert_integer(cls, param, value): # TODO: could do range checking here return int(value) @classmethod def convert_boolean(cls, param, value): """ For command line arguments, just the presence of the option means True so just return True """ return True @classmethod def convert_file(cls, param, value): if os.path.isfile(value): return value raise ValueError @classmethod def convert_dir(cls, param, value): if os.path.isdir(value): return value raise ValueError @classmethod def convert(cls, param, value): try: if hasattr(cls, 'convert_'+param.ptype): mthd = getattr(cls, 'convert_'+param.ptype) else: mthd = cls.convert_string return mthd(param, value) except: raise ValueError('Unable to convert value (%s) for parameter %s' % (value, param.name)) class Param(object): def __init__(self, name=None, ptype='string', optional=True, short_name=None, long_name=None, doc='', metavar=None, cardinality=1, default=None, choices=None, encoder=None, request_param=True): self.name = name self.ptype = ptype self.optional = optional self.short_name = short_name self.long_name = long_name self.doc = doc self.metavar = metavar self.cardinality = cardinality self.default = default self.choices = choices self.encoder = encoder self.request_param = request_param @property def optparse_long_name(self): ln = None if self.long_name: ln = '--%s' % self.long_name return ln @property def synopsis_long_name(self): ln = None if self.long_name: ln = '--%s' % self.long_name return ln @property def getopt_long_name(self): ln = None if self.long_name: ln = '%s' % self.long_name if self.ptype != 'boolean': ln += '=' return ln @property def optparse_short_name(self): sn = None if self.short_name: sn = '-%s' % self.short_name return sn @property def synopsis_short_name(self): sn = None if self.short_name: sn = '-%s' % self.short_name return sn @property def getopt_short_name(self): sn = None if self.short_name: sn = '%s' % self.short_name if self.ptype != 'boolean': sn += ':' return sn def convert(self, value): """ Convert a string value as received in the command line tools and convert to the appropriate type of value. Raise a ValueError if the value can't be converted. :type value: str :param value: The value to convert. This should always be a string. """ return Converter.convert(self, value) boto-2.20.1/boto/route53/000077500000000000000000000000001225267101000150045ustar00rootroot00000000000000boto-2.20.1/boto/route53/__init__.py000066400000000000000000000054561225267101000171270ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # # this is here for backward compatibility # originally, the Route53Connection class was defined here from connection import Route53Connection from boto.regioninfo import RegionInfo class Route53RegionInfo(RegionInfo): def connect(self, **kw_params): """ Connect to this Region's endpoint. Returns a connection object pointing to the endpoint associated with this region. You may pass any of the arguments accepted by the connection class's constructor as keyword arguments and they will be passed along to the connection object. :rtype: Connection object :return: The connection to this region's endpoint """ if self.connection_cls: return self.connection_cls(host=self.endpoint, **kw_params) def regions(): """ Get all available regions for the Route53 service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` instances """ return [Route53RegionInfo(name='universal', endpoint='route53.amazonaws.com', connection_cls=Route53Connection) ] def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.route53.connection.Route53Connection`. :type region_name: str :param region_name: The name of the region to connect to. :rtype: :class:`boto.route53.connection.Route53Connection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/route53/connection.py000066400000000000000000000407351225267101000175260ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton # www.bluepines.org # Copyright (c) 2012 42 Lines Inc., Jim Browne # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE.
# import exception import random import urllib import uuid import xml.sax import boto from boto.connection import AWSAuthConnection from boto import handler import boto.jsonresponse from boto.route53.record import ResourceRecordSets from boto.route53.zone import Zone HZXML = """<?xml version="1.0" encoding="UTF-8"?> <CreateHostedZoneRequest xmlns="%(xmlns)s"> <Name>%(name)s</Name> <CallerReference>%(caller_ref)s</CallerReference> <HostedZoneConfig> <Comment>%(comment)s</Comment> </HostedZoneConfig> </CreateHostedZoneRequest>""" #boto.set_stream_logger('dns') class Route53Connection(AWSAuthConnection): DefaultHost = 'route53.amazonaws.com' """The default Route53 API endpoint to connect to.""" Version = '2012-02-29' """Route53 API version.""" XMLNameSpace = 'https://route53.amazonaws.com/doc/2012-02-29/' """XML schema for this Route53 API version.""" def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, port=None, proxy=None, proxy_port=None, host=DefaultHost, debug=0, security_token=None, validate_certs=True, https_connection_factory=None): AWSAuthConnection.__init__(self, host, aws_access_key_id, aws_secret_access_key, True, port, proxy, proxy_port, debug=debug, security_token=security_token, validate_certs=validate_certs, https_connection_factory=https_connection_factory) def _required_auth_capability(self): return ['route53'] def make_request(self, action, path, headers=None, data='', params=None): if params: pairs = [] for key, val in params.iteritems(): if val is None: continue pairs.append(key + '=' + urllib.quote(str(val))) path += '?' + '&'.join(pairs) return AWSAuthConnection.make_request(self, action, path, headers, data, retry_handler=self._retry_handler) # Hosted Zones def get_all_hosted_zones(self, start_marker=None, zone_list=None): """ Returns a Python data structure with information about all Hosted Zones defined for the AWS account. :param int start_marker: start marker to pass when fetching additional results after a truncated list :param list zone_list: a HostedZones list to prepend to results """ params = {} if start_marker: params = {'marker': start_marker} response = self.make_request('GET', '/%s/hostedzone' % self.Version, params=params) body = response.read() boto.log.debug(body) if response.status >= 300: raise exception.DNSServerError(response.status, response.reason, body) e = boto.jsonresponse.Element(list_marker='HostedZones', item_marker=('HostedZone',)) h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) if zone_list: e['ListHostedZonesResponse']['HostedZones'].extend(zone_list) while 'NextMarker' in e['ListHostedZonesResponse']: next_marker = e['ListHostedZonesResponse']['NextMarker'] zone_list = e['ListHostedZonesResponse']['HostedZones'] e = self.get_all_hosted_zones(next_marker, zone_list) return e def get_hosted_zone(self, hosted_zone_id): """ Get detailed information about a particular Hosted Zone. :type hosted_zone_id: str :param hosted_zone_id: The unique identifier for the Hosted Zone """ uri = '/%s/hostedzone/%s' % (self.Version, hosted_zone_id) response = self.make_request('GET', uri) body = response.read() boto.log.debug(body) if response.status >= 300: raise exception.DNSServerError(response.status, response.reason, body) e = boto.jsonresponse.Element(list_marker='NameServers', item_marker=('NameServer',)) h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e def get_hosted_zone_by_name(self, hosted_zone_name): """ Get detailed information about a particular Hosted Zone. :type hosted_zone_name: str :param hosted_zone_name: The fully qualified domain name for the Hosted Zone """ if hosted_zone_name[-1] != '.': hosted_zone_name += '.'
all_hosted_zones = self.get_all_hosted_zones() for zone in all_hosted_zones['ListHostedZonesResponse']['HostedZones']: #check that they gave us the FQDN for their zone if zone['Name'] == hosted_zone_name: return self.get_hosted_zone(zone['Id'].split('/')[-1]) def create_hosted_zone(self, domain_name, caller_ref=None, comment=''): """ Create a new Hosted Zone. Returns a Python data structure with information about the newly created Hosted Zone. :type domain_name: str :param domain_name: The name of the domain. This should be a fully-specified domain, and should end with a final period as the last label indication. If you omit the final period, Amazon Route 53 assumes the domain is relative to the root. This is the name you have registered with your DNS registrar. It is also the name you will delegate from your registrar to the Amazon Route 53 delegation servers returned in response to this request. :type caller_ref: str :param caller_ref: A unique string that identifies the request and that allows failed CreateHostedZone requests to be retried without the risk of executing the operation twice. If you don't provide a value for this, boto will generate a Type 4 UUID and use that. :type comment: str :param comment: Any comments you want to include about the hosted zone. """ if caller_ref is None: caller_ref = str(uuid.uuid4()) params = {'name': domain_name, 'caller_ref': caller_ref, 'comment': comment, 'xmlns': self.XMLNameSpace} xml_body = HZXML % params uri = '/%s/hostedzone' % self.Version response = self.make_request('POST', uri, {'Content-Type': 'text/xml'}, xml_body) body = response.read() boto.log.debug(body) if response.status == 201: e = boto.jsonresponse.Element(list_marker='NameServers', item_marker=('NameServer',)) h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e else: raise exception.DNSServerError(response.status, response.reason, body) def delete_hosted_zone(self, hosted_zone_id): uri = '/%s/hostedzone/%s' % (self.Version, hosted_zone_id) response = self.make_request('DELETE', uri) body = response.read() boto.log.debug(body) if response.status not in (200, 204): raise exception.DNSServerError(response.status, response.reason, body) e = boto.jsonresponse.Element() h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e # Resource Record Sets def get_all_rrsets(self, hosted_zone_id, type=None, name=None, identifier=None, maxitems=None): """ Retrieve the Resource Record Sets defined for this Hosted Zone. Returns the raw XML data returned by the Route53 call. :type hosted_zone_id: str :param hosted_zone_id: The unique identifier for the Hosted Zone :type type: str :param type: The type of resource record set to begin the record listing from.
Valid choices are: * A * AAAA * CNAME * MX * NS * PTR * SOA * SPF * SRV * TXT Valid values for weighted resource record sets: * A * AAAA * CNAME * TXT Valid values for Zone Apex Aliases: * A * AAAA :type name: str :param name: The first name in the lexicographic ordering of domain names to be retrieved :type identifier: str :param identifier: In a hosted zone that includes weighted resource record sets (multiple resource record sets with the same DNS name and type that are differentiated only by SetIdentifier), if results were truncated for a given DNS name and type, the value of SetIdentifier for the next resource record set that has the current DNS name and type :type maxitems: int :param maxitems: The maximum number of records """ params = {'type': type, 'name': name, 'Identifier': identifier, 'maxitems': maxitems} uri = '/%s/hostedzone/%s/rrset' % (self.Version, hosted_zone_id) response = self.make_request('GET', uri, params=params) body = response.read() boto.log.debug(body) if response.status >= 300: raise exception.DNSServerError(response.status, response.reason, body) rs = ResourceRecordSets(connection=self, hosted_zone_id=hosted_zone_id) h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs def change_rrsets(self, hosted_zone_id, xml_body): """ Create or change the authoritative DNS information for this Hosted Zone. Returns a Python data structure with information about the set of changes, including the Change ID. :type hosted_zone_id: str :param hosted_zone_id: The unique identifier for the Hosted Zone :type xml_body: str :param xml_body: The list of changes to be made, defined in the XML schema defined by the Route53 service. """ uri = '/%s/hostedzone/%s/rrset' % (self.Version, hosted_zone_id) response = self.make_request('POST', uri, {'Content-Type': 'text/xml'}, xml_body) body = response.read() boto.log.debug(body) if response.status >= 300: raise exception.DNSServerError(response.status, response.reason, body) e = boto.jsonresponse.Element() h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e def get_change(self, change_id): """ Get information about a proposed set of changes, as submitted by the change_rrsets method. Returns a Python data structure with status information about the changes. :type change_id: str :param change_id: The unique identifier for the set of changes. This ID is returned in the response to the change_rrsets method. """ uri = '/%s/change/%s' % (self.Version, change_id) response = self.make_request('GET', uri) body = response.read() boto.log.debug(body) if response.status >= 300: raise exception.DNSServerError(response.status, response.reason, body) e = boto.jsonresponse.Element() h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e def create_zone(self, name): """ Create a new Hosted Zone. Returns a Zone object for the newly created Hosted Zone. :type name: str :param name: The name of the domain. This should be a fully-specified domain, and should end with a final period as the last label indication. If you omit the final period, Amazon Route 53 assumes the domain is relative to the root. This is the name you have registered with your DNS registrar. It is also the name you will delegate from your registrar to the Amazon Route 53 delegation servers returned in response to this request. """ zone = self.create_hosted_zone(name) return Zone(self, zone['CreateHostedZoneResponse']['HostedZone']) def get_zone(self, name): """ Returns a Zone object for the specified Hosted Zone. :param name: The name of the domain. 
This should be a fully-specified domain, and should end with a final period as the last label indication. """ name = self._make_qualified(name) for zone in self.get_zones(): if name == zone.name: return zone def get_zones(self): """ Returns a list of Zone objects, one for each of the Hosted Zones defined for the AWS account. """ zones = self.get_all_hosted_zones() return [Zone(self, zone) for zone in zones['ListHostedZonesResponse']['HostedZones']] def _make_qualified(self, value): """ Ensure passed domain names end in a period (.) character. This will usually make a domain fully qualified. """ if type(value) in [list, tuple, set]: new_list = [] for record in value: if record and not record[-1] == '.': new_list.append("%s." % record) else: new_list.append(record) return new_list else: value = value.strip() if value and not value[-1] == '.': value = "%s." % value return value def _retry_handler(self, response, i, next_sleep): status = None boto.log.debug("Saw HTTP status: %s" % response.status) if response.status == 400: code = response.getheader('Code') if code and 'PriorRequestNotComplete' in code: # This is a case where we need to ignore a 400 error, as # Route53 returns this. See # http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html msg = "%s, retry attempt %s" % ( 'PriorRequestNotComplete', i ) next_sleep = random.random() * (2 ** i) i += 1 status = (msg, i, next_sleep) return status boto-2.20.1/boto/route53/exception.py000066400000000000000000000023301225267101000173520ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.exception import BotoServerError class DNSServerError(BotoServerError): pass boto-2.20.1/boto/route53/hostedzone.py000066400000000000000000000041141225267101000175400ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class HostedZone(object): def __init__(self, id=None, name=None, owner=None, version=None, caller_reference=None, config=None): self.id = id self.name = name self.owner = owner self.version = version self.caller_reference = caller_reference self.config = config def startElement(self, name, attrs, connection): if name == 'Config': self.config = Config() return self.config else: return None def endElement(self, name, value, connection): if name == 'Id': self.id = value elif name == 'Name': self.name = value elif name == 'Owner': self.owner = value elif name == 'Version': self.version = value elif name == 'CallerReference': self.caller_reference = value else: setattr(self, name, value) boto-2.20.1/boto/route53/record.py000066400000000000000000000260321225267101000166370ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. RECORD_TYPES = ['A', 'AAAA', 'TXT', 'CNAME', 'MX', 'PTR', 'SRV', 'SPF'] from boto.resultset import ResultSet class ResourceRecordSets(ResultSet): """ A list of resource records. :ivar hosted_zone_id: The ID of the hosted zone. :ivar comment: A comment that will be stored with the change. :ivar changes: A list of changes. 
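A typical round trip looks like the following sketch (``conn`` and ``hosted_zone_id`` are assumed to exist already)::

    changes = ResourceRecordSets(conn, hosted_zone_id, comment='add www')
    change = changes.add_change('CREATE', 'www.example.com.', 'A', ttl=60)
    change.add_value('192.0.2.1')
    changes.commit()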
""" ChangeResourceRecordSetsBody = """ %(comment)s %(changes)s """ ChangeXML = """ %(action)s %(record)s """ def __init__(self, connection=None, hosted_zone_id=None, comment=None): self.connection = connection self.hosted_zone_id = hosted_zone_id self.comment = comment self.changes = [] self.next_record_name = None self.next_record_type = None ResultSet.__init__(self, [('ResourceRecordSet', Record)]) def __repr__(self): if self.changes: record_list = ','.join([c.__repr__() for c in self.changes]) else: record_list = ','.join([record.__repr__() for record in self]) return ' %(name)s %(type)s %(weight)s %(body)s """ WRRBody = """ %(identifier)s %(weight)s """ RRRBody = """ %(identifier)s %(region)s """ ResourceRecordsBody = """ %(ttl)s %(records)s """ ResourceRecordBody = """ %s """ AliasBody = """ %s %s """ def __init__(self, name=None, type=None, ttl=600, resource_records=None, alias_hosted_zone_id=None, alias_dns_name=None, identifier=None, weight=None, region=None): self.name = name self.type = type self.ttl = ttl if resource_records == None: resource_records = [] self.resource_records = resource_records self.alias_hosted_zone_id = alias_hosted_zone_id self.alias_dns_name = alias_dns_name self.identifier = identifier self.weight = weight self.region = region def __repr__(self): return '' % (self.name, self.type, self.to_print()) def add_value(self, value): """Add a resource record value""" self.resource_records.append(value) def set_alias(self, alias_hosted_zone_id, alias_dns_name): """Make this an alias resource record set""" self.alias_hosted_zone_id = alias_hosted_zone_id self.alias_dns_name = alias_dns_name def to_xml(self): """Spit this resource record set out as XML""" if self.alias_hosted_zone_id != None and self.alias_dns_name != None: # Use alias body = self.AliasBody % (self.alias_hosted_zone_id, self.alias_dns_name) else: # Use resource record(s) records = "" for r in self.resource_records: records += self.ResourceRecordBody % r body = self.ResourceRecordsBody % { "ttl": self.ttl, "records": records, } weight = "" if self.identifier != None and self.weight != None: weight = self.WRRBody % {"identifier": self.identifier, "weight": self.weight} elif self.identifier != None and self.region != None: weight = self.RRRBody % {"identifier": self.identifier, "region": self.region} params = { "name": self.name, "type": self.type, "weight": weight, "body": body, } return self.XMLBody % params def to_print(self): rr = "" if self.alias_hosted_zone_id != None and self.alias_dns_name != None: # Show alias rr = 'ALIAS ' + self.alias_hosted_zone_id + ' ' + self.alias_dns_name else: # Show resource record(s) rr = ",".join(self.resource_records) if self.identifier != None and self.weight != None: rr += ' (WRR id=%s, w=%s)' % (self.identifier, self.weight) elif self.identifier != None and self.region != None: rr += ' (LBR id=%s, region=%s)' % (self.identifier, self.region) return rr def endElement(self, name, value, connection): if name == 'Name': self.name = value elif name == 'Type': self.type = value elif name == 'TTL': self.ttl = value elif name == 'Value': self.resource_records.append(value) elif name == 'HostedZoneId': self.alias_hosted_zone_id = value elif name == 'DNSName': self.alias_dns_name = value elif name == 'SetIdentifier': self.identifier = value elif name == 'Weight': self.weight = value elif name == 'Region': self.region = value def startElement(self, name, attrs, connection): return None 
boto-2.20.1/boto/route53/status.py000066400000000000000000000034611225267101000167050ustar00rootroot00000000000000# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton # www.bluepines.org # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Status(object): def __init__(self, route53connection, change_dict): self.route53connection = route53connection for key in change_dict: if key == 'Id': self.__setattr__(key.lower(), change_dict[key].replace('/change/', '')) else: self.__setattr__(key.lower(), change_dict[key]) def update(self): """ Update the status of this request.""" status = self.route53connection.get_change(self.id)['GetChangeResponse']['ChangeInfo']['Status'] self.status = status return status def __repr__(self): return '<Status:%s>' % self.status boto-2.20.1/boto/route53/zone.py000066400000000000000000000371201225267101000163340ustar00rootroot00000000000000# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton # www.bluepines.org # Copyright (c) 2012 42 Lines Inc., Jim Browne # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. default_ttl = 60 import copy from boto.exception import TooManyRecordsException from boto.route53.record import ResourceRecordSets from boto.route53.status import Status class Zone(object): """ A Route53 Zone. :ivar Route53Connection route53connection :ivar str Id: The ID of the hosted zone.
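Zone instances are normally obtained from a :class:`Route53Connection` rather than built by hand, e.g. (a sketch): ``zone = conn.get_zone('example.com.')``.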
""" def __init__(self, route53connection, zone_dict): self.route53connection = route53connection for key in zone_dict: if key == 'Id': self.id = zone_dict['Id'].replace('/hostedzone/', '') else: self.__setattr__(key.lower(), zone_dict[key]) def __repr__(self): return '' % self.name def _commit(self, changes): """ Commit a set of changes and return the ChangeInfo portion of the response. :type changes: ResourceRecordSets :param changes: changes to be committed """ response = changes.commit() return response['ChangeResourceRecordSetsResponse']['ChangeInfo'] def _new_record(self, changes, resource_type, name, value, ttl, identifier, comment=""): """ Add a CREATE change record to an existing ResourceRecordSets :type changes: ResourceRecordSets :param changes: change set to append to :type name: str :param name: The name of the resource record you want to perform the action on. :type resource_type: str :param resource_type: The DNS record type :param value: Appropriate value for resource_type :type ttl: int :param ttl: The resource record cache time to live (TTL), in seconds. :type identifier: tuple :param identifier: A tuple for setting WRR or LBR attributes. Valid forms are: * (str, int): WRR record [e.g. ('foo',10)] * (str, str): LBR record [e.g. ('foo','us-east-1') :type comment: str :param comment: A comment that will be stored with the change. """ weight = None region = None if identifier is not None: try: int(identifier[1]) weight = identifier[1] identifier = identifier[0] except: region = identifier[1] identifier = identifier[0] change = changes.add_change("CREATE", name, resource_type, ttl, identifier=identifier, weight=weight, region=region) if type(value) in [list, tuple, set]: for record in value: change.add_value(record) else: change.add_value(value) def add_record(self, resource_type, name, value, ttl=60, identifier=None, comment=""): """ Add a new record to this Zone. See _new_record for parameter documentation. Returns a Status object. """ changes = ResourceRecordSets(self.route53connection, self.id, comment) self._new_record(changes, resource_type, name, value, ttl, identifier, comment) return Status(self.route53connection, self._commit(changes)) def update_record(self, old_record, new_value, new_ttl=None, new_identifier=None, comment=""): """ Update an existing record in this Zone. Returns a Status object. :type old_record: ResourceRecord :param old_record: A ResourceRecord (e.g. returned by find_records) See _new_record for additional parameter documentation. """ new_ttl = new_ttl or default_ttl record = copy.copy(old_record) changes = ResourceRecordSets(self.route53connection, self.id, comment) changes.add_change_record("DELETE", record) self._new_record(changes, record.type, record.name, new_value, new_ttl, new_identifier, comment) return Status(self.route53connection, self._commit(changes)) def delete_record(self, record, comment=""): """ Delete one or more records from this Zone. Returns a Status object. :param record: A ResourceRecord (e.g. returned by find_records) or list, tuple, or set of ResourceRecords. :type comment: str :param comment: A comment that will be stored with the change. 
""" changes = ResourceRecordSets(self.route53connection, self.id, comment) if type(record) in [list, tuple, set]: for r in record: changes.add_change_record("DELETE", r) else: changes.add_change_record("DELETE", record) return Status(self.route53connection, self._commit(changes)) def add_cname(self, name, value, ttl=None, identifier=None, comment=""): """ Add a new CNAME record to this Zone. See _new_record for parameter documentation. Returns a Status object. """ ttl = ttl or default_ttl name = self.route53connection._make_qualified(name) value = self.route53connection._make_qualified(value) return self.add_record(resource_type='CNAME', name=name, value=value, ttl=ttl, identifier=identifier, comment=comment) def add_a(self, name, value, ttl=None, identifier=None, comment=""): """ Add a new A record to this Zone. See _new_record for parameter documentation. Returns a Status object. """ ttl = ttl or default_ttl name = self.route53connection._make_qualified(name) return self.add_record(resource_type='A', name=name, value=value, ttl=ttl, identifier=identifier, comment=comment) def add_mx(self, name, records, ttl=None, identifier=None, comment=""): """ Add a new MX record to this Zone. See _new_record for parameter documentation. Returns a Status object. """ ttl = ttl or default_ttl records = self.route53connection._make_qualified(records) return self.add_record(resource_type='MX', name=name, value=records, ttl=ttl, identifier=identifier, comment=comment) def find_records(self, name, type, desired=1, all=False, identifier=None): """ Search this Zone for records that match given parameters. Returns None if no results, a ResourceRecord if one result, or a ResourceRecordSets if more than one result. :type name: str :param name: The name of the records should match this parameter :type type: str :param type: The type of the records should match this parameter :type desired: int :param desired: The number of desired results. If the number of matching records in the Zone exceeds the value of this parameter, throw TooManyRecordsException :type all: Boolean :param all: If true return all records that match name, type, and identifier parameters :type identifier: Tuple :param identifier: A tuple specifying WRR or LBR attributes. Valid forms are: * (str, int): WRR record [e.g. ('foo',10)] * (str, str): LBR record [e.g. ('foo','us-east-1') """ name = self.route53connection._make_qualified(name) returned = self.route53connection.get_all_rrsets(self.id, name=name, type=type) # name/type for get_all_rrsets sets the starting record; they # are not a filter results = [r for r in returned if r.name == name and r.type == type] weight = None region = None if identifier is not None: try: int(identifier[1]) weight = identifier[1] except: region = identifier[1] if weight is not None: results = [r for r in results if (r.weight == weight and r.identifier == identifier[0])] if region is not None: results = [r for r in results if (r.region == region and r.identifier == identifier[0])] if ((not all) and (len(results) > desired)): message = "Search: name %s type %s" % (name, type) message += "\nFound: " message += ", ".join(["%s %s %s" % (r.name, r.type, r.to_print()) for r in results]) raise TooManyRecordsException(message) elif len(results) > 1: return results elif len(results) == 1: return results[0] else: return None def get_cname(self, name, all=False): """ Search this Zone for CNAME records that match name. Returns a ResourceRecord. 
If there is more than one match return all as a ResourceRecordSets if all is True, otherwise throws TooManyRecordsException. """ return self.find_records(name, 'CNAME', all=all) def get_a(self, name, all=False): """ Search this Zone for A records that match name. Returns a ResourceRecord. If there is more than one match return all as a ResourceRecordSets if all is True, otherwise throws TooManyRecordsException. """ return self.find_records(name, 'A', all=all) def get_mx(self, name, all=False): """ Search this Zone for MX records that match name. Returns a ResourceRecord. If there is more than one match return all as a ResourceRecordSets if all is True, otherwise throws TooManyRecordsException. """ return self.find_records(name, 'MX', all=all) def update_cname(self, name, value, ttl=None, identifier=None, comment=""): """ Update the given CNAME record in this Zone to a new value, ttl, and identifier. Returns a Status object. Will throw TooManyRecordsException if name, value does not match a single record. """ name = self.route53connection._make_qualified(name) value = self.route53connection._make_qualified(value) old_record = self.get_cname(name) ttl = ttl or old_record.ttl return self.update_record(old_record, new_value=value, new_ttl=ttl, new_identifier=identifier, comment=comment) def update_a(self, name, value, ttl=None, identifier=None, comment=""): """ Update the given A record in this Zone to a new value, ttl, and identifier. Returns a Status object. Will throw TooManyRecordsException if name, value does not match a single record. """ name = self.route53connection._make_qualified(name) old_record = self.get_a(name) ttl = ttl or old_record.ttl return self.update_record(old_record, new_value=value, new_ttl=ttl, new_identifier=identifier, comment=comment) def update_mx(self, name, value, ttl=None, identifier=None, comment=""): """ Update the given MX record in this Zone to a new value, ttl, and identifier. Returns a Status object. Will throw TooManyRecordsException if name, value does not match a single record. """ name = self.route53connection._make_qualified(name) value = self.route53connection._make_qualified(value) old_record = self.get_mx(name) ttl = ttl or old_record.ttl return self.update_record(old_record, new_value=value, new_ttl=ttl, new_identifier=identifier, comment=comment) def delete_cname(self, name, identifier=None, all=False): """ Delete a CNAME record matching name and identifier from this Zone. Returns a Status object. If there is more than one match delete all matching records if all is True, otherwise throws TooManyRecordsException. """ name = self.route53connection._make_qualified(name) record = self.find_records(name, 'CNAME', identifier=identifier, all=all) return self.delete_record(record) def delete_a(self, name, identifier=None, all=False): """ Delete an A record matching name and identifier from this Zone. Returns a Status object. If there is more than one match delete all matching records if all is True, otherwise throws TooManyRecordsException. """ name = self.route53connection._make_qualified(name) record = self.find_records(name, 'A', identifier=identifier, all=all) return self.delete_record(record) def delete_mx(self, name, identifier=None, all=False): """ Delete an MX record matching name and identifier from this Zone. Returns a Status object. If there is more than one match delete all matching records if all is True, otherwise throws TooManyRecordsException.
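For example (a sketch), ``zone.delete_mx('example.com.')`` removes the single MX record set for that name.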
""" name = self.route53connection._make_qualified(name) record = self.find_records(name, 'MX', identifier=identifier, all=all) return self.delete_record(record) def get_records(self): """ Return a ResourceRecordsSets for all of the records in this zone. """ return self.route53connection.get_all_rrsets(self.id) def delete(self): """ Request that this zone be deleted by Amazon. """ self.route53connection.delete_hosted_zone(self.id) def get_nameservers(self): """ Get the list of nameservers for this zone.""" ns = self.find_records(self.name, 'NS') if ns is not None: ns = ns.resource_records return ns boto-2.20.1/boto/s3/000077500000000000000000000000001225267101000140235ustar00rootroot00000000000000boto-2.20.1/boto/s3/__init__.py000066400000000000000000000071741225267101000161450ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo class S3RegionInfo(RegionInfo): def connect(self, **kw_params): """ Connect to this Region's endpoint. Returns an connection object pointing to the endpoint associated with this region. You may pass any of the arguments accepted by the connection class's constructor as keyword arguments and they will be passed along to the connection object. :rtype: Connection object :return: The connection to this regions endpoint """ if self.connection_cls: return self.connection_cls(host=self.endpoint, **kw_params) def regions(): """ Get all available regions for the Amazon S3 service. 
:rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from .connection import S3Connection return [S3RegionInfo(name='us-east-1', endpoint='s3.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='us-gov-west-1', endpoint='s3-us-gov-west-1.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='us-west-1', endpoint='s3-us-west-1.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='us-west-2', endpoint='s3-us-west-2.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='ap-northeast-1', endpoint='s3-ap-northeast-1.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='ap-southeast-1', endpoint='s3-ap-southeast-1.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='ap-southeast-2', endpoint='s3-ap-southeast-2.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='eu-west-1', endpoint='s3-eu-west-1.amazonaws.com', connection_cls=S3Connection), S3RegionInfo(name='sa-east-1', endpoint='s3-sa-east-1.amazonaws.com', connection_cls=S3Connection), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/s3/acl.py000066400000000000000000000124771225267101000151470ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
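
# A minimal usage sketch (illustrative names; assumes ``bucket`` was
# obtained from an authenticated S3 connection).  The Policy/ACL/Grant
# classes below are normally manipulated through Bucket.get_acl() and
# Bucket.set_acl() rather than constructed directly:
#
#     policy = bucket.get_acl()
#     policy.acl.add_email_grant('READ', 'user@example.com')
#     bucket.set_acl(policy)
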
from boto.s3.user import User


CannedACLStrings = ['private', 'public-read',
                    'public-read-write', 'authenticated-read',
                    'bucket-owner-read', 'bucket-owner-full-control',
                    'log-delivery-write']


class Policy:

    def __init__(self, parent=None):
        self.parent = parent
        self.acl = None

    def __repr__(self):
        grants = []
        for g in self.acl.grants:
            if g.id == self.owner.id:
                grants.append("%s (owner) = %s" % (g.display_name,
                                                   g.permission))
            else:
                if g.type == 'CanonicalUser':
                    u = g.display_name
                elif g.type == 'Group':
                    u = g.uri
                else:
                    u = g.email_address
                grants.append("%s = %s" % (u, g.permission))
        return "<Policy: %s>" % ", ".join(grants)

    def startElement(self, name, attrs, connection):
        if name == 'Owner':
            self.owner = User(self)
            return self.owner
        elif name == 'AccessControlList':
            self.acl = ACL(self)
            return self.acl
        else:
            return None

    def endElement(self, name, value, connection):
        if name == 'Owner':
            pass
        elif name == 'AccessControlList':
            pass
        else:
            setattr(self, name, value)

    def to_xml(self):
        s = '<AccessControlPolicy>'
        s += self.owner.to_xml()
        s += self.acl.to_xml()
        s += '</AccessControlPolicy>'
        return s


class ACL:

    def __init__(self, policy=None):
        self.policy = policy
        self.grants = []

    def add_grant(self, grant):
        self.grants.append(grant)

    def add_email_grant(self, permission, email_address):
        grant = Grant(permission=permission, type='AmazonCustomerByEmail',
                      email_address=email_address)
        self.grants.append(grant)

    def add_user_grant(self, permission, user_id, display_name=None):
        grant = Grant(permission=permission, type='CanonicalUser', id=user_id,
                      display_name=display_name)
        self.grants.append(grant)

    def startElement(self, name, attrs, connection):
        if name == 'Grant':
            self.grants.append(Grant(self))
            return self.grants[-1]
        else:
            return None

    def endElement(self, name, value, connection):
        if name == 'Grant':
            pass
        else:
            setattr(self, name, value)

    def to_xml(self):
        s = '<AccessControlList>'
        for grant in self.grants:
            s += grant.to_xml()
        s += '</AccessControlList>'
        return s


class Grant:

    NameSpace = 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'

    def __init__(self, permission=None, type=None, id=None,
                 display_name=None, uri=None, email_address=None):
        self.permission = permission
        self.id = id
        self.display_name = display_name
        self.uri = uri
        self.email_address = email_address
        self.type = type

    def startElement(self, name, attrs, connection):
        if name == 'Grantee':
            self.type = attrs['xsi:type']
        return None

    def endElement(self, name, value, connection):
        if name == 'ID':
            self.id = value
        elif name == 'DisplayName':
            self.display_name = value
        elif name == 'URI':
            self.uri = value
        elif name == 'EmailAddress':
            self.email_address = value
        elif name == 'Grantee':
            pass
        elif name == 'Permission':
            self.permission = value
        else:
            setattr(self, name, value)

    def to_xml(self):
        s = '<Grant>'
        s += '<Grantee %s xsi:type="%s">' % (self.NameSpace, self.type)
        if self.type == 'CanonicalUser':
            s += '<ID>%s</ID>' % self.id
            s += '<DisplayName>%s</DisplayName>' % self.display_name
        elif self.type == 'Group':
            s += '<URI>%s</URI>' % self.uri
        else:
            s += '<EmailAddress>%s</EmailAddress>' % self.email_address
        s += '</Grantee>'
        s += '<Permission>%s</Permission>' % self.permission
        s += '</Grant>'
        return s
boto-2.20.1/boto/s3/bucket.py000066400000000000000000002142421225267101000156570ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2010, Eucalyptus Systems, Inc.
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

import boto
from boto import handler
from boto.resultset import ResultSet
from boto.exception import BotoClientError
from boto.s3.acl import Policy, CannedACLStrings, Grant
from boto.s3.key import Key
from boto.s3.prefix import Prefix
from boto.s3.deletemarker import DeleteMarker
from boto.s3.multipart import MultiPartUpload
from boto.s3.multipart import CompleteMultiPartUpload
from boto.s3.multidelete import MultiDeleteResult
from boto.s3.multidelete import Error
from boto.s3.bucketlistresultset import BucketListResultSet
from boto.s3.bucketlistresultset import VersionedBucketListResultSet
from boto.s3.bucketlistresultset import MultiPartUploadListResultSet
from boto.s3.lifecycle import Lifecycle
from boto.s3.tagging import Tags
from boto.s3.cors import CORSConfiguration
from boto.s3.bucketlogging import BucketLogging
from boto.s3 import website
import boto.jsonresponse
import boto.utils
import xml.sax
import xml.sax.saxutils
import StringIO
import urllib
import re
import base64
from collections import defaultdict


# as per http://goo.gl/BDuud (02/19/2011)
class S3WebsiteEndpointTranslate:

    trans_region = defaultdict(lambda: 's3-website-us-east-1')
    trans_region['eu-west-1'] = 's3-website-eu-west-1'
    trans_region['us-west-1'] = 's3-website-us-west-1'
    trans_region['us-west-2'] = 's3-website-us-west-2'
    trans_region['sa-east-1'] = 's3-website-sa-east-1'
    trans_region['ap-northeast-1'] = 's3-website-ap-northeast-1'
    trans_region['ap-southeast-1'] = 's3-website-ap-southeast-1'
    trans_region['ap-southeast-2'] = 's3-website-ap-southeast-2'

    @classmethod
    def translate_region(self, reg):
        return self.trans_region[reg]

S3Permissions = ['READ', 'WRITE', 'READ_ACP', 'WRITE_ACP', 'FULL_CONTROL']


class Bucket(object):

    LoggingGroup = 'http://acs.amazonaws.com/groups/s3/LogDelivery'

    BucketPaymentBody = """<?xml version="1.0" encoding="UTF-8"?>
       <RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
         <Payer>%s</Payer>
       </RequestPaymentConfiguration>"""

    VersioningBody = """<?xml version="1.0" encoding="UTF-8"?>
       <VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
         <Status>%s</Status>
         <MfaDelete>%s</MfaDelete>
       </VersioningConfiguration>"""

    VersionRE = '<Status>([A-Za-z]+)</Status>'
    MFADeleteRE = '<MfaDelete>([A-Za-z]+)</MfaDelete>'

    def __init__(self, connection=None, name=None, key_class=Key):
        self.name = name
        self.connection = connection
        self.key_class = key_class

    def __repr__(self):
        return '<Bucket: %s>' % self.name

    def __iter__(self):
        return iter(BucketListResultSet(self))

    def __contains__(self, key_name):
        return not (self.get_key(key_name) is None)

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Name':
            self.name = value
        elif name == 'CreationDate':
            self.creation_date = value
        else:
            setattr(self, name, value)

    def set_key_class(self, key_class):
        """
        Set the Key class associated with this bucket.  By default, this
        would be the boto.s3.key.Key class but if you want to subclass that
        for some reason this allows you to associate your new class with a
        bucket so that when you call bucket.new_key() or when you get a
        listing of keys in the bucket you will get an instance of your key
        class rather than the default.

        :type key_class: class
        :param key_class: A subclass of Key that can be more specific
        """
        self.key_class = key_class

    def lookup(self, key_name, headers=None):
        """
        Deprecated: Please use get_key method.

        :type key_name: string
        :param key_name: The name of the key to retrieve

        :rtype: :class:`boto.s3.key.Key`
        :returns: A Key object from this bucket.
        """
        return self.get_key(key_name, headers=headers)

    def get_key(self, key_name, headers=None, version_id=None,
                response_headers=None):
        """
        Check to see if a particular key exists within the bucket.  This
        method uses a HEAD request to check for the existence of the key.
        Returns: An instance of a Key object or None

        :type key_name: string
        :param key_name: The name of the key to retrieve

        :type response_headers: dict
        :param response_headers: A dictionary containing HTTP
            headers/values that will override any headers associated
            with the stored object in the response.  See
            http://goo.gl/EWOPb for details.

        :rtype: :class:`boto.s3.key.Key`
        :returns: A Key object from this bucket.
        """
        query_args_l = []
        if version_id:
            query_args_l.append('versionId=%s' % version_id)
        if response_headers:
            for rk, rv in response_headers.iteritems():
                query_args_l.append('%s=%s' % (rk, urllib.quote(rv)))

        key, resp = self._get_key_internal(key_name, headers, query_args_l)
        return key

    def _get_key_internal(self, key_name, headers, query_args_l):
        query_args = '&'.join(query_args_l) or None
        response = self.connection.make_request('HEAD', self.name, key_name,
                                                headers=headers,
                                                query_args=query_args)
        response.read()
        # Allow any success status (2xx) - for example this lets us
        # support Range gets, which return status 206:
        if response.status / 100 == 2:
            k = self.key_class(self)
            provider = self.connection.provider
            k.metadata = boto.utils.get_aws_metadata(response.msg, provider)
            k.etag = response.getheader('etag')
            k.content_type = response.getheader('content-type')
            k.content_encoding = response.getheader('content-encoding')
            k.content_disposition = response.getheader('content-disposition')
            k.content_language = response.getheader('content-language')
            k.last_modified = response.getheader('last-modified')
            # the following machinations are a workaround to the fact that
            # apache/fastcgi omits the content-length header on HEAD
            # requests when the content-length is zero.
            # See http://goo.gl/0Tdax for more details.
            clen = response.getheader('content-length')
            if clen:
                k.size = int(response.getheader('content-length'))
            else:
                k.size = 0
            k.cache_control = response.getheader('cache-control')
            k.name = key_name
            k.handle_version_headers(response)
            k.handle_encryption_headers(response)
            k.handle_restore_headers(response)
            k.handle_addl_headers(response.getheaders())
            return k, response
        else:
            if response.status == 404:
                return None, response
            else:
                raise self.connection.provider.storage_response_error(
                    response.status, response.reason, '')

    def list(self, prefix='', delimiter='', marker='', headers=None):
        """
        List key objects within a bucket.  This returns an instance of a
        BucketListResultSet that automatically handles all of the result
        paging, etc. from S3.  You just need to keep iterating until
        there are no more results.
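
        A minimal iteration sketch (assumes ``conn`` is an authenticated
        S3 connection and that the bucket and prefix names are
        illustrative)::

            import boto
            conn = boto.connect_s3()
            bucket = conn.get_bucket('mybucket')
            for key in bucket.list(prefix='photos/', delimiter='/'):
                print key.name
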
        Called with no arguments, this will return an iterator object across
        all keys within the bucket.

        The Key objects returned by the iterator are obtained by parsing
        the results of a GET on the bucket, also known as the List Objects
        request.  The XML returned by this request contains only a subset
        of the information about each key.  Certain metadata fields such
        as Content-Type and user metadata are not available in the XML.
        Therefore, if you want these additional metadata fields you will
        have to do a HEAD request on the Key in the bucket.

        :type prefix: string
        :param prefix: allows you to limit the listing to a particular
            prefix.  For example, if you call the method with
            prefix='/foo/' then the iterator will only cycle through
            the keys that begin with the string '/foo/'.

        :type delimiter: string
        :param delimiter: can be used in conjunction with the prefix
            to allow you to organize and browse your keys
            hierarchically. See http://goo.gl/Xx63h for more details.

        :type marker: string
        :param marker: The "marker" of where you are in the result set

        :rtype: :class:`boto.s3.bucketlistresultset.BucketListResultSet`
        :return: an instance of a BucketListResultSet that handles paging, etc
        """
        return BucketListResultSet(self, prefix, delimiter, marker,
                                   headers)

    def list_versions(self, prefix='', delimiter='', key_marker='',
                      version_id_marker='', headers=None):
        """
        List version objects within a bucket.  This returns an
        instance of a VersionedBucketListResultSet that automatically
        handles all of the result paging, etc. from S3.  You just need
        to keep iterating until there are no more results.

        Called with no arguments, this will return an iterator object across
        all keys within the bucket.

        :type prefix: string
        :param prefix: allows you to limit the listing to a particular
            prefix.  For example, if you call the method with
            prefix='/foo/' then the iterator will only cycle through
            the keys that begin with the string '/foo/'.

        :type delimiter: string
        :param delimiter: can be used in conjunction with the prefix
            to allow you to organize and browse your keys
            hierarchically. See:
            http://aws.amazon.com/releasenotes/Amazon-S3/213
            for more details.

        :type marker: string
        :param marker: The "marker" of where you are in the result set

        :rtype:
            :class:`boto.s3.bucketlistresultset.VersionedBucketListResultSet`
        :return: an instance of a VersionedBucketListResultSet that handles
            paging, etc
        """
        return VersionedBucketListResultSet(self, prefix, delimiter,
                                            key_marker, version_id_marker,
                                            headers)

    def list_multipart_uploads(self, key_marker='',
                               upload_id_marker='',
                               headers=None):
        """
        List multipart upload objects within a bucket.  This returns an
        instance of a MultiPartUploadListResultSet that automatically
        handles all of the result paging, etc. from S3.  You just need
        to keep iterating until there are no more results.
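
        A minimal sketch that lists in-progress uploads (assumes
        ``bucket`` is an existing Bucket; aborting an upload is shown
        commented out since it is destructive)::

            for upload in bucket.list_multipart_uploads():
                print upload.key_name, upload.id
                # bucket.cancel_multipart_upload(upload.key_name, upload.id)
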
        :type marker: string
        :param marker: The "marker" of where you are in the result set

        :rtype:
            :class:`boto.s3.bucketlistresultset.MultiPartUploadListResultSet`
        :return: an instance of a MultiPartUploadListResultSet that handles
            paging, etc
        """
        return MultiPartUploadListResultSet(self, key_marker,
                                            upload_id_marker,
                                            headers)

    def _get_all_query_args(self, params, initial_query_string=''):
        pairs = []

        if initial_query_string:
            pairs.append(initial_query_string)

        for key, value in params.items():
            key = key.replace('_', '-')
            if key == 'maxkeys':
                key = 'max-keys'
            if isinstance(value, unicode):
                value = value.encode('utf-8')
            if value is not None and value != '':
                pairs.append('%s=%s' % (
                    urllib.quote(key),
                    urllib.quote(str(value))
                ))

        return '&'.join(pairs)

    def _get_all(self, element_map, initial_query_string='',
                 headers=None, **params):
        query_args = self._get_all_query_args(
            params,
            initial_query_string=initial_query_string
        )
        response = self.connection.make_request('GET', self.name,
                                                headers=headers,
                                                query_args=query_args)
        body = response.read()
        boto.log.debug(body)

        if response.status == 200:
            rs = ResultSet(element_map)
            h = handler.XmlHandler(rs, self)
            xml.sax.parseString(body, h)
            return rs
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def validate_kwarg_names(self, kwargs, names):
        """
        Checks that all named arguments are in the specified list of names.

        :type kwargs: dict
        :param kwargs: Dictionary of kwargs to validate.

        :type names: list
        :param names: List of possible named arguments.
        """
        for kwarg in kwargs:
            if kwarg not in names:
                raise TypeError('Invalid argument "%s"!' % kwarg)

    def get_all_keys(self, headers=None, **params):
        """
        A lower-level method for listing contents of a bucket.  This
        closely models the actual S3 API and requires you to manually
        handle the paging of results.  For a higher-level method that
        handles the details of paging for you, you can use the list
        method.

        :type max_keys: int
        :param max_keys: The maximum number of keys to retrieve

        :type prefix: string
        :param prefix: The prefix of the keys you want to retrieve

        :type marker: string
        :param marker: The "marker" of where you are in the result set

        :type delimiter: string
        :param delimiter: If this optional, Unicode string parameter
            is included with your request, then keys that contain the
            same string between the prefix and the first occurrence of
            the delimiter will be rolled up into a single result
            element in the CommonPrefixes collection. These rolled-up
            keys are not returned elsewhere in the response.

        :rtype: ResultSet
        :return: The result from S3 listing the keys requested
        """
        self.validate_kwarg_names(params, ['maxkeys', 'max_keys', 'prefix',
                                           'marker', 'delimiter'])
        return self._get_all([('Contents', self.key_class),
                              ('CommonPrefixes', Prefix)],
                             '', headers, **params)

    def get_all_versions(self, headers=None, **params):
        """
        A lower-level, version-aware method for listing contents of a
        bucket.  This closely models the actual S3 API and requires
        you to manually handle the paging of results.  For a
        higher-level method that handles the details of paging for
        you, you can use the list method.

        :type max_keys: int
        :param max_keys: The maximum number of keys to retrieve

        :type prefix: string
        :param prefix: The prefix of the keys you want to retrieve

        :type key_marker: string
        :param key_marker: The "marker" of where you are in the result
            set with respect to keys.

        :type version_id_marker: string
        :param version_id_marker: The "marker" of where you are in the result
            set with respect to version-id's.
:type delimiter: string :param delimiter: If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. :rtype: ResultSet :return: The result from S3 listing the keys requested """ self.validate_get_all_versions_params(params) return self._get_all([('Version', self.key_class), ('CommonPrefixes', Prefix), ('DeleteMarker', DeleteMarker)], 'versions', headers, **params) def validate_get_all_versions_params(self, params): """ Validate that the parameters passed to get_all_versions are valid. Overridden by subclasses that allow a different set of parameters. :type params: dict :param params: Parameters to validate. """ self.validate_kwarg_names( params, ['maxkeys', 'max_keys', 'prefix', 'key_marker', 'version_id_marker', 'delimiter']) def get_all_multipart_uploads(self, headers=None, **params): """ A lower-level, version-aware method for listing active MultiPart uploads for a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method. :type max_uploads: int :param max_uploads: The maximum number of uploads to retrieve. Default value is 1000. :type key_marker: string :param key_marker: Together with upload_id_marker, this parameter specifies the multipart upload after which listing should begin. If upload_id_marker is not specified, only the keys lexicographically greater than the specified key_marker will be included in the list. If upload_id_marker is specified, any multipart uploads for a key equal to the key_marker might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload_id_marker. :type upload_id_marker: string :param upload_id_marker: Together with key-marker, specifies the multipart upload after which listing should begin. If key_marker is not specified, the upload_id_marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key_marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload_id_marker. :rtype: ResultSet :return: The result from S3 listing the uploads requested """ self.validate_kwarg_names(params, ['max_uploads', 'key_marker', 'upload_id_marker']) return self._get_all([('Upload', MultiPartUpload), ('CommonPrefixes', Prefix)], 'uploads', headers, **params) def new_key(self, key_name=None): """ Creates a new key :type key_name: string :param key_name: The name of the key to create :rtype: :class:`boto.s3.key.Key` or subclass :returns: An instance of the newly created key object """ if not key_name: raise ValueError('Empty key names are not allowed') return self.key_class(self, key_name) def generate_url(self, expires_in, method='GET', headers=None, force_http=False, response_headers=None, expires_in_absolute=False): return self.connection.generate_url(expires_in, method, self.name, headers=headers, force_http=force_http, response_headers=response_headers, expires_in_absolute=expires_in_absolute) def delete_keys(self, keys, quiet=False, mfa_token=None, headers=None): """ Deletes a set of keys using S3's Multi-object delete API. If a VersionID is specified for that key then that version is removed. 
        Returns a MultiDeleteResult object, which contains Deleted
        and Error elements for each key you ask to delete.

        :type keys: list
        :param keys: A list of either key_names or (key_name, versionid) pairs
            or a list of Key instances.

        :type quiet: boolean
        :param quiet: In quiet mode the response includes only keys
            where the delete operation encountered an error. For a
            successful deletion, the operation does not return any
            information about the delete in the response body.

        :type mfa_token: tuple or list of strings
        :param mfa_token: A tuple or list consisting of the serial
            number from the MFA device and the current value of the
            six-digit token associated with the device. This value is
            required anytime you are deleting versioned objects from a
            bucket that has the MFADelete option on the bucket.

        :returns: An instance of MultiDeleteResult
        """
        ikeys = iter(keys)
        result = MultiDeleteResult(self)
        provider = self.connection.provider
        query_args = 'delete'

        def delete_keys2(hdrs):
            hdrs = hdrs or {}
            data = u"""<?xml version="1.0" encoding="UTF-8"?>"""
            data += u"<Delete>"
            if quiet:
                data += u"<Quiet>true</Quiet>"
            count = 0
            while count < 1000:
                try:
                    key = ikeys.next()
                except StopIteration:
                    break
                if isinstance(key, basestring):
                    key_name = key
                    version_id = None
                elif isinstance(key, tuple) and len(key) == 2:
                    key_name, version_id = key
                elif (isinstance(key, Key) or
                      isinstance(key, DeleteMarker)) and key.name:
                    key_name = key.name
                    version_id = key.version_id
                else:
                    if isinstance(key, Prefix):
                        key_name = key.name
                        code = 'PrefixSkipped'   # Don't delete Prefix
                    else:
                        key_name = repr(key)   # try get a string
                        code = 'InvalidArgument'  # other unknown type
                    message = 'Invalid. No delete action taken for this object.'
                    error = Error(key_name, code=code, message=message)
                    result.errors.append(error)
                    continue
                count += 1
                data += u"<Object><Key>%s</Key>" % \
                    xml.sax.saxutils.escape(key_name)
                if version_id:
                    data += u"<VersionId>%s</VersionId>" % version_id
                data += u"</Object>"
            data += u"</Delete>"
            if count <= 0:
                return False  # no more
            data = data.encode('utf-8')
            fp = StringIO.StringIO(data)
            md5 = boto.utils.compute_md5(fp)
            hdrs['Content-MD5'] = md5[1]
            hdrs['Content-Type'] = 'text/xml'
            if mfa_token:
                hdrs[provider.mfa_header] = ' '.join(mfa_token)
            response = self.connection.make_request('POST', self.name,
                                                    headers=hdrs,
                                                    query_args=query_args,
                                                    data=data)
            body = response.read()
            if response.status == 200:
                h = handler.XmlHandler(result, self)
                xml.sax.parseString(body, h)
                return count >= 1000  # more?
            else:
                raise provider.storage_response_error(response.status,
                                                      response.reason,
                                                      body)
        while delete_keys2(headers):
            pass
        return result

    def delete_key(self, key_name, headers=None, version_id=None,
                   mfa_token=None):
        """
        Deletes a key from the bucket.  If a version_id is provided,
        only that version of the key will be deleted.

        :type key_name: string
        :param key_name: The key name to delete

        :type version_id: string
        :param version_id: The version ID (optional)

        :type mfa_token: tuple or list of strings
        :param mfa_token: A tuple or list consisting of the serial
            number from the MFA device and the current value of the
            six-digit token associated with the device. This value is
            required anytime you are deleting versioned objects from a
            bucket that has the MFADelete option on the bucket.

        :rtype: :class:`boto.s3.key.Key` or subclass
        :returns: A key object holding information on what was
            deleted.  The caller can see if a delete_marker was
            created or removed and what version_id the delete created
            or removed.
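
        A minimal deletion sketch (assumes ``bucket`` is an existing
        Bucket; the key names are illustrative, and the ``key``/``code``
        attributes on errors are assumed from boto.s3.multidelete.Error)::

            bucket.delete_key('logs/2013-12-01.txt')
            result = bucket.delete_keys(['a.txt', 'b.txt'])
            for error in result.errors:
                print error.key, error.code
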
""" if not key_name: raise ValueError('Empty key names are not allowed') return self._delete_key_internal(key_name, headers=headers, version_id=version_id, mfa_token=mfa_token, query_args_l=None) def _delete_key_internal(self, key_name, headers=None, version_id=None, mfa_token=None, query_args_l=None): query_args_l = query_args_l or [] provider = self.connection.provider if version_id: query_args_l.append('versionId=%s' % version_id) query_args = '&'.join(query_args_l) or None if mfa_token: if not headers: headers = {} headers[provider.mfa_header] = ' '.join(mfa_token) response = self.connection.make_request('DELETE', self.name, key_name, headers=headers, query_args=query_args) body = response.read() if response.status != 204: raise provider.storage_response_error(response.status, response.reason, body) else: # return a key object with information on what was deleted. k = self.key_class(self) k.name = key_name k.handle_version_headers(response) k.handle_addl_headers(response.getheaders()) return k def copy_key(self, new_key_name, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False, encrypt_key=False, headers=None, query_args=None): """ Create a new key in the bucket by copying another existing key. :type new_key_name: string :param new_key_name: The name of the new key :type src_bucket_name: string :param src_bucket_name: The name of the source bucket :type src_key_name: string :param src_key_name: The name of the source key :type src_version_id: string :param src_version_id: The version id for the key. This param is optional. If not specified, the newest version of the key will be copied. :type metadata: dict :param metadata: Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key's metadata will be copied to the new key. :type storage_class: string :param storage_class: The storage class of the new key. By default, the new key will use the standard storage class. Possible values are: STANDARD | REDUCED_REDUNDANCY :type preserve_acl: bool :param preserve_acl: If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to S3, one to retrieve the current ACL and one to set that ACL on the new object. If you don't care about the ACL, a value of False will be significantly more efficient. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. :type headers: dict :param headers: A dictionary of header name/value pairs. 
:type query_args: string :param query_args: A string of additional querystring arguments to append to the request :rtype: :class:`boto.s3.key.Key` or subclass :returns: An instance of the newly created key object """ headers = headers or {} provider = self.connection.provider src_key_name = boto.utils.get_utf8_value(src_key_name) if preserve_acl: if self.name == src_bucket_name: src_bucket = self else: src_bucket = self.connection.get_bucket( src_bucket_name, validate=False) acl = src_bucket.get_xml_acl(src_key_name) if encrypt_key: headers[provider.server_side_encryption_header] = 'AES256' src = '%s/%s' % (src_bucket_name, urllib.quote(src_key_name)) if src_version_id: src += '?versionId=%s' % src_version_id headers[provider.copy_source_header] = str(src) # make sure storage_class_header key exists before accessing it if provider.storage_class_header and storage_class: headers[provider.storage_class_header] = storage_class if metadata is not None: headers[provider.metadata_directive_header] = 'REPLACE' headers = boto.utils.merge_meta(headers, metadata, provider) elif not query_args: # Can't use this header with multi-part copy. headers[provider.metadata_directive_header] = 'COPY' response = self.connection.make_request('PUT', self.name, new_key_name, headers=headers, query_args=query_args) body = response.read() if response.status == 200: key = self.new_key(new_key_name) h = handler.XmlHandler(key, self) xml.sax.parseString(body, h) if hasattr(key, 'Error'): raise provider.storage_copy_error(key.Code, key.Message, body) key.handle_version_headers(response) key.handle_addl_headers(response.getheaders()) if preserve_acl: self.set_xml_acl(acl, new_key_name) return key else: raise provider.storage_response_error(response.status, response.reason, body) def set_canned_acl(self, acl_str, key_name='', headers=None, version_id=None): assert acl_str in CannedACLStrings if headers: headers[self.connection.provider.acl_header] = acl_str else: headers = {self.connection.provider.acl_header: acl_str} query_args = 'acl' if version_id: query_args += '&versionId=%s' % version_id response = self.connection.make_request('PUT', self.name, key_name, headers=headers, query_args=query_args) body = response.read() if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_xml_acl(self, key_name='', headers=None, version_id=None): query_args = 'acl' if version_id: query_args += '&versionId=%s' % version_id response = self.connection.make_request('GET', self.name, key_name, query_args=query_args, headers=headers) body = response.read() if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) return body def set_xml_acl(self, acl_str, key_name='', headers=None, version_id=None, query_args='acl'): if version_id: query_args += '&versionId=%s' % version_id response = self.connection.make_request('PUT', self.name, key_name, data=acl_str.encode('UTF-8'), query_args=query_args, headers=headers) body = response.read() if response.status != 200: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def set_acl(self, acl_or_str, key_name='', headers=None, version_id=None): if isinstance(acl_or_str, Policy): self.set_xml_acl(acl_or_str.to_xml(), key_name, headers, version_id) else: self.set_canned_acl(acl_or_str, key_name, headers, version_id) def get_acl(self, key_name='', headers=None, version_id=None): query_args = 'acl' if version_id: query_args 
+= '&versionId=%s' % version_id
        response = self.connection.make_request('GET', self.name, key_name,
                                                query_args=query_args,
                                                headers=headers)
        body = response.read()
        if response.status == 200:
            policy = Policy(self)
            h = handler.XmlHandler(policy, self)
            xml.sax.parseString(body, h)
            return policy
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_subresource(self, subresource, value, key_name='', headers=None,
                        version_id=None):
        """
        Set a subresource for a bucket or key.

        :type subresource: string
        :param subresource: The subresource to set.

        :type value: string
        :param value: The value of the subresource.

        :type key_name: string
        :param key_name: The key to operate on, or None to operate on the
            bucket.

        :type headers: dict
        :param headers: Additional HTTP headers to include in the request.

        :type src_version_id: string
        :param src_version_id: Optional. The version id of the key to
            operate on. If not specified, operate on the newest
            version.
        """
        if not subresource:
            raise TypeError('set_subresource called with subresource=None')
        query_args = subresource
        if version_id:
            query_args += '&versionId=%s' % version_id
        response = self.connection.make_request('PUT', self.name, key_name,
                                                data=value.encode('UTF-8'),
                                                query_args=query_args,
                                                headers=headers)
        body = response.read()
        if response.status != 200:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def get_subresource(self, subresource, key_name='', headers=None,
                        version_id=None):
        """
        Get a subresource for a bucket or key.

        :type subresource: string
        :param subresource: The subresource to get.

        :type key_name: string
        :param key_name: The key to operate on, or None to operate on the
            bucket.

        :type headers: dict
        :param headers: Additional HTTP headers to include in the request.

        :type src_version_id: string
        :param src_version_id: Optional. The version id of the key to
            operate on. If not specified, operate on the newest
            version.

        :rtype: string
        :returns: The value of the subresource.
        """
        if not subresource:
            raise TypeError('get_subresource called with subresource=None')
        query_args = subresource
        if version_id:
            query_args += '&versionId=%s' % version_id
        response = self.connection.make_request('GET', self.name, key_name,
                                                query_args=query_args,
                                                headers=headers)
        body = response.read()
        if response.status != 200:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)
        return body

    def make_public(self, recursive=False, headers=None):
        self.set_canned_acl('public-read', headers=headers)
        if recursive:
            for key in self:
                self.set_canned_acl('public-read', key.name, headers=headers)

    def add_email_grant(self, permission, email_address,
                        recursive=False, headers=None):
        """
        Convenience method that provides a quick way to add an email grant
        to a bucket. This method retrieves the current ACL, creates a new
        grant based on the parameters passed in, adds that grant to the ACL
        and then PUT's the new ACL back to S3.

        :type permission: string
        :param permission: The permission being granted. Should be one of:
            (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).

        :type email_address: string
        :param email_address: The email address associated with the AWS
            account you are granting the permission to.

        :type recursive: boolean
        :param recursive: A boolean value that controls whether the command
            will apply the grant to all keys within the bucket or not.
            The default value is False.  By passing a True value, the call
            will iterate through all keys in the bucket and apply the same
            grant to each key.  CAUTION: If you have a lot of keys, this
            could take a long time!
        """
        if permission not in S3Permissions:
            raise self.connection.provider.storage_permissions_error(
                'Unknown Permission: %s' % permission)
        policy = self.get_acl(headers=headers)
        policy.acl.add_email_grant(permission, email_address)
        self.set_acl(policy, headers=headers)
        if recursive:
            for key in self:
                key.add_email_grant(permission, email_address,
                                    headers=headers)

    def add_user_grant(self, permission, user_id, recursive=False,
                       headers=None, display_name=None):
        """
        Convenience method that provides a quick way to add a canonical
        user grant to a bucket.  This method retrieves the current ACL,
        creates a new grant based on the parameters passed in, adds that
        grant to the ACL and then PUT's the new ACL back to S3.

        :type permission: string
        :param permission: The permission being granted. Should be one of:
            (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).

        :type user_id: string
        :param user_id: The canonical user id associated with the AWS
            account you are granting the permission to.

        :type recursive: boolean
        :param recursive: A boolean value that controls whether the command
            will apply the grant to all keys within the bucket or not.
            The default value is False.  By passing a True value, the call
            will iterate through all keys in the bucket and apply the same
            grant to each key.  CAUTION: If you have a lot of keys, this
            could take a long time!

        :type display_name: string
        :param display_name: An optional string containing the user's
            Display Name.  Only required on Walrus.
        """
        if permission not in S3Permissions:
            raise self.connection.provider.storage_permissions_error(
                'Unknown Permission: %s' % permission)
        policy = self.get_acl(headers=headers)
        policy.acl.add_user_grant(permission, user_id,
                                  display_name=display_name)
        self.set_acl(policy, headers=headers)
        if recursive:
            for key in self:
                key.add_user_grant(permission, user_id, headers=headers,
                                   display_name=display_name)

    def list_grants(self, headers=None):
        policy = self.get_acl(headers=headers)
        return policy.acl.grants

    def get_location(self):
        """
        Returns the LocationConstraint for the bucket.

        :rtype: str
        :return: The LocationConstraint for the bucket or the empty
            string if no constraint was specified when bucket was created.
        """
        response = self.connection.make_request('GET', self.name,
                                                query_args='location')
        body = response.read()
        if response.status == 200:
            rs = ResultSet(self)
            h = handler.XmlHandler(rs, self)
            xml.sax.parseString(body, h)
            return rs.LocationConstraint
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_xml_logging(self, logging_str, headers=None):
        """
        Set logging on a bucket directly to the given xml string.

        :type logging_str: unicode string
        :param logging_str: The XML for the bucketloggingstatus which
            will be set.  The string will be converted to utf-8 before
            it is sent.  Usually, you will obtain this XML from the
            BucketLogging object.

        :rtype: bool
        :return: True if ok or raises an exception.
        """
        body = logging_str.encode('utf-8')
        response = self.connection.make_request('PUT', self.name, data=body,
                                                query_args='logging',
                                                headers=headers)
        body = response.read()
        if response.status == 200:
            return True
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def enable_logging(self, target_bucket, target_prefix='',
                       grants=None, headers=None):
        """
        Enable logging on a bucket.
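
        A minimal sketch (assumes ``conn`` is an authenticated S3
        connection, both buckets exist, and the names are illustrative;
        the target bucket must first be granted log-delivery access)::

            log_bucket = conn.get_bucket('mylogs')
            log_bucket.set_as_logging_target()
            bucket.enable_logging(log_bucket, target_prefix='access-logs/')
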
        :type target_bucket: bucket or string
        :param target_bucket: The bucket to log to.

        :type target_prefix: string
        :param target_prefix: The prefix which should be prepended to the
            generated log files written to the target_bucket.

        :type grants: list of Grant objects
        :param grants: A list of extra permissions which will be granted on
            the log files which are created.

        :rtype: bool
        :return: True if ok or raises an exception.
        """
        if isinstance(target_bucket, Bucket):
            target_bucket = target_bucket.name
        blogging = BucketLogging(target=target_bucket, prefix=target_prefix,
                                 grants=grants)
        return self.set_xml_logging(blogging.to_xml(), headers=headers)

    def disable_logging(self, headers=None):
        """
        Disable logging on a bucket.

        :rtype: bool
        :return: True if ok or raises an exception.
        """
        blogging = BucketLogging()
        return self.set_xml_logging(blogging.to_xml(), headers=headers)

    def get_logging_status(self, headers=None):
        """
        Get the logging status for this bucket.

        :rtype: :class:`boto.s3.bucketlogging.BucketLogging`
        :return: A BucketLogging object for this bucket.
        """
        response = self.connection.make_request('GET', self.name,
                                                query_args='logging',
                                                headers=headers)
        body = response.read()
        if response.status == 200:
            blogging = BucketLogging()
            h = handler.XmlHandler(blogging, self)
            xml.sax.parseString(body, h)
            return blogging
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_as_logging_target(self, headers=None):
        """
        Setup the current bucket as a logging target by granting the
        necessary permissions to the LogDelivery group to write log
        files to this bucket.
        """
        policy = self.get_acl(headers=headers)
        g1 = Grant(permission='WRITE', type='Group', uri=self.LoggingGroup)
        g2 = Grant(permission='READ_ACP', type='Group', uri=self.LoggingGroup)
        policy.acl.add_grant(g1)
        policy.acl.add_grant(g2)
        self.set_acl(policy, headers=headers)

    def get_request_payment(self, headers=None):
        response = self.connection.make_request('GET', self.name,
                                                query_args='requestPayment',
                                                headers=headers)
        body = response.read()
        if response.status == 200:
            return body
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_request_payment(self, payer='BucketOwner', headers=None):
        body = self.BucketPaymentBody % payer
        response = self.connection.make_request('PUT', self.name, data=body,
                                                query_args='requestPayment',
                                                headers=headers)
        body = response.read()
        if response.status == 200:
            return True
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def configure_versioning(self, versioning, mfa_delete=False,
                             mfa_token=None, headers=None):
        """
        Configure versioning for this bucket.

        .. note:: This feature is currently in beta.

        :type versioning: bool
        :param versioning: A boolean indicating whether versioning is
            enabled (True) or disabled (False).

        :type mfa_delete: bool
        :param mfa_delete: A boolean indicating whether the
            Multi-Factor Authentication Delete feature is enabled
            (True) or disabled (False).  If mfa_delete is enabled then
            all Delete operations will require the token from your MFA
            device to be passed in the request.

        :type mfa_token: tuple or list of strings
        :param mfa_token: A tuple or list consisting of the serial
            number from the MFA device and the current value of the
            six-digit token associated with the device.  This value is
            required when you are changing the status of the MfaDelete
            property of the bucket.
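
        A minimal sketch (assumes ``bucket`` is an existing Bucket)::

            bucket.configure_versioning(True)
            status = bucket.get_versioning_status()
            # status is a dict, e.g. {'Versioning': 'Enabled'}
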
""" if versioning: ver = 'Enabled' else: ver = 'Suspended' if mfa_delete: mfa = 'Enabled' else: mfa = 'Disabled' body = self.VersioningBody % (ver, mfa) if mfa_token: if not headers: headers = {} provider = self.connection.provider headers[provider.mfa_header] = ' '.join(mfa_token) response = self.connection.make_request('PUT', self.name, data=body, query_args='versioning', headers=headers) body = response.read() if response.status == 200: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_versioning_status(self, headers=None): """ Returns the current status of versioning on the bucket. :rtype: dict :returns: A dictionary containing a key named 'Versioning' that can have a value of either Enabled, Disabled, or Suspended. Also, if MFADelete has ever been enabled on the bucket, the dictionary will contain a key named 'MFADelete' which will have a value of either Enabled or Suspended. """ response = self.connection.make_request('GET', self.name, query_args='versioning', headers=headers) body = response.read() boto.log.debug(body) if response.status == 200: d = {} ver = re.search(self.VersionRE, body) if ver: d['Versioning'] = ver.group(1) mfa = re.search(self.MFADeleteRE, body) if mfa: d['MfaDelete'] = mfa.group(1) return d else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def configure_lifecycle(self, lifecycle_config, headers=None): """ Configure lifecycle for this bucket. :type lifecycle_config: :class:`boto.s3.lifecycle.Lifecycle` :param lifecycle_config: The lifecycle configuration you want to configure for this bucket. """ xml = lifecycle_config.to_xml() xml = xml.encode('utf-8') fp = StringIO.StringIO(xml) md5 = boto.utils.compute_md5(fp) if headers is None: headers = {} headers['Content-MD5'] = md5[1] headers['Content-Type'] = 'text/xml' response = self.connection.make_request('PUT', self.name, data=fp.getvalue(), query_args='lifecycle', headers=headers) body = response.read() if response.status == 200: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_lifecycle_config(self, headers=None): """ Returns the current lifecycle configuration on the bucket. :rtype: :class:`boto.s3.lifecycle.Lifecycle` :returns: A LifecycleConfig object that describes all current lifecycle rules in effect for the bucket. """ response = self.connection.make_request('GET', self.name, query_args='lifecycle', headers=headers) body = response.read() boto.log.debug(body) if response.status == 200: lifecycle = Lifecycle() h = handler.XmlHandler(lifecycle, self) xml.sax.parseString(body, h) return lifecycle else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def delete_lifecycle_configuration(self, headers=None): """ Removes all lifecycle configuration from the bucket. """ response = self.connection.make_request('DELETE', self.name, query_args='lifecycle', headers=headers) body = response.read() boto.log.debug(body) if response.status == 204: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def configure_website(self, suffix=None, error_key=None, redirect_all_requests_to=None, routing_rules=None, headers=None): """ Configure this bucket to act as a website :type suffix: str :param suffix: Suffix that is appended to a request that is for a "directory" on the website endpoint (e.g. 
if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html). The suffix must not be empty and must not include a slash character. :type error_key: str :param error_key: The object key name to use when a 4XX class error occurs. This is optional. :type redirect_all_requests_to: :class:`boto.s3.website.RedirectLocation` :param redirect_all_requests_to: Describes the redirect behavior for every request to this bucket's website endpoint. If this value is non None, no other values are considered when configuring the website configuration for the bucket. This is an instance of ``RedirectLocation``. :type routing_rules: :class:`boto.s3.website.RoutingRules` :param routing_rules: Object which specifies conditions and redirects that apply when the conditions are met. """ config = website.WebsiteConfiguration( suffix, error_key, redirect_all_requests_to, routing_rules) return self.set_website_configuration(config, headers=headers) def set_website_configuration(self, config, headers=None): """ :type config: boto.s3.website.WebsiteConfiguration :param config: Configuration data """ return self.set_website_configuration_xml(config.to_xml(), headers=headers) def set_website_configuration_xml(self, xml, headers=None): """Upload xml website configuration""" response = self.connection.make_request('PUT', self.name, data=xml, query_args='website', headers=headers) body = response.read() if response.status == 200: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_website_configuration(self, headers=None): """ Returns the current status of website configuration on the bucket. :rtype: dict :returns: A dictionary containing a Python representation of the XML response from S3. The overall structure is: * WebsiteConfiguration * IndexDocument * Suffix : suffix that is appended to request that is for a "directory" on the website endpoint * ErrorDocument * Key : name of object to serve when an error occurs """ return self.get_website_configuration_with_xml(headers)[0] def get_website_configuration_obj(self, headers=None): """Get the website configuration as a :class:`boto.s3.website.WebsiteConfiguration` object. """ config_xml = self.get_website_configuration_xml(headers=headers) config = website.WebsiteConfiguration() h = handler.XmlHandler(config, self) xml.sax.parseString(config_xml, h) return config def get_website_configuration_with_xml(self, headers=None): """ Returns the current status of website configuration on the bucket as unparsed XML. :rtype: 2-Tuple :returns: 2-tuple containing: 1) A dictionary containing a Python representation \ of the XML response. 
The overall structure is:

            * WebsiteConfiguration

              * IndexDocument

                * Suffix : suffix that is appended to request that
                  is for a "directory" on the website endpoint

              * ErrorDocument

                * Key : name of object to serve when an error occurs

            2) unparsed XML describing the bucket's website configuration
        """
        body = self.get_website_configuration_xml(headers=headers)
        e = boto.jsonresponse.Element()
        h = boto.jsonresponse.XmlHandler(e, None)
        h.parse(body)
        return e, body

    def get_website_configuration_xml(self, headers=None):
        """Get raw website configuration xml"""
        response = self.connection.make_request('GET', self.name,
                                                query_args='website',
                                                headers=headers)
        body = response.read()
        boto.log.debug(body)

        if response.status != 200:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)
        return body

    def delete_website_configuration(self, headers=None):
        """
        Removes all website configuration from the bucket.
        """
        response = self.connection.make_request('DELETE', self.name,
                                                query_args='website',
                                                headers=headers)
        body = response.read()
        boto.log.debug(body)
        if response.status == 204:
            return True
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def get_website_endpoint(self):
        """
        Returns the fully qualified hostname to use if you want to access
        this bucket as a website.  This doesn't validate whether the bucket
        has been correctly configured as a website or not.
        """
        l = [self.name]
        l.append(S3WebsiteEndpointTranslate.translate_region(
            self.get_location()))
        l.append('.'.join(self.connection.host.split('.')[-2:]))
        return '.'.join(l)

    def get_policy(self, headers=None):
        """
        Returns the JSON policy associated with the bucket.  The policy
        is returned as an uninterpreted JSON string.
        """
        response = self.connection.make_request('GET', self.name,
                                                query_args='policy',
                                                headers=headers)
        body = response.read()
        if response.status == 200:
            return body
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_policy(self, policy, headers=None):
        """
        Add or replace the JSON policy associated with the bucket.

        :type policy: str
        :param policy: The JSON policy as a string.
        """
        response = self.connection.make_request('PUT', self.name,
                                                data=policy,
                                                query_args='policy',
                                                headers=headers)
        body = response.read()
        if response.status >= 200 and response.status <= 204:
            return True
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def delete_policy(self, headers=None):
        response = self.connection.make_request('DELETE', self.name,
                                                data='/?policy',
                                                query_args='policy',
                                                headers=headers)
        body = response.read()
        if response.status >= 200 and response.status <= 204:
            return True
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_cors_xml(self, cors_xml, headers=None):
        """
        Set the CORS (Cross-Origin Resource Sharing) for a bucket.

        :type cors_xml: str
        :param cors_xml: The XML document describing your desired
            CORS configuration.  See the S3 documentation for details
            of the exact syntax required.
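
        A minimal sketch using the higher-level helper instead of raw
        XML (the rule values are illustrative, and the ``add_rule``
        signature is assumed from boto.s3.cors.CORSConfiguration)::

            from boto.s3.cors import CORSConfiguration
            cors = CORSConfiguration()
            cors.add_rule('GET', '*', allowed_header='*',
                          max_age_seconds=3000)
            bucket.set_cors(cors)
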
""" fp = StringIO.StringIO(cors_xml) md5 = boto.utils.compute_md5(fp) if headers is None: headers = {} headers['Content-MD5'] = md5[1] headers['Content-Type'] = 'text/xml' response = self.connection.make_request('PUT', self.name, data=fp.getvalue(), query_args='cors', headers=headers) body = response.read() if response.status == 200: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def set_cors(self, cors_config, headers=None): """ Set the CORS for this bucket given a boto CORSConfiguration object. :type cors_config: :class:`boto.s3.cors.CORSConfiguration` :param cors_config: The CORS configuration you want to configure for this bucket. """ return self.set_cors_xml(cors_config.to_xml()) def get_cors_xml(self, headers=None): """ Returns the current CORS configuration on the bucket as an XML document. """ response = self.connection.make_request('GET', self.name, query_args='cors', headers=headers) body = response.read() boto.log.debug(body) if response.status == 200: return body else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def get_cors(self, headers=None): """ Returns the current CORS configuration on the bucket. :rtype: :class:`boto.s3.cors.CORSConfiguration` :returns: A CORSConfiguration object that describes all current CORS rules in effect for the bucket. """ body = self.get_cors_xml(headers) cors = CORSConfiguration() h = handler.XmlHandler(cors, self) xml.sax.parseString(body, h) return cors def delete_cors(self, headers=None): """ Removes all CORS configuration from the bucket. """ response = self.connection.make_request('DELETE', self.name, query_args='cors', headers=headers) body = response.read() boto.log.debug(body) if response.status == 204: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) def initiate_multipart_upload(self, key_name, headers=None, reduced_redundancy=False, metadata=None, encrypt_key=False, policy=None): """ Start a multipart upload operation. :type key_name: string :param key_name: The name of the key that will ultimately result from this multipart upload operation. This will be exactly as the key appears in the bucket after the upload process has been completed. :type headers: dict :param headers: Additional HTTP headers to send and store with the resulting key in S3. :type reduced_redundancy: boolean :param reduced_redundancy: In multipart uploads, the storage class is specified when initiating the upload, not when uploading individual parts. So if you want the resulting key to use the reduced redundancy storage class set this flag when you initiate the upload. :type metadata: dict :param metadata: Any metadata that you would like to set on the key that results from the multipart upload. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key (once completed) in S3. """ query_args = 'uploads' provider = self.connection.provider headers = headers or {} if policy: headers[provider.acl_header] = policy if reduced_redundancy: storage_class_header = provider.storage_class_header if storage_class_header: headers[storage_class_header] = 'REDUCED_REDUNDANCY' # TODO: what if the provider doesn't support reduced redundancy? 
# (see boto.s3.key.Key.set_contents_from_file)
        if encrypt_key:
            headers[provider.server_side_encryption_header] = 'AES256'
        if metadata is None:
            metadata = {}

        headers = boto.utils.merge_meta(headers, metadata,
                                        self.connection.provider)
        response = self.connection.make_request('POST', self.name, key_name,
                                                query_args=query_args,
                                                headers=headers)
        body = response.read()
        boto.log.debug(body)
        if response.status == 200:
            resp = MultiPartUpload(self)
            h = handler.XmlHandler(resp, self)
            xml.sax.parseString(body, h)
            return resp
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def complete_multipart_upload(self, key_name, upload_id,
                                  xml_body, headers=None):
        """
        Complete a multipart upload operation.
        """
        query_args = 'uploadId=%s' % upload_id
        if headers is None:
            headers = {}
        headers['Content-Type'] = 'text/xml'
        response = self.connection.make_request('POST', self.name, key_name,
                                                query_args=query_args,
                                                headers=headers, data=xml_body)
        contains_error = False
        body = response.read()
        # Some errors will be reported in the body of the response
        # even though the HTTP response code is 200.  This check
        # does a quick and dirty peek in the body for an error element.
        if body.find('<Error>') > 0:
            contains_error = True
        boto.log.debug(body)
        if response.status == 200 and not contains_error:
            resp = CompleteMultiPartUpload(self)
            h = handler.XmlHandler(resp, self)
            xml.sax.parseString(body, h)
            # Use a dummy key to parse various response headers
            # for versioning, encryption info and then explicitly
            # set the completed MPU object values from key.
            k = self.key_class(self)
            k.handle_version_headers(response)
            k.handle_encryption_headers(response)
            resp.version_id = k.version_id
            resp.encrypted = k.encrypted
            return resp
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def cancel_multipart_upload(self, key_name, upload_id, headers=None):
        query_args = 'uploadId=%s' % upload_id
        response = self.connection.make_request('DELETE', self.name, key_name,
                                                query_args=query_args,
                                                headers=headers)
        body = response.read()
        boto.log.debug(body)
        if response.status != 204:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def delete(self, headers=None):
        return self.connection.delete_bucket(self.name, headers=headers)

    def get_tags(self):
        response = self.get_xml_tags()
        tags = Tags()
        h = handler.XmlHandler(tags, self)
        xml.sax.parseString(response, h)
        return tags

    def get_xml_tags(self):
        response = self.connection.make_request('GET', self.name,
                                                query_args='tagging',
                                                headers=None)
        body = response.read()
        if response.status == 200:
            return body
        else:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)

    def set_xml_tags(self, tag_str, headers=None, query_args='tagging'):
        if headers is None:
            headers = {}
        md5 = boto.utils.compute_md5(StringIO.StringIO(tag_str))
        headers['Content-MD5'] = md5[1]
        headers['Content-Type'] = 'text/xml'
        response = self.connection.make_request('PUT', self.name,
                                                data=tag_str.encode('utf-8'),
                                                query_args=query_args,
                                                headers=headers)
        body = response.read()
        if response.status != 204:
            raise self.connection.provider.storage_response_error(
                response.status, response.reason, body)
        return True

    def set_tags(self, tags, headers=None):
        return self.set_xml_tags(tags.to_xml(), headers=headers)

    def delete_tags(self, headers=None):
        response = self.connection.make_request('DELETE', self.name,
                                                query_args='tagging',
                                                headers=headers)
        body = response.read()
        boto.log.debug(body)
        if
response.status == 204: return True else: raise self.connection.provider.storage_response_error( response.status, response.reason, body) boto-2.20.1/boto/s3/bucketlistresultset.py000066400000000000000000000133671225267101000205330ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. def bucket_lister(bucket, prefix='', delimiter='', marker='', headers=None): """ A generator function for listing keys in a bucket. """ more_results = True k = None while more_results: rs = bucket.get_all_keys(prefix=prefix, marker=marker, delimiter=delimiter, headers=headers) for k in rs: yield k if k: marker = rs.next_marker or k.name more_results= rs.is_truncated class BucketListResultSet: """ A resultset for listing keys within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner. """ def __init__(self, bucket=None, prefix='', delimiter='', marker='', headers=None): self.bucket = bucket self.prefix = prefix self.delimiter = delimiter self.marker = marker self.headers = headers def __iter__(self): return bucket_lister(self.bucket, prefix=self.prefix, delimiter=self.delimiter, marker=self.marker, headers=self.headers) def versioned_bucket_lister(bucket, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None): """ A generator function for listing versions in a bucket. """ more_results = True k = None while more_results: rs = bucket.get_all_versions(prefix=prefix, key_marker=key_marker, version_id_marker=version_id_marker, delimiter=delimiter, headers=headers, max_keys=999) for k in rs: yield k key_marker = rs.next_key_marker version_id_marker = rs.next_version_id_marker more_results= rs.is_truncated class VersionedBucketListResultSet: """ A resultset for listing versions within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner. 
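
    A minimal usage sketch (illustrative only; the bucket name is a
    placeholder and credentials are assumed to be configured for
    ``boto.connect_s3``)::

        import boto
        from boto.s3.bucketlistresultset import VersionedBucketListResultSet

        conn = boto.connect_s3()
        bucket = conn.get_bucket('mybucket')
        for v in VersionedBucketListResultSet(bucket=bucket, prefix='logs/'):
            print v.name, v.version_id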
""" def __init__(self, bucket=None, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None): self.bucket = bucket self.prefix = prefix self.delimiter = delimiter self.key_marker = key_marker self.version_id_marker = version_id_marker self.headers = headers def __iter__(self): return versioned_bucket_lister(self.bucket, prefix=self.prefix, delimiter=self.delimiter, key_marker=self.key_marker, version_id_marker=self.version_id_marker, headers=self.headers) def multipart_upload_lister(bucket, key_marker='', upload_id_marker='', headers=None): """ A generator function for listing multipart uploads in a bucket. """ more_results = True k = None while more_results: rs = bucket.get_all_multipart_uploads(key_marker=key_marker, upload_id_marker=upload_id_marker, headers=headers) for k in rs: yield k key_marker = rs.next_key_marker upload_id_marker = rs.next_upload_id_marker more_results= rs.is_truncated class MultiPartUploadListResultSet: """ A resultset for listing multipart uploads within a bucket. Uses the multipart_upload_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of uploads within the bucket you can iterate over all keys in a reasonably efficient manner. """ def __init__(self, bucket=None, key_marker='', upload_id_marker='', headers=None): self.bucket = bucket self.key_marker = key_marker self.upload_id_marker = upload_id_marker self.headers = headers def __iter__(self): return multipart_upload_lister(self.bucket, key_marker=self.key_marker, upload_id_marker=self.upload_id_marker, headers=self.headers) boto-2.20.1/boto/s3/bucketlogging.py000066400000000000000000000061411225267101000172230ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
import xml.sax.saxutils
from acl import Grant


class BucketLogging:

    def __init__(self, target=None, prefix=None, grants=None):
        self.target = target
        self.prefix = prefix
        if grants is None:
            self.grants = []
        else:
            self.grants = grants

    def __repr__(self):
        if self.target is None:
            return "<BucketLoggingStatus: Disabled>"
        grants = []
        for g in self.grants:
            if g.type == 'CanonicalUser':
                u = g.display_name
            elif g.type == 'Group':
                u = g.uri
            else:
                u = g.email_address
            grants.append("%s = %s" % (u, g.permission))
        return "<BucketLoggingStatus: %s/%s (%s)>" % (self.target,
                                                      self.prefix,
                                                      ", ".join(grants))

    def add_grant(self, grant):
        self.grants.append(grant)

    def startElement(self, name, attrs, connection):
        if name == 'Grant':
            self.grants.append(Grant())
            return self.grants[-1]
        else:
            return None

    def endElement(self, name, value, connection):
        if name == 'TargetBucket':
            self.target = value
        elif name == 'TargetPrefix':
            self.prefix = value
        else:
            setattr(self, name, value)

    def to_xml(self):
        # caller is responsible to encode to utf-8
        s = u'<?xml version="1.0" encoding="UTF-8"?>'
        s += u'<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">'
        if self.target is not None:
            s += u'<LoggingEnabled>'
            s += u'<TargetBucket>%s</TargetBucket>' % self.target
            prefix = self.prefix or ''
            s += u'<TargetPrefix>%s</TargetPrefix>' % xml.sax.saxutils.escape(prefix)
            if self.grants:
                s += '<TargetGrants>'
                for grant in self.grants:
                    s += grant.to_xml()
                s += '</TargetGrants>'
            s += u'</LoggingEnabled>'
        s += u'</BucketLoggingStatus>'
        return s
boto-2.20.1/boto/s3/connection.py000066400000000000000000000514531225267101000165440ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
# Copyright (c) 2010, Eucalyptus Systems, Inc.
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

import xml.sax
import urllib
import base64
import time

import boto.utils
from boto.connection import AWSAuthConnection
from boto import handler
from boto.s3.bucket import Bucket
from boto.s3.key import Key
from boto.resultset import ResultSet
from boto.exception import BotoClientError, S3ResponseError


def check_lowercase_bucketname(n):
    """
    Bucket names must not contain uppercase characters. We check for
    this by appending a lowercase character and testing with islower().
    Note this also covers cases like numeric bucket names with dashes.

    >>> check_lowercase_bucketname("Aaaa")
    Traceback (most recent call last):
    ...
    BotoClientError: S3Error: Bucket names cannot contain upper-case
    characters when using either the sub-domain or virtual hosting calling
    format.
>>> check_lowercase_bucketname("1234-5678-9123") True >>> check_lowercase_bucketname("abcdefg1234") True """ if not (n + 'a').islower(): raise BotoClientError("Bucket names cannot contain upper-case " \ "characters when using either the sub-domain or virtual " \ "hosting calling format.") return True def assert_case_insensitive(f): def wrapper(*args, **kwargs): if len(args) == 3 and check_lowercase_bucketname(args[2]): pass return f(*args, **kwargs) return wrapper class _CallingFormat(object): def get_bucket_server(self, server, bucket): return '' def build_url_base(self, connection, protocol, server, bucket, key=''): url_base = '%s://' % protocol url_base += self.build_host(server, bucket) url_base += connection.get_path(self.build_path_base(bucket, key)) return url_base def build_host(self, server, bucket): if bucket == '': return server else: return self.get_bucket_server(server, bucket) def build_auth_path(self, bucket, key=''): key = boto.utils.get_utf8_value(key) path = '' if bucket != '': path = '/' + bucket return path + '/%s' % urllib.quote(key) def build_path_base(self, bucket, key=''): key = boto.utils.get_utf8_value(key) return '/%s' % urllib.quote(key) class SubdomainCallingFormat(_CallingFormat): @assert_case_insensitive def get_bucket_server(self, server, bucket): return '%s.%s' % (bucket, server) class VHostCallingFormat(_CallingFormat): @assert_case_insensitive def get_bucket_server(self, server, bucket): return bucket class OrdinaryCallingFormat(_CallingFormat): def get_bucket_server(self, server, bucket): return server def build_path_base(self, bucket, key=''): key = boto.utils.get_utf8_value(key) path_base = '/' if bucket: path_base += "%s/" % bucket return path_base + urllib.quote(key) class ProtocolIndependentOrdinaryCallingFormat(OrdinaryCallingFormat): def build_url_base(self, connection, protocol, server, bucket, key=''): url_base = '//' url_base += self.build_host(server, bucket) url_base += connection.get_path(self.build_path_base(bucket, key)) return url_base class Location: DEFAULT = '' # US Classic Region EU = 'EU' USWest = 'us-west-1' USWest2 = 'us-west-2' SAEast = 'sa-east-1' APNortheast = 'ap-northeast-1' APSoutheast = 'ap-southeast-1' APSoutheast2 = 'ap-southeast-2' class S3Connection(AWSAuthConnection): DefaultHost = boto.config.get('s3', 'host', 's3.amazonaws.com') DefaultCallingFormat = boto.config.get('s3', 'calling_format', 'boto.s3.connection.SubdomainCallingFormat') QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=DefaultHost, debug=0, https_connection_factory=None, calling_format=DefaultCallingFormat, path='/', provider='aws', bucket_class=Bucket, security_token=None, suppress_consec_slashes=True, anon=False, validate_certs=None): if isinstance(calling_format, str): calling_format=boto.utils.find_class(calling_format)() self.calling_format = calling_format self.bucket_class = bucket_class self.anon = anon AWSAuthConnection.__init__(self, host, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, debug=debug, https_connection_factory=https_connection_factory, path=path, provider=provider, security_token=security_token, suppress_consec_slashes=suppress_consec_slashes, validate_certs=validate_certs) def _required_auth_capability(self): if self.anon: return ['anon'] else: return ['s3'] def __iter__(self): for bucket in 
self.get_all_buckets(): yield bucket def __contains__(self, bucket_name): return not (self.lookup(bucket_name) is None) def set_bucket_class(self, bucket_class): """ Set the Bucket class associated with this bucket. By default, this would be the boto.s3.key.Bucket class but if you want to subclass that for some reason this allows you to associate your new class. :type bucket_class: class :param bucket_class: A subclass of Bucket that can be more specific """ self.bucket_class = bucket_class def build_post_policy(self, expiration_time, conditions): """ Taken from the AWS book Python examples and modified for use with boto """ assert isinstance(expiration_time, time.struct_time), \ 'Policy document must include a valid expiration Time object' # Convert conditions object mappings to condition statements return '{"expiration": "%s",\n"conditions": [%s]}' % \ (time.strftime(boto.utils.ISO8601, expiration_time), ",".join(conditions)) def build_post_form_args(self, bucket_name, key, expires_in=6000, acl=None, success_action_redirect=None, max_content_length=None, http_method='http', fields=None, conditions=None, storage_class='STANDARD', server_side_encryption=None): """ Taken from the AWS book Python examples and modified for use with boto This only returns the arguments required for the post form, not the actual form. This does not return the file input field which also needs to be added :type bucket_name: string :param bucket_name: Bucket to submit to :type key: string :param key: Key name, optionally add ${filename} to the end to attach the submitted filename :type expires_in: integer :param expires_in: Time (in seconds) before this expires, defaults to 6000 :type acl: string :param acl: A canned ACL. One of: * private * public-read * public-read-write * authenticated-read * bucket-owner-read * bucket-owner-full-control :type success_action_redirect: string :param success_action_redirect: URL to redirect to on success :type max_content_length: integer :param max_content_length: Maximum size for this file :type http_method: string :param http_method: HTTP Method to use, "http" or "https" :type storage_class: string :param storage_class: Storage class to use for storing the object. Valid values: STANDARD | REDUCED_REDUNDANCY :type server_side_encryption: string :param server_side_encryption: Specifies server-side encryption algorithm to use when Amazon S3 creates an object. Valid values: None | AES256 :rtype: dict :return: A dictionary containing field names/values as well as a url to POST to .. 
code-block:: python """ if fields == None: fields = [] if conditions == None: conditions = [] expiration = time.gmtime(int(time.time() + expires_in)) # Generate policy document conditions.append('{"bucket": "%s"}' % bucket_name) if key.endswith("${filename}"): conditions.append('["starts-with", "$key", "%s"]' % key[:-len("${filename}")]) else: conditions.append('{"key": "%s"}' % key) if acl: conditions.append('{"acl": "%s"}' % acl) fields.append({"name": "acl", "value": acl}) if success_action_redirect: conditions.append('{"success_action_redirect": "%s"}' % success_action_redirect) fields.append({"name": "success_action_redirect", "value": success_action_redirect}) if max_content_length: conditions.append('["content-length-range", 0, %i]' % max_content_length) if self.provider.security_token: fields.append({'name': 'x-amz-security-token', 'value': self.provider.security_token}) conditions.append('{"x-amz-security-token": "%s"}' % self.provider.security_token) if storage_class: fields.append({'name': 'x-amz-storage-class', 'value': storage_class}) conditions.append('{"x-amz-storage-class": "%s"}' % storage_class) if server_side_encryption: fields.append({'name': 'x-amz-server-side-encryption', 'value': server_side_encryption}) conditions.append('{"x-amz-server-side-encryption": "%s"}' % server_side_encryption) policy = self.build_post_policy(expiration, conditions) # Add the base64-encoded policy document as the 'policy' field policy_b64 = base64.b64encode(policy) fields.append({"name": "policy", "value": policy_b64}) # Add the AWS access key as the 'AWSAccessKeyId' field fields.append({"name": "AWSAccessKeyId", "value": self.aws_access_key_id}) # Add signature for encoded policy document as the # 'signature' field signature = self._auth_handler.sign_string(policy_b64) fields.append({"name": "signature", "value": signature}) fields.append({"name": "key", "value": key}) # HTTPS protocol will be used if the secure HTTP option is enabled. url = '%s://%s/' % (http_method, self.calling_format.build_host(self.server_name(), bucket_name)) return {"action": url, "fields": fields} def generate_url(self, expires_in, method, bucket='', key='', headers=None, query_auth=True, force_http=False, response_headers=None, expires_in_absolute=False, version_id=None): headers = headers or {} if expires_in_absolute: expires = int(expires_in) else: expires = int(time.time() + expires_in) auth_path = self.calling_format.build_auth_path(bucket, key) auth_path = self.get_path(auth_path) # optional version_id and response_headers need to be added to # the query param list. extra_qp = [] if version_id is not None: extra_qp.append("versionId=%s" % version_id) if response_headers: for k, v in response_headers.items(): extra_qp.append("%s=%s" % (k, urllib.quote(v))) if self.provider.security_token: headers['x-amz-security-token'] = self.provider.security_token if extra_qp: delimiter = '?' if '?' not in auth_path else '&' auth_path += delimiter + '&'.join(extra_qp) c_string = boto.utils.canonical_string(method, auth_path, headers, expires, self.provider) b64_hmac = self._auth_handler.sign_string(c_string) encoded_canonical = urllib.quote(b64_hmac, safe='') self.calling_format.build_path_base(bucket, key) if query_auth: query_part = '?' + self.QueryString % (encoded_canonical, expires, self.aws_access_key_id) else: query_part = '' if headers: hdr_prefix = self.provider.header_prefix for k, v in headers.items(): if k.startswith(hdr_prefix): # headers used for sig generation must be # included in the url also. 
extra_qp.append("%s=%s" % (k, urllib.quote(v))) if extra_qp: delimiter = '?' if not query_part else '&' query_part += delimiter + '&'.join(extra_qp) if force_http: protocol = 'http' port = 80 else: protocol = self.protocol port = self.port return self.calling_format.build_url_base(self, protocol, self.server_name(port), bucket, key) + query_part def get_all_buckets(self, headers=None): response = self.make_request('GET', headers=headers) body = response.read() if response.status > 300: raise self.provider.storage_response_error( response.status, response.reason, body) rs = ResultSet([('Bucket', self.bucket_class)]) h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs def get_canonical_user_id(self, headers=None): """ Convenience method that returns the "CanonicalUserID" of the user who's credentials are associated with the connection. The only way to get this value is to do a GET request on the service which returns all buckets associated with the account. As part of that response, the canonical userid is returned. This method simply does all of that and then returns just the user id. :rtype: string :return: A string containing the canonical user id. """ rs = self.get_all_buckets(headers=headers) return rs.owner.id def get_bucket(self, bucket_name, validate=True, headers=None): """ Retrieves a bucket by name. If the bucket does not exist, an ``S3ResponseError`` will be raised. If you are unsure if the bucket exists or not, you can use the ``S3Connection.lookup`` method, which will either return a valid bucket or ``None``. :type bucket_name: string :param bucket_name: The name of the bucket :type headers: dict :param headers: Additional headers to pass along with the request to AWS. :type validate: boolean :param validate: If ``True``, it will try to fetch all keys within the given bucket. (Default: ``True``) """ bucket = self.bucket_class(self, bucket_name) if validate: bucket.get_all_keys(headers, maxkeys=0) return bucket def lookup(self, bucket_name, validate=True, headers=None): """ Attempts to get a bucket from S3. Works identically to ``S3Connection.get_bucket``, save for that it will return ``None`` if the bucket does not exist instead of throwing an exception. :type bucket_name: string :param bucket_name: The name of the bucket :type headers: dict :param headers: Additional headers to pass along with the request to AWS. :type validate: boolean :param validate: If ``True``, it will try to fetch all keys within the given bucket. (Default: ``True``) """ try: bucket = self.get_bucket(bucket_name, validate, headers=headers) except: bucket = None return bucket def create_bucket(self, bucket_name, headers=None, location=Location.DEFAULT, policy=None): """ Creates a new located bucket. By default it's in the USA. You can pass Location.EU to create a European bucket (S3) or European Union bucket (GCS). :type bucket_name: string :param bucket_name: The name of the new bucket :type headers: dict :param headers: Additional headers to pass along with the request to AWS. :type location: str :param location: The location of the new bucket. You can use one of the constants in :class:`boto.s3.connection.Location` (e.g. Location.EU, Location.USWest, etc.). :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in S3. 
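
        Example (an illustrative sketch; the bucket name is a placeholder
        and must be globally unique)::

            import boto
            from boto.s3.connection import Location

            conn = boto.connect_s3()
            bucket = conn.create_bucket('mybucket-eu', location=Location.EU)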
""" check_lowercase_bucketname(bucket_name) if policy: if headers: headers[self.provider.acl_header] = policy else: headers = {self.provider.acl_header: policy} if location == Location.DEFAULT: data = '' else: data = '' + \ location + '' response = self.make_request('PUT', bucket_name, headers=headers, data=data) body = response.read() if response.status == 409: raise self.provider.storage_create_error( response.status, response.reason, body) if response.status == 200: return self.bucket_class(self, bucket_name) else: raise self.provider.storage_response_error( response.status, response.reason, body) def delete_bucket(self, bucket, headers=None): """ Removes an S3 bucket. In order to remove the bucket, it must first be empty. If the bucket is not empty, an ``S3ResponseError`` will be raised. :type bucket_name: string :param bucket_name: The name of the bucket :type headers: dict :param headers: Additional headers to pass along with the request to AWS. """ response = self.make_request('DELETE', bucket, headers=headers) body = response.read() if response.status != 204: raise self.provider.storage_response_error( response.status, response.reason, body) def make_request(self, method, bucket='', key='', headers=None, data='', query_args=None, sender=None, override_num_retries=None, retry_handler=None): if isinstance(bucket, self.bucket_class): bucket = bucket.name if isinstance(key, Key): key = key.name path = self.calling_format.build_path_base(bucket, key) boto.log.debug('path=%s' % path) auth_path = self.calling_format.build_auth_path(bucket, key) boto.log.debug('auth_path=%s' % auth_path) host = self.calling_format.build_host(self.server_name(), bucket) if query_args: path += '?' + query_args boto.log.debug('path=%s' % path) auth_path += '?' + query_args boto.log.debug('auth_path=%s' % auth_path) return AWSAuthConnection.make_request( self, method, path, headers, data, host, auth_path, sender, override_num_retries=override_num_retries, retry_handler=retry_handler ) boto-2.20.1/boto/s3/cors.py000066400000000000000000000213551225267101000153510ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # class CORSRule(object): """ CORS rule for a bucket. :ivar id: A unique identifier for the rule. The ID value can be up to 255 characters long. The IDs help you find a rule in the configuration. 
:ivar allowed_method: An HTTP method that you want to allow the
        origin to execute. Each CORSRule must identify at least one
        origin and one method. Valid values are:
        GET|PUT|HEAD|POST|DELETE

    :ivar allowed_origin: An origin that you want to allow cross-domain
        requests from. This can contain at most one * wild character.
        Each CORSRule must identify at least one origin and one method.
        The origin value can include at most one '*' wild character.
        For example, "http://*.example.com". You can also specify only *
        as the origin value allowing all origins cross-domain access.

    :ivar allowed_header: Specifies which headers are allowed in a
        pre-flight OPTIONS request via the
        Access-Control-Request-Headers header. Each header name
        specified in the Access-Control-Request-Headers header must
        have a corresponding entry in the rule. Amazon S3 will send
        only the allowed headers in a response that were requested.
        This can contain at most one * wild character.

    :ivar max_age_seconds: The time in seconds that your browser is to
        cache the preflight response for the specified resource.

    :ivar expose_header: One or more headers in the response that you
        want customers to be able to access from their applications
        (for example, from a JavaScript XMLHttpRequest object). You
        add one ExposeHeader element in the rule for each header.
    """

    def __init__(self, allowed_method=None, allowed_origin=None,
                 id=None, allowed_header=None, max_age_seconds=None,
                 expose_header=None):
        if allowed_method is None:
            allowed_method = []
        self.allowed_method = allowed_method
        if allowed_origin is None:
            allowed_origin = []
        self.allowed_origin = allowed_origin
        self.id = id
        if allowed_header is None:
            allowed_header = []
        self.allowed_header = allowed_header
        self.max_age_seconds = max_age_seconds
        if expose_header is None:
            expose_header = []
        self.expose_header = expose_header

    def __repr__(self):
        return '<Rule: %s>' % self.id

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'ID':
            self.id = value
        elif name == 'AllowedMethod':
            self.allowed_method.append(value)
        elif name == 'AllowedOrigin':
            self.allowed_origin.append(value)
        elif name == 'AllowedHeader':
            self.allowed_header.append(value)
        elif name == 'MaxAgeSeconds':
            self.max_age_seconds = int(value)
        elif name == 'ExposeHeader':
            self.expose_header.append(value)
        else:
            setattr(self, name, value)

    def to_xml(self):
        s = '<CORSRule>'
        for allowed_method in self.allowed_method:
            s += '<AllowedMethod>%s</AllowedMethod>' % allowed_method
        for allowed_origin in self.allowed_origin:
            s += '<AllowedOrigin>%s</AllowedOrigin>' % allowed_origin
        for allowed_header in self.allowed_header:
            s += '<AllowedHeader>%s</AllowedHeader>' % allowed_header
        for expose_header in self.expose_header:
            s += '<ExposeHeader>%s</ExposeHeader>' % expose_header
        if self.max_age_seconds:
            s += '<MaxAgeSeconds>%d</MaxAgeSeconds>' % self.max_age_seconds
        if self.id:
            s += '<ID>%s</ID>' % self.id
        s += '</CORSRule>'
        return s


class CORSConfiguration(list):
    """
    A container for the rules associated with a CORS configuration.
    """

    def startElement(self, name, attrs, connection):
        if name == 'CORSRule':
            rule = CORSRule()
            self.append(rule)
            return rule
        return None

    def endElement(self, name, value, connection):
        setattr(self, name, value)

    def to_xml(self):
        """
        Returns a string containing the XML version of the CORS
        configuration as defined by S3.
        """
        s = '<CORSConfiguration>'
        for rule in self:
            s += rule.to_xml()
        s += '</CORSConfiguration>'
        return s

    def add_rule(self, allowed_method, allowed_origin,
                 id=None, allowed_header=None, max_age_seconds=None,
                 expose_header=None):
        """
        Add a rule to this CORS configuration. This only adds the rule
        to the local copy.
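
        A minimal sketch (illustrative; the origin and the ``bucket``
        object are placeholders)::

            from boto.s3.cors import CORSConfiguration

            cors_cfg = CORSConfiguration()
            cors_cfg.add_rule(['GET', 'HEAD'], 'http://www.example.com',
                              allowed_header='*', max_age_seconds=3000)
            bucket.set_cors(cors_cfg)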
To install the new rule(s) on the bucket, you need to pass this
        CORS config object to the set_cors method of the Bucket object.

        :type allowed_method: list of str
        :param allowed_method: An HTTP method that you want to allow the
            origin to execute. Each CORSRule must identify at least one
            origin and one method. Valid values are:
            GET|PUT|HEAD|POST|DELETE

        :type allowed_origin: list of str
        :param allowed_origin: An origin that you want to allow cross-domain
            requests from. This can contain at most one * wild character.
            Each CORSRule must identify at least one origin and one method.
            The origin value can include at most one '*' wild character.
            For example, "http://*.example.com". You can also specify only *
            as the origin value allowing all origins cross-domain access.

        :type id: str
        :param id: A unique identifier for the rule. The ID value can be
            up to 255 characters long. The IDs help you find a rule in
            the configuration.

        :type allowed_header: list of str
        :param allowed_header: Specifies which headers are allowed in a
            pre-flight OPTIONS request via the
            Access-Control-Request-Headers header. Each header name
            specified in the Access-Control-Request-Headers header must
            have a corresponding entry in the rule. Amazon S3 will send
            only the allowed headers in a response that were requested.
            This can contain at most one * wild character.

        :type max_age_seconds: int
        :param max_age_seconds: The time in seconds that your browser is to
            cache the preflight response for the specified resource.

        :type expose_header: list of str
        :param expose_header: One or more headers in the response that you
            want customers to be able to access from their applications
            (for example, from a JavaScript XMLHttpRequest object). You
            add one ExposeHeader element in the rule for each header.
        """
        if not isinstance(allowed_method, (list, tuple)):
            allowed_method = [allowed_method]
        if not isinstance(allowed_origin, (list, tuple)):
            allowed_origin = [allowed_origin]
        if not isinstance(allowed_header, (list, tuple)):
            if allowed_header is None:
                allowed_header = []
            else:
                allowed_header = [allowed_header]
        if not isinstance(expose_header, (list, tuple)):
            if expose_header is None:
                expose_header = []
            else:
                expose_header = [expose_header]
        rule = CORSRule(allowed_method, allowed_origin, id, allowed_header,
                        max_age_seconds, expose_header)
        self.append(rule)
boto-2.20.1/boto/s3/deletemarker.py000066400000000000000000000040401225267101000170370ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
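
# Usage sketch for the DeleteMarker class defined below (illustrative, not
# part of the original module): delete markers appear alongside Key objects
# when iterating a versioned listing. The bucket name is a placeholder.
#
#     import boto
#     from boto.s3.deletemarker import DeleteMarker
#
#     conn = boto.connect_s3()
#     bucket = conn.get_bucket('mybucket')
#     for v in bucket.list_versions():
#         if isinstance(v, DeleteMarker):
#             print 'delete marker:', v.name, v.version_id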
from boto.s3.user import User class DeleteMarker: def __init__(self, bucket=None, name=None): self.bucket = bucket self.name = name self.version_id = None self.is_latest = False self.last_modified = None self.owner = None def startElement(self, name, attrs, connection): if name == 'Owner': self.owner = User(self) return self.owner else: return None def endElement(self, name, value, connection): if name == 'Key': self.name = value elif name == 'IsLatest': if value == 'true': self.is_latest = True else: self.is_latest = False elif name == 'LastModified': self.last_modified = value elif name == 'Owner': pass elif name == 'VersionId': self.version_id = value else: setattr(self, name, value) boto-2.20.1/boto/s3/key.py000066400000000000000000002273561225267101000152040ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011, Nexenta Systems Inc. # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from __future__ import with_statement import errno import mimetypes import os import re import rfc822 import StringIO import base64 import binascii import math import urllib import boto.utils from boto.exception import BotoClientError from boto.exception import StorageDataError from boto.exception import PleaseRetryException from boto.provider import Provider from boto.s3.keyfile import KeyFile from boto.s3.user import User from boto import UserAgent from boto.utils import compute_md5 from boto.utils import find_matching_headers from boto.utils import merge_headers_by_name try: from hashlib import md5 except ImportError: from md5 import md5 class Key(object): """ Represents a key (object) in an S3 bucket. :ivar bucket: The parent :class:`boto.s3.bucket.Bucket`. :ivar name: The name of this Key object. :ivar metadata: A dictionary containing user metadata that you wish to store with the object or that has been retrieved from an existing object. :ivar cache_control: The value of the `Cache-Control` HTTP header. :ivar content_type: The value of the `Content-Type` HTTP header. :ivar content_encoding: The value of the `Content-Encoding` HTTP header. :ivar content_disposition: The value of the `Content-Disposition` HTTP header. :ivar content_language: The value of the `Content-Language` HTTP header. :ivar etag: The `etag` associated with this object. :ivar last_modified: The string timestamp representing the last time this object was modified in S3. :ivar owner: The ID of the owner of this object. 
:ivar storage_class: The storage class of the object. Currently, one of:
        STANDARD | REDUCED_REDUNDANCY | GLACIER
    :ivar md5: The MD5 hash of the contents of the object.
    :ivar size: The size, in bytes, of the object.
    :ivar version_id: The version ID of this object, if it is a versioned
        object.
    :ivar encrypted: Whether the object is encrypted while at rest on
        the server.
    """

    DefaultContentType = 'application/octet-stream'

    RestoreBody = """<?xml version="1.0" encoding="UTF-8"?>
      <RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01">
        <Days>%s</Days>
      </RestoreRequest>"""

    BufferSize = boto.config.getint('Boto', 'key_buffer_size', 8192)

    # The object metadata fields a user can set, other than custom metadata
    # fields (i.e., those beginning with a provider-specific prefix like
    # x-amz-meta).
    base_user_settable_fields = set(["cache-control", "content-disposition",
                                     "content-encoding", "content-language",
                                     "content-md5", "content-type"])
    _underscore_base_user_settable_fields = set()
    for f in base_user_settable_fields:
        _underscore_base_user_settable_fields.add(f.replace('-', '_'))

    def __init__(self, bucket=None, name=None):
        self.bucket = bucket
        self.name = name
        self.metadata = {}
        self.cache_control = None
        self.content_type = self.DefaultContentType
        self.content_encoding = None
        self.content_disposition = None
        self.content_language = None
        self.filename = None
        self.etag = None
        self.is_latest = False
        self.last_modified = None
        self.owner = None
        self.storage_class = 'STANDARD'
        self.path = None
        self.resp = None
        self.mode = None
        self.size = None
        self.version_id = None
        self.source_version_id = None
        self.delete_marker = False
        self.encrypted = None
        # If the object is being restored, this attribute will be set to True.
        # If the object is restored, it will be set to False.  Otherwise this
        # value will be None. If the restore is completed (ongoing_restore =
        # False), the expiry_date will be populated with the expiry date of
        # the restored object.
        self.ongoing_restore = None
        self.expiry_date = None
        self.local_hashes = {}

    def __repr__(self):
        if self.bucket:
            return '<Key: %s,%s>' % (self.bucket.name, self.name)
        else:
            return '<Key: None,%s>' % self.name

    def __iter__(self):
        return self

    @property
    def provider(self):
        provider = None
        if self.bucket and self.bucket.connection:
            provider = self.bucket.connection.provider
        return provider

    def _get_key(self):
        return self.name

    def _set_key(self, value):
        self.name = value

    key = property(_get_key, _set_key)

    def _get_md5(self):
        if 'md5' in self.local_hashes and self.local_hashes['md5']:
            return binascii.b2a_hex(self.local_hashes['md5'])

    def _set_md5(self, value):
        if value:
            self.local_hashes['md5'] = binascii.a2b_hex(value)
        elif 'md5' in self.local_hashes:
            self.local_hashes.pop('md5', None)

    md5 = property(_get_md5, _set_md5)

    def _get_base64md5(self):
        if 'md5' in self.local_hashes and self.local_hashes['md5']:
            return binascii.b2a_base64(self.local_hashes['md5']).rstrip('\n')

    def _set_base64md5(self, value):
        if value:
            self.local_hashes['md5'] = binascii.a2b_base64(value)
        elif 'md5' in self.local_hashes:
            del self.local_hashes['md5']

    base64md5 = property(_get_base64md5, _set_base64md5)

    def get_md5_from_hexdigest(self, md5_hexdigest):
        """
        A utility function to create the 2-tuple (md5hexdigest, base64md5)
        from just having a precalculated md5_hexdigest.
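
        Example (illustrative, assuming ``k`` is an existing Key instance)::

            import hashlib
            k.get_md5_from_hexdigest(hashlib.md5('hello').hexdigest())
            # -> ('5d41402abc4b2a76b9719d911017c592',
            #     'XUFAKrxLKna5cZ2REBfFkg==')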
""" digest = binascii.unhexlify(md5_hexdigest) base64md5 = base64.encodestring(digest) if base64md5[-1] == '\n': base64md5 = base64md5[0:-1] return (md5_hexdigest, base64md5) def handle_encryption_headers(self, resp): provider = self.bucket.connection.provider if provider.server_side_encryption_header: self.encrypted = resp.getheader( provider.server_side_encryption_header, None) else: self.encrypted = None def handle_version_headers(self, resp, force=False): provider = self.bucket.connection.provider # If the Key object already has a version_id attribute value, it # means that it represents an explicit version and the user is # doing a get_contents_*(version_id=) to retrieve another # version of the Key. In that case, we don't really want to # overwrite the version_id in this Key object. Comprende? if self.version_id is None or force: self.version_id = resp.getheader(provider.version_id, None) self.source_version_id = resp.getheader(provider.copy_source_version_id, None) if resp.getheader(provider.delete_marker, 'false') == 'true': self.delete_marker = True else: self.delete_marker = False def handle_restore_headers(self, response): header = response.getheader('x-amz-restore') if header is None: return parts = header.split(',', 1) for part in parts: key, val = [i.strip() for i in part.split('=')] val = val.replace('"', '') if key == 'ongoing-request': self.ongoing_restore = True if val.lower() == 'true' else False elif key == 'expiry-date': self.expiry_date = val def handle_addl_headers(self, headers): """ Used by Key subclasses to do additional, provider-specific processing of response headers. No-op for this base class. """ pass def open_read(self, headers=None, query_args='', override_num_retries=None, response_headers=None): """ Open this key for reading :type headers: dict :param headers: Headers to pass in the web request :type query_args: string :param query_args: Arguments to pass in the query string (ie, 'torrent') :type override_num_retries: int :param override_num_retries: If not None will override configured num_retries parameter for underlying GET. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. """ if self.resp == None: self.mode = 'r' provider = self.bucket.connection.provider self.resp = self.bucket.connection.make_request( 'GET', self.bucket.name, self.name, headers, query_args=query_args, override_num_retries=override_num_retries) if self.resp.status < 199 or self.resp.status > 299: body = self.resp.read() raise provider.storage_response_error(self.resp.status, self.resp.reason, body) response_headers = self.resp.msg self.metadata = boto.utils.get_aws_metadata(response_headers, provider) for name, value in response_headers.items(): # To get correct size for Range GETs, use Content-Range # header if one was returned. If not, use Content-Length # header. 
if (name.lower() == 'content-length' and 'Content-Range' not in response_headers): self.size = int(value) elif name.lower() == 'content-range': end_range = re.sub('.*/(.*)', '\\1', value) self.size = int(end_range) elif name.lower() == 'etag': self.etag = value elif name.lower() == 'content-type': self.content_type = value elif name.lower() == 'content-encoding': self.content_encoding = value elif name.lower() == 'content-language': self.content_language = value elif name.lower() == 'last-modified': self.last_modified = value elif name.lower() == 'cache-control': self.cache_control = value elif name.lower() == 'content-disposition': self.content_disposition = value self.handle_version_headers(self.resp) self.handle_encryption_headers(self.resp) self.handle_addl_headers(self.resp.getheaders()) def open_write(self, headers=None, override_num_retries=None): """ Open this key for writing. Not yet implemented :type headers: dict :param headers: Headers to pass in the write request :type override_num_retries: int :param override_num_retries: If not None will override configured num_retries parameter for underlying PUT. """ raise BotoClientError('Not Implemented') def open(self, mode='r', headers=None, query_args=None, override_num_retries=None): if mode == 'r': self.mode = 'r' self.open_read(headers=headers, query_args=query_args, override_num_retries=override_num_retries) elif mode == 'w': self.mode = 'w' self.open_write(headers=headers, override_num_retries=override_num_retries) else: raise BotoClientError('Invalid mode: %s' % mode) closed = False def close(self, fast=False): """ Close this key. :type fast: bool :param fast: True if you want the connection to be closed without first reading the content. This should only be used in cases where subsequent calls don't need to return the content from the open HTTP connection. Note: As explained at http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection.getresponse, callers must read the whole response before sending a new request to the server. Calling Key.close(fast=True) and making a subsequent request to the server will work because boto will get an httplib exception and close/reopen the connection. """ if self.resp and not fast: self.resp.read() self.resp = None self.mode = None self.closed = True def next(self): """ By providing a next method, the key object supports use as an iterator. For example, you can now say: for bytes in key: write bytes to a file or whatever All of the HTTP connection stuff is handled for you. """ self.open_read() data = self.resp.read(self.BufferSize) if not data: self.close() raise StopIteration return data def read(self, size=0): self.open_read() if size == 0: data = self.resp.read() else: data = self.resp.read(size) if not data: self.close() return data def change_storage_class(self, new_storage_class, dst_bucket=None, validate_dst_bucket=True): """ Change the storage class of an existing key. Depending on whether a different destination bucket is supplied or not, this will either move the item within the bucket, preserving all metadata and ACL info bucket changing the storage class or it will copy the item to the provided destination bucket, also preserving metadata and ACL info. :type new_storage_class: string :param new_storage_class: The new storage class for the Key. Possible values are: * STANDARD * REDUCED_REDUNDANCY :type dst_bucket: string :param dst_bucket: The name of a destination bucket. If not provided the current bucket of the key will be used. 
:type validate_dst_bucket: bool :param validate_dst_bucket: If True, will validate the dst_bucket by using an extra list request. """ if new_storage_class == 'STANDARD': return self.copy(self.bucket.name, self.name, reduced_redundancy=False, preserve_acl=True, validate_dst_bucket=validate_dst_bucket) elif new_storage_class == 'REDUCED_REDUNDANCY': return self.copy(self.bucket.name, self.name, reduced_redundancy=True, preserve_acl=True, validate_dst_bucket=validate_dst_bucket) else: raise BotoClientError('Invalid storage class: %s' % new_storage_class) def copy(self, dst_bucket, dst_key, metadata=None, reduced_redundancy=False, preserve_acl=False, encrypt_key=False, validate_dst_bucket=True): """ Copy this Key to another bucket. :type dst_bucket: string :param dst_bucket: The name of the destination bucket :type dst_key: string :param dst_key: The name of the destination key :type metadata: dict :param metadata: Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key's metadata will be copied to the new key. :type reduced_redundancy: bool :param reduced_redundancy: If True, this will force the storage class of the new Key to be REDUCED_REDUNDANCY regardless of the storage class of the key being copied. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost. :type preserve_acl: bool :param preserve_acl: If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to S3, one to retrieve the current ACL and one to set that ACL on the new object. If you don't care about the ACL, a value of False will be significantly more efficient. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. :type validate_dst_bucket: bool :param validate_dst_bucket: If True, will validate the dst_bucket by using an extra list request. 
:rtype: :class:`boto.s3.key.Key` or subclass :returns: An instance of the newly created key object """ dst_bucket = self.bucket.connection.lookup(dst_bucket, validate_dst_bucket) if reduced_redundancy: storage_class = 'REDUCED_REDUNDANCY' else: storage_class = self.storage_class return dst_bucket.copy_key(dst_key, self.bucket.name, self.name, metadata, storage_class=storage_class, preserve_acl=preserve_acl, encrypt_key=encrypt_key) def startElement(self, name, attrs, connection): if name == 'Owner': self.owner = User(self) return self.owner else: return None def endElement(self, name, value, connection): if name == 'Key': self.name = value elif name == 'ETag': self.etag = value elif name == 'IsLatest': if value == 'true': self.is_latest = True else: self.is_latest = False elif name == 'LastModified': self.last_modified = value elif name == 'Size': self.size = int(value) elif name == 'StorageClass': self.storage_class = value elif name == 'Owner': pass elif name == 'VersionId': self.version_id = value else: setattr(self, name, value) def exists(self, headers=None): """ Returns True if the key exists :rtype: bool :return: Whether the key exists on S3 """ return bool(self.bucket.lookup(self.name, headers=headers)) def delete(self, headers=None): """ Delete this key from S3 """ return self.bucket.delete_key(self.name, version_id=self.version_id, headers=headers) def get_metadata(self, name): return self.metadata.get(name) def set_metadata(self, name, value): # Ensure that metadata that is vital to signing is in the correct # case. Applies to ``Content-Type`` & ``Content-MD5``. if name.lower() == 'content-type': self.metadata['Content-Type'] = value elif name.lower() == 'content-md5': self.metadata['Content-MD5'] = value else: self.metadata[name] = value def update_metadata(self, d): self.metadata.update(d) # convenience methods for setting/getting ACL def set_acl(self, acl_str, headers=None): if self.bucket != None: self.bucket.set_acl(acl_str, self.name, headers=headers) def get_acl(self, headers=None): if self.bucket != None: return self.bucket.get_acl(self.name, headers=headers) def get_xml_acl(self, headers=None): if self.bucket != None: return self.bucket.get_xml_acl(self.name, headers=headers) def set_xml_acl(self, acl_str, headers=None): if self.bucket != None: return self.bucket.set_xml_acl(acl_str, self.name, headers=headers) def set_canned_acl(self, acl_str, headers=None): return self.bucket.set_canned_acl(acl_str, self.name, headers) def get_redirect(self): """Return the redirect location configured for this key. If no redirect is configured (via set_redirect), then None will be returned. """ response = self.bucket.connection.make_request( 'HEAD', self.bucket.name, self.name) if response.status == 200: return response.getheader('x-amz-website-redirect-location') else: raise self.provider.storage_response_error( response.status, response.reason, response.read()) def set_redirect(self, redirect_location, headers=None): """Configure this key to redirect to another location. When the bucket associated with this key is accessed from the website endpoint, a 301 redirect will be issued to the specified `redirect_location`. :type redirect_location: string :param redirect_location: The location to redirect. 
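
        Example (illustrative; the key name and URL are placeholders)::

            key = bucket.get_key('index.html')
            key.set_redirect('http://www.example.com/')
            print key.get_redirect()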
""" if headers is None: headers = {} else: headers = headers.copy() headers['x-amz-website-redirect-location'] = redirect_location response = self.bucket.connection.make_request('PUT', self.bucket.name, self.name, headers) if response.status == 200: return True else: raise self.provider.storage_response_error( response.status, response.reason, response.read()) def make_public(self, headers=None): return self.bucket.set_canned_acl('public-read', self.name, headers) def generate_url(self, expires_in, method='GET', headers=None, query_auth=True, force_http=False, response_headers=None, expires_in_absolute=False, version_id=None, policy=None, reduced_redundancy=False, encrypt_key=False): """ Generate a URL to access this key. :type expires_in: int :param expires_in: How long the url is valid for, in seconds :type method: string :param method: The method to use for retrieving the file (default is GET) :type headers: dict :param headers: Any headers to pass along in the request :type query_auth: bool :param query_auth: :type force_http: bool :param force_http: If True, http will be used instead of https. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. :type expires_in_absolute: bool :param expires_in_absolute: :type version_id: string :param version_id: The version_id of the object to GET. If specified this overrides any value in the key. :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in S3. :type reduced_redundancy: bool :param reduced_redundancy: If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. :rtype: string :return: The URL to access the key """ provider = self.bucket.connection.provider version_id = version_id or self.version_id if headers is None: headers = {} else: headers = headers.copy() # add headers accordingly (usually PUT case) if policy: headers[provider.acl_header] = policy if reduced_redundancy: self.storage_class = 'REDUCED_REDUNDANCY' if provider.storage_class_header: headers[provider.storage_class_header] = self.storage_class if encrypt_key: headers[provider.server_side_encryption_header] = 'AES256' headers = boto.utils.merge_meta(headers, self.metadata, provider) return self.bucket.connection.generate_url(expires_in, method, self.bucket.name, self.name, headers, query_auth, force_http, response_headers, expires_in_absolute, version_id) def send_file(self, fp, headers=None, cb=None, num_cb=10, query_args=None, chunked_transfer=False, size=None): """ Upload a file to a key into a bucket on S3. :type fp: file :param fp: The file pointer to upload. The file pointer must point point at the offset from which you wish to upload. ie. if uploading the full file, it should point at the start of the file. Normally when a file is opened for reading, the fp will point at the first byte. See the bytes parameter below for more info. 
:type headers: dict :param headers: The headers to pass along with the PUT request :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. Providing a negative integer will cause your callback to be called with each buffer read. :type query_args: string :param query_args: (optional) Arguments to pass in the query string. :type chunked_transfer: boolean :param chunked_transfer: (optional) If true, we use chunked Transfer-Encoding. :type size: int :param size: (optional) The Maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where you are splitting the file up into different ranges to be uploaded. If not specified, the default behaviour is to read all bytes from the file pointer. Less bytes may be available. """ self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb, query_args=query_args, chunked_transfer=chunked_transfer, size=size) def _send_file_internal(self, fp, headers=None, cb=None, num_cb=10, query_args=None, chunked_transfer=False, size=None, hash_algs=None): provider = self.bucket.connection.provider try: spos = fp.tell() except IOError: spos = None self.read_from_stream = False # If hash_algs is unset and the MD5 hasn't already been computed, # default to an MD5 hash_alg to hash the data on-the-fly. if hash_algs is None and not self.md5: hash_algs = {'md5': md5} digesters = dict((alg, hash_algs[alg]()) for alg in hash_algs or {}) def sender(http_conn, method, path, data, headers): # This function is called repeatedly for temporary retries # so we must be sure the file pointer is pointing at the # start of the data. if spos is not None and spos != fp.tell(): fp.seek(spos) elif spos is None and self.read_from_stream: # if seek is not supported, and we've read from this # stream already, then we need to abort retries to # avoid setting bad data. raise provider.storage_data_error( 'Cannot retry failed request. fp does not support seeking.') # If the caller explicitly specified host header, tell putrequest # not to add a second host header. Similarly for accept-encoding. skips = {} if boto.utils.find_matching_headers('host', headers): skips['skip_host'] = 1 if boto.utils.find_matching_headers('accept-encoding', headers): skips['skip_accept_encoding'] = 1 http_conn.putrequest(method, path, **skips) for key in headers: http_conn.putheader(key, headers[key]) http_conn.endheaders() save_debug = self.bucket.connection.debug self.bucket.connection.debug = 0 # If the debuglevel < 4 we don't want to show connection # payload, so turn off HTTP connection-level debug output (to # be restored below). # Use the getattr approach to allow this to work in AppEngine. if getattr(http_conn, 'debuglevel', 0) < 4: http_conn.set_debuglevel(0) data_len = 0 if cb: if size: cb_size = size elif self.size: cb_size = self.size else: cb_size = 0 if chunked_transfer and cb_size == 0: # For chunked Transfer, we call the cb for every 1MB # of data transferred, except when we know size. 
cb_count = (1024 * 1024) / self.BufferSize elif num_cb > 1: cb_count = int( math.ceil(cb_size / self.BufferSize / (num_cb - 1.0))) elif num_cb < 0: cb_count = -1 else: cb_count = 0 i = 0 cb(data_len, cb_size) bytes_togo = size if bytes_togo and bytes_togo < self.BufferSize: chunk = fp.read(bytes_togo) else: chunk = fp.read(self.BufferSize) if spos is None: # read at least something from a non-seekable fp. self.read_from_stream = True while chunk: chunk_len = len(chunk) data_len += chunk_len if chunked_transfer: http_conn.send('%x;\r\n' % chunk_len) http_conn.send(chunk) http_conn.send('\r\n') else: http_conn.send(chunk) for alg in digesters: digesters[alg].update(chunk) if bytes_togo: bytes_togo -= chunk_len if bytes_togo <= 0: break if cb: i += 1 if i == cb_count or cb_count == -1: cb(data_len, cb_size) i = 0 if bytes_togo and bytes_togo < self.BufferSize: chunk = fp.read(bytes_togo) else: chunk = fp.read(self.BufferSize) self.size = data_len for alg in digesters: self.local_hashes[alg] = digesters[alg].digest() if chunked_transfer: http_conn.send('0\r\n') # http_conn.send("Content-MD5: %s\r\n" % self.base64md5) http_conn.send('\r\n') if cb and (cb_count <= 1 or i > 0) and data_len > 0: cb(data_len, cb_size) http_conn.set_debuglevel(save_debug) self.bucket.connection.debug = save_debug response = http_conn.getresponse() body = response.read() if not self.should_retry(response, chunked_transfer): raise provider.storage_response_error( response.status, response.reason, body) return response if not headers: headers = {} else: headers = headers.copy() # Overwrite user-supplied user-agent. for header in find_matching_headers('User-Agent', headers): del headers[header] headers['User-Agent'] = UserAgent if self.storage_class != 'STANDARD': headers[provider.storage_class_header] = self.storage_class if find_matching_headers('Content-Encoding', headers): self.content_encoding = merge_headers_by_name( 'Content-Encoding', headers) if find_matching_headers('Content-Language', headers): self.content_language = merge_headers_by_name( 'Content-Language', headers) content_type_headers = find_matching_headers('Content-Type', headers) if content_type_headers: # Some use cases need to suppress sending of the Content-Type # header and depend on the receiving server to set the content # type. This can be achieved by setting headers['Content-Type'] # to None when calling this method. if (len(content_type_headers) == 1 and headers[content_type_headers[0]] is None): # Delete null Content-Type value to skip sending that header. 
del headers[content_type_headers[0]] else: self.content_type = merge_headers_by_name( 'Content-Type', headers) elif self.path: self.content_type = mimetypes.guess_type(self.path)[0] if self.content_type == None: self.content_type = self.DefaultContentType headers['Content-Type'] = self.content_type else: headers['Content-Type'] = self.content_type if self.base64md5: headers['Content-MD5'] = self.base64md5 if chunked_transfer: headers['Transfer-Encoding'] = 'chunked' #if not self.base64md5: # headers['Trailer'] = "Content-MD5" else: headers['Content-Length'] = str(self.size) headers['Expect'] = '100-Continue' headers = boto.utils.merge_meta(headers, self.metadata, provider) resp = self.bucket.connection.make_request( 'PUT', self.bucket.name, self.name, headers, sender=sender, query_args=query_args ) self.handle_version_headers(resp, force=True) self.handle_addl_headers(resp.getheaders()) def should_retry(self, response, chunked_transfer=False): provider = self.bucket.connection.provider if not chunked_transfer: if response.status in [500, 503]: # 500 & 503 can be plain retries. return True if response.getheader('location'): # If there's a redirect, plain retry. return True if 200 <= response.status <= 299: self.etag = response.getheader('etag') if self.etag != '"%s"' % self.md5: raise provider.storage_data_error( 'ETag from S3 did not match computed MD5') return True if response.status == 400: # The 400 must be trapped so the retry handler can check to # see if it was a timeout. # If ``RequestTimeout`` is present, we'll retry. Otherwise, bomb # out. body = response.read() err = provider.storage_response_error( response.status, response.reason, body ) if err.error_code in ['RequestTimeout']: raise PleaseRetryException( "Saw %s, retrying" % err.error_code, response=response ) return False def compute_md5(self, fp, size=None): """ :type fp: file :param fp: File pointer to the file to MD5 hash. The file pointer will be reset to the same position before the method returns. :type size: int :param size: (optional) The Maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where the file is being split in place into different parts. Less bytes may be available. """ hex_digest, b64_digest, data_size = compute_md5(fp, size=size) # Returned values are MD5 hash, base64 encoded MD5 hash, and data size. # The internal implementation of compute_md5() needs to return the # data size but we don't want to return that value to the external # caller because it changes the class interface (i.e. it might # break some code) so we consume the third tuple value here and # return the remainder of the tuple to the caller, thereby preserving # the existing interface. self.size = data_size return (hex_digest, b64_digest) def set_contents_from_stream(self, fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, reduced_redundancy=False, query_args=None, size=None): """ Store an object using the name of the Key object as the key in cloud and the contents of the data stream pointed to by 'fp' as the contents. The stream object is not seekable and total size is not known. This has the implication that we can't specify the Content-Size and Content-MD5 in the header. So for huge uploads, the delay in calculating MD5 is avoided but with a penalty of inability to verify the integrity of the uploaded data. 
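
        Example (a minimal sketch; assumes a provider that supports chunked
        transfer, e.g. Google Cloud Storage, and that ``bucket`` already
        exists)::

            import sys
            key = bucket.new_key('streamed-object')
            key.set_contents_from_stream(sys.stdin)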
:type fp: file :param fp: the file whose contents are to be uploaded :type headers: dict :param headers: additional HTTP headers to be sent with the PUT request. :type replace: bool :param replace: If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won't overwrite it. The default value is True which will overwrite the object. :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS and the second representing the total number of bytes that need to be transmitted. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.gs.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in GS. :type reduced_redundancy: bool :param reduced_redundancy: If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost. :type size: int :param size: (optional) The Maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where you are splitting the file up into different ranges to be uploaded. If not specified, the default behaviour is to read all bytes from the file pointer. Less bytes may be available. """ provider = self.bucket.connection.provider if not provider.supports_chunked_transfer(): raise BotoClientError('%s does not support chunked transfer' % provider.get_provider_name()) # Name of the Object should be specified explicitly for Streams. if not self.name or self.name == '': raise BotoClientError('Cannot determine the destination ' 'object name for the given stream') if headers is None: headers = {} if policy: headers[provider.acl_header] = policy if reduced_redundancy: self.storage_class = 'REDUCED_REDUNDANCY' if provider.storage_class_header: headers[provider.storage_class_header] = self.storage_class if self.bucket != None: if not replace: if self.bucket.lookup(self.name): return self.send_file(fp, headers, cb, num_cb, query_args, chunked_transfer=True, size=size) def set_contents_from_file(self, fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, query_args=None, encrypt_key=False, size=None, rewind=False): """ Store an object in S3 using the name of the Key object as the key in S3 and the contents of the file pointed to by 'fp' as the contents. The data is read from 'fp' from its current position until 'size' bytes have been read or EOF. :type fp: file :param fp: the file whose contents to upload :type headers: dict :param headers: Additional HTTP headers that will be sent with the PUT request. :type replace: bool :param replace: If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won't overwrite it. The default value is True which will overwrite the object. :type cb: function :param cb: a callback function that will be called to report progress on the upload. 
The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in S3. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. :param md5: If you need to compute the MD5 for any reason prior to upload, it's silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed. :type reduced_redundancy: bool :param reduced_redundancy: If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. :type size: int :param size: (optional) The Maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where you are splitting the file up into different ranges to be uploaded. If not specified, the default behaviour is to read all bytes from the file pointer. Less bytes may be available. :type rewind: bool :param rewind: (optional) If True, the file pointer (fp) will be rewound to the start before any bytes are read from it. The default behaviour is False which reads from the current position of the file pointer (fp). :rtype: int :return: The number of bytes written to the key. """ provider = self.bucket.connection.provider headers = headers or {} if policy: headers[provider.acl_header] = policy if encrypt_key: headers[provider.server_side_encryption_header] = 'AES256' if rewind: # caller requests reading from beginning of fp. fp.seek(0, os.SEEK_SET) else: # The following seek/tell/seek logic is intended # to detect applications using the older interface to # set_contents_from_file(), which automatically rewound the # file each time the Key was reused. This changed with commit # 14ee2d03f4665fe20d19a85286f78d39d924237e, to support uploads # split into multiple parts and uploaded in parallel, and at # the time of that commit this check was added because otherwise # older programs would get a success status and upload an empty # object. Unfortuantely, it's very inefficient for fp's implemented # by KeyFile (used, for example, by gsutil when copying between # providers). So, we skip the check for the KeyFile case. # TODO: At some point consider removing this seek/tell/seek # logic, after enough time has passed that it's unlikely any # programs remain that assume the older auto-rewind interface. if not isinstance(fp, KeyFile): spos = fp.tell() fp.seek(0, os.SEEK_END) if fp.tell() == spos: fp.seek(0, os.SEEK_SET) if fp.tell() != spos: # Raise an exception as this is likely a programming # error whereby there is data before the fp but nothing # after it. 
fp.seek(spos) raise AttributeError('fp is at EOF. Use rewind option ' 'or seek() to data start.') # seek back to the correct position. fp.seek(spos) if reduced_redundancy: self.storage_class = 'REDUCED_REDUNDANCY' if provider.storage_class_header: headers[provider.storage_class_header] = self.storage_class # TODO - What if provider doesn't support reduced reduncancy? # What if different providers provide different classes? if hasattr(fp, 'name'): self.path = fp.name if self.bucket != None: if not md5 and provider.supports_chunked_transfer(): # defer md5 calculation to on the fly and # we don't know anything about size yet. chunked_transfer = True self.size = None else: chunked_transfer = False if isinstance(fp, KeyFile): # Avoid EOF seek for KeyFile case as it's very inefficient. key = fp.getkey() size = key.size - fp.tell() self.size = size # At present both GCS and S3 use MD5 for the etag for # non-multipart-uploaded objects. If the etag is 32 hex # chars use it as an MD5, to avoid having to read the file # twice while transferring. if (re.match('^"[a-fA-F0-9]{32}"$', key.etag)): etag = key.etag.strip('"') md5 = (etag, base64.b64encode(binascii.unhexlify(etag))) if not md5: # compute_md5() and also set self.size to actual # size of the bytes read computing the md5. md5 = self.compute_md5(fp, size) # adjust size if required size = self.size elif size: self.size = size else: # If md5 is provided, still need to size so # calculate based on bytes to end of content spos = fp.tell() fp.seek(0, os.SEEK_END) self.size = fp.tell() - spos fp.seek(spos) size = self.size self.md5 = md5[0] self.base64md5 = md5[1] if self.name == None: self.name = self.md5 if not replace: if self.bucket.lookup(self.name): return self.send_file(fp, headers=headers, cb=cb, num_cb=num_cb, query_args=query_args, chunked_transfer=chunked_transfer, size=size) # return number of bytes written. return self.size def set_contents_from_filename(self, filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, encrypt_key=False): """ Store an object in S3 using the name of the Key object as the key in S3 and the contents of the file named by 'filename'. See set_contents_from_file method for details about the parameters. :type filename: string :param filename: The name of the file that you want to put onto S3 :type headers: dict :param headers: Additional headers to pass along with the request to AWS. :type replace: bool :param replace: If True, replaces the contents of the file if it already exists. :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in S3. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. 
:param md5: If you need to compute the MD5 for any reason prior to upload, it's silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed. :type reduced_redundancy: bool :param reduced_redundancy: If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. :rtype: int :return: The number of bytes written to the key. """ with open(filename, 'rb') as fp: return self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, reduced_redundancy, encrypt_key=encrypt_key) def set_contents_from_string(self, s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, encrypt_key=False): """ Store an object in S3 using the name of the Key object as the key in S3 and the string 's' as the contents. See set_contents_from_file method for details about the parameters. :type headers: dict :param headers: Additional headers to pass along with the request to AWS. :type replace: bool :param replace: If True, replaces the contents of the file if it already exists. :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type policy: :class:`boto.s3.acl.CannedACLStrings` :param policy: A canned ACL policy that will be applied to the new key in S3. :type md5: A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method. :param md5: If you need to compute the MD5 for any reason prior to upload, it's silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed. :type reduced_redundancy: bool :param reduced_redundancy: If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost. :type encrypt_key: bool :param encrypt_key: If True, the new copy of the object will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. 
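
        Example (a minimal sketch; ``conn`` and the bucket name are
        assumptions)::

            bucket = conn.get_bucket('mybucket')
            key = bucket.new_key('hello.txt')
            key.set_contents_from_string('Hello, world!',
                                         headers={'Content-Type': 'text/plain'})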
""" if isinstance(s, unicode): s = s.encode("utf-8") fp = StringIO.StringIO(s) r = self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, reduced_redundancy, encrypt_key=encrypt_key) fp.close() return r def get_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None): """ Retrieves a file from an S3 Key :type fp: file :param fp: File pointer to put the data into :type headers: string :param: headers to send when retrieving the files :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type torrent: bool :param torrent: Flag for whether to get a torrent for the file :type override_num_retries: int :param override_num_retries: If not None will override configured num_retries parameter for underlying GET. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. """ self._get_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb, torrent=torrent, version_id=version_id, override_num_retries=override_num_retries, response_headers=response_headers, hash_algs=None, query_args=None) def _get_file_internal(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None, hash_algs=None, query_args=None): if headers is None: headers = {} save_debug = self.bucket.connection.debug if self.bucket.connection.debug == 1: self.bucket.connection.debug = 0 query_args = query_args or [] if torrent: query_args.append('torrent') if hash_algs is None and not torrent: hash_algs = {'md5': md5} digesters = dict((alg, hash_algs[alg]()) for alg in hash_algs or {}) # If a version_id is passed in, use that. If not, check to see # if the Key object has an explicit version_id and, if so, use that. # Otherwise, don't pass a version_id query param. if version_id is None: version_id = self.version_id if version_id: query_args.append('versionId=%s' % version_id) if response_headers: for key in response_headers: query_args.append('%s=%s' % ( key, urllib.quote(response_headers[key]))) query_args = '&'.join(query_args) self.open('r', headers, query_args=query_args, override_num_retries=override_num_retries) data_len = 0 if cb: if self.size is None: cb_size = 0 else: cb_size = self.size if self.size is None and num_cb != -1: # If size is not available due to chunked transfer for example, # we'll call the cb for every 1MB of data transferred. 
cb_count = (1024 * 1024) / self.BufferSize elif num_cb > 1: cb_count = int(math.ceil(cb_size/self.BufferSize/(num_cb-1.0))) elif num_cb < 0: cb_count = -1 else: cb_count = 0 i = 0 cb(data_len, cb_size) try: for bytes in self: fp.write(bytes) data_len += len(bytes) for alg in digesters: digesters[alg].update(bytes) if cb: if cb_size > 0 and data_len >= cb_size: break i += 1 if i == cb_count or cb_count == -1: cb(data_len, cb_size) i = 0 except IOError, e: if e.errno == errno.ENOSPC: raise StorageDataError('Out of space for destination file ' '%s' % fp.name) raise if cb and (cb_count <= 1 or i > 0) and data_len > 0: cb(data_len, cb_size) for alg in digesters: self.local_hashes[alg] = digesters[alg].digest() if self.size is None and not torrent and "Range" not in headers: self.size = data_len self.close() self.bucket.connection.debug = save_debug def get_torrent_file(self, fp, headers=None, cb=None, num_cb=10): """ Get a torrent file (see to get_file) :type fp: file :param fp: The file pointer of where to put the torrent :type headers: dict :param headers: Headers to be passed :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. """ return self.get_file(fp, headers, cb, num_cb, torrent=True) def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None): """ Retrieve an object from S3 using the name of the Key object as the key in S3. Write the contents of the object to the file pointed to by 'fp'. :type fp: File -like object :param fp: :type headers: dict :param headers: additional HTTP headers that will be sent with the GET request. :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type torrent: bool :param torrent: If True, returns the contents of a torrent file as a string. :type res_upload_handler: ResumableDownloadHandler :param res_download_handler: If provided, this handler will perform the download. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. 
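
        Example (a minimal sketch; the callback and local file name are
        assumptions)::

            def progress(transmitted, total):
                print '%d bytes of %d downloaded' % (transmitted, total)

            fp = open('/tmp/local-copy', 'wb')
            key.get_contents_to_file(fp, cb=progress, num_cb=20)
            fp.close()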
""" if self.bucket != None: if res_download_handler: res_download_handler.get_file(self, fp, headers, cb, num_cb, torrent=torrent, version_id=version_id) else: self.get_file(fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, response_headers=response_headers) def get_contents_to_filename(self, filename, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None): """ Retrieve an object from S3 using the name of the Key object as the key in S3. Store contents of the object to a file named by 'filename'. See get_contents_to_file method for details about the parameters. :type filename: string :param filename: The filename of where to put the file contents :type headers: dict :param headers: Any additional headers to send in the request :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type torrent: bool :param torrent: If True, returns the contents of a torrent file as a string. :type res_upload_handler: ResumableDownloadHandler :param res_download_handler: If provided, this handler will perform the download. :type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. """ try: with open(filename, 'wb') as fp: self.get_contents_to_file(fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, res_download_handler=res_download_handler, response_headers=response_headers) except Exception: os.remove(filename) raise # if last_modified date was sent from s3, try to set file's timestamp if self.last_modified != None: try: modified_tuple = rfc822.parsedate_tz(self.last_modified) modified_stamp = int(rfc822.mktime_tz(modified_tuple)) os.utime(fp.name, (modified_stamp, modified_stamp)) except Exception: pass def get_contents_as_string(self, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, response_headers=None): """ Retrieve an object from S3 using the name of the Key object as the key in S3. Return the contents of the object as a string. See get_contents_to_file method for details about the parameters. :type headers: dict :param headers: Any additional headers to send in the request :type cb: function :param cb: a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. :type cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type torrent: bool :param torrent: If True, returns the contents of a torrent file as a string. 
:type response_headers: dict :param response_headers: A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details. :rtype: string :returns: The contents of the file as a string """ fp = StringIO.StringIO() self.get_contents_to_file(fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, response_headers=response_headers) return fp.getvalue() def add_email_grant(self, permission, email_address, headers=None): """ Convenience method that provides a quick way to add an email grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to S3. :type permission: string :param permission: The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL). :type email_address: string :param email_address: The email address associated with the AWS account your are granting the permission to. :type recursive: boolean :param recursive: A boolean value to controls whether the command will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time! """ policy = self.get_acl(headers=headers) policy.acl.add_email_grant(permission, email_address) self.set_acl(policy, headers=headers) def add_user_grant(self, permission, user_id, headers=None, display_name=None): """ Convenience method that provides a quick way to add a canonical user grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT's the new ACL back to S3. :type permission: string :param permission: The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL). :type user_id: string :param user_id: The canonical user id associated with the AWS account your are granting the permission to. :type display_name: string :param display_name: An option string containing the user's Display Name. Only required on Walrus. """ policy = self.get_acl(headers=headers) policy.acl.add_user_grant(permission, user_id, display_name=display_name) self.set_acl(policy, headers=headers) def _normalize_metadata(self, metadata): if type(metadata) == set: norm_metadata = set() for k in metadata: norm_metadata.add(k.lower()) else: norm_metadata = {} for k in metadata: norm_metadata[k.lower()] = metadata[k] return norm_metadata def _get_remote_metadata(self, headers=None): """ Extracts metadata from existing URI into a dict, so we can overwrite/delete from it to form the new set of metadata to apply to a key. """ metadata = {} for underscore_name in self._underscore_base_user_settable_fields: if hasattr(self, underscore_name): value = getattr(self, underscore_name) if value: # Generate HTTP field name corresponding to "_" named field. field_name = underscore_name.replace('_', '-') metadata[field_name.lower()] = value # self.metadata contains custom metadata, which are all user-settable. 
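        # Each custom key is written out below with the provider's metadata
        # prefix (e.g. 'x-amz-meta-' for S3, 'x-goog-meta-' for GS), which
        # is what distinguishes custom metadata from standard HTTP headers
        # on the wire.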
prefix = self.provider.metadata_prefix for underscore_name in self.metadata: field_name = underscore_name.replace('_', '-') metadata['%s%s' % (prefix, field_name.lower())] = ( self.metadata[underscore_name]) return metadata def set_remote_metadata(self, metadata_plus, metadata_minus, preserve_acl, headers=None): metadata_plus = self._normalize_metadata(metadata_plus) metadata_minus = self._normalize_metadata(metadata_minus) metadata = self._get_remote_metadata() metadata.update(metadata_plus) for h in metadata_minus: if h in metadata: del metadata[h] src_bucket = self.bucket # Boto prepends the meta prefix when adding headers, so strip prefix in # metadata before sending back in to copy_key() call. rewritten_metadata = {} for h in metadata: if (h.startswith('x-goog-meta-') or h.startswith('x-amz-meta-')): rewritten_h = (h.replace('x-goog-meta-', '') .replace('x-amz-meta-', '')) else: rewritten_h = h rewritten_metadata[rewritten_h] = metadata[h] metadata = rewritten_metadata src_bucket.copy_key(self.name, self.bucket.name, self.name, metadata=metadata, preserve_acl=preserve_acl, headers=headers) def restore(self, days, headers=None): """Restore an object from an archive. :type days: int :param days: The lifetime of the restored object (must be at least 1 day). If the object is already restored then this parameter can be used to readjust the lifetime of the restored object. In this case, the days param is with respect to the initial time of the request. If the object has not been restored, this param is with respect to the completion time of the request. """ response = self.bucket.connection.make_request( 'POST', self.bucket.name, self.name, data=self.RestoreBody % days, headers=headers, query_args='restore') if response.status not in (200, 202): provider = self.bucket.connection.provider raise provider.storage_response_error(response.status, response.reason, response.read()) boto-2.20.1/boto/s3/keyfile.py000066400000000000000000000105111225267101000160230ustar00rootroot00000000000000# Copyright 2013 Google Inc. # Copyright 2011, Nexenta Systems Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Wrapper class to expose a Key being read via a partial implementaiton of the Python file interface. The only functions supported are those needed for seeking in a Key open for reading. """ import os from boto.exception import StorageResponseError class KeyFile(): def __init__(self, key): self.key = key self.key.open_read() self.location = 0 self.closed = False self.softspace = -1 # Not implemented. 
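        # The attributes below mirror the read-only attributes of Python 2's
        # built-in file object so that code introspecting file-like objects
        # keeps working; most are placeholders rather than live values.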
self.mode = 'r' self.encoding = 'Undefined in KeyFile' self.errors = 'Undefined in KeyFile' self.newlines = 'Undefined in KeyFile' self.name = key.name def tell(self): if self.location is None: raise ValueError("I/O operation on closed file") return self.location def seek(self, pos, whence=os.SEEK_SET): self.key.close(fast=True) if whence == os.SEEK_END: # We need special handling for this case because sending an HTTP range GET # with EOF for the range start would cause an invalid range error. Instead # we position to one before EOF (plus pos) and then read one byte to # position at EOF. if self.key.size == 0: # Don't try to seek with an empty key. return pos = self.key.size + pos - 1 if pos < 0: raise IOError("Invalid argument") self.key.open_read(headers={"Range": "bytes=%d-" % pos}) self.key.read(1) self.location = pos + 1 return if whence == os.SEEK_SET: if pos < 0: raise IOError("Invalid argument") elif whence == os.SEEK_CUR: pos += self.location else: raise IOError('Invalid whence param (%d) passed to seek' % whence) try: self.key.open_read(headers={"Range": "bytes=%d-" % pos}) except StorageResponseError, e: # 416 Invalid Range means that the given starting byte was past the end # of file. We catch this because the Python file interface allows silently # seeking past the end of the file. if e.status != 416: raise self.location = pos def read(self, size): self.location += size return self.key.read(size) def close(self): self.key.close() self.location = None self.closed = True def isatty(self): return False # Non-file interface, useful for code that wants to dig into underlying Key # state. def getkey(self): return self.key # Unimplemented interfaces below here. def write(self, buf): raise NotImplementedError('write not implemented in KeyFile') def fileno(self): raise NotImplementedError('fileno not implemented in KeyFile') def flush(self): raise NotImplementedError('flush not implemented in KeyFile') def next(self): raise NotImplementedError('next not implemented in KeyFile') def readinto(self): raise NotImplementedError('readinto not implemented in KeyFile') def readline(self): raise NotImplementedError('readline not implemented in KeyFile') def readlines(self): raise NotImplementedError('readlines not implemented in KeyFile') def truncate(self): raise NotImplementedError('truncate not implemented in KeyFile') def writelines(self): raise NotImplementedError('writelines not implemented in KeyFile') def xreadlines(self): raise NotImplementedError('xreadlines not implemented in KeyFile') boto-2.20.1/boto/s3/lifecycle.py000066400000000000000000000171611225267101000163420ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

class Rule(object):
    """
    A Lifecycle rule for an S3 bucket.

    :ivar id: Unique identifier for the rule. The value cannot be longer
        than 255 characters.
    :ivar prefix: Prefix identifying one or more objects to which the
        rule applies.
    :ivar status: If Enabled, the rule is currently being applied.
        If Disabled, the rule is not currently being applied.
    :ivar expiration: An instance of `Expiration`. This indicates
        the lifetime of the objects that are subject to the rule.
    :ivar transition: An instance of `Transition`. This indicates
        when to transition to a different storage class.
    """
    def __init__(self, id=None, prefix=None, status=None, expiration=None,
                 transition=None):
        self.id = id
        self.prefix = prefix
        self.status = status
        if isinstance(expiration, (int, long)):
            # retain backwards compatibility with the older int-only API
            self.expiration = Expiration(days=expiration)
        else:
            # None or object
            self.expiration = expiration
        self.transition = transition

    def __repr__(self):
        return '<Rule: %s>' % self.id

    def startElement(self, name, attrs, connection):
        if name == 'Transition':
            self.transition = Transition()
            return self.transition
        elif name == 'Expiration':
            self.expiration = Expiration()
            return self.expiration
        return None

    def endElement(self, name, value, connection):
        if name == 'ID':
            self.id = value
        elif name == 'Prefix':
            self.prefix = value
        elif name == 'Status':
            self.status = value
        else:
            setattr(self, name, value)

    def to_xml(self):
        s = '<Rule>'
        s += '<ID>%s</ID>' % self.id
        s += '<Prefix>%s</Prefix>' % self.prefix
        s += '<Status>%s</Status>' % self.status
        if self.expiration is not None:
            s += self.expiration.to_xml()
        if self.transition is not None:
            s += self.transition.to_xml()
        s += '</Rule>'
        return s

class Expiration(object):
    """
    When an object will expire.

    :ivar days: The number of days until the object expires
    :ivar date: The date when the object will expire. Must be
        in ISO 8601 format.
    """
    def __init__(self, days=None, date=None):
        self.days = days
        self.date = date

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Days':
            self.days = int(value)
        elif name == 'Date':
            self.date = value

    def __repr__(self):
        if self.days is None:
            how_long = "on: %s" % self.date
        else:
            how_long = "in: %s days" % self.days
        return '<Expiration: %s>' % how_long

    def to_xml(self):
        s = '<Expiration>'
        if self.days is not None:
            s += '<Days>%s</Days>' % self.days
        elif self.date is not None:
            s += '<Date>%s</Date>' % self.date
        s += '</Expiration>'
        return s

class Transition(object):
    """
    A transition to a different storage class.

    :ivar days: The number of days until the object should be moved.
    :ivar date: The date when the object should be moved. Should be
        in ISO 8601 format.
    :ivar storage_class: The storage class to transition to. Valid
        values are GLACIER.
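
    Example (a minimal sketch; the rule id, prefix, and ``bucket`` are
    assumptions)::

        from boto.s3.lifecycle import Lifecycle, Transition

        to_glacier = Transition(days=30, storage_class='GLACIER')
        lifecycle = Lifecycle()
        lifecycle.add_rule('archive-logs', 'logs/', 'Enabled', None,
                           transition=to_glacier)
        bucket.configure_lifecycle(lifecycle)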
""" def __init__(self, days=None, date=None, storage_class=None): self.days = days self.date = date self.storage_class = storage_class def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Days': self.days = int(value) elif name == 'Date': self.date = value elif name == 'StorageClass': self.storage_class = value def __repr__(self): if self.days is None: how_long = "on: %s" % self.date else: how_long = "in: %s days" % self.days return '' % (how_long, self.storage_class) def to_xml(self): s = '' s += '%s' % self.storage_class if self.days is not None: s += '%s' % self.days elif self.date is not None: s += '%s' % self.date s += '' return s class Lifecycle(list): """ A container for the rules associated with a Lifecycle configuration. """ def startElement(self, name, attrs, connection): if name == 'Rule': rule = Rule() self.append(rule) return rule return None def endElement(self, name, value, connection): setattr(self, name, value) def to_xml(self): """ Returns a string containing the XML version of the Lifecycle configuration as defined by S3. """ s = '' s += '' for rule in self: s += rule.to_xml() s += '' return s def add_rule(self, id, prefix, status, expiration, transition=None): """ Add a rule to this Lifecycle configuration. This only adds the rule to the local copy. To install the new rule(s) on the bucket, you need to pass this Lifecycle config object to the configure_lifecycle method of the Bucket object. :type id: str :param id: Unique identifier for the rule. The value cannot be longer than 255 characters. :type prefix: str :iparam prefix: Prefix identifying one or more objects to which the rule applies. :type status: str :param status: If 'Enabled', the rule is currently being applied. If 'Disabled', the rule is not currently being applied. :type expiration: int :param expiration: Indicates the lifetime, in days, of the objects that are subject to the rule. The value must be a non-zero positive integer. A Expiration object instance is also perfect. :type transition: Transition :param transition: Indicates when an object transitions to a different storage class. """ rule = Rule(id, prefix, status, expiration, transition) self.append(rule) boto-2.20.1/boto/s3/multidelete.py000066400000000000000000000112251225267101000167130ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
from boto import handler
import xml.sax

class Deleted(object):
    """
    A successfully deleted object in a multi-object delete request.

    :ivar key: Key name of the object that was deleted.
    :ivar version_id: Version id of the object that was deleted.
    :ivar delete_marker: If True, indicates the object deleted
        was a DeleteMarker.
    :ivar delete_marker_version_id: Version ID of the delete marker
        deleted.
    """
    def __init__(self, key=None, version_id=None,
                 delete_marker=False, delete_marker_version_id=None):
        self.key = key
        self.version_id = version_id
        self.delete_marker = delete_marker
        self.delete_marker_version_id = delete_marker_version_id

    def __repr__(self):
        if self.version_id:
            return '<Deleted: %s.%s>' % (self.key, self.version_id)
        else:
            return '<Deleted: %s>' % self.key

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Key':
            self.key = value
        elif name == 'VersionId':
            self.version_id = value
        elif name == 'DeleteMarker':
            if value.lower() == 'true':
                self.delete_marker = True
        elif name == 'DeleteMarkerVersionId':
            self.delete_marker_version_id = value
        else:
            setattr(self, name, value)

class Error(object):
    """
    An unsuccessfully deleted object in a multi-object delete request.

    :ivar key: Key name of the object that was not deleted.
    :ivar version_id: Version id of the object that was not deleted.
    :ivar code: Status code of the failed delete operation.
    :ivar message: Status message of the failed delete operation.
    """
    def __init__(self, key=None, version_id=None,
                 code=None, message=None):
        self.key = key
        self.version_id = version_id
        self.code = code
        self.message = message

    def __repr__(self):
        if self.version_id:
            return '<Error: %s.%s(%s)>' % (self.key, self.version_id,
                                           self.code)
        else:
            return '<Error: %s(%s)>' % (self.key, self.code)

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Key':
            self.key = value
        elif name == 'VersionId':
            self.version_id = value
        elif name == 'Code':
            self.code = value
        elif name == 'Message':
            self.message = value
        else:
            setattr(self, name, value)

class MultiDeleteResult(object):
    """
    The status returned from a MultiObject Delete request.

    :ivar deleted: A list of successfully deleted objects. Note that if
        the quiet flag was specified in the request, this list will
        be empty because only error responses would be returned.
    :ivar errors: A list of unsuccessfully deleted objects.
    """

    def __init__(self, bucket=None):
        # Keep a reference to the bucket instead of discarding the param.
        self.bucket = bucket
        self.deleted = []
        self.errors = []

    def startElement(self, name, attrs, connection):
        if name == 'Deleted':
            d = Deleted()
            self.deleted.append(d)
            return d
        elif name == 'Error':
            e = Error()
            self.errors.append(e)
            return e
        return None

    def endElement(self, name, value, connection):
        setattr(self, name, value)
boto-2.20.1/boto/s3/multipart.py000066400000000000000000000263371225267101000164300ustar00rootroot00000000000000
# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
# Copyright (c) 2010, Eucalyptus Systems, Inc.
# All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

import user
import key
from boto import handler
import xml.sax

class CompleteMultiPartUpload(object):
    """
    Represents a completed MultiPart Upload. Contains the
    following useful attributes:

    * location - The URI of the completed upload
    * bucket_name - The name of the bucket in which the upload
      is contained
    * key_name - The name of the new, completed key
    * etag - The MD5 hash of the completed, combined upload
    * version_id - The version_id of the completed upload
    * encrypted - The value of the encryption header
    """

    def __init__(self, bucket=None):
        self.bucket = bucket
        self.location = None
        self.bucket_name = None
        self.key_name = None
        self.etag = None
        self.version_id = None
        self.encrypted = None

    def __repr__(self):
        return '<CompleteMultiPartUpload: %s.%s>' % (self.bucket_name,
                                                     self.key_name)

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Location':
            self.location = value
        elif name == 'Bucket':
            self.bucket_name = value
        elif name == 'Key':
            self.key_name = value
        elif name == 'ETag':
            self.etag = value
        else:
            setattr(self, name, value)

class Part(object):
    """
    Represents a single part in a MultiPart upload.
    Attributes include:

    * part_number - The integer part number
    * last_modified - The last modified date of this part
    * etag - The MD5 hash of this part
    * size - The size, in bytes, of this part
    """

    def __init__(self, bucket=None):
        self.bucket = bucket
        self.part_number = None
        self.last_modified = None
        self.etag = None
        self.size = None

    def __repr__(self):
        if isinstance(self.part_number, int):
            return '<Part %d>' % self.part_number
        else:
            return '<Part %s>' % None

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'PartNumber':
            self.part_number = int(value)
        elif name == 'LastModified':
            self.last_modified = value
        elif name == 'ETag':
            self.etag = value
        elif name == 'Size':
            self.size = int(value)
        else:
            setattr(self, name, value)

def part_lister(mpupload, part_number_marker=None):
    """
    A generator function for listing parts of a multipart upload.
    """
    more_results = True
    part = None
    while more_results:
        parts = mpupload.get_all_parts(None, part_number_marker)
        for part in parts:
            yield part
        part_number_marker = mpupload.next_part_number_marker
        more_results = mpupload.is_truncated

class MultiPartUpload(object):
    """
    Represents a MultiPart Upload operation.
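
    Example (a minimal sketch; the key name and part files are
    assumptions, and every part except the last must be at least 5 MB)::

        mp = bucket.initiate_multipart_upload('big-object')
        for i, path in enumerate(['part1.bin', 'part2.bin']):
            with open(path, 'rb') as fp:
                mp.upload_part_from_file(fp, part_num=i + 1)
        mp.complete_upload()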
""" def __init__(self, bucket=None): self.bucket = bucket self.bucket_name = None self.key_name = None self.id = id self.initiator = None self.owner = None self.storage_class = None self.initiated = None self.part_number_marker = None self.next_part_number_marker = None self.max_parts = None self.is_truncated = False self._parts = None def __repr__(self): return '' % self.key_name def __iter__(self): return part_lister(self) def to_xml(self): s = '\n' for part in self: s += ' \n' s += ' %d\n' % part.part_number s += ' %s\n' % part.etag s += ' \n' s += '' return s def startElement(self, name, attrs, connection): if name == 'Initiator': self.initiator = user.User(self) return self.initiator elif name == 'Owner': self.owner = user.User(self) return self.owner elif name == 'Part': part = Part(self.bucket) self._parts.append(part) return part return None def endElement(self, name, value, connection): if name == 'Bucket': self.bucket_name = value elif name == 'Key': self.key_name = value elif name == 'UploadId': self.id = value elif name == 'StorageClass': self.storage_class = value elif name == 'PartNumberMarker': self.part_number_marker = value elif name == 'NextPartNumberMarker': self.next_part_number_marker = value elif name == 'MaxParts': self.max_parts = int(value) elif name == 'IsTruncated': if value == 'true': self.is_truncated = True else: self.is_truncated = False elif name == 'Initiated': self.initiated = value else: setattr(self, name, value) def get_all_parts(self, max_parts=None, part_number_marker=None): """ Return the uploaded parts of this MultiPart Upload. This is a lower-level method that requires you to manually page through results. To simplify this process, you can just use the object itself as an iterator and it will automatically handle all of the paging with S3. """ self._parts = [] query_args = 'uploadId=%s' % self.id if max_parts: query_args += '&max-parts=%d' % max_parts if part_number_marker: query_args += '&part-number-marker=%s' % part_number_marker response = self.bucket.connection.make_request('GET', self.bucket.name, self.key_name, query_args=query_args) body = response.read() if response.status == 200: h = handler.XmlHandler(self, self) xml.sax.parseString(body, h) return self._parts def upload_part_from_file(self, fp, part_num, headers=None, replace=True, cb=None, num_cb=10, md5=None, size=None): """ Upload another part of this MultiPart Upload. :type fp: file :param fp: The file object you want to upload. :type part_num: int :param part_num: The number of this part. The other parameters are exactly as defined for the :class:`boto.s3.key.Key` set_contents_from_file method. :rtype: :class:`boto.s3.key.Key` or subclass :returns: The uploaded part containing the etag. """ if part_num < 1: raise ValueError('Part numbers must be greater than zero') query_args = 'uploadId=%s&partNumber=%d' % (self.id, part_num) key = self.bucket.new_key(self.key_name) key.set_contents_from_file(fp, headers=headers, replace=replace, cb=cb, num_cb=num_cb, md5=md5, reduced_redundancy=False, query_args=query_args, size=size) return key def copy_part_from_key(self, src_bucket_name, src_key_name, part_num, start=None, end=None, src_version_id=None, headers=None): """ Copy another part of this MultiPart Upload. :type src_bucket_name: string :param src_bucket_name: Name of the bucket containing the source key :type src_key_name: string :param src_key_name: Name of the source key :type part_num: int :param part_num: The number of this part. 
:type start: int :param start: Zero-based byte offset to start copying from :type end: int :param end: Zero-based byte offset to copy to :type src_version_id: string :param src_version_id: version_id of source object to copy from :type headers: dict :param headers: Any headers to pass along in the request """ if part_num < 1: raise ValueError('Part numbers must be greater than zero') query_args = 'uploadId=%s&partNumber=%d' % (self.id, part_num) if start is not None and end is not None: rng = 'bytes=%s-%s' % (start, end) provider = self.bucket.connection.provider if headers is None: headers = {} else: headers = headers.copy() headers[provider.copy_source_range_header] = rng return self.bucket.copy_key(self.key_name, src_bucket_name, src_key_name, src_version_id=src_version_id, storage_class=None, headers=headers, query_args=query_args) def complete_upload(self): """ Complete the MultiPart Upload operation. This method should be called when all parts of the file have been successfully uploaded to S3. :rtype: :class:`boto.s3.multipart.CompletedMultiPartUpload` :returns: An object representing the completed upload. """ xml = self.to_xml() return self.bucket.complete_multipart_upload(self.key_name, self.id, xml) def cancel_upload(self): """ Cancels a MultiPart Upload operation. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts. """ self.bucket.cancel_multipart_upload(self.key_name, self.id) boto-2.20.1/boto/s3/prefix.py000066400000000000000000000031751225267101000157000ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Prefix(object): def __init__(self, bucket=None, name=None): self.bucket = bucket self.name = name def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Prefix': self.name = value else: setattr(self, name, value) @property def provider(self): provider = None if self.bucket and self.bucket.connection: provider = self.bucket.connection.provider return provider boto-2.20.1/boto/s3/resumable_download_handler.py000066400000000000000000000363401225267101000217460ustar00rootroot00000000000000# Copyright 2010 Google Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import errno import httplib import os import re import socket import time import boto from boto import config, storage_uri_for_key from boto.connection import AWSAuthConnection from boto.exception import ResumableDownloadException from boto.exception import ResumableTransferDisposition from boto.s3.keyfile import KeyFile from boto.gs.key import Key as GSKey """ Resumable download handler. Resumable downloads will retry failed downloads, resuming at the byte count completed by the last download attempt. If too many retries happen with no progress (per configurable num_retries param), the download will be aborted. The caller can optionally specify a tracker_file_name param in the ResumableDownloadHandler constructor. If you do this, that file will save the state needed to allow retrying later, in a separate process (e.g., in a later run of gsutil). Note that resumable downloads work across providers (they depend only on support Range GETs), but this code is in the boto.s3 package because it is the wrong abstraction level to go in the top-level boto package. TODO: At some point we should refactor the code to have a storage_service package where all these provider-independent files go. """ class ByteTranslatingCallbackHandler(object): """ Proxy class that translates progress callbacks made by boto.s3.Key.get_file(), taking into account that we're resuming a download. """ def __init__(self, proxied_cb, download_start_point): self.proxied_cb = proxied_cb self.download_start_point = download_start_point def call(self, total_bytes_uploaded, total_size): self.proxied_cb(self.download_start_point + total_bytes_uploaded, total_size) def get_cur_file_size(fp, position_to_eof=False): """ Returns size of file, optionally leaving fp positioned at EOF. """ if isinstance(fp, KeyFile) and not position_to_eof: # Avoid EOF seek for KeyFile case as it's very inefficient. return fp.getkey().size if not position_to_eof: cur_pos = fp.tell() fp.seek(0, os.SEEK_END) cur_file_size = fp.tell() if not position_to_eof: fp.seek(cur_pos, os.SEEK_SET) return cur_file_size class ResumableDownloadHandler(object): """ Handler for resumable downloads. """ MIN_ETAG_LEN = 5 RETRYABLE_EXCEPTIONS = (httplib.HTTPException, IOError, socket.error, socket.gaierror) def __init__(self, tracker_file_name=None, num_retries=None): """ Constructor. Instantiate once for each downloaded file. :type tracker_file_name: string :param tracker_file_name: optional file name to save tracking info about this download. 
If supplied and the current process fails the download, it can be retried in a new process. If called with an existing file containing an unexpired timestamp, we'll resume the transfer for this file; else we'll start a new resumable download. :type num_retries: int :param num_retries: the number of times we'll re-try a resumable download making no progress. (Count resets every time we get progress, so download can span many more than this number of retries.) """ self.tracker_file_name = tracker_file_name self.num_retries = num_retries self.etag_value_for_current_download = None if tracker_file_name: self._load_tracker_file_etag() # Save download_start_point in instance state so caller can # find how much was transferred by this ResumableDownloadHandler # (across retries). self.download_start_point = None def _load_tracker_file_etag(self): f = None try: f = open(self.tracker_file_name, 'r') self.etag_value_for_current_download = f.readline().rstrip('\n') # We used to match an MD5-based regex to ensure that the etag was # read correctly. Since ETags need not be MD5s, we now do a simple # length sanity check instead. if len(self.etag_value_for_current_download) < self.MIN_ETAG_LEN: print('Couldn\'t read etag in tracker file (%s). Restarting ' 'download from scratch.' % self.tracker_file_name) except IOError, e: # Ignore non-existent file (happens first time a download # is attempted on an object), but warn user for other errors. if e.errno != errno.ENOENT: # Will restart because # self.etag_value_for_current_download == None. print('Couldn\'t read URI tracker file (%s): %s. Restarting ' 'download from scratch.' % (self.tracker_file_name, e.strerror)) finally: if f: f.close() def _save_tracker_info(self, key): self.etag_value_for_current_download = key.etag.strip('"\'') if not self.tracker_file_name: return f = None try: f = open(self.tracker_file_name, 'w') f.write('%s\n' % self.etag_value_for_current_download) except IOError, e: raise ResumableDownloadException( 'Couldn\'t write tracker file (%s): %s.\nThis can happen' 'if you\'re using an incorrectly configured download tool\n' '(e.g., gsutil configured to save tracker files to an ' 'unwritable directory)' % (self.tracker_file_name, e.strerror), ResumableTransferDisposition.ABORT) finally: if f: f.close() def _remove_tracker_file(self): if (self.tracker_file_name and os.path.exists(self.tracker_file_name)): os.unlink(self.tracker_file_name) def _attempt_resumable_download(self, key, fp, headers, cb, num_cb, torrent, version_id, hash_algs): """ Attempts a resumable download. Raises ResumableDownloadException if any problems occur. """ cur_file_size = get_cur_file_size(fp, position_to_eof=True) if (cur_file_size and self.etag_value_for_current_download and self.etag_value_for_current_download == key.etag.strip('"\'')): # Try to resume existing transfer. if cur_file_size > key.size: raise ResumableDownloadException( '%s is larger (%d) than %s (%d).\nDeleting tracker file, so ' 'if you re-try this download it will start from scratch' % (fp.name, cur_file_size, str(storage_uri_for_key(key)), key.size), ResumableTransferDisposition.ABORT) elif cur_file_size == key.size: if key.bucket.connection.debug >= 1: print 'Download complete.' return if key.bucket.connection.debug >= 1: print 'Resuming download.' 
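            # Resume by requesting only the remaining bytes with an HTTP
            # Range header; the progress callback is wrapped below so that
            # reported byte counts include the data already on disk.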
headers = headers.copy() headers['Range'] = 'bytes=%d-%d' % (cur_file_size, key.size - 1) cb = ByteTranslatingCallbackHandler(cb, cur_file_size).call self.download_start_point = cur_file_size else: if key.bucket.connection.debug >= 1: print 'Starting new resumable download.' self._save_tracker_info(key) self.download_start_point = 0 # Truncate the file, in case a new resumable download is being # started atop an existing file. fp.truncate(0) # Disable AWSAuthConnection-level retry behavior, since that would # cause downloads to restart from scratch. if isinstance(key, GSKey): key.get_file(fp, headers, cb, num_cb, torrent, version_id, override_num_retries=0, hash_algs=hash_algs) else: key.get_file(fp, headers, cb, num_cb, torrent, version_id, override_num_retries=0) fp.flush() def get_file(self, key, fp, headers, cb=None, num_cb=10, torrent=False, version_id=None, hash_algs=None): """ Retrieves a file from a Key :type key: :class:`boto.s3.key.Key` or subclass :param key: The Key object from which upload is to be downloaded :type fp: file :param fp: File pointer into which data should be downloaded :type headers: string :param: headers to send when retrieving the files :type cb: function :param cb: (optional) a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from the storage service and the second representing the total number of bytes that need to be transmitted. :type num_cb: int :param num_cb: (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. :type torrent: bool :param torrent: Flag for whether to get a torrent for the file :type version_id: string :param version_id: The version ID (optional) :type hash_algs: dictionary :param hash_algs: (optional) Dictionary of hash algorithms and corresponding hashing class that implements update() and digest(). Defaults to {'md5': hashlib/md5.md5}. Raises ResumableDownloadException if a problem occurs during the transfer. """ debug = key.bucket.connection.debug if not headers: headers = {} # Use num-retries from constructor if one was provided; else check # for a value specified in the boto config file; else default to 6. if self.num_retries is None: self.num_retries = config.getint('Boto', 'num_retries', 6) progress_less_iterations = 0 while True: # Retry as long as we're making progress. had_file_bytes_before_attempt = get_cur_file_size(fp) try: self._attempt_resumable_download(key, fp, headers, cb, num_cb, torrent, version_id, hash_algs) # Download succceded, so remove the tracker file (if have one). self._remove_tracker_file() # Previously, check_final_md5() was called here to validate # downloaded file's checksum, however, to be consistent with # non-resumable downloads, this call was removed. Checksum # validation of file contents should be done by the caller. if debug >= 1: print 'Resumable download complete.' return except self.RETRYABLE_EXCEPTIONS, e: if debug >= 1: print('Caught exception (%s)' % e.__repr__()) if isinstance(e, IOError) and e.errno == errno.EPIPE: # Broken pipe error causes httplib to immediately # close the socket (http://bugs.python.org/issue5542), # so we need to close and reopen the key before resuming # the download. 
                if isinstance(key, GSKey):
                    key.get_file(fp, headers, cb, num_cb, torrent,
                                 version_id, override_num_retries=0,
                                 hash_algs=hash_algs)
                else:
                    key.get_file(fp, headers, cb, num_cb, torrent,
                                 version_id, override_num_retries=0)
            except ResumableDownloadException, e:
                if (e.disposition ==
                        ResumableTransferDisposition.ABORT_CUR_PROCESS):
                    if debug >= 1:
                        print('Caught non-retryable ResumableDownloadException '
                              '(%s)' % e.message)
                    raise
                elif (e.disposition == ResumableTransferDisposition.ABORT):
                    if debug >= 1:
                        print('Caught non-retryable ResumableDownloadException '
                              '(%s); aborting and removing tracker file' %
                              e.message)
                    self._remove_tracker_file()
                    raise
                else:
                    if debug >= 1:
                        print('Caught ResumableDownloadException (%s) - will '
                              'retry' % e.message)

            # At this point we had a re-tryable failure; see if made progress.
            if get_cur_file_size(fp) > had_file_bytes_before_attempt:
                progress_less_iterations = 0
            else:
                progress_less_iterations += 1

            if progress_less_iterations > self.num_retries:
                # Don't retry any longer in the current process.
                raise ResumableDownloadException(
                    'Too many resumable download attempts failed without '
                    'progress. You might try this download again later',
                    ResumableTransferDisposition.ABORT_CUR_PROCESS)

            # Close the key, in case a previous download died partway
            # through and left data in the underlying key HTTP buffer.
            # Do this within a try/except block in case the connection is
            # closed (since key.close() attempts to do a final read, in which
            # case this read attempt would get an IncompleteRead exception,
            # which we can safely ignore.
            try:
                key.close()
            except httplib.IncompleteRead:
                pass

            sleep_time_secs = 2**progress_less_iterations
            if debug >= 1:
                print('Got retryable failure (%d progress-less in a row).\n'
                      'Sleeping %d seconds before re-trying' %
                      (progress_less_iterations, sleep_time_secs))
            time.sleep(sleep_time_secs)
boto-2.20.1/boto/s3/tagging.py000066400000000000000000000033041225267101000160150ustar00rootroot00000000000000from boto import handler
import xml.sax


class Tag(object):
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'Key':
            self.key = value
        elif name == 'Value':
            self.value = value

    def to_xml(self):
        return '<Tag><Key>%s</Key><Value>%s</Value></Tag>' % (
            self.key, self.value)

    def __eq__(self, other):
        return (self.key == other.key and self.value == other.value)


class TagSet(list):
    def startElement(self, name, attrs, connection):
        if name == 'Tag':
            tag = Tag()
            self.append(tag)
            return tag
        return None

    def endElement(self, name, value, connection):
        setattr(self, name, value)

    def add_tag(self, key, value):
        tag = Tag(key, value)
        self.append(tag)

    def to_xml(self):
        xml = '<TagSet>'
        for tag in self:
            xml += tag.to_xml()
        xml += '</TagSet>'
        return xml


class Tags(list):
    """A container for the tags associated with a bucket."""

    def startElement(self, name, attrs, connection):
        if name == 'TagSet':
            tag_set = TagSet()
            self.append(tag_set)
            return tag_set
        return None

    def endElement(self, name, value, connection):
        setattr(self, name, value)

    def to_xml(self):
        xml = '<Tagging>'
        for tag_set in self:
            xml += tag_set.to_xml()
        xml += '</Tagging>'
        return xml

    def add_tag_set(self, tag_set):
        self.append(tag_set)
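
# Example (illustrative): building and applying a tag set. Assumes an
# existing boto.s3.bucket.Bucket instance named ``bucket``; Bucket.set_tags()
# is assumed to be available, as in boto 2.x.
#
#     tags = Tags()
#     tag_set = TagSet()
#     tag_set.add_tag('project', 'demo')
#     tags.add_tag_set(tag_set)
#     bucket.set_tags(tags)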
boto-2.20.1/boto/s3/user.py000066400000000000000000000036611225267101000153610ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.


class User:
    def __init__(self, parent=None, id='', display_name=''):
        if parent:
            parent.owner = self
        self.type = None
        self.id = id
        self.display_name = display_name

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'DisplayName':
            self.display_name = value
        elif name == 'ID':
            self.id = value
        else:
            setattr(self, name, value)

    def to_xml(self, element_name='Owner'):
        if self.type:
            s = '<%s xsi:type="%s">' % (element_name, self.type)
        else:
            s = '<%s>' % element_name
        s += '<ID>%s</ID>' % self.id
        s += '<DisplayName>%s</DisplayName>' % self.display_name
        s += '</%s>' % element_name
        return s
boto-2.20.1/boto/s3/website.py000066400000000000000000000245571225267101000160520ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#

def tag(key, value):
    start = '<%s>' % key
    end = '</%s>' % key
    return '%s%s%s' % (start, value, end)
class WebsiteConfiguration(object):
    """
    Website configuration for a bucket.

    :ivar suffix: Suffix that is appended to a request that is for a
        "directory" on the website endpoint (e.g. if the suffix is
        index.html and you make a request to samplebucket/images/
        the data that is returned will be for the object with the
        key name images/index.html).  The suffix must not be empty
        and must not include a slash character.

    :ivar error_key: The object key name to use when a 4xx class error
        occurs.  This key identifies the page that is returned when
        such an error occurs.

    :ivar redirect_all_requests_to: Describes the redirect behavior for
        every request to this bucket's website endpoint.  If this value
        is not None, no other values are considered when configuring the
        website configuration for the bucket.  This is an instance of
        ``RedirectLocation``.

    :ivar routing_rules: ``RoutingRules`` object which specifies
        conditions and redirects that apply when the conditions are met.
    """

    def __init__(self, suffix=None, error_key=None,
                 redirect_all_requests_to=None, routing_rules=None):
        self.suffix = suffix
        self.error_key = error_key
        self.redirect_all_requests_to = redirect_all_requests_to
        if routing_rules is not None:
            self.routing_rules = routing_rules
        else:
            self.routing_rules = RoutingRules()

    def startElement(self, name, attrs, connection):
        if name == 'RoutingRules':
            self.routing_rules = RoutingRules()
            return self.routing_rules
        elif name == 'IndexDocument':
            return _XMLKeyValue([('Suffix', 'suffix')], container=self)
        elif name == 'ErrorDocument':
            return _XMLKeyValue([('Key', 'error_key')], container=self)

    def endElement(self, name, value, connection):
        pass

    def to_xml(self):
        parts = ['<?xml version="1.0" encoding="UTF-8"?>',
                 '<WebsiteConfiguration '
                 'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">']
        if self.suffix is not None:
            parts.append(tag('IndexDocument', tag('Suffix', self.suffix)))
        if self.error_key is not None:
            parts.append(tag('ErrorDocument', tag('Key', self.error_key)))
        if self.redirect_all_requests_to is not None:
            parts.append(self.redirect_all_requests_to.to_xml())
        if self.routing_rules:
            parts.append(self.routing_rules.to_xml())
        parts.append('</WebsiteConfiguration>')
        return ''.join(parts)


class _XMLKeyValue(object):
    def __init__(self, translator, container=None):
        self.translator = translator
        if container:
            self.container = container
        else:
            self.container = self

    def startElement(self, name, attrs, connection):
        pass

    def endElement(self, name, value, connection):
        for xml_key, attr_name in self.translator:
            if name == xml_key:
                setattr(self.container, attr_name, value)

    def to_xml(self):
        parts = []
        for xml_key, attr_name in self.translator:
            content = getattr(self.container, attr_name)
            if content is not None:
                parts.append(tag(xml_key, content))
        return ''.join(parts)


class RedirectLocation(_XMLKeyValue):
    """Specify redirect behavior for every request to a bucket's endpoint.

    :ivar hostname: Name of the host where requests will be redirected.

    :ivar protocol: Protocol to use (http, https) when redirecting requests.
        The default is the protocol that is used in the original request.
    """

    TRANSLATOR = [('HostName', 'hostname'),
                  ('Protocol', 'protocol'),
                  ]

    def __init__(self, hostname=None, protocol=None):
        self.hostname = hostname
        self.protocol = protocol
        super(RedirectLocation, self).__init__(self.TRANSLATOR)

    def to_xml(self):
        return tag('RedirectAllRequestsTo',
                   super(RedirectLocation, self).to_xml())


class RoutingRules(list):
    def add_rule(self, rule):
        """

        :type rule: :class:`boto.s3.website.RoutingRule`
        :param rule: A routing rule.

        :return: This ``RoutingRules`` object is returned,
            so that it can chain subsequent calls.
        """
        self.append(rule)
        return self

    def startElement(self, name, attrs, connection):
        if name == 'RoutingRule':
            rule = RoutingRule(Condition(), Redirect())
            self.add_rule(rule)
            return rule

    def endElement(self, name, value, connection):
        pass

    def __repr__(self):
        return "RoutingRules(%s)" % super(RoutingRules, self).__repr__()

    def to_xml(self):
        inner_text = []
        for rule in self:
            inner_text.append(rule.to_xml())
        return tag('RoutingRules', '\n'.join(inner_text))
class RoutingRule(object):
    """Represents a single routing rule.

    There are convenience methods to make creating rules more concise::

        rule = RoutingRule.when(key_prefix='foo/').then_redirect('example.com')

    :ivar condition: Describes condition that must be met for the
        specified redirect to apply.

    :ivar redirect: Specifies redirect behavior.  You can redirect requests
        to another host, to another page, or with another protocol.  In the
        event of an error, you can specify a different error code to return.
    """

    def __init__(self, condition=None, redirect=None):
        self.condition = condition
        self.redirect = redirect

    def startElement(self, name, attrs, connection):
        if name == 'Condition':
            return self.condition
        elif name == 'Redirect':
            return self.redirect

    def endElement(self, name, value, connection):
        pass

    def to_xml(self):
        parts = []
        if self.condition:
            parts.append(self.condition.to_xml())
        if self.redirect:
            parts.append(self.redirect.to_xml())
        return tag('RoutingRule', '\n'.join(parts))

    @classmethod
    def when(cls, key_prefix=None, http_error_code=None):
        return cls(Condition(key_prefix=key_prefix,
                             http_error_code=http_error_code), None)

    def then_redirect(self, hostname=None, protocol=None, replace_key=None,
                      replace_key_prefix=None, http_redirect_code=None):
        self.redirect = Redirect(
            hostname=hostname, protocol=protocol,
            replace_key=replace_key,
            replace_key_prefix=replace_key_prefix,
            http_redirect_code=http_redirect_code)
        return self


class Condition(_XMLKeyValue):
    """
    :ivar key_prefix: The object key name prefix when the redirect is applied.
        For example, to redirect requests for ExamplePage.html, the key prefix
        will be ExamplePage.html.  To redirect requests for all pages with the
        prefix docs/, the key prefix will be docs/, which identifies all
        objects in the docs/ folder.

    :ivar http_error_code: The HTTP error code when the redirect is applied.
        In the event of an error, if the error code equals this value, then
        the specified redirect is applied.
    """

    TRANSLATOR = [
        ('KeyPrefixEquals', 'key_prefix'),
        ('HttpErrorCodeReturnedEquals', 'http_error_code'),
    ]

    def __init__(self, key_prefix=None, http_error_code=None):
        self.key_prefix = key_prefix
        self.http_error_code = http_error_code
        super(Condition, self).__init__(self.TRANSLATOR)

    def to_xml(self):
        return tag('Condition', super(Condition, self).to_xml())


class Redirect(_XMLKeyValue):
    """
    :ivar hostname: The host name to use in the redirect request.

    :ivar protocol: The protocol to use in the redirect request.  Can be
        either 'http' or 'https'.

    :ivar replace_key: The specific object key to use in the redirect request.
        For example, redirect request to error.html.

    :ivar replace_key_prefix: The object key prefix to use in the redirect
        request.  For example, to redirect requests for all pages with prefix
        docs/ (objects in the docs/ folder) to documents/, you can set a
        condition block with KeyPrefixEquals set to docs/ and in the Redirect
        set ReplaceKeyPrefixWith to /documents.

    :ivar http_redirect_code: The HTTP redirect code to use on the response.
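
    Example (illustrative) of building a rule with the convenience
    constructors on ``RoutingRule`` and attaching it to a website
    configuration::

        rules = RoutingRules()
        rules.add_rule(RoutingRule.when(key_prefix='docs/').then_redirect(
            hostname='example.com', http_redirect_code='301'))
        config = WebsiteConfiguration(suffix='index.html',
                                      routing_rules=rules)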
""" TRANSLATOR = [ ('Protocol', 'protocol'), ('HostName', 'hostname'), ('ReplaceKeyWith', 'replace_key'), ('ReplaceKeyPrefixWith', 'replace_key_prefix'), ('HttpRedirectCode', 'http_redirect_code'), ] def __init__(self, hostname=None, protocol=None, replace_key=None, replace_key_prefix=None, http_redirect_code=None): self.hostname = hostname self.protocol = protocol self.replace_key = replace_key self.replace_key_prefix = replace_key_prefix self.http_redirect_code = http_redirect_code super(Redirect, self).__init__(self.TRANSLATOR) def to_xml(self): return tag('Redirect', super(Redirect, self).to_xml()) boto-2.20.1/boto/sdb/000077500000000000000000000000001225267101000142465ustar00rootroot00000000000000boto-2.20.1/boto/sdb/__init__.py000066400000000000000000000053271225267101000163660ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from .regioninfo import SDBRegionInfo def regions(): """ Get all available regions for the SDB service. :rtype: list :return: A list of :class:`boto.sdb.regioninfo.RegionInfo` instances """ return [SDBRegionInfo(name='us-east-1', endpoint='sdb.amazonaws.com'), SDBRegionInfo(name='eu-west-1', endpoint='sdb.eu-west-1.amazonaws.com'), SDBRegionInfo(name='us-west-1', endpoint='sdb.us-west-1.amazonaws.com'), SDBRegionInfo(name='sa-east-1', endpoint='sdb.sa-east-1.amazonaws.com'), SDBRegionInfo(name='us-west-2', endpoint='sdb.us-west-2.amazonaws.com'), SDBRegionInfo(name='ap-northeast-1', endpoint='sdb.ap-northeast-1.amazonaws.com'), SDBRegionInfo(name='ap-southeast-1', endpoint='sdb.ap-southeast-1.amazonaws.com'), SDBRegionInfo(name='ap-southeast-2', endpoint='sdb.ap-southeast-2.amazonaws.com') ] def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.sdb.connection.SDBConnection`. :type: str :param region_name: The name of the region to connect to. 
:rtype: :class:`boto.sdb.connection.SDBConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/sdb/connection.py000066400000000000000000000626031225267101000167660ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import xml.sax import threading import boto from boto import handler from boto.connection import AWSQueryConnection from boto.sdb.domain import Domain, DomainMetaData from boto.sdb.item import Item from boto.sdb.regioninfo import SDBRegionInfo from boto.exception import SDBResponseError class ItemThread(threading.Thread): """ A threaded :class:`Item ` retriever utility class. Retrieved :class:`Item ` objects are stored in the ``items`` instance variable after :py:meth:`run() ` is called. .. tip:: The item retrieval will not start until the :func:`run() ` method is called. """ def __init__(self, name, domain_name, item_names): """ :param str name: A thread name. Used for identification. :param str domain_name: The name of a SimpleDB :class:`Domain ` :type item_names: string or list of strings :param item_names: The name(s) of the items to retrieve from the specified :class:`Domain `. :ivar list items: A list of items retrieved. Starts as empty list. """ threading.Thread.__init__(self, name=name) #print 'starting %s with %d items' % (name, len(item_names)) self.domain_name = domain_name self.conn = SDBConnection() self.item_names = item_names self.items = [] def run(self): """ Start the threaded retrieval of items. Populates the ``items`` list with :class:`Item ` objects. """ for item_name in self.item_names: item = self.conn.get_attributes(self.domain_name, item_name) self.items.append(item) #boto.set_stream_logger('sdb') class SDBConnection(AWSQueryConnection): """ This class serves as a gateway to your SimpleDB region (defaults to us-east-1). Methods within allow access to SimpleDB :class:`Domain ` objects and their associated :class:`Item ` objects. .. tip:: While you may instantiate this class directly, it may be easier to go through :py:func:`boto.connect_sdb`. 
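
    A minimal usage sketch (illustrative; assumes valid AWS credentials
    are available via environment variables or your boto config)::

        import boto

        sdb = boto.connect_sdb()
        domain = sdb.create_domain('my_domain')
        domain.put_attributes('item1', {'color': 'blue'})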
""" DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'sdb.us-east-1.amazonaws.com' APIVersion = '2009-04-15' ResponseError = SDBResponseError def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', converter=None, security_token=None, validate_certs=True): """ For any keywords that aren't documented, refer to the parent class, :py:class:`boto.connection.AWSAuthConnection`. You can avoid having to worry about these keyword arguments by instantiating these objects via :py:func:`boto.connect_sdb`. :type region: :class:`boto.sdb.regioninfo.SDBRegionInfo` :keyword region: Explicitly specify a region. Defaults to ``us-east-1`` if not specified. You may also specify the region in your ``boto.cfg``: .. code-block:: cfg [SDB] region = eu-west-1 """ if not region: region_name = boto.config.get('SDB', 'region', self.DefaultRegionName) for reg in boto.sdb.regions(): if reg.name == region_name: region = reg break self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token=security_token, validate_certs=validate_certs) self.box_usage = 0.0 self.converter = converter self.item_cls = Item def _required_auth_capability(self): return ['sdb'] def set_item_cls(self, cls): """ While the default item class is :py:class:`boto.sdb.item.Item`, this default may be overridden. Use this method to change a connection's item class. :param object cls: The new class to set as this connection's item class. See the default item class for inspiration as to what your replacement should/could look like. 
""" self.item_cls = cls def _build_name_value_list(self, params, attributes, replace=False, label='Attribute'): keys = sorted(attributes.keys()) i = 1 for key in keys: value = attributes[key] if isinstance(value, list): for v in value: params['%s.%d.Name' % (label, i)] = key if self.converter: v = self.converter.encode(v) params['%s.%d.Value' % (label, i)] = v if replace: params['%s.%d.Replace' % (label, i)] = 'true' i += 1 else: params['%s.%d.Name' % (label, i)] = key if self.converter: value = self.converter.encode(value) params['%s.%d.Value' % (label, i)] = value if replace: params['%s.%d.Replace' % (label, i)] = 'true' i += 1 def _build_expected_value(self, params, expected_value): params['Expected.1.Name'] = expected_value[0] if expected_value[1] is True: params['Expected.1.Exists'] = 'true' elif expected_value[1] is False: params['Expected.1.Exists'] = 'false' else: params['Expected.1.Value'] = expected_value[1] def _build_batch_list(self, params, items, replace=False): item_names = items.keys() i = 0 for item_name in item_names: params['Item.%d.ItemName' % i] = item_name j = 0 item = items[item_name] if item is not None: attr_names = item.keys() for attr_name in attr_names: value = item[attr_name] if isinstance(value, list): for v in value: if self.converter: v = self.converter.encode(v) params['Item.%d.Attribute.%d.Name' % (i, j)] = attr_name params['Item.%d.Attribute.%d.Value' % (i, j)] = v if replace: params['Item.%d.Attribute.%d.Replace' % (i, j)] = 'true' j += 1 else: params['Item.%d.Attribute.%d.Name' % (i, j)] = attr_name if self.converter: value = self.converter.encode(value) params['Item.%d.Attribute.%d.Value' % (i, j)] = value if replace: params['Item.%d.Attribute.%d.Replace' % (i, j)] = 'true' j += 1 i += 1 def _build_name_list(self, params, attribute_names): i = 1 attribute_names.sort() for name in attribute_names: params['Attribute.%d.Name' % i] = name i += 1 def get_usage(self): """ Returns the BoxUsage (in USD) accumulated on this specific SDBConnection instance. .. tip:: This can be out of date, and should only be treated as a rough estimate. Also note that this estimate only applies to the requests made on this specific connection instance. It is by no means an account-wide estimate. :rtype: float :return: The accumulated BoxUsage of all requests made on the connection. """ return self.box_usage def print_usage(self): """ Print the BoxUsage and approximate costs of all requests made on this specific SDBConnection instance. .. tip:: This can be out of date, and should only be treated as a rough estimate. Also note that this estimate only applies to the requests made on this specific connection instance. It is by no means an account-wide estimate. """ print 'Total Usage: %f compute seconds' % self.box_usage cost = self.box_usage * 0.14 print 'Approximate Cost: $%f' % cost def get_domain(self, domain_name, validate=True): """ Retrieves a :py:class:`boto.sdb.domain.Domain` object whose name matches ``domain_name``. :param str domain_name: The name of the domain to retrieve :keyword bool validate: When ``True``, check to see if the domain actually exists. If ``False``, blindly return a :py:class:`Domain ` object with the specified name set. :raises: :py:class:`boto.exception.SDBResponseError` if ``validate`` is ``True`` and no match could be found. 
:rtype: :py:class:`boto.sdb.domain.Domain` :return: The requested domain """ domain = Domain(self, domain_name) if validate: self.select(domain, """select * from `%s` limit 1""" % domain_name) return domain def lookup(self, domain_name, validate=True): """ Lookup an existing SimpleDB domain. This differs from :py:meth:`get_domain` in that ``None`` is returned if ``validate`` is ``True`` and no match was found (instead of raising an exception). :param str domain_name: The name of the domain to retrieve :param bool validate: If ``True``, a ``None`` value will be returned if the specified domain can't be found. If ``False``, a :py:class:`Domain ` object will be dumbly returned, regardless of whether it actually exists. :rtype: :class:`boto.sdb.domain.Domain` object or ``None`` :return: The Domain object or ``None`` if the domain does not exist. """ try: domain = self.get_domain(domain_name, validate) except: domain = None return domain def get_all_domains(self, max_domains=None, next_token=None): """ Returns a :py:class:`boto.resultset.ResultSet` containing all :py:class:`boto.sdb.domain.Domain` objects associated with this connection's Access Key ID. :keyword int max_domains: Limit the returned :py:class:`ResultSet ` to the specified number of members. :keyword str next_token: A token string that was returned in an earlier call to this method as the ``next_token`` attribute on the returned :py:class:`ResultSet ` object. This attribute is set if there are more than Domains than the value specified in the ``max_domains`` keyword. Pass the ``next_token`` value from you earlier query in this keyword to get the next 'page' of domains. """ params = {} if max_domains: params['MaxNumberOfDomains'] = max_domains if next_token: params['NextToken'] = next_token return self.get_list('ListDomains', params, [('DomainName', Domain)]) def create_domain(self, domain_name): """ Create a SimpleDB domain. :type domain_name: string :param domain_name: The name of the new domain :rtype: :class:`boto.sdb.domain.Domain` object :return: The newly created domain """ params = {'DomainName':domain_name} d = self.get_object('CreateDomain', params, Domain) d.name = domain_name return d def get_domain_and_name(self, domain_or_name): """ Given a ``str`` or :class:`boto.sdb.domain.Domain`, return a ``tuple`` with the following members (in order): * In instance of :class:`boto.sdb.domain.Domain` for the requested domain * The domain's name as a ``str`` :type domain_or_name: ``str`` or :class:`boto.sdb.domain.Domain` :param domain_or_name: The domain or domain name to get the domain and name for. :raises: :class:`boto.exception.SDBResponseError` when an invalid domain name is specified. :rtype: tuple :return: A ``tuple`` with contents outlined as per above. """ if (isinstance(domain_or_name, Domain)): return (domain_or_name, domain_or_name.name) else: return (self.get_domain(domain_or_name), domain_or_name) def delete_domain(self, domain_or_name): """ Delete a SimpleDB domain. .. caution:: This will delete the domain and all items within the domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :rtype: bool :return: True if successful """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName':domain_name} return self.get_status('DeleteDomain', params) def domain_metadata(self, domain_or_name): """ Get the Metadata for a SimpleDB domain. 
:type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :rtype: :class:`boto.sdb.domain.DomainMetaData` object :return: The newly created domain metadata object """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName':domain_name} d = self.get_object('DomainMetadata', params, DomainMetaData) d.domain = domain return d def put_attributes(self, domain_or_name, item_name, attributes, replace=True, expected_value=None): """ Store attributes for a given item in a domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :type item_name: string :param item_name: The name of the item whose attributes are being stored. :type attribute_names: dict or dict-like object :param attribute_names: The name/value pairs to store as attributes :type expected_value: list :param expected_value: If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form: * ['name', 'value'] In which case the call will first verify that the attribute "name" of this item has a value of "value". If it does, the delete will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form: * ['name', True|False] which will simply check for the existence (True) or non-existence (False) of the attribute. :type replace: bool :param replace: Whether the attribute values passed in will replace existing values or will be added as addition values. Defaults to True. :rtype: bool :return: True if successful """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName' : domain_name, 'ItemName' : item_name} self._build_name_value_list(params, attributes, replace) if expected_value: self._build_expected_value(params, expected_value) return self.get_status('PutAttributes', params) def batch_put_attributes(self, domain_or_name, items, replace=True): """ Store attributes for multiple items in a domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :type items: dict or dict-like object :param items: A dictionary-like object. The keys of the dictionary are the item names and the values are themselves dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call. :type replace: bool :param replace: Whether the attribute values passed in will replace existing values or will be added as addition values. Defaults to True. :rtype: bool :return: True if successful """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName' : domain_name} self._build_batch_list(params, items, replace) return self.get_status('BatchPutAttributes', params, verb='POST') def get_attributes(self, domain_or_name, item_name, attribute_names=None, consistent_read=False, item=None): """ Retrieve attributes for a given item in a domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :type item_name: string :param item_name: The name of the item whose attributes are being retrieved. :type attribute_names: string or list of strings :param attribute_names: An attribute name or list of attribute names. This parameter is optional. 
If not supplied, all attributes will be retrieved for the item. :type consistent_read: bool :param consistent_read: When set to true, ensures that the most recent data is returned. :type item: :class:`boto.sdb.item.Item` :keyword item: Instead of instantiating a new Item object, you may specify one to update. :rtype: :class:`boto.sdb.item.Item` :return: An Item with the requested attribute name/values set on it """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName' : domain_name, 'ItemName' : item_name} if consistent_read: params['ConsistentRead'] = 'true' if attribute_names: if not isinstance(attribute_names, list): attribute_names = [attribute_names] self.build_list_params(params, attribute_names, 'AttributeName') response = self.make_request('GetAttributes', params) body = response.read() if response.status == 200: if item == None: item = self.item_cls(domain, item_name) h = handler.XmlHandler(item, self) xml.sax.parseString(body, h) return item else: raise SDBResponseError(response.status, response.reason, body) def delete_attributes(self, domain_or_name, item_name, attr_names=None, expected_value=None): """ Delete attributes from a given item in a domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :type item_name: string :param item_name: The name of the item whose attributes are being deleted. :type attributes: dict, list or :class:`boto.sdb.item.Item` :param attributes: Either a list containing attribute names which will cause all values associated with that attribute name to be deleted or a dict or Item containing the attribute names and keys and list of values to delete as the value. If no value is supplied, all attribute name/values for the item will be deleted. :type expected_value: list :param expected_value: If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form: * ['name', 'value'] In which case the call will first verify that the attribute "name" of this item has a value of "value". If it does, the delete will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form: * ['name', True|False] which will simply check for the existence (True) or non-existence (False) of the attribute. :rtype: bool :return: True if successful """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName':domain_name, 'ItemName' : item_name} if attr_names: if isinstance(attr_names, list): self._build_name_list(params, attr_names) elif isinstance(attr_names, dict) or isinstance(attr_names, self.item_cls): self._build_name_value_list(params, attr_names) if expected_value: self._build_expected_value(params, expected_value) return self.get_status('DeleteAttributes', params) def batch_delete_attributes(self, domain_or_name, items): """ Delete multiple items in a domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object. :param domain_or_name: Either the name of a domain or a Domain object :type items: dict or dict-like object :param items: A dictionary-like object. The keys of the dictionary are the item names and the values are either: * dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call. The attribute name/value pairs will only be deleted if they match the name/value pairs passed in. 
* None which means that all attributes associated with the item should be deleted. :return: True if successful """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'DomainName' : domain_name} self._build_batch_list(params, items, False) return self.get_status('BatchDeleteAttributes', params, verb='POST') def select(self, domain_or_name, query='', next_token=None, consistent_read=False): """ Returns a set of Attributes for item names within domain_name that match the query. The query must be expressed in using the SELECT style syntax rather than the original SimpleDB query language. Even though the select request does not require a domain object, a domain object must be passed into this method so the Item objects returned can point to the appropriate domain. :type domain_or_name: string or :class:`boto.sdb.domain.Domain` object :param domain_or_name: Either the name of a domain or a Domain object :type query: string :param query: The SimpleDB query to be performed. :type consistent_read: bool :param consistent_read: When set to true, ensures that the most recent data is returned. :rtype: ResultSet :return: An iterator containing the results. """ domain, domain_name = self.get_domain_and_name(domain_or_name) params = {'SelectExpression' : query} if consistent_read: params['ConsistentRead'] = 'true' if next_token: params['NextToken'] = next_token try: return self.get_list('Select', params, [('Item', self.item_cls)], parent=domain) except SDBResponseError, e: e.body = "Query: %s\n%s" % (query, e.body) raise e boto-2.20.1/boto/sdb/db/000077500000000000000000000000001225267101000146335ustar00rootroot00000000000000boto-2.20.1/boto/sdb/db/__init__.py000066400000000000000000000021241225267101000167430ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
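
# Example (illustrative): querying with SDBConnection.select() from
# connection.py above. Assumes a populated domain named 'my_domain'.
#
#     import boto
#     sdb = boto.connect_sdb()
#     domain = sdb.get_domain('my_domain')
#     rs = domain.select("select * from `my_domain` where color = 'blue'")
#     for item in rs:
#         print item.name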
boto-2.20.1/boto/sdb/db/blob.py000066400000000000000000000045361225267101000161330ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Blob(object): """Blob object""" def __init__(self, value=None, file=None, id=None): self._file = file self.id = id self.value = value @property def file(self): from StringIO import StringIO if self._file: f = self._file else: f = StringIO(self.value) return f def __str__(self): return unicode(self).encode('utf-8') def __unicode__(self): if hasattr(self.file, "get_contents_as_string"): value = self.file.get_contents_as_string() else: value = self.file.getvalue() if isinstance(value, unicode): return value else: return value.decode('utf-8') def read(self): if hasattr(self.file, "get_contents_as_string"): return self.file.get_contents_as_string() else: return self.file.read() def readline(self): return self.file.readline() def next(self): return self.file.next() def __iter__(self): return iter(self.file) @property def size(self): if self._file: return self._file.size elif self.value: return len(self.value) else: return 0 boto-2.20.1/boto/sdb/db/key.py000066400000000000000000000037701225267101000160040ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
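
# Example (illustrative): wrapping an in-memory string with the Blob class
# defined in blob.py above.
#
#     b = Blob(value='hello world')
#     print b.size    # 11
#     print b.read()  # 'hello world'
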
class Key(object): @classmethod def from_path(cls, *args, **kwds): raise NotImplementedError("Paths are not currently supported") def __init__(self, encoded=None, obj=None): self.name = None if obj: self.id = obj.id self.kind = obj.kind() else: self.id = None self.kind = None def app(self): raise NotImplementedError("Applications are not currently supported") def kind(self): return self.kind def id(self): return self.id def name(self): raise NotImplementedError("Key Names are not currently supported") def id_or_name(self): return self.id def has_id_or_name(self): return self.id != None def parent(self): raise NotImplementedError("Key parents are not currently supported") def __str__(self): return self.id_or_name() boto-2.20.1/boto/sdb/db/manager/000077500000000000000000000000001225267101000162455ustar00rootroot00000000000000boto-2.20.1/boto/sdb/db/manager/__init__.py000066400000000000000000000101621225267101000203560ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto def get_manager(cls): """ Returns the appropriate Manager class for a given Model class. It does this by looking in the boto config for a section like this:: [DB] db_type = SimpleDB db_user = db_passwd = db_name = my_domain [DB_TestBasic] db_type = SimpleDB db_user = db_passwd = db_name = basic_domain db_port = 1111 The values in the DB section are "generic values" that will be used if nothing more specific is found. You can also create a section for a specific Model class that gives the db info for that class. In the example above, TestBasic is a Model subclass. 
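
    A minimal lookup sketch (illustrative)::

        from boto.sdb.db.model import Model
        from boto.sdb.db.property import StringProperty

        class TestBasic(Model):
            name = StringProperty()

        manager = get_manager(TestBasic)  # honors [DB_TestBasic] if present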
""" db_user = boto.config.get('DB', 'db_user', None) db_passwd = boto.config.get('DB', 'db_passwd', None) db_type = boto.config.get('DB', 'db_type', 'SimpleDB') db_name = boto.config.get('DB', 'db_name', None) db_table = boto.config.get('DB', 'db_table', None) db_host = boto.config.get('DB', 'db_host', "sdb.amazonaws.com") db_port = boto.config.getint('DB', 'db_port', 443) enable_ssl = boto.config.getbool('DB', 'enable_ssl', True) sql_dir = boto.config.get('DB', 'sql_dir', None) debug = boto.config.getint('DB', 'debug', 0) # first see if there is a fully qualified section name in the Boto config module_name = cls.__module__.replace('.', '_') db_section = 'DB_' + module_name + '_' + cls.__name__ if not boto.config.has_section(db_section): db_section = 'DB_' + cls.__name__ if boto.config.has_section(db_section): db_user = boto.config.get(db_section, 'db_user', db_user) db_passwd = boto.config.get(db_section, 'db_passwd', db_passwd) db_type = boto.config.get(db_section, 'db_type', db_type) db_name = boto.config.get(db_section, 'db_name', db_name) db_table = boto.config.get(db_section, 'db_table', db_table) db_host = boto.config.get(db_section, 'db_host', db_host) db_port = boto.config.getint(db_section, 'db_port', db_port) enable_ssl = boto.config.getint(db_section, 'enable_ssl', enable_ssl) debug = boto.config.getint(db_section, 'debug', debug) elif hasattr(cls, "_db_name") and cls._db_name is not None: # More specific then the generic DB config is any _db_name class property db_name = cls._db_name elif hasattr(cls.__bases__[0], "_manager"): return cls.__bases__[0]._manager if db_type == 'SimpleDB': from boto.sdb.db.manager.sdbmanager import SDBManager return SDBManager(cls, db_name, db_user, db_passwd, db_host, db_port, db_table, sql_dir, enable_ssl) elif db_type == 'XML': from boto.sdb.db.manager.xmlmanager import XMLManager return XMLManager(cls, db_name, db_user, db_passwd, db_host, db_port, db_table, sql_dir, enable_ssl) else: raise ValueError('Unknown db_type: %s' % db_type) boto-2.20.1/boto/sdb/db/manager/sdbmanager.py000066400000000000000000000651251225267101000207330ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
import boto import re from boto.utils import find_class import uuid from boto.sdb.db.key import Key from boto.sdb.db.blob import Blob from boto.sdb.db.property import ListProperty, MapProperty from datetime import datetime, date, time from boto.exception import SDBPersistenceError, S3ResponseError ISO8601 = '%Y-%m-%dT%H:%M:%SZ' class TimeDecodeError(Exception): pass class SDBConverter(object): """ Responsible for converting base Python types to format compatible with underlying database. For SimpleDB, that means everything needs to be converted to a string when stored in SimpleDB and from a string when retrieved. To convert a value, pass it to the encode or decode method. The encode method will take a Python native value and convert to DB format. The decode method will take a DB format value and convert it to Python native format. To find the appropriate method to call, the generic encode/decode methods will look for the type-specific method by searching for a method called"encode_" or "decode_". """ def __init__(self, manager): # Do a delayed import to prevent possible circular import errors. from boto.sdb.db.model import Model self.model_class = Model self.manager = manager self.type_map = {bool: (self.encode_bool, self.decode_bool), int: (self.encode_int, self.decode_int), long: (self.encode_long, self.decode_long), float: (self.encode_float, self.decode_float), self.model_class: ( self.encode_reference, self.decode_reference ), Key: (self.encode_reference, self.decode_reference), datetime: (self.encode_datetime, self.decode_datetime), date: (self.encode_date, self.decode_date), time: (self.encode_time, self.decode_time), Blob: (self.encode_blob, self.decode_blob), str: (self.encode_string, self.decode_string), } def encode(self, item_type, value): try: if self.model_class in item_type.mro(): item_type = self.model_class except: pass if item_type in self.type_map: encode = self.type_map[item_type][0] return encode(value) return value def decode(self, item_type, value): if item_type in self.type_map: decode = self.type_map[item_type][1] return decode(value) return value def encode_list(self, prop, value): if value in (None, []): return [] if not isinstance(value, list): # This is a little trick to avoid encoding when it's just a single value, # since that most likely means it's from a query item_type = getattr(prop, "item_type") return self.encode(item_type, value) # Just enumerate(value) won't work here because # we need to add in some zero padding # We support lists up to 1,000 attributes, since # SDB technically only supports 1024 attributes anyway values = {} for k, v in enumerate(value): values["%03d" % k] = v return self.encode_map(prop, values) def encode_map(self, prop, value): import urllib if value == None: return None if not isinstance(value, dict): raise ValueError('Expected a dict value, got %s' % type(value)) new_value = [] for key in value: item_type = getattr(prop, "item_type") if self.model_class in item_type.mro(): item_type = self.model_class encoded_value = self.encode(item_type, value[key]) if encoded_value != None: new_value.append('%s:%s' % (urllib.quote(key), encoded_value)) return new_value def encode_prop(self, prop, value): if isinstance(prop, ListProperty): return self.encode_list(prop, value) elif isinstance(prop, MapProperty): return self.encode_map(prop, value) else: return self.encode(prop.data_type, value) def decode_list(self, prop, value): if not isinstance(value, list): value = [value] if hasattr(prop, 'item_type'): item_type = getattr(prop, 
"item_type") dec_val = {} for val in value: if val != None: k, v = self.decode_map_element(item_type, val) try: k = int(k) except: k = v dec_val[k] = v value = dec_val.values() return value def decode_map(self, prop, value): if not isinstance(value, list): value = [value] ret_value = {} item_type = getattr(prop, "item_type") for val in value: k, v = self.decode_map_element(item_type, val) ret_value[k] = v return ret_value def decode_map_element(self, item_type, value): """Decode a single element for a map""" import urllib key = value if ":" in value: key, value = value.split(':', 1) key = urllib.unquote(key) if self.model_class in item_type.mro(): value = item_type(id=value) else: value = self.decode(item_type, value) return (key, value) def decode_prop(self, prop, value): if isinstance(prop, ListProperty): return self.decode_list(prop, value) elif isinstance(prop, MapProperty): return self.decode_map(prop, value) else: return self.decode(prop.data_type, value) def encode_int(self, value): value = int(value) value += 2147483648 return '%010d' % value def decode_int(self, value): try: value = int(value) except: boto.log.error("Error, %s is not an integer" % value) value = 0 value = int(value) value -= 2147483648 return int(value) def encode_long(self, value): value = long(value) value += 9223372036854775808 return '%020d' % value def decode_long(self, value): value = long(value) value -= 9223372036854775808 return value def encode_bool(self, value): if value == True or str(value).lower() in ("true", "yes"): return 'true' else: return 'false' def decode_bool(self, value): if value.lower() == 'true': return True else: return False def encode_float(self, value): """ See http://tools.ietf.org/html/draft-wood-ldapext-float-00. """ s = '%e' % value l = s.split('e') mantissa = l[0].ljust(18, '0') exponent = l[1] if value == 0.0: case = '3' exponent = '000' elif mantissa[0] != '-' and exponent[0] == '+': case = '5' exponent = exponent[1:].rjust(3, '0') elif mantissa[0] != '-' and exponent[0] == '-': case = '4' exponent = 999 + int(exponent) exponent = '%03d' % exponent elif mantissa[0] == '-' and exponent[0] == '-': case = '2' mantissa = '%f' % (10 + float(mantissa)) mantissa = mantissa.ljust(18, '0') exponent = exponent[1:].rjust(3, '0') else: case = '1' mantissa = '%f' % (10 + float(mantissa)) mantissa = mantissa.ljust(18, '0') exponent = 999 - int(exponent) exponent = '%03d' % exponent return '%s %s %s' % (case, exponent, mantissa) def decode_float(self, value): case = value[0] exponent = value[2:5] mantissa = value[6:] if case == '3': return 0.0 elif case == '5': pass elif case == '4': exponent = '%03d' % (int(exponent) - 999) elif case == '2': mantissa = '%f' % (float(mantissa) - 10) exponent = '-' + exponent else: mantissa = '%f' % (float(mantissa) - 10) exponent = '%03d' % abs((int(exponent) - 999)) return float(mantissa + 'e' + exponent) def encode_datetime(self, value): if isinstance(value, str) or isinstance(value, unicode): return value if isinstance(value, datetime): return value.strftime(ISO8601) else: return value.isoformat() def decode_datetime(self, value): """Handles both Dates and DateTime objects""" if value is None: return value try: if "T" in value: if "." 
in value: # Handle true "isoformat()" dates, which may have a microsecond component at the end return datetime.strptime(value.split(".")[0], "%Y-%m-%dT%H:%M:%S") else: return datetime.strptime(value, ISO8601) else: value = value.split("-") return date(int(value[0]), int(value[1]), int(value[2])) except Exception, e: return None def encode_date(self, value): if isinstance(value, str) or isinstance(value, unicode): return value return value.isoformat() def decode_date(self, value): try: value = value.split("-") return date(int(value[0]), int(value[1]), int(value[2])) except: return None encode_time = encode_date def decode_time(self, value): """ converts strings in the form of HH:MM:SS.mmmmmm (created by datetime.time.isoformat()) to datetime.time objects. Timezone-aware strings ("HH:MM:SS.mmmmmm+HH:MM") won't be handled right now and will raise TimeDecodeError. """ if '-' in value or '+' in value: # TODO: Handle tzinfo raise TimeDecodeError("Can't handle timezone aware objects: %r" % value) tmp = value.split('.') arg = map(int, tmp[0].split(':')) if len(tmp) == 2: arg.append(int(tmp[1])) return time(*arg) def encode_reference(self, value): if value in (None, 'None', '', ' '): return None if isinstance(value, str) or isinstance(value, unicode): return value else: return value.id def decode_reference(self, value): if not value or value == "None": return None return value def encode_blob(self, value): if not value: return None if isinstance(value, str): return value if not value.id: bucket = self.manager.get_blob_bucket() key = bucket.new_key(str(uuid.uuid4())) value.id = "s3://%s/%s" % (key.bucket.name, key.name) else: match = re.match("^s3:\/\/([^\/]*)\/(.*)$", value.id) if match: s3 = self.manager.get_s3_connection() bucket = s3.get_bucket(match.group(1), validate=False) key = bucket.get_key(match.group(2)) else: raise SDBPersistenceError("Invalid Blob ID: %s" % value.id) if value.value != None: key.set_contents_from_string(value.value) return value.id def decode_blob(self, value): if not value: return None match = re.match("^s3:\/\/([^\/]*)\/(.*)$", value) if match: s3 = self.manager.get_s3_connection() bucket = s3.get_bucket(match.group(1), validate=False) try: key = bucket.get_key(match.group(2)) except S3ResponseError, e: if e.reason != "Forbidden": raise return None else: return None if key: return Blob(file=key, id="s3://%s/%s" % (key.bucket.name, key.name)) else: return None def encode_string(self, value): """Convert ASCII, Latin-1 or UTF-8 to pure Unicode""" if not isinstance(value, str): return value try: return unicode(value, 'utf-8') except: # really, this should throw an exception.
# in the interest of not breaking current # systems, however: arr = [] for ch in value: arr.append(unichr(ord(ch))) return u"".join(arr) def decode_string(self, value): """Decoding a string is really nothing, just return the value as-is""" return value class SDBManager(object): def __init__(self, cls, db_name, db_user, db_passwd, db_host, db_port, db_table, ddl_dir, enable_ssl, consistent=None): self.cls = cls self.db_name = db_name self.db_user = db_user self.db_passwd = db_passwd self.db_host = db_host self.db_port = db_port self.db_table = db_table self.ddl_dir = ddl_dir self.enable_ssl = enable_ssl self.s3 = None self.bucket = None self.converter = SDBConverter(self) self._sdb = None self._domain = None if consistent == None and hasattr(cls, "__consistent__"): consistent = cls.__consistent__ self.consistent = consistent @property def sdb(self): if self._sdb is None: self._connect() return self._sdb @property def domain(self): if self._domain is None: self._connect() return self._domain def _connect(self): args = dict(aws_access_key_id=self.db_user, aws_secret_access_key=self.db_passwd, is_secure=self.enable_ssl) try: region = [x for x in boto.sdb.regions() if x.endpoint == self.db_host][0] args['region'] = region except IndexError: pass self._sdb = boto.connect_sdb(**args) # This assumes that the domain has already been created # It's much more efficient to do it this way rather than # having this make a roundtrip each time to validate. # The downside is that if the domain doesn't exist, it breaks self._domain = self._sdb.lookup(self.db_name, validate=False) if not self._domain: self._domain = self._sdb.create_domain(self.db_name) def _object_lister(self, cls, query_lister): for item in query_lister: obj = self.get_object(cls, item.name, item) if obj: yield obj def encode_value(self, prop, value): if value == None: return None if not prop: return str(value) return self.converter.encode_prop(prop, value) def decode_value(self, prop, value): return self.converter.decode_prop(prop, value) def get_s3_connection(self): if not self.s3: self.s3 = boto.connect_s3(self.db_user, self.db_passwd) return self.s3 def get_blob_bucket(self, bucket_name=None): s3 = self.get_s3_connection() bucket_name = "%s-%s" % (s3.aws_access_key_id, self.domain.name) bucket_name = bucket_name.lower() try: self.bucket = s3.get_bucket(bucket_name) except: self.bucket = s3.create_bucket(bucket_name) return self.bucket def load_object(self, obj): if not obj._loaded: a = self.domain.get_attributes(obj.id, consistent_read=self.consistent) if '__type__' in a: for prop in obj.properties(hidden=False): if prop.name in a: value = self.decode_value(prop, a[prop.name]) value = prop.make_value_from_datastore(value) try: setattr(obj, prop.name, value) except Exception, e: boto.log.exception(e) obj._loaded = True def get_object(self, cls, id, a=None): obj = None if not a: a = self.domain.get_attributes(id, consistent_read=self.consistent) if '__type__' in a: if not cls or a['__type__'] != cls.__name__: cls = find_class(a['__module__'], a['__type__']) if cls: params = {} for prop in cls.properties(hidden=False): if prop.name in a: value = self.decode_value(prop, a[prop.name]) value = prop.make_value_from_datastore(value) params[prop.name] = value obj = cls(id, **params) obj._loaded = True else: s = '(%s) class %s.%s not found' % (id, a['__module__'], a['__type__']) boto.log.info('sdbmanager: %s' % s) return obj def get_object_from_id(self, id): return self.get_object(None, id) def query(self, query): query_str = "select * from `%s` 
%s" % (self.domain.name, self._build_filter_part(query.model_class, query.filters, query.sort_by, query.select)) if query.limit: query_str += " limit %s" % query.limit rs = self.domain.select(query_str, max_items=query.limit, next_token = query.next_token) query.rs = rs return self._object_lister(query.model_class, rs) def count(self, cls, filters, quick=True, sort_by=None, select=None): """ Get the number of results that would be returned in this query """ query = "select count(*) from `%s` %s" % (self.domain.name, self._build_filter_part(cls, filters, sort_by, select)) count = 0 for row in self.domain.select(query): count += int(row['Count']) if quick: return count return count def _build_filter(self, property, name, op, val): if name == "__id__": name = 'itemName()' if name != "itemName()": name = '`%s`' % name if val == None: if op in ('is', '='): return "%(name)s is null" % {"name": name} elif op in ('is not', '!='): return "%s is not null" % name else: val = "" if property.__class__ == ListProperty: if op in ("is", "="): op = "like" elif op in ("!=", "not"): op = "not like" if not(op in ["like", "not like"] and val.startswith("%")): val = "%%:%s" % val return "%s %s '%s'" % (name, op, val.replace("'", "''")) def _build_filter_part(self, cls, filters, order_by=None, select=None): """ Build the filter part """ import types query_parts = [] order_by_filtered = False if order_by: if order_by[0] == "-": order_by_method = "DESC" order_by = order_by[1:] else: order_by_method = "ASC" if select: if order_by and order_by in select: order_by_filtered = True query_parts.append("(%s)" % select) if isinstance(filters, str) or isinstance(filters, unicode): query = "WHERE %s AND `__type__` = '%s'" % (filters, cls.__name__) if order_by in ["__id__", "itemName()"]: query += " ORDER BY itemName() %s" % order_by_method elif order_by != None: query += " ORDER BY `%s` %s" % (order_by, order_by_method) return query for filter in filters: filter_parts = [] filter_props = filter[0] if not isinstance(filter_props, list): filter_props = [filter_props] for filter_prop in filter_props: (name, op) = filter_prop.strip().split(" ", 1) value = filter[1] property = cls.find_property(name) if name == order_by: order_by_filtered = True if types.TypeType(value) == types.ListType: filter_parts_sub = [] for val in value: val = self.encode_value(property, val) if isinstance(val, list): for v in val: filter_parts_sub.append(self._build_filter(property, name, op, v)) else: filter_parts_sub.append(self._build_filter(property, name, op, val)) filter_parts.append("(%s)" % (" OR ".join(filter_parts_sub))) else: val = self.encode_value(property, value) if isinstance(val, list): for v in val: filter_parts.append(self._build_filter(property, name, op, v)) else: filter_parts.append(self._build_filter(property, name, op, val)) query_parts.append("(%s)" % (" or ".join(filter_parts))) type_query = "(`__type__` = '%s'" % cls.__name__ for subclass in self._get_all_decendents(cls).keys(): type_query += " or `__type__` = '%s'" % subclass type_query += ")" query_parts.append(type_query) order_by_query = "" if order_by: if not order_by_filtered: query_parts.append("`%s` LIKE '%%'" % order_by) if order_by in ["__id__", "itemName()"]: order_by_query = " ORDER BY itemName() %s" % order_by_method else: order_by_query = " ORDER BY `%s` %s" % (order_by, order_by_method) if len(query_parts) > 0: return "WHERE %s %s" % (" AND ".join(query_parts), order_by_query) else: return "" def _get_all_decendents(self, cls): """Get all decendents for a given 
class""" decendents = {} for sc in cls.__sub_classes__: decendents[sc.__name__] = sc decendents.update(self._get_all_decendents(sc)) return decendents def query_gql(self, query_string, *args, **kwds): raise NotImplementedError("GQL queries not supported in SimpleDB") def save_object(self, obj, expected_value=None): if not obj.id: obj.id = str(uuid.uuid4()) attrs = {'__type__': obj.__class__.__name__, '__module__': obj.__class__.__module__, '__lineage__': obj.get_lineage()} del_attrs = [] for property in obj.properties(hidden=False): value = property.get_value_for_datastore(obj) if value is not None: value = self.encode_value(property, value) if value == []: value = None if value == None: del_attrs.append(property.name) continue attrs[property.name] = value if property.unique: try: args = {property.name: value} obj2 = obj.find(**args).next() if obj2.id != obj.id: raise SDBPersistenceError("Error: %s must be unique!" % property.name) except(StopIteration): pass # Convert the Expected value to SDB format if expected_value: prop = obj.find_property(expected_value[0]) v = expected_value[1] if v is not None and not isinstance(v, bool): v = self.encode_value(prop, v) expected_value[1] = v self.domain.put_attributes(obj.id, attrs, replace=True, expected_value=expected_value) if len(del_attrs) > 0: self.domain.delete_attributes(obj.id, del_attrs) return obj def delete_object(self, obj): self.domain.delete_attributes(obj.id) def set_property(self, prop, obj, name, value): setattr(obj, name, value) value = prop.get_value_for_datastore(obj) value = self.encode_value(prop, value) if prop.unique: try: args = {prop.name: value} obj2 = obj.find(**args).next() if obj2.id != obj.id: raise SDBPersistenceError("Error: %s must be unique!" % prop.name) except(StopIteration): pass self.domain.put_attributes(obj.id, {name: value}, replace=True) def get_property(self, prop, obj, name): a = self.domain.get_attributes(obj.id, consistent_read=self.consistent) # try to get the attribute value from SDB if name in a: value = self.decode_value(prop, a[name]) value = prop.make_value_from_datastore(value) setattr(obj, prop.name, value) return value raise AttributeError('%s not found' % name) def set_key_value(self, obj, name, value): self.domain.put_attributes(obj.id, {name: value}, replace=True) def delete_key_value(self, obj, name): self.domain.delete_attributes(obj.id, name) def get_key_value(self, obj, name): a = self.domain.get_attributes(obj.id, name, consistent_read=self.consistent) if name in a: return a[name] else: return None def get_raw_item(self, obj): return self.domain.get_item(obj.id) boto-2.20.1/boto/sdb/db/manager/xmlmanager.py000066400000000000000000000443471225267101000207660ustar00rootroot00000000000000# Copyright (c) 2006-2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from boto.utils import find_class, Password from boto.sdb.db.key import Key from boto.sdb.db.model import Model from datetime import datetime from xml.dom.minidom import getDOMImplementation, parse, parseString, Node ISO8601 = '%Y-%m-%dT%H:%M:%SZ' class XMLConverter: """ Responsible for converting base Python types to format compatible with underlying database. For SimpleDB, that means everything needs to be converted to a string when stored in SimpleDB and from a string when retrieved. To convert a value, pass it to the encode or decode method. The encode method will take a Python native value and convert to DB format. The decode method will take a DB format value and convert it to Python native format. To find the appropriate method to call, the generic encode/decode methods will look for the type-specific method by searching for a method called "encode_" or "decode_". """ def __init__(self, manager): self.manager = manager self.type_map = { bool : (self.encode_bool, self.decode_bool), int : (self.encode_int, self.decode_int), long : (self.encode_long, self.decode_long), Model : (self.encode_reference, self.decode_reference), Key : (self.encode_reference, self.decode_reference), Password : (self.encode_password, self.decode_password), datetime : (self.encode_datetime, self.decode_datetime)} def get_text_value(self, parent_node): value = '' for node in parent_node.childNodes: if node.nodeType == node.TEXT_NODE: value += node.data return value def encode(self, item_type, value): if item_type in self.type_map: encode = self.type_map[item_type][0] return encode(value) return value def decode(self, item_type, value): if item_type in self.type_map: decode = self.type_map[item_type][1] return decode(value) else: value = self.get_text_value(value) return value def encode_prop(self, prop, value): if isinstance(value, list): if hasattr(prop, 'item_type'): new_value = [] for v in value: item_type = getattr(prop, "item_type") if Model in item_type.mro(): item_type = Model new_value.append(self.encode(item_type, v)) return new_value else: return value else: return self.encode(prop.data_type, value) def decode_prop(self, prop, value): if prop.data_type == list: if hasattr(prop, 'item_type'): item_type = getattr(prop, "item_type") if Model in item_type.mro(): item_type = Model values = [] for item_node in value.getElementsByTagName('item'): value = self.decode(item_type, item_node) values.append(value) return values else: return self.get_text_value(value) else: return self.decode(prop.data_type, value) def encode_int(self, value): value = int(value) return '%d' % value def decode_int(self, value): value = self.get_text_value(value) if value: value = int(value) else: value = None return value def encode_long(self, value): value = long(value) return '%d' % value def decode_long(self, value): value = self.get_text_value(value) return long(value) def encode_bool(self, value): if value == True: return 'true' else: return 'false' def decode_bool(self, value): value = self.get_text_value(value) if value.lower() == 'true': return True else: return 
False def encode_datetime(self, value): return value.strftime(ISO8601) def decode_datetime(self, value): value = self.get_text_value(value) try: return datetime.strptime(value, ISO8601) except: return None def encode_reference(self, value): if isinstance(value, str) or isinstance(value, unicode): return value if value == None: return '' else: val_node = self.manager.doc.createElement("object") val_node.setAttribute('id', value.id) val_node.setAttribute('class', '%s.%s' % (value.__class__.__module__, value.__class__.__name__)) return val_node def decode_reference(self, value): if not value: return None try: value = value.childNodes[0] class_name = value.getAttribute("class") id = value.getAttribute("id") cls = find_class(class_name) return cls.get_by_ids(id) except: return None def encode_password(self, value): if value and len(value) > 0: return str(value) else: return None def decode_password(self, value): value = self.get_text_value(value) return Password(value) class XMLManager(object): def __init__(self, cls, db_name, db_user, db_passwd, db_host, db_port, db_table, ddl_dir, enable_ssl): self.cls = cls if not db_name: db_name = cls.__name__.lower() self.db_name = db_name self.db_user = db_user self.db_passwd = db_passwd self.db_host = db_host self.db_port = db_port self.db_table = db_table self.ddl_dir = ddl_dir self.s3 = None self.converter = XMLConverter(self) self.impl = getDOMImplementation() self.doc = self.impl.createDocument(None, 'objects', None) self.connection = None self.enable_ssl = enable_ssl self.auth_header = None if self.db_user: import base64 base64string = base64.encodestring('%s:%s' % (self.db_user, self.db_passwd))[:-1] authheader = "Basic %s" % base64string self.auth_header = authheader def _connect(self): if self.db_host: if self.enable_ssl: from httplib import HTTPSConnection as Connection else: from httplib import HTTPConnection as Connection self.connection = Connection(self.db_host, self.db_port) def _make_request(self, method, url, post_data=None, body=None): """ Make a request on this connection """ if not self.connection: self._connect() try: self.connection.close() except: pass self.connection.connect() headers = {} if self.auth_header: headers["Authorization"] = self.auth_header self.connection.request(method, url, body, headers) resp = self.connection.getresponse() return resp def new_doc(self): return self.impl.createDocument(None, 'objects', None) def _object_lister(self, cls, doc): for obj_node in doc.getElementsByTagName('object'): if not cls: class_name = obj_node.getAttribute('class') cls = find_class(class_name) id = obj_node.getAttribute('id') obj = cls(id) for prop_node in obj_node.getElementsByTagName('property'): prop_name = prop_node.getAttribute('name') prop = obj.find_property(prop_name) if prop: if hasattr(prop, 'item_type'): value = self.get_list(prop_node, prop.item_type) else: value = self.decode_value(prop, prop_node) value = prop.make_value_from_datastore(value) setattr(obj, prop.name, value) yield obj def reset(self): self._connect() def get_doc(self): return self.doc def encode_value(self, prop, value): return self.converter.encode_prop(prop, value) def decode_value(self, prop, value): return self.converter.decode_prop(prop, value) def get_s3_connection(self): if not self.s3: self.s3 = boto.connect_s3(self.aws_access_key_id, self.aws_secret_access_key) return self.s3 def get_list(self, prop_node, item_type): values = [] try: items_node = prop_node.getElementsByTagName('items')[0] except: return [] for item_node in 
items_node.getElementsByTagName('item'): value = self.converter.decode(item_type, item_node) values.append(value) return values def get_object_from_doc(self, cls, id, doc): obj_node = doc.getElementsByTagName('object')[0] if not cls: class_name = obj_node.getAttribute('class') cls = find_class(class_name) if not id: id = obj_node.getAttribute('id') obj = cls(id) for prop_node in obj_node.getElementsByTagName('property'): prop_name = prop_node.getAttribute('name') prop = obj.find_property(prop_name) value = self.decode_value(prop, prop_node) value = prop.make_value_from_datastore(value) if value != None: try: setattr(obj, prop.name, value) except: pass return obj def get_props_from_doc(self, cls, id, doc): """ Pull out the properties from this document Returns the class, the properties in a hash, and the id if provided as a tuple :return: (cls, props, id) """ obj_node = doc.getElementsByTagName('object')[0] if not cls: class_name = obj_node.getAttribute('class') cls = find_class(class_name) if not id: id = obj_node.getAttribute('id') props = {} for prop_node in obj_node.getElementsByTagName('property'): prop_name = prop_node.getAttribute('name') prop = cls.find_property(prop_name) value = self.decode_value(prop, prop_node) value = prop.make_value_from_datastore(value) if value != None: props[prop.name] = value return (cls, props, id) def get_object(self, cls, id): if not self.connection: self._connect() if not self.connection: raise NotImplementedError("Can't query without a database connection") url = "/%s/%s" % (self.db_name, id) resp = self._make_request('GET', url) if resp.status == 200: doc = parse(resp) else: raise Exception("Error: %s" % resp.status) return self.get_object_from_doc(cls, id, doc) def query(self, cls, filters, limit=None, order_by=None): if not self.connection: self._connect() if not self.connection: raise NotImplementedError("Can't query without a database connection") from urllib import urlencode query = str(self._build_query(cls, filters, limit, order_by)) if query: url = "/%s?%s" % (self.db_name, urlencode({"query": query})) else: url = "/%s" % self.db_name resp = self._make_request('GET', url) if resp.status == 200: doc = parse(resp) else: raise Exception("Error: %s" % resp.status) return self._object_lister(cls, doc) def _build_query(self, cls, filters, limit, order_by): import types if len(filters) > 4: raise Exception('Too many filters, max is 4') parts = [] properties = cls.properties(hidden=False) for filter, value in filters: name, op = filter.strip().split() found = False for property in properties: if property.name == name: found = True if types.TypeType(value) == types.ListType: filter_parts = [] for val in value: val = self.encode_value(property, val) filter_parts.append("'%s' %s '%s'" % (name, op, val)) parts.append("[%s]" % " OR ".join(filter_parts)) else: value = self.encode_value(property, value) parts.append("['%s' %s '%s']" % (name, op, value)) if not found: raise Exception('%s is not a valid field' % name) if order_by: if order_by.startswith("-"): key = order_by[1:] type = "desc" else: key = order_by type = "asc" parts.append("['%s' starts-with ''] sort '%s' %s" % (key, key, type)) return ' intersection '.join(parts) def query_gql(self, query_string, *args, **kwds): raise NotImplementedError("GQL queries not supported in XML") def save_list(self, doc, items, prop_node): items_node = doc.createElement('items') prop_node.appendChild(items_node) for item in items: item_node = doc.createElement('item') items_node.appendChild(item_node) if 
isinstance(item, Node): item_node.appendChild(item) else: text_node = doc.createTextNode(item) item_node.appendChild(text_node) def save_object(self, obj, expected_value=None): """ Marshal the object and do a PUT """ doc = self.marshal_object(obj) if obj.id: url = "/%s/%s" % (self.db_name, obj.id) else: url = "/%s" % (self.db_name) resp = self._make_request("PUT", url, body=doc.toxml()) new_obj = self.get_object_from_doc(obj.__class__, None, parse(resp)) obj.id = new_obj.id for prop in obj.properties(): try: propname = prop.name except AttributeError: propname = None if propname: value = getattr(new_obj, prop.name) if value: setattr(obj, prop.name, value) return obj def marshal_object(self, obj, doc=None): if not doc: doc = self.new_doc() if not doc: doc = self.doc obj_node = doc.createElement('object') if obj.id: obj_node.setAttribute('id', obj.id) obj_node.setAttribute('class', '%s.%s' % (obj.__class__.__module__, obj.__class__.__name__)) root = doc.documentElement root.appendChild(obj_node) for property in obj.properties(hidden=False): prop_node = doc.createElement('property') prop_node.setAttribute('name', property.name) prop_node.setAttribute('type', property.type_name) value = property.get_value_for_datastore(obj) if value is not None: value = self.encode_value(property, value) if isinstance(value, list): self.save_list(doc, value, prop_node) elif isinstance(value, Node): prop_node.appendChild(value) else: text_node = doc.createTextNode(unicode(value).encode("ascii", "ignore")) prop_node.appendChild(text_node) obj_node.appendChild(prop_node) return doc def unmarshal_object(self, fp, cls=None, id=None): if isinstance(fp, str) or isinstance(fp, unicode): doc = parseString(fp) else: doc = parse(fp) return self.get_object_from_doc(cls, id, doc) def unmarshal_props(self, fp, cls=None, id=None): """ Same as unmarshalling an object, except it returns from "get_props_from_doc" """ if isinstance(fp, str) or isinstance(fp, unicode): doc = parseString(fp) else: doc = parse(fp) return self.get_props_from_doc(cls, id, doc) def delete_object(self, obj): url = "/%s/%s" % (self.db_name, obj.id) return self._make_request("DELETE", url) def set_key_value(self, obj, name, value): self.domain.put_attributes(obj.id, {name : value}, replace=True) def delete_key_value(self, obj, name): self.domain.delete_attributes(obj.id, name) def get_key_value(self, obj, name): a = self.domain.get_attributes(obj.id, name) if name in a: return a[name] else: return None def get_raw_item(self, obj): return self.domain.get_item(obj.id) def set_property(self, prop, obj, name, value): pass def get_property(self, prop, obj, name): pass def load_object(self, obj): if not obj._loaded: obj = obj.get_by_id(obj.id) obj._loaded = True return obj boto-2.20.1/boto/sdb/db/model.py000066400000000000000000000236161225267101000163150ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.sdb.db.property import Property from boto.sdb.db.key import Key from boto.sdb.db.query import Query import boto class ModelMeta(type): "Metaclass for all Models" def __init__(cls, name, bases, dict): super(ModelMeta, cls).__init__(name, bases, dict) # Make sure this is a subclass of Model - mainly copied from django ModelBase (thanks!) cls.__sub_classes__ = [] # Do a delayed import to prevent possible circular import errors. from boto.sdb.db.manager import get_manager try: if filter(lambda b: issubclass(b, Model), bases): for base in bases: base.__sub_classes__.append(cls) cls._manager = get_manager(cls) # look for all of the Properties and set their names for key in dict.keys(): if isinstance(dict[key], Property): property = dict[key] property.__property_config__(cls, key) prop_names = [] props = cls.properties() for prop in props: if not prop.__class__.__name__.startswith('_'): prop_names.append(prop.name) setattr(cls, '_prop_names', prop_names) except NameError: # 'Model' isn't defined yet, meaning we're looking at our own # Model class, defined below. pass class Model(object): __metaclass__ = ModelMeta __consistent__ = False # Consistent is set off by default id = None @classmethod def get_lineage(cls): l = [c.__name__ for c in cls.mro()] l.reverse() return '.'.join(l) @classmethod def kind(cls): return cls.__name__ @classmethod def _get_by_id(cls, id, manager=None): if not manager: manager = cls._manager return manager.get_object(cls, id) @classmethod def get_by_id(cls, ids=None, parent=None): if isinstance(ids, list): objs = [cls._get_by_id(id) for id in ids] return objs else: return cls._get_by_id(ids) get_by_ids = get_by_id @classmethod def get_by_key_name(cls, key_names, parent=None): raise NotImplementedError("Key Names are not currently supported") @classmethod def find(cls, limit=None, next_token=None, **params): q = Query(cls, limit=limit, next_token=next_token) for key, value in params.items(): q.filter('%s =' % key, value) return q @classmethod def all(cls, limit=None, next_token=None): return cls.find(limit=limit, next_token=next_token) @classmethod def get_or_insert(key_name, **kw): raise NotImplementedError("get_or_insert not currently supported") @classmethod def properties(cls, hidden=True): properties = [] while cls: for key in cls.__dict__.keys(): prop = cls.__dict__[key] if isinstance(prop, Property): if hidden or not prop.__class__.__name__.startswith('_'): properties.append(prop) if len(cls.__bases__) > 0: cls = cls.__bases__[0] else: cls = None return properties @classmethod def find_property(cls, prop_name): property = None while cls: for key in cls.__dict__.keys(): prop = cls.__dict__[key] if isinstance(prop, Property): if not prop.__class__.__name__.startswith('_') and prop_name == prop.name: property = prop if len(cls.__bases__) > 0: cls = cls.__bases__[0] else: cls = None return property @classmethod def get_xmlmanager(cls): if not hasattr(cls, '_xmlmanager'): from boto.sdb.db.manager.xmlmanager import XMLManager cls._xmlmanager = XMLManager(cls, None, None, None, None, None, None, None, False) return 
cls._xmlmanager @classmethod def from_xml(cls, fp): xmlmanager = cls.get_xmlmanager() return xmlmanager.unmarshal_object(fp) def __init__(self, id=None, **kw): self._loaded = False # first try to initialize all properties to their default values for prop in self.properties(hidden=False): try: setattr(self, prop.name, prop.default_value()) except ValueError: pass if 'manager' in kw: self._manager = kw['manager'] self.id = id for key in kw: if key != 'manager': # We don't want any errors propagating up when loading an object, # so if it fails we just revert to its default value try: setattr(self, key, kw[key]) except Exception, e: boto.log.exception(e) def __repr__(self): return '%s<%s>' % (self.__class__.__name__, self.id) def __str__(self): return str(self.id) def __eq__(self, other): return other and isinstance(other, Model) and self.id == other.id def _get_raw_item(self): return self._manager.get_raw_item(self) def load(self): if self.id and not self._loaded: self._manager.load_object(self) def reload(self): if self.id: self._loaded = False self._manager.load_object(self) def put(self, expected_value=None): """ Save this object as it is, with an optional expected value :param expected_value: Optional tuple of Attribute and Value that must be the same in order to save this object. If this condition is not met, an SDBResponseError will be raised with a Conflict status code. :type expected_value: tuple or list :return: This object :rtype: :class:`boto.sdb.db.model.Model` """ self._manager.save_object(self, expected_value) return self save = put def put_attributes(self, attrs): """ Save just these few attributes, not the whole object :param attrs: Attributes to save, key->value dict :type attrs: dict :return: self :rtype: :class:`boto.sdb.db.model.Model` """ assert(isinstance(attrs, dict)), "Argument must be a dict of key->values to save" for prop_name in attrs: value = attrs[prop_name] prop = self.find_property(prop_name) assert(prop), "Property not found: %s" % prop_name self._manager.set_property(prop, self, prop_name, value) self.reload() return self def delete_attributes(self, attrs): """ Delete just these attributes, not the whole object. :param attrs: Attributes to delete, as a list of string names :type attrs: list :return: self :rtype: :class:`boto.sdb.db.model.Model` """ assert(isinstance(attrs, list)), "Argument must be a list of names of keys to delete."
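# Drop the named attributes straight from the backing SimpleDB item, then reload so the in-memory object matches the datastore.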
self._manager.domain.delete_attributes(self.id, attrs) self.reload() return self save_attributes = put_attributes def delete(self): self._manager.delete_object(self) def key(self): return Key(obj=self) def set_manager(self, manager): self._manager = manager def to_dict(self): props = {} for prop in self.properties(hidden=False): props[prop.name] = getattr(self, prop.name) obj = {'properties' : props, 'id' : self.id} return {self.__class__.__name__ : obj} def to_xml(self, doc=None): xmlmanager = self.get_xmlmanager() doc = xmlmanager.marshal_object(self, doc) return doc @classmethod def find_subclass(cls, name): """Find a subclass with a given name""" if name == cls.__name__: return cls for sc in cls.__sub_classes__: r = sc.find_subclass(name) if r != None: return r class Expando(Model): def __setattr__(self, name, value): if name in self._prop_names: object.__setattr__(self, name, value) elif name.startswith('_'): object.__setattr__(self, name, value) elif name == 'id': object.__setattr__(self, name, value) else: self._manager.set_key_value(self, name, value) object.__setattr__(self, name, value) def __getattr__(self, name): if not name.startswith('_'): value = self._manager.get_key_value(self, name) if value: object.__setattr__(self, name, value) return value raise AttributeError boto-2.20.1/boto/sdb/db/property.py000066400000000000000000000601001225267101000170660ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
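# Illustrative usage sketch (not part of the original release; Person is a
# hypothetical class): properties are declared as class attributes on a
# Model subclass, validate() runs on every assignment, and the manager's
# converter encodes/decodes values on save and load.
#
#     from boto.sdb.db.model import Model
#     from boto.sdb.db.property import StringProperty, IntegerProperty
#
#     class Person(Model):
#         name = StringProperty(required=True)
#         age = IntegerProperty(default=0)
#
#     p = Person()
#     p.name = 'Mitch'      # validated on assignment
#     p.age = 'forty'       # raises ValueError (int('forty') fails)
#     p.put()               # persists via the class's manager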
import datetime from key import Key from boto.utils import Password from boto.sdb.db.query import Query import re import boto import boto.s3.key from boto.sdb.db.blob import Blob class Property(object): data_type = str type_name = '' name = '' verbose_name = '' def __init__(self, verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False): self.verbose_name = verbose_name self.name = name self.default = default self.required = required self.validator = validator self.choices = choices if self.name: self.slot_name = '_' + self.name else: self.slot_name = '_' self.unique = unique def __get__(self, obj, objtype): if obj: obj.load() return getattr(obj, self.slot_name) else: return None def __set__(self, obj, value): self.validate(value) # Fire off any on_set functions try: if obj._loaded and hasattr(obj, "on_set_%s" % self.name): fnc = getattr(obj, "on_set_%s" % self.name) value = fnc(value) except Exception: boto.log.exception("Exception running on_set_%s" % self.name) setattr(obj, self.slot_name, value) def __property_config__(self, model_class, property_name): self.model_class = model_class self.name = property_name self.slot_name = '_' + self.name def default_validator(self, value): if isinstance(value, basestring) or value == self.default_value(): return if not isinstance(value, self.data_type): raise TypeError('Validation Error, %s.%s expecting %s, got %s' % (self.model_class.__name__, self.name, self.data_type, type(value))) def default_value(self): return self.default def validate(self, value): if self.required and value == None: raise ValueError('%s is a required property' % self.name) if self.choices and value and not value in self.choices: raise ValueError('%s not a valid choice for %s.%s' % (value, self.model_class.__name__, self.name)) if self.validator: self.validator(value) else: self.default_validator(value) return value def empty(self, value): return not value def get_value_for_datastore(self, model_instance): return getattr(model_instance, self.name) def make_value_from_datastore(self, value): return value def get_choices(self): if callable(self.choices): return self.choices() return self.choices def validate_string(value): if value == None: return elif isinstance(value, str) or isinstance(value, unicode): if len(value) > 1024: raise ValueError('Length of value greater than maxlength') else: raise TypeError('Expecting String, got %s' % type(value)) class StringProperty(Property): type_name = 'String' def __init__(self, verbose_name=None, name=None, default='', required=False, validator=validate_string, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) class TextProperty(Property): type_name = 'Text' def __init__(self, verbose_name=None, name=None, default='', required=False, validator=None, choices=None, unique=False, max_length=None): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.max_length = max_length def validate(self, value): value = super(TextProperty, self).validate(value) if not isinstance(value, str) and not isinstance(value, unicode): raise TypeError('Expecting Text, got %s' % type(value)) if self.max_length and len(value) > self.max_length: raise ValueError('Length of value greater than maxlength %s' % self.max_length) class PasswordProperty(StringProperty): """ Hashed property whose original value can not be retrieved, but still can be compared. 
Works by storing a hash of the original value instead of the original value. Once that's done all that can be retrieved is the hash. The comparison obj.password == 'foo' generates a hash of 'foo' and compares it to the stored hash. Underlying data type for hashing, storing, and comparing is boto.utils.Password. The default hash function is defined there ( currently sha512 in most cases, md5 where sha512 is not available ) It's unlikely you'll ever need to use a different hash function, but if you do, you can control the behavior in one of two ways: 1) Specifying hashfunc in PasswordProperty constructor import hashlib class MyModel(model): password = PasswordProperty(hashfunc=hashlib.sha224) 2) Subclassing Password and PasswordProperty class SHA224Password(Password): hashfunc=hashlib.sha224 class SHA224PasswordProperty(PasswordProperty): data_type=MyPassword type_name="MyPassword" class MyModel(Model): password = SHA224PasswordProperty() """ data_type = Password type_name = 'Password' def __init__(self, verbose_name=None, name=None, default='', required=False, validator=None, choices=None, unique=False, hashfunc=None): """ The hashfunc parameter overrides the default hashfunc in boto.utils.Password. The remaining parameters are passed through to StringProperty.__init__""" StringProperty.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.hashfunc = hashfunc def make_value_from_datastore(self, value): p = self.data_type(value, hashfunc=self.hashfunc) return p def get_value_for_datastore(self, model_instance): value = StringProperty.get_value_for_datastore(self, model_instance) if value and len(value): return str(value) else: return None def __set__(self, obj, value): if not isinstance(value, self.data_type): p = self.data_type(hashfunc=self.hashfunc) p.set(value) value = p Property.__set__(self, obj, value) def __get__(self, obj, objtype): return self.data_type(StringProperty.__get__(self, obj, objtype), hashfunc=self.hashfunc) def validate(self, value): value = Property.validate(self, value) if isinstance(value, self.data_type): if len(value) > 1024: raise ValueError('Length of value greater than maxlength') else: raise TypeError('Expecting %s, got %s' % (type(self.data_type), type(value))) class BlobProperty(Property): data_type = Blob type_name = "blob" def __set__(self, obj, value): if value != self.default_value(): if not isinstance(value, Blob): oldb = self.__get__(obj, type(obj)) id = None if oldb: id = oldb.id b = Blob(value=value, id=id) value = b Property.__set__(self, obj, value) class S3KeyProperty(Property): data_type = boto.s3.key.Key type_name = 'S3Key' validate_regex = "^s3:\/\/([^\/]*)\/(.*)$" def __init__(self, verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) def validate(self, value): value = super(S3KeyProperty, self).validate(value) if value == self.default_value() or value == str(self.default_value()): return self.default_value() if isinstance(value, self.data_type): return match = re.match(self.validate_regex, value) if match: return raise TypeError('Validation Error, expecting %s, got %s' % (self.data_type, type(value))) def __get__(self, obj, objtype): value = Property.__get__(self, obj, objtype) if value: if isinstance(value, self.data_type): return value match = re.match(self.validate_regex, value) if match: s3 = obj._manager.get_s3_connection() bucket = s3.get_bucket(match.group(1), 
validate=False) k = bucket.get_key(match.group(2)) if not k: k = bucket.new_key(match.group(2)) k.set_contents_from_string("") return k else: return value def get_value_for_datastore(self, model_instance): value = Property.get_value_for_datastore(self, model_instance) if value: return "s3://%s/%s" % (value.bucket.name, value.name) else: return None class IntegerProperty(Property): data_type = int type_name = 'Integer' def __init__(self, verbose_name=None, name=None, default=0, required=False, validator=None, choices=None, unique=False, max=2147483647, min=-2147483648): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.max = max self.min = min def validate(self, value): value = int(value) value = Property.validate(self, value) if value > self.max: raise ValueError('Maximum value is %d' % self.max) if value < self.min: raise ValueError('Minimum value is %d' % self.min) return value def empty(self, value): return value is None def __set__(self, obj, value): if value == "" or value == None: value = 0 return Property.__set__(self, obj, value) class LongProperty(Property): data_type = long type_name = 'Long' def __init__(self, verbose_name=None, name=None, default=0, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) def validate(self, value): value = long(value) value = Property.validate(self, value) min = -9223372036854775808 max = 9223372036854775807 if value > max: raise ValueError('Maximum value is %d' % max) if value < min: raise ValueError('Minimum value is %d' % min) return value def empty(self, value): return value is None class BooleanProperty(Property): data_type = bool type_name = 'Boolean' def __init__(self, verbose_name=None, name=None, default=False, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) def empty(self, value): return value is None class FloatProperty(Property): data_type = float type_name = 'Float' def __init__(self, verbose_name=None, name=None, default=0.0, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) def validate(self, value): value = float(value) value = Property.validate(self, value) return value def empty(self, value): return value is None class DateTimeProperty(Property): """This class handles both the datetime.datetime object And the datetime.date objects. 
It can return either one, depending on the value stored in the database""" data_type = datetime.datetime type_name = 'DateTime' def __init__(self, verbose_name=None, auto_now=False, auto_now_add=False, name=None, default=None, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.auto_now = auto_now self.auto_now_add = auto_now_add def default_value(self): if self.auto_now or self.auto_now_add: return self.now() return Property.default_value(self) def validate(self, value): if value == None: return if isinstance(value, datetime.date): return value return super(DateTimeProperty, self).validate(value) def get_value_for_datastore(self, model_instance): if self.auto_now: setattr(model_instance, self.name, self.now()) return Property.get_value_for_datastore(self, model_instance) def now(self): return datetime.datetime.utcnow() class DateProperty(Property): data_type = datetime.date type_name = 'Date' def __init__(self, verbose_name=None, auto_now=False, auto_now_add=False, name=None, default=None, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.auto_now = auto_now self.auto_now_add = auto_now_add def default_value(self): if self.auto_now or self.auto_now_add: return self.now() return Property.default_value(self) def validate(self, value): value = super(DateProperty, self).validate(value) if value == None: return if not isinstance(value, self.data_type): raise TypeError('Validation Error, expecting %s, got %s' % (self.data_type, type(value))) def get_value_for_datastore(self, model_instance): if self.auto_now: setattr(model_instance, self.name, self.now()) val = Property.get_value_for_datastore(self, model_instance) if isinstance(val, datetime.datetime): val = val.date() return val def now(self): return datetime.date.today() class TimeProperty(Property): data_type = datetime.time type_name = 'Time' def __init__(self, verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) def validate(self, value): value = super(TimeProperty, self).validate(value) if value is None: return if not isinstance(value, self.data_type): raise TypeError('Validation Error, expecting %s, got %s' % (self.data_type, type(value))) class ReferenceProperty(Property): data_type = Key type_name = 'Reference' def __init__(self, reference_class=None, collection_name=None, verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.reference_class = reference_class self.collection_name = collection_name def __get__(self, obj, objtype): if obj: value = getattr(obj, self.slot_name) if value == self.default_value(): return value # If the value is still the UUID for the referenced object, we need to create # the object now that is the attribute has actually been accessed. 
This lazy # instantiation saves unnecessary roundtrips to SimpleDB if isinstance(value, str) or isinstance(value, unicode): value = self.reference_class(value) setattr(obj, self.name, value) return value def __set__(self, obj, value): """Don't allow this object to be associated to itself This causes bad things to happen""" if value != None and (obj.id == value or (hasattr(value, "id") and obj.id == value.id)): raise ValueError("Can not associate an object with itself!") return super(ReferenceProperty, self).__set__(obj, value) def __property_config__(self, model_class, property_name): Property.__property_config__(self, model_class, property_name) if self.collection_name is None: self.collection_name = '%s_%s_set' % (model_class.__name__.lower(), self.name) if hasattr(self.reference_class, self.collection_name): raise ValueError('duplicate property: %s' % self.collection_name) setattr(self.reference_class, self.collection_name, _ReverseReferenceProperty(model_class, property_name, self.collection_name)) def check_uuid(self, value): # This does a bit of hand waving to "type check" the string t = value.split('-') if len(t) != 5: raise ValueError def check_instance(self, value): try: obj_lineage = value.get_lineage() cls_lineage = self.reference_class.get_lineage() if obj_lineage.startswith(cls_lineage): return raise TypeError('%s not instance of %s' % (obj_lineage, cls_lineage)) except: raise ValueError('%s is not a Model' % value) def validate(self, value): if self.validator: self.validator(value) if self.required and value == None: raise ValueError('%s is a required property' % self.name) if value == self.default_value(): return if not isinstance(value, str) and not isinstance(value, unicode): self.check_instance(value) class _ReverseReferenceProperty(Property): data_type = Query type_name = 'query' def __init__(self, model, prop, name): self.__model = model self.__property = prop self.collection_name = prop self.name = name self.item_type = model def __get__(self, model_instance, model_class): """Fetches collection of model instances of this collection property.""" if model_instance is not None: query = Query(self.__model) if isinstance(self.__property, list): props = [] for prop in self.__property: props.append("%s =" % prop) return query.filter(props, model_instance) else: return query.filter(self.__property + ' =', model_instance) else: return self def __set__(self, model_instance, value): """Not possible to set a new collection.""" raise ValueError('Virtual property is read-only') class CalculatedProperty(Property): def __init__(self, verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, calculated_type=int, unique=False, use_method=False): Property.__init__(self, verbose_name, name, default, required, validator, choices, unique) self.calculated_type = calculated_type self.use_method = use_method def __get__(self, obj, objtype): value = self.default_value() if obj: try: value = getattr(obj, self.slot_name) if self.use_method: value = value() except AttributeError: pass return value def __set__(self, obj, value): """Not possible to set a new AutoID.""" pass def _set_direct(self, obj, value): if not self.use_method: setattr(obj, self.slot_name, value) def get_value_for_datastore(self, model_instance): if self.calculated_type in [str, int, bool]: value = self.__get__(model_instance, model_instance.__class__) return value else: return None class ListProperty(Property): data_type = list type_name = 'List' def __init__(self, item_type, verbose_name=None, 
name=None, default=None, **kwds): if default is None: default = [] self.item_type = item_type Property.__init__(self, verbose_name, name, default=default, required=True, **kwds) def validate(self, value): if self.validator: self.validator(value) if value is not None: if not isinstance(value, list): value = [value] if self.item_type in (int, long): item_type = (int, long) elif self.item_type in (str, unicode): item_type = (str, unicode) else: item_type = self.item_type for item in value: if not isinstance(item, item_type): if item_type == (int, long): raise ValueError('Items in the %s list must all be integers.' % self.name) else: raise ValueError('Items in the %s list must all be %s instances' % (self.name, self.item_type.__name__)) return value def empty(self, value): return value is None def default_value(self): return list(super(ListProperty, self).default_value()) def __set__(self, obj, value): """Override the set method to allow them to set the property to an instance of the item_type instead of requiring a list to be passed in""" if self.item_type in (int, long): item_type = (int, long) elif self.item_type in (str, unicode): item_type = (str, unicode) else: item_type = self.item_type if isinstance(value, item_type): value = [value] elif value == None: # Override to allow them to set this to "None" to remove everything value = [] return super(ListProperty, self).__set__(obj, value) class MapProperty(Property): data_type = dict type_name = 'Map' def __init__(self, item_type=str, verbose_name=None, name=None, default=None, **kwds): if default is None: default = {} self.item_type = item_type Property.__init__(self, verbose_name, name, default=default, required=True, **kwds) def validate(self, value): value = super(MapProperty, self).validate(value) if value is not None: if not isinstance(value, dict): raise ValueError('Value must of type dict') if self.item_type in (int, long): item_type = (int, long) elif self.item_type in (str, unicode): item_type = (str, unicode) else: item_type = self.item_type for key in value: if not isinstance(value[key], item_type): if item_type == (int, long): raise ValueError('Values in the %s Map must all be integers.' % self.name) else: raise ValueError('Values in the %s Map must all be %s instances' % (self.name, self.item_type.__name__)) return value def empty(self, value): return value is None def default_value(self): return {} boto-2.20.1/boto/sdb/db/query.py000066400000000000000000000057511225267101000163620ustar00rootroot00000000000000# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. class Query(object): __local_iter__ = None def __init__(self, model_class, limit=None, next_token=None, manager=None): self.model_class = model_class self.limit = limit self.offset = 0 if manager: self.manager = manager else: self.manager = self.model_class._manager self.filters = [] self.select = None self.sort_by = None self.rs = None self.next_token = next_token def __iter__(self): return iter(self.manager.query(self)) def next(self): if self.__local_iter__ == None: self.__local_iter__ = self.__iter__() return self.__local_iter__.next() def filter(self, property_operator, value): self.filters.append((property_operator, value)) return self def fetch(self, limit, offset=0): """Not currently fully supported, but we can use this to allow them to set a limit in a chainable method""" self.limit = limit self.offset = offset return self def count(self, quick=True): return self.manager.count(self.model_class, self.filters, quick, self.sort_by, self.select) def get_query(self): return self.manager._build_filter_part(self.model_class, self.filters, self.sort_by, self.select) def order(self, key): self.sort_by = key return self def to_xml(self, doc=None): if not doc: xmlmanager = self.model_class.get_xmlmanager() doc = xmlmanager.new_doc() for obj in self: obj.to_xml(doc) return doc def get_next_token(self): if self.rs: return self.rs.next_token if self._next_token: return self._next_token return None def set_next_token(self, token): self._next_token = token next_token = property(get_next_token, set_next_token) boto-2.20.1/boto/sdb/db/sequence.py000066400000000000000000000177561225267101000170350ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.exception import SDBResponseError class SequenceGenerator(object): """Generic Sequence Generator object, this takes a single string as the "sequence" and uses that to figure out what the next value in a string is. For example if you give "ABC" and pass in "A" it will give you "B", and if you give it "C" it will give you "AA". If you set "rollover" to True in the above example, passing in "C" would give you "A" again. The Sequence string can be a string or any iterable that has the "index" function and is indexable. 
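A short usage sketch of the behavior documented above::

    gen = SequenceGenerator("ABC")
    gen("A")    # -> 'B'
    gen("B")    # -> 'C'
    gen("C")    # -> 'AA' (with rollover=True this wraps back to 'A')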
""" __name__ = "SequenceGenerator" def __init__(self, sequence_string, rollover=False): """Create a new SequenceGenerator using the sequence_string as how to generate the next item. :param sequence_string: The string or list that explains how to generate the next item in the sequence :type sequence_string: str,iterable :param rollover: Rollover instead of incrementing when we hit the end of the sequence :type rollover: bool """ self.sequence_string = sequence_string self.sequence_length = len(sequence_string[0]) self.rollover = rollover self.last_item = sequence_string[-1] self.__name__ = "%s('%s')" % (self.__class__.__name__, sequence_string) def __call__(self, val, last=None): """Get the next value in the sequence""" # If they pass us in a string that's not at least # the lenght of our sequence, then return the # first element in our sequence if val == None or len(val) < self.sequence_length: return self.sequence_string[0] last_value = val[-self.sequence_length:] if (not self.rollover) and (last_value == self.last_item): val = "%s%s" % (self(val[:-self.sequence_length]), self._inc(last_value)) else: val = "%s%s" % (val[:-self.sequence_length], self._inc(last_value)) return val def _inc(self, val): """Increment a single value""" assert(len(val) == self.sequence_length) return self.sequence_string[(self.sequence_string.index(val)+1) % len(self.sequence_string)] # # Simple Sequence Functions # def increment_by_one(cv=None, lv=None): if cv == None: return 0 return cv + 1 def double(cv=None, lv=None): if cv == None: return 1 return cv * 2 def fib(cv=1, lv=0): """The fibonacci sequence, this incrementer uses the last value""" if cv == None: cv = 1 if lv == None: lv = 0 return cv + lv increment_string = SequenceGenerator("ABCDEFGHIJKLMNOPQRSTUVWXYZ") class Sequence(object): """A simple Sequence using the new SDB "Consistent" features Based largly off of the "Counter" example from mitch garnaat: http://bitbucket.org/mitch/stupidbototricks/src/tip/counter.py""" def __init__(self, id=None, domain_name=None, fnc=increment_by_one, init_val=None): """Create a new Sequence, using an optional function to increment to the next number, by default we just increment by one. Every parameter here is optional, if you don't specify any options then you'll get a new SequenceGenerator with a random ID stored in the default domain that increments by one and uses the default botoweb environment :param id: Optional ID (name) for this counter :type id: str :param domain_name: Optional domain name to use, by default we get this out of the environment configuration :type domain_name:str :param fnc: Optional function to use for the incrementation, by default we just increment by one There are several functions defined in this module. 
Your function must accept "None" to get the initial value :type fnc: function, str :param init_val: Initial value, by default this is the first element in your sequence, but you can pass in any value, even a string if you pass in a function that uses strings instead of ints to increment """ self._db = None self._value = None self.last_value = None self.domain_name = domain_name self.id = id if init_val == None: init_val = fnc(init_val) if self.id == None: import uuid self.id = str(uuid.uuid4()) self.item_type = type(fnc(None)) self.timestamp = None # Allow us to pass in a full name to a function if isinstance(fnc, str): from boto.utils import find_class fnc = find_class(fnc) self.fnc = fnc # Bootstrap the value last if not self.val: self.val = init_val def set(self, val): """Set the value""" import time now = time.time() expected_value = [] new_val = {} new_val['timestamp'] = now if self._value != None: new_val['last_value'] = self._value expected_value = ['current_value', str(self._value)] new_val['current_value'] = val try: self.db.put_attributes(self.id, new_val, expected_value=expected_value) self.timestamp = new_val['timestamp'] except SDBResponseError, e: if e.status == 409: raise ValueError("Sequence out of sync") else: raise def get(self): """Get the value""" val = self.db.get_attributes(self.id, consistent_read=True) if val: if 'timestamp' in val: self.timestamp = val['timestamp'] if 'current_value' in val: self._value = self.item_type(val['current_value']) if "last_value" in val and val['last_value'] != None: self.last_value = self.item_type(val['last_value']) return self._value val = property(get, set) def __repr__(self): return "%s('%s', '%s', '%s.%s', '%s')" % ( self.__class__.__name__, self.id, self.domain_name, self.fnc.__module__, self.fnc.__name__, self.val) def _connect(self): """Connect to our domain""" if not self._db: import boto sdb = boto.connect_sdb() if not self.domain_name: self.domain_name = boto.config.get("DB", "sequence_db", boto.config.get("DB", "db_name", "default")) try: self._db = sdb.get_domain(self.domain_name) except SDBResponseError, e: if e.status == 400: self._db = sdb.create_domain(self.domain_name) else: raise return self._db db = property(_connect) def next(self): self.val = self.fnc(self.val, self.last_value) return self.val def delete(self): """Remove this sequence""" self.db.delete_attributes(self.id) boto-2.20.1/boto/sdb/db/test_db.py000066400000000000000000000124671225267101000166430ustar00rootroot00000000000000import logging import time from datetime import datetime from boto.sdb.db.model import Model from boto.sdb.db.property import StringProperty, IntegerProperty, BooleanProperty from boto.sdb.db.property import DateTimeProperty, FloatProperty, ReferenceProperty from boto.sdb.db.property import PasswordProperty, ListProperty, MapProperty from boto.exception import SDBPersistenceError logging.basicConfig() log = logging.getLogger('test_db') log.setLevel(logging.DEBUG) _objects = {} # # This will eventually be moved to the boto.tests module and become a real unit test # but for now it will live here. It shows examples of each of the Property types in # use and tests the basic operations. 
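# A hedged usage sketch for the Sequence class defined in sequence.py above
# (the id and domain_name here are made-up placeholders; a configured SDB
# connection is assumed):
#
#     from boto.sdb.db.sequence import Sequence
#     seq = Sequence(id='my-counter', domain_name='my-sequences')
#     seq.next()    # -> 1 (construction seeds the stored value at 0)
#     seq.next()    # -> 2
#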
# class TestBasic(Model): name = StringProperty() size = IntegerProperty() foo = BooleanProperty() date = DateTimeProperty() class TestFloat(Model): name = StringProperty() value = FloatProperty() class TestRequired(Model): req = StringProperty(required=True, default='foo') class TestReference(Model): ref = ReferenceProperty(reference_class=TestBasic, collection_name='refs') class TestSubClass(TestBasic): answer = IntegerProperty() class TestPassword(Model): password = PasswordProperty() class TestList(Model): name = StringProperty() nums = ListProperty(int) class TestMap(Model): name = StringProperty() map = MapProperty() class TestListReference(Model): name = StringProperty() basics = ListProperty(TestBasic) class TestAutoNow(Model): create_date = DateTimeProperty(auto_now_add=True) modified_date = DateTimeProperty(auto_now=True) class TestUnique(Model): name = StringProperty(unique=True) def test_basic(): global _objects t = TestBasic() t.name = 'simple' t.size = -42 t.foo = True t.date = datetime.now() log.debug('saving object') t.put() _objects['test_basic_t'] = t time.sleep(5) log.debug('now try retrieving it') tt = TestBasic.get_by_id(t.id) _objects['test_basic_tt'] = tt assert tt.id == t.id l = TestBasic.get_by_id([t.id]) assert len(l) == 1 assert l[0].id == t.id assert t.size == tt.size assert t.foo == tt.foo assert t.name == tt.name #assert t.date == tt.date return t def test_float(): global _objects t = TestFloat() t.name = 'float object' t.value = 98.6 log.debug('saving object') t.save() _objects['test_float_t'] = t time.sleep(5) log.debug('now try retrieving it') tt = TestFloat.get_by_id(t.id) _objects['test_float_tt'] = tt assert tt.id == t.id assert tt.name == t.name assert tt.value == t.value return t def test_required(): global _objects t = TestRequired() _objects['test_required_t'] = t t.put() return t def test_reference(t=None): global _objects if not t: t = test_basic() tt = TestReference() tt.ref = t tt.put() time.sleep(10) tt = TestReference.get_by_id(tt.id) _objects['test_reference_tt'] = tt assert tt.ref.id == t.id for o in t.refs: log.debug(o) def test_subclass(): global _objects t = TestSubClass() _objects['test_subclass_t'] = t t.name = 'a subclass' t.size = -489 t.save() def test_password(): global _objects t = TestPassword() _objects['test_password_t'] = t t.password = "foo" t.save() time.sleep(5) # Make sure it stored ok tt = TestPassword.get_by_id(t.id) _objects['test_password_tt'] = tt #Testing password equality assert tt.password == "foo" #Testing password not stored as string assert str(tt.password) != "foo" def test_list(): global _objects t = TestList() _objects['test_list_t'] = t t.name = 'a list of ints' t.nums = [1, 2, 3, 4, 5] t.put() tt = TestList.get_by_id(t.id) _objects['test_list_tt'] = tt assert tt.name == t.name for n in tt.nums: assert isinstance(n, int) def test_list_reference(): global _objects t = TestBasic() t.put() _objects['test_list_ref_t'] = t tt = TestListReference() tt.name = "foo" tt.basics = [t] tt.put() time.sleep(5) _objects['test_list_ref_tt'] = tt ttt = TestListReference.get_by_id(tt.id) assert ttt.basics[0].id == t.id def test_unique(): global _objects t = TestUnique() name = 'foo' + str(int(time.time())) t.name = name t.put() _objects['test_unique_t'] = t time.sleep(10) tt = TestUnique() _objects['test_unique_tt'] = tt tt.name = name try: tt.put() assert False except(SDBPersistenceError): pass def test_datetime(): global _objects t = TestAutoNow() t.put() _objects['test_datetime_t'] = t time.sleep(5) tt = 
TestAutoNow.get_by_id(t.id) assert tt.create_date.timetuple() == t.create_date.timetuple() def test(): log.info('test_basic') t1 = test_basic() log.info('test_required') test_required() log.info('test_reference') test_reference(t1) log.info('test_subclass') test_subclass() log.info('test_password') test_password() log.info('test_list') test_list() log.info('test_list_reference') test_list_reference() log.info("test_datetime") test_datetime() log.info('test_unique') test_unique() if __name__ == "__main__": test() boto-2.20.1/boto/sdb/domain.py000066400000000000000000000337011225267101000160730ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an SDB Domain """ from boto.sdb.queryresultset import SelectResultSet class Domain: def __init__(self, connection=None, name=None): self.connection = connection self.name = name self._metadata = None def __repr__(self): return 'Domain:%s' % self.name def __iter__(self): return iter(self.select("SELECT * FROM `%s`" % self.name)) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'DomainName': self.name = value else: setattr(self, name, value) def get_metadata(self): if not self._metadata: self._metadata = self.connection.domain_metadata(self) return self._metadata def put_attributes(self, item_name, attributes, replace=True, expected_value=None): """ Store attributes for a given item. :type item_name: string :param item_name: The name of the item whose attributes are being stored. :type attribute_names: dict or dict-like object :param attribute_names: The name/value pairs to store as attributes :type expected_value: list :param expected_value: If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form: * ['name', 'value'] In which case the call will first verify that the attribute "name" of this item has a value of "value". If it does, the delete will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form: * ['name', True|False] which will simply check for the existence (True) or non-existence (False) of the attribute. :type replace: bool :param replace: Whether the attribute values passed in will replace existing values or will be added as addition values. Defaults to True. 
:rtype: bool :return: True if successful """ return self.connection.put_attributes(self, item_name, attributes, replace, expected_value) def batch_put_attributes(self, items, replace=True): """ Store attributes for multiple items. :type items: dict or dict-like object :param items: A dictionary-like object. The keys of the dictionary are the item names and the values are themselves dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call. :type replace: bool :param replace: Whether the attribute values passed in will replace existing values or will be added as addition values. Defaults to True. :rtype: bool :return: True if successful """ return self.connection.batch_put_attributes(self, items, replace) def get_attributes(self, item_name, attribute_name=None, consistent_read=False, item=None): """ Retrieve attributes for a given item. :type item_name: string :param item_name: The name of the item whose attributes are being retrieved. :type attribute_names: string or list of strings :param attribute_names: An attribute name or list of attribute names. This parameter is optional. If not supplied, all attributes will be retrieved for the item. :rtype: :class:`boto.sdb.item.Item` :return: An Item mapping type containing the requested attribute name/values """ return self.connection.get_attributes(self, item_name, attribute_name, consistent_read, item) def delete_attributes(self, item_name, attributes=None, expected_values=None): """ Delete attributes from a given item. :type item_name: string :param item_name: The name of the item whose attributes are being deleted. :type attributes: dict, list or :class:`boto.sdb.item.Item` :param attributes: Either a list containing attribute names which will cause all values associated with that attribute name to be deleted or a dict or Item containing the attribute names and keys and list of values to delete as the value. If no value is supplied, all attribute name/values for the item will be deleted. :type expected_value: list :param expected_value: If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form: * ['name', 'value'] In which case the call will first verify that the attribute "name" of this item has a value of "value". If it does, the delete will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form: * ['name', True|False] which will simply check for the existence (True) or non-existence (False) of the attribute. :rtype: bool :return: True if successful """ return self.connection.delete_attributes(self, item_name, attributes, expected_values) def batch_delete_attributes(self, items): """ Delete multiple items in this domain. :type items: dict or dict-like object :param items: A dictionary-like object. The keys of the dictionary are the item names and the values are either: * dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call. The attribute name/value pairs will only be deleted if they match the name/value pairs passed in. * None which means that all attributes associated with the item should be deleted. :rtype: bool :return: True if successful """ return self.connection.batch_delete_attributes(self, items) def select(self, query='', next_token=None, consistent_read=False, max_items=None): """ Returns a set of Attributes for item names within domain_name that match the query. 
The query must be expressed using the SELECT style syntax rather than the original SimpleDB query language. :type query: string :param query: The SimpleDB query to be performed. :rtype: iter :return: An iterator containing the results. This is actually a generator function that will iterate across all search results, not just the first page. """ return SelectResultSet(self, query, max_items=max_items, next_token=next_token, consistent_read=consistent_read) def get_item(self, item_name, consistent_read=False): """ Retrieves an item from the domain, along with all of its attributes. :param string item_name: The name of the item to retrieve. :rtype: :class:`boto.sdb.item.Item` or ``None`` :keyword bool consistent_read: When set to true, ensures that the most recent data is returned. :return: The requested item, or ``None`` if there was no match found """ item = self.get_attributes(item_name, consistent_read=consistent_read) if item: item.domain = self return item else: return None def new_item(self, item_name): return self.connection.item_cls(self, item_name) def delete_item(self, item): self.delete_attributes(item.name) def to_xml(self, f=None): """Get this domain as an XML DOM Document :param f: Optional File to dump directly to :type f: File or Stream :return: File object where the XML has been dumped to :rtype: file """ if not f: from tempfile import TemporaryFile f = TemporaryFile() print >> f, '<?xml version="1.0" encoding="UTF-8"?>' print >> f, '<Domain id="%s">' % self.name for item in self: print >> f, '\t<Item id="%s">' % item.name for k in item: print >> f, '\t\t<attribute id="%s">' % k values = item[k] if not isinstance(values, list): values = [values] for value in values: print >> f, '\t\t\t<value><![CDATA[', if isinstance(value, unicode): value = value.encode('utf-8', 'replace') else: value = unicode(value, errors='replace').encode('utf-8', 'replace') f.write(value) print >> f, ']]></value>' print >> f, '\t\t</attribute>' print >> f, '\t</Item>' print >> f, '</Domain>' f.flush() f.seek(0) return f def from_xml(self, doc): """Load this domain based on an XML document""" import xml.sax handler = DomainDumpParser(self) xml.sax.parse(doc, handler) return handler def delete(self): """ Delete this domain, and all items under it """ return self.connection.delete_domain(self) class DomainMetaData: def __init__(self, domain=None): self.domain = domain self.item_count = None self.item_names_size = None self.attr_name_count = None self.attr_names_size = None self.attr_value_count = None self.attr_values_size = None def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'ItemCount': self.item_count = int(value) elif name == 'ItemNamesSizeBytes': self.item_names_size = int(value) elif name == 'AttributeNameCount': self.attr_name_count = int(value) elif name == 'AttributeNamesSizeBytes': self.attr_names_size = int(value) elif name == 'AttributeValueCount': self.attr_value_count = int(value) elif name == 'AttributeValuesSizeBytes': self.attr_values_size = int(value) elif name == 'Timestamp': self.timestamp = value else: setattr(self, name, value) import sys from xml.sax.handler import ContentHandler class DomainDumpParser(ContentHandler): """ SAX parser for a domain that has been dumped """ def __init__(self, domain): self.uploader = UploaderThread(domain) self.item_id = None self.attrs = {} self.attribute = None self.value = "" self.domain = domain def startElement(self, name, attrs): if name == "Item": self.item_id = attrs['id'] self.attrs = {} elif name == "attribute": self.attribute = attrs['id'] elif name == "value": self.value = "" def characters(self, ch): self.value += ch def endElement(self, name): if name == "value": if self.value and self.attribute: value = self.value.strip() attr_name = self.attribute.strip() if
attr_name in self.attrs: self.attrs[attr_name].append(value) else: self.attrs[attr_name] = [value] elif name == "Item": self.uploader.items[self.item_id] = self.attrs # Every 20 items we spawn off the uploader if len(self.uploader.items) >= 20: self.uploader.start() self.uploader = UploaderThread(self.domain) elif name == "Domain": # If we're done, spawn off our last Uploader Thread self.uploader.start() from threading import Thread class UploaderThread(Thread): """Uploader Thread""" def __init__(self, domain): self.db = domain self.items = {} Thread.__init__(self) def run(self): try: self.db.batch_put_attributes(self.items) except: print "Exception using batch put, trying regular put instead" for item_name in self.items: self.db.put_attributes(item_name, self.items[item_name]) print ".", sys.stdout.flush() boto-2.20.1/boto/sdb/item.py000066400000000000000000000154761225267101000155730ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import base64 class Item(dict): """ A ``dict`` sub-class that serves as an object representation of a SimpleDB item. An item in SDB is similar to a row in a relational database. Items belong to a :py:class:`Domain `, which is similar to a table in a relational database. The keys on instances of this object correspond to attributes that are stored on the SDB item. .. tip:: While it is possible to instantiate this class directly, you may want to use the convenience methods on :py:class:`boto.sdb.domain.Domain` for that purpose. For example, :py:meth:`boto.sdb.domain.Domain.get_item`. """ def __init__(self, domain, name='', active=False): """ :type domain: :py:class:`boto.sdb.domain.Domain` :param domain: The domain that this item belongs to. :param str name: The name of this item. 
This name will be used when querying for items using methods like :py:meth:`boto.sdb.domain.Domain.get_item` """ dict.__init__(self) self.domain = domain self.name = name self.active = active self.request_id = None self.encoding = None self.in_attribute = False self.converter = self.domain.connection.converter def startElement(self, name, attrs, connection): if name == 'Attribute': self.in_attribute = True self.encoding = attrs.get('encoding', None) return None def decode_value(self, value): if self.encoding == 'base64': self.encoding = None return base64.decodestring(value) else: return value def endElement(self, name, value, connection): if name == 'ItemName': self.name = self.decode_value(value) elif name == 'Name': if self.in_attribute: self.last_key = self.decode_value(value) else: self.name = self.decode_value(value) elif name == 'Value': if self.last_key in self: if not isinstance(self[self.last_key], list): self[self.last_key] = [self[self.last_key]] value = self.decode_value(value) if self.converter: value = self.converter.decode(value) self[self.last_key].append(value) else: value = self.decode_value(value) if self.converter: value = self.converter.decode(value) self[self.last_key] = value elif name == 'BoxUsage': try: connection.box_usage += float(value) except: pass elif name == 'RequestId': self.request_id = value elif name == 'Attribute': self.in_attribute = False else: setattr(self, name, value) def load(self): """ Loads or re-loads this item's attributes from SDB. .. warning:: If you have changed attribute values on an Item instance, this method will over-write the values if they are different in SDB. For any local attributes that don't yet exist in SDB, they will be safe. """ self.domain.get_attributes(self.name, item=self) def save(self, replace=True): """ Saves this item to SDB. :param bool replace: If ``True``, delete any attributes on the remote SDB item that have a ``None`` value on this object. """ self.domain.put_attributes(self.name, self, replace) # Delete any attributes set to "None" if replace: del_attrs = [] for name in self: if self[name] == None: del_attrs.append(name) if len(del_attrs) > 0: self.domain.delete_attributes(self.name, del_attrs) def add_value(self, key, value): """ Helps set or add to attributes on this item. If you are adding a new attribute that has yet to be set, it will simply create an attribute named ``key`` with your given ``value`` as its value. If you are adding a value to an existing attribute, this method will convert the attribute to a list (if it isn't already) and append your new value to said list. For clarification, consider the following interactive session: .. code-block:: python >>> item = some_domain.get_item('some_item') >>> item.has_key('some_attr') False >>> item.add_value('some_attr', 1) >>> item['some_attr'] 1 >>> item.add_value('some_attr', 2) >>> item['some_attr'] [1, 2] :param str key: The attribute to add a value to. :param object value: The value to set or append to the attribute. """ if key in self: # We already have this key on the item. if not isinstance(self[key], list): # The key isn't already a list, take its current value and # convert it to a list with the only member being the # current value. self[key] = [self[key]] # Add the new value to the list. self[key].append(value) else: # This is a new attribute, just set it. self[key] = value def delete(self): """ Deletes this item in SDB. .. note:: This local Python object remains in its current state after deletion, this only deletes the remote item in SDB. 
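A minimal sketch (``some_domain`` and ``'some_item'`` are placeholders)::

    item = some_domain.get_item('some_item')
    if item is not None:
        item.delete()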
""" self.domain.delete_item(self) boto-2.20.1/boto/sdb/queryresultset.py000066400000000000000000000070741225267101000177500ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. def query_lister(domain, query='', max_items=None, attr_names=None): more_results = True num_results = 0 next_token = None while more_results: rs = domain.connection.query_with_attributes(domain, query, attr_names, next_token=next_token) for item in rs: if max_items: if num_results == max_items: raise StopIteration yield item num_results += 1 next_token = rs.next_token more_results = next_token != None class QueryResultSet: def __init__(self, domain=None, query='', max_items=None, attr_names=None): self.max_items = max_items self.domain = domain self.query = query self.attr_names = attr_names def __iter__(self): return query_lister(self.domain, self.query, self.max_items, self.attr_names) def select_lister(domain, query='', max_items=None): more_results = True num_results = 0 next_token = None while more_results: rs = domain.connection.select(domain, query, next_token=next_token) for item in rs: if max_items: if num_results == max_items: raise StopIteration yield item num_results += 1 next_token = rs.next_token more_results = next_token != None class SelectResultSet(object): def __init__(self, domain=None, query='', max_items=None, next_token=None, consistent_read=False): self.domain = domain self.query = query self.consistent_read = consistent_read self.max_items = max_items self.next_token = next_token def __iter__(self): more_results = True num_results = 0 while more_results: rs = self.domain.connection.select(self.domain, self.query, next_token=self.next_token, consistent_read=self.consistent_read) for item in rs: if self.max_items and num_results >= self.max_items: raise StopIteration yield item num_results += 1 self.next_token = rs.next_token if self.max_items and num_results >= self.max_items: raise StopIteration more_results = self.next_token != None def next(self): return self.__iter__().next() boto-2.20.1/boto/sdb/regioninfo.py000066400000000000000000000027051225267101000167630ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo class SDBRegionInfo(RegionInfo): def __init__(self, connection=None, name=None, endpoint=None): from boto.sdb.connection import SDBConnection RegionInfo.__init__(self, connection, name, endpoint, SDBConnection) boto-2.20.1/boto/services/000077500000000000000000000000001225267101000153215ustar00rootroot00000000000000boto-2.20.1/boto/services/__init__.py000066400000000000000000000021241225267101000174310ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/boto/services/bs.py000077500000000000000000000176631225267101000163170ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2006-2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from optparse import OptionParser from boto.services.servicedef import ServiceDef from boto.services.submit import Submitter from boto.services.result import ResultProcessor import boto import sys, os, StringIO class BS(object): Usage = "usage: %prog [options] config_file command" Commands = {'reset' : 'Clear input queue and output bucket', 'submit' : 'Submit local files to the service', 'start' : 'Start the service', 'status' : 'Report on the status of the service buckets and queues', 'retrieve' : 'Retrieve output generated by a batch', 'batches' : 'List all batches stored in current output_domain'} def __init__(self): self.service_name = None self.parser = OptionParser(usage=self.Usage) self.parser.add_option("--help-commands", action="store_true", dest="help_commands", help="provides help on the available commands") self.parser.add_option("-a", "--access-key", action="store", type="string", help="your AWS Access Key") self.parser.add_option("-s", "--secret-key", action="store", type="string", help="your AWS Secret Access Key") self.parser.add_option("-p", "--path", action="store", type="string", dest="path", help="the path to local directory for submit and retrieve") self.parser.add_option("-k", "--keypair", action="store", type="string", dest="keypair", help="the SSH keypair used with launched instance(s)") self.parser.add_option("-l", "--leave", action="store_true", dest="leave", help="leave the files (don't retrieve) files during retrieve command") self.parser.set_defaults(leave=False) self.parser.add_option("-n", "--num-instances", action="store", type="string", dest="num_instances", help="the number of launched instance(s)") self.parser.set_defaults(num_instances=1) self.parser.add_option("-i", "--ignore-dirs", action="append", type="string", dest="ignore", help="directories that should be ignored by submit command") self.parser.add_option("-b", "--batch-id", action="store", type="string", dest="batch", help="batch identifier required by the retrieve command") def print_command_help(self): print '\nCommands:' for key in self.Commands.keys(): print ' %s\t\t%s' % (key, self.Commands[key]) def do_reset(self): iq = self.sd.get_obj('input_queue') if iq: print 'clearing out input queue' i = 0 m = iq.read() while m: i += 1 iq.delete_message(m) m = iq.read() print 'deleted %d messages' % i ob = self.sd.get_obj('output_bucket') ib = self.sd.get_obj('input_bucket') if ob: if ib and ob.name == ib.name: return print 'delete generated files in output bucket' i = 0 for k in ob: i += 1 k.delete() print 'deleted %d keys' % i def do_submit(self): if not self.options.path: self.parser.error('No path provided') if not os.path.exists(self.options.path): self.parser.error('Invalid path (%s)' % self.options.path) s = Submitter(self.sd) t = s.submit_path(self.options.path, None, self.options.ignore, None, None, True, self.options.path) print 'A total of %d files were submitted' % t[1] print 'Batch Identifier: %s' % t[0] def do_start(self): ami_id = self.sd.get('ami_id') instance_type = self.sd.get('instance_type', 'm1.small') security_group = 
self.sd.get('security_group', 'default') if not ami_id: self.parser.error('ami_id option is required when starting the service') ec2 = boto.connect_ec2() if not self.sd.has_section('Credentials'): self.sd.add_section('Credentials') self.sd.set('Credentials', 'aws_access_key_id', ec2.aws_access_key_id) self.sd.set('Credentials', 'aws_secret_access_key', ec2.aws_secret_access_key) s = StringIO.StringIO() self.sd.write(s) rs = ec2.get_all_images([ami_id]) img = rs[0] r = img.run(user_data=s.getvalue(), key_name=self.options.keypair, max_count=self.options.num_instances, instance_type=instance_type, security_groups=[security_group]) print 'Starting AMI: %s' % ami_id print 'Reservation %s contains the following instances:' % r.id for i in r.instances: print '\t%s' % i.id def do_status(self): iq = self.sd.get_obj('input_queue') if iq: print 'The input_queue (%s) contains approximately %s messages' % (iq.id, iq.count()) ob = self.sd.get_obj('output_bucket') ib = self.sd.get_obj('input_bucket') if ob: if ib and ob.name == ib.name: return total = 0 for k in ob: total += 1 print 'The output_bucket (%s) contains %d keys' % (ob.name, total) def do_retrieve(self): if not self.options.path: self.parser.error('No path provided') if not os.path.exists(self.options.path): self.parser.error('Invalid path (%s)' % self.options.path) if not self.options.batch: self.parser.error('batch identifier is required for retrieve command') s = ResultProcessor(self.options.batch, self.sd) s.get_results(self.options.path, get_file=(not self.options.leave)) def do_batches(self): d = self.sd.get_obj('output_domain') if d: print 'Available Batches:' rs = d.query("['type'='Batch']") for item in rs: print ' %s' % item.name else: self.parser.error('No output_domain specified for service') def main(self): self.options, self.args = self.parser.parse_args() if self.options.help_commands: self.print_command_help() sys.exit(0) if len(self.args) != 2: self.parser.error("config_file and command are required") self.config_file = self.args[0] self.sd = ServiceDef(self.config_file) self.command = self.args[1] if hasattr(self, 'do_%s' % self.command): method = getattr(self, 'do_%s' % self.command) method() else: self.parser.error('command (%s) not recognized' % self.command) if __name__ == "__main__": bs = BS() bs.main() boto-2.20.1/boto/services/message.py000066400000000000000000000045311225267101000173220ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
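# A hedged sketch of driving the BS command-line helper defined in bs.py
# above (the config file name, paths, and batch id are placeholders):
#
#     python bs.py myservice.cfg submit --path /data/inputs
#     python bs.py myservice.cfg status
#     python bs.py myservice.cfg retrieve --path /data/outputs --batch-id <batch>
#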
from boto.sqs.message import MHMessage from boto.utils import get_ts from socket import gethostname import os, mimetypes, time class ServiceMessage(MHMessage): def for_key(self, key, params=None, bucket_name=None): if params: self.update(params) if key.path: t = os.path.split(key.path) self['OriginalLocation'] = t[0] self['OriginalFileName'] = t[1] mime_type = mimetypes.guess_type(t[1])[0] if mime_type == None: mime_type = 'application/octet-stream' self['Content-Type'] = mime_type s = os.stat(key.path) t = time.gmtime(s[7]) self['FileAccessedDate'] = get_ts(t) t = time.gmtime(s[8]) self['FileModifiedDate'] = get_ts(t) t = time.gmtime(s[9]) self['FileCreateDate'] = get_ts(t) else: self['OriginalFileName'] = key.name self['OriginalLocation'] = key.bucket.name self['ContentType'] = key.content_type self['Host'] = gethostname() if bucket_name: self['Bucket'] = bucket_name else: self['Bucket'] = key.bucket.name self['InputKey'] = key.name self['Size'] = key.size boto-2.20.1/boto/services/result.py000066400000000000000000000127311225267101000172150ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
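# A hedged sketch of how ServiceMessage.for_key above might be used to
# enqueue work for an S3 key (the queue, key, and 'Batch' value are
# assumptions, drawn from how ResultProcessor below filters messages):
#
#     m = ServiceMessage()
#     m.for_key(key, params={'Batch': batch_id}, bucket_name='my-input-bucket')
#     input_queue.write(m)
#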
import os from datetime import datetime, timedelta from boto.utils import parse_ts import boto class ResultProcessor: LogFileName = 'log.csv' def __init__(self, batch_name, sd, mimetype_files=None): self.sd = sd self.batch = batch_name self.log_fp = None self.num_files = 0 self.total_time = 0 self.min_time = timedelta.max self.max_time = timedelta.min self.earliest_time = datetime.max self.latest_time = datetime.min self.queue = self.sd.get_obj('output_queue') self.domain = self.sd.get_obj('output_domain') def calculate_stats(self, msg): start_time = parse_ts(msg['Service-Read']) end_time = parse_ts(msg['Service-Write']) elapsed_time = end_time - start_time if elapsed_time > self.max_time: self.max_time = elapsed_time if elapsed_time < self.min_time: self.min_time = elapsed_time self.total_time += elapsed_time.seconds if start_time < self.earliest_time: self.earliest_time = start_time if end_time > self.latest_time: self.latest_time = end_time def log_message(self, msg, path): keys = sorted(msg.keys()) if not self.log_fp: self.log_fp = open(os.path.join(path, self.LogFileName), 'a') line = ','.join(keys) self.log_fp.write(line+'\n') values = [] for key in keys: value = msg[key] if value.find(',') > 0: value = '"%s"' % value values.append(value) line = ','.join(values) self.log_fp.write(line+'\n') def process_record(self, record, path, get_file=True): self.log_message(record, path) self.calculate_stats(record) outputs = record['OutputKey'].split(',') if 'OutputBucket' in record: bucket = boto.lookup('s3', record['OutputBucket']) else: bucket = boto.lookup('s3', record['Bucket']) for output in outputs: if get_file: key_name = output.split(';')[0] key = bucket.lookup(key_name) file_name = os.path.join(path, key_name) print 'retrieving file: %s to %s' % (key_name, file_name) key.get_contents_to_filename(file_name) self.num_files += 1 def get_results_from_queue(self, path, get_file=True, delete_msg=True): m = self.queue.read() while m: if 'Batch' in m and m['Batch'] == self.batch: self.process_record(m, path, get_file) if delete_msg: self.queue.delete_message(m) m = self.queue.read() def get_results_from_domain(self, path, get_file=True): rs = self.domain.query("['Batch'='%s']" % self.batch) for item in rs: self.process_record(item, path, get_file) def get_results_from_bucket(self, path): bucket = self.sd.get_obj('output_bucket') if bucket: print 'No output queue or domain, just retrieving files from output_bucket' for key in bucket: file_name = os.path.join(path, key.name) print 'retrieving file: %s to %s' % (key.name, file_name) key.get_contents_to_filename(file_name) self.num_files += 1 def get_results(self, path, get_file=True, delete_msg=True): if not os.path.isdir(path): os.mkdir(path) if self.queue: self.get_results_from_queue(path, get_file) elif self.domain: self.get_results_from_domain(path, get_file) else: self.get_results_from_bucket(path) if self.log_fp: self.log_fp.close() print '%d results successfully retrieved.'
% self.num_files if self.num_files > 0: self.avg_time = float(self.total_time)/self.num_files print 'Minimum Processing Time: %d' % self.min_time.seconds print 'Maximum Processing Time: %d' % self.max_time.seconds print 'Average Processing Time: %f' % self.avg_time self.elapsed_time = self.latest_time-self.earliest_time print 'Elapsed Time: %d' % self.elapsed_time.seconds tput = 1.0 / ((self.elapsed_time.seconds/60.0) / self.num_files) print 'Throughput: %f transactions / minute' % tput boto-2.20.1/boto/services/service.py000066400000000000000000000147611225267101000173440ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from boto.services.message import ServiceMessage from boto.services.servicedef import ServiceDef from boto.pyami.scriptbase import ScriptBase from boto.utils import get_ts import time import os import mimetypes class Service(ScriptBase): # Time required to process a transaction ProcessingTime = 60 def __init__(self, config_file=None, mimetype_files=None): ScriptBase.__init__(self, config_file) self.name = self.__class__.__name__ self.working_dir = boto.config.get('Pyami', 'working_dir') self.sd = ServiceDef(config_file) self.retry_count = self.sd.getint('retry_count', 5) self.loop_delay = self.sd.getint('loop_delay', 30) self.processing_time = self.sd.getint('processing_time', 60) self.input_queue = self.sd.get_obj('input_queue') self.output_queue = self.sd.get_obj('output_queue') self.output_domain = self.sd.get_obj('output_domain') if mimetype_files: mimetypes.init(mimetype_files) def split_key(key): if key.find(';') < 0: t = (key, '') else: key, type = key.split(';') label, mtype = type.split('=') t = (key, mtype) return t def read_message(self): boto.log.info('read_message') message = self.input_queue.read(self.processing_time) if message: boto.log.info(message.get_body()) key = 'Service-Read' message[key] = get_ts() return message # retrieve the source file from S3 def get_file(self, message): bucket_name = message['Bucket'] key_name = message['InputKey'] file_name = os.path.join(self.working_dir, message.get('OriginalFileName', 'in_file')) boto.log.info('get_file: %s/%s to %s' % (bucket_name, key_name, file_name)) bucket = boto.lookup('s3', bucket_name) key = bucket.new_key(key_name) key.get_contents_to_filename(os.path.join(self.working_dir, file_name)) return file_name # process source file, return list of output files def process_file(self, in_file_name, msg): return [] # store result file 
in S3 def put_file(self, bucket_name, file_path, key_name=None): boto.log.info('putting file %s as %s.%s' % (file_path, bucket_name, key_name)) bucket = boto.lookup('s3', bucket_name) key = bucket.new_key(key_name) key.set_contents_from_filename(file_path) return key def save_results(self, results, input_message, output_message): output_keys = [] for file, type in results: if 'OutputBucket' in input_message: output_bucket = input_message['OutputBucket'] else: output_bucket = input_message['Bucket'] key_name = os.path.split(file)[1] key = self.put_file(output_bucket, file, key_name) output_keys.append('%s;type=%s' % (key.name, type)) output_message['OutputKey'] = ','.join(output_keys) # write message to each output queue def write_message(self, message): message['Service-Write'] = get_ts() message['Server'] = self.name if 'HOSTNAME' in os.environ: message['Host'] = os.environ['HOSTNAME'] else: message['Host'] = 'unknown' message['Instance-ID'] = self.instance_id if self.output_queue: boto.log.info('Writing message to SQS queue: %s' % self.output_queue.id) self.output_queue.write(message) if self.output_domain: boto.log.info('Writing message to SDB domain: %s' % self.output_domain.name) item_name = '/'.join([message['Service-Write'], message['Bucket'], message['InputKey']]) self.output_domain.put_attributes(item_name, message) # delete message from input queue def delete_message(self, message): boto.log.info('deleting message from %s' % self.input_queue.id) self.input_queue.delete_message(message) # to clean up any files, etc. after each iteration def cleanup(self): pass def shutdown(self): on_completion = self.sd.get('on_completion', 'shutdown') if on_completion == 'shutdown': if self.instance_id: time.sleep(60) c = boto.connect_ec2() c.terminate_instances([self.instance_id]) def main(self, notify=False): self.notify('Service: %s Starting' % self.name) empty_reads = 0 while self.retry_count < 0 or empty_reads < self.retry_count: try: input_message = self.read_message() if input_message: empty_reads = 0 output_message = ServiceMessage(None, input_message.get_body()) input_file = self.get_file(input_message) results = self.process_file(input_file, output_message) self.save_results(results, input_message, output_message) self.write_message(output_message) self.delete_message(input_message) self.cleanup() else: empty_reads += 1 time.sleep(self.loop_delay) except Exception: boto.log.exception('Service Failed') empty_reads += 1 self.notify('Service: %s Shutting Down' % self.name) self.shutdown() boto-2.20.1/boto/services/servicedef.py000066400000000000000000000064151225267101000200200ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
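# --- Illustrative sketch; not part of the original boto source ---
# A minimal Service subclass: the framework's main() loop calls
# process_file() once per input message and expects a list of
# (output_path, mimetype) tuples, which save_results() then uploads.
# The class name and file handling below are hypothetical.
from boto.services.service import Service

class UpperCaseService(Service):
    """Toy service that upper-cases the contents of each input file."""
    def process_file(self, in_file_name, msg):
        out_file_name = in_file_name + '.out'
        in_fp = open(in_file_name)
        out_fp = open(out_file_name, 'w')
        out_fp.write(in_fp.read().upper())
        in_fp.close()
        out_fp.close()
        return [(out_file_name, 'text/plain')]
# --- end sketch ---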
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.pyami.config import Config from boto.services.message import ServiceMessage import boto class ServiceDef(Config): def __init__(self, config_file, aws_access_key_id=None, aws_secret_access_key=None): Config.__init__(self, config_file) self.aws_access_key_id = aws_access_key_id self.aws_secret_access_key = aws_secret_access_key script = Config.get(self, 'Pyami', 'scripts') if script: self.name = script.split('.')[-1] else: self.name = None def get(self, name, default=None): return Config.get(self, self.name, name, default) def has_option(self, option): return Config.has_option(self, self.name, option) def getint(self, option, default=0): try: val = Config.get(self, self.name, option) val = int(val) except: val = int(default) return val def getbool(self, option, default=False): try: val = Config.get(self, self.name, option) if val.lower() == 'true': val = True else: val = False except: val = default return val def get_obj(self, name): """ Returns the AWS object associated with a given option. The heuristics used are a bit lame. If the option name contains the word 'bucket' it is assumed to be an S3 bucket, if the name contains the word 'queue' it is assumed to be an SQS queue and if it contains the word 'domain' it is assumed to be a SimpleDB domain. If the option name specified does not exist in the config file or if the AWS object cannot be retrieved this returns None. """ val = self.get(name) if not val: return None if name.find('queue') >= 0: obj = boto.lookup('sqs', val) if obj: obj.set_message_class(ServiceMessage) elif name.find('bucket') >= 0: obj = boto.lookup('s3', val) elif name.find('domain') >= 0: obj = boto.lookup('sdb', val) else: obj = None return obj boto-2.20.1/boto/services/sonofmmm.cfg000066400000000000000000000027641225267101000176460ustar00rootroot00000000000000# # Your AWS Credentials # You only need to supply these in this file if you are not using # the boto tools to start your service # #[Credentials] #aws_access_key_id = #aws_secret_access_key = # # Fill out this section if you want emails from the service # when it starts and stops # #[Notification] #smtp_host = #smtp_user = #smtp_pass = #smtp_from = #smtp_to = [Pyami] scripts = boto.services.sonofmmm.SonOfMMM [SonOfMMM] # id of the AMI to be launched ami_id = ami-dc799cb5 # number of times service will read an empty queue before exiting # a negative value will cause the service to run forever retry_count = 5 # seconds to wait after empty queue read before reading again loop_delay = 10 # average time it takes to process a transaction # controls invisibility timeout of messages processing_time = 60 ffmpeg_args = -y -i %%s -f mov -r 29.97 -b 1200kb -mbd 2 -flags +4mv+trell -aic 2 -cmp 2 -subcmp 2 -ar 48000 -ab 19200 -s 320x240 -vcodec mpeg4 -acodec libfaac %%s output_mimetype = video/quicktime output_ext = .mov input_bucket = output_bucket = output_domain = output_queue = input_queue = boto-2.20.1/boto/services/sonofmmm.py000066400000000000000000000066311225267101000175340ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # 
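# --- Illustrative sketch; not part of the original boto source ---
# ServiceDef.get_obj() dispatches on the option *name*: names containing
# 'queue' resolve to SQS queues (with ServiceMessage set as the message
# class), 'bucket' to S3 buckets, and 'domain' to SimpleDB domains. With
# a hypothetical config section such as:
#
#   [MyService]
#   input_queue = my-input-queue
#   output_bucket = my-output-bucket
#
# the lookups would be:
from boto.services.servicedef import ServiceDef

sd = ServiceDef('my_service.cfg')       # hypothetical config file
queue = sd.get_obj('input_queue')       # boto.sqs Queue object
bucket = sd.get_obj('output_bucket')    # boto.s3 Bucket object
domain = sd.get_obj('output_domain')    # boto.sdb Domain, or None if unset
# --- end sketch ---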
without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from boto.services.service import Service from boto.services.message import ServiceMessage import os import mimetypes class SonOfMMM(Service): def __init__(self, config_file=None): Service.__init__(self, config_file) self.log_file = '%s.log' % self.instance_id self.log_path = os.path.join(self.working_dir, self.log_file) boto.set_file_logger(self.name, self.log_path) if self.sd.has_option('ffmpeg_args'): self.command = '/usr/local/bin/ffmpeg ' + self.sd.get('ffmpeg_args') else: self.command = '/usr/local/bin/ffmpeg -y -i %s %s' self.output_mimetype = self.sd.get('output_mimetype') if self.sd.has_option('output_ext'): self.output_ext = self.sd.get('output_ext') else: self.output_ext = mimetypes.guess_extension(self.output_mimetype) self.output_bucket = self.sd.get_obj('output_bucket') self.input_bucket = self.sd.get_obj('input_bucket') # check to see if there are any messages queue # if not, create messages for all files in input_bucket m = self.input_queue.read(1) if not m: self.queue_files() def queue_files(self): boto.log.info('Queueing files from %s' % self.input_bucket.name) for key in self.input_bucket: boto.log.info('Queueing %s' % key.name) m = ServiceMessage() if self.output_bucket: d = {'OutputBucket' : self.output_bucket.name} else: d = None m.for_key(key, d) self.input_queue.write(m) def process_file(self, in_file_name, msg): base, ext = os.path.splitext(in_file_name) out_file_name = os.path.join(self.working_dir, base+self.output_ext) command = self.command % (in_file_name, out_file_name) boto.log.info('running:\n%s' % command) status = self.run(command) if status == 0: return [(out_file_name, self.output_mimetype)] else: return [] def shutdown(self): if os.path.isfile(self.log_path): if self.output_bucket: key = self.output_bucket.new_key(self.log_file) key.set_contents_from_filename(self.log_path) Service.shutdown(self) boto-2.20.1/boto/services/submit.py000066400000000000000000000067331225267101000172070ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
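# --- Illustrative sketch; not part of the original boto source ---
# How SonOfMMM expands its command template: the ffmpeg_args value in
# sonofmmm.cfg doubles each percent sign (%%s) so that, after config
# interpolation, two %s slots remain for the input and output file
# names. The file paths below are hypothetical.
command_template = '/usr/local/bin/ffmpeg -y -i %s %s'  # the fallback template
command = command_template % ('/mnt/work/in.avi', '/mnt/work/in.mov')
# -> '/usr/local/bin/ffmpeg -y -i /mnt/work/in.avi /mnt/work/in.mov'
# --- end sketch ---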
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import time import os class Submitter: def __init__(self, sd): self.sd = sd self.input_bucket = self.sd.get_obj('input_bucket') self.output_bucket = self.sd.get_obj('output_bucket') self.output_domain = self.sd.get_obj('output_domain') self.queue = self.sd.get_obj('input_queue') def get_key_name(self, fullpath, prefix): key_name = fullpath[len(prefix):] l = key_name.split(os.sep) return '/'.join(l) def write_message(self, key, metadata): if self.queue: m = self.queue.new_message() m.for_key(key, metadata) if self.output_bucket: m['OutputBucket'] = self.output_bucket.name self.queue.write(m) def submit_file(self, path, metadata=None, cb=None, num_cb=0, prefix='/'): if not metadata: metadata = {} key_name = self.get_key_name(path, prefix) k = self.input_bucket.new_key(key_name) k.update_metadata(metadata) k.set_contents_from_filename(path, replace=False, cb=cb, num_cb=num_cb) self.write_message(k, metadata) def submit_path(self, path, tags=None, ignore_dirs=None, cb=None, num_cb=0, status=False, prefix='/'): path = os.path.expanduser(path) path = os.path.expandvars(path) path = os.path.abspath(path) total = 0 metadata = {} if tags: metadata['Tags'] = tags l = [] for t in time.gmtime(): l.append(str(t)) metadata['Batch'] = '_'.join(l) if self.output_domain: self.output_domain.put_attributes(metadata['Batch'], {'type' : 'Batch'}) if os.path.isdir(path): for root, dirs, files in os.walk(path): if ignore_dirs: for ignore in ignore_dirs: if ignore in dirs: dirs.remove(ignore) for file in files: fullpath = os.path.join(root, file) if status: print 'Submitting %s' % fullpath self.submit_file(fullpath, metadata, cb, num_cb, prefix) total += 1 elif os.path.isfile(path): self.submit_file(path, metadata, cb, num_cb) total += 1 else: print 'problem with %s' % path return (metadata['Batch'], total) boto-2.20.1/boto/ses/000077500000000000000000000000001225267101000142705ustar00rootroot00000000000000boto-2.20.1/boto/ses/__init__.py000066400000000000000000000040531225267101000164030ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Harry Marr http://hmarr.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
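# --- Illustrative sketch; not part of the original boto source ---
# Submitting a directory tree for processing with the Submitter class
# above: submit_path() uploads each file to the input_bucket, queues a
# ServiceMessage per file, and returns the generated batch id plus a
# count. The config file, path, and tag are hypothetical.
from boto.services.servicedef import ServiceDef
from boto.services.submit import Submitter

sd = ServiceDef('my_service.cfg')      # hypothetical config file
submitter = Submitter(sd)
batch, count = submitter.submit_path('~/videos', tags='demo', status=True)
print 'submitted %d files in batch %s' % (count, batch)
# --- end sketch ---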
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from connection import SESConnection from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the SES service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` instances """ return [RegionInfo(name='us-east-1', endpoint='email.us-east-1.amazonaws.com', connection_cls=SESConnection)] def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.ses.connection.SESConnection`. :type: str :param region_name: The name of the region to connect to. :rtype: :class:`boto.ses.connection.SESConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/ses/connection.py000066400000000000000000000521701225267101000170060ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Harry Marr http://hmarr.com/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import re import urllib import base64 from boto.connection import AWSAuthConnection from boto.exception import BotoServerError from boto.regioninfo import RegionInfo import boto import boto.jsonresponse from boto.ses import exceptions as ses_exceptions class SESConnection(AWSAuthConnection): ResponseError = BotoServerError DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'email.us-east-1.amazonaws.com' APIVersion = '2010-12-01' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSAuthConnection.__init__(self, self.region.endpoint, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, debug, https_connection_factory, path, security_token=security_token, validate_certs=validate_certs) def _required_auth_capability(self): return ['ses'] def _build_list_params(self, params, items, label): """Add an AWS API-compatible parameter list to a dictionary. 
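# --- Illustrative sketch; not part of the original boto source ---
# Opening an SES connection via the region helper defined above;
# credentials are assumed to come from the usual boto config file or
# environment variables.
import boto.ses

conn = boto.ses.connect_to_region('us-east-1')
quota = conn.get_send_quota()   # parsed GetSendQuotaResponse structure
# --- end sketch ---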
:type params: dict :param params: The parameter dictionary :type items: list :param items: Items to be included in the list :type label: string :param label: The parameter list's name """ if isinstance(items, basestring): items = [items] for i in range(1, len(items) + 1): params['%s.%d' % (label, i)] = items[i - 1] def _make_request(self, action, params=None): """Make a call to the SES API. :type action: string :param action: The API method to use (e.g. SendRawEmail) :type params: dict :param params: Parameters that will be sent as POST data with the API call. """ ct = 'application/x-www-form-urlencoded; charset=UTF-8' headers = {'Content-Type': ct} params = params or {} params['Action'] = action for k, v in params.items(): if isinstance(v, unicode): # UTF-8 encode only if it's Unicode params[k] = v.encode('utf-8') response = super(SESConnection, self).make_request( 'POST', '/', headers=headers, data=urllib.urlencode(params) ) body = response.read() if response.status == 200: list_markers = ('VerifiedEmailAddresses', 'Identities', 'DkimTokens', 'VerificationAttributes', 'SendDataPoints') item_markers = ('member', 'item', 'entry') e = boto.jsonresponse.Element(list_marker=list_markers, item_marker=item_markers) h = boto.jsonresponse.XmlHandler(e, None) h.parse(body) return e else: # HTTP codes other than 200 are considered errors. Go through # some error handling to determine which exception gets raised, self._handle_error(response, body) def _handle_error(self, response, body): """ Handle raising the correct exception, depending on the error. Many errors share the same HTTP response code, meaning we have to get really kludgey and do string searches to figure out what went wrong. """ boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) if "Address blacklisted." in body: # Delivery failures happened frequently enough with the recipient's # email address for Amazon to blacklist it. After a day or three, # they'll be automatically removed, and delivery can be attempted # again (if you write the code to do so in your application). ExceptionToRaise = ses_exceptions.SESAddressBlacklistedError exc_reason = "Address blacklisted." elif "Email address is not verified." in body: # This error happens when the "Reply-To" value passed to # send_email() hasn't been verified yet. ExceptionToRaise = ses_exceptions.SESAddressNotVerifiedError exc_reason = "Email address is not verified." elif "Daily message quota exceeded." in body: # Encountered when your account exceeds the maximum total number # of emails per 24 hours. ExceptionToRaise = ses_exceptions.SESDailyQuotaExceededError exc_reason = "Daily message quota exceeded." elif "Maximum sending rate exceeded." in body: # Your account has sent above its allowed requests a second rate. ExceptionToRaise = ses_exceptions.SESMaxSendingRateExceededError exc_reason = "Maximum sending rate exceeded." elif "Domain ends with dot." in body: # Recipient address ends with a dot/period. This is invalid. ExceptionToRaise = ses_exceptions.SESDomainEndsWithDotError exc_reason = "Domain ends with dot." elif "Local address contains control or whitespace" in body: # I think this pertains to the recipient address. ExceptionToRaise = ses_exceptions.SESLocalAddressCharacterError exc_reason = "Local address contains control or whitespace." elif "Illegal address" in body: # A clearly mal-formed address. 
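# --- Illustrative sketch; not part of the original boto source ---
# What _build_list_params() produces, shown standalone: AWS query APIs
# take repeated values as numbered members, so three recipients
# serialize as Destination.ToAddresses.member.1 through .member.3.
params = {}
items = ['a@example.com', 'b@example.com', 'c@example.com']
label = 'Destination.ToAddresses.member'
for i in range(1, len(items) + 1):
    params['%s.%d' % (label, i)] = items[i - 1]
# params == {'Destination.ToAddresses.member.1': 'a@example.com',
#            'Destination.ToAddresses.member.2': 'b@example.com',
#            'Destination.ToAddresses.member.3': 'c@example.com'}
# --- end sketch ---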
ExceptionToRaise = ses_exceptions.SESIllegalAddressError exc_reason = "Illegal address" # The re.search is to distinguish from the # SESAddressNotVerifiedError error above. elif re.search('Identity.*is not verified', body): ExceptionToRaise = ses_exceptions.SESIdentityNotVerifiedError exc_reason = "Identity is not verified." elif "ownership not confirmed" in body: ExceptionToRaise = ses_exceptions.SESDomainNotConfirmedError exc_reason = "Domain ownership is not confirmed." else: # This is either a common AWS error, or one that we don't devote # its own exception to. ExceptionToRaise = self.ResponseError exc_reason = response.reason raise ExceptionToRaise(response.status, exc_reason, body) def send_email(self, source, subject, body, to_addresses, cc_addresses=None, bcc_addresses=None, format='text', reply_addresses=None, return_path=None, text_body=None, html_body=None): """Composes an email message based on input data, and then immediately queues the message for sending. :type source: string :param source: The sender's email address. :type subject: string :param subject: The subject of the message: A short summary of the content, which will appear in the recipient's inbox. :type body: string :param body: The message body. :type to_addresses: list of strings or string :param to_addresses: The To: field(s) of the message. :type cc_addresses: list of strings or string :param cc_addresses: The CC: field(s) of the message. :type bcc_addresses: list of strings or string :param bcc_addresses: The BCC: field(s) of the message. :type format: string :param format: The format of the message's body, must be either "text" or "html". :type reply_addresses: list of strings or string :param reply_addresses: The reply-to email address(es) for the message. If the recipient replies to the message, each reply-to address will receive the reply. :type return_path: string :param return_path: The email address to which bounce notifications are to be forwarded. If the message cannot be delivered to the recipient, then an error message will be returned from the recipient's ISP; this message will then be forwarded to the email address specified by the ReturnPath parameter. :type text_body: string :param text_body: The text body to send with this email. :type html_body: string :param html_body: The html body to send with this email. 
""" format = format.lower().strip() if body is not None: if format == "text": if text_body is not None: raise Warning("You've passed in both a body and a " "text_body; please choose one or the other.") text_body = body else: if html_body is not None: raise Warning("You've passed in both a body and an " "html_body; please choose one or the other.") html_body = body params = { 'Source': source, 'Message.Subject.Data': subject, } if return_path: params['ReturnPath'] = return_path if html_body is not None: params['Message.Body.Html.Data'] = html_body if text_body is not None: params['Message.Body.Text.Data'] = text_body if(format not in ("text", "html")): raise ValueError("'format' argument must be 'text' or 'html'") if(not (html_body or text_body)): raise ValueError("No text or html body found for mail") self._build_list_params(params, to_addresses, 'Destination.ToAddresses.member') if cc_addresses: self._build_list_params(params, cc_addresses, 'Destination.CcAddresses.member') if bcc_addresses: self._build_list_params(params, bcc_addresses, 'Destination.BccAddresses.member') if reply_addresses: self._build_list_params(params, reply_addresses, 'ReplyToAddresses.member') return self._make_request('SendEmail', params) def send_raw_email(self, raw_message, source=None, destinations=None): """Sends an email message, with header and content specified by the client. The SendRawEmail action is useful for sending multipart MIME emails, with attachments or inline content. The raw text of the message must comply with Internet email standards; otherwise, the message cannot be sent. :type source: string :param source: The sender's email address. Amazon's docs say: If you specify the Source parameter, then bounce notifications and complaints will be sent to this email address. This takes precedence over any Return-Path header that you might include in the raw text of the message. :type raw_message: string :param raw_message: The raw text of the message. The client is responsible for ensuring the following: - Message must contain a header and a body, separated by a blank line. - All required header fields must be present. - Each part of a multipart MIME message must be formatted properly. - MIME content types must be among those supported by Amazon SES. Refer to the Amazon SES Developer Guide for more details. - Content must be base64-encoded, if MIME requires it. :type destinations: list of strings or string :param destinations: A list of destinations for the message. """ if isinstance(raw_message, unicode): raw_message = raw_message.encode('utf-8') params = { 'RawMessage.Data': base64.b64encode(raw_message), } if source: params['Source'] = source if destinations: self._build_list_params(params, destinations, 'Destinations.member') return self._make_request('SendRawEmail', params) def list_verified_email_addresses(self): """Fetch a list of the email addresses that have been verified. :rtype: dict :returns: A ListVerifiedEmailAddressesResponse structure. Note that keys must be unicode strings. """ return self._make_request('ListVerifiedEmailAddresses') def get_send_quota(self): """Fetches the user's current activity limits. :rtype: dict :returns: A GetSendQuotaResponse structure. Note that keys must be unicode strings. """ return self._make_request('GetSendQuota') def get_send_statistics(self): """Fetches the user's sending statistics. The result is a list of data points, representing the last two weeks of sending activity. Each data point in the list contains statistics for a 15-minute interval. 
:rtype: dict :returns: A GetSendStatisticsResponse structure. Note that keys must be unicode strings. """ return self._make_request('GetSendStatistics') def delete_verified_email_address(self, email_address): """Deletes the specified email address from the list of verified addresses. :type email_adddress: string :param email_address: The email address to be removed from the list of verified addreses. :rtype: dict :returns: A DeleteVerifiedEmailAddressResponse structure. Note that keys must be unicode strings. """ return self._make_request('DeleteVerifiedEmailAddress', { 'EmailAddress': email_address, }) def verify_email_address(self, email_address): """Verifies an email address. This action causes a confirmation email message to be sent to the specified address. :type email_adddress: string :param email_address: The email address to be verified. :rtype: dict :returns: A VerifyEmailAddressResponse structure. Note that keys must be unicode strings. """ return self._make_request('VerifyEmailAddress', { 'EmailAddress': email_address, }) def verify_domain_dkim(self, domain): """ Returns a set of DNS records, or tokens, that must be published in the domain name's DNS to complete the DKIM verification process. These tokens are DNS ``CNAME`` records that point to DKIM public keys hosted by Amazon SES. To complete the DKIM verification process, these tokens must be published in the domain's DNS. The tokens must remain published in order for Easy DKIM signing to function correctly. After the tokens are added to the domain's DNS, Amazon SES will be able to DKIM-sign email originating from that domain. To enable or disable Easy DKIM signing for a domain, use the ``SetIdentityDkimEnabled`` action. For more information about Easy DKIM, go to the `Amazon SES Developer Guide `_. :type domain: string :param domain: The domain name. """ return self._make_request('VerifyDomainDkim', { 'Domain': domain, }) def set_identity_dkim_enabled(self, identity, dkim_enabled): """Enables or disables DKIM signing of email sent from an identity. * If Easy DKIM signing is enabled for a domain name identity (e.g., * ``example.com``), then Amazon SES will DKIM-sign all email sent by addresses under that domain name (e.g., ``user@example.com``) * If Easy DKIM signing is enabled for an email address, then Amazon SES will DKIM-sign all email sent by that email address. For email addresses (e.g., ``user@example.com``), you can only enable Easy DKIM signing if the corresponding domain (e.g., ``example.com``) has been set up for Easy DKIM using the AWS Console or the ``VerifyDomainDkim`` action. :type identity: string :param identity: An email address or domain name. :type dkim_enabled: bool :param dkim_enabled: Specifies whether or not to enable DKIM signing. """ return self._make_request('SetIdentityDkimEnabled', { 'Identity': identity, 'DkimEnabled': 'true' if dkim_enabled else 'false' }) def get_identity_dkim_attributes(self, identities): """Get attributes associated with a list of verified identities. Given a list of verified identities (email addresses and/or domains), returns a structure describing identity notification attributes. :type identities: list :param identities: A list of verified identities (email addresses and/or domains). 
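# --- Illustrative sketch; not part of the original boto source ---
# The Easy DKIM flow using the methods above: fetch the CNAME tokens,
# publish them in the domain's DNS, then enable signing. 'example.com'
# is a placeholder domain.
import boto.ses

conn = boto.ses.connect_to_region('us-east-1')
resp = conn.verify_domain_dkim('example.com')
# ...publish the returned DkimTokens as CNAME records in DNS...
conn.set_identity_dkim_enabled('example.com', True)
attrs = conn.get_identity_dkim_attributes(['example.com'])
# --- end sketch ---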
""" params = {} self._build_list_params(params, identities, 'Identities.member') return self._make_request('GetIdentityDkimAttributes', params) def list_identities(self): """Returns a list containing all of the identities (email addresses and domains) for a specific AWS Account, regardless of verification status. :rtype: dict :returns: A ListIdentitiesResponse structure. Note that keys must be unicode strings. """ return self._make_request('ListIdentities') def get_identity_verification_attributes(self, identities): """Given a list of identities (email addresses and/or domains), returns the verification status and (for domain identities) the verification token for each identity. :type identities: list of strings or string :param identities: List of identities. :rtype: dict :returns: A GetIdentityVerificationAttributesResponse structure. Note that keys must be unicode strings. """ params = {} self._build_list_params(params, identities, 'Identities.member') return self._make_request('GetIdentityVerificationAttributes', params) def verify_domain_identity(self, domain): """Verifies a domain. :type domain: string :param domain: The domain to be verified. :rtype: dict :returns: A VerifyDomainIdentityResponse structure. Note that keys must be unicode strings. """ return self._make_request('VerifyDomainIdentity', { 'Domain': domain, }) def verify_email_identity(self, email_address): """Verifies an email address. This action causes a confirmation email message to be sent to the specified address. :type email_adddress: string :param email_address: The email address to be verified. :rtype: dict :returns: A VerifyEmailIdentityResponse structure. Note that keys must be unicode strings. """ return self._make_request('VerifyEmailIdentity', { 'EmailAddress': email_address, }) def delete_identity(self, identity): """Deletes the specified identity (email address or domain) from the list of verified identities. :type identity: string :param identity: The identity to be deleted. :rtype: dict :returns: A DeleteIdentityResponse structure. Note that keys must be unicode strings. """ return self._make_request('DeleteIdentity', { 'Identity': identity, }) boto-2.20.1/boto/ses/exceptions.py000066400000000000000000000034431225267101000170270ustar00rootroot00000000000000""" Various exceptions that are specific to the SES module. """ from boto.exception import BotoServerError class SESError(BotoServerError): """ Sub-class all SES-related errors from here. Don't raise this error directly from anywhere. The only thing this gets us is the ability to catch SESErrors separately from the more generic, top-level BotoServerError exception. """ pass class SESAddressNotVerifiedError(SESError): """ Raised when a "Reply-To" address has not been validated in SES yet. """ pass class SESIdentityNotVerifiedError(SESError): """ Raised when an identity (domain or address) has not been verified in SES yet. """ pass class SESDomainNotConfirmedError(SESError): """ """ pass class SESAddressBlacklistedError(SESError): """ After you attempt to send mail to an address, and delivery repeatedly fails, said address is blacklisted for at least 24 hours. The blacklisting eventually expires, and you are able to attempt delivery again. If you attempt to send mail to a blacklisted email, this is raised. """ pass class SESDailyQuotaExceededError(SESError): """ Your account's daily (rolling 24 hour total) allotment of outbound emails has been exceeded. 
""" pass class SESMaxSendingRateExceededError(SESError): """ Your account's requests/second limit has been exceeded. """ pass class SESDomainEndsWithDotError(SESError): """ Recipient's email address' domain ends with a period/dot. """ pass class SESLocalAddressCharacterError(SESError): """ An address contained a control or whitespace character. """ pass class SESIllegalAddressError(SESError): """ Raised when an illegal address is encountered. """ pass boto-2.20.1/boto/sns/000077500000000000000000000000001225267101000143015ustar00rootroot00000000000000boto-2.20.1/boto/sns/__init__.py000066400000000000000000000066651225267101000164270ustar00rootroot00000000000000# Copyright (c) 2010-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010-2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # this is here for backward compatibility # originally, the SNSConnection class was defined here from connection import SNSConnection from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the SNS service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` instances """ return [RegionInfo(name='us-east-1', endpoint='sns.us-east-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='eu-west-1', endpoint='sns.eu-west-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='us-gov-west-1', endpoint='sns.us-gov-west-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='us-west-1', endpoint='sns.us-west-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='sa-east-1', endpoint='sns.sa-east-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='us-west-2', endpoint='sns.us-west-2.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='ap-northeast-1', endpoint='sns.ap-northeast-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='ap-southeast-1', endpoint='sns.ap-southeast-1.amazonaws.com', connection_cls=SNSConnection), RegionInfo(name='ap-southeast-2', endpoint='sns.ap-southeast-2.amazonaws.com', connection_cls=SNSConnection), ] def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.sns.connection.SNSConnection`. :type: str :param region_name: The name of the region to connect to. 
:rtype: :class:`boto.sns.connection.SNSConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/sns/connection.py000066400000000000000000000744521225267101000170260ustar00rootroot00000000000000# Copyright (c) 2010-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import uuid import hashlib from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.compat import json import boto class SNSConnection(AWSQueryConnection): """ Amazon Simple Notification Service Amazon Simple Notification Service (Amazon SNS) is a web service that enables you to build distributed web-enabled applications. Applications can use Amazon SNS to easily push real-time notification messages to interested subscribers over multiple delivery protocols. For more information about this product see `http://aws.amazon.com/sns`_. For detailed information about Amazon SNS features and their associated API calls, see the `Amazon SNS Developer Guide`_. We also provide SDKs that enable you to access Amazon SNS from your preferred programming language. The SDKs contain functionality that automatically takes care of tasks such as: cryptographically signing your service requests, retrying requests, and handling error responses. For a list of available SDKs, go to `Tools for Amazon Web Services`_. """ DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'sns.us-east-1.amazonaws.com' APIVersion = '2010-03-31' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint, connection_cls=SNSConnection) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token=security_token, validate_certs=validate_certs) def _build_dict_as_list_params(self, params, dictionary, name): """ Serialize a parameter 'name' which value is a 'dictionary' into a list of parameters. 
See: http://docs.aws.amazon.com/sns/latest/api/API_SetPlatformApplicationAttributes.html For example:: dictionary = {'PlatformPrincipal': 'foo', 'PlatformCredential': 'bar'} name = 'Attributes' would result in params dict being populated with: Attributes.entry.1.key = PlatformPrincipal Attributes.entry.1.value = foo Attributes.entry.2.key = PlatformCredential Attributes.entry.2.value = bar :param params: the resulting parameters will be added to this dict :param dictionary: dict - value of the serialized parameter :param name: name of the serialized parameter """ items = sorted(dictionary.items(), key=lambda x:x[0]) for kv, index in zip(items, range(1, len(items)+1)): key, value = kv prefix = '%s.entry.%s' % (name, index) params['%s.key' % prefix] = key params['%s.value' % prefix] = value def _required_auth_capability(self): return ['hmac-v4'] def get_all_topics(self, next_token=None): """ :type next_token: string :param next_token: Token returned by the previous call to this method. """ params = {} if next_token: params['NextToken'] = next_token return self._make_request('ListTopics', params) def get_topic_attributes(self, topic): """ Get attributes of a Topic :type topic: string :param topic: The ARN of the topic. """ params = {'TopicArn': topic} return self._make_request('GetTopicAttributes', params) def set_topic_attributes(self, topic, attr_name, attr_value): """ Get attributes of a Topic :type topic: string :param topic: The ARN of the topic. :type attr_name: string :param attr_name: The name of the attribute you want to set. Only a subset of the topic's attributes are mutable. Valid values: Policy | DisplayName :type attr_value: string :param attr_value: The new value for the attribute. """ params = {'TopicArn': topic, 'AttributeName': attr_name, 'AttributeValue': attr_value} return self._make_request('SetTopicAttributes', params) def add_permission(self, topic, label, account_ids, actions): """ Adds a statement to a topic's access control policy, granting access for the specified AWS accounts to the specified actions. :type topic: string :param topic: The ARN of the topic. :type label: string :param label: A unique identifier for the new policy statement. :type account_ids: list of strings :param account_ids: The AWS account ids of the users who will be give access to the specified actions. :type actions: list of strings :param actions: The actions you want to allow for each of the specified principal(s). """ params = {'TopicArn': topic, 'Label': label} self.build_list_params(params, account_ids, 'AWSAccountId.member') self.build_list_params(params, actions, 'ActionName.member') return self._make_request('AddPermission', params) def remove_permission(self, topic, label): """ Removes a statement from a topic's access control policy. :type topic: string :param topic: The ARN of the topic. :type label: string :param label: A unique identifier for the policy statement to be removed. """ params = {'TopicArn': topic, 'Label': label} return self._make_request('RemovePermission', params) def create_topic(self, topic): """ Create a new Topic. :type topic: string :param topic: The name of the new topic. 
""" params = {'Name': topic} return self._make_request('CreateTopic', params) def delete_topic(self, topic): """ Delete an existing topic :type topic: string :param topic: The ARN of the topic """ params = {'TopicArn': topic} return self._make_request('DeleteTopic', params, '/', 'GET') def publish(self, topic=None, message=None, subject=None, target_arn=None, message_structure=None): """ Get properties of a Topic :type topic: string :param topic: The ARN of the new topic. :type message: string :param message: The message you want to send to the topic. Messages must be UTF-8 encoded strings and be at most 4KB in size. :type message_structure: string :param message_structure: Optional parameter. If left as ``None``, plain text will be sent. If set to ``json``, your message should be a JSON string that matches the structure described at http://docs.aws.amazon.com/sns/latest/dg/PublishTopic.html#sns-message-formatting-by-protocol :type subject: string :param subject: Optional parameter to be used as the "Subject" line of the email notifications. :type target_arn: string :param target_arn: Optional parameter for either TopicArn or EndpointArn, but not both. """ if message is None: # To be backwards compatible when message did not have # a default value and topic and message were required # args. raise TypeError("'message' is a required parameter") params = {'Message': message} if subject is not None: params['Subject'] = subject if topic is not None: params['TopicArn'] = topic if target_arn is not None: params['TargetArn'] = target_arn if message_structure is not None: params['MessageStructure'] = message_structure return self._make_request('Publish', params, '/', 'POST') def subscribe(self, topic, protocol, endpoint): """ Subscribe to a Topic. :type topic: string :param topic: The ARN of the new topic. :type protocol: string :param protocol: The protocol used to communicate with the subscriber. Current choices are: email|email-json|http|https|sqs|sms :type endpoint: string :param endpoint: The location of the endpoint for the subscriber. * For email, this would be a valid email address * For email-json, this would be a valid email address * For http, this would be a URL beginning with http * For https, this would be a URL beginning with https * For sqs, this would be the ARN of an SQS Queue * For sms, this would be a phone number of an SMS-enabled device """ params = {'TopicArn': topic, 'Protocol': protocol, 'Endpoint': endpoint} return self._make_request('Subscribe', params) def subscribe_sqs_queue(self, topic, queue): """ Subscribe an SQS queue to a topic. This is convenience method that handles most of the complexity involved in using an SQS queue as an endpoint for an SNS topic. To achieve this the following operations are performed: * The correct ARN is constructed for the SQS queue and that ARN is then subscribed to the topic. * A JSON policy document is contructed that grants permission to the SNS topic to send messages to the SQS queue. * This JSON policy is then associated with the SQS queue using the queue's set_attribute method. If the queue already has a policy associated with it, this process will add a Statement to that policy. If no policy exists, a new policy will be created. :type topic: string :param topic: The ARN of the new topic. :type queue: A boto Queue object :param queue: The queue you wish to subscribe to the SNS Topic. 
""" t = queue.id.split('/') q_arn = queue.arn sid = hashlib.md5(topic + q_arn).hexdigest() sid_exists = False resp = self.subscribe(topic, 'sqs', q_arn) attr = queue.get_attributes('Policy') if 'Policy' in attr: policy = json.loads(attr['Policy']) else: policy = {} if 'Version' not in policy: policy['Version'] = '2008-10-17' if 'Statement' not in policy: policy['Statement'] = [] # See if a Statement with the Sid exists already. for s in policy['Statement']: if s['Sid'] == sid: sid_exists = True if not sid_exists: statement = {'Action': 'SQS:SendMessage', 'Effect': 'Allow', 'Principal': {'AWS': '*'}, 'Resource': q_arn, 'Sid': sid, 'Condition': {'StringLike': {'aws:SourceArn': topic}}} policy['Statement'].append(statement) queue.set_attribute('Policy', json.dumps(policy)) return resp def confirm_subscription(self, topic, token, authenticate_on_unsubscribe=False): """ Get properties of a Topic :type topic: string :param topic: The ARN of the new topic. :type token: string :param token: Short-lived token sent to and endpoint during the Subscribe operation. :type authenticate_on_unsubscribe: bool :param authenticate_on_unsubscribe: Optional parameter indicating that you wish to disable unauthenticated unsubscription of the subscription. """ params = {'TopicArn': topic, 'Token': token} if authenticate_on_unsubscribe: params['AuthenticateOnUnsubscribe'] = 'true' return self._make_request('ConfirmSubscription', params) def unsubscribe(self, subscription): """ Allows endpoint owner to delete subscription. Confirmation message will be delivered. :type subscription: string :param subscription: The ARN of the subscription to be deleted. """ params = {'SubscriptionArn': subscription} return self._make_request('Unsubscribe', params) def get_all_subscriptions(self, next_token=None): """ Get list of all subscriptions. :type next_token: string :param next_token: Token returned by the previous call to this method. """ params = {} if next_token: params['NextToken'] = next_token return self._make_request('ListSubscriptions', params) def get_all_subscriptions_by_topic(self, topic, next_token=None): """ Get list of all subscriptions to a specific topic. :type topic: string :param topic: The ARN of the topic for which you wish to find subscriptions. :type next_token: string :param next_token: Token returned by the previous call to this method. """ params = {'TopicArn': topic} if next_token: params['NextToken'] = next_token return self._make_request('ListSubscriptionsByTopic', params) def create_platform_application(self, name=None, platform=None, attributes=None): """ The `CreatePlatformApplication` action creates a platform application object for one of the supported push notification services, such as APNS and GCM, to which devices and mobile apps may register. You must specify PlatformPrincipal and PlatformCredential attributes when using the `CreatePlatformApplication` action. The PlatformPrincipal is received from the notification service. For APNS/APNS_SANDBOX, PlatformPrincipal is "SSL certificate". For GCM, PlatformPrincipal is not applicable. For ADM, PlatformPrincipal is "client id". The PlatformCredential is also received from the notification service. For APNS/APNS_SANDBOX, PlatformCredential is "private key". For GCM, PlatformCredential is "API key". For ADM, PlatformCredential is "client secret". The PlatformApplicationArn that is returned when using `CreatePlatformApplication` is then used as an attribute for the `CreatePlatformEndpoint` action. 
For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type name: string :param name: Application names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, hyphens, and periods, and must be between 1 and 256 characters long. :type platform: string :param platform: The following platforms are supported: ADM (Amazon Device Messaging), APNS (Apple Push Notification Service), APNS_SANDBOX, and GCM (Google Cloud Messaging). :type attributes: map :param attributes: For a list of attributes, see `SetPlatformApplicationAttributes`_ """ params = {} if name is not None: params['Name'] = name if platform is not None: params['Platform'] = platform if attributes is not None: self._build_dict_as_list_params(params, attributes, 'Attributes') return self._make_request(action='CreatePlatformApplication', params=params) def set_platform_application_attributes(self, platform_application_arn=None, attributes=None): """ The `SetPlatformApplicationAttributes` action sets the attributes of the platform application object for the supported push notification services, such as APNS and GCM. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type platform_application_arn: string :param platform_application_arn: PlatformApplicationArn for SetPlatformApplicationAttributes action. :type attributes: map :param attributes: A map of the platform application attributes. Attributes in this map include the following: + `PlatformCredential` -- The credential received from the notification service. For APNS/APNS_SANDBOX, PlatformCredential is "private key". For GCM, PlatformCredential is "API key". For ADM, PlatformCredential is "client secret". + `PlatformPrincipal` -- The principal received from the notification service. For APNS/APNS_SANDBOX, PlatformPrincipal is "SSL certificate". For GCM, PlatformPrincipal is not applicable. For ADM, PlatformPrincipal is "client id". + `EventEndpointCreated` -- Topic ARN to which EndpointCreated event notifications should be sent. + `EventEndpointDeleted` -- Topic ARN to which EndpointDeleted event notifications should be sent. + `EventEndpointUpdated` -- Topic ARN to which EndpointUpdate event notifications should be sent. + `EventDeliveryFailure` -- Topic ARN to which DeliveryFailure event notifications should be sent upon Direct Publish delivery failure (permanent) to one of the application's endpoints. """ params = {} if platform_application_arn is not None: params['PlatformApplicationArn'] = platform_application_arn if attributes is not None: self._build_dict_as_list_params(params, attributes, 'Attributes') return self._make_request(action='SetPlatformApplicationAttributes', params=params) def get_platform_application_attributes(self, platform_application_arn=None): """ The `GetPlatformApplicationAttributes` action retrieves the attributes of the platform application object for the supported push notification services, such as APNS and GCM. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type platform_application_arn: string :param platform_application_arn: PlatformApplicationArn for GetPlatformApplicationAttributesInput. 
""" params = {} if platform_application_arn is not None: params['PlatformApplicationArn'] = platform_application_arn return self._make_request(action='GetPlatformApplicationAttributes', params=params) def list_platform_applications(self, next_token=None): """ The `ListPlatformApplications` action lists the platform application objects for the supported push notification services, such as APNS and GCM. The results for `ListPlatformApplications` are paginated and return a limited list of applications, up to 100. If additional records are available after the first page results, then a NextToken string will be returned. To receive the next page, you call `ListPlatformApplications` using the NextToken string received from the previous call. When there are no more records to return, NextToken will be null. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type next_token: string :param next_token: NextToken string is used when calling ListPlatformApplications action to retrieve additional records that are available after the first page results. """ params = {} if next_token is not None: params['NextToken'] = next_token return self._make_request(action='ListPlatformApplications', params=params) def list_endpoints_by_platform_application(self, platform_application_arn=None, next_token=None): """ The `ListEndpointsByPlatformApplication` action lists the endpoints and endpoint attributes for devices in a supported push notification service, such as GCM and APNS. The results for `ListEndpointsByPlatformApplication` are paginated and return a limited list of endpoints, up to 100. If additional records are available after the first page results, then a NextToken string will be returned. To receive the next page, you call `ListEndpointsByPlatformApplication` again using the NextToken string received from the previous call. When there are no more records to return, NextToken will be null. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type platform_application_arn: string :param platform_application_arn: PlatformApplicationArn for ListEndpointsByPlatformApplicationInput action. :type next_token: string :param next_token: NextToken string is used when calling ListEndpointsByPlatformApplication action to retrieve additional records that are available after the first page results. """ params = {} if platform_application_arn is not None: params['PlatformApplicationArn'] = platform_application_arn if next_token is not None: params['NextToken'] = next_token return self._make_request(action='ListEndpointsByPlatformApplication', params=params) def delete_platform_application(self, platform_application_arn=None): """ The `DeletePlatformApplication` action deletes a platform application object for one of the supported push notification services, such as APNS and GCM. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type platform_application_arn: string :param platform_application_arn: PlatformApplicationArn of platform application object to delete. """ params = {} if platform_application_arn is not None: params['PlatformApplicationArn'] = platform_application_arn return self._make_request(action='DeletePlatformApplication', params=params) def create_platform_endpoint(self, platform_application_arn=None, token=None, custom_user_data=None, attributes=None): """ The `CreatePlatformEndpoint` creates an endpoint for a device and mobile app on one of the supported push notification services, such as GCM and APNS. 
`CreatePlatformEndpoint` requires the PlatformApplicationArn that is returned from `CreatePlatformApplication`. The EndpointArn that is returned when using `CreatePlatformEndpoint` can then be used by the `Publish` action to send a message to a mobile app or by the `Subscribe` action for subscription to a topic. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type platform_application_arn: string :param platform_application_arn: PlatformApplicationArn returned from CreatePlatformApplication is used to create an endpoint. :type token: string :param token: Unique identifier created by the notification service for an app on a device. The specific name for Token will vary, depending on which notification service is being used. For example, when using APNS as the notification service, you need the device token. Alternatively, when using GCM or ADM, the device token equivalent is called the registration ID. :type custom_user_data: string :param custom_user_data: Arbitrary user data to associate with the endpoint. SNS does not use this data. The data must be in UTF-8 format and less than 2KB. :type attributes: map :param attributes: For a list of attributes, see `SetEndpointAttributes`_. """ params = {} if platform_application_arn is not None: params['PlatformApplicationArn'] = platform_application_arn if token is not None: params['Token'] = token if custom_user_data is not None: params['CustomUserData'] = custom_user_data if attributes is not None: self._build_dict_as_list_params(params, attributes, 'Attributes') return self._make_request(action='CreatePlatformEndpoint', params=params) def delete_endpoint(self, endpoint_arn=None): """ The `DeleteEndpoint` action, which is idempotent, deletes the endpoint from SNS. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type endpoint_arn: string :param endpoint_arn: EndpointArn of endpoint to delete. """ params = {} if endpoint_arn is not None: params['EndpointArn'] = endpoint_arn return self._make_request(action='DeleteEndpoint', params=params) def set_endpoint_attributes(self, endpoint_arn=None, attributes=None): """ The `SetEndpointAttributes` action sets the attributes for an endpoint for a device on one of the supported push notification services, such as GCM and APNS. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type endpoint_arn: string :param endpoint_arn: EndpointArn used for SetEndpointAttributes action. :type attributes: map :param attributes: A map of the endpoint attributes. Attributes in this map include the following: + `CustomUserData` -- arbitrary user data to associate with the endpoint. SNS does not use this data. The data must be in UTF-8 format and less than 2KB. + `Enabled` -- flag that enables/disables delivery to the endpoint. Message Processor will set this to false when a notification service indicates to SNS that the endpoint is invalid. Users can set it back to true, typically after updating Token. + `Token` -- device token, also referred to as a registration id, for an app and mobile device. This is returned from the notification service when an app and mobile device are registered with the notification service.
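Example (illustrative only -- the endpoint ARN and token below are
        placeholders)::

            import boto

            conn = boto.connect_sns()
            # Re-enable delivery to an endpoint after refreshing its
            # device token.
            conn.set_endpoint_attributes(
                endpoint_arn='arn:aws:sns:us-east-1:123456789012:'
                             'endpoint/GCM/MyApp/0f7d5e22-example',
                attributes={'Enabled': 'true', 'Token': 'NEW_DEVICE_TOKEN'})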
""" params = {} if endpoint_arn is not None: params['EndpointArn'] = endpoint_arn if attributes is not None: self._build_dict_as_list_params(params, attributes, 'Attributes') return self._make_request(action='SetEndpointAttributes', params=params) def get_endpoint_attributes(self, endpoint_arn=None): """ The `GetEndpointAttributes` retrieves the endpoint attributes for a device on one of the supported push notification services, such as GCM and APNS. For more information, see `Using Amazon SNS Mobile Push Notifications`_. :type endpoint_arn: string :param endpoint_arn: EndpointArn for GetEndpointAttributes input. """ params = {} if endpoint_arn is not None: params['EndpointArn'] = endpoint_arn return self._make_request(action='GetEndpointAttributes', params=params) def _make_request(self, action, params, path='/', verb='GET'): params['ContentType'] = 'JSON' response = self.make_request(action=action, verb=verb, path=path, params=params) body = response.read() boto.log.debug(body) if response.status == 200: return json.loads(body) else: boto.log.error('%s %s' % (response.status, response.reason)) boto.log.error('%s' % body) raise self.ResponseError(response.status, response.reason, body) boto-2.20.1/boto/sqs/000077500000000000000000000000001225267101000143045ustar00rootroot00000000000000boto-2.20.1/boto/sqs/__init__.py000066400000000000000000000047621225267101000164260ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from regioninfo import SQSRegionInfo def regions(): """ Get all available regions for the SQS service. 
:rtype: list :return: A list of :class:`boto.sqs.regioninfo.RegionInfo` """ return [SQSRegionInfo(name='us-east-1', endpoint='queue.amazonaws.com'), SQSRegionInfo(name='us-gov-west-1', endpoint='sqs.us-gov-west-1.amazonaws.com'), SQSRegionInfo(name='eu-west-1', endpoint='eu-west-1.queue.amazonaws.com'), SQSRegionInfo(name='us-west-1', endpoint='us-west-1.queue.amazonaws.com'), SQSRegionInfo(name='us-west-2', endpoint='us-west-2.queue.amazonaws.com'), SQSRegionInfo(name='sa-east-1', endpoint='sa-east-1.queue.amazonaws.com'), SQSRegionInfo(name='ap-northeast-1', endpoint='ap-northeast-1.queue.amazonaws.com'), SQSRegionInfo(name='ap-southeast-1', endpoint='ap-southeast-1.queue.amazonaws.com'), SQSRegionInfo(name='ap-southeast-2', endpoint='ap-southeast-2.queue.amazonaws.com') ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/sqs/attributes.py000066400000000000000000000032661225267101000170530ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents an SQS Attribute Name/Value set """ class Attributes(dict): def __init__(self, parent): self.parent = parent self.current_key = None self.current_value = None def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Attribute': self[self.current_key] = self.current_value elif name == 'Name': self.current_key = value elif name == 'Value': self.current_value = value else: setattr(self, name, value) boto-2.20.1/boto/sqs/batchresults.py000066400000000000000000000066731225267101000173750ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011 Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ A set of results returned by SendMessageBatch. """ class ResultEntry(dict): """ The result (successful or unsuccessful) of a single message within a send_message_batch request. In the case of a successful result, this dict-like object will contain the following items: :ivar id: A string containing the user-supplied ID of the message. :ivar message_id: A string containing the SQS ID of the new message. :ivar message_md5: A string containing the MD5 hash of the message body. In the case of an error, this object will contain the following items: :ivar id: A string containing the user-supplied ID of the message. :ivar sender_fault: A boolean value. :ivar error_code: A string containing a short description of the error. :ivar error_message: A string containing a description of the error. """ def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'Id': self['id'] = value elif name == 'MessageId': self['message_id'] = value elif name == 'MD5OfMessageBody': self['message_md5'] = value elif name == 'SenderFault': self['sender_fault'] = value elif name == 'Code': self['error_code'] = value elif name == 'Message': self['error_message'] = value class BatchResults(object): """ A container for the results of a send_message_batch request. :ivar results: A list of successful results. Each item in the list will be an instance of :class:`ResultEntry`. :ivar errors: A list of unsuccessful results. Each item in the list will be an instance of :class:`ResultEntry`. """ def __init__(self, parent): self.parent = parent self.results = [] self.errors = [] def startElement(self, name, attrs, connection): if name.endswith('MessageBatchResultEntry'): entry = ResultEntry() self.results.append(entry) return entry if name == 'BatchResultErrorEntry': entry = ResultEntry() self.errors.append(entry) return entry return None def endElement(self, name, value, connection): setattr(self, name, value) boto-2.20.1/boto/sqs/connection.py000066400000000000000000000405351225267101000170240ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.connection import AWSQueryConnection from boto.sqs.regioninfo import SQSRegionInfo from boto.sqs.queue import Queue from boto.sqs.message import Message from boto.sqs.attributes import Attributes from boto.sqs.batchresults import BatchResults from boto.exception import SQSError, BotoServerError class SQSConnection(AWSQueryConnection): """ A Connection to the SQS Service. """ DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'queue.amazonaws.com' APIVersion = '2012-11-05' DefaultContentType = 'text/plain' ResponseError = SQSError AuthServiceName = 'sqs' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', security_token=None, validate_certs=True): if not region: region = SQSRegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) self.region = region AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, security_token=security_token, validate_certs=validate_certs) self.auth_region_name = self.region.name def _required_auth_capability(self): return ['hmac-v4'] def create_queue(self, queue_name, visibility_timeout=None): """ Create an SQS Queue. :type queue_name: str or unicode :param queue_name: The name of the new queue. Names are scoped to an account and need to be unique within that account. Calling this method on an existing queue name will not return an error from SQS unless the value for visibility_timeout is different than the value of the existing queue of that name. This is still an expensive operation, though, and not the preferred way to check for the existence of a queue. See the :func:`boto.sqs.connection.SQSConnection.lookup` method. :type visibility_timeout: int :param visibility_timeout: The default visibility timeout for all messages written in the queue. This can be overridden on a per-message basis. :rtype: :class:`boto.sqs.queue.Queue` :return: The newly created queue. """ params = {'QueueName': queue_name} if visibility_timeout: params['Attribute.1.Name'] = 'VisibilityTimeout' params['Attribute.1.Value'] = int(visibility_timeout) return self.get_object('CreateQueue', params, Queue) def delete_queue(self, queue, force_deletion=False): """ Delete an SQS Queue. :type queue: A Queue object :param queue: The SQS queue to be deleted :type force_deletion: Boolean :param force_deletion: A deprecated parameter that is no longer used by SQS's API. :rtype: bool :return: True if the command succeeded, False otherwise """ return self.get_status('DeleteQueue', None, queue.id) def get_queue_attributes(self, queue, attribute='All'): """ Gets one or all attributes of a Queue. :type queue: A Queue object :param queue: The SQS queue whose attributes are being retrieved :type attribute: str :param attribute: The specific attribute requested. If not supplied, the default is to return all attributes.
Valid attributes are: * ApproximateNumberOfMessages * ApproximateNumberOfMessagesNotVisible * VisibilityTimeout * CreatedTimestamp * LastModifiedTimestamp * Policy * ReceiveMessageWaitTimeSeconds :rtype: :class:`boto.sqs.attributes.Attributes` :return: An Attributes object containing request value(s). """ params = {'AttributeName' : attribute} return self.get_object('GetQueueAttributes', params, Attributes, queue.id) def set_queue_attribute(self, queue, attribute, value): """ Set a new value for an attribute on a Queue. :type queue: A Queue object :param queue: The SQS queue whose attribute is being set :type attribute: str :param attribute: The name of the attribute to set, e.g. VisibilityTimeout :param value: The new value for the attribute """ params = {'Attribute.Name' : attribute, 'Attribute.Value' : value} return self.get_status('SetQueueAttributes', params, queue.id) def receive_message(self, queue, number_messages=1, visibility_timeout=None, attributes=None, wait_time_seconds=None): """ Read messages from an SQS Queue. :type queue: A Queue object :param queue: The Queue from which messages are read. :type number_messages: int :param number_messages: The maximum number of messages to read (default=1) :type visibility_timeout: int :param visibility_timeout: The number of seconds the message should remain invisible to other queue readers (default=None which uses the Queue's default) :type attributes: str :param attributes: The name of an additional attribute to return with the response, or All if you want all attributes. The default is to return no additional attributes. Valid values: * All * SenderId * SentTimestamp * ApproximateReceiveCount * ApproximateFirstReceiveTimestamp :type wait_time_seconds: int :param wait_time_seconds: The duration (in seconds) for which the call will wait for a message to arrive in the queue before returning. If a message is available, the call will return sooner than wait_time_seconds. :rtype: list :return: A list of :class:`boto.sqs.message.Message` objects. """ params = {'MaxNumberOfMessages' : number_messages} if visibility_timeout is not None: params['VisibilityTimeout'] = visibility_timeout if attributes is not None: self.build_list_params(params, attributes, 'AttributeName') if wait_time_seconds is not None: params['WaitTimeSeconds'] = wait_time_seconds return self.get_list('ReceiveMessage', params, [('Message', queue.message_class)], queue.id, queue) def delete_message(self, queue, message): """ Delete a message from a queue. :type queue: A :class:`boto.sqs.queue.Queue` object :param queue: The Queue from which messages are read. :type message: A :class:`boto.sqs.message.Message` object :param message: The Message to be deleted :rtype: bool :return: True if successful, False otherwise. """ params = {'ReceiptHandle' : message.receipt_handle} return self.get_status('DeleteMessage', params, queue.id) def delete_message_batch(self, queue, messages): """ Deletes a list of messages from a queue in a single request. :type queue: A :class:`boto.sqs.queue.Queue` object. :param queue: The Queue from which the messages will be deleted. :type messages: List of :class:`boto.sqs.message.Message` objects. :param messages: A list of message objects. """ params = {} for i, msg in enumerate(messages): prefix = 'DeleteMessageBatchRequestEntry' p_name = '%s.%i.Id' % (prefix, (i+1)) params[p_name] = msg.id p_name = '%s.%i.ReceiptHandle' % (prefix, (i+1)) params[p_name] = msg.receipt_handle return self.get_object('DeleteMessageBatch', params, BatchResults, queue.id, verb='POST') def delete_message_from_handle(self, queue, receipt_handle): """ Delete a message from a queue, given a receipt handle. :type queue: A :class:`boto.sqs.queue.Queue` object :param queue: The Queue from which messages are read.
:type receipt_handle: str :param receipt_handle: The receipt handle for the message :rtype: bool :return: True if successful, False otherwise. """ params = {'ReceiptHandle' : receipt_handle} return self.get_status('DeleteMessage', params, queue.id) def send_message(self, queue, message_content, delay_seconds=None): """ Send a message to a queue. :type queue: A :class:`boto.sqs.queue.Queue` object. :param queue: The Queue to which the message will be written. :type message_content: string :param message_content: The body of the message. :type delay_seconds: int :param delay_seconds: Number of seconds (0 - 900) to delay delivery of this message. """ params = {'MessageBody' : message_content} if delay_seconds: params['DelaySeconds'] = int(delay_seconds) return self.get_object('SendMessage', params, Message, queue.id, verb='POST') def send_message_batch(self, queue, messages): """ Delivers up to 10 messages to a queue in a single request. :type queue: A :class:`boto.sqs.queue.Queue` object. :param queue: The Queue to which the messages will be written. :type messages: List of lists. :param messages: A list of lists or tuples. Each inner tuple represents a single message to be written and consists of an ID (string) that must be unique within the list of messages, the message body itself which can be a maximum of 64K in length, and an integer which represents the delay time (in seconds) for the message (0-900) before the message will be delivered to the queue. """ params = {} for i, msg in enumerate(messages): p_name = 'SendMessageBatchRequestEntry.%i.Id' % (i+1) params[p_name] = msg[0] p_name = 'SendMessageBatchRequestEntry.%i.MessageBody' % (i+1) params[p_name] = msg[1] p_name = 'SendMessageBatchRequestEntry.%i.DelaySeconds' % (i+1) params[p_name] = msg[2] return self.get_object('SendMessageBatch', params, BatchResults, queue.id, verb='POST') def change_message_visibility(self, queue, receipt_handle, visibility_timeout): """ Extends the read lock timeout for the specified message from the specified queue to the specified value. :type queue: A :class:`boto.sqs.queue.Queue` object :param queue: The Queue from which messages are read. :type receipt_handle: str :param receipt_handle: The receipt handle associated with the message whose visibility timeout will be changed. :type visibility_timeout: int :param visibility_timeout: The new value of the message's visibility timeout in seconds. """ params = {'ReceiptHandle' : receipt_handle, 'VisibilityTimeout' : visibility_timeout} return self.get_status('ChangeMessageVisibility', params, queue.id) def change_message_visibility_batch(self, queue, messages): """ A batch version of change_message_visibility that can act on up to 10 messages at a time. :type queue: A :class:`boto.sqs.queue.Queue` object. :param queue: The Queue to which the messages belong. :type messages: List of tuples. :param messages: A list of tuples where each tuple consists of a :class:`boto.sqs.message.Message` object and an integer that represents the new visibility timeout for that message. """ params = {} for i, t in enumerate(messages): prefix = 'ChangeMessageVisibilityBatchRequestEntry' p_name = '%s.%i.Id' % (prefix, (i+1)) params[p_name] = t[0].id p_name = '%s.%i.ReceiptHandle' % (prefix, (i+1)) params[p_name] = t[0].receipt_handle p_name = '%s.%i.VisibilityTimeout' % (prefix, (i+1)) params[p_name] = t[1] return self.get_object('ChangeMessageVisibilityBatch', params, BatchResults, queue.id, verb='POST') def get_all_queues(self, prefix=''): """ Retrieves all queues. :keyword str prefix: Optionally, only return queues that start with this value. :rtype: list :returns: A list of :py:class:`boto.sqs.queue.Queue` instances.
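Example (a brief sketch; the queue name prefix is illustrative)::

            import boto

            conn = boto.connect_sqs()
            # List every queue in the account whose name starts with 'prod-'.
            for q in conn.get_all_queues(prefix='prod-'):
                print q.name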
""" params = {} if prefix: params['QueueNamePrefix'] = prefix return self.get_list('ListQueues', params, [('QueueUrl', Queue)]) def get_queue(self, queue_name, owner_acct_id=None): """ Retrieves the queue with the given name, or ``None`` if no match was found. :param str queue_name: The name of the queue to retrieve. :param str owner_acct_id: Optionally, the AWS account ID of the account that created the queue. :rtype: :py:class:`boto.sqs.queue.Queue` or ``None`` :returns: The requested queue, or ``None`` if no match was found. """ params = {'QueueName': queue_name} if owner_acct_id: params['QueueOwnerAWSAccountId']=owner_acct_id try: return self.get_object('GetQueueUrl', params, Queue) except SQSError: return None lookup = get_queue # # Permissions methods # def add_permission(self, queue, label, aws_account_id, action_name): """ Add a permission to a queue. :type queue: :class:`boto.sqs.queue.Queue` :param queue: The queue object :type label: str or unicode :param label: A unique identification of the permission you are setting. Maximum of 80 characters ``[0-9a-zA-Z_-]`` Example, AliceSendMessage :type aws_account_id: str or unicode :param principal_id: The AWS account number of the principal who will be given permission. The principal must have an AWS account, but does not need to be signed up for Amazon SQS. For information about locating the AWS account identification. :type action_name: str or unicode :param action_name: The action. Valid choices are: * * * SendMessage * ReceiveMessage * DeleteMessage * ChangeMessageVisibility * GetQueueAttributes :rtype: bool :return: True if successful, False otherwise. """ params = {'Label': label, 'AWSAccountId' : aws_account_id, 'ActionName' : action_name} return self.get_status('AddPermission', params, queue.id) def remove_permission(self, queue, label): """ Remove a permission from a queue. :type queue: :class:`boto.sqs.queue.Queue` :param queue: The queue object :type label: str or unicode :param label: The unique label associated with the permission being removed. :rtype: bool :return: True if successful, False otherwise. """ params = {'Label': label} return self.get_status('RemovePermission', params, queue.id) boto-2.20.1/boto/sqs/jsonmessage.py000066400000000000000000000032331225267101000171750ustar00rootroot00000000000000# Copyright (c) 2006-2008 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
import base64 from boto.sqs.message import MHMessage from boto.exception import SQSDecodeError from boto.compat import json class JSONMessage(MHMessage): """ Acts like a dictionary but encodes its data as a Base64 encoded JSON payload. """ def decode(self, value): try: value = base64.b64decode(value) value = json.loads(value) except: raise SQSDecodeError('Unable to decode message', self) return value def encode(self, value): value = json.dumps(value) return base64.b64encode(value) boto-2.20.1/boto/sqs/message.py000066400000000000000000000221231225267101000163020ustar00rootroot00000000000000# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ SQS Message A Message represents the data stored in an SQS queue. The rules for what is allowed within an SQS Message are here: http://docs.amazonwebservices.com/AWSSimpleQueueService/2008-01-01/SQSDeveloperGuide/Query_QuerySendMessage.html So, at its simplest level, a Message just needs to allow a developer to store bytes in it and get the bytes back out. However, to allow messages to have richer semantics, the Message class must support the following interfaces: The constructor for the Message class must accept a keyword parameter "queue" which is an instance of a boto Queue object and represents the queue that the message will be stored in. The default value for this parameter is None. The constructor for the Message class must accept a keyword parameter "body" which represents the content or body of the message. The format of this parameter will depend on the behavior of the particular Message subclass. For example, if the Message subclass provides dictionary-like behavior to the user the body passed to the constructor should be a dict-like object that can be used to populate the initial state of the message. The Message class must provide an encode method that accepts a value of the same type as the body parameter of the constructor and returns a string of characters that are able to be stored in an SQS message body (see rules above). The Message class must provide a decode method that accepts a string of characters that can be stored (and probably were stored!) in an SQS message and return an object of a type that is consistent with the "body" parameter accepted on the class constructor. The Message class must provide a __len__ method that will return the size of the encoded message that would be stored in SQS based on the current state of the Message object.
The Message class must provide a get_body method that will return the body of the message in the same format accepted in the constructor of the class. The Message class must provide a set_body method that accepts a message body in the same format accepted by the constructor of the class. This method should alter the internal state of the Message object to reflect the state represented in the message body parameter. The Message class must provide a get_body_encoded method that returns the current body of the message in the format in which it would be stored in SQS. """ import base64 import StringIO from boto.sqs.attributes import Attributes from boto.exception import SQSDecodeError import boto class RawMessage: """ Base class for SQS messages. RawMessage does not encode the message in any way. Whatever you store in the body of the message is what will be written to SQS and whatever is returned from SQS is stored directly into the body of the message. """ def __init__(self, queue=None, body=''): self.queue = queue self.set_body(body) self.id = None self.receipt_handle = None self.md5 = None self.attributes = Attributes(self) def __len__(self): return len(self.encode(self._body)) def startElement(self, name, attrs, connection): if name == 'Attribute': return self.attributes return None def endElement(self, name, value, connection): if name == 'Body': self.set_body(value) elif name == 'MessageId': self.id = value elif name == 'ReceiptHandle': self.receipt_handle = value elif name == 'MD5OfMessageBody': self.md5 = value else: setattr(self, name, value) def endNode(self, connection): self.set_body(self.decode(self.get_body())) def encode(self, value): """Transform body object into serialized byte array format.""" return value def decode(self, value): """Transform serialized byte array into any object.""" return value def set_body(self, body): """Override the current body for this object, using decoded format.""" self._body = body def get_body(self): return self._body def get_body_encoded(self): """ This method is really a semi-private method used by the Queue.write method when writing the contents of the message to SQS. You probably shouldn't need to call this method in the normal course of events. """ return self.encode(self.get_body()) def delete(self): if self.queue: return self.queue.delete_message(self) def change_visibility(self, visibility_timeout): if self.queue: self.queue.connection.change_message_visibility(self.queue, self.receipt_handle, visibility_timeout) class Message(RawMessage): """ The default Message class used for SQS queues. This class automatically encodes/decodes the message body using Base64 encoding to avoid any illegal characters in the message body. See: https://forums.aws.amazon.com/thread.jspa?threadID=13067 for details on why this is a good idea. The encode/decode is meant to be transparent to the end-user. """ def encode(self, value): return base64.b64encode(value) def decode(self, value): try: value = base64.b64decode(value) except: boto.log.warning('Unable to decode message') return value return value class MHMessage(Message): """ The MHMessage class provides a message that provides RFC821-like headers like this: HeaderName: HeaderValue The encoding/decoding of this is handled automatically and after the message body has been read, the message instance can be treated like a mapping object, i.e. m['HeaderName'] would return 'HeaderValue'.
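Example (a minimal sketch; the queue name is a placeholder)::

        import boto
        from boto.sqs.message import MHMessage

        conn = boto.connect_sqs()
        queue = conn.get_queue('my-queue')
        queue.set_message_class(MHMessage)
        m = MHMessage(body={'Subject': 'hello', 'Priority': 'high'})
        queue.write(m)
        msg = queue.read()
        if msg is not None:
            print msg['Subject']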
""" def __init__(self, queue=None, body=None, xml_attrs=None): if body == None or body == '': body = {} Message.__init__(self, queue, body) def decode(self, value): try: msg = {} fp = StringIO.StringIO(value) line = fp.readline() while line: delim = line.find(':') key = line[0:delim] value = line[delim+1:].strip() msg[key.strip()] = value.strip() line = fp.readline() except: raise SQSDecodeError('Unable to decode message', self) return msg def encode(self, value): s = '' for item in value.items(): s = s + '%s: %s\n' % (item[0], item[1]) return s def __contains__(self, key): return key in self._body def __getitem__(self, key): if key in self._body: return self._body[key] else: raise KeyError(key) def __setitem__(self, key, value): self._body[key] = value self.set_body(self._body) def keys(self): return self._body.keys() def values(self): return self._body.values() def items(self): return self._body.items() def has_key(self, key): return key in self._body def update(self, d): self._body.update(d) self.set_body(self._body) def get(self, key, default=None): return self._body.get(key, default) class EncodedMHMessage(MHMessage): """ The EncodedMHMessage class provides a message that provides RFC821-like headers like this: HeaderName: HeaderValue This variation encodes/decodes the body of the message in base64 automatically. The message instance can be treated like a mapping object, i.e. m['HeaderName'] would return 'HeaderValue'. """ def decode(self, value): try: value = base64.b64decode(value) except: raise SQSDecodeError('Unable to decode message', self) return MHMessage.decode(self, value) def encode(self, value): value = MHMessage.encode(self, value) return base64.b64encode(value) boto-2.20.1/boto/sqs/queue.py000066400000000000000000000410011225267101000157760ustar00rootroot00000000000000# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Represents an SQS Queue """ import urlparse from boto.sqs.message import Message class Queue: def __init__(self, connection=None, url=None, message_class=Message): self.connection = connection self.url = url self.message_class = message_class self.visibility_timeout = None def __repr__(self): return 'Queue(%s)' % self.url def _id(self): if self.url: val = urlparse.urlparse(self.url)[2] else: val = self.url return val id = property(_id) def _name(self): if self.url: val = urlparse.urlparse(self.url)[2].split('/')[2] else: val = self.url return val name = property(_name) def _arn(self): parts = self.id.split('/') return 'arn:aws:sqs:%s:%s:%s' % ( self.connection.region.name, parts[1], parts[2]) arn = property(_arn) def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'QueueUrl': self.url = value elif name == 'VisibilityTimeout': self.visibility_timeout = int(value) else: setattr(self, name, value) def set_message_class(self, message_class): """ Set the message class that should be used when instantiating messages read from the queue. By default, the class :class:`boto.sqs.message.Message` is used but this can be overriden with any class that behaves like a message. :type message_class: Message-like class :param message_class: The new Message class """ self.message_class = message_class def get_attributes(self, attributes='All'): """ Retrieves attributes about this queue object and returns them in an Attribute instance (subclass of a Dictionary). :type attributes: string :param attributes: String containing one of: ApproximateNumberOfMessages, ApproximateNumberOfMessagesNotVisible, VisibilityTimeout, CreatedTimestamp, LastModifiedTimestamp, Policy ReceiveMessageWaitTimeSeconds :rtype: Attribute object :return: An Attribute object which is a mapping type holding the requested name/value pairs """ return self.connection.get_queue_attributes(self, attributes) def set_attribute(self, attribute, value): """ Set a new value for an attribute of the Queue. :type attribute: String :param attribute: The name of the attribute you want to set. The only valid value at this time is: VisibilityTimeout :type value: int :param value: The new value for the attribute. For VisibilityTimeout the value must be an integer number of seconds from 0 to 86400. :rtype: bool :return: True if successful, otherwise False. """ return self.connection.set_queue_attribute(self, attribute, value) def get_timeout(self): """ Get the visibility timeout for the queue. :rtype: int :return: The number of seconds as an integer. """ a = self.get_attributes('VisibilityTimeout') return int(a['VisibilityTimeout']) def set_timeout(self, visibility_timeout): """ Set the visibility timeout for the queue. :type visibility_timeout: int :param visibility_timeout: The desired timeout in seconds """ retval = self.set_attribute('VisibilityTimeout', visibility_timeout) if retval: self.visibility_timeout = visibility_timeout return retval def add_permission(self, label, aws_account_id, action_name): """ Add a permission to a queue. :type label: str or unicode :param label: A unique identification of the permission you are setting. Maximum of 80 characters ``[0-9a-zA-Z_-]`` Example, AliceSendMessage :type aws_account_id: str or unicode :param principal_id: The AWS account number of the principal who will be given permission. The principal must have an AWS account, but does not need to be signed up for Amazon SQS. For information about locating the AWS account identification. 
:type action_name: str or unicode :param action_name: The action. Valid choices are: SendMessage|ReceiveMessage|DeleteMessage| ChangeMessageVisibility|GetQueueAttributes|* :rtype: bool :return: True if successful, False otherwise. """ return self.connection.add_permission(self, label, aws_account_id, action_name) def remove_permission(self, label): """ Remove a permission from a queue. :type label: str or unicode :param label: The unique label associated with the permission being removed. :rtype: bool :return: True if successful, False otherwise. """ return self.connection.remove_permission(self, label) def read(self, visibility_timeout=None, wait_time_seconds=None): """ Read a single message from the queue. :type visibility_timeout: int :param visibility_timeout: The timeout for this message in seconds :type wait_time_seconds: int :param wait_time_seconds: The duration (in seconds) for which the call will wait for a message to arrive in the queue before returning. If a message is available, the call will return sooner than wait_time_seconds. :rtype: :class:`boto.sqs.message.Message` :return: A single message or None if queue is empty """ rs = self.get_messages(1, visibility_timeout, wait_time_seconds=wait_time_seconds) if len(rs) == 1: return rs[0] else: return None def write(self, message, delay_seconds=None): """ Add a single message to the queue. :type message: Message :param message: The message to be written to the queue :rtype: :class:`boto.sqs.message.Message` :return: The :class:`boto.sqs.message.Message` object that was written. """ new_msg = self.connection.send_message(self, message.get_body_encoded(), delay_seconds) message.id = new_msg.id message.md5 = new_msg.md5 return message def write_batch(self, messages): """ Delivers up to 10 messages in a single request. :type messages: List of lists. :param messages: A list of lists or tuples. Each inner tuple represents a single message to be written and consists of an ID (string) that must be unique within the list of messages, the message body itself which can be a maximum of 64K in length, and an integer which represents the delay time (in seconds) for the message (0-900) before the message will be delivered to the queue. """ return self.connection.send_message_batch(self, messages) def new_message(self, body=''): """ Create new message of appropriate class. :type body: message body :param body: The body of the newly created message (optional). :rtype: :class:`boto.sqs.message.Message` :return: A new Message object """ m = self.message_class(self, body) m.queue = self return m # get a variable number of messages, returns a list of messages def get_messages(self, num_messages=1, visibility_timeout=None, attributes=None, wait_time_seconds=None): """ Get a variable number of messages. :type num_messages: int :param num_messages: The maximum number of messages to read from the queue. :type visibility_timeout: int :param visibility_timeout: The VisibilityTimeout for the messages read. :type attributes: str :param attributes: The name of an additional attribute to return with the response, or All if you want all attributes. The default is to return no additional attributes. Valid values: All SenderId SentTimestamp ApproximateReceiveCount ApproximateFirstReceiveTimestamp :type wait_time_seconds: int :param wait_time_seconds: The duration (in seconds) for which the call will wait for a message to arrive in the queue before returning. If a message is available, the call will return sooner than wait_time_seconds.
:rtype: list :return: A list of :class:`boto.sqs.message.Message` objects. """ return self.connection.receive_message( self, number_messages=num_messages, visibility_timeout=visibility_timeout, attributes=attributes, wait_time_seconds=wait_time_seconds) def delete_message(self, message): """ Delete a message from the queue. :type message: :class:`boto.sqs.message.Message` :param message: The :class:`boto.sqs.message.Message` object to delete. :rtype: bool :return: True if successful, False otherwise """ return self.connection.delete_message(self, message) def delete_message_batch(self, messages): """ Deletes a list of messages in a single request. :type messages: List of :class:`boto.sqs.message.Message` objects. :param messages: A list of message objects. """ return self.connection.delete_message_batch(self, messages) def change_message_visibility_batch(self, messages): """ A batch version of change_message_visibility that can act on up to 10 messages at a time. :type messages: List of tuples. :param messages: A list of tuples where each tuple consists of a :class:`boto.sqs.message.Message` object and an integer that represents the new visibility timeout for that message. """ return self.connection.change_message_visibility_batch(self, messages) def delete(self): """ Delete the queue. """ return self.connection.delete_queue(self) def clear(self, page_size=10, vtimeout=10): """Utility function to remove all messages from a queue""" n = 0 l = self.get_messages(page_size, vtimeout) while l: for m in l: self.delete_message(m) n += 1 l = self.get_messages(page_size, vtimeout) return n def count(self, page_size=10, vtimeout=10): """ Utility function to count the number of messages in a queue. Note: This function now calls GetQueueAttributes to obtain an 'approximate' count of the number of messages in a queue. """ a = self.get_attributes('ApproximateNumberOfMessages') return int(a['ApproximateNumberOfMessages']) def count_slow(self, page_size=10, vtimeout=10): """ Deprecated. This is the old 'count' method that actually counts the messages by reading them all. This gives an accurate count but is very slow for queues with a non-trivial number of messages. Instead, use get_attributes('ApproximateNumberOfMessages') to take advantage of the new SQS capability. This is retained only for the unit tests. """ n = 0 l = self.get_messages(page_size, vtimeout) while l: for m in l: n += 1 l = self.get_messages(page_size, vtimeout) return n def dump(self, file_name, page_size=10, vtimeout=10, sep='\n'): """Utility function to dump the messages in a queue to a file NOTE: Page size must be < 10 else SQS errors""" fp = open(file_name, 'wb') n = 0 l = self.get_messages(page_size, vtimeout) while l: for m in l: fp.write(m.get_body()) if sep: fp.write(sep) n += 1 l = self.get_messages(page_size, vtimeout) fp.close() return n def save_to_file(self, fp, sep='\n'): """ Read all messages from the queue and persist them to file-like object. Messages are written to the file and the 'sep' string is written in between messages. Messages are deleted from the queue after being written to the file. Returns the number of messages saved. """ n = 0 m = self.read() while m: n += 1 fp.write(m.get_body()) if sep: fp.write(sep) self.delete_message(m) m = self.read() return n def save_to_filename(self, file_name, sep='\n'): """ Read all messages from the queue and persist them to local file. Messages are written to the file and the 'sep' string is written in between messages.
Messages are deleted from the queue after being written to the file. Returns the number of messages saved. """ fp = open(file_name, 'wb') n = self.save_to_file(fp, sep) fp.close() return n # for backwards compatibility save = save_to_filename def save_to_s3(self, bucket): """ Read all messages from the queue and persist them to S3. Messages are stored in the S3 bucket using a naming scheme of:: <queue_id>/<message_id> Messages are deleted from the queue after being saved to S3. Returns the number of messages saved. """ n = 0 m = self.read() while m: n += 1 key = bucket.new_key('%s/%s' % (self.id, m.id)) key.set_contents_from_string(m.get_body()) self.delete_message(m) m = self.read() return n def load_from_s3(self, bucket, prefix=None): """ Load messages previously saved to S3. """ n = 0 if prefix: prefix = '%s/' % prefix else: prefix = '%s/' % self.id[1:] rs = bucket.list(prefix=prefix) for key in rs: n += 1 m = self.new_message(key.get_contents_as_string()) self.write(m) return n def load_from_file(self, fp, sep='\n'): """Utility function to load messages from a file-like object to a queue""" n = 0 body = '' l = fp.readline() while l: if l == sep: m = Message(self, body) self.write(m) n += 1 print 'writing message %d' % n body = '' else: body = body + l l = fp.readline() return n def load_from_filename(self, file_name, sep='\n'): """Utility function to load messages from a local filename to a queue""" fp = open(file_name, 'rb') n = self.load_from_file(fp, sep) fp.close() return n # for backward compatibility load = load_from_filename boto-2.20.1/boto/sqs/regioninfo.py000066400000000000000000000027051225267101000170210ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.regioninfo import RegionInfo class SQSRegionInfo(RegionInfo): def __init__(self, connection=None, name=None, endpoint=None): from boto.sqs.connection import SQSConnection RegionInfo.__init__(self, connection, name, endpoint, SQSConnection) boto-2.20.1/boto/storage_uri.py000077500000000000000000001141351225267101000164020ustar00rootroot00000000000000# Copyright 2010 Google Inc. # Copyright (c) 2011, Nexenta Systems Inc.
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto import os import sys import textwrap from boto.s3.deletemarker import DeleteMarker from boto.exception import BotoClientError from boto.exception import InvalidUriError class StorageUri(object): """ Base class for representing storage provider-independent bucket and object name with a shorthand URI-like syntax. This is an abstract class: the constructor cannot be called (throws an exception if you try). """ connection = None # Optional args that can be set from one of the concrete subclass # constructors, to change connection behavior (e.g., to override # https_connection_factory). connection_args = None # Map of provider scheme ('s3' or 'gs') to AWSAuthConnection object. We # maintain a pool here in addition to the connection pool implemented # in AWSAuthConnection because the latter re-creates its connection pool # every time that class is instantiated (so the current pool is used to # avoid re-instantiating AWSAuthConnection). provider_pool = {} def __init__(self): """Uncallable constructor on abstract base StorageUri class. """ raise BotoClientError('Attempt to instantiate abstract StorageUri ' 'class') def __repr__(self): """Returns string representation of URI.""" return self.uri def equals(self, uri): """Returns true if two URIs are equal.""" return self.uri == uri.uri def check_response(self, resp, level, uri): if resp is None: raise InvalidUriError('\n'.join(textwrap.wrap( 'Attempt to get %s for "%s" failed. This can happen if ' 'the URI refers to a non-existent object or if you meant to ' 'operate on a directory (e.g., leaving off -R option on gsutil ' 'cp, mv, or ls of a bucket)' % (level, uri), 80))) def _check_bucket_uri(self, function_name): if issubclass(type(self), BucketStorageUri) and not self.bucket_name: raise InvalidUriError( '%s on bucket-less URI (%s)' % (function_name, self.uri)) def _check_object_uri(self, function_name): if issubclass(type(self), BucketStorageUri) and not self.object_name: raise InvalidUriError('%s on object-less URI (%s)' % (function_name, self.uri)) def _warn_about_args(self, function_name, **args): for arg in args: if args[arg]: sys.stderr.write( 'Warning: %s ignores argument: %s=%s\n' % (function_name, arg, str(args[arg]))) def connect(self, access_key_id=None, secret_access_key=None, **kwargs): """ Opens a connection to appropriate provider, depending on provider portion of URI. Requires Credentials defined in boto config file (see boto/pyami/config.py). 
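Example (a hedged sketch -- the URI is a placeholder and credentials
        are assumed to be configured in the boto config file)::

            import boto

            uri = boto.storage_uri('gs://my-bucket/my-object')
            conn = uri.connect()  # returns (and caches) a GSConnection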
@type storage_uri: StorageUri @param storage_uri: StorageUri specifying a bucket or a bucket+object @rtype: L{AWSAuthConnection} @return: A connection to storage service provider of the given URI. """ connection_args = dict(self.connection_args or ()) if (hasattr(self, 'suppress_consec_slashes') and 'suppress_consec_slashes' not in connection_args): connection_args['suppress_consec_slashes'] = ( self.suppress_consec_slashes) connection_args.update(kwargs) if not self.connection: if self.scheme in self.provider_pool: self.connection = self.provider_pool[self.scheme] elif self.scheme == 's3': from boto.s3.connection import S3Connection self.connection = S3Connection(access_key_id, secret_access_key, **connection_args) self.provider_pool[self.scheme] = self.connection elif self.scheme == 'gs': from boto.gs.connection import GSConnection # Use OrdinaryCallingFormat instead of boto-default # SubdomainCallingFormat because the latter changes the hostname # that's checked during cert validation for HTTPS connections, # which will fail cert validation (when cert validation is # enabled). # # The same is not true for S3's HTTPS certificates. In fact, # we don't want to do this for S3 because S3 requires the # subdomain to match the location of the bucket. If the proper # subdomain is not used, the server will return a 301 redirect # with no Location header. # # Note: the following import can't be moved up to the # start of this file else it causes a config import failure when # run from the resumable upload/download tests. from boto.s3.connection import OrdinaryCallingFormat connection_args['calling_format'] = OrdinaryCallingFormat() self.connection = GSConnection(access_key_id, secret_access_key, **connection_args) self.provider_pool[self.scheme] = self.connection elif self.scheme == 'file': from boto.file.connection import FileConnection self.connection = FileConnection(self) else: raise InvalidUriError('Unrecognized scheme "%s"' % self.scheme) self.connection.debug = self.debug return self.connection def has_version(self): return (issubclass(type(self), BucketStorageUri) and ((self.version_id is not None) or (self.generation is not None))) def delete_key(self, validate=False, headers=None, version_id=None, mfa_token=None): self._check_object_uri('delete_key') bucket = self.get_bucket(validate, headers) return bucket.delete_key(self.object_name, headers, version_id, mfa_token) def list_bucket(self, prefix='', delimiter='', headers=None, all_versions=False): self._check_bucket_uri('list_bucket') bucket = self.get_bucket(headers=headers) if all_versions: return (v for v in bucket.list_versions( prefix=prefix, delimiter=delimiter, headers=headers) if not isinstance(v, DeleteMarker)) else: return bucket.list(prefix=prefix, delimiter=delimiter, headers=headers) def get_all_keys(self, validate=False, headers=None, prefix=None): bucket = self.get_bucket(validate, headers) return bucket.get_all_keys(headers) def get_bucket(self, validate=False, headers=None): self._check_bucket_uri('get_bucket') conn = self.connect() bucket = conn.get_bucket(self.bucket_name, validate, headers) self.check_response(bucket, 'bucket', self.uri) return bucket def get_key(self, validate=False, headers=None, version_id=None): self._check_object_uri('get_key') bucket = self.get_bucket(validate, headers) key = bucket.get_key(self.object_name, headers, version_id) self.check_response(key, 'key', self.uri) return key def new_key(self, validate=False, headers=None): self._check_object_uri('new_key') bucket = 
self.get_bucket(validate, headers) return bucket.new_key(self.object_name) def get_contents_to_stream(self, fp, headers=None, version_id=None): self._check_object_uri('get_key') self._warn_about_args('get_key', validate=False) key = self.get_key(None, headers) self.check_response(key, 'key', self.uri) return key.get_contents_to_file(fp, headers, version_id=version_id) def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None, hash_algs=None): self._check_object_uri('get_contents_to_file') key = self.get_key(None, headers) self.check_response(key, 'key', self.uri) if hash_algs: key.get_contents_to_file(fp, headers, cb, num_cb, torrent, version_id, res_download_handler, response_headers, hash_algs=hash_algs) else: key.get_contents_to_file(fp, headers, cb, num_cb, torrent, version_id, res_download_handler, response_headers) def get_contents_as_string(self, validate=False, headers=None, cb=None, num_cb=10, torrent=False, version_id=None): self._check_object_uri('get_contents_as_string') key = self.get_key(validate, headers) self.check_response(key, 'key', self.uri) return key.get_contents_as_string(headers, cb, num_cb, torrent, version_id) def acl_class(self): conn = self.connect() acl_class = conn.provider.acl_class self.check_response(acl_class, 'acl_class', self.uri) return acl_class def canned_acls(self): conn = self.connect() canned_acls = conn.provider.canned_acls self.check_response(canned_acls, 'canned_acls', self.uri) return canned_acls class BucketStorageUri(StorageUri): """ StorageUri subclass that handles bucket storage providers. Callers should instantiate this class by calling boto.storage_uri(). """ delim = '/' capabilities = set([]) # A set of additional capabilities. def __init__(self, scheme, bucket_name=None, object_name=None, debug=0, connection_args=None, suppress_consec_slashes=True, version_id=None, generation=None, is_latest=False): """Instantiate a BucketStorageUri from scheme,bucket,object tuple. @type scheme: string @param scheme: URI scheme naming the storage provider (gs, s3, etc.) @type bucket_name: string @param bucket_name: bucket name @type object_name: string @param object_name: object name, excluding generation/version. @type debug: int @param debug: debug level to pass in to connection (range 0..2) @type connection_args: map @param connection_args: optional map containing args to be passed to {S3,GS}Connection constructor (e.g., to override https_connection_factory). @param suppress_consec_slashes: If provided, controls whether consecutive slashes will be suppressed in key paths. @param version_id: Object version id (S3-specific). @param generation: Object generation number (GCS-specific). @param is_latest: boolean indicating that a versioned object is the current version After instantiation the components are available in the following fields: scheme, bucket_name, object_name, version_id, generation, is_latest, versionless_uri, version_specific_uri, uri. Note: If instantiated without version info, the string representation for a URI stays versionless; similarly, if instantiated with version info, the string representation for a URI stays version-specific. If you call one of the uri.set_contents_from_xyz() methods, a specific object version will be created, and its version-specific URI string can be retrieved from version_specific_uri even if the URI was instantiated without version info. 
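        Illustrative sketch of the versioning behavior described in the
        Note above (the names are hypothetical)::

            uri = BucketStorageUri('gs', 'mybucket', 'myobject',
                                   generation=5)
            uri.versionless_uri       # 'gs://mybucket/myobject'
            uri.version_specific_uri  # 'gs://mybucket/myobject#5'
            uri.uri                   # version-specific, since the URI
                                      # was instantiated with a generation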
""" self.scheme = scheme self.bucket_name = bucket_name self.object_name = object_name self.debug = debug if connection_args: self.connection_args = connection_args self.suppress_consec_slashes = suppress_consec_slashes self.version_id = version_id self.generation = generation and int(generation) self.is_latest = is_latest self.is_version_specific = bool(self.generation) or bool(version_id) self._build_uri_strings() def _build_uri_strings(self): if self.bucket_name and self.object_name: self.versionless_uri = '%s://%s/%s' % (self.scheme, self.bucket_name, self.object_name) if self.generation: self.version_specific_uri = '%s#%s' % (self.versionless_uri, self.generation) elif self.version_id: self.version_specific_uri = '%s#%s' % ( self.versionless_uri, self.version_id) if self.is_version_specific: self.uri = self.version_specific_uri else: self.uri = self.versionless_uri elif self.bucket_name: self.uri = ('%s://%s/' % (self.scheme, self.bucket_name)) else: self.uri = ('%s://' % self.scheme) def _update_from_key(self, key): self._update_from_values( getattr(key, 'version_id', None), getattr(key, 'generation', None), getattr(key, 'is_latest', None), getattr(key, 'md5', None)) def _update_from_values(self, version_id, generation, is_latest, md5): self.version_id = version_id self.generation = generation self.is_latest = is_latest self._build_uri_strings() self.md5 = md5 def get_key(self, validate=False, headers=None, version_id=None): self._check_object_uri('get_key') bucket = self.get_bucket(validate, headers) if self.get_provider().name == 'aws': key = bucket.get_key(self.object_name, headers, version_id=(version_id or self.version_id)) elif self.get_provider().name == 'google': key = bucket.get_key(self.object_name, headers, generation=self.generation) self.check_response(key, 'key', self.uri) return key def delete_key(self, validate=False, headers=None, version_id=None, mfa_token=None): self._check_object_uri('delete_key') bucket = self.get_bucket(validate, headers) if self.get_provider().name == 'aws': version_id = version_id or self.version_id return bucket.delete_key(self.object_name, headers, version_id, mfa_token) elif self.get_provider().name == 'google': return bucket.delete_key(self.object_name, headers, generation=self.generation) def clone_replace_name(self, new_name): """Instantiate a BucketStorageUri from the current BucketStorageUri, but replacing the object_name. @type new_name: string @param new_name: new object name """ self._check_bucket_uri('clone_replace_name') return BucketStorageUri( self.scheme, bucket_name=self.bucket_name, object_name=new_name, debug=self.debug, suppress_consec_slashes=self.suppress_consec_slashes) def clone_replace_key(self, key): """Instantiate a BucketStorageUri from the current BucketStorageUri, by replacing the object name with the object name and other metadata found in the given Key object (including generation). 
@type key: Key @param key: key for the new StorageUri to represent """ self._check_bucket_uri('clone_replace_key') version_id = None generation = None is_latest = False if hasattr(key, 'version_id'): version_id = key.version_id if hasattr(key, 'generation'): generation = key.generation if hasattr(key, 'is_latest'): is_latest = key.is_latest return BucketStorageUri( key.provider.get_provider_name(), bucket_name=key.bucket.name, object_name=key.name, debug=self.debug, suppress_consec_slashes=self.suppress_consec_slashes, version_id=version_id, generation=generation, is_latest=is_latest) def get_acl(self, validate=False, headers=None, version_id=None): """returns a bucket's acl""" self._check_bucket_uri('get_acl') bucket = self.get_bucket(validate, headers) # This works for both bucket- and object- level ACLs (former passes # key_name=None): key_name = self.object_name or '' if self.get_provider().name == 'aws': version_id = version_id or self.version_id acl = bucket.get_acl(key_name, headers, version_id) else: acl = bucket.get_acl(key_name, headers, generation=self.generation) self.check_response(acl, 'acl', self.uri) return acl def get_def_acl(self, validate=False, headers=None): """returns a bucket's default object acl""" self._check_bucket_uri('get_def_acl') bucket = self.get_bucket(validate, headers) acl = bucket.get_def_acl(headers) self.check_response(acl, 'acl', self.uri) return acl def get_cors(self, validate=False, headers=None): """returns a bucket's CORS XML""" self._check_bucket_uri('get_cors') bucket = self.get_bucket(validate, headers) cors = bucket.get_cors(headers) self.check_response(cors, 'cors', self.uri) return cors def set_cors(self, cors, validate=False, headers=None): """sets or updates a bucket's CORS XML""" self._check_bucket_uri('set_cors ') bucket = self.get_bucket(validate, headers) bucket.set_cors(cors.to_xml(), headers) def get_location(self, validate=False, headers=None): self._check_bucket_uri('get_location') bucket = self.get_bucket(validate, headers) return bucket.get_location() def get_storage_class(self, validate=False, headers=None): self._check_bucket_uri('get_storage_class') # StorageClass is defined as a bucket param for GCS, but as a key # param for S3. if self.scheme != 'gs': raise ValueError('get_storage_class() not supported for %s ' 'URIs.' % self.scheme) bucket = self.get_bucket(validate, headers) return bucket.get_storage_class() def get_subresource(self, subresource, validate=False, headers=None, version_id=None): self._check_bucket_uri('get_subresource') bucket = self.get_bucket(validate, headers) return bucket.get_subresource(subresource, self.object_name, headers, version_id) def add_group_email_grant(self, permission, email_address, recursive=False, validate=False, headers=None): self._check_bucket_uri('add_group_email_grant') if self.scheme != 'gs': raise ValueError('add_group_email_grant() not supported for %s ' 'URIs.' 
% self.scheme) if self.object_name: if recursive: raise ValueError('add_group_email_grant() on key-ful URI cannot ' 'specify recursive=True') key = self.get_key(validate, headers) self.check_response(key, 'key', self.uri) key.add_group_email_grant(permission, email_address, headers) elif self.bucket_name: bucket = self.get_bucket(validate, headers) bucket.add_group_email_grant(permission, email_address, recursive, headers) else: raise InvalidUriError('add_group_email_grant() on bucket-less URI ' '%s' % self.uri) def add_email_grant(self, permission, email_address, recursive=False, validate=False, headers=None): self._check_bucket_uri('add_email_grant') if not self.object_name: bucket = self.get_bucket(validate, headers) bucket.add_email_grant(permission, email_address, recursive, headers) else: key = self.get_key(validate, headers) self.check_response(key, 'key', self.uri) key.add_email_grant(permission, email_address) def add_user_grant(self, permission, user_id, recursive=False, validate=False, headers=None): self._check_bucket_uri('add_user_grant') if not self.object_name: bucket = self.get_bucket(validate, headers) bucket.add_user_grant(permission, user_id, recursive, headers) else: key = self.get_key(validate, headers) self.check_response(key, 'key', self.uri) key.add_user_grant(permission, user_id) def list_grants(self, headers=None): self._check_bucket_uri('list_grants ') bucket = self.get_bucket(headers) return bucket.list_grants(headers) def is_file_uri(self): """Returns True if this URI names a file or directory.""" return False def is_cloud_uri(self): """Returns True if this URI names a bucket or object.""" return True def names_container(self): """ Returns True if this URI names a directory or bucket. Will return False for bucket subdirs; providing bucket subdir semantics needs to be done by the caller (like gsutil does). """ return bool(not self.object_name) def names_singleton(self): """Returns True if this URI names a file or object.""" return bool(self.object_name) def names_directory(self): """Returns True if this URI names a directory.""" return False def names_provider(self): """Returns True if this URI names a provider.""" return bool(not self.bucket_name) def names_bucket(self): """Returns True if this URI names a bucket.""" return bool(self.bucket_name) and bool(not self.object_name) def names_file(self): """Returns True if this URI names a file.""" return False def names_object(self): """Returns True if this URI names an object.""" return self.names_singleton() def is_stream(self): """Returns True if this URI represents input/output stream.""" return False def create_bucket(self, headers=None, location='', policy=None, storage_class=None): self._check_bucket_uri('create_bucket ') conn = self.connect() # Pass storage_class param only if this is a GCS bucket. (In S3 the # storage class is specified on the key object.) 
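        # Illustrative calls (the bucket name is hypothetical):
        #   boto.storage_uri('gs://mybucket').create_bucket(
        #       storage_class='DURABLE_REDUCED_AVAILABILITY')
        # An s3:// URI would instead set the storage class later, per
        # key (e.g. via reduced_redundancy when uploading an object).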
if self.scheme == 'gs': return conn.create_bucket(self.bucket_name, headers, location, policy, storage_class) else: return conn.create_bucket(self.bucket_name, headers, location, policy) def delete_bucket(self, headers=None): self._check_bucket_uri('delete_bucket') conn = self.connect() return conn.delete_bucket(self.bucket_name, headers) def get_all_buckets(self, headers=None): conn = self.connect() return conn.get_all_buckets(headers) def get_provider(self): conn = self.connect() provider = conn.provider self.check_response(provider, 'provider', self.uri) return provider def set_acl(self, acl_or_str, key_name='', validate=False, headers=None, version_id=None, if_generation=None, if_metageneration=None): """Sets or updates a bucket's ACL.""" self._check_bucket_uri('set_acl') key_name = key_name or self.object_name or '' bucket = self.get_bucket(validate, headers) if self.generation: bucket.set_acl( acl_or_str, key_name, headers, generation=self.generation, if_generation=if_generation, if_metageneration=if_metageneration) else: version_id = version_id or self.version_id bucket.set_acl(acl_or_str, key_name, headers, version_id) def set_xml_acl(self, xmlstring, key_name='', validate=False, headers=None, version_id=None, if_generation=None, if_metageneration=None): """Sets or updates a bucket's ACL with an XML string.""" self._check_bucket_uri('set_xml_acl') key_name = key_name or self.object_name or '' bucket = self.get_bucket(validate, headers) if self.generation: bucket.set_xml_acl( xmlstring, key_name, headers, generation=self.generation, if_generation=if_generation, if_metageneration=if_metageneration) else: version_id = version_id or self.version_id bucket.set_xml_acl(xmlstring, key_name, headers, version_id=version_id) def set_def_xml_acl(self, xmlstring, validate=False, headers=None): """Sets or updates a bucket's default object ACL with an XML string.""" self._check_bucket_uri('set_def_xml_acl') self.get_bucket(validate, headers).set_def_xml_acl(xmlstring, headers) def set_def_acl(self, acl_or_str, validate=False, headers=None, version_id=None): """Sets or updates a bucket's default object ACL.""" self._check_bucket_uri('set_def_acl') self.get_bucket(validate, headers).set_def_acl(acl_or_str, headers) def set_canned_acl(self, acl_str, validate=False, headers=None, version_id=None): """Sets or updates a bucket's acl to a predefined (canned) value.""" self._check_object_uri('set_canned_acl') self._warn_about_args('set_canned_acl', version_id=version_id) key = self.get_key(validate, headers) self.check_response(key, 'key', self.uri) key.set_canned_acl(acl_str, headers) def set_def_canned_acl(self, acl_str, validate=False, headers=None, version_id=None): """Sets or updates a bucket's default object acl to a predefined (canned) value.""" self._check_bucket_uri('set_def_canned_acl ') key = self.get_key(validate, headers) self.check_response(key, 'key', self.uri) key.set_def_canned_acl(acl_str, headers, version_id) def set_subresource(self, subresource, value, validate=False, headers=None, version_id=None): self._check_bucket_uri('set_subresource') bucket = self.get_bucket(validate, headers) bucket.set_subresource(subresource, value, self.object_name, headers, version_id) def set_contents_from_string(self, s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False): self._check_object_uri('set_contents_from_string') key = self.new_key(headers=headers) if self.scheme == 'gs': if reduced_redundancy: sys.stderr.write('Warning: GCS does not support ' 
'reduced_redundancy; argument ignored by ' 'set_contents_from_string') result = key.set_contents_from_string( s, headers, replace, cb, num_cb, policy, md5) else: result = key.set_contents_from_string( s, headers, replace, cb, num_cb, policy, md5, reduced_redundancy) self._update_from_key(key) return result def set_contents_from_file(self, fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, size=None, rewind=False, res_upload_handler=None): self._check_object_uri('set_contents_from_file') key = self.new_key(headers=headers) if self.scheme == 'gs': result = key.set_contents_from_file( fp, headers, replace, cb, num_cb, policy, md5, size=size, rewind=rewind, res_upload_handler=res_upload_handler) if res_upload_handler: self._update_from_values(None, res_upload_handler.generation, None, md5) else: self._warn_about_args('set_contents_from_file', res_upload_handler=res_upload_handler) result = key.set_contents_from_file( fp, headers, replace, cb, num_cb, policy, md5, size=size, rewind=rewind) self._update_from_key(key) return result def set_contents_from_stream(self, fp, headers=None, replace=True, cb=None, policy=None, reduced_redundancy=False): self._check_object_uri('set_contents_from_stream') dst_key = self.new_key(False, headers) result = dst_key.set_contents_from_stream( fp, headers, replace, cb, policy=policy, reduced_redundancy=reduced_redundancy) self._update_from_key(dst_key) return result def copy_key(self, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False, encrypt_key=False, headers=None, query_args=None, src_generation=None): """Returns newly created key.""" self._check_object_uri('copy_key') dst_bucket = self.get_bucket(validate=False, headers=headers) if src_generation: return dst_bucket.copy_key(new_key_name=self.object_name, src_bucket_name=src_bucket_name, src_key_name=src_key_name, metadata=metadata, storage_class=storage_class, preserve_acl=preserve_acl, encrypt_key=encrypt_key, headers=headers, query_args=query_args, src_generation=src_generation) else: return dst_bucket.copy_key(new_key_name=self.object_name, src_bucket_name=src_bucket_name, src_key_name=src_key_name, metadata=metadata, src_version_id=src_version_id, storage_class=storage_class, preserve_acl=preserve_acl, encrypt_key=encrypt_key, headers=headers, query_args=query_args) def enable_logging(self, target_bucket, target_prefix=None, validate=False, headers=None, version_id=None): self._check_bucket_uri('enable_logging') bucket = self.get_bucket(validate, headers) bucket.enable_logging(target_bucket, target_prefix, headers=headers) def disable_logging(self, validate=False, headers=None, version_id=None): self._check_bucket_uri('disable_logging') bucket = self.get_bucket(validate, headers) bucket.disable_logging(headers=headers) def get_logging_config(self, validate=False, headers=None, version_id=None): self._check_bucket_uri('get_logging_config') bucket = self.get_bucket(validate, headers) return bucket.get_logging_config(headers=headers) def set_website_config(self, main_page_suffix=None, error_key=None, validate=False, headers=None): self._check_bucket_uri('set_website_config') bucket = self.get_bucket(validate, headers) if not (main_page_suffix or error_key): bucket.delete_website_configuration(headers) else: bucket.configure_website(main_page_suffix, error_key, headers) def get_website_config(self, validate=False, headers=None): self._check_bucket_uri('get_website_config') bucket = self.get_bucket(validate, headers) return 
bucket.get_website_configuration(headers) def get_versioning_config(self, headers=None): self._check_bucket_uri('get_versioning_config') bucket = self.get_bucket(False, headers) return bucket.get_versioning_status(headers) def configure_versioning(self, enabled, headers=None): self._check_bucket_uri('configure_versioning') bucket = self.get_bucket(False, headers) return bucket.configure_versioning(enabled, headers) def set_metadata(self, metadata_plus, metadata_minus, preserve_acl, headers=None): return self.get_key(False).set_remote_metadata(metadata_plus, metadata_minus, preserve_acl, headers=headers) def compose(self, components, content_type=None, headers=None): self._check_object_uri('compose') component_keys = [] for suri in components: component_keys.append(suri.new_key()) component_keys[-1].generation = suri.generation self.generation = self.new_key().compose( component_keys, content_type=content_type, headers=headers) self._build_uri_strings() return self def get_lifecycle_config(self, validate=False, headers=None): """Returns a bucket's lifecycle configuration.""" self._check_bucket_uri('get_lifecycle_config') bucket = self.get_bucket(validate, headers) lifecycle_config = bucket.get_lifecycle_config(headers) self.check_response(lifecycle_config, 'lifecycle', self.uri) return lifecycle_config def configure_lifecycle(self, lifecycle_config, validate=False, headers=None): """Sets or updates a bucket's lifecycle configuration.""" self._check_bucket_uri('configure_lifecycle') bucket = self.get_bucket(validate, headers) bucket.configure_lifecycle(lifecycle_config, headers) def exists(self, headers=None): """Returns True if the object exists or False if it doesn't""" if not self.object_name: raise InvalidUriError('exists on object-less URI (%s)' % self.uri) bucket = self.get_bucket() key = bucket.get_key(self.object_name, headers=headers) return bool(key) class FileStorageUri(StorageUri): """ StorageUri subclass that handles files in the local file system. Callers should instantiate this class by calling boto.storage_uri(). See file/README about how we map StorageUri operations onto a file system. """ delim = os.sep def __init__(self, object_name, debug, is_stream=False): """Instantiate a FileStorageUri from a path name. @type object_name: string @param object_name: object name @type debug: boolean @param debug: whether to enable debugging on this StorageUri After instantiation the components are available in the following fields: uri, scheme, bucket_name (always blank for this "anonymous" bucket), object_name. """ self.scheme = 'file' self.bucket_name = '' self.object_name = object_name self.uri = 'file://' + object_name self.debug = debug self.stream = is_stream def clone_replace_name(self, new_name): """Instantiate a FileStorageUri from the current FileStorageUri, but replacing the object_name. 
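        For example (sketch)::

            import boto
            file_uri = boto.storage_uri('file:///tmp/a.txt')
            new_uri = file_uri.clone_replace_name('/tmp/b.txt')
            # the debug and stream settings carry over from file_uri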
@type new_name: string @param new_name: new object name """ return FileStorageUri(new_name, self.debug, self.stream) def is_file_uri(self): """Returns True if this URI names a file or directory.""" return True def is_cloud_uri(self): """Returns True if this URI names a bucket or object.""" return False def names_container(self): """Returns True if this URI names a directory or bucket.""" return self.names_directory() def names_singleton(self): """Returns True if this URI names a file (or stream) or object.""" return not self.names_container() def names_directory(self): """Returns True if this URI names a directory.""" if self.stream: return False return os.path.isdir(self.object_name) def names_provider(self): """Returns True if this URI names a provider.""" return False def names_bucket(self): """Returns True if this URI names a bucket.""" return False def names_file(self): """Returns True if this URI names a file.""" return self.names_singleton() def names_object(self): """Returns True if this URI names an object.""" return False def is_stream(self): """Returns True if this URI represents input/output stream. """ return bool(self.stream) def close(self): """Closes the underlying file. """ self.get_key().close() def exists(self, _headers_not_used=None): """Returns True if the file exists or False if it doesn't""" # The _headers_not_used parameter is ignored. It is only there to ensure # that this method's signature is identical to the exists method on the # BucketStorageUri class. return os.path.exists(self.object_name) boto-2.20.1/boto/sts/000077500000000000000000000000001225267101000143075ustar00rootroot00000000000000boto-2.20.1/boto/sts/__init__.py000066400000000000000000000043321225267101000164220ustar00rootroot00000000000000# Copyright (c) 2010-2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010-2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from connection import STSConnection from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the STS service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` instances """ return [RegionInfo(name='us-east-1', endpoint='sts.amazonaws.com', connection_cls=STSConnection), RegionInfo(name='us-gov-west-1', endpoint='sts.us-gov-west-1.amazonaws.com', connection_cls=STSConnection) ] def connect_to_region(region_name, **kw_params): """ Given a valid region name, return a :class:`boto.sts.connection.STSConnection`. :type: str :param region_name: The name of the region to connect to. 
:rtype: :class:`boto.sts.connection.STSConnection` or ``None`` :return: A connection to the given region, or None if an invalid region name is given """ for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/sts/connection.py000066400000000000000000000717641225267101000170370ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011, Eucalyptus Systems, Inc. # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from credentials import Credentials, FederationToken, AssumedRole from credentials import DecodeAuthorizationMessage import boto import boto.utils import datetime import threading _session_token_cache = {} class STSConnection(AWSQueryConnection): """ AWS Security Token Service The AWS Security Token Service is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). This guide provides descriptions of the AWS Security Token Service API. For more detailed information about using this service, go to `Using Temporary Security Credentials`_. For information about setting up signatures and authorization through the API, go to `Signing AWS API Requests`_ in the AWS General Reference . For general information about the Query API, go to `Making Query Requests`_ in Using IAM . For information about using security tokens with other AWS products, go to `Using Temporary Security Credentials to Access AWS`_ in Using Temporary Security Credentials . If you're new to AWS and need additional technical information about a specific AWS product, you can find the product's technical documentation at `http://aws.amazon.com/documentation/`_. We will refer to Amazon Identity and Access Management using the abbreviated form IAM. All copyrights and legal protections still apply. 
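    Minimal usage sketch (credentials are assumed to come from the
    boto config file or the environment)::

        import boto.sts
        sts = boto.sts.connect_to_region('us-east-1')
        token = sts.get_session_token()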
""" DefaultRegionName = 'us-east-1' DefaultRegionEndpoint = 'sts.amazonaws.com' APIVersion = '2011-06-15' def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', converter=None, validate_certs=True, anon=False): if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint, connection_cls=STSConnection) self.region = region self.anon = anon self._mutex = threading.Semaphore() AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, self.region.endpoint, debug, https_connection_factory, path, validate_certs=validate_certs) def _required_auth_capability(self): if self.anon: return ['pure-query'] else: return ['sign-v2'] def _check_token_cache(self, token_key, duration=None, window_seconds=60): token = _session_token_cache.get(token_key, None) if token: now = datetime.datetime.utcnow() expires = boto.utils.parse_ts(token.expiration) delta = expires - now if delta < datetime.timedelta(seconds=window_seconds): msg = 'Cached session token %s is expired' % token_key boto.log.debug(msg) token = None return token def _get_session_token(self, duration=None, mfa_serial_number=None, mfa_token=None): params = {} if duration: params['DurationSeconds'] = duration if mfa_serial_number: params['SerialNumber'] = mfa_serial_number if mfa_token: params['TokenCode'] = mfa_token return self.get_object('GetSessionToken', params, Credentials, verb='POST') def get_session_token(self, duration=None, force_new=False, mfa_serial_number=None, mfa_token=None): """ Return a valid session token. Because retrieving new tokens from the Secure Token Service is a fairly heavyweight operation this module caches previously retrieved tokens and returns them when appropriate. Each token is cached with a key consisting of the region name of the STS endpoint concatenated with the requesting user's access id. If there is a token in the cache meeting with this key, the session expiration is checked to make sure it is still valid and if so, the cached token is returned. Otherwise, a new session token is requested from STS and it is placed into the cache and returned. :type duration: int :param duration: The number of seconds the credentials should remain valid. :type force_new: bool :param force_new: If this parameter is True, a new session token will be retrieved from the Secure Token Service regardless of whether there is a valid cached token or not. :type mfa_serial_number: str :param mfa_serial_number: The serial number of an MFA device. If this is provided and if the mfa_passcode provided is valid, the temporary session token will be authorized with to perform operations requiring the MFA device authentication. :type mfa_token: str :param mfa_token: The 6 digit token associated with the MFA device. 
""" token_key = '%s:%s' % (self.region.name, self.provider.access_key) token = self._check_token_cache(token_key, duration) if force_new or not token: boto.log.debug('fetching a new token for %s' % token_key) try: self._mutex.acquire() token = self._get_session_token(duration, mfa_serial_number, mfa_token) _session_token_cache[token_key] = token finally: self._mutex.release() return token def get_federation_token(self, name, duration=None, policy=None): """ Returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) for a federated user. A typical use is in a proxy application that is getting temporary security credentials on behalf of distributed applications inside a corporate network. Because you must call the `GetFederationToken` action using the long- term security credentials of an IAM user, this call is appropriate in contexts where those credentials can be safely stored, usually in a server-based application. **Note:** Do not use this call in mobile applications or client-based web applications that directly get temporary security credentials. For those types of applications, use `AssumeRoleWithWebIdentity`. The `GetFederationToken` action must be called by using the long-term AWS security credentials of the AWS account or an IAM user. Credentials that are created by IAM users are valid for the specified duration, between 900 seconds (15 minutes) and 129600 seconds (36 hours); credentials that are created by using account credentials have a maximum duration of 3600 seconds (1 hour). The permissions that are granted to the federated user are the intersection of the policy that is passed with the `GetFederationToken` request and policies that are associated with of the entity making the `GetFederationToken` call. For more information about how permissions work, see `Controlling Permissions in Temporary Credentials`_ in Using Temporary Security Credentials . For information about using `GetFederationToken` to create temporary security credentials, see `Creating Temporary Credentials to Enable Access for Federated Users`_ in Using Temporary Security Credentials . :type name: string :param name: The name of the federated user. The name is used as an identifier for the temporary security credentials (such as `Bob`). For example, you can reference the federated user name in a resource-based policy, such as in an Amazon S3 bucket policy. :type policy: string :param policy: A policy that specifies the permissions that are granted to the federated user. By default, federated users have no permissions; they do not inherit any from the IAM user. When you specify a policy, the federated user's permissions are intersection of the specified policy and the IAM user's policy. If you don't specify a policy, federated users can only access AWS resources that explicitly allow those federated users in a resource policy, such as in an Amazon S3 bucket policy. :type duration: integer :param duration: The duration, in seconds, that the session should last. Acceptable durations for federation sessions range from 900 seconds (15 minutes) to 129600 seconds (36 hours), with 43200 seconds (12 hours) as the default. Sessions for AWS account owners are restricted to a maximum of 3600 seconds (one hour). If the duration is longer than one hour, the session for AWS account owners defaults to one hour. 
""" params = {'Name': name} if duration: params['DurationSeconds'] = duration if policy: params['Policy'] = policy return self.get_object('GetFederationToken', params, FederationToken, verb='POST') def assume_role(self, role_arn, role_session_name, policy=None, duration_seconds=None, external_id=None): """ Returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) that you can use to access AWS resources that you might not normally have access to. Typically, you use `AssumeRole` for cross-account access or federation. For cross-account access, imagine that you own multiple accounts and need to access resources in each account. You could create long-term credentials in each account to access those resources. However, managing all those credentials and remembering which one can access which account can be time consuming. Instead, you can create one set of long-term credentials in one account and then use temporary security credentials to access all the other accounts by assuming roles in those accounts. For more information about roles, see `Roles`_ in Using IAM . For federation, you can, for example, grant single sign-on access to the AWS Management Console. If you already have an identity and authentication system in your corporate network, you don't have to recreate user identities in AWS in order to grant those user identities access to AWS. Instead, after a user has been authenticated, you call `AssumeRole` (and specify the role with the appropriate permissions) to get temporary security credentials for that user. With those temporary security credentials, you construct a sign-in URL that users can use to access the console. For more information, see `Scenarios for Granting Temporary Access`_ in AWS Security Token Service . The temporary security credentials are valid for the duration that you specified when calling `AssumeRole`, which can be from 900 seconds (15 minutes) to 3600 seconds (1 hour). The default is 1 hour. The temporary security credentials that are returned from the `AssumeRoleWithWebIdentity` response have the permissions that are associated with the access policy of the role being assumed and any policies that are associated with the AWS resource being accessed. You can further restrict the permissions of the temporary security credentials by passing a policy in the request. The resulting permissions are an intersection of the role's access policy and the policy that you passed. These policies and any applicable resource-based policies are evaluated when calls to AWS service APIs are made using the temporary security credentials. To assume a role, your AWS account must be trusted by the role. The trust relationship is defined in the role's trust policy when the IAM role is created. You must also have a policy that allows you to call `sts:AssumeRole`. **Important:** You cannot call `Assumerole` by using AWS account credentials; access will be denied. You must use IAM user credentials to call `AssumeRole`. :type role_arn: string :param role_arn: The Amazon Resource Name (ARN) of the role that the caller is assuming. :type role_session_name: string :param role_session_name: An identifier for the assumed role session. The session name is included as part of the `AssumedRoleUser`. :type policy: string :param policy: A supplemental policy that is associated with the temporary security credentials from the `AssumeRole` call. 
The resulting permissions of the temporary security credentials are an intersection of this policy and the access policy that is associated with the role. Use this policy to further restrict the permissions of the temporary security credentials. :type duration_seconds: integer :param duration_seconds: The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds. :type external_id: string :param external_id: A unique identifier that is used by third parties to assume a role in their customers' accounts. For each role that the third party can assume, they should instruct their customers to create a role with the external ID that the third party generated. Each time the third party assumes the role, they must pass the customer's external ID. The external ID is useful in order to help third parties bind a role to the customer who created it. For more information about the external ID, see `About the External ID`_ in Using Temporary Security Credentials . """ params = { 'RoleArn': role_arn, 'RoleSessionName': role_session_name } if policy is not None: params['Policy'] = policy if duration_seconds is not None: params['DurationSeconds'] = duration_seconds if external_id is not None: params['ExternalId'] = external_id return self.get_object('AssumeRole', params, AssumedRole, verb='POST') def assume_role_with_saml(self, role_arn, principal_arn, saml_assertion, policy=None, duration_seconds=None): """ Returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response. This operation provides a mechanism for tying an enterprise identity store or directory to role-based AWS access without user-specific credentials or configuration. The temporary security credentials returned by this operation consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS services. The credentials are valid for the duration that you specified when calling `AssumeRoleWithSAML`, which can be up to 3600 seconds (1 hour) or until the time specified in the SAML authentication response's `NotOnOrAfter` value, whichever is shorter. The maximum duration for a session is 1 hour, and the minimum duration is 15 minutes, even if values outside this range are specified. Optionally, you can pass an AWS IAM access policy to this operation. The temporary security credentials that are returned by the operation have the permissions that are associated with the access policy of the role being assumed, except for any permissions explicitly denied by the policy you pass. This gives you a way to further restrict the permissions for the federated user. These policies and any applicable resource-based policies are evaluated when calls to AWS are made using the temporary security credentials. Before your application can call `AssumeRoleWithSAML`, you must configure your SAML identity provider (IdP) to issue the claims required by AWS. Additionally, you must use AWS Identity and Access Management (AWS IAM) to create a SAML provider entity in your AWS account that represents your identity provider, and create an AWS IAM role that specifies this SAML provider in its trust policy. Calling `AssumeRoleWithSAML` does not require the use of AWS security credentials. 
The identity of the caller is validated by using keys in the metadata document that is uploaded for the SAML provider entity for your identity provider. For more information, see the following resources: + `Creating Temporary Security Credentials for SAML Federation`_ in the Using Temporary Security Credentials guide. + `SAML Providers`_ in the Using IAM guide. + `Configuring a Relying Party and Claims in the Using IAM guide. `_ + `Creating a Role for SAML-Based Federation`_ in the Using IAM guide. :type role_arn: string :param role_arn: The Amazon Resource Name (ARN) of the role that the caller is assuming. :type principal_arn: string :param principal_arn: The Amazon Resource Name (ARN) of the SAML provider in AWS IAM that describes the IdP. :type saml_assertion: string :param saml_assertion: The base-64 encoded SAML authentication response provided by the IdP. For more information, see `Configuring a Relying Party and Adding Claims`_ in the Using IAM guide. :type policy: string :param policy: An AWS IAM policy in JSON format. The temporary security credentials that are returned by this operation have the permissions that are associated with the access policy of the role being assumed, except for any permissions explicitly denied by the policy you pass. These policies and any applicable resource-based policies are evaluated when calls to AWS are made using the temporary security credentials. The policy must be 2048 bytes or shorter, and its packed size must be less than 450 bytes. :type duration_seconds: integer :param duration_seconds: The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds. An expiration can also be specified in the SAML authentication response's `NotOnOrAfter` value. The actual expiration time is whichever value is shorter. The maximum duration for a session is 1 hour, and the minimum duration is 15 minutes, even if values outside this range are specified. """ params = { 'RoleArn': role_arn, 'PrincipalArn': principal_arn, 'SAMLAssertion': saml_assertion, } if policy is not None: params['Policy'] = policy if duration_seconds is not None: params['DurationSeconds'] = duration_seconds return self.get_object('AssumeRoleWithSAML', params, AssumedRole, verb='POST') def assume_role_with_web_identity(self, role_arn, role_session_name, web_identity_token, provider_id=None, policy=None, duration_seconds=None): """ Returns a set of temporary security credentials for users who have been authenticated in a mobile or web application with a web identity provider, such as Login with Amazon, Facebook, or Google. `AssumeRoleWithWebIdentity` is an API call that does not require the use of AWS security credentials. Therefore, you can distribute an application (for example, on mobile devices) that requests temporary security credentials without including long-term AWS credentials in the application or by deploying server-based proxy services that use long-term AWS credentials. For more information, see `Creating a Mobile Application with Third-Party Sign-In`_ in AWS Security Token Service . The temporary security credentials consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS service APIs. The credentials are valid for the duration that you specified when calling `AssumeRoleWithWebIdentity`, which can be from 900 seconds (15 minutes) to 3600 seconds (1 hour). 
By default, the temporary security credentials are valid for 1 hour. The temporary security credentials that are returned from the `AssumeRoleWithWebIdentity` response have the permissions that are associated with the access policy of the role being assumed. You can further restrict the permissions of the temporary security credentials by passing a policy in the request. The resulting permissions are an intersection of the role's access policy and the policy that you passed. These policies and any applicable resource-based policies are evaluated when calls to AWS service APIs are made using the temporary security credentials. Before your application can call `AssumeRoleWithWebIdentity`, you must have an identity token from a supported identity provider and create a role that the application can assume. The role that your application assumes must trust the identity provider that is associated with the identity token. In other words, the identity provider must be specified in the role's trust policy. For more information, see ` Creating Temporary Security Credentials for Mobile Apps Using Third-Party Identity Providers`_. :type role_arn: string :param role_arn: The Amazon Resource Name (ARN) of the role that the caller is assuming. :type role_session_name: string :param role_session_name: An identifier for the assumed role session. Typically, you pass the name or identifier that is associated with the user who is using your application. That way, the temporary security credentials that your application will use are associated with that user. This session name is included as part of the ARN and assumed role ID in the `AssumedRoleUser` response element. :type web_identity_token: string :param web_identity_token: The OAuth 2.0 access token or OpenID Connect ID token that is provided by the identity provider. Your application must get this token by authenticating the user who is using your application with a web identity provider before the application makes an `AssumeRoleWithWebIdentity` call. :type provider_id: string :param provider_id: Specify this value only for OAuth access tokens. Do not specify this value for OpenID Connect ID tokens, such as `accounts.google.com`. This is the fully-qualified host component of the domain name of the identity provider. Do not include URL schemes and port numbers. Currently, `www.amazon.com` and `graph.facebook.com` are supported. :type policy: string :param policy: A supplemental policy that is associated with the temporary security credentials from the `AssumeRoleWithWebIdentity` call. The resulting permissions of the temporary security credentials are an intersection of this policy and the access policy that is associated with the role. Use this policy to further restrict the permissions of the temporary security credentials. :type duration_seconds: integer :param duration_seconds: The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds. 
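        Example sketch (assumes ``sts`` is an STSConnection; the role
        ARN and identity token are placeholders obtained out of band)::

            role = sts.assume_role_with_web_identity(
                role_arn='arn:aws:iam::123456789012:role/WebIdentityRole',
                role_session_name='app-user-1',
                web_identity_token=token_from_identity_provider,
                provider_id='www.amazon.com')
            creds = role.credentials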
""" params = { 'RoleArn': role_arn, 'RoleSessionName': role_session_name, 'WebIdentityToken': web_identity_token, } if provider_id is not None: params['ProviderId'] = provider_id if policy is not None: params['Policy'] = policy if duration_seconds is not None: params['DurationSeconds'] = duration_seconds return self.get_object( 'AssumeRoleWithWebIdentity', params, AssumedRole, verb='POST' ) def decode_authorization_message(self, encoded_message): """ Decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request. For example, if a user is not authorized to perform an action that he or she has requested, the request returns a `Client.UnauthorizedOperation` response (an HTTP 403 response). Some AWS actions additionally return an encoded message that can provide details about this authorization failure. Only certain AWS actions return an encoded authorization message. The documentation for an individual action indicates whether that action returns an encoded message in addition to returning an HTTP code. The message is encoded because the details of the authorization status can constitute privileged information that the user who requested the action should not see. To decode an authorization status message, a user must be granted permissions via an IAM policy to request the `DecodeAuthorizationMessage` ( `sts:DecodeAuthorizationMessage`) action. The decoded message includes the following type of information: + Whether the request was denied due to an explicit deny or due to the absence of an explicit allow. For more information, see `Determining Whether a Request is Allowed or Denied`_ in Using IAM . + The principal who made the request. + The requested action. + The requested resource. + The values of condition keys in the context of the user's request. :type encoded_message: string :param encoded_message: The encoded message that was returned with the response. """ params = { 'EncodedMessage': encoded_message, } return self.get_object( 'DecodeAuthorizationMessage', params, DecodeAuthorizationMessage, verb='POST' ) boto-2.20.1/boto/sts/credentials.py000066400000000000000000000200221225267101000171520ustar00rootroot00000000000000# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2011, Eucalyptus Systems, Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import os import datetime import boto.utils from boto.compat import json class Credentials(object): """ :ivar access_key: The AccessKeyID. 
    :ivar secret_key: The SecretAccessKey.
    :ivar session_token: The session token that must be passed with
        requests to use the temporary credentials.
    :ivar expiration: The timestamp for when the credentials will expire.
    """

    def __init__(self, parent=None):
        self.parent = parent
        self.access_key = None
        self.secret_key = None
        self.session_token = None
        self.expiration = None
        self.request_id = None

    @classmethod
    def from_json(cls, json_doc):
        """
        Create and return a new Session Token based on the contents
        of a JSON document.

        :type json_doc: str
        :param json_doc: A string containing a JSON document with a
            previously saved Credentials object.
        """
        d = json.loads(json_doc)
        token = cls()
        token.__dict__.update(d)
        return token

    @classmethod
    def load(cls, file_path):
        """
        Create and return a new Session Token based on the contents
        of a previously saved JSON-format file.

        :type file_path: str
        :param file_path: The fully qualified path to the JSON-format
            file containing the previously saved Session Token information.
        """
        fp = open(file_path)
        json_doc = fp.read()
        fp.close()
        return cls.from_json(json_doc)

    def startElement(self, name, attrs, connection):
        return None

    def endElement(self, name, value, connection):
        if name == 'AccessKeyId':
            self.access_key = value
        elif name == 'SecretAccessKey':
            self.secret_key = value
        elif name == 'SessionToken':
            self.session_token = value
        elif name == 'Expiration':
            self.expiration = value
        elif name == 'RequestId':
            self.request_id = value
        else:
            pass

    def to_dict(self):
        """
        Return a Python dict containing the important information
        about this Session Token.
        """
        return {'access_key': self.access_key,
                'secret_key': self.secret_key,
                'session_token': self.session_token,
                'expiration': self.expiration,
                'request_id': self.request_id}

    def save(self, file_path):
        """
        Persist a Session Token to a file in JSON format.

        :type file_path: str
        :param file_path: The fully qualified path to the file where
            the Session Token data should be written.  Any previous
            data in the file will be overwritten.  To help protect
            the credentials contained in the file, the permissions
            of the file will be set to readable/writable by owner only.
        """
        fp = open(file_path, 'wb')
        json.dump(self.to_dict(), fp)
        fp.close()
        os.chmod(file_path, 0600)

    def is_expired(self, time_offset_seconds=0):
        """
        Checks to see if the Session Token is expired or not.  By
        default it will check to see if the Session Token is expired
        as of the moment the method is called.  However, you can supply
        an optional parameter which is the number of seconds of offset
        into the future for the check.  For example, if you supply a
        value of 5, this method will return True if the Session Token
        will be expired 5 seconds from this moment.

        :type time_offset_seconds: int
        :param time_offset_seconds: The number of seconds into the future
            to test the Session Token for expiration.
        """
        now = datetime.datetime.utcnow()
        if time_offset_seconds:
            now = now + datetime.timedelta(seconds=time_offset_seconds)
        ts = boto.utils.parse_ts(self.expiration)
        delta = ts - now
        return delta.total_seconds() <= 0


class FederationToken(object):
    """
    :ivar credentials: A Credentials object containing the credentials.
    :ivar federated_user_arn: ARN specifying federated user using
        credentials.
    :ivar federated_user_id: The ID of the federated user using
        credentials.
:ivar packed_policy_size: A percentage value indicating the size of the policy in packed form """ def __init__(self, parent=None): self.parent = parent self.credentials = None self.federated_user_arn = None self.federated_user_id = None self.packed_policy_size = None self.request_id = None def startElement(self, name, attrs, connection): if name == 'Credentials': self.credentials = Credentials() return self.credentials else: return None def endElement(self, name, value, connection): if name == 'Arn': self.federated_user_arn = value elif name == 'FederatedUserId': self.federated_user_id = value elif name == 'PackedPolicySize': self.packed_policy_size = int(value) elif name == 'RequestId': self.request_id = value else: pass class AssumedRole(object): """ :ivar user: The assumed role user. :ivar credentials: A Credentials object containing the credentials. """ def __init__(self, connection=None, credentials=None, user=None): self._connection = connection self.credentials = credentials self.user = user def startElement(self, name, attrs, connection): if name == 'Credentials': self.credentials = Credentials() return self.credentials elif name == 'AssumedRoleUser': self.user = User() return self.user def endElement(self, name, value, connection): pass class User(object): """ :ivar arn: The arn of the user assuming the role. :ivar assume_role_id: The identifier of the assumed role. """ def __init__(self, arn=None, assume_role_id=None): self.arn = arn self.assume_role_id = assume_role_id def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'Arn': self.arn = value elif name == 'AssumedRoleId': self.assume_role_id = value class DecodeAuthorizationMessage(object): """ :ivar request_id: The request ID. :ivar decoded_message: The decoded authorization message (may be JSON). """ def __init__(self, request_id=None, decoded_message=None): self.request_id = request_id self.decoded_message = decoded_message def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'requestId': self.request_id = value elif name == 'DecodedMessage': self.decoded_message = value boto-2.20.1/boto/support/000077500000000000000000000000001225267101000152125ustar00rootroot00000000000000boto-2.20.1/boto/support/__init__.py000066400000000000000000000033301225267101000173220ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# from boto.regioninfo import RegionInfo def regions(): """ Get all available regions for the Amazon Support service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ from boto.support.layer1 import SupportConnection return [ RegionInfo( name='us-east-1', endpoint='support.us-east-1.amazonaws.com', connection_cls=SupportConnection ), ] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/support/exceptions.py000066400000000000000000000024741225267101000177540ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.exception import JSONResponseError class CaseIdNotFound(JSONResponseError): pass class CaseCreationLimitExceeded(JSONResponseError): pass class InternalServerError(JSONResponseError): pass boto-2.20.1/boto/support/layer1.py000066400000000000000000000501461225267101000167670ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
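# --------------------------------------------------------------------
# Illustrative usage sketch (added commentary, not part of the original
# module): the regions()/connect_to_region() helpers above are the
# usual entry points for obtaining a SupportConnection. The credential
# values shown are hypothetical placeholders. Kept commented out, in
# the same spirit as other commented-out examples in this code base.
#
# import boto.support
#
# conn = boto.support.connect_to_region(
#     'us-east-1',
#     aws_access_key_id='<access key>',
#     aws_secret_access_key='<secret key>')
# if conn is None:
#     raise RuntimeError('AWS Support is not available in that region')
# --------------------------------------------------------------------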
# import json import boto from boto.connection import AWSQueryConnection from boto.regioninfo import RegionInfo from boto.exception import JSONResponseError from boto.support import exceptions class SupportConnection(AWSQueryConnection): """ AWS Support The AWS Support API reference is intended for programmers who need detailed information about the AWS Support actions and data types. This service enables you to manage your AWS Support cases programmatically. It is built on the AWS Query API programming model and provides HTTP methods that take parameters and return results in JSON format. The AWS Support service also exposes a set of `Trusted Advisor`_ features. You can retrieve a list of checks you can run on your resources, specify checks to run and refresh, and check the status of checks you have submitted. The following list describes the AWS Support case management actions: + **Service names, issue categories, and available severity levels.** The actions `DescribeServices`_ and `DescribeSeverityLevels`_ enable you to obtain AWS service names, service codes, service categories, and problem severity levels. You use these values when you call the `CreateCase`_ action. + **Case Creation, case details, and case resolution**. The actions `CreateCase`_, `DescribeCases`_, and `ResolveCase`_ enable you to create AWS Support cases, retrieve them, and resolve them. + **Case communication**. The actions `DescribeCommunications`_ and `AddCommunicationToCase`_ enable you to retrieve and add communication to AWS Support cases. The following list describes the actions available from the AWS Support service for Trusted Advisor: + `DescribeTrustedAdvisorChecks`_ returns the list of checks that you can run against your AWS resources. + Using the CheckId for a specific check returned by DescribeTrustedAdvisorChecks, you can call `DescribeTrustedAdvisorCheckResult`_ and obtain a new result for the check you specified. + Using `DescribeTrustedAdvisorCheckSummaries`_, you can get summaries for a set of Trusted Advisor checks. + `RefreshTrustedAdvisorCheck`_ enables you to request that Trusted Advisor run the check again. + `DescribeTrustedAdvisorCheckRefreshStatuses`_ gets statuses on the checks you are running. For authentication of requests, AWS Support uses the `Signature Version 4 Signing Process`_. See the AWS Support Developer Guide for information about how to use this service to create and manage your support cases, and how to call Trusted Advisor for results of checks on your resources. """ APIVersion = "2013-04-15" DefaultRegionName = "us-east-1" DefaultRegionEndpoint = "support.us-east-1.amazonaws.com" ServiceName = "Support" TargetPrefix = "AWSSupport_20130415" ResponseError = JSONResponseError _faults = { "CaseIdNotFound": exceptions.CaseIdNotFound, "CaseCreationLimitExceeded": exceptions.CaseCreationLimitExceeded, "InternalServerError": exceptions.InternalServerError, } def __init__(self, **kwargs): region = kwargs.pop('region', None) if not region: region = RegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint) kwargs['host'] = region.endpoint AWSQueryConnection.__init__(self, **kwargs) self.region = region def _required_auth_capability(self): return ['hmac-v4'] def add_communication_to_case(self, communication_body, case_id=None, cc_email_addresses=None): """ This action adds additional customer communication to an AWS Support case. You use the CaseId value to identify the case to which you want to add communication.
You can list a set of email addresses to copy on the communication using the CcEmailAddresses value. The CommunicationBody value contains the text of the communication. This action's response indicates the success or failure of the request. This action implements a subset of the behavior on the AWS Support `Your Support Cases`_ web form. :type case_id: string :param case_id: The ID of the AWS Support case to which the communication should be added. :type communication_body: string :param communication_body: The text of the communication to add to the case. :type cc_email_addresses: list :param cc_email_addresses: A list of email addresses to copy on the communication. """ params = {'communicationBody': communication_body, } if case_id is not None: params['caseId'] = case_id if cc_email_addresses is not None: params['ccEmailAddresses'] = cc_email_addresses return self.make_request(action='AddCommunicationToCase', body=json.dumps(params)) def create_case(self, subject, service_code, category_code, communication_body, severity_code=None, cc_email_addresses=None, language=None, issue_type=None): """ Creates a new case in the AWS Support Center. This action is modeled on the behavior of the AWS Support Center `Open a new case`_ page. Its parameters require you to specify the following information: #. **ServiceCode.** Represents a code for an AWS service. You obtain the ServiceCode by calling `DescribeServices`_. #. **CategoryCode**. Represents a category for the service defined for the ServiceCode value. You also obtain the category code for a service by calling `DescribeServices`_. Each AWS service defines its own set of category codes. #. **SeverityCode**. Represents a value that specifies the urgency of the case, and the time interval in which your service level agreement specifies a response from AWS Support. You obtain the SeverityCode by calling `DescribeSeverityLevels`_. #. **Subject**. Represents the **Subject** field on the AWS Support Center `Open a new case`_ page. #. **CommunicationBody**. Represents the **Description** field on the AWS Support Center `Open a new case`_ page. #. **Language**. Specifies the human language in which AWS Support handles the case. The API currently supports English and Japanese. #. **CcEmailAddresses**. Represents the AWS Support Center **CC** field on the `Open a new case`_ page. You can list email addresses to be copied on any correspondence about the case. The account that opens the case is already identified by passing the AWS Credentials in the HTTP POST method or in a method or function call from one of the programming languages supported by an `AWS SDK`_. The AWS Support API does not currently support the ability to add attachments to cases. You can, however, call `AddCommunicationToCase`_ to add information to an open case. A successful `CreateCase`_ request returns an AWS Support case number. Case numbers are used by the `DescribeCases`_ request to retrieve existing AWS Support cases.
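Example (an illustrative sketch, not from the service documentation; the service, category, and severity codes are placeholders that would normally be obtained from `DescribeServices`_ and `DescribeSeverityLevels`_, and ``conn`` is assumed to be an existing SupportConnection)::

    response = conn.create_case(
        subject='Instance will not boot',
        service_code='<service-code>',
        category_code='<category-code>',
        communication_body='Instance i-12345678 hangs at boot.',
        severity_code='low',
        language='en')
    case_id = response['caseId']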
:type subject: string :param subject: The title of the AWS Support case (the **Subject** field on the `Open a new case`_ page). :type service_code: string :param service_code: The code for the AWS service, as returned by `DescribeServices`_. :type severity_code: string :param severity_code: The code for the severity level, as returned by `DescribeSeverityLevels`_. :type category_code: string :param category_code: The category code for the service, as returned by `DescribeServices`_. :type communication_body: string :param communication_body: The description of the problem (the **Description** field on the `Open a new case`_ page). :type cc_email_addresses: list :param cc_email_addresses: A list of email addresses to be copied on any correspondence about the case. :type language: string :param language: The human language in which AWS Support handles the case. English and Japanese are currently supported. :type issue_type: string :param issue_type: The type of issue for the case. """ params = { 'subject': subject, 'serviceCode': service_code, 'categoryCode': category_code, 'communicationBody': communication_body, } if severity_code is not None: params['severityCode'] = severity_code if cc_email_addresses is not None: params['ccEmailAddresses'] = cc_email_addresses if language is not None: params['language'] = language if issue_type is not None: params['issueType'] = issue_type return self.make_request(action='CreateCase', body=json.dumps(params)) def describe_cases(self, case_id_list=None, display_id=None, after_time=None, before_time=None, include_resolved_cases=None, next_token=None, max_results=None, language=None): """ This action returns a list of cases that you specify by passing one or more CaseIds. In addition, you can filter the cases by date by setting values for the AfterTime and BeforeTime request parameters. The response returns the following in JSON format: #. One or more `CaseDetails`_ data types. #. One or more NextToken values, strings that specify where to paginate the returned records represented by CaseDetails. :type case_id_list: list :param case_id_list: A list of ID numbers of the support cases you want returned. :type display_id: string :param display_id: The ID displayed for the case in the AWS Support Center user interface. :type after_time: string :param after_time: The start date for a filtered date search. :type before_time: string :param before_time: The end date for a filtered date search. :type include_resolved_cases: boolean :param include_resolved_cases: Specifies whether resolved support cases should be included in the results. :type next_token: string :param next_token: A resumption point for pagination. :type max_results: integer :param max_results: The maximum number of results to return before paginating. :type language: string :param language: The human language in which AWS Support handles the case. """ params = {} if case_id_list is not None: params['caseIdList'] = case_id_list if display_id is not None: params['displayId'] = display_id if after_time is not None: params['afterTime'] = after_time if before_time is not None: params['beforeTime'] = before_time if include_resolved_cases is not None: params['includeResolvedCases'] = include_resolved_cases if next_token is not None: params['nextToken'] = next_token if max_results is not None: params['maxResults'] = max_results if language is not None: params['language'] = language return self.make_request(action='DescribeCases', body=json.dumps(params)) def describe_communications(self, case_id, before_time=None, after_time=None, next_token=None, max_results=None): """ This action returns communications regarding the support case. You can use the AfterTime and BeforeTime parameters to filter by date. The CaseId parameter enables you to identify a specific case by its CaseId number. The MaxResults and NextToken parameters enable you to control the pagination of the result set. Set MaxResults to the number of cases you want displayed on each page, and use NextToken to specify the resumption of pagination.
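A minimal pagination sketch (illustrative only; assumes ``conn`` is a SupportConnection and ``case_id`` identifies an existing case)::

    kwargs = {'max_results': 10}
    while True:
        page = conn.describe_communications(case_id, **kwargs)
        for comm in page['communications']:
            print comm['body']
        if 'nextToken' not in page:
            break
        kwargs['next_token'] = page['nextToken']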
:type case_id: string :param case_id: The ID of the AWS Support case. :type before_time: string :param before_time: The end date for a filtered date search. :type after_time: string :param after_time: The start date for a filtered date search. :type next_token: string :param next_token: A resumption point for pagination. :type max_results: integer :param max_results: The maximum number of results to return before paginating. """ params = {'caseId': case_id, } if before_time is not None: params['beforeTime'] = before_time if after_time is not None: params['afterTime'] = after_time if next_token is not None: params['nextToken'] = next_token if max_results is not None: params['maxResults'] = max_results return self.make_request(action='DescribeCommunications', body=json.dumps(params)) def describe_services(self, service_code_list=None, language=None): """ Returns the current list of AWS services and a list of service categories that apply to each one. You then use service names and categories in your `CreateCase`_ requests. Each AWS service has its own set of categories. The service codes and category codes correspond to the values that are displayed in the **Service** and **Category** drop- down lists on the AWS Support Center `Open a new case`_ page. The values in those fields, however, do not necessarily match the service codes and categories returned by the `DescribeServices` request. Always use the service codes and categories obtained programmatically. This practice ensures that you always have the most recent set of service and category codes. :type service_code_list: list :param service_code_list: A list of service codes used to filter the returned services. :type language: string :param language: The human language in which AWS Support handles the case. """ params = {} if service_code_list is not None: params['serviceCodeList'] = service_code_list if language is not None: params['language'] = language return self.make_request(action='DescribeServices', body=json.dumps(params)) def describe_severity_levels(self, language=None): """ This action returns the list of severity levels that you can assign to an AWS Support case. The severity level for a case is also a field in the `CaseDetails`_ data type included in any `CreateCase`_ request. :type language: string :param language: The human language in which AWS Support handles the case. """ params = {} if language is not None: params['language'] = language return self.make_request(action='DescribeSeverityLevels', body=json.dumps(params)) def resolve_case(self, case_id=None): """ Takes a CaseId and returns the initial state of the case along with the state of the case after the call to `ResolveCase`_ completed. :type case_id: string :param case_id: The ID of the AWS Support case to resolve. """ params = {} if case_id is not None: params['caseId'] = case_id return self.make_request(action='ResolveCase', body=json.dumps(params)) def describe_trusted_advisor_check_refresh_statuses(self, check_ids): """ Returns the status of all refresh requests for Trusted Advisor checks submitted using `RefreshTrustedAdvisorCheck`_. :type check_ids: list :param check_ids: The IDs of the Trusted Advisor checks for which to get refresh statuses. """ params = {'checkIds': check_ids, } return self.make_request(action='DescribeTrustedAdvisorCheckRefreshStatuses', body=json.dumps(params)) def describe_trusted_advisor_check_result(self, check_id, language=None): """ This action responds with the results of a Trusted Advisor check. Once you have obtained the list of available Trusted Advisor checks by calling `DescribeTrustedAdvisorChecks`_, you specify the CheckId for the check you want to retrieve from AWS Support. The response for this action contains a JSON-formatted `TrustedAdvisorCheckResult`_ object, which is a container for the following three objects: #. `TrustedAdvisorCategorySpecificSummary`_ #. `TrustedAdvisorResourceDetail`_ #. `TrustedAdvisorResourcesSummary`_ In addition, the response contains the following fields: #. **Status**.
Overall status of the check. #. **Timestamp**. Time at which Trusted Advisor last ran the check. #. **CheckId**. Unique identifier for the specific check returned by the request. :type check_id: string :param check_id: :type language: string :param language: """ params = {'checkId': check_id, } if language is not None: params['language'] = language return self.make_request(action='DescribeTrustedAdvisorCheckResult', body=json.dumps(params)) def describe_trusted_advisor_check_summaries(self, check_ids): """ This action enables you to get the latest summaries for Trusted Advisor checks that you specify in your request. You submit the list of Trusted Advisor checks for which you want summaries. You obtain these CheckIds by submitting a `DescribeTrustedAdvisorChecks`_ request. The response body contains an array of `TrustedAdvisorCheckSummary`_ objects. :type check_ids: list :param check_ids: """ params = {'checkIds': check_ids, } return self.make_request(action='DescribeTrustedAdvisorCheckSummaries', body=json.dumps(params)) def describe_trusted_advisor_checks(self, language): """ This action enables you to get a list of the available Trusted Advisor checks. You must specify a language code. English ("en") and Japanese ("jp") are currently supported. The response contains a list of `TrustedAdvisorCheckDescription`_ objects. :type language: string :param language: """ params = {'language': language, } return self.make_request(action='DescribeTrustedAdvisorChecks', body=json.dumps(params)) def refresh_trusted_advisor_check(self, check_id): """ This action enables you to query the service to request a refresh for a specific Trusted Advisor check. Your request body contains a CheckId for which you are querying. The response body contains a `RefreshTrustedAdvisorCheckResult`_ object containing Status and TimeUntilNextRefresh fields. :type check_id: string :param check_id: """ params = {'checkId': check_id, } return self.make_request(action='RefreshTrustedAdvisorCheck', body=json.dumps(params)) def make_request(self, action, body): headers = { 'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action), 'Host': self.region.endpoint, 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': str(len(body)), } http_request = self.build_base_http_request( method='POST', path='/', auth_path='/', params={}, headers=headers, data=body) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body) else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) exception_class = self._faults.get(fault_name, self.ResponseError) raise exception_class(response.status, response.reason, body=json_body) boto-2.20.1/boto/swf/000077500000000000000000000000001225267101000142755ustar00rootroot00000000000000boto-2.20.1/boto/swf/__init__.py000066400000000000000000000044011225267101000164050ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
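# --------------------------------------------------------------------
# Illustrative Trusted Advisor round trip for the SupportConnection
# defined above (added commentary, not part of the original source;
# assumes `conn` is a SupportConnection; kept commented out):
#
# checks = conn.describe_trusted_advisor_checks('en')
# check_id = checks['checks'][0]['id']
# conn.refresh_trusted_advisor_check(check_id)
# result = conn.describe_trusted_advisor_check_result(check_id)
# print result['result']['status']
# --------------------------------------------------------------------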
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.ec2.regioninfo import RegionInfo import boto.swf.layer1 REGION_ENDPOINTS = { 'us-east-1': 'swf.us-east-1.amazonaws.com', 'us-gov-west-1': 'swf.us-gov-west-1.amazonaws.com', 'us-west-1': 'swf.us-west-1.amazonaws.com', 'us-west-2': 'swf.us-west-2.amazonaws.com', 'sa-east-1': 'swf.sa-east-1.amazonaws.com', 'eu-west-1': 'swf.eu-west-1.amazonaws.com', 'ap-northeast-1': 'swf.ap-northeast-1.amazonaws.com', 'ap-southeast-1': 'swf.ap-southeast-1.amazonaws.com', 'ap-southeast-2': 'swf.ap-southeast-2.amazonaws.com', } def regions(**kw_params): """ Get all available regions for the Amazon Simple Workflow service. :rtype: list :return: A list of :class:`boto.regioninfo.RegionInfo` """ return [RegionInfo(name=region_name, endpoint=REGION_ENDPOINTS[region_name], connection_cls=boto.swf.layer1.Layer1) for region_name in REGION_ENDPOINTS] def connect_to_region(region_name, **kw_params): for region in regions(): if region.name == region_name: return region.connect(**kw_params) return None boto-2.20.1/boto/swf/exceptions.py000066400000000000000000000016761225267101000170360ustar00rootroot00000000000000""" Exceptions that are specific to the swf module. This module subclasses the base SWF response exception, boto.exception.SWFResponseError, for some of the SWF specific faults. """ from boto.exception import SWFResponseError class SWFDomainAlreadyExistsError(SWFResponseError): """ Raised when the domain already exists. """ pass class SWFLimitExceededError(SWFResponseError): """ Raised when a system imposed limitation has been reached. """ pass class SWFOperationNotPermittedError(SWFResponseError): """ Raised when the operation is not permitted. (Reserved for future use.) """ class SWFTypeAlreadyExistsError(SWFResponseError): """ Raised when the workflow type or activity type already exists. """ pass class SWFWorkflowExecutionAlreadyStartedError(SWFResponseError): """ Raised when an open execution with the same workflow_id is already running in the specified domain. """ boto-2.20.1/boto/swf/layer1.py000066400000000000000000001745701225267101000160560ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
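# --------------------------------------------------------------------
# Illustrative sketch (added commentary, not part of the original
# source): connecting to SWF via the helpers in boto.swf and trapping
# one of the specific fault subclasses from boto.swf.exceptions. The
# domain name is a placeholder. Kept commented out:
#
# import boto.swf
# from boto.swf.exceptions import SWFDomainAlreadyExistsError
#
# conn = boto.swf.connect_to_region('us-east-1')
# try:
#     conn.register_domain('my-domain', '7')
# except SWFDomainAlreadyExistsError:
#     pass  # the domain is already registered; safe to continue
# --------------------------------------------------------------------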
# All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import time import boto from boto.connection import AWSAuthConnection from boto.provider import Provider from boto.exception import SWFResponseError from boto.swf import exceptions as swf_exceptions from boto.compat import json # # To get full debug output, uncomment the following line and set the # value of Debug to be 2 # #boto.set_stream_logger('swf') Debug = 0 class Layer1(AWSAuthConnection): """ Low-level interface to Simple WorkFlow Service. """ DefaultRegionName = 'us-east-1' """The default region name for Simple Workflow.""" ServiceName = 'com.amazonaws.swf.service.model.SimpleWorkflowService' """The name of the Service""" # In some cases, the fault response __type value is mapped to # an exception class more specific than SWFResponseError. _fault_excp = { 'com.amazonaws.swf.base.model#DomainAlreadyExistsFault': swf_exceptions.SWFDomainAlreadyExistsError, 'com.amazonaws.swf.base.model#LimitExceededFault': swf_exceptions.SWFLimitExceededError, 'com.amazonaws.swf.base.model#OperationNotPermittedFault': swf_exceptions.SWFOperationNotPermittedError, 'com.amazonaws.swf.base.model#TypeAlreadyExistsFault': swf_exceptions.SWFTypeAlreadyExistsError, 'com.amazonaws.swf.base.model#WorkflowExecutionAlreadyStartedFault': swf_exceptions.SWFWorkflowExecutionAlreadyStartedError, } ResponseError = SWFResponseError def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, debug=0, session_token=None, region=None): if not region: region_name = boto.config.get('SWF', 'region', self.DefaultRegionName) for reg in boto.swf.regions(): if reg.name == region_name: region = reg break self.region = region AWSAuthConnection.__init__(self, self.region.endpoint, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, debug, session_token) def _required_auth_capability(self): return ['hmac-v4'] @classmethod def _normalize_request_dict(cls, data): """ This class method recurses through request data dictionary and removes any default values. :type data: dict :param data: Specifies request parameters with default values to be removed. """ for item in data.keys(): if isinstance(data[item], dict): cls._normalize_request_dict(data[item]) if data[item] in (None, {}): del data[item] def json_request(self, action, data, object_hook=None): """ This method wraps around make_request() to normalize and serialize the dictionary with request parameters. 
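For example (an illustrative sketch of the normalization performed by ``_normalize_request_dict``), a ``data`` value of::

    {'domain': 'D', 'taskList': {'name': None}, 'identity': None}

is pruned to ``{'domain': 'D'}`` before being serialized, so optional parameters left at their defaults never appear on the wire.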
:type action: string :param action: Specifies an SWF action. :type data: dict :param data: Specifies request parameters associated with the action. """ self._normalize_request_dict(data) json_input = json.dumps(data) return self.make_request(action, json_input, object_hook) def make_request(self, action, body='', object_hook=None): """ :raises: ``SWFResponseError`` if response status is not 200. """ headers = {'X-Amz-Target': '%s.%s' % (self.ServiceName, action), 'Host': self.region.endpoint, 'Content-Type': 'application/json; charset=UTF-8', 'Content-Encoding': 'amz-1.0', 'Content-Length': str(len(body))} http_request = self.build_base_http_request('POST', '/', '/', {}, headers, body, None) response = self._mexe(http_request, sender=None, override_num_retries=10) response_body = response.read() boto.log.debug(response_body) if response.status == 200: if response_body: return json.loads(response_body, object_hook=object_hook) else: return None else: json_body = json.loads(response_body) fault_name = json_body.get('__type', None) # Certain faults get mapped to more specific exception classes. excp_cls = self._fault_excp.get(fault_name, self.ResponseError) raise excp_cls(response.status, response.reason, body=json_body) # Actions related to Activities def poll_for_activity_task(self, domain, task_list, identity=None): """ Used by workers to get an ActivityTask from the specified activity taskList. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available. The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll will return an empty result. An empty result, in this context, means that an ActivityTask is returned, but that the value of taskToken is an empty string. If a task is returned, the worker should use its type to identify and process it correctly. :type domain: string :param domain: The name of the domain that contains the task lists being polled. :type task_list: string :param task_list: Specifies the task list to poll for activity tasks. :type identity: string :param identity: Identity of the worker making the request, which is recorded in the ActivityTaskStarted event in the workflow history. This enables diagnostic tracing when problems arise. The form of this identity is user defined. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('PollForActivityTask', { 'domain': domain, 'taskList': {'name': task_list}, 'identity': identity, }) def respond_activity_task_completed(self, task_token, result=None): """ Used by workers to tell the service that the ActivityTask identified by the taskToken completed successfully with a result (if provided). :type task_token: string :param task_token: The taskToken of the ActivityTask. :type result: string :param result: The result of the activity task. It is a free form string that is implementation specific. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RespondActivityTaskCompleted', { 'taskToken': task_token, 'result': result, }) def respond_activity_task_failed(self, task_token, details=None, reason=None): """ Used by workers to tell the service that the ActivityTask identified by the taskToken has failed with reason (if specified). :type task_token: string :param task_token: The taskToken of the ActivityTask. :type details: string :param details: Optional detailed information about the failure. 
:type reason: string :param reason: Description of the error that may assist in diagnostics. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RespondActivityTaskFailed', { 'taskToken': task_token, 'details': details, 'reason': reason, }) def respond_activity_task_canceled(self, task_token, details=None): """ Used by workers to tell the service that the ActivityTask identified by the taskToken was successfully canceled. Additional details can be optionally provided using the details argument. :type task_token: string :param task_token: The taskToken of the ActivityTask. :type details: string :param details: Optional detailed information about the cancellation. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RespondActivityTaskCanceled', { 'taskToken': task_token, 'details': details, }) def record_activity_task_heartbeat(self, task_token, details=None): """ Used by activity workers to report to the service that the ActivityTask represented by the specified taskToken is still making progress. The worker can also (optionally) specify details of the progress, for example percent complete, using the details parameter. This action can also be used by the worker as a mechanism to check if cancellation is being requested for the activity task. If a cancellation is being attempted for the specified task, then the boolean cancelRequested flag returned by the service is set to true. :type task_token: string :param task_token: The taskToken of the ActivityTask. :type details: string :param details: If specified, contains details about the progress of the task. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RecordActivityTaskHeartbeat', { 'taskToken': task_token, 'details': details, }) # Actions related to Deciders def poll_for_decision_task(self, domain, task_list, identity=None, maximum_page_size=None, next_page_token=None, reverse_order=None): """ Used by deciders to get a DecisionTask from the specified decision taskList. A decision task may be returned for any open workflow execution that is using the specified task list. The task includes a paginated view of the history of the workflow execution. The decider should use the workflow type and the history to determine how to properly handle the task. :type domain: string :param domain: The name of the domain containing the task lists to poll. :type task_list: string :param task_list: Specifies the task list to poll for decision tasks. :type identity: string :param identity: Identity of the decider making the request, which is recorded in the DecisionTaskStarted event in the workflow history. This enables diagnostic tracing when problems arise. The form of this identity is user defined. :type maximum_page_size: integer :param maximum_page_size: The maximum number of history events returned in each page. The default is 100, but the caller can override this value to a page size smaller than the default. You cannot specify a page size greater than 100. :type next_page_token: string :param next_page_token: If on a previous call to this method a NextPageToken was returned, the results are being paginated. To get the next page of results, repeat the call with the returned token and all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the events in reverse order. By default the results are returned in ascending order of the eventTimestamp of the events.
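A minimal decider polling loop might look like the following (an illustrative sketch; ``conn`` is assumed to be a Layer1 instance and the domain and task list names are placeholders)::

    while True:
        task = conn.poll_for_decision_task('my-domain', 'my-tasks')
        if not task.get('taskToken'):
            continue  # the long poll expired with no work; poll again
        # examine task['events'] here, then respond via
        # respond_decision_task_completed(task['taskToken'], decisions)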
:raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('PollForDecisionTask', { 'domain': domain, 'taskList': {'name': task_list}, 'identity': identity, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) def respond_decision_task_completed(self, task_token, decisions=None, execution_context=None): """ Used by deciders to tell the service that the DecisionTask identified by the taskToken has successfully completed. The decisions argument specifies the list of decisions made while processing the task. :type task_token: string :param task_token: The taskToken of the DecisionTask. :type decisions: list :param decisions: The list of decisions (possibly empty) made by the decider while processing this decision task. See the docs for the Decision structure for details. :type execution_context: string :param execution_context: User defined context to add to workflow execution. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RespondDecisionTaskCompleted', { 'taskToken': task_token, 'decisions': decisions, 'executionContext': execution_context, }) def request_cancel_workflow_execution(self, domain, workflow_id, run_id=None): """ Records a WorkflowExecutionCancelRequested event in the currently running workflow execution identified by the given domain, workflowId, and runId. This logically requests the cancellation of the workflow execution as a whole. It is up to the decider to take appropriate actions when it receives an execution history with this event. :type domain: string :param domain: The name of the domain containing the workflow execution to cancel. :type run_id: string :param run_id: The runId of the workflow execution to cancel. :type workflow_id: string :param workflow_id: The workflowId of the workflow execution to cancel. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RequestCancelWorkflowExecution', { 'domain': domain, 'workflowId': workflow_id, 'runId': run_id, }) def start_workflow_execution(self, domain, workflow_id, workflow_name, workflow_version, task_list=None, child_policy=None, execution_start_to_close_timeout=None, input=None, tag_list=None, task_start_to_close_timeout=None): """ Starts an execution of the workflow type in the specified domain using the provided workflowId and input data. :type domain: string :param domain: The name of the domain in which the workflow execution is created. :type workflow_id: string :param workflow_id: The user defined identifier associated with the workflow execution. You can use this to associate a custom identifier with the workflow execution. You may specify the same identifier if a workflow execution is logically a restart of a previous execution. You cannot have two open workflow executions with the same workflowId at the same time. :type workflow_name: string :param workflow_name: The name of the workflow type. :type workflow_version: string :param workflow_version: The version of the workflow type. :type task_list: string :param task_list: The task list to use for the decision tasks generated for this workflow execution. This overrides the defaultTaskList specified when registering the workflow type. :type child_policy: string :param child_policy: If set, specifies the policy to use for the child workflow executions of this workflow execution if it is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.
This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType. The supported child policies are: * TERMINATE: the child executions will be terminated. * REQUEST_CANCEL: a request to cancel will be attempted for each child execution by recording a WorkflowExecutionCancelRequested event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event. * ABANDON: no action will be taken. The child executions will continue to run. :type execution_start_to_close_timeout: string :param execution_start_to_close_timeout: The total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type. :type input: string :param input: The input for the workflow execution. This is a free form string which should be meaningful to the workflow you are starting. This input is made available to the new workflow execution in the WorkflowExecutionStarted history event. :type tag_list: list :param tag_list: The list of tags to associate with the workflow execution. You can specify a maximum of 5 tags. You can list workflow executions with a specific tag by calling list_open_workflow_executions or list_closed_workflow_executions and specifying a TagFilter. :type task_start_to_close_timeout: string :param task_start_to_close_timeout: Specifies the maximum duration of decision tasks for this workflow execution. This parameter overrides the defaultTaskStartToCloseTimeout specified when registering the workflow type using register_workflow_type. :raises: UnknownResourceFault, TypeDeprecatedFault, SWFWorkflowExecutionAlreadyStartedError, SWFLimitExceededError, SWFOperationNotPermittedError, DefaultUndefinedFault """ return self.json_request('StartWorkflowExecution', { 'domain': domain, 'workflowId': workflow_id, 'workflowType': {'name': workflow_name, 'version': workflow_version}, 'taskList': {'name': task_list}, 'childPolicy': child_policy, 'executionStartToCloseTimeout': execution_start_to_close_timeout, 'input': input, 'tagList': tag_list, 'taskStartToCloseTimeout': task_start_to_close_timeout, }) def signal_workflow_execution(self, domain, signal_name, workflow_id, input=None, run_id=None): """ Records a WorkflowExecutionSignaled event in the workflow execution history and creates a decision task for the workflow execution identified by the given domain, workflowId and runId. The event is recorded with the specified user defined signalName and input (if provided). :type domain: string :param domain: The name of the domain containing the workflow execution to signal. :type signal_name: string :param signal_name: The name of the signal. This name must be meaningful to the target workflow. :type workflow_id: string :param workflow_id: The workflowId of the workflow execution to signal. :type input: string :param input: Data to attach to the WorkflowExecutionSignaled event in the target workflow execution's history. :type run_id: string :param run_id: The runId of the workflow execution to signal.
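Illustrative call (all identifiers and the input payload are placeholders; ``conn`` is assumed to be a Layer1 instance)::

    conn.signal_workflow_execution(
        'my-domain', 'retry-payment', 'order-12345',
        input='{"attempt": 2}')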
:raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('SignalWorkflowExecution', { 'domain': domain, 'signalName': signal_name, 'workflowId': workflow_id, 'input': input, 'runId': run_id, }) def terminate_workflow_execution(self, domain, workflow_id, child_policy=None, details=None, reason=None, run_id=None): """ Records a WorkflowExecutionTerminated event and forces closure of the workflow execution identified by the given domain, runId, and workflowId. The child policy, registered with the workflow type or specified when starting this execution, is applied to any open child workflow executions of this workflow execution. :type domain: string :param domain: The domain of the workflow execution to terminate. :type workflow_id: string :param workflow_id: The workflowId of the workflow execution to terminate. :type child_policy: string :param child_policy: If set, specifies the policy to use for the child workflow executions of the workflow execution being terminated. This policy overrides the child policy specified for the workflow execution at registration time or when starting the execution. The supported child policies are: * TERMINATE: the child executions will be terminated. * REQUEST_CANCEL: a request to cancel will be attempted for each child execution by recording a WorkflowExecutionCancelRequested event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event. * ABANDON: no action will be taken. The child executions will continue to run. :type details: string :param details: Optional details for terminating the workflow execution. :type reason: string :param reason: An optional descriptive reason for terminating the workflow execution. :type run_id: string :param run_id: The runId of the workflow execution to terminate. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('TerminateWorkflowExecution', { 'domain': domain, 'workflowId': workflow_id, 'childPolicy': child_policy, 'details': details, 'reason': reason, 'runId': run_id, }) # Actions related to Administration ## Activity Management def register_activity_type(self, domain, name, version, task_list=None, default_task_heartbeat_timeout=None, default_task_schedule_to_close_timeout=None, default_task_schedule_to_start_timeout=None, default_task_start_to_close_timeout=None, description=None): """ Registers a new activity type along with its configuration settings in the specified domain. :type domain: string :param domain: The name of the domain in which this activity is to be registered. :type name: string :param name: The name of the activity type within the domain. :type version: string :param version: The version of the activity type. :type task_list: string :param task_list: If set, specifies the default task list to use for scheduling tasks of this activity type. This default task list is used if a task list is not provided when a task is scheduled through the schedule_activity_task Decision. :type default_task_heartbeat_timeout: string :param default_task_heartbeat_timeout: If set, specifies the default maximum time before which a worker processing a task of this type must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision. 
If the activity worker subsequently attempts to record a heartbeat or returns a result, the activity worker receives an UnknownResource fault. In this case, Amazon SWF no longer considers the activity task to be valid; the activity worker should clean up the activity task. :type default_task_schedule_to_close_timeout: string :param default_task_schedule_to_close_timeout: If set, specifies the default maximum duration for a task of this activity type. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision. :type default_task_schedule_to_start_timeout: string :param default_task_schedule_to_start_timeout: If set, specifies the default maximum duration that a task of this activity type can wait before being assigned to a worker. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision. :type default_task_start_to_close_timeout: string :param default_task_start_to_close_timeout: If set, specifies the default maximum duration that a worker can take to process tasks of this activity type. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision. :type description: string :param description: A textual description of the activity type. :raises: SWFTypeAlreadyExistsError, SWFLimitExceededError, UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RegisterActivityType', { 'domain': domain, 'name': name, 'version': version, 'defaultTaskList': {'name': task_list}, 'defaultTaskHeartbeatTimeout': default_task_heartbeat_timeout, 'defaultTaskScheduleToCloseTimeout': default_task_schedule_to_close_timeout, 'defaultTaskScheduleToStartTimeout': default_task_schedule_to_start_timeout, 'defaultTaskStartToCloseTimeout': default_task_start_to_close_timeout, 'description': description, }) def deprecate_activity_type(self, domain, activity_name, activity_version): """ Deprecates the specified activity type. After an activity type has been deprecated, you cannot create new tasks of that type. Tasks of this type that were scheduled before the type was deprecated will continue to run. :type domain: string :param domain: The name of the domain in which the activity type is registered. :type activity_name: string :param activity_name: The name of this activity. :type activity_version: string :param activity_version: The version of this activity. :raises: UnknownResourceFault, TypeDeprecatedFault, SWFOperationNotPermittedError """ return self.json_request('DeprecateActivityType', { 'domain': domain, 'activityType': {'name': activity_name, 'version': activity_version} }) ## Workflow Management def register_workflow_type(self, domain, name, version, task_list=None, default_child_policy=None, default_execution_start_to_close_timeout=None, default_task_start_to_close_timeout=None, description=None): """ Registers a new workflow type and its configuration settings in the specified domain. :type domain: string :param domain: The name of the domain in which to register the workflow type. :type name: string :param name: The name of the workflow type. :type version: string :param version: The version of the workflow type. :type task_list: string :param task_list: If set, specifies the default task list to use for scheduling decision tasks for executions of this workflow type. This default is used only if a task list is not provided when starting the execution through the StartWorkflowExecution Action or StartChildWorkflowExecution Decision.
:type default_child_policy: string :param default_child_policy: If set, specifies the default policy to use for the child workflow executions when a workflow execution of this type is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision. The supported child policies are: * TERMINATE: the child executions will be terminated. * REQUEST_CANCEL: a request to cancel will be attempted for each child execution by recording a WorkflowExecutionCancelRequested event in its history. It is up to the decider to take appropriate actions when it receives an execution history with this event. * ABANDON: no action will be taken. The child executions will continue to run. :type default_execution_start_to_close_timeout: string :param default_execution_start_to_close_timeout: If set, specifies the default maximum duration for executions of this workflow type. You can override this default when starting an execution through the StartWorkflowExecution Action or StartChildWorkflowExecution Decision. :type default_task_start_to_close_timeout: string :param default_task_start_to_close_timeout: If set, specifies the default maximum duration of decision tasks for this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision. :type description: string :param description: Textual description of the workflow type. :raises: SWFTypeAlreadyExistsError, SWFLimitExceededError, UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('RegisterWorkflowType', { 'domain': domain, 'name': name, 'version': version, 'defaultTaskList': {'name': task_list}, 'defaultChildPolicy': default_child_policy, 'defaultExecutionStartToCloseTimeout': default_execution_start_to_close_timeout, 'defaultTaskStartToCloseTimeout': default_task_start_to_close_timeout, 'description': description, }) def deprecate_workflow_type(self, domain, workflow_name, workflow_version): """ Deprecates the specified workflow type. After a workflow type has been deprecated, you cannot create new executions of that type. Executions that were started before the type was deprecated will continue to run. A deprecated workflow type may still be used when calling visibility actions. :type domain: string :param domain: The name of the domain in which the workflow type is registered. :type workflow_name: string :param workflow_name: The name of the workflow type. :type workflow_version: string :param workflow_version: The version of the workflow type. :raises: UnknownResourceFault, TypeDeprecatedFault, SWFOperationNotPermittedError """ return self.json_request('DeprecateWorkflowType', { 'domain': domain, 'workflowType': {'name': workflow_name, 'version': workflow_version}, }) ## Domain Management def register_domain(self, name, workflow_execution_retention_period_in_days, description=None): """ Registers a new domain. :type name: string :param name: Name of the domain to register. The name must be unique. :type workflow_execution_retention_period_in_days: string :param workflow_execution_retention_period_in_days: Specifies the duration *in days* for which the record (including the history) of workflow executions in this domain should be kept by the service.
After the retention period, the workflow execution will not be available in the results of visibility calls. If a duration of NONE is specified, the records for workflow executions in this domain are not retained at all. :type description: string :param description: Textual description of the domain. :raises: SWFDomainAlreadyExistsError, SWFLimitExceededError, SWFOperationNotPermittedError """ return self.json_request('RegisterDomain', { 'name': name, 'workflowExecutionRetentionPeriodInDays': workflow_execution_retention_period_in_days, 'description': description, }) def deprecate_domain(self, name): """ Deprecates the specified domain. After a domain has been deprecated it cannot be used to create new workflow executions or register new types. However, you can still use visibility actions on this domain. Deprecating a domain also deprecates all activity and workflow types registered in the domain. Executions that were started before the domain was deprecated will continue to run. :type name: string :param name: The name of the domain to deprecate. :raises: UnknownResourceFault, DomainDeprecatedFault, SWFOperationNotPermittedError """ return self.json_request('DeprecateDomain', {'name': name}) # Visibility Actions ## Activity Visibility def list_activity_types(self, domain, registration_status, name=None, maximum_page_size=None, next_page_token=None, reverse_order=None): """ Returns information about all activities registered in the specified domain that match the specified name and registration status. The result includes information like creation date, current status of the activity, etc. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call. :type domain: string :param domain: The name of the domain in which the activity types have been registered. :type registration_status: string :param registration_status: Specifies the registration status of the activity types to list. Valid values are: * REGISTERED * DEPRECATED :type name: string :param name: If specified, only lists the activity types that have this name. :type maximum_page_size: integer :param maximum_page_size: The maximum number of results returned in each page. The default is 100, but the caller can override this value to a page size smaller than the default. You cannot specify a page size greater than 100. :type next_page_token: string :param next_page_token: If on a previous call to this method a NextResultToken was returned, the results have more than one page. To get the next page of results, repeat the call with the nextPageToken and keep all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the results in reverse order. By default the results are returned in ascending alphabetical order of the name of the activity types. :raises: SWFOperationNotPermittedError, UnknownResourceFault """ return self.json_request('ListActivityTypes', { 'domain': domain, 'name': name, 'registrationStatus': registration_status, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) def describe_activity_type(self, domain, activity_name, activity_version): """ Returns information about the specified activity type. This includes configuration settings provided at registration time as well as other general information about the type. :type domain: string :param domain: The name of the domain in which the activity type is registered. 
:type activity_name: string :param activity_name: The name of this activity. :type activity_version: string :param activity_version: The version of this activity. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('DescribeActivityType', { 'domain': domain, 'activityType': {'name': activity_name, 'version': activity_version} }) ## Workflow Visibility def list_workflow_types(self, domain, registration_status, maximum_page_size=None, name=None, next_page_token=None, reverse_order=None): """ Returns information about workflow types in the specified domain. The results may be split into multiple pages that can be retrieved by making the call repeatedly. :type domain: string :param domain: The name of the domain in which the workflow types have been registered. :type registration_status: string :param registration_status: Specifies the registration status of the workflow types to list. Valid values are: * REGISTERED * DEPRECATED :type name: string :param name: If specified, lists the workflow type with this name. :type maximum_page_size: integer :param maximum_page_size: The maximum number of results returned in each page. The default is 100, but the caller can override this value to a page size smaller than the default. You cannot specify a page size greater than 100. :type next_page_token: string :param next_page_token: If on a previous call to this method a NextPageToken was returned, the results are being paginated. To get the next page of results, repeat the call with the returned token and all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the results in reverse order. By default the results are returned in ascending alphabetical order of the name of the workflow types. :raises: SWFOperationNotPermittedError, UnknownResourceFault """ return self.json_request('ListWorkflowTypes', { 'domain': domain, 'name': name, 'registrationStatus': registration_status, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) def describe_workflow_type(self, domain, workflow_name, workflow_version): """ Returns information about the specified workflow type. This includes configuration settings specified when the type was registered and other information such as creation date, current status, etc. :type domain: string :param domain: The name of the domain in which this workflow type is registered. :type workflow_name: string :param workflow_name: The name of the workflow type. :type workflow_version: string :param workflow_version: The version of the workflow type. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('DescribeWorkflowType', { 'domain': domain, 'workflowType': {'name': workflow_name, 'version': workflow_version} }) ## Workflow Execution Visibility def describe_workflow_execution(self, domain, run_id, workflow_id): """ Returns information about the specified workflow execution including its type and some statistics. :type domain: string :param domain: The name of the domain containing the workflow execution. :type run_id: string :param run_id: A system generated unique identifier for the workflow execution. :type workflow_id: string :param workflow_id: The user defined identifier associated with the workflow execution.
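Illustrative call (identifiers are placeholders; ``run_id`` would come from the response to start_workflow_execution, and ``conn`` is assumed to be a Layer1 instance)::

    details = conn.describe_workflow_execution(
        'my-domain', run_id, 'order-12345')
    print details['executionInfo']['executionStatus']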
:raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('DescribeWorkflowExecution', { 'domain': domain, 'execution': {'runId': run_id, 'workflowId': workflow_id}, }) def get_workflow_execution_history(self, domain, run_id, workflow_id, maximum_page_size=None, next_page_token=None, reverse_order=None): """ Returns the history of the specified workflow execution. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call. :type domain: string :param domain: The name of the domain containing the workflow execution. :type run_id: string :param run_id: A system generated unique identifier for the workflow execution. :type workflow_id: string :param workflow_id: The user defined identifier associated with the workflow execution. :type maximum_page_size: integer :param maximum_page_size: Specifies the maximum number of history events returned in one page. The next page in the result is identified by the NextPageToken returned. By default 100 history events are returned in a page but the caller can override this value to a page size smaller than the default. You cannot specify a page size larger than 100. :type next_page_token: string :param next_page_token: If a NextPageToken is returned, the result has more than one pages. To get the next page, repeat the call and specify the nextPageToken with all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the events in reverse order. By default the results are returned in ascending order of the eventTimeStamp of the events. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('GetWorkflowExecutionHistory', { 'domain': domain, 'execution': {'runId': run_id, 'workflowId': workflow_id}, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) def count_open_workflow_executions(self, domain, latest_date, oldest_date, tag=None, workflow_id=None, workflow_name=None, workflow_version=None): """ Returns the number of open workflow executions within the given domain that meet the specified filtering criteria. .. note: workflow_id, workflow_name/workflow_version and tag are mutually exclusive. You can specify at most one of these in a request. :type domain: string :param domain: The name of the domain containing the workflow executions to count. :type latest_date: timestamp :param latest_date: Specifies the latest start or close date and time to return. :type oldest_date: timestamp :param oldest_date: Specifies the oldest start or close date and time to return. :type workflow_name: string :param workflow_name: Name of the workflow type to filter on. :type workflow_version: string :param workflow_version: Version of the workflow type to filter on. :type tag: string :param tag: If specified, only executions that have a tag that matches the filter are counted. :type workflow_id: string :param workflow_id: If specified, only workflow executions matching the workflow_id are counted. 
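Example (a minimal sketch; assumes a configured ``Layer1``
connection, counts executions started in the last 24 hours, and
honors the mutual-exclusion rule above by passing only a
placeholder ``workflow_id``)::

    import time
    conn = Layer1()
    result = conn.count_open_workflow_executions(
        'my-domain', time.time(), time.time() - 86400,
        workflow_id='my-workflow-id')
    print result['count']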
:raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('CountOpenWorkflowExecutions', { 'domain': domain, 'startTimeFilter': {'oldestDate': oldest_date, 'latestDate': latest_date}, 'typeFilter': {'name': workflow_name, 'version': workflow_version}, 'executionFilter': {'workflowId': workflow_id}, 'tagFilter': {'tag': tag}, }) def list_open_workflow_executions(self, domain, oldest_date, latest_date=None, tag=None, workflow_id=None, workflow_name=None, workflow_version=None, maximum_page_size=None, next_page_token=None, reverse_order=None): """ Returns the list of open workflow executions within the given domain that meet the specified filtering criteria. .. note: workflow_id, workflow_name/workflow_version and tag are mutually exclusive. You can specify at most one of these in a request. :type domain: string :param domain: The name of the domain containing the workflow executions to count. :type latest_date: timestamp :param latest_date: Specifies the latest start or close date and time to return. :type oldest_date: timestamp :param oldest_date: Specifies the oldest start or close date and time to return. :type tag: string :param tag: If specified, only executions that have a tag that matches the filter are counted. :type workflow_id: string :param workflow_id: If specified, only workflow executions matching the workflow_id are counted. :type workflow_name: string :param workflow_name: Name of the workflow type to filter on. :type workflow_version: string :param workflow_version: Version of the workflow type to filter on. :type maximum_page_size: integer :param maximum_page_size: The maximum number of results returned in each page. The default is 100, but the caller can override this value to a page size smaller than the default. You cannot specify a page size greater than 100. :type next_page_token: string :param next_page_token: If on a previous call to this method a NextPageToken was returned, the results are being paginated. To get the next page of results, repeat the call with the returned token and all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the results in reverse order. By default the results are returned in descending order of the start or the close time of the executions. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('ListOpenWorkflowExecutions', { 'domain': domain, 'startTimeFilter': {'oldestDate': oldest_date, 'latestDate': latest_date}, 'tagFilter': {'tag': tag}, 'typeFilter': {'name': workflow_name, 'version': workflow_version}, 'executionFilter': {'workflowId': workflow_id}, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) def count_closed_workflow_executions(self, domain, start_latest_date=None, start_oldest_date=None, close_latest_date=None, close_oldest_date=None, close_status=None, tag=None, workflow_id=None, workflow_name=None, workflow_version=None): """ Returns the number of closed workflow executions within the given domain that meet the specified filtering criteria. .. note: close_status, workflow_id, workflow_name/workflow_version and tag are mutually exclusive. You can specify at most one of these in a request. .. note: start_latest_date/start_oldest_date and close_latest_date/close_oldest_date are mutually exclusive. You can specify at most one of these in a request. :type domain: string :param domain: The name of the domain containing the workflow executions to count. 
:type start_latest_date: timestamp :param start_latest_date: If specified, only workflow executions that meet the start time criteria of the filter are counted. :type start_oldest_date: timestamp :param start_oldest_date: If specified, only workflow executions that meet the start time criteria of the filter are counted. :type close_latest_date: timestamp :param close_latest_date: If specified, only workflow executions that meet the close time criteria of the filter are counted. :type close_oldest_date: timestamp :param close_oldest_date: If specified, only workflow executions that meet the close time criteria of the filter are counted. :type close_status: string :param close_status: The close status that must match the close status of an execution for it to meet the criteria of this filter. Valid values are: * COMPLETED * FAILED * CANCELED * TERMINATED * CONTINUED_AS_NEW * TIMED_OUT :type tag: string :param tag: If specified, only executions that have a tag that matches the filter are counted. :type workflow_id: string :param workflow_id: If specified, only workflow executions matching the workflow_id are counted. :type workflow_name: string :param workflow_name: Name of the workflow type to filter on. :type workflow_version: string :param workflow_version: Version of the workflow type to filter on. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('CountClosedWorkflowExecutions', { 'domain': domain, 'startTimeFilter': {'oldestDate': start_oldest_date, 'latestDate': start_latest_date}, 'closeTimeFilter': {'oldestDate': close_oldest_date, 'latestDate': close_latest_date}, 'closeStatusFilter': {'status': close_status}, 'tagFilter': {'tag': tag}, 'typeFilter': {'name': workflow_name, 'version': workflow_version}, 'executionFilter': {'workflowId': workflow_id} }) def list_closed_workflow_executions(self, domain, start_latest_date=None, start_oldest_date=None, close_latest_date=None, close_oldest_date=None, close_status=None, tag=None, workflow_id=None, workflow_name=None, workflow_version=None, maximum_page_size=None, next_page_token=None, reverse_order=None): """ Returns the number of closed workflow executions within the given domain that meet the specified filtering criteria. .. note: close_status, workflow_id, workflow_name/workflow_version and tag are mutually exclusive. You can specify at most one of these in a request. .. note: start_latest_date/start_oldest_date and close_latest_date/close_oldest_date are mutually exclusive. You can specify at most one of these in a request. :type domain: string :param domain: The name of the domain containing the workflow executions to count. :type start_latest_date: timestamp :param start_latest_date: If specified, only workflow executions that meet the start time criteria of the filter are counted. :type start_oldest_date: timestamp :param start_oldest_date: If specified, only workflow executions that meet the start time criteria of the filter are counted. :type close_latest_date: timestamp :param close_latest_date: If specified, only workflow executions that meet the close time criteria of the filter are counted. :type close_oldest_date: timestamp :param close_oldest_date: If specified, only workflow executions that meet the close time criteria of the filter are counted. :type close_status: string :param close_status: The close status that must match the close status of an execution for it to meet the criteria of this filter. 
Valid values are: * COMPLETED * FAILED * CANCELED * TERMINATED * CONTINUED_AS_NEW * TIMED_OUT :type tag: string :param tag: If specified, only executions that have a tag that matches the filter are counted. :type workflow_id: string :param workflow_id: If specified, only workflow executions matching the workflow_id are counted. :type workflow_name: string :param workflow_name: Name of the workflow type to filter on. :type workflow_version: string :param workflow_version: Version of the workflow type to filter on. :type maximum_page_size: integer :param maximum_page_size: The maximum number of results returned in each page. The default is 100, but the caller can override this value to a page size smaller than the default. You cannot specify a page size greater than 100. :type next_page_token: string :param next_page_token: If on a previous call to this method a NextPageToken was returned, the results are being paginated. To get the next page of results, repeat the call with the returned token and all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the results in reverse order. By default the results are returned in descending order of the start or the close time of the executions. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('ListClosedWorkflowExecutions', { 'domain': domain, 'startTimeFilter': {'oldestDate': start_oldest_date, 'latestDate': start_latest_date}, 'closeTimeFilter': {'oldestDate': close_oldest_date, 'latestDate': close_latest_date}, 'executionFilter': {'workflowId': workflow_id}, 'closeStatusFilter': {'status': close_status}, 'tagFilter': {'tag': tag}, 'typeFilter': {'name': workflow_name, 'version': workflow_version}, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) ## Domain Visibility def list_domains(self, registration_status, maximum_page_size=None, next_page_token=None, reverse_order=None): """ Returns the list of domains registered in the account. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call. :type registration_status: string :param registration_status: Specifies the registration status of the domains to list. Valid Values: * REGISTERED * DEPRECATED :type maximum_page_size: integer :param maximum_page_size: The maximum number of results returned in each page. The default is 100, but the caller can override this value to a page size smaller than the default. You cannot specify a page size greater than 100. :type next_page_token: string :param next_page_token: If on a previous call to this method a NextPageToken was returned, the result has more than one page. To get the next page of results, repeat the call with the returned token and all other arguments unchanged. :type reverse_order: boolean :param reverse_order: When set to true, returns the results in reverse order. By default the results are returned in ascending alphabetical order of the name of the domains. :raises: SWFOperationNotPermittedError """ return self.json_request('ListDomains', { 'registrationStatus': registration_status, 'maximumPageSize': maximum_page_size, 'nextPageToken': next_page_token, 'reverseOrder': reverse_order, }) def describe_domain(self, name): """ Returns information about the specified domain including description and status. :type name: string :param name: The name of the domain to describe. 
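Example (a minimal sketch; assumes ``my-domain`` was registered
earlier and that a ``Layer1`` connection can pick up credentials
from the boto config)::

    conn = Layer1()
    info = conn.describe_domain('my-domain')
    print info['domainInfo']['status']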
:raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('DescribeDomain', {'name': name}) ## Task List Visibility def count_pending_decision_tasks(self, domain, task_list): """ Returns the estimated number of decision tasks in the specified task list. The count returned is an approximation and is not guaranteed to be exact. If you specify a task list in which no decision task was ever scheduled, then 0 will be returned. :type domain: string :param domain: The name of the domain that contains the task list. :type task_list: string :param task_list: The name of the task list. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('CountPendingDecisionTasks', { 'domain': domain, 'taskList': {'name': task_list} }) def count_pending_activity_tasks(self, domain, task_list): """ Returns the estimated number of activity tasks in the specified task list. The count returned is an approximation and is not guaranteed to be exact. If you specify a task list in which no activity task was ever scheduled, then 0 will be returned. :type domain: string :param domain: The name of the domain that contains the task list. :type task_list: string :param task_list: The name of the task list. :raises: UnknownResourceFault, SWFOperationNotPermittedError """ return self.json_request('CountPendingActivityTasks', { 'domain': domain, 'taskList': {'name': task_list} }) boto-2.20.1/boto/swf/layer1_decisions.py000066400000000000000000000272261225267101000201150ustar00rootroot00000000000000""" Helper class for creating decision responses. """ class Layer1Decisions: """ Use this object to build a list of decisions for a decision response. Each method call will append a new decision. Retrieve the list of decisions from the _data attribute. """ def __init__(self): self._data = [] def schedule_activity_task(self, activity_id, activity_type_name, activity_type_version, task_list=None, control=None, heartbeat_timeout=None, schedule_to_close_timeout=None, schedule_to_start_timeout=None, start_to_close_timeout=None, input=None): """ Schedules an activity task. :type activity_id: string :param activity_id: The activityId of the activity task being scheduled. :type activity_type_name: string :param activity_type_name: The name of the type of the activity being scheduled. :type activity_type_version: string :param activity_type_version: The version of the type of the activity being scheduled. :type task_list: string :param task_list: If set, specifies the name of the task list in which to schedule the activity task. If not specified, the defaultTaskList registered with the activity type will be used. Note: a task list for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default task list was specified at registration time, then a fault will be returned.
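Example (a minimal sketch; the ids, names, version and timeout
below are placeholders, and timeouts are passed as strings of
seconds per the SWF API)::

    decisions = Layer1Decisions()
    decisions.schedule_activity_task(
        'my-activity-id', 'MyActivity', '1.0',
        task_list='my-task-list',
        start_to_close_timeout='600',
        input='optional payload for the worker')
    # decisions._data now holds the decision dicts to send
    # back with RespondDecisionTaskCompleted.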
""" o = {} o['decisionType'] = 'ScheduleActivityTask' attrs = o['scheduleActivityTaskDecisionAttributes'] = {} attrs['activityId'] = activity_id attrs['activityType'] = { 'name': activity_type_name, 'version': activity_type_version, } if task_list is not None: attrs['taskList'] = {'name': task_list} if control is not None: attrs['control'] = control if heartbeat_timeout is not None: attrs['heartbeatTimeout'] = heartbeat_timeout if schedule_to_close_timeout is not None: attrs['scheduleToCloseTimeout'] = schedule_to_close_timeout if schedule_to_start_timeout is not None: attrs['scheduleToStartTimeout'] = schedule_to_start_timeout if start_to_close_timeout is not None: attrs['startToCloseTimeout'] = start_to_close_timeout if input is not None: attrs['input'] = input self._data.append(o) def request_cancel_activity_task(self, activity_id): """ Attempts to cancel a previously scheduled activity task. If the activity task was scheduled but has not been assigned to a worker, then it will be canceled. If the activity task was already assigned to a worker, then the worker will be informed that cancellation has been requested in the response to RecordActivityTaskHeartbeat. """ o = {} o['decisionType'] = 'RequestCancelActivityTask' attrs = o['requestCancelActivityTaskDecisionAttributes'] = {} attrs['activityId'] = activity_id self._data.append(o) def record_marker(self, marker_name, details=None): """ Records a MarkerRecorded event in the history. Markers can be used for adding custom information in the history for instance to let deciders know that they do not need to look at the history beyond the marker event. """ o = {} o['decisionType'] = 'RecordMarker' attrs = o['recordMarkerDecisionAttributes'] = {} attrs['markerName'] = marker_name if details is not None: attrs['details'] = details self._data.append(o) def complete_workflow_execution(self, result=None): """ Closes the workflow execution and records a WorkflowExecutionCompleted event in the history """ o = {} o['decisionType'] = 'CompleteWorkflowExecution' attrs = o['completeWorkflowExecutionDecisionAttributes'] = {} if result is not None: attrs['result'] = result self._data.append(o) def fail_workflow_execution(self, reason=None, details=None): """ Closes the workflow execution and records a WorkflowExecutionFailed event in the history. """ o = {} o['decisionType'] = 'FailWorkflowExecution' attrs = o['failWorkflowExecutionDecisionAttributes'] = {} if reason is not None: attrs['reason'] = reason if details is not None: attrs['details'] = details self._data.append(o) def cancel_workflow_executions(self, details=None): """ Closes the workflow execution and records a WorkflowExecutionCanceled event in the history. """ o = {} o['decisionType'] = 'CancelWorkflowExecution' attrs = o['cancelWorkflowExecutionsDecisionAttributes'] = {} if details is not None: attrs['details'] = details self._data.append(o) def continue_as_new_workflow_execution(self, child_policy=None, execution_start_to_close_timeout=None, input=None, tag_list=None, task_list=None, start_to_close_timeout=None, workflow_type_version=None): """ Closes the workflow execution and starts a new workflow execution of the same type using the same workflow id and a unique run Id. A WorkflowExecutionContinuedAsNew event is recorded in the history. 
""" o = {} o['decisionType'] = 'ContinueAsNewWorkflowExecution' attrs = o['continueAsNewWorkflowExecutionDecisionAttributes'] = {} if child_policy is not None: attrs['childPolicy'] = child_policy if execution_start_to_close_timeout is not None: attrs['executionStartToCloseTimeout'] = execution_start_to_close_timeout if input is not None: attrs['input'] = input if tag_list is not None: attrs['tagList'] = tag_list if task_list is not None: attrs['taskList'] = {'name': task_list} if start_to_close_timeout is not None: attrs['startToCloseTimeout'] = start_to_close_timeout if workflow_type_version is not None: attrs['workflowTypeVersion'] = workflow_type_version self._data.append(o) def start_timer(self, start_to_fire_timeout, timer_id, control=None): """ Starts a timer for this workflow execution and records a TimerStarted event in the history. This timer will fire after the specified delay and record a TimerFired event. """ o = {} o['decisionType'] = 'StartTimer' attrs = o['startTimerDecisionAttributes'] = {} attrs['startToFireTimeout'] = start_to_fire_timeout attrs['timerId'] = timer_id if control is not None: attrs['control'] = control self._data.append(o) def cancel_timer(self, timer_id): """ Cancels a previously started timer and records a TimerCanceled event in the history. """ o = {} o['decisionType'] = 'CancelTimer' attrs = o['cancelTimerDecisionAttributes'] = {} attrs['timerId'] = timer_id self._data.append(o) def signal_external_workflow_execution(self, workflow_id, signal_name, run_id=None, control=None, input=None): """ Requests a signal to be delivered to the specified external workflow execution and records a SignalExternalWorkflowExecutionInitiated event in the history. """ o = {} o['decisionType'] = 'SignalExternalWorkflowExecution' attrs = o['signalExternalWorkflowExecutionDecisionAttributes'] = {} attrs['workflowId'] = workflow_id attrs['signalName'] = signal_name if run_id is not None: attrs['runId'] = run_id if control is not None: attrs['control'] = control if input is not None: attrs['input'] = input self._data.append(o) def request_cancel_external_workflow_execution(self, workflow_id, control=None, run_id=None): """ Requests that a request be made to cancel the specified external workflow execution and records a RequestCancelExternalWorkflowExecutionInitiated event in the history. """ o = {} o['decisionType'] = 'RequestCancelExternalWorkflowExecution' attrs = o['requestCancelExternalWorkflowExecutionDecisionAttributes'] = {} attrs['workflowId'] = workflow_id if control is not None: attrs['control'] = control if run_id is not None: attrs['runId'] = run_id self._data.append(o) def start_child_workflow_execution(self, workflow_type_name, workflow_type_version, workflow_id, child_policy=None, control=None, execution_start_to_close_timeout=None, input=None, tag_list=None, task_list=None, task_start_to_close_timeout=None): """ Requests that a child workflow execution be started and records a StartChildWorkflowExecutionInitiated event in the history. The child workflow execution is a separate workflow execution with its own history. 
""" o = {} o['decisionType'] = 'StartChildWorkflowExecution' attrs = o['startChildWorkflowExecutionDecisionAttributes'] = {} attrs['workflowType'] = { 'name': workflow_type_name, 'version': workflow_type_version, } attrs['workflowId'] = workflow_id if child_policy is not None: attrs['childPolicy'] = child_policy if control is not None: attrs['control'] = control if execution_start_to_close_timeout is not None: attrs['executionStartToCloseTimeout'] = execution_start_to_close_timeout if input is not None: attrs['input'] = input if tag_list is not None: attrs['tagList'] = tag_list if task_list is not None: attrs['taskList'] = {'name': task_list} if task_start_to_close_timeout is not None: attrs['taskStartToCloseTimeout'] = task_start_to_close_timeout self._data.append(o) boto-2.20.1/boto/swf/layer2.py000066400000000000000000000311161225267101000160470ustar00rootroot00000000000000"""Object-oriented interface to SWF wrapping boto.swf.layer1.Layer1""" import time from functools import wraps from boto.swf.layer1 import Layer1 from boto.swf.layer1_decisions import Layer1Decisions DEFAULT_CREDENTIALS = { 'aws_access_key_id': None, 'aws_secret_access_key': None } def set_default_credentials(aws_access_key_id, aws_secret_access_key): """Set default credentials.""" DEFAULT_CREDENTIALS.update({ 'aws_access_key_id': aws_access_key_id, 'aws_secret_access_key': aws_secret_access_key, }) class SWFBase(object): name = None domain = None aws_access_key_id = None aws_secret_access_key = None def __init__(self, **kwargs): # Set default credentials. for credkey in ('aws_access_key_id', 'aws_secret_access_key'): if DEFAULT_CREDENTIALS.get(credkey): setattr(self, credkey, DEFAULT_CREDENTIALS[credkey]) # Override attributes with keyword args. for kwarg in kwargs: setattr(self, kwarg, kwargs[kwarg]) self._swf = Layer1(self.aws_access_key_id, self.aws_secret_access_key) def __repr__(self): rep_str = str(self.name) if hasattr(self, 'version'): rep_str += '-' + str(getattr(self, 'version')) return '<%s %r at 0x%x>' % (self.__class__.__name__, rep_str, id(self)) class Domain(SWFBase): """Simple Workflow Domain.""" description = None retention = 30 @wraps(Layer1.describe_domain) def describe(self): """DescribeDomain.""" return self._swf.describe_domain(self.name) @wraps(Layer1.deprecate_domain) def deprecate(self): """DeprecateDomain""" self._swf.deprecate_domain(self.name) @wraps(Layer1.register_domain) def register(self): """RegisterDomain.""" self._swf.register_domain(self.name, str(self.retention), self.description) @wraps(Layer1.list_activity_types) def activities(self, status='REGISTERED', **kwargs): """ListActivityTypes.""" act_types = self._swf.list_activity_types(self.name, status, **kwargs) act_objects = [] for act_args in act_types['typeInfos']: act_ident = act_args['activityType'] del act_args['activityType'] act_args.update(act_ident) act_args.update({ 'aws_access_key_id': self.aws_access_key_id, 'aws_secret_access_key': self.aws_secret_access_key, 'domain': self.name, }) act_objects.append(ActivityType(**act_args)) return act_objects @wraps(Layer1.list_workflow_types) def workflows(self, status='REGISTERED', **kwargs): """ListWorkflowTypes.""" wf_types = self._swf.list_workflow_types(self.name, status, **kwargs) wf_objects = [] for wf_args in wf_types['typeInfos']: wf_ident = wf_args['workflowType'] del wf_args['workflowType'] wf_args.update(wf_ident) wf_args.update({ 'aws_access_key_id': self.aws_access_key_id, 'aws_secret_access_key': self.aws_secret_access_key, 'domain': self.name, }) 
wf_objects.append(WorkflowType(**wf_args)) return wf_objects def executions(self, closed=False, **kwargs): """List list open/closed executions. For a full list of available parameters refer to :py:func:`boto.swf.layer1.Layer1.list_closed_workflow_executions` and :py:func:`boto.swf.layer1.Layer1.list_open_workflow_executions` """ if closed: executions = self._swf.list_closed_workflow_executions(self.name, **kwargs) else: if 'oldest_date' not in kwargs: # Last 24 hours. kwargs['oldest_date'] = time.time() - (3600 * 24) executions = self._swf.list_open_workflow_executions(self.name, **kwargs) exe_objects = [] for exe_args in executions['executionInfos']: for nested_key in ('execution', 'workflowType'): nested_dict = exe_args[nested_key] del exe_args[nested_key] exe_args.update(nested_dict) exe_args.update({ 'aws_access_key_id': self.aws_access_key_id, 'aws_secret_access_key': self.aws_secret_access_key, 'domain': self.name, }) exe_objects.append(WorkflowExecution(**exe_args)) return exe_objects @wraps(Layer1.count_pending_activity_tasks) def count_pending_activity_tasks(self, task_list): """CountPendingActivityTasks.""" return self._swf.count_pending_activity_tasks(self.name, task_list) @wraps(Layer1.count_pending_decision_tasks) def count_pending_decision_tasks(self, task_list): """CountPendingDecisionTasks.""" return self._swf.count_pending_decision_tasks(self.name, task_list) class Actor(SWFBase): task_list = None last_tasktoken = None domain = None def run(self): """To be overloaded by subclasses.""" raise NotImplementedError() class ActivityWorker(Actor): """Base class for SimpleWorkflow activity workers.""" @wraps(Layer1.respond_activity_task_canceled) def cancel(self, task_token=None, details=None): """RespondActivityTaskCanceled.""" if task_token is None: task_token = self.last_tasktoken return self._swf.respond_activity_task_canceled(task_token, details) @wraps(Layer1.respond_activity_task_completed) def complete(self, task_token=None, result=None): """RespondActivityTaskCompleted.""" if task_token is None: task_token = self.last_tasktoken return self._swf.respond_activity_task_completed(task_token, result) @wraps(Layer1.respond_activity_task_failed) def fail(self, task_token=None, details=None, reason=None): """RespondActivityTaskFailed.""" if task_token is None: task_token = self.last_tasktoken return self._swf.respond_activity_task_failed(task_token, details, reason) @wraps(Layer1.record_activity_task_heartbeat) def heartbeat(self, task_token=None, details=None): """RecordActivityTaskHeartbeat.""" if task_token is None: task_token = self.last_tasktoken return self._swf.record_activity_task_heartbeat(task_token, details) @wraps(Layer1.poll_for_activity_task) def poll(self, **kwargs): """PollForActivityTask.""" task_list = self.task_list if 'task_list' in kwargs: task_list = kwargs.get('task_list') del kwargs['task_list'] task = self._swf.poll_for_activity_task(self.domain, task_list, **kwargs) self.last_tasktoken = task.get('taskToken') return task class Decider(Actor): """Base class for SimpleWorkflow deciders.""" @wraps(Layer1.respond_decision_task_completed) def complete(self, task_token=None, decisions=None, **kwargs): """RespondDecisionTaskCompleted.""" if isinstance(decisions, Layer1Decisions): # Extract decision list from a Layer1Decisions instance. 
decisions = decisions._data if task_token is None: task_token = self.last_tasktoken return self._swf.respond_decision_task_completed(task_token, decisions, **kwargs) @wraps(Layer1.poll_for_decision_task) def poll(self, **kwargs): """PollForDecisionTask.""" task_list = self.task_list if 'task_list' in kwargs: task_list = kwargs.get('task_list') del kwargs['task_list'] decision_task = self._swf.poll_for_decision_task(self.domain, task_list, **kwargs) self.last_tasktoken = decision_task.get('taskToken') return decision_task class WorkflowType(SWFBase): """A versioned workflow type.""" version = None task_list = None child_policy = 'TERMINATE' @wraps(Layer1.describe_workflow_type) def describe(self): """DescribeWorkflowType.""" return self._swf.describe_workflow_type(self.domain, self.name, self.version) @wraps(Layer1.register_workflow_type) def register(self, **kwargs): """RegisterWorkflowType.""" args = { 'default_execution_start_to_close_timeout': '3600', 'default_task_start_to_close_timeout': '300', 'default_child_policy': 'TERMINATE', } args.update(kwargs) self._swf.register_workflow_type(self.domain, self.name, self.version, **args) @wraps(Layer1.deprecate_workflow_type) def deprecate(self): """DeprecateWorkflowType.""" self._swf.deprecate_workflow_type(self.domain, self.name, self.version) @wraps(Layer1.start_workflow_execution) def start(self, **kwargs): """StartWorkflowExecution.""" if 'workflow_id' in kwargs: workflow_id = kwargs['workflow_id'] del kwargs['workflow_id'] else: workflow_id = '%s-%s-%i' % (self.name, self.version, time.time()) for def_attr in ('task_list', 'child_policy'): kwargs[def_attr] = kwargs.get(def_attr, getattr(self, def_attr)) run_id = self._swf.start_workflow_execution(self.domain, workflow_id, self.name, self.version, **kwargs)['runId'] return WorkflowExecution(name=self.name, version=self.version, runId=run_id, domain=self.domain, workflowId=workflow_id, aws_access_key_id=self.aws_access_key_id, aws_secret_access_key=self.aws_secret_access_key) class WorkflowExecution(SWFBase): """An instance of a workflow.""" workflowId = None runId = None @wraps(Layer1.signal_workflow_execution) def signal(self, signame, **kwargs): """SignalWorkflowExecution.""" self._swf.signal_workflow_execution(self.domain, signame, self.workflowId, **kwargs) @wraps(Layer1.terminate_workflow_execution) def terminate(self, **kwargs): """TerminateWorkflowExecution (p. 
103).""" return self._swf.terminate_workflow_execution(self.domain, self.workflowId, **kwargs) @wraps(Layer1.get_workflow_execution_history) def history(self, **kwargs): """GetWorkflowExecutionHistory.""" return self._swf.get_workflow_execution_history(self.domain, self.runId, self.workflowId, **kwargs)['events'] @wraps(Layer1.describe_workflow_execution) def describe(self): """DescribeWorkflowExecution.""" return self._swf.describe_workflow_execution(self.domain, self.runId, self.workflowId) @wraps(Layer1.request_cancel_workflow_execution) def request_cancel(self): """RequestCancelWorkflowExecution.""" return self._swf.request_cancel_workflow_execution(self.domain, self.workflowId, self.runId) class ActivityType(SWFBase): """A versioned activity type.""" version = None @wraps(Layer1.deprecate_activity_type) def deprecate(self): """DeprecateActivityType.""" return self._swf.deprecate_activity_type(self.domain, self.name, self.version) @wraps(Layer1.describe_activity_type) def describe(self): """DescribeActivityType.""" return self._swf.describe_activity_type(self.domain, self.name, self.version) @wraps(Layer1.register_activity_type) def register(self, **kwargs): """RegisterActivityType.""" args = { 'default_task_heartbeat_timeout': '600', 'default_task_schedule_to_close_timeout': '3900', 'default_task_schedule_to_start_timeout': '300', 'default_task_start_to_close_timeout': '3600', } args.update(kwargs) self._swf.register_activity_type(self.domain, self.name, self.version, **args) boto-2.20.1/boto/utils.py000066400000000000000000001015571225267101000152210ustar00rootroot00000000000000# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # # Parts of this code were copied or derived from sample code supplied by AWS. # The following notice applies to that code. # # This software code is made available "AS IS" without warranties of any # kind. You may copy, display, modify and redistribute the software # code either by itself or as incorporated into your code; provided that # you do not remove any proprietary notices. Your use of this software # code is at your own risk and you waive any claim against Amazon # Digital Services, Inc. or its affiliates with respect to your use of # this software code. (c) 2006 Amazon Digital Services, Inc. or its # affiliates. """ Some handy utility functions used by several classes. 
""" import socket import urllib import urllib2 import imp import subprocess import StringIO import time import logging.handlers import boto import boto.provider import tempfile import random import smtplib import datetime import re import email.mime.multipart import email.mime.base import email.mime.text import email.utils import email.encoders import gzip import base64 try: from hashlib import md5 except ImportError: from md5 import md5 try: import hashlib _hashfn = hashlib.sha512 except ImportError: import md5 _hashfn = md5.md5 from boto.compat import json try: from boto.compat.json import JSONDecodeError except ImportError: JSONDecodeError = ValueError # List of Query String Arguments of Interest qsa_of_interest = ['acl', 'cors', 'defaultObjectAcl', 'location', 'logging', 'partNumber', 'policy', 'requestPayment', 'torrent', 'versioning', 'versionId', 'versions', 'website', 'uploads', 'uploadId', 'response-content-type', 'response-content-language', 'response-expires', 'response-cache-control', 'response-content-disposition', 'response-content-encoding', 'delete', 'lifecycle', 'tagging', 'restore', # storageClass is a QSA for buckets in Google Cloud Storage. # (StorageClass is associated to individual keys in S3, but # having it listed here should cause no problems because # GET bucket?storageClass is not part of the S3 API.) 'storageClass', # websiteConfig is a QSA for buckets in Google Cloud # Storage. 'websiteConfig', # compose is a QSA for objects in Google Cloud Storage. 'compose'] _first_cap_regex = re.compile('(.)([A-Z][a-z]+)') _number_cap_regex = re.compile('([a-z])([0-9]+)') _end_cap_regex = re.compile('([a-z0-9])([A-Z])') def unquote_v(nv): if len(nv) == 1: return nv else: return (nv[0], urllib.unquote(nv[1])) def canonical_string(method, path, headers, expires=None, provider=None): """ Generates the aws canonical string for the given parameters """ if not provider: provider = boto.provider.get_default() interesting_headers = {} for key in headers: lk = key.lower() if headers[key] is not None and \ (lk in ['content-md5', 'content-type', 'date'] or lk.startswith(provider.header_prefix)): interesting_headers[lk] = str(headers[key]).strip() # these keys get empty strings if they don't exist if 'content-type' not in interesting_headers: interesting_headers['content-type'] = '' if 'content-md5' not in interesting_headers: interesting_headers['content-md5'] = '' # just in case someone used this. it's not necessary in this lib. if provider.date_header in interesting_headers: interesting_headers['date'] = '' # if you're using expires for query string auth, then it trumps date # (and provider.date_header) if expires: interesting_headers['date'] = str(expires) sorted_header_keys = sorted(interesting_headers.keys()) buf = "%s\n" % method for key in sorted_header_keys: val = interesting_headers[key] if key.startswith(provider.header_prefix): buf += "%s:%s\n" % (key, val) else: buf += "%s\n" % val # don't include anything after the first ? in the resource... # unless it is one of the QSA of interest, defined above t = path.split('?') buf += t[0] if len(t) > 1: qsa = t[1].split('&') qsa = [a.split('=', 1) for a in qsa] qsa = [unquote_v(a) for a in qsa if a[0] in qsa_of_interest] if len(qsa) > 0: qsa.sort(cmp=lambda x, y: cmp(x[0], y[0])) qsa = ['='.join(a) for a in qsa] buf += '?' 
buf += '&'.join(qsa) return buf def merge_meta(headers, metadata, provider=None): if not provider: provider = boto.provider.get_default() metadata_prefix = provider.metadata_prefix final_headers = headers.copy() for k in metadata.keys(): if k.lower() in ['cache-control', 'content-md5', 'content-type', 'content-encoding', 'content-disposition', 'expires']: final_headers[k] = metadata[k] else: final_headers[metadata_prefix + k] = metadata[k] return final_headers def get_aws_metadata(headers, provider=None): if not provider: provider = boto.provider.get_default() metadata_prefix = provider.metadata_prefix metadata = {} for hkey in headers.keys(): if hkey.lower().startswith(metadata_prefix): val = urllib.unquote_plus(headers[hkey]) try: metadata[hkey[len(metadata_prefix):]] = unicode(val, 'utf-8') except UnicodeDecodeError: metadata[hkey[len(metadata_prefix):]] = val del headers[hkey] return metadata def retry_url(url, retry_on_404=True, num_retries=10): """ Retry a url. This is specifically used for accessing the metadata service on an instance. Since this address should never be proxied (for security reasons), we create a ProxyHandler with a NULL dictionary to override any proxy settings in the environment. """ for i in range(0, num_retries): try: proxy_handler = urllib2.ProxyHandler({}) opener = urllib2.build_opener(proxy_handler) req = urllib2.Request(url) r = opener.open(req) result = r.read() return result except urllib2.HTTPError, e: # in 2.6 you use getcode(), in 2.5 and earlier you use code if hasattr(e, 'getcode'): code = e.getcode() else: code = e.code if code == 404 and not retry_on_404: return '' except Exception, e: pass boto.log.exception('Caught exception reading instance data') # If not on the last iteration of the loop then sleep. if i + 1 != num_retries: time.sleep(2 ** i) boto.log.error('Unable to read instance data, giving up') return '' def _get_instance_metadata(url, num_retries): return LazyLoadMetadata(url, num_retries) class LazyLoadMetadata(dict): def __init__(self, url, num_retries): self._url = url self._num_retries = num_retries self._leaves = {} self._dicts = [] data = boto.utils.retry_url(self._url, num_retries=self._num_retries) if data: fields = data.split('\n') for field in fields: if field.endswith('/'): key = field[0:-1] self._dicts.append(key) else: p = field.find('=') if p > 0: key = field[p + 1:] resource = field[0:p] + '/openssh-key' else: key = resource = field self._leaves[key] = resource self[key] = None def _materialize(self): for key in self: self[key] def __getitem__(self, key): if key not in self: # allow dict to throw the KeyError return super(LazyLoadMetadata, self).__getitem__(key) # already loaded val = super(LazyLoadMetadata, self).__getitem__(key) if val is not None: return val if key in self._leaves: resource = self._leaves[key] for i in range(0, self._num_retries): try: val = boto.utils.retry_url( self._url + urllib.quote(resource, safe="/:"), num_retries=self._num_retries) if val and val[0] == '{': val = json.loads(val) break else: p = val.find('\n') if p > 0: val = val.split('\n') break except JSONDecodeError, e: boto.log.debug( "encountered '%s' exception: %s" % ( e.__class__.__name__, e)) boto.log.debug( 'corrupted JSON data found: %s' % val) except Exception, e: boto.log.debug("encountered unretryable" + " '%s' exception, re-raising" % ( e.__class__.__name__)) raise boto.log.error("Caught exception reading meta data" + " for the '%s' try" % (i + 1)) if i + 1 != self._num_retries: next_sleep = random.random() * (2 ** i) 
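                    # exponential backoff with random jitter before retrying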
time.sleep(next_sleep) else: boto.log.error('Unable to read meta data, giving up') boto.log.error( "encountered '%s' exception: %s" % ( e.__class__.__name__, e)) raise self[key] = val elif key in self._dicts: self[key] = LazyLoadMetadata(self._url + key + '/', self._num_retries) return super(LazyLoadMetadata, self).__getitem__(key) def get(self, key, default=None): try: return self[key] except KeyError: return default def values(self): self._materialize() return super(LazyLoadMetadata, self).values() def items(self): self._materialize() return super(LazyLoadMetadata, self).items() def __str__(self): self._materialize() return super(LazyLoadMetadata, self).__str__() def __repr__(self): self._materialize() return super(LazyLoadMetadata, self).__repr__() def _build_instance_metadata_url(url, version, path): """ Builds an EC2 metadata URL for fetching information about an instance. Example: >>> _build_instance_metadata_url('http://169.254.169.254', 'latest', 'meta-data/') http://169.254.169.254/latest/meta-data/ :type url: string :param url: URL to metadata service, e.g. 'http://169.254.169.254' :type version: string :param version: Version of the metadata to get, e.g. 'latest' :type path: string :param path: Path of the metadata to get, e.g. 'meta-data/'. If a trailing slash is required it must be passed in with the path. :return: The full metadata URL """ return '%s/%s/%s' % (url, version, path) def get_instance_metadata(version='latest', url='http://169.254.169.254', data='meta-data/', timeout=None, num_retries=5): """ Returns the instance metadata as a nested Python dictionary. Simple values (e.g. local_hostname, hostname, etc.) will be stored as string values. Values such as ancestor-ami-ids will be stored in the dict as a list of string values. More complex fields such as public-keys and will be stored as nested dicts. If the timeout is specified, the connection to the specified url will time out after the specified number of seconds. """ if timeout is not None: original = socket.getdefaulttimeout() socket.setdefaulttimeout(timeout) try: metadata_url = _build_instance_metadata_url(url, version, data) return _get_instance_metadata(metadata_url, num_retries=num_retries) except urllib2.URLError, e: return None finally: if timeout is not None: socket.setdefaulttimeout(original) def get_instance_identity(version='latest', url='http://169.254.169.254', timeout=None, num_retries=5): """ Returns the instance identity as a nested Python dictionary. 
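Example (a minimal sketch; this only returns data when run on an
EC2 instance, because it queries the link-local metadata service)::

    identity = get_instance_identity()
    if identity is not None:
        print identity['document']['instanceId']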
""" iid = {} base_url = _build_instance_metadata_url(url, version, 'dynamic/instance-identity/') if timeout is not None: original = socket.getdefaulttimeout() socket.setdefaulttimeout(timeout) try: data = retry_url(base_url, num_retries=num_retries) fields = data.split('\n') for field in fields: val = retry_url(base_url + '/' + field + '/') if val[0] == '{': val = json.loads(val) if field: iid[field] = val return iid except urllib2.URLError, e: return None finally: if timeout is not None: socket.setdefaulttimeout(original) def get_instance_userdata(version='latest', sep=None, url='http://169.254.169.254'): ud_url = _build_instance_metadata_url(url, version, 'user-data') user_data = retry_url(ud_url, retry_on_404=False) if user_data: if sep: l = user_data.split(sep) user_data = {} for nvpair in l: t = nvpair.split('=') user_data[t[0].strip()] = t[1].strip() return user_data ISO8601 = '%Y-%m-%dT%H:%M:%SZ' ISO8601_MS = '%Y-%m-%dT%H:%M:%S.%fZ' RFC1123 = '%a, %d %b %Y %H:%M:%S %Z' def get_ts(ts=None): if not ts: ts = time.gmtime() return time.strftime(ISO8601, ts) def parse_ts(ts): ts = ts.strip() try: dt = datetime.datetime.strptime(ts, ISO8601) return dt except ValueError: try: dt = datetime.datetime.strptime(ts, ISO8601_MS) return dt except ValueError: dt = datetime.datetime.strptime(ts, RFC1123) return dt def find_class(module_name, class_name=None): if class_name: module_name = "%s.%s" % (module_name, class_name) modules = module_name.split('.') c = None try: for m in modules[1:]: if c: c = getattr(c, m) else: c = getattr(__import__(".".join(modules[0:-1])), m) return c except: return None def update_dme(username, password, dme_id, ip_address): """ Update your Dynamic DNS record with DNSMadeEasy.com """ dme_url = 'https://www.dnsmadeeasy.com/servlet/updateip' dme_url += '?username=%s&password=%s&id=%s&ip=%s' s = urllib2.urlopen(dme_url % (username, password, dme_id, ip_address)) return s.read() def fetch_file(uri, file=None, username=None, password=None): """ Fetch a file based on the URI provided. If you do not pass in a file pointer a tempfile.NamedTemporaryFile, or None if the file could not be retrieved is returned. 
The URI can be either an HTTP url, or "s3://bucket_name/key_name" """ boto.log.info('Fetching %s' % uri) if file is None: file = tempfile.NamedTemporaryFile() try: if uri.startswith('s3://'): bucket_name, key_name = uri[len('s3://'):].split('/', 1) c = boto.connect_s3(aws_access_key_id=username, aws_secret_access_key=password) bucket = c.get_bucket(bucket_name) key = bucket.get_key(key_name) key.get_contents_to_file(file) else: if username and password: passman = urllib2.HTTPPasswordMgrWithDefaultRealm() passman.add_password(None, uri, username, password) authhandler = urllib2.HTTPBasicAuthHandler(passman) opener = urllib2.build_opener(authhandler) urllib2.install_opener(opener) s = urllib2.urlopen(uri) file.write(s.read()) file.seek(0) except: raise boto.log.exception('Problem Retrieving file: %s' % uri) file = None return file class ShellCommand(object): def __init__(self, command, wait=True, fail_fast=False, cwd=None): self.exit_code = 0 self.command = command self.log_fp = StringIO.StringIO() self.wait = wait self.fail_fast = fail_fast self.run(cwd=cwd) def run(self, cwd=None): boto.log.info('running:%s' % self.command) self.process = subprocess.Popen(self.command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd) if(self.wait): while self.process.poll() is None: time.sleep(1) t = self.process.communicate() self.log_fp.write(t[0]) self.log_fp.write(t[1]) boto.log.info(self.log_fp.getvalue()) self.exit_code = self.process.returncode if self.fail_fast and self.exit_code != 0: raise Exception("Command " + self.command + " failed with status " + self.exit_code) return self.exit_code def setReadOnly(self, value): raise AttributeError def getStatus(self): return self.exit_code status = property(getStatus, setReadOnly, None, 'The exit code for the command') def getOutput(self): return self.log_fp.getvalue() output = property(getOutput, setReadOnly, None, 'The STDIN and STDERR output of the command') class AuthSMTPHandler(logging.handlers.SMTPHandler): """ This class extends the SMTPHandler in the standard Python logging module to accept a username and password on the constructor and to then use those credentials to authenticate with the SMTP server. To use this, you could add something like this in your boto config file: [handler_hand07] class=boto.utils.AuthSMTPHandler level=WARN formatter=form07 args=('localhost', 'username', 'password', 'from@abc', ['user1@abc', 'user2@xyz'], 'Logger Subject') """ def __init__(self, mailhost, username, password, fromaddr, toaddrs, subject): """ Initialize the handler. We have extended the constructor to accept a username/password for SMTP authentication. """ logging.handlers.SMTPHandler.__init__(self, mailhost, fromaddr, toaddrs, subject) self.username = username self.password = password def emit(self, record): """ Emit a record. Format the record and send it to the specified addressees. It would be really nice if I could add authorization to this class without having to resort to cut and paste inheritance but, no. 
""" try: port = self.mailport if not port: port = smtplib.SMTP_PORT smtp = smtplib.SMTP(self.mailhost, port) smtp.login(self.username, self.password) msg = self.format(record) msg = "From: %s\r\nTo: %s\r\nSubject: %s\r\nDate: %s\r\n\r\n%s" % ( self.fromaddr, ','.join(self.toaddrs), self.getSubject(record), email.utils.formatdate(), msg) smtp.sendmail(self.fromaddr, self.toaddrs, msg) smtp.quit() except (KeyboardInterrupt, SystemExit): raise except: self.handleError(record) class LRUCache(dict): """A dictionary-like object that stores only a certain number of items, and discards its least recently used item when full. >>> cache = LRUCache(3) >>> cache['A'] = 0 >>> cache['B'] = 1 >>> cache['C'] = 2 >>> len(cache) 3 >>> cache['A'] 0 Adding new items to the cache does not increase its size. Instead, the least recently used item is dropped: >>> cache['D'] = 3 >>> len(cache) 3 >>> 'B' in cache False Iterating over the cache returns the keys, starting with the most recently used: >>> for key in cache: ... print key D A C This code is based on the LRUCache class from Genshi which is based on `Myghty `_'s LRUCache from ``myghtyutils.util``, written by Mike Bayer and released under the MIT license (Genshi uses the BSD License). """ class _Item(object): def __init__(self, key, value): self.previous = self.next = None self.key = key self.value = value def __repr__(self): return repr(self.value) def __init__(self, capacity): self._dict = dict() self.capacity = capacity self.head = None self.tail = None def __contains__(self, key): return key in self._dict def __iter__(self): cur = self.head while cur: yield cur.key cur = cur.next def __len__(self): return len(self._dict) def __getitem__(self, key): item = self._dict[key] self._update_item(item) return item.value def __setitem__(self, key, value): item = self._dict.get(key) if item is None: item = self._Item(key, value) self._dict[key] = item self._insert_item(item) else: item.value = value self._update_item(item) self._manage_size() def __repr__(self): return repr(self._dict) def _insert_item(self, item): item.previous = None item.next = self.head if self.head is not None: self.head.previous = item else: self.tail = item self.head = item self._manage_size() def _manage_size(self): while len(self._dict) > self.capacity: del self._dict[self.tail.key] if self.tail != self.head: self.tail = self.tail.previous self.tail.next = None else: self.head = self.tail = None def _update_item(self, item): if self.head == item: return previous = item.previous previous.next = item.next if item.next is not None: item.next.previous = previous else: self.tail = previous item.previous = None item.next = self.head self.head.previous = self.head = item class Password(object): """ Password object that stores itself as hashed. Hash defaults to SHA512 if available, MD5 otherwise. """ hashfunc = _hashfn def __init__(self, str=None, hashfunc=None): """ Load the string from an initial value, this should be the raw hashed password. 
""" self.str = str if hashfunc: self.hashfunc = hashfunc def set(self, value): self.str = self.hashfunc(value).hexdigest() def __str__(self): return str(self.str) def __eq__(self, other): if other is None: return False return str(self.hashfunc(other).hexdigest()) == str(self.str) def __len__(self): if self.str: return len(self.str) else: return 0 def notify(subject, body=None, html_body=None, to_string=None, attachments=None, append_instance_id=True): attachments = attachments or [] if append_instance_id: subject = "[%s] %s" % ( boto.config.get_value("Instance", "instance-id"), subject) if not to_string: to_string = boto.config.get_value('Notification', 'smtp_to', None) if to_string: try: from_string = boto.config.get_value('Notification', 'smtp_from', 'boto') msg = email.mime.multipart.MIMEMultipart() msg['From'] = from_string msg['Reply-To'] = from_string msg['To'] = to_string msg['Date'] = email.utils.formatdate(localtime=True) msg['Subject'] = subject if body: msg.attach(email.mime.text.MIMEText(body)) if html_body: part = email.mime.base.MIMEBase('text', 'html') part.set_payload(html_body) email.encoders.encode_base64(part) msg.attach(part) for part in attachments: msg.attach(part) smtp_host = boto.config.get_value('Notification', 'smtp_host', 'localhost') # Alternate port support if boto.config.get_value("Notification", "smtp_port"): server = smtplib.SMTP(smtp_host, int( boto.config.get_value("Notification", "smtp_port"))) else: server = smtplib.SMTP(smtp_host) # TLS support if boto.config.getbool("Notification", "smtp_tls"): server.ehlo() server.starttls() server.ehlo() smtp_user = boto.config.get_value('Notification', 'smtp_user', '') smtp_pass = boto.config.get_value('Notification', 'smtp_pass', '') if smtp_user: server.login(smtp_user, smtp_pass) server.sendmail(from_string, to_string, msg.as_string()) server.quit() except: boto.log.exception('notify failed') def get_utf8_value(value): if not isinstance(value, str) and not isinstance(value, unicode): value = str(value) if isinstance(value, unicode): return value.encode('utf-8') else: return value def mklist(value): if not isinstance(value, list): if isinstance(value, tuple): value = list(value) else: value = [value] return value def pythonize_name(name): """Convert camel case to a "pythonic" name. Examples:: pythonize_name('CamelCase') -> 'camel_case' pythonize_name('already_pythonized') -> 'already_pythonized' pythonize_name('HTTPRequest') -> 'http_request' pythonize_name('HTTPStatus200Ok') -> 'http_status_200_ok' pythonize_name('UPPER') -> 'upper' pythonize_name('') -> '' """ s1 = _first_cap_regex.sub(r'\1_\2', name) s2 = _number_cap_regex.sub(r'\1_\2', s1) return _end_cap_regex.sub(r'\1_\2', s2).lower() def write_mime_multipart(content, compress=False, deftype='text/plain', delimiter=':'): """Description: :param content: A list of tuples of name-content pairs. 
This is used instead of a dict to ensure that scripts run in order :type list of tuples: :param compress: Use gzip to compress the scripts, defaults to no compression :type bool: :param deftype: The type that should be assumed if nothing else can be figured out :type str: :param delimiter: mime delimiter :type str: :return: Final mime multipart :rtype: str: """ wrapper = email.mime.multipart.MIMEMultipart() for name, con in content: definite_type = guess_mime_type(con, deftype) maintype, subtype = definite_type.split('/', 1) if maintype == 'text': mime_con = email.mime.text.MIMEText(con, _subtype=subtype) else: mime_con = email.mime.base.MIMEBase(maintype, subtype) mime_con.set_payload(con) # Encode the payload using Base64 email.encoders.encode_base64(mime_con) mime_con.add_header('Content-Disposition', 'attachment', filename=name) wrapper.attach(mime_con) rcontent = wrapper.as_string() if compress: buf = StringIO.StringIO() gz = gzip.GzipFile(mode='wb', fileobj=buf) try: gz.write(rcontent) finally: gz.close() rcontent = buf.getvalue() return rcontent def guess_mime_type(content, deftype): """Description: Guess the mime type of a block of text :param content: content we're finding the type of :type str: :param deftype: Default mime type :type str: :rtype: : :return: """ # Mappings recognized by cloudinit starts_with_mappings = { '#include': 'text/x-include-url', '#!': 'text/x-shellscript', '#cloud-config': 'text/cloud-config', '#upstart-job': 'text/upstart-job', '#part-handler': 'text/part-handler', '#cloud-boothook': 'text/cloud-boothook' } rtype = deftype for possible_type, mimetype in starts_with_mappings.items(): if content.startswith(possible_type): rtype = mimetype break return(rtype) def compute_md5(fp, buf_size=8192, size=None): """ Compute MD5 hash on passed file and return results in a tuple of values. :type fp: file :param fp: File pointer to the file to MD5 hash. The file pointer will be reset to its current location before the method returns. :type buf_size: integer :param buf_size: Number of bytes per read request. :type size: int :param size: (optional) The Maximum number of bytes to read from the file pointer (fp). This is useful when uploading a file in multiple parts where the file is being split inplace into different parts. Less bytes may be available. :rtype: tuple :return: A tuple containing the hex digest version of the MD5 hash as the first element, the base64 encoded version of the plain digest as the second element and the data size as the third element. """ return compute_hash(fp, buf_size, size, hash_algorithm=md5) def compute_hash(fp, buf_size=8192, size=None, hash_algorithm=md5): hash_obj = hash_algorithm() spos = fp.tell() if size and size < buf_size: s = fp.read(size) else: s = fp.read(buf_size) while s: hash_obj.update(s) if size: size -= len(s) if size <= 0: break if size and size < buf_size: s = fp.read(size) else: s = fp.read(buf_size) hex_digest = hash_obj.hexdigest() base64_digest = base64.encodestring(hash_obj.digest()) if base64_digest[-1] == '\n': base64_digest = base64_digest[0:-1] # data_size based on bytes read. data_size = fp.tell() - spos fp.seek(spos) return (hex_digest, base64_digest, data_size) def find_matching_headers(name, headers): """ Takes a specific header name and a dict of headers {"name": "value"}. Returns a list of matching header names, case-insensitive. """ return [h for h in headers if h.lower() == name.lower()] def merge_headers_by_name(name, headers): """ Takes a specific header name and a dict of headers {"name": "value"}. 
    """
    matching_headers = find_matching_headers(name, headers)
    return ','.join(str(headers[h]) for h in matching_headers
                    if headers[h] is not None)
boto-2.20.1/boto/vpc/000077500000000000000000000000001225267101000142665ustar00rootroot00000000000000boto-2.20.1/boto/vpc/__init__.py000066400000000000000000001567521225267101000164150ustar00rootroot00000000000000# Copyright (c) 2009 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
Represents a connection to the EC2 service.
"""
from boto.ec2.connection import EC2Connection
from boto.resultset import ResultSet
from boto.vpc.vpc import VPC
from boto.vpc.customergateway import CustomerGateway
from boto.vpc.networkacl import NetworkAcl
from boto.vpc.routetable import RouteTable
from boto.vpc.internetgateway import InternetGateway
from boto.vpc.vpngateway import VpnGateway, Attachment
from boto.vpc.dhcpoptions import DhcpOptions
from boto.vpc.subnet import Subnet
from boto.vpc.vpnconnection import VpnConnection
from boto.ec2 import RegionData
from boto.regioninfo import RegionInfo


def regions(**kw_params):
    """
    Get all available regions for the EC2 service.
    You may pass any of the arguments accepted by the VPCConnection
    object's constructor as keyword arguments and they will be
    passed along to the VPCConnection object.

    :rtype: list
    :return: A list of :class:`boto.ec2.regioninfo.RegionInfo`
    """
    regions = []
    for region_name in RegionData:
        region = RegionInfo(name=region_name,
                            endpoint=RegionData[region_name],
                            connection_cls=VPCConnection)
        regions.append(region)
    # us-gov-west-1 is not in RegionData, so give it its own endpoint
    # rather than reusing whichever endpoint the loop variable last
    # pointed at.
    regions.append(RegionInfo(name='us-gov-west-1',
                              endpoint='ec2.us-gov-west-1.amazonaws.com',
                              connection_cls=VPCConnection))
    return regions


def connect_to_region(region_name, **kw_params):
    """
    Given a valid region name, return a
    :class:`boto.vpc.VPCConnection`.
    Any additional parameters after the region_name are passed on to
    the connect method of the region object.

    :type region_name: str
    :param region_name: The name of the region to connect to.

    :rtype: :class:`boto.vpc.VPCConnection` or ``None``
    :return: A connection to the given region, or None if an invalid region
        name is given
    """
    for region in regions(**kw_params):
        if region.name == region_name:
            return region.connect(**kw_params)
    return None


class VPCConnection(EC2Connection):

    # VPC methods

    def get_all_vpcs(self, vpc_ids=None, filters=None, dry_run=False):
        """
        Retrieve information about your VPCs.
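        A minimal usage sketch (the region name and the filter value are
        illustrative placeholders; see the filter notes that follow)::

            import boto.vpc
            conn = boto.vpc.connect_to_region('us-east-1')
            for vpc in conn.get_all_vpcs(filters=[('state', 'available')]):
                print vpc.id, vpc.cidr_block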
You can filter results to return information only about those VPCs that match your search parameters. Otherwise, all VPCs associated with your account are returned. :type vpc_ids: list :param vpc_ids: A list of strings with the desired VPC ID's :type filters: list of tuples :param filters: A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are: * *state* - a list of states of the VPC (pending or available) * *cidrBlock* - a list CIDR blocks of the VPC * *dhcpOptionsId* - a list of IDs of a set of DHCP options :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.vpc.vpc.VPC` """ params = {} if vpc_ids: self.build_list_params(params, vpc_ids, 'VpcId') if filters: self.build_filter_params(params, dict(filters)) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeVpcs', params, [('item', VPC)]) def create_vpc(self, cidr_block, instance_tenancy=None, dry_run=False): """ Create a new Virtual Private Cloud. :type cidr_block: str :param cidr_block: A valid CIDR block :type instance_tenancy: str :param instance_tenancy: The supported tenancy options for instances launched into the VPC. Valid values are 'default' and 'dedicated'. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: The newly created VPC :return: A :class:`boto.vpc.vpc.VPC` object """ params = {'CidrBlock': cidr_block} if instance_tenancy: params['InstanceTenancy'] = instance_tenancy if dry_run: params['DryRun'] = 'true' return self.get_object('CreateVpc', params, VPC) def delete_vpc(self, vpc_id, dry_run=False): """ Delete a Virtual Private Cloud. :type vpc_id: str :param vpc_id: The ID of the vpc to be deleted. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'VpcId': vpc_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteVpc', params) def modify_vpc_attribute(self, vpc_id, enable_dns_support=None, enable_dns_hostnames=None, dry_run=False): """ Modifies the specified attribute of the specified VPC. You can only modify one attribute at a time. :type vpc_id: str :param vpc_id: The ID of the vpc to be deleted. :type enable_dns_support: bool :param enable_dns_support: Specifies whether the DNS server provided by Amazon is enabled for the VPC. :type enable_dns_hostnames: bool :param enable_dns_hostnames: Specifies whether DNS hostnames are provided for the instances launched in this VPC. You can only set this attribute to ``true`` if EnableDnsSupport is also ``true``. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {'VpcId': vpc_id} if enable_dns_support is not None: if enable_dns_support: params['EnableDnsSupport.Value'] = 'true' else: params['EnableDnsSupport.Value'] = 'false' if enable_dns_hostnames is not None: if enable_dns_hostnames: params['EnableDnsHostnames.Value'] = 'true' else: params['EnableDnsHostnames.Value'] = 'false' if dry_run: params['DryRun'] = 'true' return self.get_status('ModifyVpcAttribute', params) # Route Tables def get_all_route_tables(self, route_table_ids=None, filters=None, dry_run=False): """ Retrieve information about your routing tables. You can filter results to return information only about those route tables that match your search parameters. Otherwise, all route tables associated with your account are returned. 
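        A short sketch of typical use (the VPC ID below is an illustrative
        placeholder)::

            tables = conn.get_all_route_tables(
                filters=[('vpc-id', 'vpc-12345678')])
            for rt in tables:
                print rt.id, [r.destination_cidr_block for r in rt.routes]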
:type route_table_ids: list :param route_table_ids: A list of strings with the desired route table IDs. :type filters: list of tuples :param filters: A list of tuples containing filters. Each tuple consists of a filter key and a filter value. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.vpc.routetable.RouteTable` """ params = {} if route_table_ids: self.build_list_params(params, route_table_ids, "RouteTableId") if filters: self.build_filter_params(params, dict(filters)) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeRouteTables', params, [('item', RouteTable)]) def associate_route_table(self, route_table_id, subnet_id, dry_run=False): """ Associates a route table with a specific subnet. :type route_table_id: str :param route_table_id: The ID of the route table to associate. :type subnet_id: str :param subnet_id: The ID of the subnet to associate with. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: str :return: The ID of the association created """ params = { 'RouteTableId': route_table_id, 'SubnetId': subnet_id } if dry_run: params['DryRun'] = 'true' result = self.get_object('AssociateRouteTable', params, ResultSet) return result.associationId def disassociate_route_table(self, association_id, dry_run=False): """ Removes an association from a route table. This will cause all subnets that would've used this association to now use the main routing association instead. :type association_id: str :param association_id: The ID of the association to disassociate. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'AssociationId': association_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DisassociateRouteTable', params) def create_route_table(self, vpc_id, dry_run=False): """ Creates a new route table. :type vpc_id: str :param vpc_id: The VPC ID to associate this route table with. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: The newly created route table :return: A :class:`boto.vpc.routetable.RouteTable` object """ params = {'VpcId': vpc_id} if dry_run: params['DryRun'] = 'true' return self.get_object('CreateRouteTable', params, RouteTable) def delete_route_table(self, route_table_id, dry_run=False): """ Delete a route table. :type route_table_id: str :param route_table_id: The ID of the route table to delete. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'RouteTableId': route_table_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteRouteTable', params) def _replace_route_table_association(self, association_id, route_table_id, dry_run=False): """ Helper function for replace_route_table_association and replace_route_table_association_with_assoc. Should not be used directly. :type association_id: str :param association_id: The ID of the existing association to replace. :type route_table_id: str :param route_table_id: The route table to ID to be used in the association. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
        :rtype: ResultSet
        :return: ResultSet of Amazon response
        """
        params = {
            'AssociationId': association_id,
            'RouteTableId': route_table_id
        }
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_object('ReplaceRouteTableAssociation', params,
                               ResultSet)

    def replace_route_table_assocation(self, association_id,
                                       route_table_id, dry_run=False):
        """
        Replaces a route association with a new route table. This can be
        used to replace the 'main' route table by using the main route
        table association instead of the more common subnet type
        association.

        NOTE: It may be better to use replace_route_table_association_with_assoc
        instead of this function; this function does not return the new
        association ID. This function is retained for backwards compatibility
        (as is the misspelling in its name).

        :type association_id: str
        :param association_id: The ID of the existing association to replace.

        :type route_table_id: str
        :param route_table_id: The ID of the route table to be used in the
            association.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: bool
        :return: True if successful
        """
        return self._replace_route_table_association(
            association_id, route_table_id, dry_run=dry_run).status

    def replace_route_table_association_with_assoc(self, association_id,
                                                   route_table_id,
                                                   dry_run=False):
        """
        Replaces a route association with a new route table. This can be
        used to replace the 'main' route table by using the main route
        table association instead of the more common subnet type
        association. Returns the new association ID.

        :type association_id: str
        :param association_id: The ID of the existing association to replace.

        :type route_table_id: str
        :param route_table_id: The ID of the route table to be used in the
            association.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: str
        :return: New association ID
        """
        return self._replace_route_table_association(
            association_id, route_table_id, dry_run=dry_run).newAssociationId

    def create_route(self, route_table_id, destination_cidr_block,
                     gateway_id=None, instance_id=None, interface_id=None,
                     dry_run=False):
        """
        Creates a new route in the route table within a VPC. The route's
        target can be either a gateway attached to the VPC or a NAT instance
        in the VPC.

        :type route_table_id: str
        :param route_table_id: The ID of the route table for the route.

        :type destination_cidr_block: str
        :param destination_cidr_block: The CIDR address block used for the
            destination match.

        :type gateway_id: str
        :param gateway_id: The ID of the gateway attached to your VPC.

        :type instance_id: str
        :param instance_id: The ID of a NAT instance in your VPC.

        :type interface_id: str
        :param interface_id: Allows routing to network interface attachments.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: bool
        :return: True if successful
        """
        params = {
            'RouteTableId': route_table_id,
            'DestinationCidrBlock': destination_cidr_block
        }
        if gateway_id is not None:
            params['GatewayId'] = gateway_id
        elif instance_id is not None:
            params['InstanceId'] = instance_id
        elif interface_id is not None:
            params['NetworkInterfaceId'] = interface_id
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_status('CreateRoute', params)

    def replace_route(self, route_table_id, destination_cidr_block,
                      gateway_id=None, instance_id=None, interface_id=None,
                      dry_run=False):
        """
        Replaces an existing route within a route table in a VPC.

        :type route_table_id: str
        :param route_table_id: The ID of the route table for the route.
:type destination_cidr_block: str :param destination_cidr_block: The CIDR address block used for the destination match. :type gateway_id: str :param gateway_id: The ID of the gateway attached to your VPC. :type instance_id: str :param instance_id: The ID of a NAT instance in your VPC. :type interface_id: str :param interface_id: Allows routing to network interface attachments. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = { 'RouteTableId': route_table_id, 'DestinationCidrBlock': destination_cidr_block } if gateway_id is not None: params['GatewayId'] = gateway_id elif instance_id is not None: params['InstanceId'] = instance_id elif interface_id is not None: params['NetworkInterfaceId'] = interface_id if dry_run: params['DryRun'] = 'true' return self.get_status('ReplaceRoute', params) def delete_route(self, route_table_id, destination_cidr_block, dry_run=False): """ Deletes a route from a route table within a VPC. :type route_table_id: str :param route_table_id: The ID of the route table with the route. :type destination_cidr_block: str :param destination_cidr_block: The CIDR address block used for destination match. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = { 'RouteTableId': route_table_id, 'DestinationCidrBlock': destination_cidr_block } if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteRoute', params) #Network ACLs def get_all_network_acls(self, network_acl_ids=None, filters=None): """ Retrieve information about your network acls. You can filter results to return information only about those network acls that match your search parameters. Otherwise, all network acls associated with your account are returned. :type network_acl_ids: list :param network_acl_ids: A list of strings with the desired network ACL IDs. :type filters: list of tuples :param filters: A list of tuples containing filters. Each tuple consists of a filter key and a filter value. :rtype: list :return: A list of :class:`boto.vpc.networkacl.NetworkAcl` """ params = {} if network_acl_ids: self.build_list_params(params, network_acl_ids, "NetworkAclId") if filters: self.build_filter_params(params, dict(filters)) return self.get_list('DescribeNetworkAcls', params, [('item', NetworkAcl)]) def associate_network_acl(self, network_acl_id, subnet_id): """ Associates a network acl with a specific subnet. :type network_acl_id: str :param network_acl_id: The ID of the network ACL to associate. :type subnet_id: str :param subnet_id: The ID of the subnet to associate with. :rtype: str :return: The ID of the association created """ acl = self.get_all_network_acls(filters=[('association.subnet-id', subnet_id)])[0] association = [ association for association in acl.associations if association.subnet_id == subnet_id ][0] params = { 'AssociationId': association.id, 'NetworkAclId': network_acl_id } result = self.get_object('ReplaceNetworkAclAssociation', params, ResultSet) return result.newAssociationId def disassociate_network_acl(self, subnet_id, vpc_id=None): """ Figures out what the default ACL is for the VPC, and associates current network ACL with the default. :type subnet_id: str :param subnet_id: The ID of the subnet to which the ACL belongs. :type vpc_id: str :param vpc_id: The ID of the VPC to which the ACL/subnet belongs. Queries EC2 if omitted. 
        :rtype: str
        :return: The ID of the association created
        """
        if not vpc_id:
            vpc_id = self.get_all_subnets([subnet_id])[0].vpc_id
        acls = self.get_all_network_acls(filters=[('vpc-id', vpc_id),
                                                  ('default', 'true')])
        default_acl_id = acls[0].id
        return self.associate_network_acl(default_acl_id, subnet_id)

    def create_network_acl(self, vpc_id):
        """
        Creates a new network ACL.

        :type vpc_id: str
        :param vpc_id: The VPC ID to associate this network ACL with.

        :rtype: The newly created network ACL
        :return: A :class:`boto.vpc.networkacl.NetworkAcl` object
        """
        params = {'VpcId': vpc_id}
        return self.get_object('CreateNetworkAcl', params, NetworkAcl)

    def delete_network_acl(self, network_acl_id):
        """
        Delete a network ACL.

        :type network_acl_id: str
        :param network_acl_id: The ID of the network_acl to delete.

        :rtype: bool
        :return: True if successful
        """
        params = {'NetworkAclId': network_acl_id}
        return self.get_status('DeleteNetworkAcl', params)

    def create_network_acl_entry(self, network_acl_id, rule_number, protocol,
                                 rule_action, cidr_block, egress=None,
                                 icmp_code=None, icmp_type=None,
                                 port_range_from=None, port_range_to=None):
        """
        Creates a new network ACL entry in a network ACL within a VPC.

        :type network_acl_id: str
        :param network_acl_id: The ID of the network ACL for this network
            ACL entry.

        :type rule_number: int
        :param rule_number: The rule number to assign to the entry (for
            example, 100).

        :type protocol: int
        :param protocol: Valid values: -1 or a protocol number
            (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)

        :type rule_action: str
        :param rule_action: Indicates whether to allow or deny traffic that
            matches the rule.

        :type cidr_block: str
        :param cidr_block: The CIDR range to allow or deny, in CIDR notation
            (for example, 172.16.0.0/24).

        :type egress: bool
        :param egress: Indicates whether this rule applies to egress traffic
            from the subnet (true) or ingress traffic to the subnet (false).

        :type icmp_type: int
        :param icmp_type: For the ICMP protocol, the ICMP type. You can use
            -1 to specify all ICMP types.

        :type icmp_code: int
        :param icmp_code: For the ICMP protocol, the ICMP code. You can use
            -1 to specify all ICMP codes for the given ICMP type.

        :type port_range_from: int
        :param port_range_from: The first port in the range.

        :type port_range_to: int
        :param port_range_to: The last port in the range.

        :rtype: bool
        :return: True if successful
        """
        params = {
            'NetworkAclId': network_acl_id,
            'RuleNumber': rule_number,
            'Protocol': protocol,
            'RuleAction': rule_action,
            'CidrBlock': cidr_block
        }
        if egress is not None:
            if isinstance(egress, bool):
                egress = str(egress).lower()
            params['Egress'] = egress
        if icmp_code is not None:
            params['Icmp.Code'] = icmp_code
        if icmp_type is not None:
            params['Icmp.Type'] = icmp_type
        if port_range_from is not None:
            params['PortRange.From'] = port_range_from
        if port_range_to is not None:
            params['PortRange.To'] = port_range_to
        return self.get_status('CreateNetworkAclEntry', params)

    def replace_network_acl_entry(self, network_acl_id, rule_number, protocol,
                                  rule_action, cidr_block, egress=None,
                                  icmp_code=None, icmp_type=None,
                                  port_range_from=None, port_range_to=None):
        """
        Replaces an existing network ACL entry in a network ACL within a VPC.

        :type network_acl_id: str
        :param network_acl_id: The ID of the network ACL containing the
            entry you want to replace.

        :type rule_number: int
        :param rule_number: The rule number that you want to replace (for
            example, 100).
:type protocol: int :param protocol: Valid values: -1 or a protocol number (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) :type rule_action: str :param rule_action: Indicates whether to allow or deny traffic that matches the rule. :type cidr_block: str :param cidr_block: The CIDR range to allow or deny, in CIDR notation (for example, 172.16.0.0/24). :type egress: bool :param egress: Indicates whether this rule applies to egress traffic from the subnet (true) or ingress traffic to the subnet (false). :type icmp_type: int :param icmp_type: For the ICMP protocol, the ICMP type. You can use -1 to specify all ICMP types. :type icmp_code: int :param icmp_code: For the ICMP protocol, the ICMP code. You can use -1 to specify all ICMP codes for the given ICMP type. :type port_range_from: int :param port_range_from: The first port in the range. :type port_range_to: int :param port_range_to: The last port in the range. :rtype: bool :return: True if successful """ params = { 'NetworkAclId': network_acl_id, 'RuleNumber': rule_number, 'Protocol': protocol, 'RuleAction': rule_action, 'CidrBlock': cidr_block } if egress is not None: if isinstance(egress, bool): egress = str(egress).lower() params['Egress'] = egress if icmp_code is not None: params['Icmp.Code'] = icmp_code if icmp_type is not None: params['Icmp.Type'] = icmp_type if port_range_from is not None: params['PortRange.From'] = port_range_from if port_range_to is not None: params['PortRange.To'] = port_range_to return self.get_status('ReplaceNetworkAclEntry', params) def delete_network_acl_entry(self, network_acl_id, rule_number, egress=None): """ Deletes a network ACL entry from a network ACL within a VPC. :type network_acl_id: str :param network_acl_id: The ID of the network ACL with the network ACL entry. :type rule_number: int :param rule_number: The rule number for the entry to delete. :type egress: bool :param egress: Specifies whether the rule to delete is an egress rule (true) or ingress rule (false). :rtype: bool :return: True if successful """ params = { 'NetworkAclId': network_acl_id, 'RuleNumber': rule_number } if egress is not None: if isinstance(egress, bool): egress = str(egress).lower() params['Egress'] = egress return self.get_status('DeleteNetworkAclEntry', params) # Internet Gateways def get_all_internet_gateways(self, internet_gateway_ids=None, filters=None, dry_run=False): """ Get a list of internet gateways. You can filter results to return information about only those gateways that you're interested in. :type internet_gateway_ids: list :param internet_gateway_ids: A list of strings with the desired gateway IDs. :type filters: list of tuples :param filters: A list of tuples containing filters. Each tuple consists of a filter key and a filter value. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. """ params = {} if internet_gateway_ids: self.build_list_params(params, internet_gateway_ids, 'InternetGatewayId') if filters: self.build_filter_params(params, dict(filters)) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeInternetGateways', params, [('item', InternetGateway)]) def create_internet_gateway(self, dry_run=False): """ Creates an internet gateway for VPC. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: Newly created internet gateway. 
        :return: A :class:`boto.vpc.internetgateway.InternetGateway` object
        """
        params = {}

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_object('CreateInternetGateway', params,
                               InternetGateway)

    def delete_internet_gateway(self, internet_gateway_id, dry_run=False):
        """
        Deletes an internet gateway from the VPC.

        :type internet_gateway_id: str
        :param internet_gateway_id: The ID of the internet gateway to delete.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: Bool
        :return: True if successful
        """
        params = {'InternetGatewayId': internet_gateway_id}

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_status('DeleteInternetGateway', params)

    def attach_internet_gateway(self, internet_gateway_id, vpc_id,
                                dry_run=False):
        """
        Attach an internet gateway to a specific VPC.

        :type internet_gateway_id: str
        :param internet_gateway_id: The ID of the internet gateway to attach.

        :type vpc_id: str
        :param vpc_id: The ID of the VPC to attach to.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: Bool
        :return: True if successful
        """
        params = {
            'InternetGatewayId': internet_gateway_id,
            'VpcId': vpc_id
        }

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_status('AttachInternetGateway', params)

    def detach_internet_gateway(self, internet_gateway_id, vpc_id,
                                dry_run=False):
        """
        Detach an internet gateway from a specific VPC.

        :type internet_gateway_id: str
        :param internet_gateway_id: The ID of the internet gateway to detach.

        :type vpc_id: str
        :param vpc_id: The ID of the VPC to detach from.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: Bool
        :return: True if successful
        """
        params = {
            'InternetGatewayId': internet_gateway_id,
            'VpcId': vpc_id
        }

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_status('DetachInternetGateway', params)

    # Customer Gateways

    def get_all_customer_gateways(self, customer_gateway_ids=None,
                                  filters=None, dry_run=False):
        """
        Retrieve information about your CustomerGateways. You can filter
        results to return information only about those CustomerGateways that
        match your search parameters. Otherwise, all CustomerGateways
        associated with your account are returned.

        :type customer_gateway_ids: list
        :param customer_gateway_ids: A list of strings with the desired
            CustomerGateway ID's.

        :type filters: list of tuples
        :param filters: A list of tuples containing filters. Each tuple
            consists of a filter key and a filter value. Possible filter
            keys are:

            - *state*, the state of the CustomerGateway
              (pending,available,deleting,deleted)
            - *type*, the type of customer gateway (ipsec.1)
            - *ipAddress*, the IP address of customer gateway's
              internet-routable external interface

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: list
        :return: A list of :class:`boto.vpc.customergateway.CustomerGateway`
        """
        params = {}
        if customer_gateway_ids:
            self.build_list_params(params, customer_gateway_ids,
                                   'CustomerGatewayId')
        if filters:
            self.build_filter_params(params, dict(filters))

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_list('DescribeCustomerGateways', params,
                             [('item', CustomerGateway)])

    def create_customer_gateway(self, type, ip_address, bgp_asn,
                                dry_run=False):
        """
        Create a new Customer Gateway

        :type type: str
        :param type: Type of VPN Connection. Only valid value currently is
            'ipsec.1'

        :type ip_address: str
        :param ip_address: Internet-routable IP address for customer's
            gateway. Must be a static address.
        :type bgp_asn: int
        :param bgp_asn: Customer gateway's Border Gateway Protocol (BGP)
            Autonomous System Number (ASN)

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: The newly created CustomerGateway
        :return: A :class:`boto.vpc.customergateway.CustomerGateway` object
        """
        params = {'Type': type,
                  'IpAddress': ip_address,
                  'BgpAsn': bgp_asn}
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_object('CreateCustomerGateway', params,
                               CustomerGateway)

    def delete_customer_gateway(self, customer_gateway_id, dry_run=False):
        """
        Delete a Customer Gateway.

        :type customer_gateway_id: str
        :param customer_gateway_id: The ID of the customer_gateway to be
            deleted.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: bool
        :return: True if successful
        """
        params = {'CustomerGatewayId': customer_gateway_id}
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_status('DeleteCustomerGateway', params)

    # VPN Gateways

    def get_all_vpn_gateways(self, vpn_gateway_ids=None, filters=None,
                             dry_run=False):
        """
        Retrieve information about your VpnGateways. You can filter results
        to return information only about those VpnGateways that match your
        search parameters. Otherwise, all VpnGateways associated with your
        account are returned.

        :type vpn_gateway_ids: list
        :param vpn_gateway_ids: A list of strings with the desired VpnGateway
            ID's

        :type filters: list of tuples
        :param filters: A list of tuples containing filters. Each tuple
            consists of a filter key and a filter value. Possible filter
            keys are:

            - *state*, a list of states of the VpnGateway
              (pending,available,deleting,deleted)
            - *type*, a list of types of VPN gateway (ipsec.1)
            - *availabilityZone*, a list of Availability zones the
              VPN gateway is in.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: list
        :return: A list of :class:`boto.vpc.vpngateway.VpnGateway`
        """
        params = {}
        if vpn_gateway_ids:
            self.build_list_params(params, vpn_gateway_ids, 'VpnGatewayId')
        if filters:
            self.build_filter_params(params, dict(filters))

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_list('DescribeVpnGateways', params,
                             [('item', VpnGateway)])

    def create_vpn_gateway(self, type, availability_zone=None, dry_run=False):
        """
        Create a new Vpn Gateway

        :type type: str
        :param type: Type of VPN Connection. Only valid value currently is
            'ipsec.1'

        :type availability_zone: str
        :param availability_zone: The Availability Zone where you want the
            VPN gateway.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: The newly created VpnGateway
        :return: A :class:`boto.vpc.vpngateway.VpnGateway` object
        """
        params = {'Type': type}
        if availability_zone:
            params['AvailabilityZone'] = availability_zone
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_object('CreateVpnGateway', params, VpnGateway)

    def delete_vpn_gateway(self, vpn_gateway_id, dry_run=False):
        """
        Delete a Vpn Gateway.

        :type vpn_gateway_id: str
        :param vpn_gateway_id: The ID of the vpn_gateway to be deleted.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: bool
        :return: True if successful
        """
        params = {'VpnGatewayId': vpn_gateway_id}
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_status('DeleteVpnGateway', params)

    def attach_vpn_gateway(self, vpn_gateway_id, vpc_id, dry_run=False):
        """
        Attaches a VPN gateway to a VPC.
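        A hedged sketch of typical use (the VPC ID is an illustrative
        placeholder)::

            vgw = conn.create_vpn_gateway('ipsec.1')
            attachment = conn.attach_vpn_gateway(vgw.id, 'vpc-12345678')
            print attachment.state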
        :type vpn_gateway_id: str
        :param vpn_gateway_id: The ID of the vpn_gateway to attach

        :type vpc_id: str
        :param vpc_id: The ID of the VPC you want to attach the gateway to.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: An attachment
        :return: a :class:`boto.vpc.vpngateway.Attachment`
        """
        params = {'VpnGatewayId': vpn_gateway_id,
                  'VpcId': vpc_id}
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_object('AttachVpnGateway', params, Attachment)

    def detach_vpn_gateway(self, vpn_gateway_id, vpc_id, dry_run=False):
        """
        Detaches a VPN gateway from a VPC.

        :type vpn_gateway_id: str
        :param vpn_gateway_id: The ID of the vpn_gateway to detach

        :type vpc_id: str
        :param vpc_id: The ID of the VPC you want to detach the gateway from.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: bool
        :return: True if successful
        """
        params = {'VpnGatewayId': vpn_gateway_id,
                  'VpcId': vpc_id}
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_status('DetachVpnGateway', params)

    # Subnets

    def get_all_subnets(self, subnet_ids=None, filters=None, dry_run=False):
        """
        Retrieve information about your Subnets. You can filter results to
        return information only about those Subnets that match your search
        parameters. Otherwise, all Subnets associated with your account
        are returned.

        :type subnet_ids: list
        :param subnet_ids: A list of strings with the desired Subnet ID's

        :type filters: list of tuples
        :param filters: A list of tuples containing filters. Each tuple
            consists of a filter key and a filter value. Possible filter
            keys are:

            - *state*, a list of states of the Subnet (pending,available)
            - *vpcId*, a list of IDs of the VPC the subnet is in.
            - *cidrBlock*, a list of CIDR blocks of the subnet
            - *availabilityZone*, list of the Availability Zones the
              subnet is in.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: list
        :return: A list of :class:`boto.vpc.subnet.Subnet`
        """
        params = {}
        if subnet_ids:
            self.build_list_params(params, subnet_ids, 'SubnetId')
        if filters:
            self.build_filter_params(params, dict(filters))

        if dry_run:
            params['DryRun'] = 'true'

        return self.get_list('DescribeSubnets', params, [('item', Subnet)])

    def create_subnet(self, vpc_id, cidr_block, availability_zone=None,
                      dry_run=False):
        """
        Create a new Subnet

        :type vpc_id: str
        :param vpc_id: The ID of the VPC where you want to create the subnet.

        :type cidr_block: str
        :param cidr_block: The CIDR block you want the subnet to cover.

        :type availability_zone: str
        :param availability_zone: The AZ you want the subnet in

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.

        :rtype: The newly created Subnet
        :return: A :class:`boto.vpc.subnet.Subnet` object
        """
        params = {'VpcId': vpc_id,
                  'CidrBlock': cidr_block}
        if availability_zone:
            params['AvailabilityZone'] = availability_zone
        if dry_run:
            params['DryRun'] = 'true'
        return self.get_object('CreateSubnet', params, Subnet)

    def delete_subnet(self, subnet_id, dry_run=False):
        """
        Delete a subnet.

        :type subnet_id: str
        :param subnet_id: The ID of the subnet to be deleted.

        :type dry_run: bool
        :param dry_run: Set to True if the operation should not actually run.
:rtype: bool :return: True if successful """ params = {'SubnetId': subnet_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteSubnet', params) # DHCP Options def get_all_dhcp_options(self, dhcp_options_ids=None, filters=None, dry_run=False): """ Retrieve information about your DhcpOptions. :type dhcp_options_ids: list :param dhcp_options_ids: A list of strings with the desired DhcpOption ID's :type filters: list of tuples :param filters: A list of tuples containing filters. Each tuple consists of a filter key and a filter value. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.vpc.dhcpoptions.DhcpOptions` """ params = {} if dhcp_options_ids: self.build_list_params(params, dhcp_options_ids, 'DhcpOptionsId') if filters: self.build_filter_params(params, dict(filters)) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeDhcpOptions', params, [('item', DhcpOptions)]) def create_dhcp_options(self, domain_name=None, domain_name_servers=None, ntp_servers=None, netbios_name_servers=None, netbios_node_type=None, dry_run=False): """ Create a new DhcpOption This corresponds to http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-query-CreateDhcpOptions.html :type domain_name: str :param domain_name: A domain name of your choice (for example, example.com) :type domain_name_servers: list of strings :param domain_name_servers: The IP address of a domain name server. You can specify up to four addresses. :type ntp_servers: list of strings :param ntp_servers: The IP address of a Network Time Protocol (NTP) server. You can specify up to four addresses. :type netbios_name_servers: list of strings :param netbios_name_servers: The IP address of a NetBIOS name server. You can specify up to four addresses. :type netbios_node_type: str :param netbios_node_type: The NetBIOS node type (1, 2, 4, or 8). For more information about the values, see RFC 2132. We recommend you only use 2 at this time (broadcast and multicast are currently not supported). :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: The newly created DhcpOption :return: A :class:`boto.vpc.customergateway.DhcpOption` object """ key_counter = 1 params = {} def insert_option(params, name, value): params['DhcpConfiguration.%d.Key' % (key_counter,)] = name if isinstance(value, (list, tuple)): for idx, value in enumerate(value, 1): key_name = 'DhcpConfiguration.%d.Value.%d' % ( key_counter, idx) params[key_name] = value else: key_name = 'DhcpConfiguration.%d.Value.1' % (key_counter,) params[key_name] = value return key_counter + 1 if domain_name: key_counter = insert_option(params, 'domain-name', domain_name) if domain_name_servers: key_counter = insert_option(params, 'domain-name-servers', domain_name_servers) if ntp_servers: key_counter = insert_option(params, 'ntp-servers', ntp_servers) if netbios_name_servers: key_counter = insert_option(params, 'netbios-name-servers', netbios_name_servers) if netbios_node_type: key_counter = insert_option(params, 'netbios-node-type', netbios_node_type) if dry_run: params['DryRun'] = 'true' return self.get_object('CreateDhcpOptions', params, DhcpOptions) def delete_dhcp_options(self, dhcp_options_id, dry_run=False): """ Delete a DHCP Options :type dhcp_options_id: str :param dhcp_options_id: The ID of the DHCP Options to be deleted. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
:rtype: bool :return: True if successful """ params = {'DhcpOptionsId': dhcp_options_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteDhcpOptions', params) def associate_dhcp_options(self, dhcp_options_id, vpc_id, dry_run=False): """ Associate a set of Dhcp Options with a VPC. :type dhcp_options_id: str :param dhcp_options_id: The ID of the Dhcp Options :type vpc_id: str :param vpc_id: The ID of the VPC. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'DhcpOptionsId': dhcp_options_id, 'VpcId': vpc_id} if dry_run: params['DryRun'] = 'true' return self.get_status('AssociateDhcpOptions', params) # VPN Connection def get_all_vpn_connections(self, vpn_connection_ids=None, filters=None, dry_run=False): """ Retrieve information about your VPN_CONNECTIONs. You can filter results to return information only about those VPN_CONNECTIONs that match your search parameters. Otherwise, all VPN_CONNECTIONs associated with your account are returned. :type vpn_connection_ids: list :param vpn_connection_ids: A list of strings with the desired VPN_CONNECTION ID's :type filters: list of tuples :param filters: A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are: - *state*, a list of states of the VPN_CONNECTION pending,available,deleting,deleted - *type*, a list of types of connection, currently 'ipsec.1' - *customerGatewayId*, a list of IDs of the customer gateway associated with the VPN - *vpnGatewayId*, a list of IDs of the VPN gateway associated with the VPN connection :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: list :return: A list of :class:`boto.vpn_connection.vpnconnection.VpnConnection` """ params = {} if vpn_connection_ids: self.build_list_params(params, vpn_connection_ids, 'VpnConnectionId') if filters: self.build_filter_params(params, dict(filters)) if dry_run: params['DryRun'] = 'true' return self.get_list('DescribeVpnConnections', params, [('item', VpnConnection)]) def create_vpn_connection(self, type, customer_gateway_id, vpn_gateway_id, static_routes_only=None, dry_run=False): """ Create a new VPN Connection. :type type: str :param type: The type of VPN Connection. Currently only 'ipsec.1' is supported :type customer_gateway_id: str :param customer_gateway_id: The ID of the customer gateway. :type vpn_gateway_id: str :param vpn_gateway_id: The ID of the VPN gateway. :type static_routes_only: bool :param static_routes_only: Indicates whether the VPN connection requires static routes. If you are creating a VPN connection for a device that does not support BGP, you must specify true. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: The newly created VpnConnection :return: A :class:`boto.vpc.vpnconnection.VpnConnection` object """ params = {'Type': type, 'CustomerGatewayId': customer_gateway_id, 'VpnGatewayId': vpn_gateway_id} if static_routes_only is not None: if isinstance(static_routes_only, bool): static_routes_only = str(static_routes_only).lower() params['Options.StaticRoutesOnly'] = static_routes_only if dry_run: params['DryRun'] = 'true' return self.get_object('CreateVpnConnection', params, VpnConnection) def delete_vpn_connection(self, vpn_connection_id, dry_run=False): """ Delete a VPN Connection. :type vpn_connection_id: str :param vpn_connection_id: The ID of the vpn_connection to be deleted. 
:type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = {'VpnConnectionId': vpn_connection_id} if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteVpnConnection', params) def disable_vgw_route_propagation(self, route_table_id, gateway_id, dry_run=False): """ Disables a virtual private gateway (VGW) from propagating routes to the routing tables of an Amazon VPC. :type route_table_id: str :param route_table_id: The ID of the routing table. :type gateway_id: str :param gateway_id: The ID of the virtual private gateway. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = { 'RouteTableId': route_table_id, 'GatewayId': gateway_id, } if dry_run: params['DryRun'] = 'true' return self.get_status('DisableVgwRoutePropagation', params) def enable_vgw_route_propagation(self, route_table_id, gateway_id, dry_run=False): """ Enables a virtual private gateway (VGW) to propagate routes to the routing tables of an Amazon VPC. :type route_table_id: str :param route_table_id: The ID of the routing table. :type gateway_id: str :param gateway_id: The ID of the virtual private gateway. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = { 'RouteTableId': route_table_id, 'GatewayId': gateway_id, } if dry_run: params['DryRun'] = 'true' return self.get_status('EnableVgwRoutePropagation', params) def create_vpn_connection_route(self, destination_cidr_block, vpn_connection_id, dry_run=False): """ Creates a new static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway. :type destination_cidr_block: str :param destination_cidr_block: The CIDR block associated with the local subnet of the customer data center. :type vpn_connection_id: str :param vpn_connection_id: The ID of the VPN connection. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. :rtype: bool :return: True if successful """ params = { 'DestinationCidrBlock': destination_cidr_block, 'VpnConnectionId': vpn_connection_id, } if dry_run: params['DryRun'] = 'true' return self.get_status('CreateVpnConnectionRoute', params) def delete_vpn_connection_route(self, destination_cidr_block, vpn_connection_id, dry_run=False): """ Deletes a static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway. :type destination_cidr_block: str :param destination_cidr_block: The CIDR block associated with the local subnet of the customer data center. :type vpn_connection_id: str :param vpn_connection_id: The ID of the VPN connection. :type dry_run: bool :param dry_run: Set to True if the operation should not actually run. 
:rtype: bool :return: True if successful """ params = { 'DestinationCidrBlock': destination_cidr_block, 'VpnConnectionId': vpn_connection_id, } if dry_run: params['DryRun'] = 'true' return self.get_status('DeleteVpnConnectionRoute', params) boto-2.20.1/boto/vpc/customergateway.py000066400000000000000000000036511225267101000200700ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a Customer Gateway """ from boto.ec2.ec2object import TaggedEC2Object class CustomerGateway(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.type = None self.state = None self.ip_address = None self.bgp_asn = None def __repr__(self): return 'CustomerGateway:%s' % self.id def endElement(self, name, value, connection): if name == 'customerGatewayId': self.id = value elif name == 'ipAddress': self.ip_address = value elif name == 'type': self.type = value elif name == 'state': self.state = value elif name == 'bgpAsn': self.bgp_asn = int(value) else: setattr(self, name, value) boto-2.20.1/boto/vpc/dhcpoptions.py000066400000000000000000000046571225267101000172060ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
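# A hedged usage sketch (the option values and the VPC ID below are
# illustrative placeholders): DHCP option sets are normally created and
# associated through VPCConnection, e.g.:
#
#     opts = conn.create_dhcp_options(domain_name='example.com',
#                                     domain_name_servers=['10.0.0.2'])
#     conn.associate_dhcp_options(opts.id, 'vpc-12345678')
#
# The classes below parse the resulting DescribeDhcpOptions XML.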
""" Represents a DHCP Options set """ from boto.ec2.ec2object import TaggedEC2Object class DhcpValueSet(list): def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'value': self.append(value) class DhcpConfigSet(dict): def startElement(self, name, attrs, connection): if name == 'valueSet': if self._name not in self: self[self._name] = DhcpValueSet() return self[self._name] def endElement(self, name, value, connection): if name == 'key': self._name = value class DhcpOptions(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.options = None def __repr__(self): return 'DhcpOptions:%s' % self.id def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'dhcpConfigurationSet': self.options = DhcpConfigSet() return self.options def endElement(self, name, value, connection): if name == 'dhcpOptionsId': self.id = value else: setattr(self, name, value) boto-2.20.1/boto/vpc/internetgateway.py000066400000000000000000000050221225267101000200510ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Represents an Internet Gateway """ from boto.ec2.ec2object import TaggedEC2Object from boto.resultset import ResultSet class InternetGateway(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.attachments = [] def __repr__(self): return 'InternetGateway:%s' % self.id def startElement(self, name, attrs, connection): result = super(InternetGateway, self).startElement(name, attrs, connection) if result is not None: # Parent found an interested element, just return it return result if name == 'attachmentSet': self.attachments = ResultSet([('item', InternetGatewayAttachment)]) return self.attachments else: return None def endElement(self, name, value, connection): if name == 'internetGatewayId': self.id = value else: setattr(self, name, value) class InternetGatewayAttachment(object): def __init__(self, connection=None): self.vpc_id = None self.state = None def __repr__(self): return 'InternetGatewayAttachment:%s' % self.vpc_id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'vpcId': self.vpc_id = value elif name == 'state': self.state = value boto-2.20.1/boto/vpc/networkacl.py000066400000000000000000000115561225267101000170210ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a Network ACL """ from boto.ec2.ec2object import TaggedEC2Object from boto.resultset import ResultSet class Icmp(object): """ Defines the ICMP code and type. 
""" def __init__(self, connection=None): self.code = None self.type = None def __repr__(self): return 'Icmp::code:%s, type:%s)' % ( self.code, self.type) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'code': self.code = value elif name == 'type': self.type = value class NetworkAcl(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.vpc_id = None self.network_acl_entries = [] self.associations = [] def __repr__(self): return 'NetworkAcl:%s' % self.id def startElement(self, name, attrs, connection): result = super(NetworkAcl, self).startElement(name, attrs, connection) if result is not None: # Parent found an interested element, just return it return result if name == 'entrySet': self.network_acl_entries = ResultSet([('item', NetworkAclEntry)]) return self.network_acl_entries elif name == 'associationSet': self.associations = ResultSet([('item', NetworkAclAssociation)]) return self.associations else: return None def endElement(self, name, value, connection): if name == 'networkAclId': self.id = value elif name == 'vpcId': self.vpc_id = value else: setattr(self, name, value) class NetworkAclEntry(object): def __init__(self, connection=None): self.rule_number = None self.protocol = None self.rule_action = None self.egress = None self.cidr_block = None self.port_range = PortRange() self.icmp = Icmp() def __repr__(self): return 'Acl:%s' % self.rule_number def startElement(self, name, attrs, connection): if name == 'portRange': return self.port_range elif name == 'icmpTypeCode': return self.icmp else: return None def endElement(self, name, value, connection): if name == 'cidrBlock': self.cidr_block = value elif name == 'egress': self.egress = value elif name == 'protocol': self.protocol = value elif name == 'ruleAction': self.rule_action = value elif name == 'ruleNumber': self.rule_number = value class NetworkAclAssociation(object): def __init__(self, connection=None): self.id = None self.subnet_id = None self.network_acl_id = None def __repr__(self): return 'NetworkAclAssociation:%s' % self.id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'networkAclAssociationId': self.id = value elif name == 'networkAclId': self.route_table_id = value elif name == 'subnetId': self.subnet_id = value class PortRange(object): """ Define the port range for the ACL entry if it is tcp / udp """ def __init__(self, connection=None): self.from_port = None self.to_port = None def __repr__(self): return 'PortRange:(%s-%s)' % ( self.from_port, self.to_port) def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'from': self.from_port = value elif name == 'to': self.to_port = value boto-2.20.1/boto/vpc/routetable.py000066400000000000000000000071431225267101000170130ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be 
included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a Route Table """ from boto.ec2.ec2object import TaggedEC2Object from boto.resultset import ResultSet class RouteTable(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.vpc_id = None self.routes = [] self.associations = [] def __repr__(self): return 'RouteTable:%s' % self.id def startElement(self, name, attrs, connection): result = super(RouteTable, self).startElement(name, attrs, connection) if result is not None: # Parent found an interested element, just return it return result if name == 'routeSet': self.routes = ResultSet([('item', Route)]) return self.routes elif name == 'associationSet': self.associations = ResultSet([('item', RouteAssociation)]) return self.associations else: return None def endElement(self, name, value, connection): if name == 'routeTableId': self.id = value elif name == 'vpcId': self.vpc_id = value else: setattr(self, name, value) class Route(object): def __init__(self, connection=None): self.destination_cidr_block = None self.gateway_id = None self.instance_id = None self.state = None def __repr__(self): return 'Route:%s' % self.destination_cidr_block def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'destinationCidrBlock': self.destination_cidr_block = value elif name == 'gatewayId': self.gateway_id = value elif name == 'instanceId': self.instance_id = value elif name == 'state': self.state = value class RouteAssociation(object): def __init__(self, connection=None): self.id = None self.route_table_id = None self.subnet_id = None self.main = False def __repr__(self): return 'RouteAssociation:%s' % self.id def startElement(self, name, attrs, connection): return None def endElement(self, name, value, connection): if name == 'routeTableAssociationId': self.id = value elif name == 'routeTableId': self.route_table_id = value elif name == 'subnetId': self.subnet_id = value elif name == 'main': self.main = value == 'true' boto-2.20.1/boto/vpc/subnet.py000066400000000000000000000040671225267101000161470ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a Subnet """ from boto.ec2.ec2object import TaggedEC2Object class Subnet(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.vpc_id = None self.state = None self.cidr_block = None self.available_ip_address_count = 0 self.availability_zone = None def __repr__(self): return 'Subnet:%s' % self.id def endElement(self, name, value, connection): if name == 'subnetId': self.id = value elif name == 'vpcId': self.vpc_id = value elif name == 'state': self.state = value elif name == 'cidrBlock': self.cidr_block = value elif name == 'availableIpAddressCount': self.available_ip_address_count = int(value) elif name == 'availabilityZone': self.availability_zone = value else: setattr(self, name, value) boto-2.20.1/boto/vpc/vpc.py000066400000000000000000000062051225267101000154330ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Represents a Virtual Private Cloud. """ from boto.ec2.ec2object import TaggedEC2Object class VPC(TaggedEC2Object): def __init__(self, connection=None): """ Represents a VPC. :ivar id: The unique ID of the VPC. :ivar dhcp_options_id: The ID of the set of DHCP options you've associated with the VPC (or default if the default options are associated with the VPC). :ivar state: The current state of the VPC. :ivar cidr_block: The CIDR block for the VPC. :ivar is_default: Indicates whether the VPC is the default VPC. :ivar instance_tenancy: The allowed tenancy of instances launched into the VPC. 
""" TaggedEC2Object.__init__(self, connection) self.id = None self.dhcp_options_id = None self.state = None self.cidr_block = None self.is_default = None self.instance_tenancy = None def __repr__(self): return 'VPC:%s' % self.id def endElement(self, name, value, connection): if name == 'vpcId': self.id = value elif name == 'dhcpOptionsId': self.dhcp_options_id = value elif name == 'state': self.state = value elif name == 'cidrBlock': self.cidr_block = value elif name == 'isDefault': self.is_default = True if value == 'true' else False elif name == 'instanceTenancy': self.instance_tenancy = value else: setattr(self, name, value) def delete(self): return self.connection.delete_vpc(self.id) def _update(self, updated): self.__dict__.update(updated.__dict__) def update(self, validate=False, dry_run=False): vpc_list = self.connection.get_all_vpcs( [self.id], dry_run=dry_run ) if len(vpc_list): updated_vpc = vpc_list[0] self._update(updated_vpc) elif validate: raise ValueError('%s is not a valid VPC ID' % (self.id,)) return self.state boto-2.20.1/boto/vpc/vpnconnection.py000066400000000000000000000170451225267101000175320ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import boto from datetime import datetime from boto.resultset import ResultSet """ Represents a VPN Connectionn """ from boto.ec2.ec2object import TaggedEC2Object class VpnConnectionOptions(object): """ Represents VPN connection options :ivar static_routes_only: Indicates whether the VPN connection uses static routes only. Static routes must be used for devices that don't support BGP. """ def __init__(self, static_routes_only=None): self.static_routes_only = static_routes_only def __repr__(self): return 'VpnConnectionOptions' def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'staticRoutesOnly': self.static_routes_only = True if value == 'true' else False else: setattr(self, name, value) class VpnStaticRoute(object): """ Represents a static route for a VPN connection. :ivar destination_cidr_block: The CIDR block associated with the local subnet of the customer data center. :ivar source: Indicates how the routes were provided. :ivar state: The current state of the static route. 
""" def __init__(self, destination_cidr_block=None, source=None, state=None): self.destination_cidr_block = destination_cidr_block self.source = source self.available = state def __repr__(self): return 'VpnStaticRoute: %s' % self.destination_cidr_block def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'destinationCidrBlock': self.destination_cidr_block = value elif name == 'source': self.source = value elif name == 'state': self.state = value else: setattr(self, name, value) class VpnTunnel(object): """ Represents telemetry for a VPN tunnel :ivar outside_ip_address: The Internet-routable IP address of the virtual private gateway's outside interface. :ivar status: The status of the VPN tunnel. Valid values: UP | DOWN :ivar last_status_change: The date and time of the last change in status. :ivar status_message: If an error occurs, a description of the error. :ivar accepted_route_count: The number of accepted routes. """ def __init__(self, outside_ip_address=None, status=None, last_status_change=None, status_message=None, accepted_route_count=None): self.outside_ip_address = outside_ip_address self.status = status self.last_status_change = last_status_change self.status_message = status_message self.accepted_route_count = accepted_route_count def __repr__(self): return 'VpnTunnel: %s' % self.outside_ip_address def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'outsideIpAddress': self.outside_ip_address = value elif name == 'status': self.status = value elif name == 'lastStatusChange': self.last_status_change = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') elif name == 'statusMessage': self.status_message = value elif name == 'acceptedRouteCount': try: value = int(value) except ValueError: boto.log.warning('Error converting code (%s) to int' % value) self.accepted_route_count = value else: setattr(self, name, value) class VpnConnection(TaggedEC2Object): """ Represents a VPN Connection :ivar id: The ID of the VPN connection. :ivar state: The current state of the VPN connection. Valid values: pending | available | deleting | deleted :ivar customer_gateway_configuration: The configuration information for the VPN connection's customer gateway (in the native XML format). This element is always present in the :class:`boto.vpc.VPCConnection.create_vpn_connection` response; however, it's present in the :class:`boto.vpc.VPCConnection.get_all_vpn_connections` response only if the VPN connection is in the pending or available state. :ivar type: The type of VPN connection (ipsec.1). :ivar customer_gateway_id: The ID of the customer gateway at your end of the VPN connection. :ivar vpn_gateway_id: The ID of the virtual private gateway at the AWS side of the VPN connection. :ivar tunnels: A list of the vpn tunnels (always 2) :ivar options: The option set describing the VPN connection. :ivar static_routes: A list of static routes associated with a VPN connection. 
""" def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.state = None self.customer_gateway_configuration = None self.type = None self.customer_gateway_id = None self.vpn_gateway_id = None self.tunnels = [] self.options = None self.static_routes = [] def __repr__(self): return 'VpnConnection:%s' % self.id def startElement(self, name, attrs, connection): retval = super(VpnConnection, self).startElement(name, attrs, connection) if retval is not None: return retval if name == 'vgwTelemetry': self.tunnels = ResultSet([('item', VpnTunnel)]) return self.tunnels elif name == 'routes': self.static_routes = ResultSet([('item', VpnStaticRoute)]) return self.static_routes elif name == 'options': self.options = VpnConnectionOptions() return self.options return None def endElement(self, name, value, connection): if name == 'vpnConnectionId': self.id = value elif name == 'state': self.state = value elif name == 'customerGatewayConfiguration': self.customer_gateway_configuration = value elif name == 'type': self.type = value elif name == 'customerGatewayId': self.customer_gateway_id = value elif name == 'vpnGatewayId': self.vpn_gateway_id = value else: setattr(self, name, value) def delete(self, dry_run=False): return self.connection.delete_vpn_connection( self.id, dry_run=dry_run ) boto-2.20.1/boto/vpc/vpngateway.py000066400000000000000000000054411225267101000170310ustar00rootroot00000000000000# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Represents a Vpn Gateway """ from boto.ec2.ec2object import TaggedEC2Object class Attachment(object): def __init__(self, connection=None): self.vpc_id = None self.state = None def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): if name == 'vpcId': self.vpc_id = value elif name == 'state': self.state = value else: setattr(self, name, value) class VpnGateway(TaggedEC2Object): def __init__(self, connection=None): TaggedEC2Object.__init__(self, connection) self.id = None self.type = None self.state = None self.availability_zone = None self.attachments = [] def __repr__(self): return 'VpnGateway:%s' % self.id def startElement(self, name, attrs, connection): retval = TaggedEC2Object.startElement(self, name, attrs, connection) if retval is not None: return retval if name == 'item': att = Attachment() self.attachments.append(att) return att def endElement(self, name, value, connection): if name == 'vpnGatewayId': self.id = value elif name == 'type': self.type = value elif name == 'state': self.state = value elif name == 'availabilityZone': self.availability_zone = value elif name == 'attachments': pass else: setattr(self, name, value) def attach(self, vpc_id, dry_run=False): return self.connection.attach_vpn_gateway( self.id, vpc_id, dry_run=dry_run ) boto-2.20.1/docs/000077500000000000000000000000001225267101000134635ustar00rootroot00000000000000boto-2.20.1/docs/BotoCheatSheet.pdf000066400000000000000000001357551225267101000170370ustar00rootroot00000000000000%PDF-1.7 3 0 obj <> endobj 4 0 obj <> stream xœí}Í’ã:såþ>…^@e @pïÇx÷³pxUÏ GÕL|3 ¿þàœLRwu]©Z,`ƽ]„DŠ$€DþœüÁð§Ó†Ó?—ÿyú×+‡/ÿþ©üû_ /cNÃË0¬×o‚Ÿ^ÆÙŸ^ßÿpüâòïÿůË1ÎñeO.Oå§Ãéì‚/¾œÿŸ§ÿøãïúÃË'üüM¹ÿßOSäíðßÄ–+§òè^R>E7â¯ï§øoîôÿçô·~ÇÈÃÅÅ^/¾õ5b’;¡/ãéâséÒxŠÉ£gqbc’Çɧ×?œŸ.¿xÃÓK^¾p>_Þlý8ÔßëãP¯_„é%áz» ·ÓÇ}~}Ÿ ¯oôm/ïU?úìÓŸœþþÇéœþ÷§Éë ”ÿFiÊåõ^©ø—?Ýéÿ•Ÿýc{ú¯?†Ÿô¿ÿÓ¡ô_>¡»ìÂKát§—iÊÑr:ùÖâúMŒ¹¬„q]‹¿¢eYƒÞñ‡ uÑþ4M·“›ËÜ”J ŸÜÉÅ2*eýÏéôÊFÈåüX^ú„”Ãëiæ±|™Oeü\¾Â;ŸÞNÁÉ!œ|r¥Óµáâ\؆;•»9žê;¼žþ<9W.ð:ÑŸ¼ãõQ~V®,/Ë›„ÄÓnx=ޝ…zÙÀ3BÄü ÇèNPï]æè?~9”Þ¥kÆà†\(I‹ƒ„×(Gy§Ò(u)q€†IhÌ8ú,¯ãõRïùÓòJ¼“Þ›/?“Ž3. ¼¬´ã¥˜†Â¤ÙñaÖŽ§‰˜x¾x}9–!*¯„A”{ßÙñ©Œ<Þ§¼þy:¥2ÝeüÊaœðPô7Ê,¯óžj£ŒºÑ(/´4ä”›Üe‹7ÀÉÚˆ ‡“ñÀ$ÏBdò^o6‰·tü®‹?—9˧VØ–² e¤“4"ˆg惄Fù˜"IÇ%^RcàH´ÌçàÀºÞ Êœ|™ˆ$œÎ%¥H˜O:©ò’´+3ä3gù—Áp@@ ér·¢‘ÌÊïñŠõï\†`p¥+ï§ÔYbæÿã oQ(ö¬VÈûí”Ë)W6ƒ÷%‹¼Ã‹æ‘¿u1,Gç}mlùDá›8~qκæF¹wYØ9c 'ßp,ë¬0pŒ®Hæä;ø\@90rù—99¼WžsÊÉd²Ê{c²Êbæ_J0È—ò ,ºlÂ_Ræ ãS$ Ư<¦,ÌrA¹æÒËS¸üð>>ä@yÒ :—×)ªÃÈ¿¼?DlV2,= ìK ŠBº$ÓxÀ p|%Íad(*Éh¢2šì8èm¡yJK +ž8bòQÆz±Š:V¼¾,å?°ªArž¹ »Ö.bpÁ½0´åXF\¬ð‘$g‹FF>:&NKyqyÊà¡X”Õ;gýýq¥ÖØ÷œÚbÞ‚å5Þ¹ª}áFÐõ Bãõ( ¥7NL+N}.š`ù[ô9°'.‚¬«¢Pr™ìˆñ.tUî Óîd®å/oƒÄl™°Ç¨O-„飰(WÁ¹¡ò?xν¨ÈÂP“–“Û˜ôÍ{ÿF†z¾¼æ¬ý.8a}À½Æ—˜DÀ—s¿BDæÑ½”y 1 "2» ÜL?Ïq„¦—aý䯗>£ŸÏ€Ø k{[¿ÓK‘øz¯sHx•“>ª~|ý£i~™×o0YÞ½¤õ›bÞ_Ýqý,,÷Ðoô¥ÞÖ/äëµCõ‘úñ°È€\…lƨÒúE}ÊŒ±BBQÎV )L—_¼á ‚>õáúváêqø½~á+†T¿p‚!ÕÛ9½>Î-V¾ü‚”¼°»Â°êíüÕã®97èà¬Àœ^_oW‘9¯ÈÜgth#û9BWVv]ð) ®ÚÍOxùú ÔòVúR.|.CqpeX|ÑЋ¢æ‹H/úÆëi*8àebÕú2ì‰MÑÎêi)@ß‚¾)©Æ4‹Æ h «®œ¨b@ó)6EáìÅ€ðPŽ¢ˆOäGÇßᕊŠüƒ˜Ã£T*}Ê„Çy@)i/Q7+ÿt²È£ó|…Ò‚ÂVÞ)SY<Ó¼˜À¹µµ3ÄAéñ™­B#ep2æ{â5®‹sÔö ¤¶›ô1nâs¡¯RŸ’£E{¤MéUÞDp?×7D+cÎ @<Ê ÄyøƒÒ(7ÀÉXïþù¸åÇí¨ó—R;*´c¸If^@™=ªÌ.Úeö¨2›âÏ«Èæ‡a‘Ø"=Ã"°åsé:¨¼žDø•דÊëyPq=©¸žƒJkùbŽW·[>•Õú…¾ÍÛòY_Un¶ôBäôø%9=¦ka²|&óÓ¥(Óµ$‰?’x%G╉?H‘øƒ‰WL?^‰øƒ©ïQH}K§oyy§úéC¹ü³ÃlL‰åpƒX>â@~&†ã0›n©:Ÿ4\ãàŠÈâ!Á‘=DãLò,:Ù‚^Ä_"aù†3:èË’Aæ+K»S†„Vqæ!ñ>pöáyàœåv€Àv4h&â£#üÅQðx ïÀ’8ÆÓ<óP~ëØ ›q õœ‰@fÎæ[åƒþŒß•y:: <ºLÌuRÇÌèÅSÆíóÑ~í¿ÝÃ|?Qb«¿¦¼kY•Ö yý@ÜË7±,³ðó6«gí~fñÑPƒ*Wޤn4ÎNgÔ=TlÓ1£IÍ¢åÌuø*-<{ÁŸê“ ›Å¤\àØ€¶ÐwÀúšÄ%ï ðÆ‘t’„^ëoßä6BÿŽ (—tîÍ'Bê|2nìÁÑ—„ëþÎ\¹@#¡;Ü—ý L0üàŽu¤ ˆ'3k‚2À™ ? 
boto-2.20.1/docs/Makefile000066400000000000000000000064031225267101000151260ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml pickle json epub htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " epub to make ePub files (sphinx >= v1.2b2)" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The e-Pub pages are in $(BUILDDIR)/epub." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/boto.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/boto.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt."
boto-2.20.1/docs/make.bat000066400000000000000000000057771225267101000151060ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation set SPHINXBUILD=sphinx-build set BUILDDIR=build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^<target^>` where ^<target^> is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\boto.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\boto.qhc goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt.
goto end ) :end boto-2.20.1/docs/source/000077500000000000000000000000001225267101000147635ustar00rootroot00000000000000boto-2.20.1/docs/source/_templates/000077500000000000000000000000001225267101000171205ustar00rootroot00000000000000boto-2.20.1/docs/source/_templates/layout.html000066400000000000000000000002001225267101000213130ustar00rootroot00000000000000{% extends '!layout.html' %} {% block sidebarsearch %}{{ super() }}{% endblock %} boto-2.20.1/docs/source/apps_built_on_boto.rst000066400000000000000000000035171225267101000214040ustar00rootroot00000000000000.. _apps_built_on_boto: ========================== Applications Built On Boto ========================== Many people have taken Boto and layered on additional functionality, then shared them with the community. This is a (partial) list of applications that use Boto. If you have an application or utility you've open-sourced that uses Boto & you'd like it listed here, please submit a `pull request`_ adding it! .. _`pull request`: https://github.com/boto/boto/pulls **botornado** https://pypi.python.org/pypi/botornado An asynchronous AWS client on Tornado. This is a dirty work to move boto onto Tornado ioloop. Currently works with SQS and S3. **boto_rsync** https://pypi.python.org/pypi/boto_rsync boto-rsync is a rough adaptation of boto's s3put script which has been reengineered to more closely mimic rsync. Its goal is to provide a familiar rsync-like wrapper for boto's S3 and Google Storage interfaces. **boto_utils** https://pypi.python.org/pypi/boto_utils Command-line tools for interacting with Amazon Web Services, based on Boto. Includes utils for S3, SES & Cloudwatch. **django-storages** https://pypi.python.org/pypi/django-storages A collection of storage backends for Django. Features the ``S3BotoStorage`` backend for storing media on S3. **mr.awsome** https://pypi.python.org/pypi/mr.awsome mr.awsome is a commandline-tool (aws) to manage and control Amazon Webservice's EC2 instances. Once configured with your AWS key, you can create, delete, monitor and ssh into instances, as well as perform scripted tasks on them (via fabfiles). Examples are adding additional, pre-configured webservers to a cluster (including updating the load balancer), performing automated software deployments and creating backups - each with just one call from the commandline. boto-2.20.1/docs/source/autoscale_tut.rst000066400000000000000000000217211225267101000203740ustar00rootroot00000000000000.. _autoscale_tut: ============================================= An Introduction to boto's Autoscale interface ============================================= This tutorial focuses on the boto interface to the Autoscale service. This assumes you are familiar with boto's EC2 interface and concepts. Autoscale Concepts ------------------ The AWS Autoscale service is comprised of three core concepts: #. *Autoscale Group (AG):* An AG can be viewed as a collection of criteria for maintaining or scaling a set of EC2 instances over one or more availability zones. An AG is limited to a single region. #. *Launch Configuration (LC):* An LC is the set of information needed by the AG to launch new instances - this can encompass image ids, startup data, security groups and keys. Only one LC is attached to an AG. #. *Triggers*: A trigger is essentially a set of rules for determining when to scale an AG up or down. 
These rules can encompass a set of metrics such as average CPU usage across instances, or incoming requests, a threshold for when an action will take place, as well as parameters to control how long to wait after a threshold is crossed. Creating a Connection --------------------- The first step in accessing autoscaling is to create a connection to the service. There are two ways to do this in boto. The first is: >>> from boto.ec2.autoscale import AutoScaleConnection >>> conn = AutoScaleConnection('<aws access key>', '<aws secret key>') A Note About Regions and Endpoints ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Like EC2 the Autoscale service has a different endpoint for each region. By default the US endpoint is used. To choose a specific region, instantiate the AutoScaleConnection object with that region's endpoint. >>> import boto.ec2.autoscale >>> autoscale = boto.ec2.autoscale.connect_to_region('eu-west-1') Alternatively, edit your boto.cfg with the default Autoscale endpoint to use:: [Boto] autoscale_endpoint = autoscaling.eu-west-1.amazonaws.com Getting Existing AutoScale Groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To retrieve existing autoscale groups: >>> conn.get_all_groups() You will get back a list of AutoScale group objects, one for each AG you have. Creating Autoscaling Groups --------------------------- An Autoscaling group has a number of parameters associated with it. #. *Name*: The name of the AG. #. *Availability Zones*: The list of availability zones it is defined over. #. *Minimum Size*: Minimum number of instances running at one time. #. *Maximum Size*: Maximum number of instances running at one time. #. *Launch Configuration (LC)*: A set of instructions on how to launch an instance. #. *Load Balancer*: An optional ELB load balancer to use. See the ELB tutorial for information on how to create a load balancer. For the purposes of this tutorial, let's assume we want to create one autoscale group over the us-east-1a and us-east-1b availability zones. We want to have two instances in each availability zone, thus a minimum size of 4. For now we won't worry about scaling up or down - we'll introduce that later when we talk about scaling policies - so we'll set a maximum size of 8 to leave room to scale up. We'll also associate the AG with a load balancer which we assume we've already created, called 'my-lb'. Our LC tells us how to start an instance. This will at least include the image id to use, the security group, and key information. We assume the image id, key name and security groups have already been defined elsewhere - see the EC2 tutorial for information on how to create these. >>> from boto.ec2.autoscale import LaunchConfiguration >>> from boto.ec2.autoscale import AutoScalingGroup >>> lc = LaunchConfiguration(name='my-launch-config', image_id='my-ami', key_name='my_key_name', security_groups=['my_security_groups']) >>> conn.create_launch_configuration(lc) We now have created a launch configuration called 'my-launch-config'. We are now ready to associate it with our new autoscale group. >>> ag = AutoScalingGroup(group_name='my_group', load_balancers=['my-lb'], availability_zones=['us-east-1a', 'us-east-1b'], launch_config=lc, min_size=4, max_size=8, connection=conn) >>> conn.create_auto_scaling_group(ag) We now have a new autoscaling group defined! At this point instances should be starting to launch. To view activity on an autoscale group: >>> ag.get_activities() [Activity:Launching a new EC2 instance status:Successful progress:100, ...]
or alternatively: >>> conn.get_all_activities(ag) This autoscale group is fairly useful in that it will maintain the minimum size without breaching the maximum size defined. That means if one instance crashes, the autoscale group will use the launch configuration to start a new one in an attempt to maintain its minimum defined size. It determines instance health using the health check defined on its associated load balancer. Scaling a Group Up or Down ^^^^^^^^^^^^^^^^^^^^^^^^^^ It can also be useful to scale a group up or down depending on certain criteria. For example, if the average CPU utilization of the group goes above 70%, you may want to scale up the number of instances to deal with demand. Likewise, you might want to scale down if usage drops again. These rules for **how** to scale are defined by *Scaling Policies*, and the rules for **when** to scale are defined by CloudWatch *Metric Alarms*. For example, let's configure scaling for the above group based on CPU utilization. We'll say it should scale up if the average CPU usage goes above 70% and scale down if it goes below 40%. Firstly, define some Scaling Policies. These tell Auto Scaling how to scale the group (but not when to do it; we'll specify that later). We need one policy for scaling up and one for scaling down. >>> from boto.ec2.autoscale import ScalingPolicy >>> scale_up_policy = ScalingPolicy( name='scale_up', adjustment_type='ChangeInCapacity', as_name='my_group', scaling_adjustment=1, cooldown=180) >>> scale_down_policy = ScalingPolicy( name='scale_down', adjustment_type='ChangeInCapacity', as_name='my_group', scaling_adjustment=-1, cooldown=180) The policy objects are now defined locally. Let's submit them to AWS. >>> conn.create_scaling_policy(scale_up_policy) >>> conn.create_scaling_policy(scale_down_policy) Now that the policies have been digested by AWS, they have extra properties that we aren't aware of locally. We need to refresh them by requesting them back again. >>> scale_up_policy = conn.get_all_policies( as_group='my_group', policy_names=['scale_up'])[0] >>> scale_down_policy = conn.get_all_policies( as_group='my_group', policy_names=['scale_down'])[0] Specifically, we'll need the Amazon Resource Name (ARN) of each policy, which will now be a property of our ScalingPolicy objects. Next we'll create CloudWatch alarms that will define when to run the Auto Scaling Policies. Note that the alarms need to live in the same region as the autoscale group, which we created in us-east-1 above. >>> import boto.ec2.cloudwatch >>> cloudwatch = boto.ec2.cloudwatch.connect_to_region('us-east-1') It makes sense to measure the average CPU usage across the whole Auto Scaling Group, rather than individual instances. We express that as CloudWatch *Dimensions*. >>> alarm_dimensions = {"AutoScalingGroupName": 'my_group'} Create an alarm for when to scale up, and one for when to scale down.
>>> from boto.ec2.cloudwatch import MetricAlarm >>> scale_up_alarm = MetricAlarm( name='scale_up_on_cpu', namespace='AWS/EC2', metric='CPUUtilization', statistic='Average', comparison='>', threshold='70', period='60', evaluation_periods=2, alarm_actions=[scale_up_policy.policy_arn], dimensions=alarm_dimensions) >>> cloudwatch.create_alarm(scale_up_alarm) >>> scale_down_alarm = MetricAlarm( name='scale_down_on_cpu', namespace='AWS/EC2', metric='CPUUtilization', statistic='Average', comparison='<', threshold='40', period='60', evaluation_periods=2, alarm_actions=[scale_down_policy.policy_arn], dimensions=alarm_dimensions) >>> cloudwatch.create_alarm(scale_down_alarm) Auto Scaling will now create a new instance if the existing cluster averages more than 70% CPU for two minutes. Similarly, it will terminate an instance when CPU usage sits below 40%. Auto Scaling will not add or remove instances beyond the limits of the Scaling Group's 'max_size' and 'min_size' properties. To retrieve the instances in your autoscale group: >>> import boto.ec2 >>> ec2 = boto.ec2.connect_to_region('us-east-1') >>> group = conn.get_all_groups(names=['my_group'])[0] >>> instance_ids = [i.instance_id for i in group.instances] >>> instances = ec2.get_only_instances(instance_ids) To delete your autoscale group, we first need to shut down all the instances: >>> ag.shutdown_instances() Once the instances have been shut down, you can delete the autoscale group: >>> ag.delete() You can also delete your launch configuration: >>> lc.delete() boto-2.20.1/docs/source/boto_config_tut.rst000066400000000000000000000256651225267101000207110ustar00rootroot00000000000000.. _ref-boto_config: =========== Boto Config =========== Introduction ------------ There is a growing list of configuration options for the boto library. Many of these options can be passed into the constructors for top-level objects such as connections. Some options, such as credentials, can also be read from environment variables (e.g. ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``). It is also possible to manage these options in a central place through the use of boto config files. Details ------- A boto config file is simply a .ini format configuration file that specifies values for options that control the behavior of the boto library. Upon startup, the boto library looks for configuration files in the following locations and in the following order: * /etc/boto.cfg - for site-wide settings that all users on this machine will use * ~/.boto - for user-specific settings The options are merged into a single, in-memory configuration that is available as :py:mod:`boto.config`. The :py:class:`boto.pyami.config.Config` class is a subclass of the standard Python :py:class:`ConfigParser.SafeConfigParser` object and inherits all of the methods of that object. In addition, the boto :py:class:`Config <boto.pyami.config.Config>` class defines additional methods that are described on the PyamiConfigMethods page. An example ``~/.boto`` file should look like:: [Credentials] aws_access_key_id = <your_access_key> aws_secret_access_key = <your_secret_key> Sections -------- The following sections and options are currently recognized within the boto config file. Credentials ^^^^^^^^^^^ The Credentials section is used to specify the AWS credentials used for all boto requests. The order of precedence for authentication credentials is: * Credentials passed into the Connection class constructor. * Credentials specified by environment variables. * Credentials specified as options in the config file.
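As a quick illustration of that precedence, the same credentials can be supplied at any of the three levels; explicitly passed arguments always win. A minimal sketch (the key values here are placeholders, not working credentials)::

    >>> import os, boto
    >>> # 1. Passed directly to a connection constructor - highest precedence.
    >>> c = boto.connect_s3(aws_access_key_id='explicit-key-id',
    ...                     aws_secret_access_key='explicit-secret-key')
    >>> # 2. Environment variables - consulted when no explicit arguments are given.
    >>> os.environ['AWS_ACCESS_KEY_ID'] = 'env-key-id'
    >>> os.environ['AWS_SECRET_ACCESS_KEY'] = 'env-secret-key'
    >>> c = boto.connect_s3()
    >>> # 3. Failing both of the above, the [Credentials] section of the
    >>> #    config file described below is used.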
This section defines the following options: ``aws_access_key_id`` and ``aws_secret_access_key``. The former is your AWS access key id and the latter is your AWS secret access key. For example:: [Credentials] aws_access_key_id = <your_access_key> aws_secret_access_key = <your_secret_key> Please note that quote characters are not used on either side of the '=' operator even though both your AWS access key id and secret key are strings. For greater security, the secret key can be stored in a keyring and retrieved via the keyring package. To use a keyring, use ``keyring``, rather than ``aws_secret_access_key``:: [Credentials] aws_access_key_id = <your_access_key> keyring = <keyring_name> To use a keyring, you must have the Python `keyring <http://pypi.python.org/pypi/keyring>`_ package installed and in the Python path. To learn about setting up keyrings, see the `keyring documentation <http://pypi.python.org/pypi/keyring>`_ Credentials can also be supplied for a Eucalyptus service:: [Credentials] euca_access_key_id = <your_access_key> euca_secret_access_key = <your_secret_key> Finally, this section can also be used to provide credentials for the Internet Archive API:: [Credentials] ia_access_key_id = <your_access_key> ia_secret_access_key = <your_secret_key> Boto ^^^^ The Boto section is used to specify options that control the operation of boto itself. This section defines the following options: :debug: Controls the level of debug messages that will be printed by the boto library. The following values are defined:: 0 - no debug messages are printed 1 - basic debug messages from boto are printed 2 - all boto debugging messages plus request/response messages from httplib :proxy: The name of the proxy host to use for connecting to AWS. :proxy_port: The port number to use to connect to the proxy host. :proxy_user: The user name to use when authenticating with the proxy host. :proxy_pass: The password to use when authenticating with the proxy host. :num_retries: The number of times to retry failed requests to an AWS server. If boto receives an error from AWS, it will attempt to recover and retry the request. The default number of retries is 5 but you can change the default with this option. For example:: [Boto] debug = 0 num_retries = 10 proxy = myproxy.com proxy_port = 8080 proxy_user = foo proxy_pass = bar :connection_stale_duration: Amount of time to wait in seconds before a connection will stop getting reused. AWS will disconnect connections which have been idle for 180 seconds. :is_secure: Is the connection over SSL. This setting will override passed-in values. :https_validate_certificates: Validate HTTPS certificates. This is on by default. :ca_certificates_file: Location of CA certificates :http_socket_timeout: Timeout used to override the system default socket timeout for httplib. :send_crlf_after_proxy_auth_headers: Change line ending behaviour with proxies. For more details see this `discussion `_ These settings will default to:: [Boto] connection_stale_duration = 180 is_secure = True https_validate_certificates = True ca_certificates_file = cacerts.txt http_socket_timeout = 60 send_crlf_after_proxy_auth_headers = False You can control the timeouts and number of retries used when retrieving information from the Metadata Service (this is used for retrieving credentials for IAM roles on EC2 instances): :metadata_service_timeout: Number of seconds until requests to the metadata service will timeout (float). :metadata_service_num_attempts: Number of times to attempt to retrieve information from the metadata service before giving up (int).
These settings will default to:: [Boto] metadata_service_timeout = 1.0 metadata_service_num_attempts = 1 This section is also used for specifying endpoints for non-AWS services such as Eucalyptus and Walrus. :eucalyptus_host: Select a default endpoint host for Eucalyptus :walrus_host: Select a default host for Walrus For example:: [Boto] eucalyptus_host = somehost.example.com walrus_host = somehost.example.com Finally, the Boto section is used to set default API versions, regions and endpoints for many AWS services. AutoScale settings: :autoscale_version: Set the API version :autoscale_endpoint: Endpoint to use :autoscale_region_name: Default region to use For example:: [Boto] autoscale_version = 2011-01-01 autoscale_endpoint = autoscaling.us-west-2.amazonaws.com autoscale_region_name = us-west-2 CloudFormation settings can also be defined: :cfn_version: CloudFormation API version :cfn_region_name: Default region name :cfn_region_endpoint: Default endpoint For example:: [Boto] cfn_version = 2010-05-15 cfn_region_name = us-west-2 cfn_region_endpoint = cloudformation.us-west-2.amazonaws.com CloudSearch settings: :cs_region_name: Default CloudSearch region :cs_region_endpoint: Default CloudSearch endpoint For example:: [Boto] cs_region_name = us-west-2 cs_region_endpoint = cloudsearch.us-west-2.amazonaws.com CloudWatch settings: :cloudwatch_version: CloudWatch API version :cloudwatch_region_name: Default region name :cloudwatch_region_endpoint: Default endpoint For example:: [Boto] cloudwatch_version = 2010-08-01 cloudwatch_region_name = us-west-2 cloudwatch_region_endpoint = monitoring.us-west-2.amazonaws.com EC2 settings: :ec2_version: EC2 API version :ec2_region_name: Default region name :ec2_region_endpoint: Default endpoint For example:: [Boto] ec2_version = 2012-12-01 ec2_region_name = us-west-2 ec2_region_endpoint = ec2.us-west-2.amazonaws.com ELB settings: :elb_version: ELB API version :elb_region_name: Default region name :elb_region_endpoint: Default endpoint For example:: [Boto] elb_version = 2012-06-01 elb_region_name = us-west-2 elb_region_endpoint = elasticloadbalancing.us-west-2.amazonaws.com EMR settings: :emr_version: EMR API version :emr_region_name: Default region name :emr_region_endpoint: Default endpoint For example:: [Boto] emr_version = 2009-03-31 emr_region_name = us-west-2 emr_region_endpoint = elasticmapreduce.us-west-2.amazonaws.com Precedence ---------- Even if you have your boto config set up, you can also have credentials and options stored in environment variables, or you can explicitly pass them to method calls, e.g.:: >>> boto.ec2.connect_to_region( ... 'us-west-2', ... aws_access_key_id='foo', ... aws_secret_access_key='bar') Where an option can be found in more than one place, boto will first use the explicitly supplied arguments; if none are found, it will then look for them in environment variables, and if that fails it will fall back to the values in the boto config file. Notification ^^^^^^^^^^^^ If you are using notifications for boto.pyami, you can specify the email details through the following variables. :smtp_from: Used as the sender in notification emails. :smtp_to: Destination to which emails should be sent. :smtp_host: Host to connect to when sending notification emails.
:smtp_port: Port to connect to when connecting to the :smtp_host: Default values are:: [notification] smtp_from = boto smtp_to = None smtp_host = localhost smtp_port = 25 smtp_tls = True smtp_user = john smtp_pass = hunter2 SWF ^^^ The SWF section allows you to configure the default region to be used for the Amazon Simple Workflow service. :region: Set the default region Example:: [SWF] region = us-west-2 Pyami ^^^^^ The Pyami section is used to configure the working directory for PyAMI. :working_dir: Working directory used by PyAMI Example:: [Pyami] working_dir = /home/foo/ DB ^^ The DB section is used to configure access to databases through the :func:`boto.sdb.db.manager.get_manager` function. :db_type: Type of the database. Current allowed values are `SimpleDB` and `XML`. :db_user: AWS access key id. :db_passwd: AWS secret access key. :db_name: Database that will be connected to. :db_table: Table name :note: This doesn't appear to be used. :db_host: Host to connect to :db_port: Port to connect to :enable_ssl: Use SSL More examples:: [DB] db_type = SimpleDB db_user = db_passwd = db_name = my_domain db_table = table db_host = sdb.amazonaws.com enable_ssl = True debug = True [DB_TestBasic] db_type = SimpleDB db_user = db_passwd = db_name = basic_domain db_port = 1111 SDB ^^^ This section is used to configure SimpleDB :region: Set the region to which SDB should connect Example:: [SDB] region = us-west-2 DynamoDB ^^^^^^^^ This section is used to configure DynamoDB :region: Choose the default region :validate_checksums: Check checksums returned by DynamoDB Example:: [DynamoDB] region = us-west-2 validate_checksums = True boto-2.20.1/docs/source/boto_theme/000077500000000000000000000000001225267101000171105ustar00rootroot00000000000000boto-2.20.1/docs/source/boto_theme/static/000077500000000000000000000000001225267101000203775ustar00rootroot00000000000000boto-2.20.1/docs/source/boto_theme/static/boto.css_t000066400000000000000000000077561225267101000224160ustar00rootroot00000000000000/** * Sphinx stylesheet -- default theme * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, Arial, sans-serif; font-size: 100%; background-color: #111111; color: #555555; margin: 0; padding: 0; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 300px; } hr{ border: 1px solid #B1B4B6; } div.document { background-color: #fafafa; } div.body { background-color: #ffffff; color: #3E4349; padding: 1em 30px 30px 30px; font-size: 0.9em; } div.footer { color: #555; width: 100%; padding: 13px 0; text-align: center; font-size: 75%; } div.footer a { color: #444444; } div.related { background-color: #6F6555; /*#6BA81E;*/ line-height: 36px; color: #CCCCCC; text-shadow: 0px 1px 0 #444444; font-size: 1.1em; } div.related a { color: #D9C5A7; } div.related .right { font-size: 0.9em; } div.sphinxsidebar { font-size: 0.9em; line-height: 1.5em; width: 300px } div.sphinxsidebarwrapper{ padding: 20px 0; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, Arial, sans-serif; color: #222222; font-size: 1.2em; font-weight: bold; margin: 0; padding: 5px 10px; text-shadow: 1px 1px 0 white } div.sphinxsidebar h3 a { color: #444444; } div.sphinxsidebar p { color: #888888; padding: 5px 20px; margin: 0.5em 0px; } div.sphinxsidebar p.topless { } div.sphinxsidebar ul { margin: 10px 
10px 10px 20px; padding: 0; color: #000000; } div.sphinxsidebar a { color: #444444; } div.sphinxsidebar a:hover { color: #E32E00; } div.sphinxsidebar input { border: 1px solid #cccccc; font-family: sans-serif; font-size: 1.1em; padding: 0.15em 0.3em; } div.sphinxsidebar input[type=text]{ margin-left: 20px; } /* -- body styles ----------------------------------------------------------- */ a { color: #005B81; text-decoration: none; } a:hover { color: #E32E00; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, Arial, sans-serif; font-weight: bold; color: #069; margin: 30px 0px 10px 0px; padding: 5px 0 5px 0px; text-shadow: 0px 1px 0 white; border-bottom: 1px solid #C8D5E3; } div.body h1 { margin-top: 0; font-size: 165%; } div.body h2 { font-size: 135%; } div.body h3 { font-size: 120%; } div.body h4 { font-size: 110%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: #c60f0f; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none; } a.headerlink:hover { background-color: #c60f0f; color: white; } div.body p, div.body dd, div.body li { line-height: 1.5em; } div.admonition p.admonition-title + p { display: inline; } div.highlight{ background-color: white; } div.note { background-color: #eeeeee; border: 1px solid #cccccc; } div.seealso { background-color: #ffffcc; border: 1px solid #ffff66; } div.topic { background-color: #fafafa; border-width: 0; } div.warning { background-color: #ffe4e4; border: 1px solid #ff6666; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre { padding: 10px; background-color: #fafafa; color: #222222; line-height: 1.5em; font-size: 1.1em; margin: 1.5em 0 1.5em 0; -webkit-box-shadow: 0px 0px 4px #d8d8d8; -moz-box-shadow: 0px 0px 4px #d8d8d8; box-shadow: 0px 0px 4px #d8d8d8; } tt { color: #222222; padding: 1px 2px; font-size: 1.2em; font-family: monospace; } #table-of-contents ul { padding-left: 2em; } div.sphinxsidebarwrapper div a {margin: 0.7em;}boto-2.20.1/docs/source/boto_theme/static/pygments.css000066400000000000000000000062301225267101000227600ustar00rootroot00000000000000.hll { background-color: #ffffcc } .c { color: #408090; font-style: italic } /* Comment */ .err { border: 1px solid #FF0000 } /* Error */ .k { color: #007020; font-weight: bold } /* Keyword */ .o { color: #666666 } /* Operator */ .cm { color: #408090; font-style: italic } /* Comment.Multiline */ .cp { color: #007020 } /* Comment.Preproc */ .c1 { color: #408090; font-style: italic } /* Comment.Single */ .cs { color: #408090; background-color: #fff0f0 } /* Comment.Special */ .gd { color: #A00000 } /* Generic.Deleted */ .ge { font-style: italic } /* Generic.Emph */ .gr { color: #FF0000 } /* Generic.Error */ .gh { color: #000080; font-weight: bold } /* Generic.Heading */ .gi { color: #00A000 } /* Generic.Inserted */ .go { color: #303030 } /* Generic.Output */ .gp { color: #c65d09; font-weight: bold } /* Generic.Prompt */ .gs { font-weight: bold } /* Generic.Strong */ .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ .gt { color: #0040D0 } /* Generic.Traceback */ .kc { color: #007020; font-weight: bold } /* Keyword.Constant */ .kd { color: #007020; font-weight: bold } /* Keyword.Declaration */ .kn { color: #007020; font-weight: bold } /* Keyword.Namespace */ .kp { color: #007020 } /* Keyword.Pseudo */ .kr { color: #007020; font-weight: bold } /* Keyword.Reserved */ .kt { color: #902000 } /* Keyword.Type */ 
.m { color: #208050 } /* Literal.Number */ .s { color: #4070a0 } /* Literal.String */ .na { color: #4070a0 } /* Name.Attribute */ .nb { color: #007020 } /* Name.Builtin */ .nc { color: #0e84b5; font-weight: bold } /* Name.Class */ .no { color: #60add5 } /* Name.Constant */ .nd { color: #555555; font-weight: bold } /* Name.Decorator */ .ni { color: #d55537; font-weight: bold } /* Name.Entity */ .ne { color: #007020 } /* Name.Exception */ .nf { color: #06287e } /* Name.Function */ .nl { color: #002070; font-weight: bold } /* Name.Label */ .nn { color: #0e84b5; font-weight: bold } /* Name.Namespace */ .nt { color: #062873; font-weight: bold } /* Name.Tag */ .nv { color: #bb60d5 } /* Name.Variable */ .ow { color: #007020; font-weight: bold } /* Operator.Word */ .w { color: #bbbbbb } /* Text.Whitespace */ .mf { color: #208050 } /* Literal.Number.Float */ .mh { color: #208050 } /* Literal.Number.Hex */ .mi { color: #208050 } /* Literal.Number.Integer */ .mo { color: #208050 } /* Literal.Number.Oct */ .sb { color: #4070a0 } /* Literal.String.Backtick */ .sc { color: #4070a0 } /* Literal.String.Char */ .sd { color: #4070a0; font-style: italic } /* Literal.String.Doc */ .s2 { color: #4070a0 } /* Literal.String.Double */ .se { color: #4070a0; font-weight: bold } /* Literal.String.Escape */ .sh { color: #4070a0 } /* Literal.String.Heredoc */ .si { color: #70a0d0; font-style: italic } /* Literal.String.Interpol */ .sx { color: #c65d09 } /* Literal.String.Other */ .sr { color: #235388 } /* Literal.String.Regex */ .s1 { color: #4070a0 } /* Literal.String.Single */ .ss { color: #517918 } /* Literal.String.Symbol */ .bp { color: #007020 } /* Name.Builtin.Pseudo */ .vc { color: #bb60d5 } /* Name.Variable.Class */ .vg { color: #bb60d5 } /* Name.Variable.Global */ .vi { color: #bb60d5 } /* Name.Variable.Instance */ .il { color: #208050 } /* Literal.Number.Integer.Long */boto-2.20.1/docs/source/boto_theme/theme.conf000066400000000000000000000000551225267101000210610ustar00rootroot00000000000000[theme] inherit = basic stylesheet = boto.cssboto-2.20.1/docs/source/cloudfront_tut.rst000066400000000000000000000170341225267101000205750ustar00rootroot00000000000000.. _cloudfront_tut: ========== CloudFront ========== This new boto module provides an interface to Amazon's Content Service, CloudFront. .. warning:: This module is not well tested. Paging of distributions is not yet supported. CNAME support is completely untested. Use with caution. Feedback and bug reports are greatly appreciated. 
Creating a CloudFront connection -------------------------------- If you've placed your credentials in your ``$HOME/.boto`` config file then you can simply create a CloudFront connection using:: >>> import boto >>> c = boto.connect_cloudfront() If you do not have this file you will need to specify your AWS access key and secret access key:: >>> import boto >>> c = boto.connect_cloudfront('your-aws-access-key-id', 'your-aws-secret-access-key') Working with CloudFront Distributions ------------------------------------- Create a new :class:`boto.cloudfront.distribution.Distribution`:: >>> origin = boto.cloudfront.origin.S3Origin('mybucket.s3.amazonaws.com') >>> distro = c.create_distribution(origin=origin, enabled=False, comment='My new distribution') >>> distro.domain_name u'd2oxf3980lnb8l.cloudfront.net' >>> distro.id u'ECH69MOIW7613' >>> distro.status u'InProgress' >>> distro.config.comment u'My new distribution' >>> distro.config.origin <S3Origin: mybucket.s3.amazonaws.com> >>> distro.config.caller_reference u'31b8d9cf-a623-4a28-b062-a91856fac6d0' >>> distro.config.enabled False Note that a new caller reference is created automatically, using uuid.uuid4(). The :class:`boto.cloudfront.distribution.Distribution`, :class:`boto.cloudfront.distribution.DistributionConfig` and :class:`boto.cloudfront.distribution.DistributionSummary` objects are defined in the :mod:`boto.cloudfront.distribution` module. To get a listing of all current distributions:: >>> rs = c.get_all_distributions() >>> rs [, ] This returns a list of :class:`boto.cloudfront.distribution.DistributionSummary` objects. Note that paging is not yet supported! To get a :class:`boto.cloudfront.distribution.Distribution` object from a :class:`boto.cloudfront.distribution.DistributionSummary` object:: >>> ds = rs[1] >>> distro = ds.get_distribution() >>> distro.domain_name u'd2oxf3980lnb8l.cloudfront.net' To change a property of a distribution object:: >>> distro.comment u'My new distribution' >>> distro.update(comment='This is a much better comment') >>> distro.comment 'This is a much better comment' You can also enable/disable a distribution using the following convenience methods:: >>> distro.enable() # just calls distro.update(enabled=True) or:: >>> distro.disable() # just calls distro.update(enabled=False) The only attributes that can be updated for a Distribution are comment, enabled and cnames. To delete a :class:`boto.cloudfront.distribution.Distribution`:: >>> distro.delete() Invalidating CloudFront Distribution Paths ------------------------------------------ Invalidate a list of paths in a CloudFront distribution:: >>> paths = ['/path/to/file1.html', '/path/to/file2.html', ...] >>> inval_req = c.create_invalidation_request(u'ECH69MOIW7613', paths) >>> print inval_req >>> print inval_req.id u'IFCT7K03VUETK' >>> print inval_req.paths [u'/path/to/file1.html', u'/path/to/file2.html', ..] .. warning:: Each CloudFront invalidation request can only specify up to 1000 paths. If you need to invalidate more than 1000 paths you will need to split up the paths into groups of 1000 or less and create multiple invalidation requests. This will return a :class:`boto.cloudfront.invalidation.InvalidationBatch` object representing the invalidation request. You can also fetch a single invalidation request for a given distribution using ``invalidation_request_status``:: >>> inval_req = c.invalidation_request_status(u'ECH69MOIW7613', u'IFCT7K03VUETK') >>> print inval_req The first parameter is the CloudFront distribution id the request belongs to and the second parameter is the invalidation request id.
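Given the 1000-path limit noted in the warning above, a small helper that splits a long path list into multiple invalidation requests can be handy. A minimal sketch (``chunked`` is our own illustrative helper, not part of boto)::

    >>> def chunked(seq, size=1000):
    ...     # Yield successive slices of at most ``size`` items.
    ...     for i in range(0, len(seq), size):
    ...         yield seq[i:i + size]
    ...
    >>> many_paths = ['/path/to/file%d.html' % i for i in range(2500)]
    >>> reqs = [c.create_invalidation_request(u'ECH69MOIW7613', batch)
    ...         for batch in chunked(many_paths)]
    >>> len(reqs)
    3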
It's also possible to get *all* invalidations for a given CloudFront distribution:: >>> invals = c.get_invalidation_requests(u'ECH69MOIW7613') >>> print invals This will return an instance of :class:`boto.cloudfront.invalidation.InvalidationListResultSet` which is an iterable object that contains a list of :class:`boto.cloudfront.invalidation.InvalidationSummary` objects that describe each invalidation request and its status:: >>> for inval in invals: ...     print 'Object: %s, ID: %s, Status: %s' % (inval, inval.id, inval.status) Object: , ID: ICXT2K02SUETK, Status: Completed Object: , ID: ITV9SV0PDNY1Y, Status: Completed Object: , ID: I1X3F6N0PLGJN5, Status: Completed Object: , ID: I1F3G9N0ZLGKN2, Status: Completed ... Simply iterating over the :class:`boto.cloudfront.invalidation.InvalidationListResultSet` object will automatically paginate the results on-the-fly as needed by repeatedly requesting more results from CloudFront until there are none left. If you wish to paginate the results manually you can do so by specifying the ``max_items`` option when calling ``get_invalidation_requests``:: >>> invals = c.get_invalidation_requests(u'ECH69MOIW7613', max_items=2) >>> print len(list(invals)) 2 >>> for inval in invals: ...     print 'Object: %s, ID: %s, Status: %s' % (inval, inval.id, inval.status) Object: , ID: ICXT2K02SUETK, Status: Completed Object: , ID: ITV9SV0PDNY1Y, Status: Completed In this case, iterating over the :class:`boto.cloudfront.invalidation.InvalidationListResultSet` object will *only* make a single request to CloudFront and *only* ``max_items`` invalidation requests are returned by the iterator. To get the next "page" of results pass the ``next_marker`` attribute of the previous :class:`boto.cloudfront.invalidation.InvalidationListResultSet` object as the ``marker`` option to the next call to ``get_invalidation_requests``:: >>> invals = c.get_invalidation_requests(u'ECH69MOIW7613', max_items=10, marker=invals.next_marker) >>> print len(list(invals)) 2 >>> for inval in invals: ...     print 'Object: %s, ID: %s, Status: %s' % (inval, inval.id, inval.status) Object: , ID: I1X3F6N0PLGJN5, Status: Completed Object: , ID: I1F3G9N0ZLGKN2, Status: Completed You can get the :class:`boto.cloudfront.invalidation.InvalidationBatch` object representing the invalidation request pointed to by a :class:`boto.cloudfront.invalidation.InvalidationSummary` object using:: >>> inval_req = inval.get_invalidation_request() >>> print inval_req Similarly you can get the parent :class:`boto.cloudfront.distribution.Distribution` object for the invalidation request from a :class:`boto.cloudfront.invalidation.InvalidationSummary` object using:: >>> dist = inval.get_distribution() >>> print dist boto-2.20.1/docs/source/cloudsearch_tut.rst000066400000000000000000000350261225267101000207130ustar00rootroot00000000000000.. cloudsearch_tut: =============================================== An Introduction to boto's Cloudsearch interface =============================================== This tutorial focuses on the boto interface to AWS' Cloudsearch_. This tutorial assumes that you have boto already downloaded and installed. .. _Cloudsearch: http://aws.amazon.com/cloudsearch/ Creating a Connection --------------------- The first step in accessing CloudSearch is to create a connection to the service. The recommended method of doing this is as follows:: >>> import boto.cloudsearch >>> conn = boto.cloudsearch.connect_to_region("us-west-2", ... aws_access_key_id='<your access key>', ...
aws_secret_access_key='<your secret key>') At this point, the variable conn will point to a CloudSearch connection object in the us-west-2 region. Currently, this is the only region which has the CloudSearch service. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables: * `AWS_ACCESS_KEY_ID` - Your AWS Access Key ID * `AWS_SECRET_ACCESS_KEY` - Your AWS Secret Access Key and then simply call:: >>> import boto.cloudsearch >>> conn = boto.cloudsearch.connect_to_region("us-west-2") In either case, conn will point to the Connection object which we will use throughout the remainder of this tutorial. Creating a Domain ----------------- Once you have a connection established with the CloudSearch service, you will want to create a domain. A domain encapsulates the data that you wish to index, as well as indexes and metadata relating to it:: >>> from boto.cloudsearch.domain import Domain >>> domain = Domain(conn, conn.create_domain('demo')) This domain can be used to control access policies, indexes, and the actual document service, which you will use to index and search. Setting access policies ----------------------- Before you can connect to a document service, you need to set the correct access properties. For example, if you were connecting from 192.168.1.0, you could give yourself access as follows:: >>> our_ip = '192.168.1.0' >>> # Allow our IP address to access the document and search services >>> policy = domain.get_access_policies() >>> policy.allow_search_ip(our_ip) >>> policy.allow_doc_ip(our_ip) You can use the :py:meth:`allow_search_ip ` and :py:meth:`allow_doc_ip ` methods to give different CIDR blocks access to searching and the document service respectively. Creating index fields --------------------- Each domain can have up to twenty index fields which are indexed by the CloudSearch service. For each index field, you will need to specify whether it's a text or integer field, as well as optionally a default value:: >>> # Create a 'text' index field called 'username' >>> uname_field = domain.create_index_field('username', 'text') >>> # Epoch time of when the user last did something >>> time_field = domain.create_index_field('last_activity', ... 'uint', ... default=0) It is also possible to mark an index field as a facet. Doing so allows a search query to return categories into which results can be grouped, or to create drill-down categories:: >>> # But it would be neat to drill down into different countries >>> loc_field = domain.create_index_field('location', 'text', facet=True) Finally, you can also mark a snippet of text as being able to be returned directly in your search query by using the results option:: >>> # Directly insert user snippets in our results >>> snippet_field = domain.create_index_field('snippet', 'text', result=True) You can add up to 20 index fields in this manner:: >>> follower_field = domain.create_index_field('follower_count', ... 'uint', ... default=0) Adding Documents to the Index ----------------------------- Now, we can add some documents to our new search domain. First, you will need a document service object through which queries are sent:: >>> doc_service = domain.get_document_service() For this example, we will use a pre-populated list of sample content for our import.
You would normally pull such data from your database or another document store:: >>> users = [ { 'id': 1, 'username': 'dan', 'last_activity': 1334252740, 'follower_count': 20, 'location': 'USA', 'snippet': 'Dan likes watching sunsets and rock climbing', }, { 'id': 2, 'username': 'dankosaur', 'last_activity': 1334252904, 'follower_count': 1, 'location': 'UK', 'snippet': 'Likes to dress up as a dinosaur.', }, { 'id': 3, 'username': 'danielle', 'last_activity': 1334252969, 'follower_count': 100, 'location': 'DE', 'snippet': 'Just moved to Germany!' }, { 'id': 4, 'username': 'daniella', 'last_activity': 1334253279, 'follower_count': 7, 'location': 'USA', 'snippet': 'Just like Dan, I like to watch a good sunset, but heights scare me.', } ] When adding documents to our document service, we will batch them together. You can schedule a document to be added by using the :py:meth:`add ` method. Whenever you are adding a document, you must provide a unique ID, a version ID, and the actual document to be indexed. In this case, we are using the user ID as our unique ID. The version ID is used to determine which is the latest version of an object to be indexed. If you wish to update a document, you must use a higher version ID. In this case, we are using the time of the user's last activity as a version number:: >>> for user in users: ...     doc_service.add(user['id'], user['last_activity'], user) When you are ready to send the batched request to the document service, you can do so with the :py:meth:`commit ` method. Note that CloudSearch will charge per 1000 batch uploads. Each batch upload must be under 5MB:: >>> result = doc_service.commit() The result is an instance of :py:class:`CommitResponse ` which will wrap the plain dictionary response in a nicer object (i.e. result.adds, result.deletes) and raise an exception for us if all of our documents weren't actually committed. After you have successfully committed some documents to CloudSearch, you must use :py:meth:`clear_sdf ` if you wish to use the same document service connection again, so that its internal cache is cleared. Searching Documents ------------------- Now, let's try performing a search. First, we will need a SearchServiceConnection:: >>> search_service = domain.get_search_service() A standard search will return documents which contain the exact words being searched for:: >>> results = search_service.search(q="dan") >>> results.hits 2 >>> map(lambda x: x['id'], results) [u'1', u'4'] The standard search does not look at word order:: >>> results = search_service.search(q="dinosaur dress") >>> results.hits 1 >>> map(lambda x: x['id'], results) [u'2'] It's also possible to do more complex queries using the bq argument (Boolean Query). When you are using bq, your search terms must be enclosed in single quotes:: >>> results = search_service.search(bq="'dan'") >>> results.hits 2 >>> map(lambda x: x['id'], results) [u'1', u'4'] When you are using boolean queries, it's also possible to use wildcards to extend your search to all words which start with your search terms:: >>> results = search_service.search(bq="'dan*'") >>> results.hits 4 >>> map(lambda x: x['id'], results) [u'1', u'2', u'3', u'4'] The boolean query also allows you to create more complex queries.
You can OR terms together using "|", AND terms together using "+" or a space, and you can remove words from the query using the "-" operator:: >>> results = search_service.search(bq="'watched|moved'") >>> results.hits 2 >>> map(lambda x: x['id'], results) [u'3', u'4'] By default, the search will return 10 results but it is possible to adjust this by using the size argument as follows:: >>> results = search_service.search(bq="'dan*'", size=2) >>> results.hits 4 >>> map(lambda x: x['id'], results) [u'1', u'2'] It is also possible to offset the start of the search by using the start argument as follows:: >>> results = search_service.search(bq="'dan*'", start=2) >>> results.hits 4 >>> map(lambda x: x['id'], results) [u'3', u'4'] Ordering search results and rank expressions -------------------------------------------- If your search query is going to return many results, it is good to be able to sort them. You can order your search results by using the rank argument. You are able to sort on any fields which have the results option turned on:: >>> results = search_service.search(bq=query, rank=['-follower_count']) You can also create your own rank expressions to sort your results according to other criteria, such as showing the most recently active user, or combining the recency score with the text_relevance:: >>> domain.create_rank_expression('recently_active', 'last_activity') >>> domain.create_rank_expression('activish', ... 'text_relevance + ((follower_count/(time() - last_activity))*1000)') >>> results = search_service.search(bq=query, rank=['-recently_active']) Viewing and Adjusting Stemming for a Domain ------------------------------------------- A stemming dictionary maps related words to a common stem. A stem is typically the root or base word from which variants are derived. For example, run is the stem of running and ran. During indexing, Amazon CloudSearch uses the stemming dictionary when it performs text-processing on text fields. At search time, the stemming dictionary is used to perform text-processing on the search request. This enables matching on variants of a word. For example, if you map the term running to the stem run and then search for running, the request matches documents that contain run as well as running. To get the current stemming dictionary defined for a domain, use the :py:meth:`get_stemming ` method:: >>> stems = domain.get_stemming() >>> stems {u'stems': {}} >>> This returns a dictionary object that can be manipulated directly to add additional stems for your search domain by adding pairs of term:stem to the stems dictionary:: >>> stems['stems']['running'] = 'run' >>> stems['stems']['ran'] = 'run' >>> stems {u'stems': {u'ran': u'run', u'running': u'run'}} >>> This has changed the value locally. To update the information in Amazon CloudSearch, you need to save the data:: >>> stems.save() You can also access certain CloudSearch-specific attributes related to the stemming dictionary defined for your domain:: >>> stems.status u'RequiresIndexDocuments' >>> stems.creation_date u'2012-05-01T12:12:32Z' >>> stems.update_date u'2012-05-01T12:12:32Z' >>> stems.update_version 19 >>> The status indicates that, because you have changed the stems associated with the domain, you will need to re-index the documents in the domain before the new stems are used.
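When the status reports ``RequiresIndexDocuments``, the re-indexing can be kicked off from boto as well. A one-line sketch, assuming the ``Domain`` object from earlier exposes the ``index_documents`` action (a wrapper around the CloudSearch IndexDocuments call)::

    >>> domain.index_documents()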
Viewing and Adjusting Stopwords for a Domain -------------------------------------------- Stopwords are words that should typically be ignored both during indexing and at search time because they are either insignificant or so common that including them would result in a massive number of matches. To view the stopwords currently defined for your domain, use the :py:meth:`get_stopwords ` method:: >>> stopwords = domain.get_stopwords() >>> stopwords {u'stopwords': [u'a', u'an', u'and', u'are', u'as', u'at', u'be', u'but', u'by', u'for', u'in', u'is', u'it', u'of', u'on', u'or', u'the', u'to', u'was']} >>> You can add additional stopwords by simply appending the values to the list:: >>> stopwords['stopwords'].append('foo') >>> stopwords['stopwords'].append('bar') >>> stopwords Similarly, you could remove currently defined stopwords from the list. To save the changes, use the :py:meth:`save ` method:: >>> stopwords.save() The stopwords object has attributes similar to those described above for stemming that provide additional information about the stopwords in your domain. Viewing and Adjusting Synonyms for a Domain ------------------------------------------- You can configure synonyms for terms that appear in the data you are searching. That way, if a user searches for the synonym rather than the indexed term, the results will include documents that contain the indexed term. If you want two terms to match the same documents, you must define them as synonyms of each other. For example:: cat, feline feline, cat To view the synonyms currently defined for your domain, use the :py:meth:`get_synonyms ` method:: >>> synonyms = domain.get_synonyms() >>> synonyms {u'synonyms': {}} >>> You can define new synonyms by adding new term:synonyms entries to the synonyms dictionary object:: >>> synonyms['synonyms']['cat'] = ['feline', 'kitten'] >>> synonyms['synonyms']['dog'] = ['canine', 'puppy'] To save the changes, use the :py:meth:`save ` method:: >>> synonyms.save() The synonyms object has attributes similar to those described above for stemming that provide additional information about the synonyms in your domain. Deleting Documents ------------------ It is also possible to delete documents:: >>> import time >>> from datetime import datetime >>> doc_service = domain.get_document_service() >>> # Again we'll cheat and use the current epoch time as our version number >>> doc_service.delete(4, int(time.mktime(datetime.utcnow().timetuple()))) >>> doc_service.commit() boto-2.20.1/docs/source/cloudwatch_tut.rst000066400000000000000000000113301225267101000205460ustar00rootroot00000000000000.. cloudwatch_tut: ========== CloudWatch ========== First, make sure you have something to monitor. You can either create a LoadBalancer or enable monitoring on an existing EC2 instance. To enable monitoring, you can either call the monitor_instance method on the EC2Connection object or call the monitor method on the Instance object.
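For example, turning monitoring on for a single instance might look like this (a sketch; ``i-e573e68c`` is just the instance id used later in this tutorial)::

    >>> import boto.ec2
    >>> ec2 = boto.ec2.connect_to_region('us-west-2')
    >>> # Via the connection...
    >>> ec2.monitor_instance('i-e573e68c')
    >>> # ...or via the Instance object itself.
    >>> instance = ec2.get_only_instances(['i-e573e68c'])[0]
    >>> instance.monitor()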
It takes a while for the monitoring data to start accumulating but once it does, you can do this:: >>> import boto.ec2.cloudwatch >>> c = boto.ec2.cloudwatch.connect_to_region('us-west-2') >>> metrics = c.list_metrics() >>> metrics [Metric:NetworkIn, Metric:NetworkOut, Metric:NetworkOut(InstanceType,m1.small), Metric:NetworkIn(InstanceId,i-e573e68c), Metric:CPUUtilization(InstanceId,i-e573e68c), Metric:DiskWriteBytes(InstanceType,m1.small), Metric:DiskWriteBytes(ImageId,ami-a1ffb63), Metric:NetworkOut(ImageId,ami-a1ffb63), Metric:DiskWriteOps(InstanceType,m1.small), Metric:DiskReadBytes(InstanceType,m1.small), Metric:DiskReadOps(ImageId,ami-a1ffb63), Metric:CPUUtilization(InstanceType,m1.small), Metric:NetworkIn(ImageId,ami-a1ffb63), Metric:DiskReadOps(InstanceType,m1.small), Metric:DiskReadBytes, Metric:CPUUtilization, Metric:DiskWriteBytes(InstanceId,i-e573e68c), Metric:DiskWriteOps(InstanceId,i-e573e68c), Metric:DiskWriteOps, Metric:DiskReadOps, Metric:CPUUtilization(ImageId,ami-a1ffb63), Metric:DiskReadOps(InstanceId,i-e573e68c), Metric:NetworkOut(InstanceId,i-e573e68c), Metric:DiskReadBytes(ImageId,ami-a1ffb63), Metric:DiskReadBytes(InstanceId,i-e573e68c), Metric:DiskWriteBytes, Metric:NetworkIn(InstanceType,m1.small), Metric:DiskWriteOps(ImageId,ami-a1ffb63)] The list_metrics call will return a list of all of the available metrics that you can query against. Each entry in the list is a Metric object. As you can see from the list above, some of the metrics are generic metrics and some have Dimensions associated with them (e.g. InstanceType=m1.small). The Dimension can be used to refine your query. So, for example, I could query the metric Metric:CPUUtilization which would create the desired statistic by aggregating CPU utilization data across all sources of information available or I could refine that by querying the metric Metric:CPUUtilization(InstanceId,i-e573e68c) which would use only the data associated with the instance identified by the instance ID i-e573e68c. Because, for this example, I'm only monitoring a single instance, the set of metrics available to me is fairly limited. If I were monitoring many instances, using many different instance types and AMIs and also several load balancers, the list of available metrics would grow considerably. Once you have the list of available metrics, you can actually query the CloudWatch system for that metric. Let's choose the CPU utilization metric for our instance, which is at index 4 of the list above:: >>> m = metrics[4] >>> m Metric:CPUUtilization(InstanceId,i-e573e68c) The Metric object has a query method that lets us actually perform the query against the collected data in CloudWatch. To call that, we need a start time and end time to control the time span of data that we are interested in. For this example, let's say we want the data for the previous hour:: >>> import datetime >>> end = datetime.datetime.now() >>> start = end - datetime.timedelta(hours=1) We also need to supply the Statistic that we want reported and the Units to use for the results. The Statistic can be one of these values:: ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount'] And Units must be one of the following:: ['Seconds', 'Percent', 'Bytes', 'Bits', 'Count', 'Bytes/Second', 'Bits/Second', 'Count/Second'] The query method also takes an optional parameter, period. This parameter controls the granularity (in seconds) of the data returned. The smallest period is 60 seconds and the value must be a multiple of 60 seconds.
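For example, asking for five-minute granularity over the same one-hour window should hand back roughly 12 datapoints instead of 60 (a sketch, passing the optional ``period`` argument described above)::

    >>> coarse = m.query(start, end, 'Average', 'Percent', period=300)
    >>> len(coarse)
    12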
So, let's ask for the average as a percent:: >>> datapoints = m.query(start, end, 'Average', 'Percent') >>> len(datapoints) 60 Our period was 60 seconds and our duration was one hour so we should get 60 data points back and we can see that we did. Each element in the datapoints list is a DataPoint object which is a simple subclass of a Python dict object. Each Datapoint object contains all of the information available about that particular data point.:: >>> d = datapoints[0] >>> d {u'Average': 0.0, u'SampleCount': 1.0, u'Timestamp': u'2009-05-21T19:55:00Z', u'Unit': u'Percent'} My server obviously isn't very busy right now! boto-2.20.1/docs/source/commandline.rst000066400000000000000000000035541225267101000200120ustar00rootroot00000000000000.. _ref-boto_commandline: ================== Command Line Tools ================== Introduction ============ Boto ships with a number of command line utilities, which are installed when the package is installed. This guide outlines which ones are available & what they do. .. note:: If you're not already depending on these utilities, you may wish to check out the AWS-CLI (http://aws.amazon.com/cli/ - `User Guide`_ & `Reference Guide`_). It provides much wider & complete access to the AWS services. .. _`User Guide`: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html .. _`Reference Guide`: http://docs.aws.amazon.com/cli/latest/reference/ The included utilities available are: ``asadmin`` Works with Autoscaling ``bundle_image`` Creates a bundled AMI in S3 based on a EC2 instance ``cfadmin`` Works with CloudFront & invalidations ``cq`` Works with SQS queues ``cwutil`` Works with CloudWatch ``dynamodb_dump`` ``dynamodb_load`` Handle dumping/loading data from DynamoDB tables ``elbadmin`` Manages Elastic Load Balancer instances ``fetch_file`` Downloads an S3 key to disk ``glacier`` Lists vaults, jobs & uploads files to Glacier ``instance_events`` Lists all events for EC2 reservations ``kill_instance`` Kills a list of EC2 instances ``launch_instance`` Launches an EC2 instance ``list_instances`` Lists all of your EC2 instances ``lss3`` Lists what keys you have within a bucket in S3 ``mturk`` Provides a number of facilities for interacting with Mechanical Turk ``pyami_sendmail`` Sends an email from the Pyami instance ``route53`` Interacts with the Route53 service ``s3put`` Uploads a directory or a specific file(s) to S3 ``sdbadmin`` Allows for working with SimpleDB domains ``taskadmin`` A tool for working with the tasks in SimpleDB boto-2.20.1/docs/source/conf.py000066400000000000000000000016541225267101000162700ustar00rootroot00000000000000# -*- coding: utf-8 -*- import os import boto import sys sys.path.append(os.path.join(os.path.dirname(__file__), 'extensions')) extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', 'githublinks'] autoclass_content = "both" templates_path = ['_templates'] source_suffix = '.rst' master_doc = 'index' project = u'boto' copyright = u'2009,2010, Mitch Garnaat' version = boto.__version__ exclude_trees = [] pygments_style = 'sphinx' html_theme = 'boto_theme' html_theme_path = ["."] html_static_path = ['_static'] htmlhelp_basename = 'botodoc' latex_documents = [ ('index', 'boto.tex', u'boto Documentation', u'Mitch Garnaat', 'manual'), ] intersphinx_mapping = {'http://docs.python.org/': None} github_project_url = 'https://github.com/boto/boto/' try: release = os.environ.get('SVN_REVISION', 'HEAD') print release except Exception, e: print e html_title = "boto v%s" % version 
boto-2.20.1/docs/source/contributing.rst000066400000000000000000000163121225267101000202270ustar00rootroot00000000000000==================== Contributing to Boto ==================== Setting Up a Development Environment ==================================== While not strictly required, it is highly recommended to do development in a virtualenv. You can install virtualenv using pip:: $ pip install virtualenv Once the package is installed, you'll have a ``virtualenv`` command you can use to create a virtual environment:: $ virtualenv venv You can then activate the virtualenv:: $ . venv/bin/activate .. note:: You may also want to check out virtualenvwrapper_, which is a set of extensions to virtualenv that makes it easy to manage multiple virtual environments. A requirements.txt is included with boto which contains all the additional packages needed for boto development. You can install these packages by running:: $ pip install -r requirements.txt Running the Tests ================= All of the tests for boto are under the ``tests/`` directory. The tests for boto have been split into two main categories, unit and integration tests: * **unit** - These are tests that do not talk to any AWS services. Anyone should be able to run these tests without having any credentials configured. These are the types of tests that could be run in something like a public CI server. These tests tend to be fast. * **integration** - These are tests that will talk to AWS services, and will typically require a boto config file with valid credentials. Due to the nature of these tests, they tend to take a while to run. Also keep in mind anyone who runs these tests will incur any usage fees associated with the various AWS services. To run all the unit tests, cd to the ``tests/`` directory and run:: $ python test.py unit You should see output like this:: $ python test.py unit ................................ ---------------------------------------------------------------------- Ran 32 tests in 0.075s OK To run the integration tests, run:: $ python test.py integration Note that running the integration tests may take a while. Various integration tests have been tagged with service names to allow you to easily run tests by service type. For example, to run the ec2 integration tests you can run:: $ python test.py -t ec2 You can specify the ``-t`` argument multiple times. For example, to run the s3 and ec2 tests you can run:: $ python test.py -t ec2 -t s3 .. warning:: In the examples above no top level directory was specified. By default, nose will assume the current working directory, so the above command is equivalent to:: $ python test.py -t ec2 -t s3 . Be sure that you are in the ``tests/`` directory when running the tests, or explicitly specify the top level directory. For example, if you are in the root directory of the boto repo, you could run the ec2 and s3 tests by running:: $ python tests/test.py -t ec2 -t s3 tests/ You can use nose's collect plugin to see what tests are associated with each service tag:: $ python tests/test.py -t s3 -t ec2 --with-id --collect -v Testing Details --------------- The ``tests/test.py`` script is a lightweight wrapper around nose_. In general, you should be able to run ``nosetests`` directly instead of ``tests/test.py``. The ``tests/unit`` and ``tests/integration`` args in the commands above were referring to directories. The command line arguments are forwarded to nose when you use ``tests/test.py``.
For example, you can run:: $ python tests/test.py -x -vv tests/unit/cloudformation And the ``-x -vv tests/unit/cloudformation`` are forwarded to nose. See the nose_ docs for the supported command line options, or run ``nosetests --help``. The only thing that ``tests/test.py`` does before invoking nose is to inject an argument that specifies that any testcase tagged with "notdefault" should not be run. A testcase may be tagged with "notdefault" if the test author does not want everyone to run the tests. In general, there shouldn't be many of these tests, but some reasons a test may be tagged "notdefault" include: * An integration test that requires specific credentials. * An interactive test (the S3 MFA tests require you to type in the S/N and code). Tagging is done using nose's tagging_ plugin. To summarize, you can tag a specific testcase by setting an attribute on the object. Nose provides an ``attr`` decorator for convenience:: from nose.plugins.attrib import attr @attr('notdefault') def test_s3_mfa(): pass You can then run these tests by specifying:: nosetests -a 'notdefault' Or you can exclude any tests tagged with 'notdefault' by running:: nosetests -a '!notdefault' Conceptually, ``tests/test.py`` is injecting the "-a !notdefault" arg into nosetests. Testing Supported Python Versions ================================== Boto supports Python 2.6 and 2.7. An easy way to verify functionality across multiple Python versions is to use tox_. A tox.ini file is included with boto. You can run tox with no args and it will automatically test all supported Python versions:: $ tox GLOB sdist-make: boto/setup.py py26 sdist-reinst: boto/.tox/dist/boto-2.4.1.zip py26 runtests: commands[0] ................................ ---------------------------------------------------------------------- Ran 32 tests in 0.089s OK py27 sdist-reinst: boto/.tox/dist/boto-2.4.1.zip py27 runtests: commands[0] ................................ ---------------------------------------------------------------------- Ran 32 tests in 0.087s OK ____ summary ____ py26: commands succeeded py27: commands succeeded congratulations :) Writing Documentation ===================== The boto docs use sphinx_ to generate documentation. All of the docs are located in the ``docs/`` directory. To generate the html documentation, cd into the docs directory and run ``make html``:: $ cd docs $ make html The generated documentation will be in the ``docs/build/html`` directory. The source for the documentation is located in the ``docs/source`` directory, and uses `restructured text`_ for the markup language. .. _nose: http://readthedocs.org/docs/nose/en/latest/ .. _tagging: http://nose.readthedocs.org/en/latest/plugins/attrib.html .. _tox: http://tox.testrun.org/latest/ .. _virtualenvwrapper: http://www.doughellmann.com/projects/virtualenvwrapper/ .. _sphinx: http://sphinx.pocoo.org/ .. _restructured text: http://sphinx.pocoo.org/rest.html Merging A Branch (Core Devs) ============================ * All features/bugfixes should go through a review. * This includes new features added by core devs themselves. The usual branch/pull-request/merge flow that happens for community contributions should also apply to core. * Ensure there is proper test coverage. If there's a change in behavior, there should be a test demonstrating the failure before the change & passing with the change. * This helps ensure we don't regress in the future as well. * Merging of pull requests is typically done with ``git merge --no-ff <branch_name>``.
* GitHub's big green button is probably OK for very small PRs (like doc fixes), but you can't run tests on GH, so most things should get pulled down locally. boto-2.20.1/docs/source/documentation.rst000066400000000000000000000035631225267101000203730ustar00rootroot00000000000000.. _documentation: ======================= About the Documentation ======================= boto's documentation uses the Sphinx__ documentation system, which in turn is based on docutils__. The basic idea is that lightly-formatted plain-text documentation is transformed into HTML, PDF, and any other output format. __ http://sphinx.pocoo.org/ __ http://docutils.sf.net/ To actually build the documentation locally, you'll currently need to install Sphinx -- ``easy_install Sphinx`` should do the trick. Then, building the html is easy; just ``make html`` from the ``docs`` directory. To get started contributing, you'll want to read the `ReStructuredText Primer`__. After that, you'll want to read about the `Sphinx-specific markup`__ that's used to manage metadata, indexing, and cross-references. __ http://sphinx.pocoo.org/rest.html __ http://sphinx.pocoo.org/markup/ The main thing to keep in mind as you write and edit docs is that the more semantic markup you can add the better. So:: Import ``boto`` to your script... Isn't nearly as helpful as:: Add :mod:`boto` to your script... This is because Sphinx will generate a proper link for the latter, which greatly helps readers. There's basically no limit to the amount of useful markup you can add. The fabfile ----------- There is a Fabric__ file that can be used to build and deploy the documentation to a webserver that you have ssh access to. __ http://fabfile.org To build and deploy:: cd docs/ fab deploy:remote_path='/var/www/folder/whatever' --hosts=user@host This will get the latest code from subversion, add the revision number to the docs conf.py file, call ``make html`` to build the documentation, then it will tarball it up and scp it up to the host you specified and untarball it in the folder you specified, creating a symbolic link from the untarballed versioned folder to ``{remote_path}/boto-docs``. boto-2.20.1/docs/source/dynamodb2_tut.rst000066400000000000000000000440171225267101000202760ustar00rootroot00000000000000.. _dynamodb2_tut: =============================================== An Introduction to boto's DynamoDB v2 interface =============================================== This tutorial focuses on the boto interface to AWS' DynamoDB_ v2. This tutorial assumes that you have boto already downloaded and installed. .. _DynamoDB: http://aws.amazon.com/dynamodb/ .. warning:: This tutorial covers the **SECOND** major release of DynamoDB (including local secondary index support). The documentation for the original version of DynamoDB (& boto's support for it) is at :doc:`DynamoDB v1 `. The v2 DynamoDB API has both a high-level & low-level component. The low-level API (contained primarily within ``boto.dynamodb2.layer1``) provides an interface that matches almost exactly what is provided by the service's API. It supports all options available to the service. The high-level API attempts to make interacting with the service more natural from Python. It supports most of the featureset. The High-Level API ================== Most of the interaction centers around a single object, the ``Table``. Tables act as a way to effectively namespace your records. If you're familiar with database tables from an RDBMS, tables will feel somewhat familiar.
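For a quick taste of the difference between the two layers, here is the same "list the tables" call at each level (a sketch; credentials & region are resolved through the usual boto configuration, and the output shown is illustrative)::

    >>> # Low-level: a thin wrapper that returns the raw response structure.
    >>> from boto.dynamodb2.layer1 import DynamoDBConnection
    >>> DynamoDBConnection().list_tables()
    {u'TableNames': [u'users']}
    >>> # High-level: the ``Table`` abstraction used throughout this tutorial.
    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')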
Creating a New Table
--------------------

To create a new table, you need to call ``Table.create`` & specify (at a minimum) both the table's name as well as the key schema for the table.

Since both the key schema and local secondary indexes cannot be modified after the table is created, you'll need to plan ahead of time how you think the table will be used. Both the keys & indexes are also used for querying, so you'll want to represent the data you'll need when querying there as well.

For the schema, you can either have a single ``HashKey`` or a combined ``HashKey+RangeKey``. The ``HashKey`` by itself should be thought of as a unique identifier (for instance, like a username or UUID). It is typically looked up as an exact value. A ``HashKey+RangeKey`` combination is slightly different, in that the ``HashKey`` acts like a namespace/prefix & the ``RangeKey`` acts as a value that can be referred to by a sorted range of values.

For the local secondary indexes, you can choose from an ``AllIndex``, a ``KeysOnlyIndex`` or an ``IncludeIndex``. Each builds an index of values that can be queried on. The ``AllIndex`` duplicates all values onto the index (to prevent additional reads to fetch the data). The ``KeysOnlyIndex`` duplicates only the keys from the schema onto the index. The ``IncludeIndex`` lets you specify a list of fieldnames to duplicate over.

Simple example::

    >>> from boto.dynamodb2.fields import HashKey
    >>> from boto.dynamodb2.table import Table

    # Uses your ``aws_access_key_id`` & ``aws_secret_access_key`` from either a
    # config file or environment variable & the default region.
    >>> users = Table.create('users', schema=[
    ...     HashKey('username'),
    ... ])

A full example::

    >>> import boto.dynamodb2
    >>> from boto.dynamodb2.fields import HashKey, RangeKey, KeysOnlyIndex, AllIndex
    >>> from boto.dynamodb2.table import Table
    >>> from boto.dynamodb2.types import NUMBER

    >>> users = Table.create('users', schema=[
    ...     HashKey('account_type', data_type=NUMBER),
    ...     RangeKey('last_name'),
    ... ], throughput={
    ...     'read': 5,
    ...     'write': 15,
    ... }, indexes=[
    ...     AllIndex('EverythingIndex', parts=[
    ...         HashKey('account_type', data_type=NUMBER),
    ...     ])
    ... ],
    ... # If you need to specify custom parameters like keys or region info...
    ... connection=boto.dynamodb2.connect_to_region('us-east-1'))

Using an Existing Table
-----------------------

Once a table has been created, using it is relatively simple. You can either specify just the ``table_name`` (allowing the object to lazily do an additional call to get details about itself if needed) or provide the ``schema/indexes`` again (same as what was used with ``Table.create``) to avoid extra overhead.

Lazy example::

    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

Efficient example::

    >>> from boto.dynamodb2.fields import HashKey, RangeKey, AllIndex
    >>> from boto.dynamodb2.table import Table
    >>> from boto.dynamodb2.types import NUMBER

    >>> users = Table('users', schema=[
    ...     HashKey('account_type', data_type=NUMBER),
    ...     RangeKey('last_name'),
    ... ], indexes=[
    ...     AllIndex('EverythingIndex', parts=[
    ...         HashKey('account_type', data_type=NUMBER),
    ...     ])
    ... ])
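Note that table creation is not instantaneous; a new table passes through a ``CREATING`` status before it can serve traffic. A minimal polling sketch (an assumption on my part, not part of the original tutorial), using ``Table.describe`` whose response follows the DescribeTable layout::

    >>> import time

    >>> description = users.describe()
    >>> while description['Table']['TableStatus'] != 'ACTIVE':
    ...     # Not ready yet; wait a bit & re-fetch the description.
    ...     time.sleep(5)
    ...     description = users.describe()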
Creating a New Item
-------------------

Once you have a ``Table`` instance, you can add new items to the table. There are two ways to do this.

The first is to use the ``Table.put_item`` method. Simply hand it a dictionary of data & it will create the item on the server side. This dictionary should be relatively flat (though you can nest other dictionaries within it) & **must** contain the keys used in the ``schema``.

Example::

    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

    # Create the new user.
    >>> users.put_item(data={
    ...     'username': 'johndoe',
    ...     'first_name': 'John',
    ...     'last_name': 'Doe',
    ... })
    True

The alternative is to manually construct an ``Item`` instance & tell it to ``save`` itself. This is useful if the object will be around for awhile & you don't want to re-fetch it.

Example::

    >>> from boto.dynamodb2.items import Item
    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

    # WARNING - This doesn't save it yet!
    >>> johndoe = Item(users, data={
    ...     'username': 'johndoe',
    ...     'first_name': 'John',
    ...     'last_name': 'Doe',
    ... })
    # The data now gets persisted to the server.
    >>> johndoe.save()
    True

Getting an Item & Accessing Data
--------------------------------

With data now in DynamoDB, if you know the key of the item, you can fetch it back out. Specify the key value(s) as kwargs to ``Table.get_item``.

Example::

    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

    >>> johndoe = users.get_item(username='johndoe')

Once you have an ``Item`` instance, it presents a dictionary-like interface to the data::

    >>> johndoe = users.get_item(username='johndoe')

    # Read a field out.
    >>> johndoe['first_name']
    'John'

    # Change a field (DOESN'T SAVE YET!).
    >>> johndoe['first_name'] = 'Johann'

    # Delete data from it (DOESN'T SAVE YET!).
    >>> del johndoe['last_name']

Updating an Item
----------------

Just creating new items or changing only the in-memory version of the ``Item`` isn't particularly effective. To persist the changes to DynamoDB, you have three choices.

The first is sending all the data with the expectation nothing has changed since you read the data. DynamoDB will verify the data is in the original state and, if so, all of the item's data will be written. If that expectation fails, the call will fail::

    >>> johndoe = users.get_item(username='johndoe')
    >>> johndoe['first_name'] = 'Johann'
    >>> johndoe['whatever'] = "man, that's just like your opinion"
    >>> del johndoe['last_name']

    # Affects all fields, even the ones not changed locally.
    >>> johndoe.save()
    True

The second is a full overwrite. If you can be confident your version of the data is the most correct, you can force an overwrite of the data::

    >>> johndoe = users.get_item(username='johndoe')
    >>> johndoe['first_name'] = 'Johann'
    >>> johndoe['whatever'] = "man, that's just like your opinion"
    >>> del johndoe['last_name']

    # Specify ``overwrite=True`` to fully replace the data.
    >>> johndoe.save(overwrite=True)
    True

The last is a partial update. If you've only modified certain fields, you can send a partial update that only writes those fields, allowing other (potentially changed) fields to go untouched::

    >>> johndoe = users.get_item(username='johndoe')
    >>> johndoe['first_name'] = 'Johann'
    >>> johndoe['whatever'] = "man, that's just like your opinion"
    >>> del johndoe['last_name']

    # Partial update, only sending/affecting the
    # ``first_name/whatever/last_name`` fields.
    >>> johndoe.partial_save()
    True
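When the conditional (default) ``save()`` loses a race with another writer, boto raises an exception rather than silently clobbering data. A minimal retry sketch, assuming the ``ConditionalCheckFailedException`` exposed by ``boto.dynamodb2.exceptions`` (not shown in the original tutorial)::

    >>> from boto.dynamodb2.exceptions import ConditionalCheckFailedException

    >>> try:
    ...     johndoe.save()
    ... except ConditionalCheckFailedException:
    ...     # Someone else changed the item since we read it; re-fetch,
    ...     # reapply our change & try again.
    ...     johndoe = users.get_item(username='johndoe')
    ...     johndoe['first_name'] = 'Johann'
    ...     johndoe.save()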
Deleting an Item
----------------

You can also delete items from the table. You have two choices, depending on what data you have present.

If you already have an ``Item`` instance, the easiest approach is just to call ``Item.delete``::

    >>> johndoe.delete()
    True

If you don't have an ``Item`` instance & you don't want to incur the ``Table.get_item`` call to get it, you can call the ``Table.delete_item`` method::

    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

    >>> users.delete_item(username='johndoe')
    True

Batch Writing
-------------

If you're loading a lot of data at a time, making use of batch writing can both speed up the process & reduce the number of write requests made to the service.

Batch writing involves wrapping the calls you want batched in a context manager. The context manager imitates the ``Table.put_item`` & ``Table.delete_item`` APIs. Getting & using the context manager looks like::

    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

    >>> with users.batch_write() as batch:
    ...     batch.put_item(data={
    ...         'username': 'anotherdoe',
    ...         'first_name': 'Another',
    ...         'last_name': 'Doe',
    ...         'date_joined': int(time.time()),
    ...     })
    ...     batch.put_item(data={
    ...         'username': 'alice',
    ...         'first_name': 'Alice',
    ...         'date_joined': int(time.time()),
    ...     })
    ...     batch.delete_item(username='jane')

However, there are some limitations on what you can do within the context manager.

* It can't read data at all, nor batch any other kinds of operations.
* You can't put & delete the same data within a batch request.

.. note::

    Additionally, the context manager can only batch 25 items at a time for a request (this is a DynamoDB limitation). It is handled for you so you can keep writing additional items, but you should be aware that 100 ``put_item`` calls is 4 batch requests, not 1.

Querying
--------

Manually fetching out each item by itself isn't tenable for large datasets. To cope with fetching many records, you can either perform a standard query, query via a local secondary index or scan the entire table.

A standard query typically gets run against a hash+range key combination. Filter parameters are passed as kwargs & use a ``__`` to separate the fieldname from the operator being used to filter the value.

In terms of querying, our original schema is less than optimal. For the following examples, we'll be using the following table setup::

    >>> users = Table.create('users', schema=[
    ...     HashKey('account_type'),
    ...     RangeKey('last_name'),
    ... ], indexes=[
    ...     AllIndex('DateJoinedIndex', parts=[
    ...         HashKey('account_type'),
    ...         RangeKey('date_joined', data_type=NUMBER),
    ...     ]),
    ... ])

When executing the query, you get an iterable back that contains your results. These results may be spread over multiple requests as DynamoDB paginates them. This is done transparently, but you should be aware it may take more than one request.

To run a query for last names starting with the letter "D"::

    >>> names_with_d = users.query(
    ...     account_type__eq='standard_user',
    ...     last_name__beginswith='D'
    ... )

    >>> for user in names_with_d:
    ...     print user['first_name']
    'Bob'
    'Jane'
    'John'

You can also reverse results (``reverse=True``) as well as limiting them (``limit=2``)::

    >>> rev_with_d = users.query(
    ...     account_type__eq='standard_user',
    ...     last_name__beginswith='D',
    ...     reverse=True,
    ...     limit=2
    ... )

    >>> for user in rev_with_d:
    ...     print user['first_name']
    'John'
    'Jane'
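Other comparison operators follow the same ``field__operator`` kwarg pattern. As one more sketch (an illustrative addition, assuming the same table), a sorted range of last names can be expressed with ``between``, which takes a two-value tuple::

    >>> d_through_f = users.query(
    ...     account_type__eq='standard_user',
    ...     # Last names from 'D' up to (but not past) 'G'.
    ...     last_name__between=('D', 'G')
    ... )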
You can also run queries against the local secondary indexes. Simply provide the index name (``index='DateJoinedIndex'``) & filter parameters against its fields::

    # Users within the last hour.
    >>> recent = users.query(
    ...     account_type__eq='standard_user',
    ...     date_joined__gte=time.time() - (60 * 60),
    ...     index='DateJoinedIndex'
    ... )

    >>> for user in recent:
    ...     print user['first_name']
    'Alice'
    'Jane'

Finally, if you need to query on data that's not in either a key or in an index, you can run a ``Table.scan`` across the whole table, which accepts a similar but expanded set of filters. If you're familiar with the Map/Reduce concept, this is akin to what DynamoDB does.

.. warning::

    Scans are consistent & run over the entire table, so relatively speaking, they're more expensive than plain queries or queries against an LSI.

An example scan of all records in the table looks like::

    >>> all_users = users.scan()

Filtering a scan looks like::

    >>> owners_with_emails = users.scan(
    ...     is_owner__eq=1,
    ...     email__null=False,
    ... )

    >>> for user in owners_with_emails:
    ...     print user['first_name']
    'George'
    'John'

Parallel Scan
-------------

DynamoDB also includes a feature called "Parallel Scan", which allows you to make use of **extra** read capacity to divide up your result set & scan an entire table faster. This does require extra code on the user's part & you should ensure that you need the speed boost, have enough data to justify it and have the extra capacity to read it without impacting other queries/scans.

To run it, you should pick the ``total_segments`` to use, which is an integer representing the number of temporary partitions you'd divide your table into. You then need to spin up a thread/process for each one, giving each thread/process a ``segment``, which is a zero-based integer of the segment you'd like to scan.

An example of using parallel scan to send out email to all users might look something like::

    #!/usr/bin/env python
    import threading

    import boto.ses
    import boto.dynamodb2
    from boto.dynamodb2.table import Table

    AWS_ACCESS_KEY_ID = ''
    AWS_SECRET_ACCESS_KEY = ''
    APPROVED_EMAIL = 'some@address.com'

    def send_email(email):
        # Using Amazon's Simple Email Service, send an email to a given
        # email address. You must already have an email you've verified with
        # AWS before this will work.
        conn = boto.ses.connect_to_region(
            'us-east-1',
            aws_access_key_id=AWS_ACCESS_KEY_ID,
            aws_secret_access_key=AWS_SECRET_ACCESS_KEY
        )
        conn.send_email(
            APPROVED_EMAIL,
            "[OurSite] New feature alert!",
            "We've got some exciting news! We added a new feature to...",
            [email]
        )

    def process_segment(segment=0, total_segments=10):
        # This method/function is executed in each thread, each getting its
        # own segment to process through.
        conn = boto.dynamodb2.connect_to_region(
            'us-east-1',
            aws_access_key_id=AWS_ACCESS_KEY_ID,
            aws_secret_access_key=AWS_SECRET_ACCESS_KEY
        )
        table = Table('users', connection=conn)

        # We pass in the segment & total_segments to scan here.
        for user in table.scan(segment=segment, total_segments=total_segments):
            send_email(user['email'])

    def send_all_emails():
        pool = []
        # We're choosing to divide the table in 3, then...
        pool_size = 3

        # ...spinning up a thread for each segment.
        for i in range(pool_size):
            worker = threading.Thread(
                target=process_segment,
                kwargs={
                    'segment': i,
                    'total_segments': pool_size,
                }
            )
            pool.append(worker)
            # We start them to let them start scanning & consuming their
            # assigned segment.
            worker.start()

        # Finally, we wait for each to finish.
        for thread in pool:
            thread.join()

    if __name__ == '__main__':
        send_all_emails()

Batch Reading
-------------

Similar to batch writing, batch reading can also help reduce the number of API requests necessary to access a large number of items.
The ``Table.batch_get`` method takes a list (or any sliceable collection) of keys & fetches all of them, presented as an iterator interface.

This is done lazily, so if you never iterate over the results, no requests are executed. Additionally, if you only iterate over part of the set, the minimum number of calls are made to fetch those results (typically max 100 per response).

Example::

    >>> from boto.dynamodb2.table import Table
    >>> users = Table('users')

    # No request yet.
    >>> many_users = users.batch_get(keys=[
        {'username': 'alice'},
        {'username': 'bob'},
        {'username': 'fred'},
        {'username': 'jane'},
        {'username': 'johndoe'},
    ])

    # Now the request is performed, requesting all five in one request.
    >>> for user in many_users:
    ...     print user['first_name']
    'Alice'
    'Bobby'
    'Fred'
    'Jane'
    'John'

Deleting a Table
----------------

Deleting a table is a simple exercise. When you no longer need a table, simply run::

    >>> users.delete()

DynamoDB Local
--------------

`Amazon DynamoDB Local`_ is a utility which can be used to mock DynamoDB during development. Connecting to a running DynamoDB Local server is easy::

    #!/usr/bin/env python
    from boto.dynamodb2.layer1 import DynamoDBConnection

    # Connect to DynamoDB Local
    conn = DynamoDBConnection(
        host='localhost',
        port=8000,
        aws_secret_access_key='anything',
        is_secure=False)

    # List all local tables
    tables = conn.list_tables()

.. _`Amazon DynamoDB Local`: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.html

Next Steps
----------

You can find additional information about other calls & parameter options in the :doc:`API docs `.

boto-2.20.1/docs/source/dynamodb_tut.rst000066400000000000000000000257201225267101000202140ustar00rootroot00000000000000.. _dynamodb_tut:

============================================
An Introduction to boto's DynamoDB interface
============================================

This tutorial focuses on the boto interface to AWS' DynamoDB_. This tutorial assumes that you have boto already downloaded and installed.

.. _DynamoDB: http://aws.amazon.com/dynamodb/

.. warning::

    This tutorial covers the **ORIGINAL** release of DynamoDB. It has since been supplanted by a second major version & an updated API to talk to the new version. The documentation for the new version of DynamoDB (& boto's support for it) is at :doc:`DynamoDB v2 `.

Creating a Connection
---------------------

The first step in accessing DynamoDB is to create a connection to the service. To do so, the most straightforward way is the following::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region(
        'us-west-2',
        aws_access_key_id='',
        aws_secret_access_key='')
    >>> conn

Bear in mind that if you have your credentials in boto config in your home directory, the two keyword arguments in the call above are not needed. More details on configuration can be found in :doc:`boto_config_tut`.

The :py:func:`boto.dynamodb.connect_to_region` function returns a :py:class:`boto.dynamodb.layer2.Layer2` instance, which is a high-level API for working with DynamoDB. Layer2 is a set of abstractions that sit atop the lower level :py:class:`boto.dynamodb.layer1.Layer1` API, which closely mirrors the Amazon DynamoDB API. For the purpose of this tutorial, we'll just be covering Layer2.
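That said, if you ever need an operation that Layer2 doesn't wrap, the underlying ``Layer1`` object is available as the connection's ``layer1`` attribute. A minimal sketch (an illustrative addition; raw dictionaries in & out)::

    # The same list-tables call, made through the low-level layer.
    >>> conn.layer1.list_tables()
    {u'TableNames': [u'test-table', u'another-table']}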
Listing Tables -------------- Now that we have a DynamoDB connection object, we can then query for a list of existing tables in that region:: >>> conn.list_tables() ['test-table', 'another-table'] Creating Tables --------------- DynamoDB tables are created with the :py:meth:`Layer2.create_table ` method. While DynamoDB's items (a rough equivalent to a relational DB's row) don't have a fixed schema, you do need to create a schema for the table's hash key element, and the optional range key element. This is explained in greater detail in DynamoDB's `Data Model`_ documentation. We'll start by defining a schema that has a hash key and a range key that are both strings:: >>> message_table_schema = conn.create_schema( hash_key_name='forum_name', hash_key_proto_value=str, range_key_name='subject', range_key_proto_value=str ) The next few things to determine are table name and read/write throughput. We'll defer explaining throughput to the DynamoDB's `Provisioned Throughput`_ docs. We're now ready to create the table:: >>> table = conn.create_table( name='messages', schema=message_table_schema, read_units=10, write_units=10 ) >>> table Table(messages) This returns a :py:class:`boto.dynamodb.table.Table` instance, which provides simple ways to create (put), update, and delete items. Getting a Table --------------- To retrieve an existing table, use :py:meth:`Layer2.get_table `:: >>> conn.list_tables() ['test-table', 'another-table', 'messages'] >>> table = conn.get_table('messages') >>> table Table(messages) :py:meth:`Layer2.get_table `, like :py:meth:`Layer2.create_table `, returns a :py:class:`boto.dynamodb.table.Table` instance. Keep in mind that :py:meth:`Layer2.get_table ` will make an API call to retrieve various attributes of the table including the creation time, the read and write capacity, and the table schema. If you already know the schema, you can save an API call and create a :py:class:`boto.dynamodb.table.Table` object without making any calls to Amazon DynamoDB:: >>> table = conn.table_from_schema( name='messages', schema=message_table_schema) If you do this, the following fields will have ``None`` values: * create_time * status * read_units * write_units In addition, the ``item_count`` and ``size_bytes`` will be 0. If you create a table object directly from a schema object and decide later that you need to retrieve any of these additional attributes, you can use the :py:meth:`Table.refresh ` method:: >>> from boto.dynamodb.schema import Schema >>> table = conn.table_from_schema( name='messages', schema=Schema.create(hash_key=('forum_name', 'S'), range_key=('subject', 'S'))) >>> print table.write_units None >>> # Now we decide we need to know the write_units: >>> table.refresh() >>> print table.write_units 10 The recommended best practice is to retrieve a table object once and use that object for the duration of your application. 
So, for example, instead of this::

    class Application(object):
        def __init__(self, layer2):
            self._layer2 = layer2

        def retrieve_item(self, table_name, key):
            return self._layer2.get_table(table_name).get_item(key)

You can do something like this instead::

    class Application(object):
        def __init__(self, layer2):
            self._layer2 = layer2
            self._tables_by_name = {}

        def retrieve_item(self, table_name, key):
            table = self._tables_by_name.get(table_name)
            if table is None:
                table = self._layer2.get_table(table_name)
                self._tables_by_name[table_name] = table
            return table.get_item(key)

Describing Tables
-----------------

To get a complete description of a table, use :py:meth:`Layer2.describe_table `::

    >>> conn.list_tables()
    ['test-table', 'another-table', 'messages']
    >>> conn.describe_table('messages')
    {
        'Table': {
            'CreationDateTime': 1327117581.624,
            'ItemCount': 0,
            'KeySchema': {
                'HashKeyElement': {
                    'AttributeName': 'forum_name',
                    'AttributeType': 'S'
                },
                'RangeKeyElement': {
                    'AttributeName': 'subject',
                    'AttributeType': 'S'
                }
            },
            'ProvisionedThroughput': {
                'ReadCapacityUnits': 10,
                'WriteCapacityUnits': 10
            },
            'TableName': 'messages',
            'TableSizeBytes': 0,
            'TableStatus': 'ACTIVE'
        }
    }

Adding Items
------------

Continuing on with our previously created ``messages`` table, adding a new item looks like this::

    >>> table = conn.get_table('messages')
    >>> item_data = {
            'Body': 'http://url_to_lolcat.gif',
            'SentBy': 'User A',
            'ReceivedTime': '12/9/2011 11:36:03 PM',
        }
    >>> item = table.new_item(
            # Our hash key is 'forum'
            hash_key='LOLCat Forum',
            # Our range key is 'subject'
            range_key='Check this out!',
            # This dictionary holds the rest of the item's attributes.
            attrs=item_data
        )

The :py:meth:`Table.new_item ` method creates a new :py:class:`boto.dynamodb.item.Item` instance with your specified hash key, range key, and attributes already set. :py:class:`Item ` is a :py:class:`dict` sub-class, meaning you can edit your data as such::

    item['a_new_key'] = 'testing'
    del item['a_new_key']

After you are happy with the contents of the item, use :py:meth:`Item.put ` to commit it to DynamoDB::

    >>> item.put()

Retrieving Items
----------------

Now, let's check if it got added correctly. Since DynamoDB works under an 'eventual consistency' mode, we need to specify that we wish a consistent read, as follows::

    >>> table = conn.get_table('messages')
    >>> item = table.get_item(
            # Your hash key was 'forum_name'
            hash_key='LOLCat Forum',
            # Your range key was 'subject'
            range_key='Check this out!',
            # Request a consistent read.
            consistent_read=True
        )
    >>> item
    {
        # Note that this was your hash key attribute (forum_name)
        'forum_name': 'LOLCat Forum',
        # This is your range key attribute (subject)
        'subject': 'Check this out!',
        'Body': 'http://url_to_lolcat.gif',
        'ReceivedTime': '12/9/2011 11:36:03 PM',
        'SentBy': 'User A',
    }

Updating Items
--------------

To update an item's attributes, simply retrieve it, modify the value, then :py:meth:`Item.put ` it again::

    >>> table = conn.get_table('messages')
    >>> item = table.get_item(
            hash_key='LOLCat Forum',
            range_key='Check this out!'
        )
    >>> item['SentBy'] = 'User B'
    >>> item.put()

Working with Decimals
---------------------

To avoid the loss of precision, you can stipulate that the ``decimal.Decimal`` type be used for numeric values::

    >>> import decimal
    >>> conn.use_decimals()
    >>> table = conn.get_table('messages')
    >>> item = table.new_item(
            hash_key='LOLCat Forum',
            range_key='Check this out!'
) >>> item['decimal_type'] = decimal.Decimal('1.12345678912345') >>> item.put() >>> print table.get_item('LOLCat Forum', 'Check this out!') {u'forum_name': 'LOLCat Forum', u'decimal_type': Decimal('1.12345678912345'), u'subject': 'Check this out!'} You can enable the usage of ``decimal.Decimal`` by using either the ``use_decimals`` method, or by passing in the :py:class:`Dynamizer ` class for the ``dynamizer`` param:: >>> from boto.dynamodb.types import Dynamizer >>> conn = boto.dynamodb.connect_to_region(dynamizer=Dynamizer) This mechanism can also be used if you want to customize the encoding/decoding process of DynamoDB types. Deleting Items -------------- To delete items, use the :py:meth:`Item.delete ` method:: >>> table = conn.get_table('messages') >>> item = table.get_item( hash_key='LOLCat Forum', range_key='Check this out!' ) >>> item.delete() Deleting Tables --------------- .. WARNING:: Deleting a table will also **permanently** delete all of its contents without prompt. Use carefully. There are two easy ways to delete a table. Through your top-level :py:class:`Layer2 ` object:: >>> conn.delete_table(table) Or by getting the table, then using :py:meth:`Table.delete `:: >>> table = conn.get_table('messages') >>> table.delete() .. _Data Model: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html .. _Provisioned Throughput: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html boto-2.20.1/docs/source/ec2_tut.rst000066400000000000000000000143631225267101000170710ustar00rootroot00000000000000.. _ec2_tut: ======================================= An Introduction to boto's EC2 interface ======================================= This tutorial focuses on the boto interface to the Elastic Compute Cloud from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto. Creating a Connection --------------------- The first step in accessing EC2 is to create a connection to the service. The recommended way of doing this in boto is:: >>> import boto.ec2 >>> conn = boto.ec2.connect_to_region("us-west-2", ... aws_access_key_id='', ... aws_secret_access_key='') At this point the variable ``conn`` will point to an EC2Connection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the boto config environment variables and then simply specify which region you want as follows:: >>> conn = boto.ec2.connect_to_region("us-west-2") In either case, conn will point to an EC2Connection object which we will use throughout the remainder of this tutorial. Launching Instances ------------------- Possibly, the most important and common task you'll use EC2 for is to launch, stop and terminate instances. In its most primitive form, you can launch an instance as follows:: >>> conn.run_instances('') This will launch an instance in the specified region with the default parameters. You will not be able to SSH into this machine, as it doesn't have a security group set. See :doc:`security_groups` for details on creating one. Now, let's say that you already have a key pair, want a specific type of instance, and you have your :doc:`security group ` all setup. 
In this case we can use the keyword arguments to accomplish that::

    >>> conn.run_instances(
            '',
            key_name='myKey',
            instance_type='c1.xlarge',
            security_groups=['your-security-group-here'])

The main caveat with the above call is that it is possible to request an instance type that is not compatible with the provided AMI (for example, the AMI may have been created for 64-bit instances and you choose an m1.small instance_type). For more details on the plethora of possible keyword parameters, be sure to check out boto's :doc:`EC2 API reference `.

Stopping Instances
------------------

Once you have your instances up and running, you might wish to shut them down if they're not in use. Please note that this will only de-allocate virtual hardware resources (as well as instance store drives), but won't destroy your EBS volumes -- this means you'll pay nominal provisioned EBS storage fees even if your instance is stopped. To do this, run the following::

    >>> conn.stop_instances(instance_ids=['instance-id-1','instance-id-2', ...])

This will request a 'graceful' stop of each of the specified instances. If you wish to request the equivalent of unplugging your instance(s), simply add ``force=True`` keyword argument to the call above. Please note that stop instance is not allowed with Spot instances.

Terminating Instances
---------------------

Once you are completely done with your instance and wish to surrender the virtual hardware, root EBS volume and all other underlying components, you can request instance termination. To do so you can use the call below::

    >>> conn.terminate_instances(instance_ids=['instance-id-1','instance-id-2', ...])

Please use with care since once you request termination for an instance there is no turning back.

Checking What Instances Are Running
-----------------------------------

You can also get information on your currently running instances::

    >>> reservations = conn.get_all_reservations()
    >>> reservations
    [Reservation:r-00000000]

A reservation corresponds to a command to start instances. You can see what instances are associated with a reservation::

    >>> instances = reservations[0].instances
    >>> instances
    [Instance:i-00000000]

An instance object allows you get more meta-data available about the instance::

    >>> inst = instances[0]
    >>> inst.instance_type
    u'c1.xlarge'
    >>> inst.placement
    u'us-west-2'

In this case, we can see that our instance is a c1.xlarge instance in the `us-west-2` availability zone.

=================================
Using Elastic Block Storage (EBS)
=================================

EBS Basics
----------

EBS can be used by EC2 instances for permanent storage. Note that EBS volumes must be in the same availability zone as the EC2 instance you wish to attach it to.

To actually create a volume you will need to specify a few details. The following example will create a 50GB EBS in one of the `us-west-2` availability zones::

    >>> vol = conn.create_volume(50, "us-west-2")
    >>> vol
    Volume:vol-00000000

You can check that the volume is now ready and available::

    >>> curr_vol = conn.get_all_volumes([vol.id])[0]
    >>> curr_vol.status
    u'available'
    >>> curr_vol.zone
    u'us-west-2'

We can now attach this volume to the EC2 instance we created earlier, making it available as a new device::

    >>> conn.attach_volume(vol.id, inst.id, "/dev/sdx")
    u'attaching'

You will now have a new volume attached to your instance. Note that with some Linux kernels, `/dev/sdx` may get translated to `/dev/xvdx`. This device can now be used as a normal block device within Linux.
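Because volume creation and attachment are asynchronous, scripts usually wait for the volume to reach the desired state before proceeding. A minimal sketch of the same create-then-attach flow with a wait in between (an illustrative addition; ``Volume.update`` re-fetches & returns the current status)::

    >>> import time

    >>> vol = conn.create_volume(50, "us-west-2")
    # Poll until the volume is ready to be attached.
    >>> while vol.update() != 'available':
    ...     time.sleep(5)
    >>> conn.attach_volume(vol.id, inst.id, "/dev/sdx")
    u'attaching'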
Working With Snapshots
----------------------

Snapshots allow you to take point-in-time copies of an EBS volume for future recovery. Snapshots allow you to create incremental backups, and can also be used to instantiate multiple new volumes. Snapshots can also be used to move EBS volumes across availability zones or to make backups to S3.

Creating a snapshot is easy::

    >>> snapshot = conn.create_snapshot(vol.id, 'My snapshot')
    >>> snapshot
    Snapshot:snap-00000000

Once you have a snapshot, you can create a new volume from it. Volumes are created lazily from snapshots, which means you can start using such a volume straight away::

    >>> new_vol = snapshot.create_volume('us-west-2')
    >>> conn.attach_volume(new_vol.id, inst.id, "/dev/sdy")
    u'attaching'

If you no longer need a snapshot, you can also easily delete it::

    >>> conn.delete_snapshot(snapshot.id)
    True

boto-2.20.1/docs/source/elb_tut.rst000066400000000000000000000221411225267101000171530ustar00rootroot00000000000000.. _elb_tut:

==========================================================
An Introduction to boto's Elastic Load Balancing interface
==========================================================

This tutorial focuses on the boto interface for `Elastic Load Balancing`_ from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto, and are familiar with the boto ec2 interface.

.. _Elastic Load Balancing: http://aws.amazon.com/elasticloadbalancing/

Elastic Load Balancing Concepts
-------------------------------

`Elastic Load Balancing`_ (ELB) is intimately connected with Amazon's `Elastic Compute Cloud`_ (EC2) service. Using the ELB service allows you to create a load balancer - a DNS endpoint and set of ports that distributes incoming requests to a set of EC2 instances. The advantage of using a load balancer is that it allows you to truly scale up or down a set of backend instances without disrupting service. Before the ELB service, you had to do this manually by launching an EC2 instance and installing load balancer software on it (nginx, haproxy, perlbal, etc.) to distribute traffic to other EC2 instances.

Recall that the EC2 service is split into Regions, which are further divided into Availability Zones (AZ). For example, the US-East region is divided into us-east-1a, us-east-1b, us-east-1c, us-east-1d, and us-east-1e. You can think of AZs as data centers - each runs off a different set of ISP backbones and power providers.

ELB load balancers can span multiple AZs but cannot span multiple regions. That means that if you'd like to create a set of instances spanning both the US and Europe Regions you'd have to create two load balancers and have some sort of other means of distributing requests between the two load balancers. An example of this could be using GeoIP techniques to choose the correct load balancer, or perhaps DNS round robin. Keep in mind also that traffic is distributed equally over all AZs the ELB balancer spans. This means you should have an equal number of instances in each AZ if you want to equally distribute load amongst all your instances.

.. _Elastic Compute Cloud: http://aws.amazon.com/ec2/

Creating a Connection
---------------------

The first step in accessing ELB is to create a connection to the service. Like EC2, the ELB service has a different endpoint for each region. By default the US East endpoint is used.
To choose a specific region, use the ``connect_to_region`` function::

    >>> import boto.ec2.elb
    >>> elb = boto.ec2.elb.connect_to_region('us-west-2')

Here's yet another way to discover what regions are available and then connect to one::

    >>> import boto.ec2.elb
    >>> regions = boto.ec2.elb.regions()
    >>> regions
    [RegionInfo:us-east-1,
     RegionInfo:ap-northeast-1,
     RegionInfo:us-west-1,
     RegionInfo:us-west-2,
     RegionInfo:ap-southeast-1,
     RegionInfo:eu-west-1]
    >>> elb = regions[-1].connect()

Alternatively, edit your boto.cfg with the default ELB endpoint to use::

    [Boto]
    elb_region_name = eu-west-1
    elb_region_endpoint = elasticloadbalancing.eu-west-1.amazonaws.com

Getting Existing Load Balancers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To retrieve any existing load balancers:

>>> conn.get_all_load_balancers()
[LoadBalancer:load-balancer-prod, LoadBalancer:load-balancer-staging]

You can also filter by name:

>>> conn.get_all_load_balancers(load_balancer_names=['load-balancer-prod'])
[LoadBalancer:load-balancer-prod]

:py:meth:`get_all_load_balancers ` returns a :py:class:`boto.resultset.ResultSet` that contains instances of :class:`boto.ec2.elb.loadbalancer.LoadBalancer`, each of which abstracts access to a load balancer. :py:class:`ResultSet ` works very much like a list.

>>> balancers = conn.get_all_load_balancers()
>>> balancers[0]
LoadBalancer:load-balancer-prod
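Each ``LoadBalancer`` object exposes its configuration & can report per-instance health. A minimal inspection sketch (an illustrative addition; the printed values are made up, and it assumes at least one balancer with registered instances):

>>> lb = balancers[0]
>>> lb.name, lb.dns_name
('load-balancer-prod', 'load-balancer-prod-1234567890.us-east-1.elb.amazonaws.com')
>>> # Ask ELB which registered instances are currently healthy.
>>> for state in lb.get_instance_health():
...     print state.instance_id, state.state
i-4f8cf126 InService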
Creating a Load Balancer
------------------------

To create a load balancer you need the following:

#. The specific **ports and protocols** you want to load balancer over, and what port you want to connect to all instances.
#. A **health check** - the ELB concept of a *heart beat* or *ping*. ELB will use this health check to see whether your instances are up or down. If they go down, the load balancer will no longer send requests to them.
#. A **list of Availability Zones** you'd like to create your load balancer over.

Ports and Protocols
^^^^^^^^^^^^^^^^^^^

An incoming connection to your load balancer will come on one or more ports - for example 80 (HTTP) and 443 (HTTPS). Each can be using a protocol - currently, the supported protocols are TCP and HTTP. We also need to tell the load balancer which port to route connects *to* on each instance. For example, to create a load balancer for a website that accepts connections on 80 and 443, and that routes connections to port 8080 and 8443 on each instance, you would specify that the load balancer ports and protocols are:

* 80, 8080, HTTP
* 443, 8443, TCP

This says that the load balancer will listen on two ports - 80 and 443. Connections on 80 will use an HTTP load balancer to forward connections to port 8080 on instances. Likewise, the load balancer will listen on 443 to forward connections to 8443 on each instance using the TCP balancer. We need to use TCP for the HTTPS port because it is encrypted at the application layer. Of course, we could specify the load balancer use TCP for port 80, however specifying HTTP allows you to let ELB handle some work for you - for example HTTP header parsing.

.. _elb-configuring-a-health-check:

Configuring a Health Check
^^^^^^^^^^^^^^^^^^^^^^^^^^

A health check allows ELB to determine which instances are alive and able to respond to requests. A health check is essentially a tuple consisting of:

* *Target*: What to check on an instance. For a TCP check this is comprised of::

    TCP:PORT_TO_CHECK

  Which attempts to open a connection on PORT_TO_CHECK. If the connection opens successfully, that specific instance is deemed healthy, otherwise it is marked temporarily as unhealthy.

  For HTTP, the situation is slightly different::

    HTTP:PORT_TO_CHECK/RESOURCE

  This means that the health check will connect to the resource /RESOURCE on PORT_TO_CHECK. If an HTTP 200 status is returned the instance is deemed healthy.

* *Interval*: How often the check is made. This is given in seconds and defaults to 30. The valid range of intervals goes from 5 seconds to 600 seconds.
* *Timeout*: The number of seconds the load balancer will wait for a check to return a result.
* *Unhealthy threshold*: The number of consecutive failed checks to deem the instance as being dead. The default is 5, and the range of valid values lies from 2 to 10.

The following example creates a health check called *instance_health* that checks instances every 20 seconds on port 8080 over HTTP at the resource /health, expecting an HTTP 200 response:

>>> from boto.ec2.elb import HealthCheck
>>> hc = HealthCheck(
        interval=20,
        healthy_threshold=3,
        unhealthy_threshold=5,
        target='HTTP:8080/health'
    )

Putting It All Together
^^^^^^^^^^^^^^^^^^^^^^^

Finally, let's create a load balancer in the US region that listens on ports 80 and 443 and distributes requests to instances on 8080 and 8443 over HTTP and TCP. We want the load balancer to span the availability zones *us-east-1a* and *us-east-1b*:

>>> zones = ['us-east-1a', 'us-east-1b']
>>> ports = [(80, 8080, 'http'), (443, 8443, 'tcp')]
>>> lb = conn.create_load_balancer('my-lb', zones, ports)
>>> # This is from the previous section.
>>> lb.configure_health_check(hc)

The load balancer has been created. To see where you can actually connect to it, do:

>>> print lb.dns_name
my_elb-123456789.us-east-1.elb.amazonaws.com

You can then CNAME map a better name, e.g. www.MYWEBSITE.com, to the above address.

Adding Instances To a Load Balancer
-----------------------------------

Now that the load balancer has been created, there are two ways to add instances to it:

#. Manually, adding each instance in turn.
#. Mapping an autoscale group to the load balancer. Please see the :doc:`Autoscale tutorial ` for information on how to do this.

Manually Adding and Removing Instances
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Assuming you have a list of instance ids, you can add them to the load balancer:

>>> instance_ids = ['i-4f8cf126', 'i-0bb7ca62']
>>> lb.register_instances(instance_ids)

Keep in mind that these instances should be in Security Groups that match the internal ports of the load balancer you just created (for this example, they should allow incoming connections on 8080 and 8443).

To remove instances:

>>> lb.deregister_instances(instance_ids)

Modifying Availability Zones for a Load Balancer
------------------------------------------------

If you wanted to disable one or more zones from an existing load balancer:

>>> lb.disable_zones(['us-east-1a'])

You can then terminate each instance in the disabled zone and then deregister them from your load balancer.

To enable zones:

>>> lb.enable_zones(['us-east-1c'])

Deleting a Load Balancer
------------------------

>>> lb.delete()

boto-2.20.1/docs/source/emr_tut.rst000066400000000000000000000127731225267101000172050ustar00rootroot00000000000000.. _emr_tut:

=====================================================
An Introduction to boto's Elastic Mapreduce interface
=====================================================

This tutorial focuses on the boto interface to Elastic Mapreduce from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto.
Creating a Connection
---------------------

The first step in accessing Elastic Mapreduce is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.emr.connection import EmrConnection
>>> conn = EmrConnection('', '')

At this point the variable conn will point to an EmrConnection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

* ``AWS_ACCESS_KEY_ID`` - Your AWS Access Key ID
* ``AWS_SECRET_ACCESS_KEY`` - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = EmrConnection()

There is also a shortcut function in boto that makes it easy to create EMR connections:

>>> import boto.emr
>>> conn = boto.emr.connect_to_region('us-west-2')

In either case, conn points to an EmrConnection object which we will use throughout the remainder of this tutorial.

Creating Streaming JobFlow Steps
--------------------------------

Upon creating a connection to Elastic Mapreduce you will next want to create one or more jobflow steps. There are two types of steps, streaming and custom jar, both of which have a class in the boto Elastic Mapreduce implementation.

Creating a streaming step that runs the AWS wordcount example, itself written in Python, can be accomplished by:

>>> from boto.emr.step import StreamingStep
>>> step = StreamingStep(name='My wordcount example',
...                      mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
...                      reducer='aggregate',
...                      input='s3n://elasticmapreduce/samples/wordcount/input',
...                      output='s3n:///output/wordcount_output')

where the bucket in the ``output`` URI is one you have created in S3. Note that this statement does not run the step, that is accomplished later when we create a jobflow.

Additional arguments of note to the streaming jobflow step are cache_files, cache_archives and step_args. The options cache_files and cache_archives enable you to use Hadoop's distributed cache to share files amongst the instances that run the step. The argument step_args allows one to pass additional arguments to Hadoop streaming, for example modifications to the Hadoop job configuration.

Creating Custom Jar Job Flow Steps
----------------------------------

The second type of jobflow step executes tasks written with a custom jar. Creating a custom jar step for the AWS CloudBurst example can be accomplished by:

>>> from boto.emr.step import JarStep
>>> step = JarStep(name='Cloudburst example',
...                jar='s3n://elasticmapreduce/samples/cloudburst/cloudburst.jar',
...                step_args=['s3n://elasticmapreduce/samples/cloudburst/input/s_suis.br',
...                           's3n://elasticmapreduce/samples/cloudburst/input/100k.br',
...                           's3n:///output/cloudfront_output',
...                           36, 3, 0, 1, 240, 48, 24, 24, 128, 16])

Note that this statement does not actually run the step, that is accomplished later when we create a jobflow. Also note that this JarStep does not include a main_class argument since the jar MANIFEST.MF has a Main-Class entry.

Creating JobFlows
-----------------

Once you have created one or more jobflow steps, you will next want to create and run a jobflow. Creating a jobflow that executes either of the steps we created above can be accomplished by:

>>> import boto.emr
>>> conn = boto.emr.connect_to_region('us-west-2')
>>> jobid = conn.run_jobflow(name='My jobflow',
...                          log_uri='s3:///jobflow_logs',
...                          steps=[step])

The method will not block for the completion of the jobflow, but will immediately return.
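Because of this, scripts that need to wait for a jobflow usually poll its state (retrieved via ``describe_jobflow``, shown below) until it reaches a terminal value. A minimal sketch, assuming ``conn`` and ``jobid`` from the example above:

>>> import time
>>> TERMINAL_STATES = ('COMPLETED', 'FAILED', 'TERMINATED')
>>> while conn.describe_jobflow(jobid).state not in TERMINAL_STATES:
...     # Still starting/running; check again in a little while.
...     time.sleep(30)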
The status of the jobflow can be determined by: >>> status = conn.describe_jobflow(jobid) >>> status.state u'STARTING' One can then use this state to block for a jobflow to complete. Valid jobflow states currently defined in the AWS API are COMPLETED, FAILED, TERMINATED, RUNNING, SHUTTING_DOWN, STARTING and WAITING. In some cases you may not have built all of the steps prior to running the jobflow. In these cases additional steps can be added to a jobflow by running: >>> conn.add_jobflow_steps(jobid, [second_step]) If you wish to add additional steps to a running jobflow you may want to set the keep_alive parameter to True in run_jobflow so that the jobflow does not automatically terminate when the first step completes. The run_jobflow method has a number of important parameters that are worth investigating. They include parameters to change the number and type of EC2 instances on which the jobflow is executed, set a SSH key for manual debugging and enable AWS console debugging. Terminating JobFlows -------------------- By default when all the steps of a jobflow have finished or failed the jobflow terminates. However, if you set the keep_alive parameter to True or just want to halt the execution of a jobflow early you can terminate a jobflow by: >>> import boto.emr >>> conn = boto.emr.connect_to_region('us-west-2') >>> conn.terminate_jobflow('') boto-2.20.1/docs/source/extensions/000077500000000000000000000000001225267101000171625ustar00rootroot00000000000000boto-2.20.1/docs/source/extensions/githublinks/000077500000000000000000000000001225267101000215055ustar00rootroot00000000000000boto-2.20.1/docs/source/extensions/githublinks/__init__.py000066400000000000000000000033211225267101000236150ustar00rootroot00000000000000"""Add github roles to sphinx docs. Based entirely on Doug Hellmann's bitbucket version, but adapted for Github. (https://bitbucket.org/dhellmann/sphinxcontrib-bitbucket/) """ from urlparse import urljoin from docutils import nodes, utils from docutils.parsers.rst.roles import set_classes def make_node(rawtext, app, type_, slug, options): base_url = app.config.github_project_url if base_url is None: raise ValueError( "Configuration value for 'github_project_url' is not set.") relative = '%s/%s' % (type_, slug) full_ref = urljoin(base_url, relative) set_classes(options) if type_ == 'issues': type_ = 'issue' node = nodes.reference(rawtext, type_ + ' ' + utils.unescape(slug), refuri=full_ref, **options) return node def github_sha(name, rawtext, text, lineno, inliner, options={}, content=[]): app = inliner.document.settings.env.app node = make_node(rawtext, app, 'commit', text, options) return [node], [] def github_issue(name, rawtext, text, lineno, inliner, options={}, content=[]): try: issue = int(text) except ValueError: msg = inliner.reporter.error( "Invalid Github Issue '%s', must be an integer" % text, line=lineno) problem = inliner.problematic(rawtext, rawtext, msg) return [problem], [msg] app = inliner.document.settings.env.app node = make_node(rawtext, app, 'issues', str(issue), options) return [node], [] def setup(app): app.info('Adding github link roles') app.add_role('sha', github_sha) app.add_role('issue', github_issue) app.add_config_value('github_project_url', None, 'env') boto-2.20.1/docs/source/getting_started.rst000066400000000000000000000142461225267101000207130ustar00rootroot00000000000000.. 
_getting-started:

=========================
Getting Started with Boto
=========================

This tutorial will walk you through installing and configuring ``boto``, as well as how to use it to make API calls.

This tutorial assumes you are familiar with Python & that you have registered for an `Amazon Web Services`_ account. You'll need to retrieve your ``Access Key ID`` and ``Secret Access Key`` from the web-based console.

.. _`Amazon Web Services`: https://aws.amazon.com/

Installing Boto
---------------

You can use ``pip`` to install the latest released version of ``boto``::

    pip install boto

If you want to install ``boto`` from source::

    git clone git://github.com/boto/boto.git
    cd boto
    python setup.py install

.. note::

    For most services, this is enough to get going. However, to support everything Boto ships with, you should additionally run ``pip install -r requirements.txt``.

    This installs all additional, non-stdlib modules, enabling use of things like ``boto.cloudsearch``, ``boto.manage`` & ``boto.mashups``, as well as covering everything needed for the test suite.

Using Virtual Environments
--------------------------

Another common way to install ``boto`` is to use a ``virtualenv``, which provides isolated environments. First, install the ``virtualenv`` Python package::

    pip install virtualenv

Next, create a virtual environment by using the ``virtualenv`` command and specifying where you want the virtualenv to be created (you can specify any directory you like, though this example allows for compatibility with ``virtualenvwrapper``)::

    mkdir ~/.virtualenvs
    virtualenv ~/.virtualenvs/boto

You can now activate the virtual environment::

    source ~/.virtualenvs/boto/bin/activate

Now, any usage of ``python`` or ``pip`` (within the current shell) will default to the new, isolated version within your virtualenv.

You can now install ``boto`` into this virtual environment::

    pip install boto

When you are done using ``boto``, you can deactivate your virtual environment::

    deactivate

If you are creating a lot of virtual environments, `virtualenvwrapper`_ is an excellent tool that lets you easily manage your virtual environments.

.. _`virtualenvwrapper`: http://virtualenvwrapper.readthedocs.org/en/latest/

Configuring Boto Credentials
----------------------------

You have a few options for configuring ``boto`` (see :doc:`boto_config_tut`). For this tutorial, we'll be using a configuration file. First, create a ``~/.boto`` file with these contents::

    [Credentials]
    aws_access_key_id = YOURACCESSKEY
    aws_secret_access_key = YOURSECRETKEY

``boto`` supports a number of configuration values. For more information, see :doc:`boto_config_tut`. The above file, however, is all we need for now. You're now ready to use ``boto``.

Making Connections
------------------

``boto`` provides a number of convenience functions to simplify connecting to a service. For example, to work with S3, you can run::

    >>> import boto
    >>> s3 = boto.connect_s3()

If you want to connect to a different region, you can import the service module and use the ``connect_to_region`` functions.
For example, to create an EC2 client in the 'us-west-2' region, you'd run the following::

    >>> import boto.ec2
    >>> ec2 = boto.ec2.connect_to_region('us-west-2')

Troubleshooting Connections
---------------------------

When calling the various ``connect_*`` functions, you might run into an error like this::

    >>> import boto
    >>> s3 = boto.connect_s3()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "boto/__init__.py", line 121, in connect_s3
        return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
      File "boto/s3/connection.py", line 171, in __init__
        validate_certs=validate_certs)
      File "boto/connection.py", line 548, in __init__
        host, config, self.provider, self._required_auth_capability())
      File "boto/auth.py", line 668, in get_auth_handler
        'Check your credentials' % (len(names), str(names)))
    boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials

This is because ``boto`` cannot find credentials to use. Verify that you have created a ``~/.boto`` file as shown above. You can also turn on debug logging to verify where your credentials are coming from::

    >>> import boto
    >>> boto.set_stream_logger('boto')
    >>> s3 = boto.connect_s3()
    2012-12-10 17:15:03,799 boto [DEBUG]:Using access key found in config file.
    2012-12-10 17:15:03,799 boto [DEBUG]:Using secret key found in config file.

Interacting with AWS Services
-----------------------------

Once you have a client for the specific service you want, there are methods on that object that will invoke API operations for that service. The following code demonstrates how to create a bucket and put an object in that bucket::

    >>> import boto
    >>> import time
    >>> s3 = boto.connect_s3()

    # Create a new bucket. Buckets must have a globally unique name (not just
    # unique to your account).
    >>> bucket = s3.create_bucket('boto-demo-%s' % int(time.time()))

    # Create a new key/value pair.
    >>> key = bucket.new_key('mykey')
    >>> key.set_contents_from_string("Hello World!")

    # Sleep to ensure the data is eventually there.
    >>> time.sleep(2)

    # Retrieve the contents of ``mykey``.
    >>> print key.get_contents_as_string()
    'Hello World!'

    # Delete the key.
    >>> key.delete()

    # Delete the bucket.
    >>> bucket.delete()

Each service supports a different set of commands. You'll want to refer to the other guides & API references in this documentation, as well as referring to the `official AWS API`_ documentation.

.. _`official AWS API`: https://aws.amazon.com/documentation/

Next Steps
----------

For many of the services that ``boto`` supports, there are tutorials as well as detailed API documentation. If you are interested in a specific service, the tutorial for the service is a good starting point. For instance, if you'd like more information on S3, check out the :ref:`S3 Tutorial ` and the :doc:`S3 API reference `.

boto-2.20.1/docs/source/index.rst000066400000000000000000000126611225267101000166320ustar00rootroot00000000000000.. _index:

===============================================
boto: A Python interface to Amazon Web Services
===============================================

An integrated interface to current and future infrastructural services offered by `Amazon Web Services`_.

.. _Amazon Web Services: http://aws.amazon.com/

Getting Started
---------------

If you've never used ``boto`` before, you should read the :doc:`Getting Started with Boto ` guide to get familiar with ``boto`` & its usage.
Currently Supported Services ---------------------------- * **Compute** * :doc:`Elastic Compute Cloud (EC2) ` -- (:doc:`API Reference `) * :doc:`Elastic MapReduce (EMR) ` -- (:doc:`API Reference `) * :doc:`Auto Scaling ` -- (:doc:`API Reference `) * **Content Delivery** * :doc:`CloudFront ` -- (:doc:`API Reference `) * **Database** * :doc:`DynamoDB2 ` -- (:doc:`API Reference `) -- (:doc:`Migration Guide from v1 `) * :doc:`DynamoDB ` -- (:doc:`API Reference `) * :doc:`Relational Data Services (RDS) ` -- (:doc:`API Reference `) * ElastiCache -- (:doc:`API Reference `) * Redshift -- (:doc:`API Reference `) * :doc:`SimpleDB ` -- (:doc:`API Reference `) * **Deployment and Management** * CloudFormation -- (:doc:`API Reference `) * Elastic Beanstalk -- (:doc:`API Reference `) * Data Pipeline -- (:doc:`API Reference `) * Opsworks -- (:doc:`API Reference `) * CloudTrail -- (:doc:`API Reference `) * **Identity & Access** * Identity and Access Management (IAM) -- (:doc:`API Reference `) * Security Token Service (STS) -- (:doc:`API Reference `) * **Application Services** * :doc:`Cloudsearch ` -- (:doc:`API Reference `) * Elastic Transcoder -- (:doc:`API Reference `) * :doc:`Simple Workflow Service (SWF) ` -- (:doc:`API Reference `) * :doc:`Simple Queue Service (SQS) ` -- (:doc:`API Reference `) * Simple Notification Service (SNS) -- (:doc:`API Reference `) * :doc:`Simple Email Service (SES) ` -- (:doc:`API Reference `) * **Monitoring** * :doc:`CloudWatch ` -- (:doc:`API Reference `) * **Networking** * Route 53 -- (:doc:`API Reference `) * :doc:`Virtual Private Cloud (VPC) ` -- (:doc:`API Reference `) * :doc:`Elastic Load Balancing (ELB) ` -- (:doc:`API Reference `) * **Payments & Billing** * Flexible Payments Service (FPS) -- (:doc:`API Reference `) * **Storage** * :doc:`Simple Storage Service (S3) ` -- (:doc:`API Reference `) * Amazon Glacier -- (:doc:`API Reference `) * Google Cloud Storage -- (:doc:`API Reference `) * **Workforce** * Mechanical Turk -- (:doc:`API Reference `) * **Other** * Marketplace Web Services -- (:doc:`API Reference `) * :doc:`Support ` -- (:doc:`API Reference `) Additional Resources -------------------- * :doc:`Applications Built On Boto ` * :doc:`Command Line Utilities ` * :doc:`Boto Config Tutorial ` * :doc:`Contributing to Boto ` * `Boto Source Repository`_ * `Boto Issue Tracker`_ * `Boto Twitter`_ * `Follow Mitch on Twitter`_ * Join our `IRC channel`_ (#boto on FreeNode). .. _Boto Issue Tracker: https://github.com/boto/boto/issues .. _Boto Source Repository: https://github.com/boto/boto .. _Boto Twitter: http://twitter.com/pythonboto .. _IRC channel: http://webchat.freenode.net/?channels=boto .. _Follow Mitch on Twitter: http://twitter.com/garnaat Release Notes ------------- .. 
toctree::
   :titlesonly:

   releasenotes/v2.20.1
   releasenotes/v2.20.0
   releasenotes/v2.19.0
   releasenotes/v2.18.0
   releasenotes/v2.17.0
   releasenotes/v2.16.0
   releasenotes/v2.15.0
   releasenotes/v2.14.0
   releasenotes/v2.13.3
   releasenotes/v2.13.2
   releasenotes/v2.13.0
   releasenotes/v2.12.0
   releasenotes/v2.11.0
   releasenotes/v2.10.0
   releasenotes/v2.9.9
   releasenotes/v2.9.8
   releasenotes/v2.9.7
   releasenotes/v2.9.6
   releasenotes/v2.9.5
   releasenotes/v2.9.4
   releasenotes/v2.9.3
   releasenotes/v2.9.2
   releasenotes/v2.9.1
   releasenotes/v2.9.0
   releasenotes/v2.8.0
   releasenotes/v2.7.0
   releasenotes/v2.6.0
   releasenotes/v2.5.2
   releasenotes/v2.5.1
   releasenotes/v2.5.0
   releasenotes/v2.4.0
   releasenotes/v2.3.0
   releasenotes/v2.2.2
   releasenotes/v2.2.1
   releasenotes/v2.2.0
   releasenotes/v2.1.1
   releasenotes/v2.1.0
   releasenotes/v2.0.0
   releasenotes/v2.0b1

.. toctree::
   :hidden:
   :glob:

   getting_started
   ec2_tut
   security_groups
   emr_tut
   autoscale_tut
   cloudfront_tut
   simpledb_tut
   dynamodb_tut
   rds_tut
   sqs_tut
   ses_tut
   swf_tut
   cloudsearch_tut
   cloudwatch_tut
   vpc_tut
   elb_tut
   s3_tut
   boto_config_tut
   documentation
   contributing
   commandline
   support_tut
   dynamodb2_tut
   migrations/dynamodb_v1_to_v2
   apps_built_on_boto
   ref/*
   releasenotes/*

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

boto-2.20.1/docs/source/migrations/000077500000000000000000000000001225267101000191375ustar00rootroot00000000000000boto-2.20.1/docs/source/migrations/dynamodb_v1_to_v2.rst000066400000000000000000000224721225267101000232140ustar00rootroot00000000000000.. _dynamodb_v1_to_v2:

=========================================
Migrating from DynamoDB v1 to DynamoDB v2
=========================================

For the v2 release of AWS' DynamoDB_, the high-level API for interacting via ``boto`` was rewritten. Since there were several new features added in v2, people using the v1 API may wish to transition their code to the new API. This guide covers the high-level APIs.

.. _DynamoDB: http://aws.amazon.com/dynamodb/

Creating New Tables
===================

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region()
    >>> message_table_schema = conn.create_schema(
    ...     hash_key_name='forum_name',
    ...     hash_key_proto_value=str,
    ...     range_key_name='subject',
    ...     range_key_proto_value=str
    ... )
    >>> table = conn.create_table(
    ...     name='messages',
    ...     schema=message_table_schema,
    ...     read_units=10,
    ...     write_units=10
    ... )

DynamoDB v2::

    >>> from boto.dynamodb2.fields import HashKey
    >>> from boto.dynamodb2.fields import RangeKey
    >>> from boto.dynamodb2.table import Table

    >>> table = Table.create('messages', schema=[
    ...     HashKey('forum_name'),
    ...     RangeKey('subject'),
    ... ], throughput={
    ...     'read': 10,
    ...     'write': 10,
    ... })

Using an Existing Table
=======================

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region()
    # With API calls.
    >>> table = conn.get_table('messages')

    # Without API calls.
    >>> message_table_schema = conn.create_schema(
    ...     hash_key_name='forum_name',
    ...     hash_key_proto_value=str,
    ...     range_key_name='subject',
    ...     range_key_proto_value=str
    ... )
    >>> table = conn.table_from_schema(
    ...     name='messages',
    ...     schema=message_table_schema)

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table
    # With API calls.
    >>> table = Table('messages')

    # Without API calls.
    >>> from boto.dynamodb2.fields import HashKey, RangeKey
    >>> from boto.dynamodb2.table import Table
    >>> table = Table('messages', schema=[
    ...     HashKey('forum_name'),
    ...     RangeKey('subject'),
    ...
Updating Throughput
===================

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> conn.update_throughput(table, read_units=5, write_units=15)

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    >>> table.update(throughput={
    ...     'read': 5,
    ...     'write': 15,
    ... })

Deleting a Table
================

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> conn.delete_table(table)

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    >>> table.delete()

Creating an Item
================

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> item_data = {
    ...     'Body': 'http://url_to_lolcat.gif',
    ...     'SentBy': 'User A',
    ...     'ReceivedTime': '12/9/2011 11:36:03 PM',
    ... }
    >>> item = table.new_item(
    ...     # Our hash key is 'forum'
    ...     hash_key='LOLCat Forum',
    ...     # Our range key is 'subject'
    ...     range_key='Check this out!',
    ...     # This has the rest of the attributes.
    ...     attrs=item_data
    ... )

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    >>> item = table.put_item(data={
    ...     'forum_name': 'LOLCat Forum',
    ...     'subject': 'Check this out!',
    ...     'Body': 'http://url_to_lolcat.gif',
    ...     'SentBy': 'User A',
    ...     'ReceivedTime': '12/9/2011 11:36:03 PM',
    ... })

Getting an Existing Item
========================

DynamoDB v1::

    >>> table = conn.get_table('messages')
    >>> item = table.get_item(
    ...     hash_key='LOLCat Forum',
    ...     range_key='Check this out!'
    ... )

DynamoDB v2::

    >>> table = Table('messages')
    >>> item = table.get_item(
    ...     forum_name='LOLCat Forum',
    ...     subject='Check this out!'
    ... )

Updating an Item
================

DynamoDB v1::

    >>> item['a_new_key'] = 'testing'
    >>> del item['a_new_key']
    >>> item.put()

DynamoDB v2::

    >>> item['a_new_key'] = 'testing'
    >>> del item['a_new_key']

    # Conditional save, only if data hasn't changed.
    >>> item.save()

    # Forced full overwrite.
    >>> item.save(overwrite=True)

    # Partial update (only changed fields).
    >>> item.partial_save()

Deleting an Item
================

DynamoDB v1::

    >>> item.delete()

DynamoDB v2::

    >>> item.delete()

Querying
========

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> from boto.dynamodb.condition import BEGINS_WITH
    >>> items = table.query('Amazon DynamoDB',
    ...     range_key_condition=BEGINS_WITH('DynamoDB'),
    ...     request_limit=1, max_results=1)
    >>> for item in items:
    ...     print item['Body']

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    >>> items = table.query(
    ...     forum_name__eq='Amazon DynamoDB',
    ...     subject__beginswith='DynamoDB',
    ...     limit=1
    ... )
    >>> for item in items:
    ...     print item['Body']

Scans
=====

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> from boto.dynamodb.condition import GT

    # All items.
    >>> items = table.scan()

    # With a filter.
    >>> items = table.scan(scan_filter={'Replies': GT(0)})

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')

    # All items.
    >>> items = table.scan()

    # With a filter.
    >>> items = table.scan(replies__gt=0)
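In the v2 API, several of the double-underscore conditions can be combined in
a single call; each keyword becomes part of the same filter and all of them
must match. A short sketch reusing the fields from the examples above (the
field names are only illustrative)::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    # Both conditions are sent as part of the same scan filter.
    >>> items = table.scan(
    ...     replies__gt=0,
    ...     forum_name__eq='LOLCat Forum'
    ... )
    >>> for item in items:
    ...     print item['Body']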
Batch Gets
==========

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> from boto.dynamodb.batch import BatchList
    >>> the_batch = BatchList(conn)
    >>> the_batch.add_batch(table, keys=[
    ...     ('LOLCat Forum', 'Check this out!'),
    ...     ('LOLCat Forum', 'I can haz docs?'),
    ...     ('LOLCat Forum', 'Maru'),
    ... ])
    >>> results = conn.batch_get_item(the_batch)

    # (Largely) Raw dictionaries back from DynamoDB.
    >>> for item_dict in results['Responses'][table.name]['Items']:
    ...     print item_dict['Body']

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    >>> results = table.batch_get(keys=[
    ...     {'forum_name': 'LOLCat Forum', 'subject': 'Check this out!'},
    ...     {'forum_name': 'LOLCat Forum', 'subject': 'I can haz docs?'},
    ...     {'forum_name': 'LOLCat Forum', 'subject': 'Maru'},
    ... ])

    # Lazy requests across pages, if paginated.
    >>> for res in results:
    ...     # You get back actual ``Item`` instances.
    ...     print res['Body']

Batch Writes
============

DynamoDB v1::

    >>> import boto.dynamodb
    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
    >>> table = conn.get_table('messages')
    >>> from boto.dynamodb.batch import BatchWriteList
    >>> from boto.dynamodb.item import Item

    # You must manually manage this so that your total ``puts/deletes`` don't
    # exceed 25.
    >>> the_batch = BatchWriteList(conn)
    >>> the_batch.add_batch(table, puts=[
    ...     Item(table, 'Corgi Fanciers', 'Sploots!', {
    ...         'Body': 'Post your favorite corgi-on-the-floor shots!',
    ...         'SentBy': 'User B',
    ...         'ReceivedTime': '2013/05/02 10:56:45 AM',
    ...     }),
    ...     Item(table, 'Corgi Fanciers', 'Maximum FRAPS', {
    ...         'Body': 'http://internetvideosite/watch?v=1247869',
    ...         'SentBy': 'User C',
    ...         'ReceivedTime': '2013/05/01 09:15:25 PM',
    ...     }),
    ... ], deletes=[
    ...     ('LOLCat Forum', 'Off-topic post'),
    ...     ('LOLCat Forum', 'They be stealin mah bukket!'),
    ... ])
    >>> conn.batch_write_item(the_batch)

DynamoDB v2::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')

    # Uses a context manager, which also automatically handles batch sizes.
    >>> with table.batch_write() as batch:
    ...     batch.delete_item(
    ...         forum_name='LOLCat Forum',
    ...         subject='Off-topic post'
    ...     )
    ...     batch.put_item(data={
    ...         'forum_name': 'Corgi Fanciers',
    ...         'subject': 'Sploots!',
    ...         'Body': 'Post your favorite corgi-on-the-floor shots!',
    ...         'SentBy': 'User B',
    ...         'ReceivedTime': '2013/05/02 10:56:45 AM',
    ...     })
    ...     batch.put_item(data={
    ...         'forum_name': 'Corgi Fanciers',
    ...         'subject': 'Maximum FRAPS',
    ...         'Body': 'http://internetvideosite/watch?v=1247869',
    ...         'SentBy': 'User C',
    ...         'ReceivedTime': '2013/05/01 09:15:25 PM',
    ...     })
    ...     batch.delete_item(
    ...         forum_name='LOLCat Forum',
    ...         subject='They be stealin mah bukket!'
    ...     )
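Because the v2 context manager buffers operations and handles batch sizes for
you, you can feed it an arbitrarily large amount of work without counting
requests yourself. A minimal sketch, assuming a hypothetical ``all_messages``
iterable of dicts shaped like the ``data=`` examples above::

    >>> from boto.dynamodb2.table import Table

    >>> table = Table('messages')
    >>> with table.batch_write() as batch:
    ...     # ``all_messages`` is a hypothetical iterable of message dicts.
    ...     for message in all_messages:
    ...         # Puts are buffered; a ``BatchWriteItem`` request is sent
    ...         # automatically once 25 operations are queued, and any
    ...         # remainder is flushed when the ``with`` block exits.
    ...         batch.put_item(data=message)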
"us-west-2", ... aws_access_key_id=', ... aws_secret_access_key='') At this point the variable conn will point to an RDSConnection object in the US-WEST-2 region. Bear in mind that just as any other AWS service, RDS is region-specific. In this example, the AWS access key and AWS secret key are passed in to the method explicitely. Alternatively, you can set the environment variables: * ``AWS_ACCESS_KEY_ID`` - Your AWS Access Key ID * ``AWS_SECRET_ACCESS_KEY`` - Your AWS Secret Access Key and then simply call:: >>> import boto.rds >>> conn = boto.rds.connect_to_region("us-west-2") In either case, conn will point to an RDSConnection object which we will use throughout the remainder of this tutorial. Starting an RDS Instance ------------------------ Creating a DB instance is easy. You can do so as follows:: >>> db = conn.create_dbinstance("db-master-1", 10, 'db.m1.small', 'root', 'hunter2') This example would create a DB identified as ``db-master-1`` with 10GB of storage. This instance would be running on ``db.m1.small`` type, with the login name being ``root``, and the password ``hunter2``. To check on the status of your RDS instance, you will have to query the RDS connection again:: >>> instances = conn.get_all_dbinstances("db-master-1") >>> instances [DBInstance:db-master-1] >>> db = instances[0] >>> db.status u'available' >>> db.endpoint (u'db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com', 3306) Creating a Security Group ------------------------- Before you can actually connect to this RDS service, you must first create a security group. You can add a CIDR range or an :py:class:`EC2 security group ` to your :py:class:`DB security group ` :: >>> sg = conn.create_dbsecurity_group('web_servers', 'Web front-ends') >>> sg.authorize(cidr_ip='10.3.2.45/32') True You can then associate this security group with your RDS instance:: >>> db.modify(security_groups=[sg]) Connecting to your New Database ------------------------------- Once you have reached this step, you can connect to your RDS instance as you would with any other MySQL instance:: >>> db.endpoint (u'db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com', 3306) % mysql -h db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com -u root -phunter2 mysql> Making a backup --------------- You can also create snapshots of your database very easily:: >>> db.snapshot('db-master-1-2013-02-05') DBSnapshot:db-master-1-2013-02-05 Once this snapshot is complete, you can create a new database instance from it:: >>> db2 = conn.restore_dbinstance_from_dbsnapshot( ... 'db-master-1-2013-02-05', ... 'db-restored-1', ... 'db.m1.small', ... 'us-west-2') boto-2.20.1/docs/source/ref/000077500000000000000000000000001225267101000155375ustar00rootroot00000000000000boto-2.20.1/docs/source/ref/autoscale.rst000066400000000000000000000022031225267101000202460ustar00rootroot00000000000000.. ref-autoscale ====================== Auto Scaling Reference ====================== boto.ec2.autoscale ------------------ .. automodule:: boto.ec2.autoscale :members: :undoc-members: boto.ec2.autoscale.activity --------------------------- .. automodule:: boto.ec2.autoscale.activity :members: :undoc-members: boto.ec2.autoscale.group ------------------------ .. automodule:: boto.ec2.autoscale.group :members: :undoc-members: boto.ec2.autoscale.instance --------------------------- .. automodule:: boto.ec2.autoscale.instance :members: :undoc-members: boto.ec2.autoscale.launchconfig ------------------------------- .. 
boto.ec2.autoscale
------------------

.. automodule:: boto.ec2.autoscale
   :members:
   :undoc-members:

boto.ec2.autoscale.activity
---------------------------

.. automodule:: boto.ec2.autoscale.activity
   :members:
   :undoc-members:

boto.ec2.autoscale.group
------------------------

.. automodule:: boto.ec2.autoscale.group
   :members:
   :undoc-members:

boto.ec2.autoscale.instance
---------------------------

.. automodule:: boto.ec2.autoscale.instance
   :members:
   :undoc-members:

boto.ec2.autoscale.launchconfig
-------------------------------

.. automodule:: boto.ec2.autoscale.launchconfig
   :members:
   :undoc-members:

boto.ec2.autoscale.policy
-------------------------

.. automodule:: boto.ec2.autoscale.policy
   :members:
   :undoc-members:

boto.ec2.autoscale.request
--------------------------

.. automodule:: boto.ec2.autoscale.request
   :members:
   :undoc-members:

boto.ec2.autoscale.scheduled
----------------------------

.. automodule:: boto.ec2.autoscale.scheduled
   :members:
   :undoc-members:

boto-2.20.1/docs/source/ref/beanstalk.rst000066400000000000000000000006251225267101000202400ustar00rootroot00000000000000
.. ref-beanstalk

=================
Elastic Beanstalk
=================

boto.beanstalk
--------------

.. automodule:: boto.beanstalk
   :members:
   :undoc-members:

boto.beanstalk.layer1
---------------------

.. automodule:: boto.beanstalk.layer1
   :members:
   :undoc-members:

boto.beanstalk.response
-----------------------

.. automodule:: boto.beanstalk.response
   :members:
   :undoc-members:

boto-2.20.1/docs/source/ref/boto.rst000066400000000000000000000010761225267101000172400ustar00rootroot00000000000000
.. _ref-boto:

====
boto
====

boto
----

.. automodule:: boto
   :members:
   :undoc-members:

boto.connection
---------------

.. automodule:: boto.connection
   :members:
   :undoc-members:

boto.exception
--------------

.. automodule:: boto.exception
   :members:
   :undoc-members:

boto.handler
------------

.. automodule:: boto.handler
   :members:
   :undoc-members:

boto.resultset
--------------

.. automodule:: boto.resultset
   :members:
   :undoc-members:

boto.utils
----------

.. automodule:: boto.utils
   :members:
   :undoc-members:

boto-2.20.1/docs/source/ref/cloudformation.rst000066400000000000000000000011161225267101000213150ustar00rootroot00000000000000
.. ref-cloudformation

==============
cloudformation
==============

boto.cloudformation
-------------------

.. automodule:: boto.cloudformation
   :members:
   :undoc-members:

boto.cloudformation.connection
------------------------------

.. automodule:: boto.cloudformation.connection
   :members:
   :undoc-members:

boto.cloudformation.stack
-------------------------

.. automodule:: boto.cloudformation.stack
   :members:
   :undoc-members:

boto.cloudformation.template
----------------------------

.. automodule:: boto.cloudformation.template
   :members:
   :undoc-members:

boto-2.20.1/docs/source/ref/cloudfront.rst000066400000000000000000000022071225267101000204510ustar00rootroot00000000000000
.. ref-cloudfront

==========
CloudFront
==========

boto.cloudfront
---------------

.. automodule:: boto.cloudfront
   :members:
   :undoc-members:

boto.cloudfront.distribution
----------------------------

.. automodule:: boto.cloudfront.distribution
   :members:
   :undoc-members:

boto.cloudfront.origin
----------------------

.. automodule:: boto.cloudfront.origin
   :members:
   :undoc-members:

boto.cloudfront.identity
------------------------

.. automodule:: boto.cloudfront.identity
   :members:
   :undoc-members:

boto.cloudfront.signers
-----------------------

.. automodule:: boto.cloudfront.signers
   :members:
   :undoc-members:

boto.cloudfront.invalidation
----------------------------

.. automodule:: boto.cloudfront.invalidation
   :members:
   :undoc-members:

boto.cloudfront.object
----------------------

.. automodule:: boto.cloudfront.object
   :members:
   :undoc-members:

boto.cloudfront.logging
-----------------------

.. automodule:: boto.cloudfront.logging
   :members:
   :undoc-members:

boto.cloudfront.exception
-------------------------

..
automodule:: boto.cloudfront.exception :members: :undoc-members: boto-2.20.1/docs/source/ref/cloudsearch.rst000066400000000000000000000016341225267101000205710ustar00rootroot00000000000000.. ref-cloudsearch =========== Cloudsearch =========== boto.cloudsearch ---------------- .. automodule:: boto.cloudsearch :members: :undoc-members: boto.cloudsearch.domain ----------------------- .. automodule:: boto.cloudsearch.domain :members: :undoc-members: boto.cloudsearch.layer1 ----------------------- .. automodule:: boto.cloudsearch.layer1 :members: :undoc-members: boto.cloudsearch.layer2 ----------------------- .. automodule:: boto.cloudsearch.layer2 :members: :undoc-members: boto.cloudsearch.optionstatus ----------------------------- .. automodule:: boto.cloudsearch.optionstatus :members: :undoc-members: boto.cloudsearch.search ----------------------- .. automodule:: boto.cloudsearch.search :members: :undoc-members: boto.cloudsearch.document ------------------------- .. automodule:: boto.cloudsearch.document :members: :undoc-members: boto-2.20.1/docs/source/ref/cloudtrail.rst000066400000000000000000000006221225267101000204330ustar00rootroot00000000000000.. _ref-cloudtrail: ========== CloudTrail ========== boto.cloudtrail --------------- .. automodule:: boto.cloudtrail :members: :undoc-members: boto.cloudtrail.layer1 ---------------------- .. automodule:: boto.cloudtrail.layer1 :members: :undoc-members: boto.cloudtrail.exceptions -------------------------- .. automodule:: boto.cloudtrail.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/cloudwatch.rst000066400000000000000000000011351225267101000204260ustar00rootroot00000000000000.. ref-cloudwatch ==================== CloudWatch Reference ==================== boto.ec2.cloudwatch ------------------- .. automodule:: boto.ec2.cloudwatch :members: :undoc-members: boto.ec2.cloudwatch.datapoint ----------------------------- .. automodule:: boto.ec2.cloudwatch.datapoint :members: :undoc-members: boto.ec2.cloudwatch.metric -------------------------- .. automodule:: boto.ec2.cloudwatch.metric :members: :undoc-members: boto.ec2.cloudwatch.alarm -------------------------- .. automodule:: boto.ec2.cloudwatch.alarm :members: :undoc-members: boto-2.20.1/docs/source/ref/contrib.rst000066400000000000000000000003721225267101000177330ustar00rootroot00000000000000.. ref-contrib ======= contrib ======= boto.contrib ------------ .. automodule:: boto.contrib :members: :undoc-members: boto.contrib.ymlmessage ----------------------- .. automodule:: boto.contrib.ymlmessage :members: :undoc-members:boto-2.20.1/docs/source/ref/datapipeline.rst000066400000000000000000000006571225267101000207400ustar00rootroot00000000000000.. _ref-datapipeline: ============= Data Pipeline ============= boto.datapipeline ----------------- .. automodule:: boto.datapipeline :members: :undoc-members: boto.datapipeline.layer1 ------------------------ .. automodule:: boto.datapipeline.layer1 :members: :undoc-members: boto.datapipeline.exceptions ---------------------------- .. automodule:: boto.datapipeline.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/dynamodb.rst000066400000000000000000000016201225267101000200650ustar00rootroot00000000000000.. ref-dynamodb ======== DynamoDB ======== boto.dynamodb ------------- .. automodule:: boto.dynamodb :members: :undoc-members: boto.dynamodb.layer1 -------------------- .. automodule:: boto.dynamodb.layer1 :members: :undoc-members: boto.dynamodb.layer2 -------------------- .. 
automodule:: boto.dynamodb.layer2 :members: :undoc-members: boto.dynamodb.table ------------------- .. automodule:: boto.dynamodb.table :members: :undoc-members: boto.dynamodb.schema -------------------- .. automodule:: boto.dynamodb.schema :members: :undoc-members: boto.dynamodb.item ------------------ .. automodule:: boto.dynamodb.item :members: :undoc-members: boto.dynamodb.batch ------------------- .. automodule:: boto.dynamodb.batch :members: :undoc-members: boto.dynamodb.types ------------------- .. automodule:: boto.dynamodb.types :members: :undoc-members: boto-2.20.1/docs/source/ref/dynamodb2.rst000066400000000000000000000016151225267101000201530ustar00rootroot00000000000000.. ref-dynamodb2 ========= DynamoDB2 ========= High-Level API ============== boto.dynamodb2.fields --------------------- .. automodule:: boto.dynamodb2.fields :members: :undoc-members: boto.dynamodb2.items -------------------- .. automodule:: boto.dynamodb2.items :members: :undoc-members: boto.dynamodb2.results ---------------------- .. automodule:: boto.dynamodb2.results :members: :undoc-members: boto.dynamodb2.table -------------------- .. automodule:: boto.dynamodb2.table :members: :undoc-members: Low-Level API ============= boto.dynamodb2 -------------- .. automodule:: boto.dynamodb2 :members: :undoc-members: boto.dynamodb2.layer1 --------------------- .. automodule:: boto.dynamodb2.layer1 :members: :undoc-members: boto.dynamodb2.exceptions ------------------------- .. automodule:: boto.dynamodb2.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/ec2.rst000066400000000000000000000057321225267101000167510ustar00rootroot00000000000000.. ref-ec2 === EC2 === boto.ec2 -------- .. automodule:: boto.ec2 :members: :undoc-members: boto.ec2.address ---------------- .. automodule:: boto.ec2.address :members: :undoc-members: boto.ec2.autoscale ------------------- See the :doc:`Auto Scaling Reference `. boto.ec2.blockdevicemapping --------------------------- .. automodule:: boto.ec2.blockdevicemapping :members: :undoc-members: boto.ec2.buyreservation ----------------------- .. automodule:: boto.ec2.buyreservation :members: :undoc-members: boto.ec2.cloudwatch ------------------- See the :doc:`CloudWatch Reference `. boto.ec2.connection ------------------- .. automodule:: boto.ec2.connection :members: :undoc-members: boto.ec2.ec2object ------------------ .. automodule:: boto.ec2.ec2object :members: :undoc-members: boto.ec2.elb ------------ See the :doc:`ELB Reference `. boto.ec2.group -------------- .. automodule:: boto.ec2.group :members: :undoc-members: boto.ec2.image -------------- .. automodule:: boto.ec2.image :members: :undoc-members: boto.ec2.instance ----------------- .. automodule:: boto.ec2.instance :members: :undoc-members: boto.ec2.instanceinfo --------------------- .. automodule:: boto.ec2.instanceinfo :members: :undoc-members: boto.ec2.instancestatus ----------------------- .. automodule:: boto.ec2.instancestatus :members: :undoc-members: boto.ec2.keypair ---------------- .. automodule:: boto.ec2.keypair :members: :undoc-members: boto.ec2.launchspecification ---------------------------- .. automodule:: boto.ec2.launchspecification :members: :undoc-members: boto.ec2.networkinterface ------------------------- .. automodule:: boto.ec2.networkinterface :members: :undoc-members: boto.ec2.placementgroup ----------------------- .. automodule:: boto.ec2.placementgroup :members: :undoc-members: boto.ec2.regioninfo ------------------- .. 
automodule:: boto.ec2.regioninfo :members: :undoc-members: boto.ec2.reservedinstance ------------------------- .. automodule:: boto.ec2.reservedinstance :members: :undoc-members: boto.ec2.securitygroup ---------------------- .. automodule:: boto.ec2.securitygroup :members: :undoc-members: boto.ec2.snapshot ----------------- .. automodule:: boto.ec2.snapshot :members: :undoc-members: boto.ec2.spotinstancerequest ---------------------------- .. automodule:: boto.ec2.spotinstancerequest :members: :undoc-members: boto.ec2.tag ------------ .. automodule:: boto.ec2.tag :members: :undoc-members: boto.ec2.vmtype --------------- .. automodule:: boto.ec2.vmtype :members: :undoc-members: boto.ec2.volume --------------- .. automodule:: boto.ec2.volume :members: :undoc-members: boto.ec2.volumestatus --------------------- .. automodule:: boto.ec2.volumestatus :members: :undoc-members: boto.ec2.zone ------------- .. automodule:: boto.ec2.zone :members: :undoc-members: boto-2.20.1/docs/source/ref/ecs.rst000066400000000000000000000003261225267101000170440ustar00rootroot00000000000000.. ref-ecs === ECS === boto.ecs -------- .. automodule:: boto.ecs :members: :undoc-members: boto.ecs.item ---------------------------- .. automodule:: boto.ecs.item :members: :undoc-members: boto-2.20.1/docs/source/ref/elasticache.rst000066400000000000000000000004541225267101000205410ustar00rootroot00000000000000.. ref-elasticache ================== Amazon ElastiCache ================== boto.elasticache ---------------- .. automodule:: boto.elasticache :members: :undoc-members: boto.elasticache.layer1 ----------------------- .. automodule:: boto.elasticache.layer1 :members: :undoc-members: boto-2.20.1/docs/source/ref/elastictranscoder.rst000066400000000000000000000007601225267101000220050ustar00rootroot00000000000000.. _ref-elastictranscoder: ================== Elastic Transcoder ================== boto.elastictranscoder ---------------------- .. automodule:: boto.elastictranscoder :members: :undoc-members: boto.elastictranscoder.layer1 ----------------------------- .. automodule:: boto.elastictranscoder.layer1 :members: :undoc-members: boto.elastictranscoder.exceptions --------------------------------- .. automodule:: boto.elastictranscoder.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/elb.rst000066400000000000000000000017721225267101000170420ustar00rootroot00000000000000.. ref-elb ============= ELB Reference ============= boto.ec2.elb ------------ .. automodule:: boto.ec2.elb :members: :undoc-members: boto.ec2.elb.healthcheck ------------------------ .. automodule:: boto.ec2.elb.healthcheck :members: :undoc-members: boto.ec2.elb.instancestate -------------------------- .. automodule:: boto.ec2.elb.instancestate :members: :undoc-members: boto.ec2.elb.listelement ------------------------ .. automodule:: boto.ec2.elb.listelement :members: :undoc-members: boto.ec2.elb.listener --------------------- .. automodule:: boto.ec2.elb.listener :members: :undoc-members: boto.ec2.elb.loadbalancer ------------------------- .. automodule:: boto.ec2.elb.loadbalancer :members: :undoc-members: boto.ec2.elb.policies ------------------------- .. automodule:: boto.ec2.elb.policies :members: :undoc-members: boto.ec2.elb.securitygroup ------------------------- .. automodule:: boto.ec2.elb.securitygroup :members: :undoc-members: boto-2.20.1/docs/source/ref/emr.rst000066400000000000000000000006351225267101000170600ustar00rootroot00000000000000.. _ref-emr: === EMR === boto.emr -------- .. 
automodule:: boto.emr :members: :undoc-members: boto.emr.connection ------------------- .. automodule:: boto.emr.connection :members: :undoc-members: boto.emr.step ------------- .. automodule:: boto.emr.step :members: :undoc-members: boto.emr.emrobject ------------------ .. automodule:: boto.emr.emrobject :members: :undoc-members: boto-2.20.1/docs/source/ref/file.rst000066400000000000000000000007271225267101000172160ustar00rootroot00000000000000.. ref-s3: ==== file ==== boto.file.bucket ---------------- .. automodule:: boto.file.bucket :members: :undoc-members: boto.file.simpleresultset ------------------------- .. automodule:: boto.file.simpleresultset :members: :undoc-members: boto.file.connection -------------------- .. automodule:: boto.file.connection :members: :undoc-members: boto.file.key ------------- .. automodule:: boto.file.key :members: :undoc-members: boto-2.20.1/docs/source/ref/fps.rst000066400000000000000000000003311225267101000170560ustar00rootroot00000000000000.. ref-fps === fps === boto.fps -------- .. automodule:: boto.fps :members: :undoc-members: boto.fps.connection ------------------- .. automodule:: boto.fps.connection :members: :undoc-members: boto-2.20.1/docs/source/ref/glacier.rst000066400000000000000000000016211225267101000176770ustar00rootroot00000000000000.. ref-glacier ======= Glacier ======= boto.glacier ------------ .. automodule:: boto.glacier :members: :undoc-members: boto.glacier.layer1 ------------------- .. automodule:: boto.glacier.layer1 :members: :undoc-members: boto.glacier.layer2 ------------------- .. automodule:: boto.glacier.layer2 :members: :undoc-members: boto.glacier.vault ------------------ .. automodule:: boto.glacier.vault :members: :undoc-members: boto.glacier.job ---------------- .. automodule:: boto.glacier.job :members: :undoc-members: boto.glacier.writer ------------------- .. automodule:: boto.glacier.writer :members: :undoc-members: boto.glacier.concurrent ----------------------- .. automodule:: boto.glacier.concurrent :members: :undoc-members: boto.glacier.exceptions ----------------------- .. automodule:: boto.glacier.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/gs.rst000066400000000000000000000023271225267101000167060ustar00rootroot00000000000000.. ref-gs: == GS == boto.gs.acl ----------- .. automodule:: boto.gs.acl :members: :inherited-members: :undoc-members: boto.gs.bucket -------------- .. automodule:: boto.gs.bucket :members: :inherited-members: :undoc-members: :exclude-members: BucketPaymentBody, LoggingGroup, MFADeleteRE, VersionRE, VersioningBody, WebsiteBody, WebsiteErrorFragment, WebsiteMainPageFragment, startElement, endElement boto.gs.bucketlistresultset --------------------------- .. automodule:: boto.gs.bucketlistresultset :members: :inherited-members: :undoc-members: boto.gs.connection ------------------ .. automodule:: boto.gs.connection :members: :inherited-members: :undoc-members: boto.gs.cors ------------ .. automodule:: boto.gs.cors :members: :undoc-members: boto.gs.key ----------- .. automodule:: boto.gs.key :members: :inherited-members: :undoc-members: boto.gs.user ------------ .. automodule:: boto.gs.user :members: :inherited-members: :undoc-members: boto.gs.resumable_upload_handler -------------------------------- .. automodule:: boto.gs.resumable_upload_handler :members: :inherited-members: :undoc-members: boto-2.20.1/docs/source/ref/iam.rst000066400000000000000000000005131225267101000170360ustar00rootroot00000000000000.. ref-iam === IAM === boto.iam -------- .. 
automodule:: boto.iam :members: :undoc-members: boto.iam.connection ------------------- .. automodule:: boto.iam.connection :members: :undoc-members: boto.iam.summarymap ------------------- .. automodule:: boto.iam.summarymap :members: :undoc-members: boto-2.20.1/docs/source/ref/index.rst000066400000000000000000000005651225267101000174060ustar00rootroot00000000000000.. _ref-index: ============= API Reference ============= .. toctree:: :maxdepth: 4 boto beanstalk cloudformation cloudfront cloudsearch contrib dynamodb ec2 ecs emr file fps glacier gs iam manage mturk mws pyami rds redshift route53 s3 sdb services ses sns sqs sts swf vpc boto-2.20.1/docs/source/ref/manage.rst000066400000000000000000000012461225267101000175240ustar00rootroot00000000000000.. ref-manage ====== manage ====== boto.manage ----------- .. automodule:: boto.manage :members: :undoc-members: boto.manage.cmdshell -------------------- .. automodule:: boto.manage.cmdshell :members: :undoc-members: boto.manage.propget ------------------- .. automodule:: boto.manage.propget :members: :undoc-members: boto.manage.server ------------------ .. automodule:: boto.manage.server :members: :undoc-members: boto.manage.task ---------------- .. automodule:: boto.manage.task :members: :undoc-members: boto.manage.volume ------------------ .. automodule:: boto.manage.volume :members: :undoc-members: boto-2.20.1/docs/source/ref/mturk.rst000066400000000000000000000014771225267101000174440ustar00rootroot00000000000000.. ref-mturk ===== mturk ===== boto.mturk ------------ .. automodule:: boto.mturk :members: :undoc-members: boto.mturk.connection --------------------- .. automodule:: boto.mturk.connection :members: :undoc-members: boto.mturk.layoutparam ---------------------- .. automodule:: boto.mturk.layoutparam :members: :undoc-members: boto.mturk.notification ----------------------- .. automodule:: boto.mturk.notification :members: :undoc-members: boto.mturk.price ---------------- .. automodule:: boto.mturk.price :members: :undoc-members: boto.mturk.qualification ------------------------ .. automodule:: boto.mturk.qualification :members: :undoc-members: boto.mturk.question ------------------- .. automodule:: boto.mturk.question :members: :undoc-members: boto-2.20.1/docs/source/ref/mws.rst000066400000000000000000000006511225267101000171010ustar00rootroot00000000000000.. ref-mws === mws === boto.mws -------- .. automodule:: boto.mws :members: :undoc-members: boto.mws.connection ------------------- .. automodule:: boto.mws.connection :members: :undoc-members: boto.mws.exception ------------------- .. automodule:: boto.mws.exception :members: :undoc-members: boto.mws.response ------------------- .. automodule:: boto.mws.response :members: :undoc-members: boto-2.20.1/docs/source/ref/opsworks.rst000066400000000000000000000005651225267101000201660ustar00rootroot00000000000000.. ref-opsworks ======== Opsworks ======== boto.opsworks ------------ .. automodule:: boto.opsworks :members: :undoc-members: boto.opsworks.layer1 ------------------- .. automodule:: boto.opsworks.layer1 :members: :undoc-members: boto.opsworks.exceptions ----------------------- .. automodule:: boto.opsworks.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/pyami.rst000066400000000000000000000035011225267101000174070ustar00rootroot00000000000000.. ref-pyami ===== pyami ===== boto.pyami -------------- .. automodule:: boto.pyami :members: :undoc-members: boto.pyami.bootstrap -------------------- .. 
automodule:: boto.pyami.bootstrap :members: :undoc-members: boto.pyami.config ----------------- .. automodule:: boto.pyami.config :members: :undoc-members: boto.pyami.copybot ------------------ .. automodule:: boto.pyami.copybot :members: :undoc-members: boto.pyami.installers --------------------- .. automodule:: boto.pyami.installers :members: :undoc-members: boto.pyami.installers.ubuntu ---------------------------- .. automodule:: boto.pyami.installers.ubuntu :members: :undoc-members: boto.pyami.installers.ubuntu.apache ----------------------------------- .. automodule:: boto.pyami.installers.ubuntu.apache :members: :undoc-members: boto.pyami.installers.ubuntu.ebs -------------------------------- .. automodule:: boto.pyami.installers.ubuntu.ebs :members: :undoc-members: boto.pyami.installers.ubuntu.installer -------------------------------------- .. automodule:: boto.pyami.installers.ubuntu.installer :members: :undoc-members: boto.pyami.installers.ubuntu.mysql ---------------------------------- .. automodule:: boto.pyami.installers.ubuntu.mysql :members: :undoc-members: boto.pyami.installers.ubuntu.trac --------------------------------- .. automodule:: boto.pyami.installers.ubuntu.trac :members: :undoc-members: boto.pyami.launch_ami --------------------- .. automodule:: boto.pyami.launch_ami :members: :undoc-members: boto.pyami.scriptbase --------------------- .. automodule:: boto.pyami.scriptbase :members: :undoc-members: boto.pyami.startup ------------------ .. automodule:: boto.pyami.startup :members: :undoc-members:boto-2.20.1/docs/source/ref/rds.rst000066400000000000000000000012501225267101000170570ustar00rootroot00000000000000.. ref-rds === RDS === boto.rds -------- .. automodule:: boto.rds :members: :undoc-members: boto.rds.dbinstance ------------------- .. automodule:: boto.rds.dbinstance :members: :undoc-members: boto.rds.dbsecuritygroup ------------------------ .. automodule:: boto.rds.dbsecuritygroup :members: :undoc-members: boto.rds.dbsnapshot ------------------- .. automodule:: boto.rds.dbsnapshot :members: :undoc-members: boto.rds.event -------------- .. automodule:: boto.rds.event :members: :undoc-members: boto.rds.parametergroup ----------------------- .. automodule:: boto.rds.parametergroup :members: :undoc-members:boto-2.20.1/docs/source/ref/redshift.rst000066400000000000000000000005701225267101000201030ustar00rootroot00000000000000.. _ref-redshift: ======== Redshift ======== boto.redshift ------------- .. automodule:: boto.redshift :members: :undoc-members: boto.redshift.layer1 -------------------- .. automodule:: boto.redshift.layer1 :members: :undoc-members: boto.redshift.exceptions ------------------------ .. automodule:: boto.redshift.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/route53.rst000066400000000000000000000010001225267101000175660ustar00rootroot00000000000000.. ref-route53 ======= route53 ======= boto.route53.connection ----------------------- .. automodule:: boto.route53.connection :members: :undoc-members: boto.route53.exception ------------------- .. automodule:: boto.route53.exception :members: :undoc-members: boto.route53.record ------------------- .. automodule:: boto.route53.record :members: :undoc-members: boto.route53.zone ------------------------ .. automodule:: boto.route53.zone :members: :undoc-members: boto-2.20.1/docs/source/ref/s3.rst000066400000000000000000000027261225267101000166250ustar00rootroot00000000000000.. ref-s3: === S3 === boto.s3.acl ----------- .. 
automodule:: boto.s3.acl :members: :undoc-members: boto.s3.bucket -------------- .. automodule:: boto.s3.bucket :members: :undoc-members: boto.s3.bucketlistresultset --------------------------- .. automodule:: boto.s3.bucketlistresultset :members: :undoc-members: boto.s3.connection ------------------ .. automodule:: boto.s3.connection :members: :undoc-members: boto.s3.cors -------------- .. automodule:: boto.s3.cors :members: :undoc-members: boto.s3.deletemarker -------------------- .. automodule:: boto.s3.deletemarker :members: :undoc-members: boto.s3.key ----------- .. automodule:: boto.s3.key :members: :undoc-members: boto.s3.prefix -------------- .. automodule:: boto.s3.prefix :members: :undoc-members: boto.s3.multipart ----------------- .. automodule:: boto.s3.multipart :members: :undoc-members: boto.s3.multidelete ------------------- .. automodule:: boto.s3.multidelete :members: :undoc-members: boto.s3.resumable_download_handler ---------------------------------- .. automodule:: boto.s3.resumable_download_handler :members: :undoc-members: boto.s3.lifecycle -------------------- .. automodule:: boto.s3.lifecycle :members: :undoc-members: boto.s3.tagging --------------- .. automodule:: boto.s3.tagging :members: :undoc-members: boto.s3.user ------------ .. automodule:: boto.s3.user :members: :undoc-members: boto-2.20.1/docs/source/ref/sdb.rst000066400000000000000000000013441225267101000170430ustar00rootroot00000000000000.. ref-sdb ============= SDB Reference ============= In addition to what is seen below, boto includes an abstraction layer for SimpleDB that may be used: * :doc:`SimpleDB DB ` (Maintained, but little documentation) boto.sdb -------- .. automodule:: boto.sdb :members: :undoc-members: boto.sdb.connection ------------------- .. automodule:: boto.sdb.connection :members: :undoc-members: boto.sdb.domain --------------- .. automodule:: boto.sdb.domain :members: :undoc-members: boto.sdb.item ------------- .. automodule:: boto.sdb.item :members: :undoc-members: boto.sdb.queryresultset ----------------------- .. automodule:: boto.sdb.queryresultset :members: :undoc-members: boto-2.20.1/docs/source/ref/sdb_db.rst000066400000000000000000000021451225267101000175100ustar00rootroot00000000000000.. ref-sdbdb ================ SDB DB Reference ================ This module offers an ORM-like layer on top of SimpleDB. boto.sdb.db ----------- .. automodule:: boto.sdb.db :members: :undoc-members: boto.sdb.db.blob ---------------- .. automodule:: boto.sdb.db.blob :members: :undoc-members: boto.sdb.db.key --------------- .. automodule:: boto.sdb.db.key :members: :undoc-members: boto.sdb.db.manager ------------------- .. automodule:: boto.sdb.db.manager :members: :undoc-members: boto.sdb.db.manager.sdbmanager ------------------------------ .. automodule:: boto.sdb.db.manager.sdbmanager :members: :undoc-members: boto.sdb.db.manager.xmlmanager ------------------------------ .. automodule:: boto.sdb.db.manager.xmlmanager :members: :undoc-members: boto.sdb.db.model ----------------- .. automodule:: boto.sdb.db.model :members: :undoc-members: boto.sdb.db.property -------------------- .. automodule:: boto.sdb.db.property :members: :undoc-members: boto.sdb.db.query ----------------- .. automodule:: boto.sdb.db.query :members: :undoc-members: boto-2.20.1/docs/source/ref/services.rst000066400000000000000000000017031225267101000201150ustar00rootroot00000000000000.. ref-services ======== services ======== boto.services ------------- .. 
automodule:: boto.services :members: :undoc-members: boto.services.bs ---------------- .. automodule:: boto.services.bs :members: :undoc-members: boto.services.message --------------------- .. automodule:: boto.services.message :members: :undoc-members: boto.services.result -------------------- .. automodule:: boto.services.result :members: :undoc-members: boto.services.service --------------------- .. automodule:: boto.services.service :members: :undoc-members: boto.services.servicedef ------------------------ .. automodule:: boto.services.servicedef :members: :undoc-members: boto.services.sonofmmm ---------------------- .. automodule:: boto.services.sonofmmm :members: :undoc-members: boto.services.submit -------------------- .. automodule:: boto.services.submit :members: :undoc-members: boto-2.20.1/docs/source/ref/ses.rst000066400000000000000000000003411225267101000170610ustar00rootroot00000000000000.. ref-ses === SES === boto.ses ------------ .. automodule:: boto.ses :members: :undoc-members: boto.ses.connection --------------------- .. automodule:: boto.ses.connection :members: :undoc-members: boto-2.20.1/docs/source/ref/sns.rst000066400000000000000000000002601225267101000170720ustar00rootroot00000000000000.. ref-sns === SNS === boto.sns -------- .. automodule:: boto.sns :members: :undoc-members: .. autoclass:: boto.sns.SNSConnection :members: :undoc-members: boto-2.20.1/docs/source/ref/sqs.rst000066400000000000000000000015631225267101000171040ustar00rootroot00000000000000.. ref-sqs ==== SQS ==== boto.sqs -------- .. automodule:: boto.sqs :members: :undoc-members: boto.sqs.attributes ------------------- .. automodule:: boto.sqs.attributes :members: :undoc-members: boto.sqs.connection ------------------- .. automodule:: boto.sqs.connection :members: :undoc-members: boto.sqs.jsonmessage -------------------- .. automodule:: boto.sqs.jsonmessage :members: :undoc-members: boto.sqs.message ---------------- .. automodule:: boto.sqs.message :members: :undoc-members: boto.sqs.queue -------------- .. automodule:: boto.sqs.queue :members: :undoc-members: boto.sqs.regioninfo ------------------- .. automodule:: boto.sqs.regioninfo :members: :undoc-members: boto.sqs.batchresults --------------------- .. automodule:: boto.sqs.batchresults :members: :undoc-members: boto-2.20.1/docs/source/ref/sts.rst000066400000000000000000000004501225267101000171010ustar00rootroot00000000000000.. ref-sts === STS === boto.sts -------- .. automodule:: boto.sts :members: :undoc-members: .. autoclass:: boto.sts.STSConnection :members: :undoc-members: boto.sts.credentials -------------------- .. automodule:: boto.sts.credentials :members: :undoc-members: boto-2.20.1/docs/source/ref/support.rst000066400000000000000000000005531225267101000200100ustar00rootroot00000000000000.. _ref-support: ======= Support ======= boto.support ------------ .. automodule:: boto.support :members: :undoc-members: boto.support.layer1 ------------------- .. automodule:: boto.support.layer1 :members: :undoc-members: boto.support.exceptions ----------------------- .. automodule:: boto.support.exceptions :members: :undoc-members: boto-2.20.1/docs/source/ref/swf.rst000066400000000000000000000004431225267101000170710ustar00rootroot00000000000000.. ref-swf === SWF === boto.swf -------- .. automodule:: boto.swf :members: :undoc-members: boto.swf.layer1 -------------------- .. automodule:: boto.swf.layer1 :members: :undoc-members: boto.swf.layer2 -------------------- .. 
automodule:: boto.swf.layer2 :members: boto-2.20.1/docs/source/ref/vpc.rst000066400000000000000000000017721225267101000170700ustar00rootroot00000000000000.. _ref-vpc: ==== VPC ==== boto.vpc -------- .. automodule:: boto.vpc :members: :undoc-members: boto.vpc.customergateway ------------------------ .. automodule:: boto.vpc.customergateway :members: :undoc-members: boto.vpc.dhcpoptions -------------------- .. automodule:: boto.vpc.dhcpoptions :members: :undoc-members: boto.vpc.internetgateway ------------------------ .. automodule:: boto.vpc.internetgateway :members: :undoc-members: boto.vpc.routetable ------------------- .. automodule:: boto.vpc.routetable :members: :undoc-members: boto.vpc.subnet --------------- .. automodule:: boto.vpc.subnet :members: :undoc-members: boto.vpc.vpc ------------ .. automodule:: boto.vpc.vpc :members: :undoc-members: boto.vpc.vpnconnection ---------------------- .. automodule:: boto.vpc.vpnconnection :members: :undoc-members: boto.vpc.vpngateway ------------------- .. automodule:: boto.vpc.vpngateway :members: :undoc-members: boto-2.20.1/docs/source/releasenotes/000077500000000000000000000000001225267101000174545ustar00rootroot00000000000000boto-2.20.1/docs/source/releasenotes/dev.rst000066400000000000000000000003501225267101000207620ustar00rootroot00000000000000boto v2.xx.x ============ :date: 2013/xx/xx This release adds ____. Features -------- * . (:issue:``, :sha:``) Bugfixes -------- * (:issue:``, :sha:``) * Several documentation improvements/fixes: * (:issue:``, :sha:``) boto-2.20.1/docs/source/releasenotes/releasenotes_template.rst000066400000000000000000000003501225267101000245700ustar00rootroot00000000000000boto v2.xx.x ============ :date: 2013/xx/xx This release adds ____. Features -------- * . (:issue:``, :sha:``) Bugfixes -------- * (:issue:``, :sha:``) * Several documentation improvements/fixes: * (:issue:``, :sha:``) boto-2.20.1/docs/source/releasenotes/v2.0.0.rst000066400000000000000000000111001225267101000210220ustar00rootroot00000000000000========================== Release Notes for boto 2.0 ========================== Highlights ========== There have been many, many changes since the 2.0b4 release. This overview highlights some of those changes. * Fix connection pooling bug: don't close before reading. * Added AddInstanceGroup and ModifyInstanceGroup to boto.emr * Merge pull request #246 from chetan/multipart_s3put * AddInstanceGroupsResponse class to boto.emr.emrobject. * Removed extra print statement * Merge pull request #244 from ryansb/master * Added add_instance_groups function to boto.emr.connection. Built some helper methods for it, and added AddInstanceGroupsResponse class to boto.emr.emrobject. * Added a new class, InstanceGroup, with just a __init__ and __repr__. * Adding support for GetLoginProfile request to IAM. Removing commented lines in connection.py. Fixes GoogleCode issue 532. * Fixed issue #195 * Added correct sax reader for boto.emr.emrobject.BootstrapAction * Fixed a typo bug in ConsoleOutput sax parsing and some PEP8 cleanup in connection.py. * Added initial support for generating a registration url for the aws marketplace * Fix add_record and del_record to support multiple values, like change_record does * Add support to accept SecurityGroupId as a parameter for ec2 run instances. This is required to create EC2 instances under VPC security groups * Added support for aliases to the add_change method of ResourceRecordSets. * Resign each request in a retry situation. 
Some services are starting to incorporate replay detection algorithms and the boto approach of simply re-trying the original request triggers them. Also a small bug fix to roboto and added a delay in the ec2 test to wait for consistency. * Fixed a problem with InstanceMonitoring parameter of LaunchConfigurations for autoscale module. * Route 53 Alias Resource Record Sets * Fixed App Engine support * Fixed incorrect host on App Engine * Fixed issue 199 on github. * First pass at put_metric_data * Changed boto.s3.Bucket.set_acl_xml() to ISO-8859-1 encode the Unicode ACL text before sending over HTTP connection. * Added GetQualificationScore for mturk. * Added UpdateQualificationScore for mturk * import_key_pair base64 fix * Fixes for ses send_email method better handling of exceptions * Add optional support for SSL server certificate validation. * Specify a reasonable socket timeout for httplib * Support for ap-northeast-1 region * Close issue #153 * Close issue #154 * we must POST autoscale user-data, not GET. otherwise a HTTP 505 error is returned from AWS. see: http://groups.google.com/group/boto-dev/browse_thread/thread/d5eb79c97ea8eecf?pli=1 * autoscale userdata needs to be base64 encoded. * Use the unversioned streaming jar symlink provided by EMR * Updated lss3 to allow for prefix based listing (more like actual ls) * Deal with the groupSet element that appears in the instanceSet element in the DescribeInstances response. * Add a change_record command to bin/route53 * Incorporating a patch from AWS to allow security groups to be tagged. * Fixed an issue with extra headers in generated URLs. Fixes http://code.google.com/p/boto/issues/detail?id=499 * Incorporating a patch to handle obscure bug in apache/fastcgi. See http://goo.gl/0Tdax. * Reorganizing the existing test code. Part of a long-term project to completely revamp and improve boto tests. * Fixed an invalid parameter bug (ECS) #102 * Adding initial cut at s3 website support. Stats ===== * 465 commits since boto 2.0b4 * 70 authors * 111 Pull requests from 64 different authors Contributors (in order of last commits) ======================================= * Mitch Garnaat * Chris Moyer * Garrett Holmstrom * Justin Riley * Steve Johnson * Sean Talts * Brian Beach * Ryan Brown * Chetan Sarva * spenczar * Jonathan Drosdeck * garnaat * Nathaniel Moseley * Bradley Ayers * jibs * Kenneth Falck * chirag * Sean O'Connor * Scott Moser * Vineeth Pillai * Greg Taylor * root * darktable * flipkin * brimcfadden * Samuel Lucidi * Terence Honles * Mike Schwartz * Waldemar Kornewald * Lucas Hrabovsky * thaDude * Vinicius Ruan Cainelli * David Marin * Stanislav Ievlev * Victor Trac * Dan Fairs * David Pisoni * Matt Robenolt * Matt Billenstein * rgrp * vikalp * Christoph Kern * Gabriel Monroy * Ben Burry * Hinnerk * Jann Kleen * Louis R. 
Marascio * Matt Singleton * David Park * Nick Tarleton * Cory Mintz * Robert Mela * rlotun * John Walsh * Keith Fitzgerald * Pierre Riteau * ryancustommade * Fabian Topfstedt * Michael Thompson * sanbornm * Seth Golub * Jon Colverson * Steve Howard * Roberto Gaiser * James Downs * Gleicon Moraes * Blake Maltby * Mac Morgan * Rytis Sileika * winhamwr boto-2.20.1/docs/source/releasenotes/v2.0b1.rst000066400000000000000000000010641225267101000211170ustar00rootroot00000000000000=============================== Major changes for release 2.0b1 =============================== * Support for versioning in S3 * Support for MFA Delete in S3 * Support for Elastic Map Reduce * Support for Simple Notification Service * Support for Google Storage * Support for Consistent Reads and Conditional Puts in SimpleDB * Significant updates and improvements to Mechanical Turk (mturk) module * Support for Windows Bundle Tasks in EC2 * Support for Reduced Redundancy Storage (RRS) in S3 * Support for Cluster Computing instances and Placement Groups in EC2boto-2.20.1/docs/source/releasenotes/v2.1.0.rst000066400000000000000000000047201225267101000210350ustar00rootroot00000000000000=========== boto v2.1.0 =========== The 2.1.0 release of boto is now available on `PyPI`_ and `Google Code`_. .. _`PyPI`: http://pypi.python.org/pypi/boto .. _`Google Code`: http://code.google.com/p/boto/downloads/ You can view a list of issues that have been closed in this release at https://github.com/boto/boto/issues?milestone=4&state=closed) You can get a comprehensive list of all commits made between the 2.0 release and the 2.1.0 release at https://github.com/boto/boto/compare/033457f30d...a0a1fd54ef. Some highlights of this release: * Server-side encryption now supported in S3. * Better support for VPC in EC2. * Support for combiner in StreamingStep for EMR. * Support for CloudFormations. * Support for streaming uploads to Google Storage. * Support for generating signed URL's in CloudFront. * MTurk connection now uses HTTPS by default, like all other Connection objects. * You can now PUT multiple data points to CloudWatch in one call. * CloudWatch Dimension object now correctly supports multiple values for same dimension name. * Lots of documentation fixes/additions There were 235 commits in this release from 35 different authors. The authors are listed below, in no particular order: * Erick Fejta * Joel Barciauskas * Matthew Tai * Hyunjung Park * Mitch Garnaat * Victor Trac * Andy Grimm * ZerothAngel * Dan Lecocq * jmallen * Greg Taylor * Brian Grossman * Marc Brinkmann * Hunter Blanks * Steve Johnson * Keith Fitzgerald * Kamil Klimkiewicz * Eddie Hebert * garnaat * Samuel Lucidi * Kazuhiro Ogura * David Arthur * Michael Budde * Vineeth Pillai * Trevor Pounds * Mike Schwartz * Ryan Brown * Mark * Chetan Sarva * Dan Callahan * INADA Naoki * Mitchell Hashimoto * Chris Moyer * Riobard * Ted Romer * Justin Riley * Brian Beach * Simon Ratner We processed 60 pull requests for this release from 40 different contributors. Here are the github user id's for all of the pull request authors: * jtriley * mbr * jbarciauskas * hyunjung * bugi * ryansb * gtaylor * ehazlett * secretmike * riobard * simonratner * irskep * sanbornm * methane * jumping * mansam * miGlanz * dlecocq * fdr * mitchellh * ehebert * memory * hblanks * mbudde * ZerothAngel * goura * natedub * tpounds * bwbeach * mumrah * chetan * jmallen * a13m * mtai * fejta * jibs * callahad * vineethrp * JDrosdeck * gholms If you are trying to reconcile that data (i.e. 
35 different authors and 40 users with pull requests), well so am I. I'm just reporting on the data that I get from the Github api 8^) boto-2.20.1/docs/source/releasenotes/v2.1.1.rst000066400000000000000000000002141225267101000210300ustar00rootroot00000000000000=========== boto v2.1.1 =========== The 2.1.1 release fixes one serious issue with the RDS module. https://github.com/boto/boto/issues/382boto-2.20.1/docs/source/releasenotes/v2.10.0.rst000066400000000000000000000045771225267101000211270ustar00rootroot00000000000000boto v2.10.0 ============ :date: 2013/08/13 This release adds Mobile Push Notification support to Amazon Simple Notification Service, better reporting for Amazon Redshift, SigV4 authorization for Amazon Elastic MapReduce & lots of bugfixes. Features -------- * Added support for Mobile Push Notifications to SNS. This enables you to send push notifications to mobile devices (such as iOS or Android) using SNS. (:sha:`ccba574`) * Added support for better reporting within Redshift. (:sha:`9d55dd3`) * Switched Elastic MapReduce to use SigV4 for authorization. (:sha:`b80aa48`) Bugfixes -------- * Added the ``MinAdjustmentType`` parameter to EC2 Autoscaling. (:issue:`1562`, :issue:`1619`, :sha:`1760284`, :sha:`2a11fd9`, :sha:`2d14006` & :sha:`b7f1ae1`) * Fixed how DynamoDB tracks changes to data in ``Item`` objects, fixing failures with modified sets not being sent. (:issue:`1565`, :sha:`b111fcf` & :sha:`812f9a6`) * Updated the CA certificates Boto ships with. (:issue:`1578`, :sha:`4dfadc8`) * Fixed how CloudSearch's ``Layer2`` object gets initialized. (:issue:`1629`, :issue:`1630`, :sha:`40b3652` & :sha:`f797ff9`) * Fixed the ``-w`` flag in ``s3put``. (:issue:`1637`, :sha:`0865004` & :sha:`3fe70ca`) * Added the ``ap-southeast-2`` endpoint for DynamoDB. (:issue:`1621`, :sha:`501b637`) * Fixed test suite to run faster. (:sha:`243a67e`) * Fixed how non-JSON responses are caught from CloudSearch. (:issue:`1633`, :issue:`1645`, :sha:`d5a5c01`, :sha:`954a50c`, :sha:`915d8ff` & :sha:`4407fcb`) * Fixed how ``DeviceIndex`` is parsed from EC2. (:issue:`1632`, :issue:`1646`, :sha:`ff15e1f`, :sha:`8337a0b` & :sha:`27c9b04`) * Fixed EC2's ``connect_to_region`` to respect the ``region`` parameter. ( :issue:`1616`, :issue:`1654`, :sha:`9c37256`, :sha:`5950d12` & :sha:`b7eebe8`) * Added ``modify_network_interface_atribute`` to EC2 connections. (:issue:`1613`, :issue:`1656`, :sha:`e00b601`, :sha:`5b62f27`, :sha:`126f6e9`, :sha:`bbfed1f` & :sha:`0c61293`) * Added support for ``param_group`` within RDS. (:issue:`1639`, :sha:`c47baf0`) * Added support for using ``Item.partial_save`` to create new records within DynamoDBv2. (:issue:`1660`, :issue:`1521`, :sha:`bfa469f` & :sha:`58a13d7`) * Several documentation improvements/fixes: * Updated guideline on how core should merge PRs. (:sha:`80a419c`) * Fixed a typo in a CloudFront docstring. (:issue:`1657`, :sha:`1aa0621`)boto-2.20.1/docs/source/releasenotes/v2.11.0.rst000066400000000000000000000053151225267101000211170ustar00rootroot00000000000000boto v2.11.0 ============ :date: 2013/08/29 This release adds Public IP address support for VPCs created by EC2. It also makes the GovCloud region available for all services. Finally, this release also fixes a number of bugs. Features -------- * Added Public IP address support within VPCs created by EC2. (:sha:`be132d1`) * All services can now easily use GovCloud. 
(:issue:`1651`, :sha:`542a301`, :sha:`3c56121`, :sha:`9167d89`) * Added ``db_subnet_group`` to ``RDSConnection.restore_dbinstance_from_point_in_time``. (:issue:`1640`, :sha:`06592b9`) * Added ``monthly_backups`` to EC2's ``trim_snapshots``. (:issue:`1688`, :sha:`a2ad606`, :sha:`2998c11`, :sha:`e32d033`) * Added ``get_all_reservations`` & ``get_only_instances`` methods to EC2. (:issue:`1572`, :sha:`ffc6cc0`) Bugfixes -------- * Fixed the parsing of CloudFormation's ``LastUpdatedTime``. (:issue:`1667`, :sha:` 70f363a`) * Fixed STS' ``assume_role_with_web_identity`` to work correctly. (:issue:`1671`, :sha:`ed1f403`, :sha:`ca794d5`, :sha:`ed7e563`, :sha:`859762d`) * Fixed how VPC security group filtering is done in EC2. (:issue:`1665`, :issue:`1677`, :sha:`be00956`, :sha:`5e85dd1`, :sha:`e63aae8`) * Fixed fetching more than 100 records with ``ResourceRecordSet``. (:issue:`1647`, :issue:`1648`, :issue:`1680`, :sha:`b64dd4f`, :sha:`276df7e`, :sha:`e57cab0`, :sha:`e62a58b`, :sha:`4c81bea`, :sha:`a3c635b`) * Fixed how VPC Security Groups are referred to when working with RDS. (:issue:`1602`, :issue:`1683`, :issue:`1685`, :issue:`1694`, :sha:`012aa0c`, :sha:`d5c6dfa`, :sha:`7841230`, :sha:`0a90627`, :sha:`ed4fd8c`, :sha:`61d394b`, :sha:`ebe84c9`, :sha:`a6b0f7e`) * Google Storage ``Key`` now uses transcoding-invariant headers where possible. (:sha:`d36eac3`) * Doing non-multipart uploads when using ``s3put`` no longer requires having the ``ListBucket`` permission. (:issue:`1642`, :issue:`1693`, :sha:`f35e914`) * Fixed the serialization of ``attributes`` in a variety of SNS methods. (:issue:`1686`, :sha:`4afb3dd`, :sha:`a58af54`) * Fixed SNS to be better behaved when constructing an mobile push notification. (:issue:`1692`, :sha:`62fdf34`) * Moved SWF to SigV4. (:sha:`ef7d255`) * Several documentation improvements/fixes: * Updated the DynamoDB v2 docs to correct how the connection is built. (:issue:`1662`, :sha:`047962d`) * Fixed a typo in the DynamoDB v2 docstring for ``Table.create``. (:sha:`be00956`) * Fixed a typo in the DynamoDB v2 docstring for ``Table`` for custom connections. (:issue:`1681`, :sha:`6a53020`) * Fixed incorrect parameter names for ``DBParameterGroup`` in RDS. (:issue:`1682`, :sha:`0d46aed`) * Fixed a typo in the SQS tutorial. (:issue:`1684`, :sha:`38b7889`) boto-2.20.1/docs/source/releasenotes/v2.12.0.rst000066400000000000000000000017121225267101000211150ustar00rootroot00000000000000boto v2.12.0 ============ :date: 2013/09/04 This release adds support for Redis & replication groups to Elasticache as well as several bug fixes. Features -------- * Added support for Redis & replication groups to Elasticache. (:sha:`f744ff6`) Bugfixes -------- * Boto's User-Agent string has changed. Mostly additive to include more information. (:sha:`edb038a`) * Headers that are part of S3's signing are now correctly coerced to the proper case. (:issue:`1687`, :sha:`89eae8c`) * Altered S3 so that it's possible to track what portions of a multipart upload succeeded. (:issue:`1305`, :issue:`1675`, :sha:`e9a2c59`) * Added ``create_lb_policy`` & ``set_lb_policies_of_backend_server`` to ELB. (:issue:`1695`, :sha:`77a9458`) * Fixed pagination when listing vaults in Glacier. (:issue:`1699`, :sha:`9afecca`) * Several documentation improvements/fixes: * Added some docs about what command-line utilities ship with boto. 
(:sha:`5d7d54d`) boto-2.20.1/docs/source/releasenotes/v2.13.0.rst000066400000000000000000000026401225267101000211170ustar00rootroot00000000000000boto v2.13.0 ============ :date: 2013/09/12 This release adds support for VPC within AWS Opsworks, adds dry-run support & the ability to modify reserved instances in EC2 as well as several important bugfixes for EC2, SNS & DynamoDBv2. Features -------- * Added support for VPC within Opsworks. (:sha:`56e1df3`) * Added support for ``dry_run`` within EC2. (:sha:`dd7774c`) * Added support for ``modify_reserved_instances`` & ``describe_reserved_instances_modifications`` within EC2. (:sha:`7a08672`) Bugfixes -------- * Fixed EC2's ``associate_public_ip`` to work correctly. (:sha:`9db6101`) * Fixed a bug with ``dynamodb_load`` when working with sets. (:issue:`1664`, :sha:`ef2d28b`) * Changed SNS ``publish`` to use POST. (:sha:`9c11772`) * Fixed inability to create LaunchConfigurations when using Block Device Mappings. (:issue:`1709`, :issue:`1710`, :sha:`5fd728e`) * Fixed DynamoDBv2's ``batch_write`` to appropriately handle ``UnprocessedItems``. (:issue:`1566`, :issue:`1679`, :issue:`1714`, :sha:`2fc2369`) * Several documentation improvements/fixes: * Added Opsworks docs to the index. (:sha:`5d48763`) * Added docs on the correct string values for ``get_all_images``. (:issue:`1674`, :sha:`1e4ed2e`) * Removed a duplicate ``boto.s3.prefix`` entry from the docs. (:issue:`1707`, :sha:`b42d34c`) * Added an API reference for ``boto.swf.layer2``. (:issue:`1712`, :sha:`9f7b15f`) boto-2.20.1/docs/source/releasenotes/v2.13.2.rst000066400000000000000000000025651225267101000211270ustar00rootroot00000000000000boto v2.13.2 ============ :date: 2013/09/16 This release is a bugfix-only release, correcting several problems in EC2 as well as S3, DynamoDB v2 & SWF. .. note:: There was no v2.13.1 release made public. There was a packaging error that was discovered before it was published to PyPI. We apologise for the fault in the releases. Those responsible have been sacked. Bugfixes -------- * Fixed test fallout from the EC2 dry-run change. (:sha:`2159456`) * Added tests for more of SWF's ``layer2``. (:issue:`1718`, :sha:`35fb741`, :sha:`a84d401`, :sha:`1cf1641`, :sha:`a36429c`) * Changed EC2 to allow ``name`` to be optional in calls to ``copy_image``. (:issue:`1672`, :sha:`26285aa`) * Added ``billingProducts`` support to EC2 ``Image``. (:issue:`1703`, :sha:`cccadaf`, :sha:`3914e91`) * Fixed a place where ``dry_run`` was handled in EC2. (:issue:`1722`, :sha:`0a52c82`) * Fixed ``run_instances`` with a block device mapping. (:issue:`1723`, :sha:`974743f`, :sha:`9049f05`, :sha:`d7edafc`) * Fixed ``s3put`` to accept headers with a ``=`` in them. (:issue:`1700`, :sha:`7958c70`) * Fixed a bug in DynamoDB v2 where scans with filters over large sets may not return all values. (:issue:`1713`, :sha:`02893e1`) * Cloudsearch now uses SigV4. (:sha:`b2bdbf5`) * Several documentation improvements/fixes: * Added the "Apps Built On Boto" doc. (:sha:`3bd628c`) boto-2.20.1/docs/source/releasenotes/v2.13.3.rst000066400000000000000000000005741225267101000211260ustar00rootroot00000000000000boto v2.13.3 ============ :date: 2013/09/16 This release fixes a packaging error with the previous version of boto. The version ``v2.13.2`` was provided instead of ``2.13.2``, causing things like ``pip`` to incorrectly resolve the latest release. That release was only available for several minutes & was removed from PyPI due to the way it would break installation for users.
boto-2.20.1/docs/source/releasenotes/v2.14.0.rst000066400000000000000000000053761225267101000211310ustar00rootroot00000000000000boto v2.14.0 ============ :date: 2013/10/09 This release makes ``s3put`` region-aware, adds some missing features to EC2 and SNS, enables EPUB documentation output, and makes the HTTP(S) connection pooling port-aware, which in turn enables connecting to e.g. mock services running on ``localhost``. It also includes support for the latest EC2 and OpsWorks features, as well as several important bugfixes for EC2, DynamoDB, MWS, and Python 2.5 support. Features -------- * Add support for a ``--region`` argument to ``s3put`` and auto-detect bucket regions if possible (:issue:`1731`, :sha:`d9c28f6`) * Add ``delete_notification_configuration`` for EC2 autoscaling (:issue:`1717`, :sha:`ebb7ace`) * Add support for registering HVM instances (:issue:`1733`, :sha:`2afc68e`) * Add support for ``ReplaceRouteTableAssociation`` for EC2 (:issue:`1736`, :sha:`4296835`) * Add ``sms`` as an option for SNS subscribe (:issue:`1744`, :sha:`8ff08e5`) * Allow overriding ``has_google_credentials`` (:issue:`1752`, :sha:`052cc91`) * Add EPUB output format for docs (:issue:`1759`, :sha:`def7c67`) * Add handling of ``Connection: close`` HTTP headers in responses (:issue:`1773`, :sha:`1a38f32`) * Make connection pooling port-aware (:issue:`1764`, :issue:`1737`, :sha:`b6c7330`) * Add support for ``instance_type`` to ``modify_reserved_instances`` (:sha:`bf07eee`) * Add support for new OpsWorks features (:sha:`f512898`) Bugfixes -------- * Remove erroneous ``dry_run`` parameter (:issue:`1729`, :sha:`35a516e`) * Fix task_list override in poll methods of SWF Deciders and Workers ( :issue:`1724`, :sha:`fa8d871`) * Remove Content-Encoding header from metadata test (:issue:`1735`, :sha:`c8b0130`) * Fix the ability to override DynamoDBv2 host and port when creating connections (:issue:`1734`, :sha:`8d2b492`) * Fix UnboundLocalError (:sha:`e0e6aeb`) * ``self.rules`` is of type IPPermissionsList, remove takes no kwargs (:sha:`3c56b3f`) * Nicer error messages for 403s (:issue:`1753`, :sha:`d3d9eab`) * Various documentation fixes (:issue:`1762`, :sha:`76aef10`) * Various Python 2.5 fixes (:sha:`150aef6`, :sha:`67ae9ff`) * Prevent certificate tests from failing for non-govcloud accounts (:sha:`2d3d9f6`) * Fix flaky resumable upload test (:issue:`1768`, :sha:`6aa8ae2`) * Force the Host HTTP header to fix an issue with older httplibs (:sha:`202c456`) * Blacklist S3 from forced Host HTTP header (:sha:`9193226`) * Fix ``propagate_at_launch`` spelling error (:issue:`1739`, :sha:`e78d88a`) * Remove unused code that causes exceptions with bad response data (:issue:`1771`, :sha:`bec5e70`) * Fix ``detach_subnets`` typo (:issue:`1760`, :sha:`4424e1b`) * Fix result list handling of ``GetMatchingProductForIdResponse`` for MWS (:issue:`1751`, :sha:`977b7dc`) boto-2.20.1/docs/source/releasenotes/v2.15.0.rst000066400000000000000000000032221225267101000211160ustar00rootroot00000000000000boto v2.15.0 ============ :date: 2013/10/17 This release adds support for Amazon Elastic Transcoder audio transcoding, new regions for Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Redshift as well as new parameters in Amazon Simple Queue Service (SQS), Amazon Elastic Compute Cloud (EC2), and the ``lss3`` utility. Also included are documentation updates and fixes for S3, Amazon DynamoDB, Amazon Simple Workflow Service (SWF) and Amazon Marketplace Web Service (MWS). 
Features -------- * Add SWF tutorial and code sample (:issue:`1769`, :sha:`36524f5`) * Add ap-southeast-2 region to S3WebsiteEndpointTranslate (:issue:`1777`, :sha:`e7b0b39`) * Add support for ``owner_acct_id`` in SQS ``get_queue`` (:issue:`1786`, :sha:`c1ad303`) * Add ap-southeast-2 region to Glacier (:sha:`c316266`) * Add ap-southeast-1 and ap-southeast-2 to Redshift (:sha:`3d67a03`) * Add SSH timeout option (:issue:`1755`, :sha:`d8e70ef`, :sha:`653b82b`) * Add support for markers in ``lss3`` (:issue:`1783`, :sha:`8ee4b1f`) * Add ``block_device_mapping`` to EC2 ``create_image`` (:issue:`1794`, :sha:`86afe2e`) * Updated SWF tutorial (:issue:`1797`, :sha:`3804b16`) * Support Elastic Transcoder audio transcoding (:sha:`03a5087`) Bugfixes -------- * Fix VPC module docs, ELB docs, some formatting (:issue:`1770`, :sha:`75de377`) * Fix DynamoDB item ``attrs`` initialization (:issue:`1776`, :sha:`8454a2b`) * Fix parsing of empty member lists for MWS (:issue:`1785`, :sha:`7b46ca5`) * Fix link to release notes in docs (:sha:`a6bf794`) * Do not validate bucket when copying a key (:issue:`1763`, :sha:`5505113`) * Retry HTTP 502, 504 errors (:issue:`1798`, :sha:`c832e2d`) boto-2.20.1/docs/source/releasenotes/v2.16.0.rst000066400000000000000000000060341225267101000211230ustar00rootroot00000000000000boto v2.16.0 ============ :date: 2013/11/08 This release adds new Amazon Elastic MapReduce functionality, provides updates and fixes for Amazon EC2, Amazon VPC, Amazon DynamoDB, Amazon SQS, Amazon Elastic MapReduce, and documentation updates for several services. Features -------- * Added recipe for parallel execution of activities to SWF tutorial. (:issue:`1800`, :issue:`1800`, :sha:`52c5432`) * Added launch_config's parameter associate_ip_address for VPC. (:issue:`1799`, :issue:`1799`, :sha:`6685adb`) * Update elbadmin add/remove commands to support multiple instance arguments. (:issue:`1806`, :issue:`1806`, :sha:`4aad26d`) * Added documentation for valid auto scaling event types and tags. (:issue:`1807`, :issue:`1807`, :sha:`664f6e8`) * Support VPC tenancy restrictions and filters for DHCP options. (:issue:`1801`, :issue:`1801`, :sha:`8c5d8de`) * Add VPC network ACL support. (:issue:`1809`, :issue:`1098`, :issue:`1809`, :sha:`9043d09`) * Add convenience functions to make DynamoDB2 behave more like DynamoDB (:issue:`1780`, :sha:`2cecaca`) * EC2 cancel_spot_instance_requests now returns a list of SpotInstanceRequest objects. (:issue:`1811`, :issue:`1811`, :issue:`1754`, :sha:`f3361b9`) * Fix VPC DescribeVpnConnections call argument; Add support for static_routes_only when creating a new VPC. (:issue:`1816`, :issue:`1816`, :issue:`1481`, :sha:`b408637`) * Add a section about DynamoDB Local to the DynamoDBv2 high level docs. (:issue:`1821`, :issue:`1821`, :issue:`1818`, :sha:`639505f`) * Add support for new Elastic MapReduce APIs (:issue:`1836`, :sha:`5562264`) * Modify EMR add_jobflow_steps to return a JobFlowStepList. (:issue:`1838`, :issue:`1838`, :sha:`ef9564f`) * Generate docs for route53/zone, remove docs for route53/hostedzone. (:issue:`1837`, :issue:`1837`, :sha:`99e2e67`) BugFixes -------- * Fix for MWS iterator handling (:sha:`7e6f98d`) * Clarify documentation for MetricAlarm dimensions. (:issue:`1808`, :issue:`1808`, :issue:`1803`, :sha:`4233fbf`) * Fixes for general connection behind proxy. (:issue:`1781`, :issue:`1781`, :sha:`dc8bbea`) * Validate S3 method kwarg names to prevent misspelling. 
(:issue:`1810`, :issue:`1810`, :issue:`1782`, :sha:`947a14a`) * Fix dependencies so they show up as optional in CheeseShop (:issue:`1617`, :sha:`54da8b6`) * Route53 retry HTTP error 400s (:issue:`1618`, :sha:`6e355b3`) * Fix typo in IAMConnection documentation (:issue:`1820`, :sha:`3fc335d`) * Fix MWS MemberLists parsing. (:issue:`1815`, :issue:`1815`, :sha:`0f6f089`) * Fix typo in SQS documentation (:issue:`1830`, :sha:`20532a6`) * Update auto scaling documentation. (:issue:`1824`, :issue:`1824`, :issue:`1823`, :sha:`9a359ec`) * Fixing region endpoints for EMR (:issue:`1831`, :sha:`ed669f7`) * Raising an exception in SQS message decode() should not abort parsing. (:issue:`1835`, :issue:`1835`, :issue:`1833`, :sha:`2a00c92`) * Replace correct VPC ACL association instead of just the first one. (:issue:`1844`, :issue:`1844`, :issue:`1843`, :sha:`c70b8d6`) * Prevent swallowing CloudSearch errors (:issue:`1846`, :issue:`1842`, :sha:`c2f955b`) boto-2.20.1/docs/source/releasenotes/v2.17.0.rst000066400000000000000000000010531225267101000211200ustar00rootroot00000000000000boto v2.17.0 ============ :date: 2013/11/14 This release adds support for the new AWS CloudTrail service, support for Amazon Redshift's new features related to encryption, audit logging, data load from external hosts, WLM configuration, database distribution styles and functions, as well as cross region snapshot copying. Features -------- * Add support for AWS CloudTrail (:sha:`53ba0c9`) * Add support for new Amazon Redshift features (:sha:`d94b48c`) Bugfixes -------- * Add missing argument for Google Storage resumable uploads. (:sha:`b777b62`) boto-2.20.1/docs/source/releasenotes/v2.18.0.rst000066400000000000000000000033411225267101000211230ustar00rootroot00000000000000boto v2.18.0 ============ :date: 2013/11/22 This release adds support for new AWS Identity and Access Management (IAM), AWS Security Token Service (STS), Elastic Load Balancing (ELB), Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS), and Amazon Elastic Transcoder APIs and parameters. Amazon Redshift SNS notifications are now supported. CloudWatch is updated to use signature version four, issues encoding HTTP headers are fixed and several services received documentation fixes. Features -------- * Add support for new STS and IAM calls related to SAML. (:issue:`1867`, :issue:`1867`, :sha:`1c51d17`) * Add SigV4 support to Cloudwatch (:sha:`ef43035`) * Add support for ELB Attributes and Cross Zone Balancing. (:issue:`1852`, :issue:`1852`, :sha:`76f8b7f`) * Add RDS promote and rename support. (:issue:`1857`, :issue:`1857`, :sha:`0b62c70`) * Update EC2 ``get_all_snapshots`` and add support for registering an image with a snapshot. (:issue:`1850`, :issue:`1850`, :sha:`3007956`) Bugfixes -------- * Fix issues related to encoding of values in HTTP headers when using unicode. (:issue:`1864`, :issue:`1864`, :issue:`1839`, :issue:`1829`, :issue:`1828`, :issue:`702`, :sha:`5610dd7`) * Fix order of Beanstalk documentation to match param order. (:issue:`1863`, :issue:`1863`, :sha:`a3a29f8`) * Make sure file is closed before attempting to delete it when downloading an S3 key. (:issue:`1791`, :sha:`0e6dcbe`) * Fix minor CloudTrail documentation typos. (:issue:`1861`, :issue:`1861`, :sha:`256a115`) * Fix DynamoDBv2 tutorial sentence with missing verb.
(:issue:`1859`, :issue:`1825`, :issue:`1859`, :sha:`0fd5300`) * Fix parameter validation for gs (:issue:`1858`, :sha:`6b9a869`) boto-2.20.1/docs/source/releasenotes/v2.19.0.rst000066400000000000000000000012701225267101000211230ustar00rootroot00000000000000boto v2.19.0 ============ :date: 2013/11/27 This release adds support for max result limits for Amazon EC2 calls, adds support for Amazon RDS database snapshot copies and fixes links to the changelog. Features -------- * Add max results parameters to EC2 describe instances and describe tags. (:issue:`1873`, :issue:`1873`, :sha:`ad8a64a`) * Add support for RDS CopyDBSnapshot. (:issue:`1872`, :issue:`1872`, :issue:`1865`, :sha:`bffb758`) Bugfixes -------- * Update README.rst to link to ReadTheDocs changelogs. (:issue:`1869`, :sha:`26f3dbe`) * Delete the old changelog in favor of the README link to ReadTheDocs changelogs. (:issue:`1870`, :issue:`1870`, :sha:`32bc333`) boto-2.20.1/docs/source/releasenotes/v2.2.0.rst000066400000000000000000000041211225267101000210310ustar00rootroot00000000000000=========== boto v2.2.0 =========== The 2.2.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can view a list of issues that have been closed in this release at https://github.com/boto/boto/issues?milestone=5&state=closed. You can get a comprehensive list of all commits made between the 2.0 release and the 2.1.0 release at https://github.com/boto/boto/compare/fa0d6a1e49c8468abbe2c99cdc9f5fd8fd19f8f8...26c8eb108873bf8ce1b9d96d642eea2beef78c77. Some highlights of this release: * Support for Amazon DynamoDB service. * Support for S3 Object Lifecycle (Expiration). * Allow anonymous requests for S3. * Support for creating Load Balancers in VPC. * Support for multi-dimension metrics in CloudWatch. * Support for Elastic Network Interfaces in EC2. * Support for Amazon S3 Multi-Delete capability. * Support for new AMI version and overriding of parameters in EMR. * Support for SendMessageBatch request in SQS. * Support for DescribeInstanceStatus request in EC2. * Many, many improvements and additions to API documentation and Tutorials. Special thanks to Greg Taylor for all of the Sphinx cleanups and new docs. There were 336 commits in this release from 40 different authors. The authors are listed below, in no particular order: * Garrett Holmstrom * mLewisLogic * Warren Turkal * Nathan Binkert * Scott Moser * Jeremy Edberg * najeira * Marc Cohen * Jim Browne * Mitch Garnaat * David Ormsbee * Blake Maltby * Thomas O'Dowd * Victor Trac * David Marin * Greg Taylor * rdodev * Jonathan Sabo * rdoci * Mike Schwartz * l33twolf * Keith Fitzgerald * Oleksandr Gituliar * Jason Allum * Ilya Volodarsky * Rajesh * Felipe Reyes * Andy Grimm * Seth Davis * Dave King * andy * Chris Moyer * ruben * Spike Gronim * Daniel Norberg * Justin Riley * Milan Cermak * timtebeek * unknown * Yotam Gingold * Brian Oldfield We processed 21 pull requests for this release from 40 different contributors.
Here are the github user id's for all of the pull request authors: * milancermak * jsabo * gituliar * rdodev * marccohen * tpodowd * trun * jallum * binkert * ormsbee * timtebeek boto-2.20.1/docs/source/releasenotes/v2.2.1.rst000066400000000000000000000002031225267101000210270ustar00rootroot00000000000000=========== boto v2.2.1 =========== The 2.2.1 release fixes a packaging problem that was causing problems when installing via pip.boto-2.20.1/docs/source/releasenotes/v2.2.2.rst000066400000000000000000000012741225267101000210410ustar00rootroot00000000000000=========== boto v2.2.2 =========== The 2.2.2 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can view a list of issues that have been closed in this release at https://github.com/boto/boto/issues?milestone=8&state=closed. You can get a comprehensive list of all commits made between the 2.2.1 release and the 2.2.2 release at https://github.com/boto/boto/compare/2.2.1...2.2.2. This is a bugfix release. There were 71 commits in this release from 11 different authors. The authors are listed below, in no particular order: * aficionado * jimbrowne * rdodev * milancermak * garnaat * kopertop * samuraisam * tpodowd * psa * mfschwartz * gtaylor boto-2.20.1/docs/source/releasenotes/v2.20.0.rst000066400000000000000000000045641225267101000211240ustar00rootroot00000000000000boto v2.20.0 ============ :date: 2013/12/12 This release adds support for Amazon Kinesis and AWS Direct Connect. Amazon EC2 gets support for new i2 instance types and is more resilient against metadata failures, Amazon DynamoDB gets support for global secondary indexes and Amazon Relational Database Service (RDS) supports new DBInstance and DBSnapshot attributes. There are several other fixes for various services, including updated support for CloudStack and Eucalyptus. Features -------- * Add support for Amazon Kinesis (:sha:`d0b684e`) * Add support for i2 instance types to EC2. (:sha:`0f5371f`) * Add support for DynamoDB Global Secondary Indexes (:sha:`297cacb`) * Add support for AWS Direct Connect. (:issue:`1894`, :issue:`1894`, :sha:`3cbca26`) * Add option for sorting SDB dumps to sdbadmin. (:issue:`1888`, :issue:`1888`, :sha:`070e4f6`) * Add a retry when EC2 metadata is returned as corrupt JSON. (:issue:`1883`, :issue:`1883`, :issue:`1868`, :sha:`41470a0`) * Added some missing attributes to DBInstance and DBSnapshot. (:issue:`1880`, :issue:`1880`, :sha:`2751dff`) Bugfixes -------- * Implement nonzero for DynamoDB Item to consider empty items falsey (:issue:`1899`, :sha:`808e550`) * Remove `dimensions` from Metric.query() docstring. (:issue:`1901`, :issue:`1901`, :sha:`ba6b8c7`) * Make trailing slashes for EC2 metadata URLs explicit & remove them from userdata requests. This fixes using boto for CloudStack (:issue:`1900`, :issue:`1900`, :issue:`1897`, :issue:`1856`, :sha:`5f4506e`) * Fix the DynamoDB 'scan in' filter to compare the same attribute types in a list rather than using an attribute set. (:issue:`1896`, :issue:`1896`, :sha:`5fc59d6`) * Updating Amazon ElastiCache parameters to be optional when creating a new cache cluster. (:issue:`1876`, :issue:`1876`, :sha:`342b8df`) * Fix honor cooldown AutoScaling parameter serialization to prevent an exception and bad request. (:issue:`1895`, :issue:`1895`, :issue:`1892`, :sha:`fc4674f`) * Fix ignored RDS backup_retention_period when value was 0. 
(:issue:`1887`, :issue:`1887`, :issue:`1886`, :sha:`a19eb14`) * Use auth_handler to specify host header value including custom ports if possible, which are used by Eucalyptus. (:issue:`1862`, :issue:`1862`, :sha:`ce6df03`) * Fix documentation of launch config in Autoscaling Group. (:issue:`1881`, :issue:`1881`, :sha:`6f704d9`) * typo: AIM -> IAM (:issue:`1882`, :sha:`7ea2d5c`) boto-2.20.1/docs/source/releasenotes/v2.20.1.rst000066400000000000000000000005541225267101000211200ustar00rootroot00000000000000boto v2.20.1 ============ :date: 2013/12/13 This release fixes an important Amazon EC2 bug related to fetching security credentials via the meta-data service. It is recommended that users of boto-2.20.0 upgrade to boto-2.20.1. Bugfixes -------- * Bug fix for IAM security credentials metadata URL. (:issue:`1912`, :issue:`1908`, :issue:`1907`, :sha:`f82e7a5`) boto-2.20.1/docs/source/releasenotes/v2.3.0.rst000066400000000000000000000023051225267101000210340ustar00rootroot00000000000000=========== boto v2.3.0 =========== The 2.3.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can view a list of issues that have been closed in this release at https://github.com/boto/boto/issues?milestone=6&state=closed. You can get a comprehensive list of all commits made between the 2.2.2 release and the 2.3.0 release at https://github.com/boto/boto/compare/2.2.2...2.3.0. This release includes initial support for Amazon Simple Workflow Service. The API version of the FPS module was updated to 2010-08-28. This release also includes many bug fixes and improvements in the Amazon DynamoDB module. One change of particular note is the behavior of the ``new_item`` method of the ``Table`` object. See http://readthedocs.org/docs/boto/en/2.3.0/ref/dynamodb.html#module-boto.dynamodb.table for more details. There were 109 commits in this release from 21 different authors. The authors are listed below, in no particular order: * theju * garnaat * rdodev * mfschwartz * kopertop * tpodowd * gtaylor * kachok * croach * tmorgan * Erick Fejta * dherbst * marccohen * Arif Amirani * yuzeh * Roguelazer * awblocker * blinsay * Peter Broadwell * tierney * georgekola boto-2.20.1/docs/source/releasenotes/v2.4.0.rst000066400000000000000000000024361225267101000210420ustar00rootroot00000000000000=========== boto v2.4.0 =========== The 2.4.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can get a comprehensive list of all commits made between the 2.3.0 release and the 2.4.0 release at https://github.com/boto/boto/compare/2.3.0...2.4.0. This release includes: * Initial support for Amazon Cloudsearch Service. * Support for Amazon's Marketplace Web Service. * Latency-based routing for Route53 * Support for new domain verification features of SES. * A full rewrite of the FPS module. * Support for BatchWriteItem in DynamoDB. * Additional EMR steps for installing and running Pig scripts. * Support for additional batch operations in SQS. * Better support for VPC group-ids. * Many, many bugfixes from the community. Thanks for the reports and pull requests! There were 175 commits in this release from 32 different authors. 
The authors are listed below, in no particular order: * estebistec * tpodowd * Max Noel * garnaat * mfschwartz * jtriley * akoumjian * jreese * mulka * Nuutti Kotivuori * mboersma * ryansb * dampier * crschmidt * nithint * sievlev * eckamm * imlucas * disruptek * trevorsummerssmith * tmorgan * evanworley * iandanforth * oozie * aedeph * alexanderdean * abrinsmead * dlecocq * bsimpson63 * jamesls * cosmin * gtaylor boto-2.20.1/docs/source/releasenotes/v2.5.0.rst000066400000000000000000000014721225267101000210420ustar00rootroot00000000000000=========== boto v2.5.0 =========== The 2.5.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can get a comprehensive list of all commits made between the 2.4.1 release and the 2.5.0 release at https://github.com/boto/boto/compare/2.4.1...2.5.0. This release includes: * Support for IAM Roles for EC2 Instances * Added support for Capabilities in CloudFormation * Spot instances in autoscaling groups * Internal ELB's * Added tenancy option to run_instances There were 77 commits in this release from 18 different authors. The authors are listed below, in no particular order: * jimbrowne * cosmin * gtaylor * garnaat * brianjaystanley * jamesls * trevorsummerssmith * Bryan Donlan * davidmarble * jtriley * rdodev * toby * tpodowd * srs81 * mfschwartz * rdegges * gholms boto-2.20.1/docs/source/releasenotes/v2.5.1.rst000066400000000000000000000002001225267101000210270ustar00rootroot00000000000000=========== boto v2.5.1 =========== Release 2.5.1 is a bugfix release. It fixes the following critical issues: * :issue:`819` boto-2.20.1/docs/source/releasenotes/v2.5.2.rst000066400000000000000000000003311225267101000210350ustar00rootroot00000000000000=========== boto v2.5.2 =========== Release 2.5.2 is a bugfix release. It fixes the following critical issues: * :issue:`830` This issue only affects you if you are using DynamoDB on an EC2 instance with IAM Roles.boto-2.20.1/docs/source/releasenotes/v2.6.0.rst000066400000000000000000000046531225267101000210470ustar00rootroot00000000000000=========== boto v2.6.0 =========== The 2.6.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can get a comprehensive list of all commits made between the 2.5.2 release and the 2.6.0 release at https://github.com/boto/boto/compare/2.5.2...2.6.0. This release includes: * Support for Amazon Glacier * Support for AWS Elastic Beanstalk * CORS support for Amazon S3 * Support for Reserved Instances Resale in Amazon EC2 * Support for IAM Roles SSL Certificate Verification ============================ In addition, this release of boto changes the default behavior with respect to SSL certificate verification. Our friends at Google contributed code to boto well over a year ago that implemented SSL certificate verification. At the time, we felt the most prudent course of action was to make this feature an opt-in but we always felt that at some time in the future we would enable cert verification as the default behavior. Well, that time is now! However, in implementing this change, we came across a bug in Python for all versions prior to 2.7.3 (see http://bugs.python.org/issue13034 for details). The net result of this bug is that Python is able to check only the commonName in the SSL cert for verification purposes. Any subjectAltNames are ignored in large SSL keys. 
So, in addition to enabling verification as the default behavior we also changed some of the service endpoints in boto to match the commonName in the SSL certificate. If you want to disable verification for any reason (not advised, btw) you can still do so by editing your boto config file (see https://gist.github.com/3762068) or you can override it by passing `validate_certs=False` to the Connection class constructor or the `connect_*` function. Commits ======= There were 440 commits in this release from 53 different authors. The authors are listed below, in alphabetical order: * acorley * acrefoot * aedeph * allardhoeve * almost * awatts * buzztroll * cadams * cbednarski * cosmin * dangra * darjus-amzn * disruptek * djw * garnaat * gertjanol * gimbel0893 * gochist * graphaelli * gtaylor * gz * hardys * jamesls * jijojv * jimbrowne * jtlebigot * jtriley * kopertop * kotnik * marknca * mark_nunnikhoven * mfschwartz * moliware * NeilW * nkvoll * nsitarz * ohe * pasieronen * patricklucas * pfig * rajivnavada * reversefold * robie * scott * shawnps * smoser * sopel * staer * tedder * yamatt * Yossi * yovadia12 * zachhuff386boto-2.20.1/docs/source/releasenotes/v2.7.0.rst000066400000000000000000000033641225267101000210460ustar00rootroot00000000000000=========== boto v2.7.0 =========== The 2.7.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can get a comprehensive list of all commits made between the 2.6.0 release and the 2.7.0 release at https://github.com/boto/boto/compare/2.6.0...2.7.0. This release includes: * Added support for AWS Data Pipeline - :sha:`999902` * Integrated Slick53 into Route53 module - :issue:`1186` * Add ability to use Decimal for DynamoDB numeric types - :issue:`1183` * Query/Scan Count/ScannedCount support and TableGenerator improvements - :issue:`1181` * Added support for keyring in config files - :issue:`1157` * Add concurrent downloader to glacier - :issue:`1106` * Add support for tagged RDS DBInstances - :issue:`1050` * Updating RDS API Version to 2012-09-17 - :issue:`1033` * Added support for provisioned IOPS for RDS - :issue:`1028` * Add ability to set SQS Notifications in Mechanical Turk - :issue:`1018` Commits ======= There were 447 commits in this release from 60 different authors. The authors are listed below, in alphabetical order: * acrefoot * Alex Schoof * Andy Davidoff * anoopj * Benoit Dubertret * bobveznat * dahlia * dangra * disruptek * dmcritchie * emtrane * focus * fsouza * g2harris * garnaat * georgegoh * georgesequeira * GitsMcGee * glance- * gtaylor * hashbackup * hinnerk * hoov * isaacbowen * jamesls * JerryKwan * jimfulton * jimbrowne * jorourke * jterrace * jtriley * katzj * kennu * kevinburke * khagler * Kodiologist * kopertop * kotnik * Leftium * lpetc * marknca * matthewandrews * mfschwartz * mikek * mkmt * mleonhard * mraposa * oozie * phunter * potix2 * Rafael Cunha de Almeida * reinhillmann * reversefold * Robie Basak * seandst * siroken3 * staer * tpodowd * vladimir-sol * yovadia12 boto-2.20.1/docs/source/releasenotes/v2.8.0.rst000066400000000000000000000015161225267101000210440ustar00rootroot00000000000000=========== boto v2.8.0 =========== The 2.8.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can get a comprehensive list of all commits made between the 2.7.0 release and the 2.8.0 release at https://github.com/boto/boto/compare/2.7.0...2.8.0. 
This release includes: * Added support for Amazon Elasticache * Added support for Amazon Elastic Transcoding Service As well as numerous bug fixes and improvements. Commits ======= There were 115 commits in this release from 21 different authors. The authors are listed below, in alphabetical order: * conorbranagan * dkavanagh * gaige * garnaat * halfaleague * jamesls * jjhooper * jordansissel * jterrace * Kodiologist * kopertop * mfschwartz * nathan11g * pasc * phobologic * schworer * seandst * SirAlvarex * Yaniv Ovadia * yig * yovadia12 boto-2.20.1/docs/source/releasenotes/v2.9.0.rst000066400000000000000000000021451225267101000210440ustar00rootroot00000000000000=========== boto v2.9.0 =========== The 2.9.0 release of boto is now available on `PyPI`_. .. _`PyPI`: http://pypi.python.org/pypi/boto You can get a comprehensive list of all commits made between the 2.8.0 release and the 2.9.0 release at https://github.com/boto/boto/compare/2.8.0...2.9.0. This release includes: * Support for Amazon Redshift * Support for Amazon DynamoDB's new API * Support for AWS Opsworks * Add `copy_image` to EC2 (AMI copy) * Add `describe_account_attributes` and `describe_vpc_attribute`, and `modify_vpc_attribute` operations to EC2. There were 240 commits made by 34 different authors: * g2harris * Michael Barrett * Pascal Hakim * James Saryerwinnie * Mitch Garnaat * ChangMin Jeon * Mike Schwartz * Jeremy Katz * Alex Schoof * reinhillmann * Travis Hobrla * Zach Wilt * Daniel Lindsley * ksacry * Michael Wirth * Eric Smalling * pingwin * Chris Moyer * Olivier Hervieu * Iuri de Silvio * Joe Sondow * Max Noel * Nate * Chris Moyer * Lars Otten * Nathan Grigg * Rein Hillmann * Øyvind Saltvik * Rayson HO * Martin Matusiak * Royce Remer * Jeff Terrace * Yaniv Ovadia * Eduardo S. Klein boto-2.20.1/docs/source/releasenotes/v2.9.1.rst000066400000000000000000000031171225267101000210450ustar00rootroot00000000000000boto v2.9.1 =========== :date: 2013/04/30 Primarily a bugfix release, this release also includes support for the new AWS Support API. Features -------- * AWS Support API - A client was added to support the new AWS Support API. It gives programmatic access to Support cases opened with AWS. A short example might look like:: >>> from boto.support.layer1 import SupportConnection >>> conn = SupportConnection() >>> new_case = conn.create_case( ... subject='Description of the issue', ... service_code='amazon-cloudsearch', ... category_code='performance', ... communication_body="We're seeing some latency from one of our...", ... severity_code='low' ... ) >>> new_case['caseId'] u'case-...' The :ref:`Support Tutorial ` has more information on how to use the new API. (:sha:`8c0451`) Bugfixes -------- * The reintroduction of ``ResumableUploadHandler.get_upload_id`` that was accidentally removed in a previous commit. (:sha:`758322`) * Added ``OrdinaryCallingFormat`` to support Google Storage's certificate verification. (:sha:`4ca83b`) * Added the ``eu-west-1`` region for Redshift. (:sha:`e98b95`) * Added support for overriding the port any connection in ``boto`` uses. (:sha:`08e893`) * Added retry/checksumming support to the DynamoDB v2 client. (:sha:`969ae2`) * Several documentation improvements/fixes: * Incorrect docs on EC2's ``import_key_pair``. (:sha:`6ada7d`) * Clearer docs on the DynamoDB ``count`` parameter. (:sha:`dfa456`) * Fixed a typo in the ``autoscale_tut``. 
(:sha:`6df1ae`) boto-2.20.1/docs/source/releasenotes/v2.9.2.rst000066400000000000000000000003471225267101000210500ustar00rootroot00000000000000boto v2.9.2 =========== :date: 2013/04/30 A hotfix release that adds the ``boto.support`` into ``setup.py``. Features -------- * None. Bugfixes -------- * Fixed the missing ``boto.support`` in ``setup.py``. (:sha:`9ac196`) boto-2.20.1/docs/source/releasenotes/v2.9.3.rst000066400000000000000000000043561225267101000210550ustar00rootroot00000000000000boto v2.9.3 =========== :date: 2013/05/15 This release adds ELB support to Opsworks, optimized EBS support in EC2 AutoScale, Parallel Scan support to DynamoDB v2, a higher-level interface to DynamoDB v2 and API updates to DataPipeline. Features -------- * ELB support in Opsworks - You can now attach & describe the Elastic Load Balancers within the Opsworks client. (:sha:`ecda87`) * Optimized EBS support in EC2 AutoScale - You can now specify whether an AutoScale instance should be optimized for EBS I/O. (:sha:`f8acaa`) * Parallel Scan support in DynamoDB v2 - If you have extra read capacity & a large amount of data, you can scan over the records in parallel by telling DynamoDB to split the table into segments, then spinning up threads/processes to each run over their own segment. (:sha:`db7f7b` & :sha:`7ed73c`) * Higher-level interface to DynamoDB v2 - A more convenient API for using DynamoDB v2. The :ref:`DynamoDB v2 Tutorial ` has more information on how to use the new API. (:sha:`0f7c8b`) Backward-Incompatible Changes ----------------------------- * API Update for DataPipeline - The ``error_code`` (integer) argument to ``set_task_status`` changed to ``error_id`` (string). Many documentation updates were also added. (:sha:`a78572`) Bugfixes -------- * Bumped the AWS Support API version. (:sha:`0323f4`) * Fixed the S3 ``ResumableDownloadHandler`` so that it no longer tries to use a hashing algorithm when used outside of GCS. (:sha:`29b046`) * Fixed a bug where Sig V4 URIs were improperly canonicalized. (:sha:`5269d8`) * Fixed a bug where Sig V4 ports were not included. (:sha:`cfaba3`) * Fixed a bug in CloudWatch's ``build_put_params`` that would overwrite existing/necessary variables. (:sha:`550e00`) * Several documentation improvements/fixes: * Added docs for RDS ``modify/modify_dbinstance``. (:sha:`777d73`) * Fixed a typo in the ``README.rst``. (:sha:`181e0f`) * Documentation fallout from the previous release. (:sha:`14a111`) * Fixed a typo in the EC2 ``Image.run`` docs. (:sha:`5edd6a`) * Added/improved docs for EC2 ``Image.run``. (:sha:`773ce5`) * Added a CONTRIBUTING doc. (:sha:`cecbe8`) * Fixed S3 ``create_bucket`` docs to specify "European Union". (:sha:`ddddfd`) boto-2.20.1/docs/source/releasenotes/v2.9.4.rst000066400000000000000000000015301225267101000210450ustar00rootroot00000000000000boto v2.9.4 =========== :date: 2013/05/20 This release adds updated Elastic Transcoder support & fixes several bugs from recent releases & API updates. Features -------- * Updated Elastic Transcoder support - It now supports HLS, WebM, MPEG2-TS & a host of `other features`_. (:sha:`89196a`) .. _`other features`: http://aws.typepad.com/aws/2013/05/new-features-for-the-amazon-elastic-transcoder.html Bugfixes -------- * Fixed a bug in the canonicalization of URLs on Windows. (:sha:`09ef8c`) * Fixed glacier part size bug (:issue:`1478`, :sha:`9e04171`) * Fixed a bug in the bucket regex for S3 involving capital letters. (:sha:`950031`) * Fixed a bug where timestamps from Cloudformation would fail to be parsed. 
(:sha:`b40542`) * Several documentation improvements/fixes: * Added autodocs for many of the EC2 apis. (:sha:`79f939`) boto-2.20.1/docs/source/releasenotes/v2.9.5.rst000066400000000000000000000020051225267101000210440ustar00rootroot00000000000000boto v2.9.5 =========== :date: 2013/05/28 This release adds support for `web identity federation`_ within the Secure Token Service (STS) & fixes several bugs. .. _`web identity federation`: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html Features -------- * Added support for web identity federation - You can now delegate token access via either an Oauth 2.0 or OpenID provider. (:sha:`9bd0a3`) Bugfixes -------- * Altered the S3 key buffer to be a configurable value. (:issue:`1506`, :sha:`8e3e36`) * Added Sphinx extension for better release notes. (:issue:`1511`, :sha:`e2e32d` & :sha:`3d998b`) * Fixed a bug where DynamoDB v2 would only ever connect to the default endpoint. (:issue:`1508`, :sha:`139912`) * Fixed a iteration/empty results bug & a ``between`` bug in DynamoDB v2. (:issue:`1512`, :sha:`d109b6`) * Fixed an issue with ``EbsOptimized`` in EC2 Autoscale. (:issue:`1513`, :sha:`424c41`) * Fixed a missing instance variable bug in DynamoDB v2. (:issue:`1516`, :sha:`6fa8bf`) boto-2.20.1/docs/source/releasenotes/v2.9.6.rst000066400000000000000000000045501225267101000210540ustar00rootroot00000000000000boto v2.9.6 =========== :date: 2013/06/18 This release adds large payload support to Amazon SNS/SQS (from 32k to 256k bodies), several minor API additions, new regions for Redshift/Cloudsearch & a host of bugfixes. Features -------- * Added large body support to SNS/SQS. There's nothing to change in your application code, but you can now send payloads of up to 256k in size. (:sha:`b64947`) * Added ``Vault.retrieve_inventory_job`` to Glacier. (:issue:`1532`, :sha:`33de29`) * Added ``Item.get(...)`` support to DynamoDB v2. (:sha:`938cb6`) * Added the ``ap-northeast-1`` region to Redshift. (:sha:`d3eb61`) * Added all the current regions to Cloudsearch. (:issue:`1465`, :sha:`22b3b7`) Bugfixes -------- * Fixed a bug where ``date`` metadata couldn't be set on an S3 key. (:issue:`1519`, :sha:`1efde8`) * Fixed Python 2.5/Jython support in ``NetworkInterfaceCollection``. (:issue:`1518`, :sha:`0d6af2`) * Fixed a XML parsing error with ``InstanceStatusSet``. (:issue:`1493`, :sha:`55d4f6`) * Added a test case to try to demonstrate :issue:`443`. (:sha:`084dd5`) * Exposed the current tree-hash & upload size on Glacier's ``Writer``. (:issue:`1520`, :sha:`ade462`) * Updated EC2 Autoscale to incorporate new cron-like parameters. (:issue:`1433`, :sha:`266e25`, :sha:`871588` & :sha:`473e42`) * Fixed ``AttributeError`` being thrown from ``LoadBalancerZones``. (:issue:`1524`, :sha:`215ffa`) * Fixed a bug with empty facets in Cloudsearch. (:issue:`1366`, :sha:`7a108e`) * Fixed an S3 timeout/retry bug where HTTP 400s weren't being honored. (:issue:`1528`, :sha:`efd9af` & :sha:`16ae74`) * Fixed ``get_path`` when ``suppress_consec_slashes=False``. (:issue:`1522`, :sha:`c5dffc`) * Factored out how some of S3's ``query_args`` are constructed. (:sha:`9f73de`) * Added the ``generation`` query param to ``gs.Key.open_read``. (:sha:`cb4427`) * Fixed a bug with the canonicalization of URLs with trailing slashes in the SigV4 signer. (:issue:`1541`, :sha:`dec541`, :sha:`3f2b33`) * Several documentation improvements/fixes: * Updated the release notes slightly. (:sha:`7b6079`) * Corrected the ``num_cb`` param on ``set_contents_from_filename``. 
(:issue:`1523`, :sha:`44be69`) * Fixed some example code in the DDB migration guide. (:issue:`1525`, :sha:`6210ca`) * Fixed a typo in one of the DynamoDB v2 examples. (:issue:`1551`, :sha:`b0df3e`) boto-2.20.1/docs/source/releasenotes/v2.9.7.rst000066400000000000000000000026211225267101000210520ustar00rootroot00000000000000boto v2.9.7 =========== :date: 2013/07/08 This release is primarily a bugfix release, but also includes support for Elastic Transcoder updates (variable bit rate, max frame rate & watermark features). Features -------- * Added support for selecting specific attributes in DynamoDB v2. (:issue:`1567`, :sha:`d9e5c2`) * Added support for variable bit rate, max frame rate & watermark features in Elastic Transcoder. (:sha:`3791c9`) Bugfixes -------- * Altered RDS to now use SigV4. (:sha:`be1633`) * Removed parsing check in ``StorageUri``. (:sha:`21bc8f`) * More information returned about GS key generation. (:issue:`1571`, :sha:`6d5e3a`) * Upload handling headers now case-insensitive. (:issue:`1575`, :sha:`60383d`) * Several CloudFormation timestamp updates. (:issue:`1582`, :issue:`1583`, :issue:`1588`, :sha:`0a23d34`, :sha:`6d4209`) * Corrected a bug in how limits are handled in DynamoDB v2. (:issue:`1590`, :sha:`710a62`) * Several documentation improvements/fixes: * Typo in ``boto.connection`` fixed. (:issue:`1569`, :sha:`cf39fd`) * All previous release notes added to the docs. (:sha:`165596`) * Corrected error in ``get_all_tags`` docs. (:sha:`4bca5d`) * Corrected a typo in the S3 tutorial. (:sha:`f0cef8`) * Corrected several import errors in the DDBv2 tutorial. (:sha:`5401a3`) * Fixed an error in the ``get_key_pair`` docstring. (:issue:`1590`, :sha:`a9cb8d`) boto-2.20.1/docs/source/releasenotes/v2.9.8.rst000066400000000000000000000022641225267101000210560ustar00rootroot00000000000000boto v2.9.8 =========== :date: 2013/07/18 This release adds new methods in AWS Security Token Service (STS), AWS CloudFormation, updates AWS Relational Database Service (RDS) & Google Storage. It also has several bugfixes & documentation improvements. Features -------- * Added support for the ``DecodeAuthorizationMessage`` in STS (:sha:`1ada5ac`). * Added support for creating/deleting/describing ``OptionGroup`` in RDS. (:sha:`d629228` & :sha:`d059a3b`) * Added ``CancelUpdateStack`` to CloudFormation. (:issue:`1476`, :sha:`5bae130`) * Added support for getting/setting lifecycle configurations on GS buckets. (:issue:`1604`, :sha:`652fc81`) Bugfixes -------- * Added region support to ``bin/elbadmin``. (:issue:`1586`, :sha:`2ffbc60`) * Changed the mock storage to use case-insensitive headers. (:issue:`1594`, :sha:`71849cb`) * Added ``complex_listeners`` to ELB. (:issue:`1048`, :sha:`b782ce2`) * Added tests for Route53's ``ResourceRecordSets``. (:sha:`fad5bde`) * Several documentation improvements/fixes: * Updated CloudFront docs. (:issue:`1546`, :sha:`a811197`) * Updated the URL explaining the use of base64 in SQS messages. (:issue:`1596`, :sha:`00de3a2`) boto-2.20.1/docs/source/releasenotes/v2.9.9.rst000066400000000000000000000041321225267101000210530ustar00rootroot00000000000000boto v2.9.9 =========== :date: 2013/07/24 This release updates Opsworks to add AMI & Chef 11 support, adds DBSubnetGroup support in RDS & includes many other bugfixes. Features -------- * Added AMI, configuration manager & Chef 11 support to Opsworks. (:sha:`55725fc`). * Added ``in`` support for SQS messages. (:issue:`1593`, :sha:`e5fe1ed`) * Added support for the ``ap-southeast-2`` region in Elasticache.
(:issue:`1607`, :sha:`9986b61`) * Added support for block device mappings in ELB. (:issue:`1343`, :issue:`753`, :issue:`1357`, :sha:`974a23a`) * Added support for DBSubnetGroup in RDS. (:issue:`1500`, :sha:`01eef87`, :sha:`45c60a0`, :sha:`c4c859e`) Bugfixes -------- * Fixed the canonicalization of paths on Windows. (:issue:`1609`, :sha:`a1fa98c`) * Fixed how ``BotoServerException`` uses ``message``. (:issue:`1353`, :sha:`b944f4b`) * Fixed ``DisableRollback`` always being ``True`` in a CloudFormation ``Stack``. (:issue:`1379`, :sha:`32b3150`) * Changed EMR instance groups to no longer require a string price (can now be a ``Decimal``). (:issue:`1396`, :sha:`dfc39ff`) * Altered ``Distribution._sign_string`` to accept any file-like object as well within CloudFront. (:issue:`1349`, :sha:`8df6c14`) * Fixed the ``detach_lb_from_subnets`` call within ELB. (:issue:`1417`, :issue:`1418`, :sha:`4a397bd`, :sha:`c11d72b`, :sha:`9e595b5`, :sha:`634469d`, :sha:`586dd54`) * Altered boto to obey ``no_proxy`` environment variables. (:issue:`1600`, :issue:`1603`, :sha:`aaef5a9`) * Fixed ELB connections to use HTTPS by default. (:issue:`1587`, :sha:`fe158c4`) * Updated S3 to be Python 2.5 compatible again. (:issue:`1598`, :sha:`066009f`) * All calls within SES will now return *all* DKIMTokens, instead of just one. (:issue:`1550`, :issue:`1610`, :sha:`1a079da`, :sha:`1e82f85`, :sha:`5c8b6b8`) * Fixed the ``logging`` parameter within ``DistributionConfig`` in CloudFront to respect whatever is provided to the constructor. (:issue:`1457`, :sha:`e76180d`) * Fixed CloudSearch to no longer raise an error if a non-JSON response is received. (:issue:`1555`, :issue:`1614`, :sha:`5e2c292`, :sha:`6510e1f`) boto-2.20.1/docs/source/s3_tut.rst000066400000000000000000000405331225267101000167430ustar00rootroot00000000000000.. _s3_tut:

======================================
An Introduction to boto's S3 interface
======================================

This tutorial focuses on the boto interface to the Simple Storage Service from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto.

Creating a Connection
---------------------

The first step in accessing S3 is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an S3Connection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

* `AWS_ACCESS_KEY_ID` - Your AWS Access Key ID
* `AWS_SECRET_ACCESS_KEY` - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = S3Connection()

There is also a shortcut function in the boto package, called connect_s3 that may provide a slightly easier means of creating a connection::

    >>> import boto
    >>> conn = boto.connect_s3()

In either case, conn will point to an S3Connection object which we will use throughout the remainder of this tutorial.

Creating a Bucket
-----------------

Once you have a connection established with S3, you will probably want to create a bucket. A bucket is a container used to store key/value pairs in S3. A bucket can hold an unlimited amount of data so you could potentially have just one bucket in S3 for all of your information. Or, you could create separate buckets for different types of data. You can figure all of that out later; first, let's just create a bucket.
That can be accomplished like this::

    >>> bucket = conn.create_bucket('mybucket')
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "boto/connection.py", line 285, in create_bucket
        raise S3CreateError(response.status, response.reason)
    boto.exception.S3CreateError: S3Error[409]: Conflict

Whoa. What happened there? Well, the thing you have to know about buckets is that they are kind of like domain names. It's one flat name space that everyone who uses S3 shares. So, someone has already created a bucket called "mybucket" in S3 and that means no one else can grab that bucket name. So, you have to come up with a name that hasn't been taken yet. For example, something that uses a unique string as a prefix. Your AWS_ACCESS_KEY (NOT YOUR SECRET KEY!) could work but I'll leave it to your imagination to come up with something. I'll just assume that you found an acceptable name.

The create_bucket method will create the requested bucket if it does not exist or will return the existing bucket if it does exist.

Creating a Bucket In Another Location
-------------------------------------

The example above assumes that you want to create a bucket in the standard US region. However, it is possible to create buckets in other locations. To do so, first import the Location object from the boto.s3.connection module, like this::

    >>> from boto.s3.connection import Location
    >>> print '\n'.join(i for i in dir(Location) if i[0].isupper())
    APNortheast
    APSoutheast
    APSoutheast2
    DEFAULT
    EU
    SAEast
    USWest
    USWest2

As you can see, the Location object defines a number of possible locations. By default, the location is the empty string which is interpreted as the US Classic Region, the original S3 region. However, by specifying another location at the time the bucket is created, you can instruct S3 to create the bucket in that location. For example::

    >>> conn.create_bucket('mybucket', location=Location.EU)

will create the bucket in the EU region (assuming the name is available).

Storing Data
------------

Once you have a bucket, presumably you will want to store some data in it. S3 doesn't care what kind of information you store in your objects or what format you use to store it. All you need is a key that is unique within your bucket.

The Key object is used in boto to keep track of data stored in S3. To store new data in S3, start by creating a new Key object::

    >>> from boto.s3.key import Key
    >>> k = Key(bucket)
    >>> k.key = 'foobar'
    >>> k.set_contents_from_string('This is a test of S3')

The net effect of these statements is to create a new object in S3 with a key of "foobar" and a value of "This is a test of S3". To validate that this worked, quit out of the interpreter and start it up again. Then::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> b = c.get_bucket('mybucket') # substitute your bucket name here
    >>> from boto.s3.key import Key
    >>> k = Key(b)
    >>> k.key = 'foobar'
    >>> k.get_contents_as_string()
    'This is a test of S3'

So, we can definitely store and retrieve strings. A more interesting example may be to store the contents of a local file in S3 and then retrieve the contents to another local file. ::

    >>> k = Key(b)
    >>> k.key = 'myfile'
    >>> k.set_contents_from_filename('foo.jpg')
    >>> k.get_contents_to_filename('bar.jpg')

There are a couple of things to note about this. When you send data to S3 from a file or filename, boto will attempt to determine the correct mime type for that file and send it as a Content-Type header.
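If the guessed type isn't what you want, you can always supply the header yourself. A minimal sketch, assuming the bucket ``b`` from above (the key name, file name and content type here are only illustrations)::

    >>> k = Key(b)
    >>> k.key = 'myblob'
    >>> # Passing headers explicitly overrides boto's guess for this upload.
    >>> k.set_contents_from_filename('data.bin',
    ...     headers={'Content-Type': 'application/octet-stream'})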
The boto package uses the standard mimetypes package in Python to do the mime type guessing. The other thing to note is that boto does stream the content to and from S3 so you should be able to send and receive large files without any problem.

Accessing A Bucket
------------------

Once a bucket exists, you can access it by getting the bucket. For example::

    >>> mybucket = conn.get_bucket('mybucket') # Substitute in your bucket name
    >>> mybucket.list()
    <listing of keys in the bucket>

By default, this method tries to validate the bucket's existence. You can override this behavior by passing ``validate=False``::

    >>> nonexistent = conn.get_bucket('i-dont-exist-at-all', validate=False)

If the bucket does not exist, an ``S3ResponseError`` will commonly be thrown. If you'd rather not deal with any exceptions, you can use the ``lookup`` method::

    >>> nonexistent = conn.lookup('i-dont-exist-at-all')
    >>> if nonexistent is None:
    ...     print "No such bucket!"
    ...
    No such bucket!

Deleting A Bucket
-----------------

Removing a bucket can be done using the ``delete_bucket`` method. For example::

    >>> conn.delete_bucket('mybucket') # Substitute in your bucket name

The bucket must be empty of keys or this call will fail & an exception will be raised. You can remove a non-empty bucket by doing something like::

    >>> full_bucket = conn.get_bucket('bucket-to-delete')
    # It's full of keys. Delete them all.
    >>> for key in full_bucket.list():
    ...     key.delete()
    ...
    # The bucket is empty now. Delete it.
    >>> conn.delete_bucket('bucket-to-delete')

.. warning::

    This method can cause data loss! Be very careful when using it.

Additionally, be aware that using the above method for removing all keys and deleting the bucket involves a request for each key. As such, it's not particularly fast & is very chatty.

Listing All Available Buckets
-----------------------------

In addition to accessing specific buckets via the create_bucket method you can also get a list of all available buckets that you have created. ::

    >>> rs = conn.get_all_buckets()

This returns a ResultSet object (see the SQS Tutorial for more info on ResultSet objects). The ResultSet can be used as a sequence or list type object to retrieve Bucket objects. ::

    >>> len(rs)
    11
    >>> for b in rs:
    ...     print b.name
    ...
    >>> b = rs[0]

Setting / Getting the Access Control List for Buckets and Keys
--------------------------------------------------------------

The S3 service provides the ability to control access to buckets and keys within S3 via the Access Control List (ACL) associated with each object in S3. There are two ways to set the ACL for an object:

1. Create a custom ACL that grants specific rights to specific users. At the moment, the users that are specified within grants have to be registered users of Amazon Web Services so this isn't as useful or as general as it could be.

2. Use a "canned" access control policy. There are four canned policies defined:

   a. private: Owner gets FULL_CONTROL. No one else has any access rights.
   b. public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access.
   c. public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
   d. authenticated-read: Owner gets FULL_CONTROL and any principal authenticated as a registered Amazon S3 user is granted READ access.

To set a canned ACL for a bucket, use the set_acl method of the Bucket object. The argument passed to this method must be one of the four permissible canned policies named in the list CannedACLStrings contained in acl.py.
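You can see the exact strings boto will accept by printing that list directly; a quick check, assuming a standard boto install where the list lives in ``boto.s3.acl``::

    >>> from boto.s3.acl import CannedACLStrings
    >>> print CannedACLStrings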
For example, to make a bucket readable by anyone:

>>> b.set_acl('public-read')

You can also set the ACL for Key objects, either by passing an additional argument to the above method:

>>> b.set_acl('public-read', 'foobar')

where 'foobar' is the key of some object within the bucket b, or you can call the set_acl method of the Key object:

>>> k.set_acl('public-read')

You can also retrieve the current ACL for a Bucket or Key object using the get_acl method. This method parses the AccessControlPolicy response sent by S3 and creates a set of Python objects that represent the ACL. ::

    >>> acp = b.get_acl()
    >>> acp
    <boto.s3.acl.Policy instance at 0x...>
    >>> acp.acl
    <boto.s3.acl.ACL instance at 0x...>
    >>> acp.acl.grants
    [<boto.s3.acl.Grant instance at 0x...>]
    >>> for grant in acp.acl.grants:
    ...     print grant.permission, grant.display_name, grant.email_address, grant.id
    ...
    FULL_CONTROL

The Python objects representing the ACL can be found in the acl.py module of boto.

Both the Bucket object and the Key object also provide shortcut methods to simplify the process of granting individuals specific access. For example, if you want to grant an individual user READ access to a particular object in S3 you could do the following::

    >>> key = b.lookup('mykeytoshare')
    >>> key.add_email_grant('READ', 'foo@bar.com')

The email address provided should be the one associated with the user's AWS account. There is a similar method called add_user_grant that accepts the canonical id of the user rather than the email address.

Setting/Getting Metadata Values on Key Objects
----------------------------------------------

S3 allows arbitrary user metadata to be assigned to objects within a bucket. To take advantage of this S3 feature, you should use the set_metadata and get_metadata methods of the Key object to set and retrieve metadata associated with an S3 object. For example::

    >>> k = Key(b)
    >>> k.key = 'has_metadata'
    >>> k.set_metadata('meta1', 'This is the first metadata value')
    >>> k.set_metadata('meta2', 'This is the second metadata value')
    >>> k.set_contents_from_filename('foo.txt')

This code associates two metadata key/value pairs with the Key k. To retrieve those values later::

    >>> k = b.get_key('has_metadata')
    >>> k.get_metadata('meta1')
    'This is the first metadata value'
    >>> k.get_metadata('meta2')
    'This is the second metadata value'
    >>>

Setting/Getting/Deleting CORS Configuration on a Bucket
-------------------------------------------------------

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

To create a CORS configuration and associate it with a bucket::

    >>> from boto.s3.cors import CORSConfiguration
    >>> cors_cfg = CORSConfiguration()
    >>> cors_cfg.add_rule(['PUT', 'POST', 'DELETE'],
    ...                   'https://www.example.com',
    ...                   allowed_header='*',
    ...                   max_age_seconds=3000,
    ...                   expose_header='x-amz-server-side-encryption')
    >>> cors_cfg.add_rule('GET', '*')

The above code creates a CORS configuration object with two rules.

* The first rule allows cross-origin PUT, POST, and DELETE requests from the https://www.example.com/ origin. The rule also allows all headers in a preflight OPTIONS request through the Access-Control-Request-Headers header. In response to any preflight OPTIONS request, Amazon S3 will return any requested headers.
* The second rule allows cross-origin GET requests from all origins.
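Before applying the configuration, you can render it to XML to verify the rules look right; a quick sanity check (``to_xml`` is assumed here to behave like boto's other configuration objects)::

    >>> print cors_cfg.to_xml()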
To associate this configuration with a bucket::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> bucket = c.lookup('mybucket')
    >>> bucket.set_cors(cors_cfg)

To retrieve the CORS configuration associated with a bucket::

    >>> cors_cfg = bucket.get_cors()

And, finally, to delete all CORS configurations from a bucket::

    >>> bucket.delete_cors()

Transitioning Objects to Glacier
--------------------------------

You can configure objects in S3 to transition to Glacier after a period of time. This is done using lifecycle policies. A lifecycle policy can also specify that an object should be deleted after a period of time.

Lifecycle configurations are assigned to buckets and require these parameters:

* The object prefix that identifies the objects you are targeting.
* The action you want S3 to perform on the identified objects.
* The date (or time period) when you want S3 to perform these actions.

For example, given a bucket ``s3-glacier-boto-demo``, we can first retrieve the bucket::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> bucket = c.get_bucket('s3-glacier-boto-demo')

Then we can create a lifecycle object. In our example, we want all objects under ``logs/*`` to transition to Glacier 30 days after the object is created. ::

    >>> from boto.s3.lifecycle import Lifecycle, Transition, Rule
    >>> to_glacier = Transition(days=30, storage_class='GLACIER')
    >>> rule = Rule('ruleid', 'logs/', 'Enabled', transition=to_glacier)
    >>> lifecycle = Lifecycle()
    >>> lifecycle.append(rule)

.. note:: For API docs for the lifecycle objects, see :py:mod:`boto.s3.lifecycle`

We can now configure the bucket with this lifecycle policy::

    >>> bucket.configure_lifecycle(lifecycle)
    True

You can also retrieve the current lifecycle policy for the bucket::

    >>> current = bucket.get_lifecycle_config()
    >>> print current[0].transition
    <Transition: in: 30 days, GLACIER>

When an object transitions to Glacier, the storage class will be updated. This can be seen when you **list** the objects in a bucket::

    >>> for key in bucket.list():
    ...     print key, key.storage_class
    ...
    <Key: s3-glacier-boto-demo,logs/testlog1.log> GLACIER

You can also use the prefix argument to the ``bucket.list`` method::

    >>> print list(bucket.list(prefix='logs/testlog1.log'))[0].storage_class
    u'GLACIER'

Restoring Objects from Glacier
------------------------------

Once an object has been transitioned to Glacier, you can restore the object back to S3. To do so, you can use the :py:meth:`boto.s3.key.Key.restore` method of the key object. The ``restore`` method takes an integer that specifies the number of days to keep the object in S3. ::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> bucket = c.get_bucket('s3-glacier-boto-demo')
    >>> key = bucket.get_key('logs/testlog1.log')
    >>> key.restore(days=5)

It takes about 4 hours for a restore operation to make a copy of the archive available for you to access. While the object is being restored, the ``ongoing_restore`` attribute will be set to ``True``::

    >>> key = bucket.get_key('logs/testlog1.log')
    >>> print key.ongoing_restore
    True

When the restore is finished, this value will be ``False`` and the expiry date of the object will be non-``None``::

    >>> key = bucket.get_key('logs/testlog1.log')
    >>> print key.ongoing_restore
    False
    >>> print key.expiry_date
    "Fri, 21 Dec 2012 00:00:00 GMT"

.. note:: If there is no restore operation either in progress or completed, the ``ongoing_restore`` attribute will be ``None``.
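Because a restore is asynchronous, a simple (if naive) way to wait for it to finish is to re-fetch the key periodically and check the ``ongoing_restore`` attribute. A minimal polling sketch, reusing the bucket and key from the examples above::

    >>> import time
    >>> key = bucket.get_key('logs/testlog1.log')
    >>> while key.ongoing_restore:
    ...     time.sleep(600)  # wait ten minutes between checks
    ...     key = bucket.get_key('logs/testlog1.log')  # re-fetch to refresh the restore status
    ...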
Once the object is restored you can then download the contents::

    >>> key.get_contents_to_filename('testlog1.log')

boto-2.20.1/docs/source/security_groups.rst000066400000000000000000000057111225267101000207670ustar00rootroot00000000000000.. _security_groups:

===================
EC2 Security Groups
===================

Amazon defines a security group as:

"A security group is a named collection of access rules. These access rules specify which ingress, i.e. incoming, network traffic should be delivered to your instance."

To get a listing of all currently defined security groups::

    >>> rs = conn.get_all_security_groups()
    >>> print rs
    [SecurityGroup:appserver, SecurityGroup:default, SecurityGroup:vnc, SecurityGroup:webserver]

Each security group can have an arbitrary number of rules which represent different network ports which are being enabled. To find the rules for a particular security group, use the rules attribute::

    >>> sg = rs[1]
    >>> sg.name
    u'default'
    >>> sg.rules
    [IPPermissions:tcp(0-65535), IPPermissions:udp(0-65535), IPPermissions:icmp(-1--1), IPPermissions:tcp(22-22), IPPermissions:tcp(80-80)]

In addition to listing the available security groups you can also create a new security group. I'll follow through the "Three Tier Web Service" example included in the EC2 Developer's Guide for an example of how to create security groups and add rules to them.

First, let's create a group for our Apache web servers that allows HTTP access to the world::

    >>> web = conn.create_security_group('apache', 'Our Apache Group')
    >>> web
    SecurityGroup:apache
    >>> web.authorize('tcp', 80, 80, '0.0.0.0/0')
    True

The first argument is the ip protocol, which can be one of: tcp, udp or icmp. The second argument is the FromPort or the beginning port in the range, the third argument is the ToPort or the ending port in the range and the last argument is the CIDR IP range to authorize access to.

Next we create another group for the app servers::

    >>> app = conn.create_security_group('appserver', 'The application tier')

We then want to grant access between the web server group and the app server group. So, rather than specifying an IP address as we did in the last example, this time we will specify another SecurityGroup object. ::

    >>> app.authorize(src_group=web)
    True

Now, to verify that the web group has access to the app servers, we want to temporarily allow SSH access to the web servers from our computer. Let's say that our IP address is 192.168.1.130 as it is in the EC2 Developer Guide. To enable that access::

    >>> web.authorize(ip_protocol='tcp', from_port=22, to_port=22, cidr_ip='192.168.1.130/32')
    True

Now that this access is authorized, we could ssh into an instance running in the web group and then try to telnet to specific ports on servers in the appserver group, as shown in the EC2 Developer's Guide. When this testing is complete, we would want to revoke SSH access to the web server group, like this::

    >>> web.rules
    [IPPermissions:tcp(80-80), IPPermissions:tcp(22-22)]
    >>> web.revoke('tcp', 22, 22, cidr_ip='192.168.1.130/32')
    True
    >>> web.rules
    [IPPermissions:tcp(80-80)]

boto-2.20.1/docs/source/ses_tut.rst000066400000000000000000000136321225267101000172100ustar00rootroot00000000000000.. ses_tut:

=============================
Simple Email Service Tutorial
=============================

This tutorial focuses on the boto interface to AWS' `Simple Email Service (SES) <SES_>`_. This tutorial assumes that you have boto already downloaded and installed.

.. _SES: http://aws.amazon.com/ses/
Creating a Connection
---------------------

The first step in accessing SES is to create a connection to the service. To do so, the most straightforward way is the following::

    >>> import boto.ses
    >>> conn = boto.ses.connect_to_region(
            'us-west-2',
            aws_access_key_id='<aws access key>',
            aws_secret_access_key='<aws secret key>')
    >>> conn
    SESConnection:email.us-west-2.amazonaws.com

Bear in mind that if you have your credentials in boto config in your home directory, the two keyword arguments in the call above are not needed. More details on configuration can be found in :doc:`boto_config_tut`.

The :py:func:`boto.ses.connect_to_region` function returns a :py:class:`boto.ses.connection.SESConnection` instance, which is the boto API for working with SES.

Notes on Sending
----------------

It is important to keep in mind that while emails appear to come "from" the address that you specify via Reply-To, the sending is done through Amazon. Some clients do pick up on this disparity, and leave a note on emails.

Verifying a Sender Email Address
--------------------------------

Before you can send email "from" an address, you must prove that you have access to the account. When you send a validation request, an email is sent to the address with a link in it. Clicking on the link validates the address and adds it to your SES account. Here's how to send the validation email::

    >>> conn.verify_email_address('some@address.com')
    {
        'VerifyEmailAddressResponse': {
            'ResponseMetadata': {
                'RequestId': '4a974fd5-56c2-11e1-ad4c-c1f08c91d554'
            }
        }
    }

After a short amount of time, you'll find an email with the validation link inside. Click it, and this address may be used to send emails.

Listing Verified Addresses
--------------------------

If you'd like to list the addresses that are currently verified on your SES account, use :py:meth:`list_verified_email_addresses <boto.ses.connection.SESConnection.list_verified_email_addresses>`::

    >>> conn.list_verified_email_addresses()
    {
        'ListVerifiedEmailAddressesResponse': {
            'ListVerifiedEmailAddressesResult': {
                'VerifiedEmailAddresses': [
                    'some@address.com',
                    'another@address.com'
                ]
            },
            'ResponseMetadata': {
                'RequestId': '2ab45c18-56c3-11e1-be66-ffd2a4549d70'
            }
        }
    }

Deleting a Verified Address
---------------------------

In the event that you'd like to remove an email address from your account, use :py:meth:`delete_verified_email_address <boto.ses.connection.SESConnection.delete_verified_email_address>`::

    >>> conn.delete_verified_email_address('another@address.com')

Sending an Email
----------------

Sending an email is done via :py:meth:`send_email <boto.ses.connection.SESConnection.send_email>`::

    >>> conn.send_email(
            'some@address.com',
            'Your subject',
            'Body here',
            ['recipient-address-1@gmail.com'])
    {
        'SendEmailResponse': {
            'ResponseMetadata': {
                'RequestId': '4743c2b7-56c3-11e1-bccd-c99bd68002fd'
            },
            'SendEmailResult': {
                'MessageId': '000001357a177192-7b894025-147a-4705-8455-7c880b0c8270-000000'
            }
        }
    }

If you're wanting to send a multipart MIME email, see the reference for :py:meth:`send_raw_email <boto.ses.connection.SESConnection.send_raw_email>`, which is a bit more of a low-level alternative.

Checking your Send Quota
------------------------

Staying within your quota is critical, since the upper limit is a hard cap. Once you have hit your quota, no further email may be sent until enough time elapses to where your 24 hour email count (rolling continuously) is within acceptable ranges.
Use :py:meth:`get_send_quota <boto.ses.connection.SESConnection.get_send_quota>`::

    >>> conn.get_send_quota()
    {
        'GetSendQuotaResponse': {
            'GetSendQuotaResult': {
                'Max24HourSend': '100000.0',
                'SentLast24Hours': '181.0',
                'MaxSendRate': '28.0'
            },
            'ResponseMetadata': {
                'RequestId': u'8a629245-56c4-11e1-9c53-9d5f4d2cc8d3'
            }
        }
    }

Checking your Send Statistics
-----------------------------

In order to fight spammers and ensure quality mail is being sent from SES, Amazon tracks bounces, rejections, and complaints. This is done via :py:meth:`get_send_statistics <boto.ses.connection.SESConnection.get_send_statistics>`. Please be warned that the output is extremely verbose, to the point where we'll just show a short excerpt here::

    >>> conn.get_send_statistics()
    {
        'GetSendStatisticsResponse': {
            'GetSendStatisticsResult': {
                'SendDataPoints': [
                    {
                        'Complaints': '0',
                        'Timestamp': '2012-02-13T05:02:00Z',
                        'DeliveryAttempts': '8',
                        'Bounces': '0',
                        'Rejects': '0'
                    },
                    {
                        'Complaints': '0',
                        'Timestamp': '2012-02-13T05:17:00Z',
                        'DeliveryAttempts': '12',
                        'Bounces': '0',
                        'Rejects': '0'
                    }
                ]
            }
        }
    }

boto-2.20.1/docs/source/simpledb_tut.rst000066400000000000000000000163031225267101000202130ustar00rootroot00000000000000.. simpledb_tut:

============================================
An Introduction to boto's SimpleDB interface
============================================

This tutorial focuses on the boto interface to AWS' SimpleDB_. This tutorial assumes that you have boto already downloaded and installed.

.. _SimpleDB: http://aws.amazon.com/simpledb/

.. note:: If you're starting a new application, you might want to consider using :doc:`DynamoDB2 <dynamodb2_tut>` instead, as it has a more comprehensive feature set & has guaranteed performance throughput levels.

Creating a Connection
---------------------

The first step in accessing SimpleDB is to create a connection to the service. To do so, the most straightforward way is the following::

    >>> import boto.sdb
    >>> conn = boto.sdb.connect_to_region(
    ...     'us-west-2',
    ...     aws_access_key_id='<aws access key>',
    ...     aws_secret_access_key='<aws secret key>')
    >>> conn
    SDBConnection:sdb.amazonaws.com
    >>>

Bear in mind that if you have your credentials in boto config in your home directory, the two keyword arguments in the call above are not needed. Also important to note is that, just like any other AWS service, SimpleDB is region-specific and as such you might want to specify which region to connect to; by default, it'll connect to the US-EAST-1 region.

Creating Domains
----------------

Arguably, once you have your connection established, you'll want to create one or more domains. Creating new domains is a fairly straightforward operation. To do so, you can proceed as follows::

    >>> conn.create_domain('test-domain')
    Domain:test-domain
    >>>
    >>> conn.create_domain('test-domain-2')
    Domain:test-domain-2
    >>>

Please note that SimpleDB, unlike its newest sibling DynamoDB, is truly and completely schema-less. Thus, there's no need to specify domain keys or ranges.

Listing All Domains
-------------------

Unlike DynamoDB or other database systems, SimpleDB uses the concept of 'domains' instead of tables. So, to list all your domains for your account in a region, you can simply do as follows::

    >>> domains = conn.get_all_domains()
    >>> domains
    [Domain:test-domain, Domain:test-domain-2]
    >>>

The get_all_domains() method returns a :py:class:`boto.resultset.ResultSet` containing all :py:class:`boto.sdb.domain.Domain` objects associated with this connection's Access Key ID for that region.
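Because the result behaves like an ordinary Python list, you can iterate over it directly. A small sketch that prints each domain's name, assuming the two domains created above are the only ones in the region::

    >>> for domain in conn.get_all_domains():
    ...     print domain.name
    ...
    test-domain
    test-domain-2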
Retrieving a Domain (by name)
-----------------------------

If you wish to retrieve a specific domain whose name is known, you can do so as follows::

    >>> dom = conn.get_domain('test-domain')
    >>> dom
    Domain:test-domain
    >>>

The get_domain call has an optional validate parameter, which defaults to True. This will make sure to raise an exception if the domain you are looking for doesn't exist. If you set it to False, it will return a :py:class:`Domain <boto.sdb.domain.Domain>` object blindly regardless of its existence.

Getting Domain Metadata
-----------------------

There are times when you might want to know your domains' machine usage, approx. item count and other such data. To this end, boto offers a simple and convenient way to do so as shown below::

    >>> domain_meta = conn.domain_metadata(dom)
    >>> domain_meta
    <boto.sdb.domain.DomainMetaData instance at 0x...>
    >>> dir(domain_meta)
    ['BoxUsage', 'DomainMetadataResponse', 'DomainMetadataResult', 'RequestId', 'ResponseMetadata', '__doc__', '__init__', '__module__', 'attr_name_count', 'attr_names_size', 'attr_value_count', 'attr_values_size', 'domain', 'endElement', 'item_count', 'item_names_size', 'startElement', 'timestamp']
    >>> domain_meta.item_count
    0
    >>>

Please bear in mind that while in the example above we used a previously retrieved domain object as the parameter, you can also retrieve the domain metadata via its name (string).

Adding Items (and attributes)
-----------------------------

Once you have your domain set up, presumably, you'll want to start adding items to it. In its most straightforward form, you need to provide a name for the item -- think of it as a record id -- and a collection of the attributes you want to store in the item (often a Dictionary-like object). So, adding an item to a domain looks as follows::

    >>> item_name = 'ABC_123'
    >>> item_attrs = {'Artist': 'The Jackson 5', 'Genre': 'Pop'}
    >>> dom.put_attributes(item_name, item_attrs)
    True
    >>>

Now let's check if it worked::

    >>> domain_meta = conn.domain_metadata(dom)
    >>> domain_meta.item_count
    1
    >>>

Batch Adding Items (and attributes)
-----------------------------------

You can also add a number of items at the same time in a similar fashion. All you have to provide to the batch_put_attributes() method is a Dictionary-like object with your items and their respective attributes, as follows::

    >>> items = {'item1': {'attr1': 'val1'}, 'item2': {'attr2': 'val2'}}
    >>> dom.batch_put_attributes(items)
    True
    >>>

Now, let's check the item count once again::

    >>> domain_meta = conn.domain_metadata(dom)
    >>> domain_meta.item_count
    3
    >>>

A few words of warning: both batch_put_attributes() and put_attributes(), by default, will overwrite the values of the attributes if both the item and attribute already exist. If the item exists, but not the attributes, it will append the new attributes to the attribute list of that item. If you do not wish these methods to behave in that manner, simply supply them with a 'replace=False' parameter.

Retrieving Items
----------------

To retrieve an item along with its attributes is a fairly straightforward operation and can be accomplished as follows::

    >>> dom.get_item('item1')
    {u'attr1': u'val1'}
    >>>

Since SimpleDB works in an "eventual consistency" manner, we can also request a forced consistent read (though this will invariably adversely affect read performance). The way to accomplish that is as shown below::

    >>> dom.get_item('item1', consistent_read=True)
    {u'attr1': u'val1'}
    >>>

Retrieving One or More Items
----------------------------

Another way to retrieve items is through boto's select() method.
This method, at the bare minimum, requires a standard SQL select query string and you would do something along the lines of::

    >>> query = 'select * from `test-domain` where attr1="val1"'
    >>> rs = dom.select(query)
    >>> for j in rs:
    ...     print 'o hai'
    ...
    o hai
    >>>

This method returns a ResultSet collection you can iterate over.

Updating Item Attributes
------------------------

The easiest way to modify an item's attributes is by manipulating the item's attributes and then saving those changes. For example::

    >>> item = dom.get_item('item1')
    >>> item['attr1'] = 'val_changed'
    >>> item.save()

Deleting Items (and their attributes)
-------------------------------------

Deleting an item is a very simple operation. All you are required to provide is either the name of the item or an item object to the delete_item() method; boto will take care of the rest::

    >>> dom.delete_item(item)
    True
    >>>

Deleting Domains
----------------

To delete a domain and all items under it (i.e. be very careful), you can do it as follows::

    >>> conn.delete_domain('test-domain')
    True
    >>>

boto-2.20.1/docs/source/sqs_tut.rst000066400000000000000000000227221225267101000172240ustar00rootroot00000000000000.. _sqs_tut:

=======================================
An Introduction to boto's SQS interface
=======================================

This tutorial focuses on the boto interface to the Simple Queue Service from Amazon Web Services. This tutorial assumes that you have boto already downloaded and installed.

Creating a Connection
---------------------

The first step in accessing SQS is to create a connection to the service. The recommended method of doing this is as follows::

    >>> import boto.sqs
    >>> conn = boto.sqs.connect_to_region(
    ...     "us-west-2",
    ...     aws_access_key_id='<aws access key>',
    ...     aws_secret_access_key='<aws secret key>')

At this point the variable conn will point to an SQSConnection object in the US-WEST-2 region. Bear in mind that, just like any other AWS service, SQS is region-specific. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

* ``AWS_ACCESS_KEY_ID`` - Your AWS Access Key ID
* ``AWS_SECRET_ACCESS_KEY`` - Your AWS Secret Access Key

and then simply call::

    >>> import boto.sqs
    >>> conn = boto.sqs.connect_to_region("us-west-2")

In either case, conn will point to an SQSConnection object which we will use throughout the remainder of this tutorial.

Creating a Queue
----------------

Once you have a connection established with SQS, you will probably want to create a queue. In its simplest form, that can be accomplished as follows::

    >>> q = conn.create_queue('myqueue')

The create_queue method will create (and return) the requested queue if it does not exist or will return the existing queue if it does. There is an optional parameter to create_queue called visibility_timeout. This basically controls how long a message will remain invisible to other queue readers once it has been read (see the SQS documentation for a more detailed explanation). If this is not explicitly specified the queue will be created with whatever default value SQS provides (currently 30 seconds). If you would like to specify another value, you could do so like this::

    >>> q = conn.create_queue('myqueue', 120)

This would establish a default visibility timeout for this queue of 120 seconds. As you will see later on, this default value for the queue can also be overridden each time a message is read from the queue.
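If you later decide the queue's default should change, you don't need to recreate the queue; :py:meth:`boto.sqs.queue.Queue.set_timeout` updates the queue's ``VisibilityTimeout`` attribute in place. A minimal sketch (``some_q`` stands in for any queue object)::

    >>> some_q.set_timeout(45)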
If you want to check what the default visibility timeout is for a queue::

    >>> q.get_timeout()
    30

Listing all Queues
------------------

To retrieve a list of the queues for your account in the current region::

    >>> conn.get_all_queues()
    [
        Queue(https://queue.amazonaws.com/411358162645/myqueue),
        Queue(https://queue.amazonaws.com/411358162645/another_queue),
        Queue(https://queue.amazonaws.com/411358162645/another_queue2)
    ]

This will leave you with a list of all of your :py:class:`boto.sqs.queue.Queue` instances. Alternatively, if you wanted to only list the queues that started with ``'another'``::

    >>> conn.get_all_queues(prefix='another')
    [
        Queue(https://queue.amazonaws.com/411358162645/another_queue),
        Queue(https://queue.amazonaws.com/411358162645/another_queue2)
    ]

Getting a Queue (by name)
-------------------------

If you wish to explicitly retrieve an existing queue and the name of the queue is known, you can retrieve the queue as follows::

    >>> my_queue = conn.get_queue('myqueue')
    Queue(https://queue.amazonaws.com/411358162645/myqueue)

This leaves you with a single :py:class:`boto.sqs.queue.Queue`, which abstracts the SQS Queue named 'myqueue'.

Writing Messages
----------------

Once you have a queue set up, presumably you will want to write some messages to it. SQS doesn't care what kind of information you store in your messages or what format you use to store it. As long as the amount of data per message is less than or equal to 256Kb, SQS won't complain.

So, first we need to create a Message object::

    >>> from boto.sqs.message import Message
    >>> m = Message()
    >>> m.set_body('This is my first message.')
    >>> status = q.write(m)

The write method returns True if everything went well. If the write didn't succeed it will either return False (meaning SQS simply chose not to write the message for some reason) or raise an exception if there was some sort of problem with the request.

Writing Messages (Custom Format)
--------------------------------

The technique above will work only if you use boto's default Message payload format; however, you may have a lot of specific requirements around the format of the message data. For example, you may want to store one big string or you might want to store something that looks more like RFC822 messages or you might want to store a binary payload such as pickled Python objects.

The way boto deals with this issue is to define a simple Message object that treats the message data as one big string which you can set and get. If that Message object meets your needs, you're good to go. However, if you need to incorporate different behavior in your message or handle different types of data you can create your own Message class. You just need to register that class with the boto queue object so that it knows that, when you read a message from the queue, it should create one of your message objects rather than the default boto Message object. To register your message class, you would::

    >>> import MyMessage
    >>> q.set_message_class(MyMessage)
    >>> m = MyMessage()
    >>> m.set_body('This is my first message.')
    >>> status = q.write(m)

where MyMessage is the class definition for your message class. Your message class should subclass the boto Message because there is a small bit of Python magic happening in the __setattr__ method of the boto Message class.

Reading Messages
----------------

So, now we have a message in our queue. How would we go about reading it?
Here's one way::

    >>> rs = q.get_messages()
    >>> len(rs)
    1
    >>> m = rs[0]
    >>> m.get_body()
    u'This is my first message'

The get_messages method also returns a ResultSet object as described above. In addition to the special attributes that we already talked about, the ResultSet object also contains any results returned by the request. To get at the results you can treat the ResultSet as a sequence object (e.g. a list). We can check the length (how many results) and access particular items within the list using the slice notation familiar to Python programmers.

At this point, we have read the message from the queue and SQS will make sure that this message remains invisible to other readers of the queue until the visibility timeout period for the queue expires. If you delete the message before the timeout period expires then no one else will ever see the message again. However, if you don't delete it (maybe because your reader crashed or failed in some way, for example) it will magically reappear in the queue for someone else to read.

If you aren't happy with the default visibility timeout defined for the queue, you can override it when you read a message::

    >>> q.get_messages(visibility_timeout=60)

This means that regardless of what the default visibility timeout is for the queue, this message will remain invisible to other readers for 60 seconds.

The get_messages method can also return more than a single message. By passing a num_messages parameter (defaults to 1) you can control the maximum number of messages that will be returned by the method. To show this feature off, first let's load up a few more messages. ::

    >>> for i in range(1, 11):
    ...     m = Message()
    ...     m.set_body('This is message %d' % i)
    ...     q.write(m)
    ...
    >>> rs = q.get_messages(10)
    >>> len(rs)
    10

Don't be alarmed if the length of the result set returned by the get_messages call is less than 10. Sometimes it takes some time for new messages to become visible in the queue. Give it a minute or two and they will all show up.

If you want a slightly simpler way to read messages from a queue, you can use the read method. It will either return the message read or it will return None if no messages were available. You can also pass a visibility_timeout parameter to read, if you desire::

    >>> m = q.read(60)
    >>> m.get_body()
    u'This is my first message'

Deleting Messages and Queues
----------------------------

As stated above, messages are never deleted by the queue unless explicitly told to do so. To remove a message from a queue::

    >>> q.delete_message(m)
    []

If I want to delete the entire queue, I would use::

    >>> conn.delete_queue(q)

This will delete the queue, even if there are still messages within the queue.

Additional Information
----------------------

The above tutorial covers the basic operations of creating queues, writing messages, reading messages, deleting messages, and deleting queues. There are a few utility methods in boto that might be useful as well. For example, to count the number of messages in a queue::

    >>> q.count()
    10

This can be handy but this command as well as the other two utility methods I'll describe in a minute are inefficient and should be used with caution on queues with lots of messages (e.g. many hundreds or more).

Similarly, you can clear (delete) all messages in a queue with::

    >>> q.clear()

Be REAL careful with that one!
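Putting the read/delete cycle together, a typical consumer loops: read a message, process it, and delete it only once processing has succeeded, so that a crash simply lets the message reappear for another reader. A minimal sketch (``process`` is a stand-in for your own handler)::

    >>> def process(body):
    ...     print body
    ...
    >>> while True:
    ...     m = q.read(visibility_timeout=60)
    ...     if m is None:
    ...         break
    ...     process(m.get_body())
    ...     q.delete_message(m)
    ...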
Finally, if you want to dump all of the messages in a queue to a local file::

    >>> q.dump('messages.txt', sep='\n------------------\n')

This will read all of the messages in the queue and write the bodies of each of the messages to the file messages.txt. The optional sep argument is a separator that will be printed between each message body in the file.

boto-2.20.1/docs/source/support_tut.rst000066400000000000000000000107341225267101000201320ustar00rootroot00000000000000.. _support_tut:

===========================================
An Introduction to boto's Support interface
===========================================

This tutorial focuses on the boto interface to Amazon Web Services Support, allowing you to programmatically interact with cases created with Support. This tutorial assumes that you have already downloaded and installed ``boto``.

Creating a Connection
---------------------

The first step in accessing Support is to create a connection to the service. There are two ways to do this in boto. The first is::

    >>> from boto.support.connection import SupportConnection
    >>> conn = SupportConnection('<aws access key>', '<aws secret key>')

At this point the variable ``conn`` will point to a ``SupportConnection`` object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

**AWS_ACCESS_KEY_ID**
    Your AWS Access Key ID

**AWS_SECRET_ACCESS_KEY**
    Your AWS Secret Access Key

and then call the constructor without any arguments, like this::

    >>> conn = SupportConnection()

There is also a shortcut function in boto that makes it easy to create Support connections::

    >>> import boto.support
    >>> conn = boto.support.connect_to_region('us-west-2')

In either case, ``conn`` points to a ``SupportConnection`` object which we will use throughout the remainder of this tutorial.

Describing Existing Cases
-------------------------

If you have existing cases or want to fetch cases in the future, you'll use the ``SupportConnection.describe_cases`` method. For example::

    >>> cases = conn.describe_cases()
    >>> len(cases['cases'])
    1
    >>> cases['cases'][0]['title']
    'A test case.'
    >>> cases['cases'][0]['caseId']
    'case-...'

You can also fetch a set of cases (or single case) by providing a ``case_id_list`` parameter::

    >>> cases = conn.describe_cases(case_id_list=['case-1'])
    >>> len(cases['cases'])
    1
    >>> cases['cases'][0]['title']
    'A test case.'
    >>> cases['cases'][0]['caseId']
    'case-...'

Describing Service Codes
------------------------

In order to create a new case, you'll need to fetch the service (& category) codes available to you. Fetching them is a simple call to::

    >>> services = conn.describe_services()
    >>> services['services'][0]['code']
    'amazon-cloudsearch'

If you only care about certain services, you can pass a list of service codes::

    >>> service_details = conn.describe_services(service_code_list=[
    ...     'amazon-cloudsearch',
    ...     'amazon-dynamodb',
    ... ])

Describing Severity Levels
--------------------------

In order to create a new case, you'll also need to fetch the severity levels available to you. Fetching them looks like::

    >>> severities = conn.describe_severity_levels()
    >>> severities['severityLevels'][0]['code']
    'low'

Creating a Case
---------------

Upon creating a connection to Support, you can now work with existing Support cases, create new cases or resolve them. We'll start with creating a new case::

    >>> new_case = conn.create_case(
    ...     subject='This is a test case.',
    ...     service_code='<service code>',
    ...     category_code='<category code>',
    ...     communication_body="<body text>",
    ...     severity_code='low'
    ... )
    >>> new_case['caseId']
    'case-...'

For the ``service_code/category_code`` parameters, you'll need to do a ``SupportConnection.describe_services`` call, then select the appropriate service code (& appropriate category code within that service) from the response. For the ``severity_code`` parameter, you'll need to do a ``SupportConnection.describe_severity_levels`` call, then select the appropriate severity code from the response.

Adding to a Case
----------------

Since the purpose of a support case involves back-and-forth communication, you can add additional communication to the case as well. Providing a response might look like::

    >>> result = conn.add_communication_to_case(
    ...     communication_body="This is a followup. It's working now.",
    ...     case_id='case-...'
    ... )

Fetching all Communications for a Case
--------------------------------------

Getting all communications for a given case looks like::

    >>> communications = conn.describe_communications('case-...')

Resolving a Case
----------------

Once a case is finished, you should mark it as resolved to close it out. Resolving a case looks like::

    >>> closed = conn.resolve_case(case_id='case-...')
    >>> closed['result']
    True

boto-2.20.1/docs/source/swf_tut.rst000066400000000000000000000520621225267101000172150ustar00rootroot00000000000000.. swf_tut:

:Authors: Slawek "oozie" Ligus

===============================
Amazon Simple Workflow Tutorial
===============================

This tutorial focuses on boto's interface to AWS SimpleWorkflow service.

.. _SimpleWorkflow: http://aws.amazon.com/swf/

What is a workflow?
-------------------

A workflow is a sequence of multiple activities aimed at accomplishing a well-defined objective. For instance, booking an airline ticket as a workflow may encompass multiple activities, such as selection of itinerary, submission of personal details, payment validation and booking confirmation. Except for the start and completion of a workflow, each step has a well-defined predecessor and successor. With that:

- on successful completion of an activity the workflow can progress with its execution,
- when one of the workflow's activities fails it can be retried,
- and when it keeps failing repeatedly the workflow may regress to the previous step to gather alternative inputs or it may simply fail at that stage.

Why use workflows?
------------------

Modelling an application on a workflow provides a useful abstraction layer for writing highly-reliable programs for distributed systems, as individual responsibilities can be delegated to a set of redundant, independent and non-critical processing units.

How does Amazon SWF help you accomplish this?
---------------------------------------------

Amazon SimpleWorkflow service defines an interface for workflow orchestration and provides state persistence for workflow executions.

Amazon SWF applications involve communication between the following entities:

- The Amazon Simple Workflow Service - providing centralized orchestration and workflow state persistence,
- Workflow Executors - some entity starting workflow executions, typically through an action taken by a user or from a cronjob.
- Deciders - a program codifying the business logic, i.e. a set of instructions and decisions. Deciders take decisions based on an initial set of conditions and outcomes from activities.
- Activity Workers - their objective is very straightforward: to take inputs, execute the tasks and return a result to the Service.
The Workflow Executor contacts SWF Service and requests instantiation of a workflow. A new workflow is created and its state is stored in the service. The next time a decider contacts SWF service to ask for a decision task, it will be informed about a new workflow execution is taking place and it will be asked to advise SWF service on what the next steps should be. The decider then instructs the service to dispatch specific tasks to activity workers. At the next activity worker poll, the task is dispatched, then executed and the results reported back to the SWF, which then passes them onto the deciders. This exchange keeps happening repeatedly until the decider is satisfied and instructs the service to complete the execution. Prerequisites ------------- You need a valid access and secret key. The examples below assume that you have exported them to your environment, as follows: .. code-block:: bash bash$ export AWS_ACCESS_KEY_ID= bash$ export AWS_SECRET_ACCESS_KEY= Before workflows and activities can be used, they have to be registered with SWF service: .. code-block:: python # register.py import boto.swf.layer2 as swf from boto.swf.exceptions import SWFTypeAlreadyExistsError, SWFDomainAlreadyExistsError DOMAIN = 'boto_tutorial' VERSION = '1.0' registerables = [] registerables.append(swf.Domain(name=DOMAIN)) for workflow_type in ('HelloWorkflow', 'SerialWorkflow', 'ParallelWorkflow'): registerables.append(swf.WorkflowType(domain=DOMAIN, name=workflow_type, version=VERSION, task_list='default')) for activity_type in ('HelloWorld', 'ActivityA', 'ActivityB', 'ActivityC'): registerables.append(swf.ActivityType(domain=DOMAIN, name=activity_type, version=VERSION, task_list='default')) for swf_entity in registerables: try: swf_entity.register() print swf_entity.name, 'registered successfully' except (SWFDomainAlreadyExistsError, SWFTypeAlreadyExistsError): print swf_entity.__class__.__name__, swf_entity.name, 'already exists' Execution of the above should produce no errors. .. code-block:: bash bash$ python -i register.py Domain boto_tutorial already exists WorkflowType HelloWorkflow already exists SerialWorkflow registered successfully ParallelWorkflow registered successfully ActivityType HelloWorld already exists ActivityA registered successfully ActivityB registered successfully ActivityC registered successfully >>> HelloWorld ---------- This example is an implementation of a minimal Hello World workflow. Its execution should unfold as follows: #. A workflow execution is started. #. The SWF service schedules the initial decision task. #. A decider polls for decision tasks and receives one. #. The decider requests scheduling of an activity task. #. The SWF service schedules the greeting activity task. #. An activity worker polls for activity task and receives one. #. The worker completes the greeting activity. #. The SWF service schedules a decision task to inform about work outcome. #. The decider polls and receives a new decision task. #. The decider schedules workflow completion. #. The workflow execution finishes. Workflow logic is encoded in the decider: .. code-block:: python # hello_decider.py import boto.swf.layer2 as swf DOMAIN = 'boto_tutorial' ACTIVITY = 'HelloWorld' VERSION = '1.0' TASKLIST = 'default' class HelloDecider(swf.Decider): domain = DOMAIN task_list = TASKLIST version = VERSION def run(self): history = self.poll() if 'events' in history: # Find workflow events not related to decision scheduling. 
workflow_events = [e for e in history['events'] if not e['eventType'].startswith('Decision')] last_event = workflow_events[-1] decisions = swf.Layer1Decisions() if last_event['eventType'] == 'WorkflowExecutionStarted': decisions.schedule_activity_task('saying_hi', ACTIVITY, VERSION, task_list=TASKLIST) elif last_event['eventType'] == 'ActivityTaskCompleted': decisions.complete_workflow_execution() self.complete(decisions=decisions) return True The activity worker is responsible for printing the greeting message when the activity task is dispatched to it by the service: .. code-block:: python import boto.swf.layer2 as swf DOMAIN = 'boto_tutorial' VERSION = '1.0' TASKLIST = 'default' class HelloWorker(swf.ActivityWorker): domain = DOMAIN version = VERSION task_list = TASKLIST def run(self): activity_task = self.poll() if 'activityId' in activity_task: print 'Hello, World!' self.complete() return True With actors implemented we can spin up a workflow execution: .. code-block:: bash $ python >>> import boto.swf.layer2 as swf >>> execution = swf.WorkflowType(name='HelloWorkflow', domain='boto_tutorial', version='1.0', task_list='default').start() >>> From separate terminals run an instance of a worker and a decider to carry out a workflow execution (the worker and decider may run from two independent machines). .. code-block:: bash $ python -i hello_decider.py >>> while HelloDecider().run(): pass ... .. code-block:: bash $ python -i hello_worker.py >>> while HelloWorker().run(): pass ... Hello, World! Great. Now, to see what just happened, go back to the original terminal from which the execution was started, and read its history. .. code-block:: bash >>> execution.history() [{'eventId': 1, 'eventTimestamp': 1381095173.2539999, 'eventType': 'WorkflowExecutionStarted', 'workflowExecutionStartedEventAttributes': {'childPolicy': 'TERMINATE', 'executionStartToCloseTimeout': '3600', 'parentInitiatedEventId': 0, 'taskList': {'name': 'default'}, 'taskStartToCloseTimeout': '300', 'workflowType': {'name': 'HelloWorkflow', 'version': '1.0'}}}, {'decisionTaskScheduledEventAttributes': {'startToCloseTimeout': '300', 'taskList': {'name': 'default'}}, 'eventId': 2, 'eventTimestamp': 1381095173.2539999, 'eventType': 'DecisionTaskScheduled'}, {'decisionTaskStartedEventAttributes': {'scheduledEventId': 2}, 'eventId': 3, 'eventTimestamp': 1381095177.5439999, 'eventType': 'DecisionTaskStarted'}, {'decisionTaskCompletedEventAttributes': {'scheduledEventId': 2, 'startedEventId': 3}, 'eventId': 4, 'eventTimestamp': 1381095177.855, 'eventType': 'DecisionTaskCompleted'}, {'activityTaskScheduledEventAttributes': {'activityId': 'saying_hi', 'activityType': {'name': 'HelloWorld', 'version': '1.0'}, 'decisionTaskCompletedEventId': 4, 'heartbeatTimeout': '600', 'scheduleToCloseTimeout': '3900', 'scheduleToStartTimeout': '300', 'startToCloseTimeout': '3600', 'taskList': {'name': 'default'}}, 'eventId': 5, 'eventTimestamp': 1381095177.855, 'eventType': 'ActivityTaskScheduled'}, {'activityTaskStartedEventAttributes': {'scheduledEventId': 5}, 'eventId': 6, 'eventTimestamp': 1381095179.427, 'eventType': 'ActivityTaskStarted'}, {'activityTaskCompletedEventAttributes': {'scheduledEventId': 5, 'startedEventId': 6}, 'eventId': 7, 'eventTimestamp': 1381095179.6989999, 'eventType': 'ActivityTaskCompleted'}, {'decisionTaskScheduledEventAttributes': {'startToCloseTimeout': '300', 'taskList': {'name': 'default'}}, 'eventId': 8, 'eventTimestamp': 1381095179.6989999, 'eventType': 'DecisionTaskScheduled'}, 
{'decisionTaskStartedEventAttributes': {'scheduledEventId': 8}, 'eventId': 9, 'eventTimestamp': 1381095179.7420001, 'eventType': 'DecisionTaskStarted'}, {'decisionTaskCompletedEventAttributes': {'scheduledEventId': 8, 'startedEventId': 9}, 'eventId': 10, 'eventTimestamp': 1381095180.026, 'eventType': 'DecisionTaskCompleted'}, {'eventId': 11, 'eventTimestamp': 1381095180.026, 'eventType': 'WorkflowExecutionCompleted', 'workflowExecutionCompletedEventAttributes': {'decisionTaskCompletedEventId': 10}}] Serial Activity Execution ------------------------- The following example implements a basic workflow with activities executed one after another. The business logic, i.e. the serial execution of activities, is encoded in the decider: .. code-block:: python # serial_decider.py import time import boto.swf.layer2 as swf class SerialDecider(swf.Decider): domain = 'boto_tutorial' task_list = 'default_tasks' version = '1.0' def run(self): history = self.poll() if 'events' in history: # Get a list of non-decision events to see what event came in last. workflow_events = [e for e in history['events'] if not e['eventType'].startswith('Decision')] decisions = swf.Layer1Decisions() # Record latest non-decision event. last_event = workflow_events[-1] last_event_type = last_event['eventType'] if last_event_type == 'WorkflowExecutionStarted': # Schedule the first activity. decisions.schedule_activity_task('%s-%i' % ('ActivityA', time.time()), 'ActivityA', self.version, task_list='a_tasks') elif last_event_type == 'ActivityTaskCompleted': # Take decision based on the name of activity that has just completed. # 1) Get activity's event id. last_event_attrs = last_event['activityTaskCompletedEventAttributes'] completed_activity_id = last_event_attrs['scheduledEventId'] - 1 # 2) Extract its name. activity_data = history['events'][completed_activity_id] activity_attrs = activity_data['activityTaskScheduledEventAttributes'] activity_name = activity_attrs['activityType']['name'] # 3) Optionally, get the result from the activity. result = last_event['activityTaskCompletedEventAttributes'].get('result') # Take the decision. if activity_name == 'ActivityA': decisions.schedule_activity_task('%s-%i' % ('ActivityB', time.time()), 'ActivityB', self.version, task_list='b_tasks', input=result) if activity_name == 'ActivityB': decisions.schedule_activity_task('%s-%i' % ('ActivityC', time.time()), 'ActivityC', self.version, task_list='c_tasks', input=result) elif activity_name == 'ActivityC': # Final activity completed. We're done. decisions.complete_workflow_execution() self.complete(decisions=decisions) return True The workers only need to know which task lists to poll. .. code-block:: python # serial_worker.py import time import boto.swf.layer2 as swf class MyBaseWorker(swf.ActivityWorker): domain = 'boto_tutorial' version = '1.0' task_list = None def run(self): activity_task = self.poll() if 'activityId' in activity_task: # Get input. # Get the method for the requested activity. 
try: print 'working on activity from tasklist %s at %i' % (self.task_list, time.time()) self.activity(activity_task.get('input')) except Exception, error: self.fail(reason=str(error)) raise error return True def activity(self, activity_input): raise NotImplementedError class WorkerA(MyBaseWorker): task_list = 'a_tasks' def activity(self, activity_input): self.complete(result="Now don't be givin him sambuca!") class WorkerB(MyBaseWorker): task_list = 'b_tasks' def activity(self, activity_input): self.complete() class WorkerC(MyBaseWorker): task_list = 'c_tasks' def activity(self, activity_input): self.complete() Spin up a workflow execution and run the decider: .. code-block:: bash $ python >>> import boto.swf.layer2 as swf >>> execution = swf.WorkflowType(name='SerialWorkflow', domain='boto_tutorial', version='1.0', task_list='default_tasks').start() >>> .. code-block:: bash $ python -i serial_decider.py >>> while SerialDecider().run(): pass ... Run the workers. The activities will be executed in order: .. code-block:: bash $ python -i serial_worker.py >>> while WorkerA().run(): pass ... working on activity from tasklist a_tasks at 1382046291 .. code-block:: bash $ python -i serial_worker.py >>> while WorkerB().run(): pass ... working on activity from tasklist b_tasks at 1382046541 .. code-block:: bash $ python -i serial_worker.py >>> while WorkerC().run(): pass ... working on activity from tasklist c_tasks at 1382046560 Looks good. Now, do the following to inspect the state and history of the execution: .. code-block:: python >>> execution.describe() {'executionConfiguration': {'childPolicy': 'TERMINATE', 'executionStartToCloseTimeout': '3600', 'taskList': {'name': 'default_tasks'}, 'taskStartToCloseTimeout': '300'}, 'executionInfo': {'cancelRequested': False, 'closeStatus': 'COMPLETED', 'closeTimestamp': 1382046560.901, 'execution': {'runId': '12fQ1zSaLmI5+lLXB8ux+8U+hLOnnXNZCY9Zy+ZvXgzhE=', 'workflowId': 'SerialWorkflow-1.0-1382046514'}, 'executionStatus': 'CLOSED', 'startTimestamp': 1382046514.994, 'workflowType': {'name': 'SerialWorkflow', 'version': '1.0'}}, 'latestActivityTaskTimestamp': 1382046560.632, 'openCounts': {'openActivityTasks': 0, 'openChildWorkflowExecutions': 0, 'openDecisionTasks': 0, 'openTimers': 0}} >>> execution.history() ... Parallel Activity Execution --------------------------- When activities are independent from one another, their execution may be scheduled in parallel. The decider schedules all activities at once and marks progress until all activities are completed, at which point the workflow is completed. .. code-block:: python # parallel_decider.py import boto.swf.layer2 as swf import time SCHED_COUNT = 5 class ParallelDecider(swf.Decider): domain = 'boto_tutorial' task_list = 'default' def run(self): decision_task = self.poll() if 'events' in decision_task: decisions = swf.Layer1Decisions() # Decision* events are irrelevant here and can be ignored. workflow_events = [e for e in decision_task['events'] if not e['eventType'].startswith('Decision')] # Record latest non-decision event. last_event = workflow_events[-1] last_event_type = last_event['eventType'] if last_event_type == 'WorkflowExecutionStarted': # At start, kickoff SCHED_COUNT activities in parallel. for i in range(SCHED_COUNT): decisions.schedule_activity_task('activity%i' % i, 'ActivityA', '1.0', task_list=self.task_list) elif last_event_type == 'ActivityTaskCompleted': # Monitor progress. When all activities complete, complete workflow. 
completed_count = sum([1 for a in decision_task['events'] if a['eventType'] == 'ActivityTaskCompleted']) print '%i/%i' % (completed_count, SCHED_COUNT) if completed_count == SCHED_COUNT: decisions.complete_workflow_execution() self.complete(decisions=decisions) return True Again, the only bit of information a worker needs is which task list to poll. .. code-block:: python # parallel_worker.py import time import boto.swf.layer2 as swf class ParallelWorker(swf.ActivityWorker): domain = 'boto_tutorial' task_list = 'default' def run(self): """Report current time.""" activity_task = self.poll() if 'activityId' in activity_task: print 'working on', activity_task['activityId'] self.complete(result=str(time.time())) return True Spin up a workflow execution and run the decider: .. code-block:: bash $ python -i parallel_decider.py >>> execution = swf.WorkflowType(name='ParallelWorkflow', domain='boto_tutorial', version='1.0', task_list='default').start() >>> while ParallelDecider().run(): pass ... 1/5 2/5 4/5 5/5 Run two or more workers to see how the service partitions work execution in parallel. .. code-block:: bash $ python -i parallel_worker.py >>> while ParallelWorker().run(): pass ... working on activity1 working on activity3 working on activity4 .. code-block:: bash $ python -i parallel_worker.py >>> while ParallelWorker().run(): pass ... working on activity2 working on activity0 As seen above, the work was partitioned between the two running workers. .. _Amazon SWF API Reference: http://docs.aws.amazon.com/amazonswf/latest/apireference/Welcome.html .. _StackOverflow questions: http://stackoverflow.com/questions/tagged/amazon-swf .. _Miscellaneous Blog Articles: http://log.ooz.ie/search/label/SimpleWorkflow boto-2.20.1/docs/source/vpc_tut.rst000066400000000000000000000044701225267101000172060ustar00rootroot00000000000000.. _vpc_tut: ======================================= An Introduction to boto's VPC interface ======================================= This tutorial is based on the examples in the Amazon Virtual Private Cloud Getting Started Guide (http://docs.amazonwebservices.com/AmazonVPC/latest/GettingStartedGuide/). In each example, it tries to show the boto request that correspond to the AWS command line tools. Creating a VPC connection ------------------------- First, we need to create a new VPC connection: >>> from boto.vpc import VPCConnection >>> c = VPCConnection() To create a VPC --------------- Now that we have a VPC connection, we can create our first VPC. >>> vpc = c.create_vpc('10.0.0.0/24') >>> vpc VPC:vpc-6b1fe402 >>> vpc.id u'vpc-6b1fe402' >>> vpc.state u'pending' >>> vpc.cidr_block u'10.0.0.0/24' >>> vpc.dhcp_options_id u'default' >>> To create a subnet ------------------ The next step is to create a subnet to associate with your VPC. >>> subnet = c.create_subnet(vpc.id, '10.0.0.0/25') >>> subnet.id u'subnet-6a1fe403' >>> subnet.state u'pending' >>> subnet.cidr_block u'10.0.0.0/25' >>> subnet.available_ip_address_count 123 >>> subnet.availability_zone u'us-east-1b' >>> To create a customer gateway ---------------------------- Next, we create a customer gateway. 
>>> cg = c.create_customer_gateway('ipsec.1', '12.1.2.3', 65534) >>> cg.id u'cgw-b6a247df' >>> cg.type u'ipsec.1' >>> cg.state u'available' >>> cg.ip_address u'12.1.2.3' >>> cg.bgp_asn u'65534' >>> To create a VPN gateway ----------------------- >>> vg = c.create_vpn_gateway('ipsec.1') >>> vg.id u'vgw-44ad482d' >>> vg.type u'ipsec.1' >>> vg.state u'pending' >>> vg.availability_zone u'us-east-1b' >>> Attaching a VPN Gateway to a VPC -------------------------------- >>> vg.attach(vpc.id) >>> Associating an Elastic IP with a VPC Instance --------------------------------------------- >>> ec2.connection.associate_address('i-71b2f60b', None, 'eipalloc-35cf685d') >>> Releasing an Elastic IP Attached to a VPC Instance -------------------------------------------------- >>> ec2.connection.release_address(None, 'eipalloc-35cf685d') >>> To Get All VPN Connections -------------------------- >>> vpns = c.get_all_vpn_connections() >>> vpns[0].id u'vpn-12ef67bv' >>> tunnels = vpns[0].tunnels >>> tunnels [VpnTunnel: 177.12.34.56, VpnTunnel: 177.12.34.57] boto-2.20.1/pylintrc000066400000000000000000000213321225267101000143230ustar00rootroot00000000000000# lint Python modules using external checkers. # # This is the main checker controlling the other ones and the reports # generation. It is itself both a raw checker and an astng checker in order # to: # * handle message activation / deactivation at the module level # * handle some basic but necessary stats'data (number of classes, methods...) # [MASTER] # Specify a configuration file. #rcfile= # Profiled execution. profile=no # Add to the black list. It should be a base name, not a # path. You may set this option multiple times. ignore=.svn # Pickle collected data for later comparisons. persistent=yes # Set the cache size for astng objects. cache-size=500 # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable only checker(s) with the given id(s). This option conflict with the # disable-checker option #enable-checker= # Enable all checker(s) except those with the given id(s). This option conflict # with the disable-checker option #disable-checker= # Enable all messages in the listed categories. #enable-msg-cat= # Disable all messages in the listed categories. #disable-msg-cat= # Enable the message(s) with the given id(s). #enable-msg= # Disable the message(s) with the given id(s). # disable-msg=C0323,W0142,C0301,C0103,C0111,E0213,C0302,C0203,W0703,R0201 disable-msg=C0111,C0103,W0703,W0702 [REPORTS] # set the output format. Available formats are text, parseable, colorized and # html output-format=colorized # Include message's id in output include-ids=yes # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells wether to display a full report or only the messages reports=yes # Python expression which should return a note less than 10 (10 is the highest # note).You have access to the variables errors warning, statement which # respectivly contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (R0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (R0004). 
comment=no # Enable the report(s) with the given id(s). #enable-report= # Disable the report(s) with the given id(s). #disable-report= # checks for # * unused variables / imports # * undefined variables # * redefinition of variable from builtins or from an outer scope # * use of variable before assigment # [VARIABLES] # Tells wether we should check for unused import in __init__ files. init-import=yes # A regular expression matching names used for dummy variables (i.e. not used). dummy-variables-rgx=_|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= # try to find bugs in the code using type inference # [TYPECHECK] # Tells wether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # When zope mode is activated, consider the acquired-members option to ignore # access to some undefined attributes. zope=no # List of members which are usually get through zope's acquisition mecanism and # so shouldn't trigger E0201 when accessed (need zope=yes to be considered). acquired-members=REQUEST,acl_users,aq_parent # checks for : # * doc strings # * modules / classes / functions / methods / arguments / variables name # * number of arguments, local variables, branches, returns and statements in # functions, methods # * required module attributes # * dangerous default values as arguments # * redefinition of function / method / class # * uses of the global statement # [BASIC] # Required attributes for module, separated by a comma required-attributes= # Regular expression which should only match functions or classes name which do # not require a docstring no-docstring-rgx=__.*__ # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression which should only match correct module level names const-rgx=(([A-Z_][A-Z1-9_]*)|(__.*__))$ # Regular expression which should only match correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Regular expression which should only match correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct method names method-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct instance attribute names attr-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # List of builtins function names that should not be used, separated by a comma bad-functions=apply,input # checks for sign of poor/misdesign: # * number of methods, attributes, local variables... 
# * size, complexity of functions, methods # [DESIGN] # Maximum number of arguments for function / method max-args=12 # Maximum number of locals for function / method body max-locals=30 # Maximum number of return / yield statements for function / method body max-returns=12 # Maximum number of branches for function / method body max-branchs=30 # Maximum number of statements in function / method body max-statements=60 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=20 # Minimum number of public methods for a class (see R0903). min-public-methods=0 # Maximum number of public methods for a class (see R0904). max-public-methods=20 # checks for # * external module dependencies # * relative / wildcard imports # * cyclic imports # * uses of deprecated modules # [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,string,TERMIOS,Bastion,rexec # Create a graph of all (i.e. internal and external) dependencies in the # given file (report R0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report R0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report R0402 must # not be disabled) int-import-graph= # checks for : # * methods without self as first argument # * overridden method signatures # * access only to existent members via self # * attributes not defined in the __init__ method # * supported interfaces implementation # * unreachable code # [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defined in Zope's Interface base class. # ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp # checks for similarities and duplicated code. This computation may be # memory / CPU intensive, so you should disable it if you experience # problems. # [SIMILARITIES] # Minimum number of lines of a similarity. min-similarity-lines=5 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # checks for: # * warning notes in the code like FIXME, XXX # * PEP 263: source code with non-ASCII characters but no encoding declaration # [MISCELLANEOUS] # List of note tags to take into consideration, separated by a comma. notes=FIXME,XXX,TODO,BUG: # checks for : # * unauthorized constructions # * strict indentation # * line length # * use of <> instead of != # [FORMAT] # Maximum number of characters on a single line. max-line-length=80 # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). 
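# For example, indent-string='    ' enforces four-space indents, while # indent-string='\t' would suit a tab-indented code base.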
indent-string=' ' [MESSAGES CONTROL] disable-msg=C0301,C0111,C0103,R0201,W0702,C0324 boto-2.20.1/requirements.txt000066400000000000000000000002641225267101000160210ustar00rootroot00000000000000mock==1.0.1 nose==1.2.1 requests>=1.2.3,<=2.0.1 rsa==3.1.1 tox==1.4 Sphinx==1.1.3 simplejson==2.5.2 argparse==1.2.1 unittest2==0.5.1 httpretty>=0.7.0 paramiko>=1.10.0 PyYAML>=3.10 boto-2.20.1/setup.cfg000066400000000000000000000000261225267101000143520ustar00rootroot00000000000000[wheel] universal = 1 boto-2.20.1/setup.py000066400000000000000000000102321225267101000142430ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from __future__ import with_statement try: from setuptools import setup extra = dict(test_suite="tests.test.suite", include_package_data=True) except ImportError: from distutils.core import setup extra = {} import sys from boto import __version__ if sys.version_info < (2, 5): error = "ERROR: boto requires Python Version 2.5 or above...exiting." 
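    # Note: sys.version_info is a tuple such as (2, 4, 6, 'final', 0), and # tuples compare element-wise, so (2, 4, 6, ...) > (2, 4). Guarding with # "< (2, 5)" is what actually catches every 2.4.x (and older) interpreter; # a "<= (2, 4)" comparison would let 2.4.x point releases slip through.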
print >> sys.stderr, error sys.exit(1) def readme(): with open("README.rst") as f: return f.read() setup(name = "boto", version = __version__, description = "Amazon Web Services Library", long_description = readme(), author = "Mitch Garnaat", author_email = "mitch@garnaat.com", scripts = ["bin/sdbadmin", "bin/elbadmin", "bin/cfadmin", "bin/s3put", "bin/fetch_file", "bin/launch_instance", "bin/list_instances", "bin/taskadmin", "bin/kill_instance", "bin/bundle_image", "bin/pyami_sendmail", "bin/lss3", "bin/cq", "bin/route53", "bin/cwutil", "bin/instance_events", "bin/asadmin", "bin/glacier", "bin/mturk", "bin/dynamodb_dump", "bin/dynamodb_load"], url = "https://github.com/boto/boto/", packages = ["boto", "boto.sqs", "boto.s3", "boto.gs", "boto.file", "boto.ec2", "boto.ec2.cloudwatch", "boto.ec2.autoscale", "boto.ec2.elb", "boto.sdb", "boto.cacerts", "boto.sdb.db", "boto.sdb.db.manager", "boto.mturk", "boto.pyami", "boto.pyami.installers", "boto.pyami.installers.ubuntu", "boto.mashups", "boto.contrib", "boto.manage", "boto.services", "boto.cloudfront", "boto.roboto", "boto.rds", "boto.vpc", "boto.fps", "boto.emr", "boto.sns", "boto.ecs", "boto.iam", "boto.route53", "boto.ses", "boto.cloudformation", "boto.sts", "boto.dynamodb", "boto.swf", "boto.mws", "boto.cloudsearch", "boto.glacier", "boto.beanstalk", "boto.datapipeline", "boto.elasticache", "boto.elastictranscoder", "boto.opsworks", "boto.redshift", "boto.dynamodb2", "boto.support", "boto.cloudtrail", "boto.directconnect", "boto.kinesis"], package_data = {"boto.cacerts": ["cacerts.txt"]}, license = "MIT", platforms = "Posix; MacOS X; Windows", classifiers = ["Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Topic :: Internet", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.5", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7"], **extra ) boto-2.20.1/tests/000077500000000000000000000000001225267101000136755ustar00rootroot00000000000000boto-2.20.1/tests/__init__.py000066400000000000000000000021201225267101000160010ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
boto-2.20.1/tests/db/000077500000000000000000000000001225267101000142625ustar00rootroot00000000000000boto-2.20.1/tests/db/test_lists.py000066400000000000000000000066221225267101000170370ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.sdb.db.property import ListProperty from boto.sdb.db.model import Model import time class SimpleListModel(Model): """Test the List Property""" nums = ListProperty(int) strs = ListProperty(str) class TestLists(object): """Test the List property""" def setup_class(cls): """Setup this class""" cls.objs = [] def teardown_class(cls): """Remove our objects""" for o in cls.objs: try: o.delete() except: pass def test_list_order(self): """Testing the order of lists""" t = SimpleListModel() t.nums = [5, 4, 1, 3, 2] t.strs = ["B", "C", "A", "D", "Foo"] t.put() self.objs.append(t) time.sleep(3) t = SimpleListModel.get_by_id(t.id) assert(t.nums == [5, 4, 1, 3, 2]) assert(t.strs == ["B", "C", "A", "D", "Foo"]) def test_old_compat(self): """Testing to make sure the old method of encoding lists will still return results""" t = SimpleListModel() t.put() self.objs.append(t) time.sleep(3) item = t._get_raw_item() item['strs'] = ["A", "B", "C"] item.save() time.sleep(3) t = SimpleListModel.get_by_id(t.id) i1 = sorted(item['strs']) i2 = t.strs i2.sort() assert(i1 == i2) def test_query_equals(self): """Regression test: because the query used the same encoder as storage, it originally asserted that the value sat at the same position in the list, rather than just "in" the list""" t = SimpleListModel() t.strs = ["Bizzle", "Bar"] t.put() self.objs.append(t) time.sleep(3) assert(SimpleListModel.find(strs="Bizzle").count() == 1) assert(SimpleListModel.find(strs="Bar").count() == 1) assert(SimpleListModel.find(strs=["Bar", "Bizzle"]).count() == 1) def test_query_not_equals(self): """Test a not equal filter""" t = SimpleListModel() t.strs = ["Fizzle"] t.put() self.objs.append(t) time.sleep(3) print SimpleListModel.all().filter("strs !=", "Fizzle").get_query() for tt in SimpleListModel.all().filter("strs !=", "Fizzle"): print tt.strs assert("Fizzle" not in tt.strs) boto-2.20.1/tests/db/test_password.py000066400000000000000000000105151225267101000175370ustar00rootroot00000000000000# Copyright (c) 2010 Robert Mela # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # 
without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. import unittest import logging import time log= logging.getLogger('password_property_test') log.setLevel(logging.DEBUG) class PasswordPropertyTest(unittest.TestCase): """Test the PasswordProperty""" def tearDown(self): cls=self.test_model() for obj in cls.all(): obj.delete() def hmac_hashfunc(self): import hmac def hashfunc(msg): return hmac.new('mysecret', msg) return hashfunc def test_model(self,hashfunc=None): from boto.utils import Password from boto.sdb.db.model import Model from boto.sdb.db.property import PasswordProperty import hashlib class MyModel(Model): password=PasswordProperty(hashfunc=hashfunc) return MyModel def test_custom_password_class(self): from boto.utils import Password from boto.sdb.db.model import Model from boto.sdb.db.property import PasswordProperty import hmac, hashlib myhashfunc = hashlib.md5 ## Define a new Password class class MyPassword(Password): hashfunc = myhashfunc #hashlib.md5 #lambda cls,msg: hmac.new('mysecret',msg) ## Define a custom password property using the new Password class class MyPasswordProperty(PasswordProperty): data_type=MyPassword type_name=MyPassword.__name__ ## Define a model using the new password property class MyModel(Model): password=MyPasswordProperty()#hashfunc=hashlib.md5) obj = MyModel() obj.password = 'bar' expected = myhashfunc('bar').hexdigest() #hmac.new('mysecret','bar').hexdigest() log.debug("\npassword=%s\nexpected=%s" % (obj.password, expected)) self.assertTrue(obj.password == 'bar' ) obj.save() id= obj.id time.sleep(5) obj = MyModel.get_by_id(id) self.assertEquals(obj.password, 'bar') self.assertEquals(str(obj.password), expected) #hmac.new('mysecret','bar').hexdigest()) def test_aaa_default_password_property(self): cls = self.test_model() obj = cls(id='passwordtest') obj.password = 'foo' self.assertEquals('foo', obj.password) obj.save() time.sleep(5) obj = cls.get_by_id('passwordtest') self.assertEquals('foo', obj.password) def test_password_constructor_hashfunc(self): import hmac myhashfunc=lambda msg: hmac.new('mysecret', msg) cls = self.test_model(hashfunc=myhashfunc) obj = cls() obj.password='hello' expected = myhashfunc('hello').hexdigest() self.assertEquals(obj.password, 'hello') self.assertEquals(str(obj.password), expected) obj.save() id = obj.id time.sleep(5) obj = cls.get_by_id(id) log.debug("\npassword=%s" % obj.password) self.assertTrue(obj.password == 'hello') if __name__ == '__main__': import sys, os curdir = os.path.dirname( os.path.abspath(__file__) ) srcroot = curdir + "/../.." 
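    # Prepend the repository root so that this source checkout of boto shadows # any system-installed copy when the test is run directly from the tree.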
sys.path = [ srcroot ] + sys.path logging.basicConfig() log.setLevel(logging.INFO) suite = unittest.TestLoader().loadTestsFromTestCase(PasswordPropertyTest) unittest.TextTestRunner(verbosity=2).run(suite) import boto boto-2.20.1/tests/db/test_query.py000066400000000000000000000115531225267101000170450ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.sdb.db.property import ListProperty, StringProperty, ReferenceProperty, IntegerProperty from boto.sdb.db.model import Model import time class SimpleModel(Model): """Simple Test Model""" name = StringProperty() strs = ListProperty(str) num = IntegerProperty() class SubModel(SimpleModel): """Simple Subclassed Model""" ref = ReferenceProperty(SimpleModel, collection_name="reverse_ref") class TestQuerying(object): """Test different querying capabilities""" def setup_class(cls): """Setup this class""" cls.objs = [] o = SimpleModel() o.name = "Simple Object" o.strs = ["B", "A", "C", "Foo"] o.num = 1 o.put() cls.objs.append(o) o2 = SimpleModel() o2.name = "Referenced Object" o2.num = 2 o2.put() cls.objs.append(o2) o3 = SubModel() o3.name = "Sub Object" o3.num = 3 o3.ref = o2 o3.put() cls.objs.append(o3) time.sleep(3) def teardown_class(cls): """Remove our objects""" for o in cls.objs: try: o.delete() except: pass def test_find(self): """Test using the "Find" method""" assert(SimpleModel.find(name="Simple Object").next().id == self.objs[0].id) assert(SimpleModel.find(name="Referenced Object").next().id == self.objs[1].id) assert(SimpleModel.find(name="Sub Object").next().id == self.objs[2].id) def test_like_filter(self): """Test a "like" filter""" query = SimpleModel.all() query.filter("name like", "% Object") assert(query.count() == 3) query = SimpleModel.all() query.filter("name not like", "% Object") assert(query.count() == 0) def test_equals_filter(self): """Test an "=" and "!=" filter""" query = SimpleModel.all() query.filter("name =", "Simple Object") assert(query.count() == 1) query = SimpleModel.all() query.filter("name !=", "Simple Object") assert(query.count() == 2) def test_or_filter(self): """Test a filter function as an "or" """ query = SimpleModel.all() query.filter("name =", ["Simple Object", "Sub Object"]) assert(query.count() == 2) def test_and_filter(self): """Test Multiple filters which are an "and" """ query = SimpleModel.all() query.filter("name like", "% Object") query.filter("name like", "Simple %") assert(query.count() == 1) def test_none_filter(self): """Test 
filtering for a value that's not set""" query = SimpleModel.all() query.filter("ref =", None) assert(query.count() == 2) def test_greater_filter(self): """Test filtering Using >, >=""" query = SimpleModel.all() query.filter("num >", 1) assert(query.count() == 2) query = SimpleModel.all() query.filter("num >=", 1) assert(query.count() == 3) def test_less_filter(self): """Test filtering Using <, <=""" query = SimpleModel.all() query.filter("num <", 3) assert(query.count() == 2) query = SimpleModel.all() query.filter("num <=", 3) assert(query.count() == 3) def test_query_on_list(self): """Test querying on a list""" assert(SimpleModel.find(strs="A").next().id == self.objs[0].id) assert(SimpleModel.find(strs="B").next().id == self.objs[0].id) assert(SimpleModel.find(strs="C").next().id == self.objs[0].id) def test_like(self): """Test with a "like" expression""" query = SimpleModel.all() query.filter("strs like", "%oo%") print query.get_query() assert(query.count() == 1) boto-2.20.1/tests/db/test_sequence.py000066400000000000000000000077411225267101000175140ustar00rootroot00000000000000# Copyright (c) 2010 Chris Moyer http://coredumped.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
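# A rough sketch of the behaviour exercised below (inferred from the # assertions themselves): SequenceGenerator("ABC") counts like spreadsheet # columns over the alphabet "ABC" -- "" -> A -> B -> C -> AA -> AB ... -- # unless rollover=True, in which case C wraps straight back to A. Sequence # keeps its current value server-side (in SimpleDB, where the boto.sdb.db # layer persists its models), so two Sequence objects built with the same id # are expected to observe the same value.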
class TestSequences(object): """Test the Sequence and SequenceGenerator classes""" def setup_class(cls): """Setup this class""" cls.sequences = [] def teardown_class(cls): """Remove our sequences""" for s in cls.sequences: try: s.delete() except: pass def test_sequence_generator_no_rollover(self): """Test the sequence generator without rollover""" from boto.sdb.db.sequence import SequenceGenerator gen = SequenceGenerator("ABC") assert(gen("") == "A") assert(gen("A") == "B") assert(gen("B") == "C") assert(gen("C") == "AA") assert(gen("AC") == "BA") def test_sequence_generator_with_rollover(self): """Test the sequence generator with rollover""" from boto.sdb.db.sequence import SequenceGenerator gen = SequenceGenerator("ABC", rollover=True) assert(gen("") == "A") assert(gen("A") == "B") assert(gen("B") == "C") assert(gen("C") == "A") def test_sequence_simple_int(self): """Test a simple counter sequence""" from boto.sdb.db.sequence import Sequence s = Sequence() self.sequences.append(s) assert(s.val == 0) assert(s.next() == 1) assert(s.next() == 2) s2 = Sequence(s.id) assert(s2.val == 2) assert(s.next() == 3) assert(s.val == 3) assert(s2.val == 3) def test_sequence_simple_string(self): from boto.sdb.db.sequence import Sequence, increment_string s = Sequence(fnc=increment_string) self.sequences.append(s) assert(s.val == "A") assert(s.next() == "B") def test_fib(self): """Test the fibonacci sequence generator""" from boto.sdb.db.sequence import fib # Just check the first few numbers in the sequence lv = 0 for v in [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]: assert(fib(v, lv) == lv+v) lv = fib(v, lv) def test_sequence_fib(self): """Test the fibonacci sequence""" from boto.sdb.db.sequence import Sequence, fib s = Sequence(fnc=fib) s2 = Sequence(s.id) self.sequences.append(s) assert(s.val == 1) # Just check the first few numbers in the sequence for v in [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]: assert(s.next() == v) assert(s.val == v) assert(s2.val == v) # it shouldn't matter which reference we use since it's guaranteed to be consistent def test_sequence_string(self): """Test the string increment sequence""" from boto.sdb.db.sequence import Sequence, increment_string s = Sequence(fnc=increment_string) self.sequences.append(s) assert(s.val == "A") assert(s.next() == "B") s.val = "Z" assert(s.val == "Z") assert(s.next() == "AA") boto-2.20.1/tests/devpay/000077500000000000000000000000001225267101000151655ustar00rootroot00000000000000boto-2.20.1/tests/devpay/__init__.py000066400000000000000000000000001225267101000172640ustar00rootroot00000000000000boto-2.20.1/tests/devpay/test_s3.py000066400000000000000000000163621225267101000171310ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for the S3Connection """ import time import os import urllib from boto.s3.connection import S3Connection from boto.exception import S3PermissionsError # this test requires a devpay product and user token to run: AMAZON_USER_TOKEN = '{UserToken}...your token here...' DEVPAY_HEADERS = { 'x-amz-security-token': AMAZON_USER_TOKEN } def test(): print '--- running S3Connection tests (DevPay) ---' c = S3Connection() # create a new, empty bucket bucket_name = 'test-%d' % int(time.time()) bucket = c.create_bucket(bucket_name, headers=DEVPAY_HEADERS) # now try a get_bucket call and see if it's really there bucket = c.get_bucket(bucket_name, headers=DEVPAY_HEADERS) # test logging logging_bucket = c.create_bucket(bucket_name + '-log', headers=DEVPAY_HEADERS) logging_bucket.set_as_logging_target(headers=DEVPAY_HEADERS) bucket.enable_logging(target_bucket=logging_bucket, target_prefix=bucket.name, headers=DEVPAY_HEADERS) bucket.disable_logging(headers=DEVPAY_HEADERS) c.delete_bucket(logging_bucket, headers=DEVPAY_HEADERS) # create a new key and store its contents from a string k = bucket.new_key() k.name = 'foobar' s1 = 'This is a test of file upload and download' s2 = 'This is a second string to test file upload and download' k.set_contents_from_string(s1, headers=DEVPAY_HEADERS) fp = open('foobar', 'wb') # now get the contents from s3 to a local file k.get_contents_to_file(fp, headers=DEVPAY_HEADERS) fp.close() fp = open('foobar') # check to make sure content read from s3 is identical to original assert s1 == fp.read(), 'corrupted file' fp.close() # test generated URLs url = k.generate_url(3600, headers=DEVPAY_HEADERS) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url url = k.generate_url(3600, force_http=True, headers=DEVPAY_HEADERS) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url bucket.delete_key(k, headers=DEVPAY_HEADERS) # test a few variations on get_all_keys - first load some data # for the first one, let's override the content type phony_mimetype = 'application/x-boto-test' headers = {'Content-Type': phony_mimetype} headers.update(DEVPAY_HEADERS) k.name = 'foo/bar' k.set_contents_from_string(s1, headers) k.name = 'foo/bas' k.set_contents_from_filename('foobar', headers=DEVPAY_HEADERS) k.name = 'foo/bat' k.set_contents_from_string(s1, headers=DEVPAY_HEADERS) k.name = 'fie/bar' k.set_contents_from_string(s1, headers=DEVPAY_HEADERS) k.name = 'fie/bas' k.set_contents_from_string(s1, headers=DEVPAY_HEADERS) k.name = 'fie/bat' k.set_contents_from_string(s1, headers=DEVPAY_HEADERS) # try resetting the contents to another value md5 = k.md5 k.set_contents_from_string(s2, headers=DEVPAY_HEADERS) assert k.md5 != md5 os.unlink('foobar') all = bucket.get_all_keys(headers=DEVPAY_HEADERS) assert len(all) == 6 rs = bucket.get_all_keys(prefix='foo', headers=DEVPAY_HEADERS) assert len(rs) == 3 rs = bucket.get_all_keys(prefix='', delimiter='/', headers=DEVPAY_HEADERS) assert len(rs) == 2 rs = bucket.get_all_keys(maxkeys=5, headers=DEVPAY_HEADERS) assert len(rs) == 5 # test the lookup method k = bucket.lookup('foo/bar', headers=DEVPAY_HEADERS) assert isinstance(k, bucket.key_class) assert k.content_type == phony_mimetype k = bucket.lookup('notthere', headers=DEVPAY_HEADERS) assert k is None # try 
some metadata stuff k = bucket.new_key() k.name = 'has_metadata' mdkey1 = 'meta1' mdval1 = 'This is the first metadata value' k.set_metadata(mdkey1, mdval1) mdkey2 = 'meta2' mdval2 = 'This is the second metadata value' k.set_metadata(mdkey2, mdval2) k.set_contents_from_string(s1, headers=DEVPAY_HEADERS) k = bucket.lookup('has_metadata', headers=DEVPAY_HEADERS) assert k.get_metadata(mdkey1) == mdval1 assert k.get_metadata(mdkey2) == mdval2 k = bucket.new_key() k.name = 'has_metadata' k.get_contents_as_string(headers=DEVPAY_HEADERS) assert k.get_metadata(mdkey1) == mdval1 assert k.get_metadata(mdkey2) == mdval2 bucket.delete_key(k, headers=DEVPAY_HEADERS) # test list and iterator rs1 = bucket.list(headers=DEVPAY_HEADERS) num_iter = 0 for r in rs1: num_iter = num_iter + 1 rs = bucket.get_all_keys(headers=DEVPAY_HEADERS) num_keys = len(rs) assert num_iter == num_keys # try a key with a funny character k = bucket.new_key() k.name = 'testnewline\n' k.set_contents_from_string('This is a test', headers=DEVPAY_HEADERS) rs = bucket.get_all_keys(headers=DEVPAY_HEADERS) assert len(rs) == num_keys + 1 bucket.delete_key(k, headers=DEVPAY_HEADERS) rs = bucket.get_all_keys(headers=DEVPAY_HEADERS) assert len(rs) == num_keys # try some acl stuff bucket.set_acl('public-read', headers=DEVPAY_HEADERS) policy = bucket.get_acl(headers=DEVPAY_HEADERS) assert len(policy.acl.grants) == 2 bucket.set_acl('private', headers=DEVPAY_HEADERS) policy = bucket.get_acl(headers=DEVPAY_HEADERS) assert len(policy.acl.grants) == 1 k = bucket.lookup('foo/bar', headers=DEVPAY_HEADERS) k.set_acl('public-read', headers=DEVPAY_HEADERS) policy = k.get_acl(headers=DEVPAY_HEADERS) assert len(policy.acl.grants) == 2 k.set_acl('private', headers=DEVPAY_HEADERS) policy = k.get_acl(headers=DEVPAY_HEADERS) assert len(policy.acl.grants) == 1 # try the convenience methods for grants # this doesn't work with devpay #bucket.add_user_grant('FULL_CONTROL', # 'c1e724fbfa0979a4448393c59a8c055011f739b6d102fb37a65f26414653cd67', # headers=DEVPAY_HEADERS) try: bucket.add_email_grant('foobar', 'foo@bar.com', headers=DEVPAY_HEADERS) except S3PermissionsError: pass # now delete all keys in bucket for k in all: bucket.delete_key(k, headers=DEVPAY_HEADERS) # now delete bucket c.delete_bucket(bucket, headers=DEVPAY_HEADERS) print '--- tests completed ---' if __name__ == '__main__': test() boto-2.20.1/tests/fps/000077500000000000000000000000001225267101000144655ustar00rootroot00000000000000boto-2.20.1/tests/fps/__init__.py000066400000000000000000000000001225267101000165640ustar00rootroot00000000000000boto-2.20.1/tests/fps/test.py000077500000000000000000000073161225267101000160300ustar00rootroot00000000000000#!/usr/bin/env python from tests.unit import unittest import sys import os import os.path simple = True advanced = False if __name__ == "__main__": devpath = os.path.relpath(os.path.join('..', '..'), start=os.path.dirname(__file__)) sys.path = [devpath] + sys.path print '>>> advanced FPS tests; using local boto sources' advanced = True from boto.fps.connection import FPSConnection from boto.fps.response import ComplexAmount class FPSTestCase(unittest.TestCase): def setUp(self): self.fps = FPSConnection(host='fps.sandbox.amazonaws.com') if advanced: self.activity = self.fps.get_account_activity(\ StartDate='2012-01-01') result = self.activity.GetAccountActivityResult self.transactions = result.Transaction @unittest.skipUnless(simple, "skipping simple test") def test_get_account_balance(self): response = self.fps.get_account_balance() 
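        # The FPS wrapper parses the response XML into attribute-style # objects, so the document structure can be walked as plain attributes -- # response.GetAccountBalanceResult.AccountBalance.TotalBalance and so on, # which is exactly what the assertions below check for.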
self.assertTrue(hasattr(response, 'GetAccountBalanceResult')) self.assertTrue(hasattr(response.GetAccountBalanceResult, 'AccountBalance')) accountbalance = response.GetAccountBalanceResult.AccountBalance self.assertTrue(hasattr(accountbalance, 'TotalBalance')) self.assertIsInstance(accountbalance.TotalBalance, ComplexAmount) self.assertTrue(hasattr(accountbalance, 'AvailableBalances')) availablebalances = accountbalance.AvailableBalances self.assertTrue(hasattr(availablebalances, 'RefundBalance')) @unittest.skipUnless(simple, "skipping simple test") def test_complex_amount(self): response = self.fps.get_account_balance() accountbalance = response.GetAccountBalanceResult.AccountBalance asfloat = float(accountbalance.TotalBalance.Value) self.assertIn('.', str(asfloat)) @unittest.skipUnless(simple, "skipping simple test") def test_required_arguments(self): with self.assertRaises(KeyError): self.fps.write_off_debt(AdjustmentAmount=123.45) @unittest.skipUnless(simple, "skipping simple test") def test_cbui_url(self): inputs = { 'transactionAmount': 123.45, 'pipelineName': 'SingleUse', 'returnURL': 'https://localhost/', 'paymentReason': 'a reason for payment', 'callerReference': 'foo', } result = self.fps.cbui_url(**inputs) print "cbui_url() yields {0}".format(result) @unittest.skipUnless(simple, "skipping simple test") def test_get_account_activity(self): response = self.fps.get_account_activity(StartDate='2012-01-01') self.assertTrue(hasattr(response, 'GetAccountActivityResult')) result = response.GetAccountActivityResult self.assertTrue(hasattr(result, 'BatchSize')) try: int(result.BatchSize) except: self.assertTrue(False) @unittest.skipUnless(advanced, "skipping advanced test") def test_get_transaction(self): assert len(self.transactions) transactionid = self.transactions[0].TransactionId result = self.fps.get_transaction(TransactionId=transactionid) self.assertTrue(hasattr(result.GetTransactionResult, 'Transaction')) @unittest.skip('cosmetic') def test_bad_request(self): try: self.fps.write_off_debt(CreditInstrumentId='foo', AdjustmentAmount=123.45) except Exception, e: print e @unittest.skip('cosmetic') def test_repr(self): print self.fps.get_account_balance() if __name__ == "__main__": unittest.main() boto-2.20.1/tests/fps/test_verify_signature.py000066400000000000000000000016671225267101000214750ustar00rootroot00000000000000from boto.fps.connection import FPSConnection def test(): conn = FPSConnection() # example response from the docs params = 'expiry=08%2F2015&signature=ynDukZ9%2FG77uSJVb5YM0cadwHVwYKPMKOO3PNvgADbv6VtymgBxeOWEhED6KGHsGSvSJnMWDN%2FZl639AkRe9Ry%2F7zmn9CmiM%2FZkp1XtshERGTqi2YL10GwQpaH17MQqOX3u1cW4LlyFoLy4celUFBPq1WM2ZJnaNZRJIEY%2FvpeVnCVK8VIPdY3HMxPAkNi5zeF2BbqH%2BL2vAWef6vfHkNcJPlOuOl6jP4E%2B58F24ni%2B9ek%2FQH18O4kw%2FUJ7ZfKwjCCI13%2BcFybpofcKqddq8CuUJj5Ii7Pdw1fje7ktzHeeNhF0r9siWcYmd4JaxTP3NmLJdHFRq2T%2FgsF3vK9m3gw%3D%3D&signatureVersion=2&signatureMethod=RSA-SHA1&certificateUrl=https%3A%2F%2Ffps.sandbox.amazonaws.com%2Fcerts%2F090909%2FPKICert.pem&tokenID=A5BB3HUNAZFJ5CRXIPH72LIODZUNAUZIVP7UB74QNFQDSQ9MN4HPIKISQZWPLJXF&status=SC&callerReference=callerReferenceMultiUse1' endpoint = 'http://vamsik.desktop.amazon.com:8080/ipn.jsp' conn.verify_signature(endpoint, params) if __name__ == '__main__': test() boto-2.20.1/tests/integration/000077500000000000000000000000001225267101000162205ustar00rootroot00000000000000boto-2.20.1/tests/integration/__init__.py000066400000000000000000000046051225267101000203360ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. 
or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Base class to make checking the certs easier. """ import httplib import socket import unittest # We subclass from ``object`` instead of ``TestCase`` here so that this doesn't # add noise to the test suite (otherwise these no-ops would run on every # import). class ServiceCertVerificationTest(object): ssl = True # SUBCLASSES MUST OVERRIDE THIS! # Something like ``boto.sqs.regions()``... regions = [] def test_certs(self): self.assertTrue(len(self.regions) > 0) for region in self.regions: try: c = region.connect() self.sample_service_call(c) except (socket.gaierror, httplib.BadStatusLine): # This is bad (because the SSL cert failed). Re-raise the # exception. raise except: if 'gov' in region.name: # Ignore it. GovCloud accounts require special permission # to use. continue # Anything else is bad. Re-raise. raise def sample_service_call(self, conn): """ Subclasses should override this method to do a service call that will always succeed (like fetch a list, even if it's empty). """ pass boto-2.20.1/tests/integration/beanstalk/000077500000000000000000000000001225267101000201645ustar00rootroot00000000000000boto-2.20.1/tests/integration/beanstalk/test_wrapper.py000066400000000000000000000220641225267101000232610ustar00rootroot00000000000000import unittest import random import time from functools import partial from boto.beanstalk.wrapper import Layer1Wrapper import boto.beanstalk.response as response class BasicSuite(unittest.TestCase): def setUp(self): self.random_id = str(random.randint(1, 1000000)) self.app_name = 'app-' + self.random_id self.app_version = 'version-' + self.random_id self.template = 'template-' + self.random_id self.environment = 'environment-' + self.random_id self.beanstalk = Layer1Wrapper() class MiscSuite(BasicSuite): def test_check_dns_availability(self): result = self.beanstalk.check_dns_availability('amazon') self.assertIsInstance(result, response.CheckDNSAvailabilityResponse, 'incorrect response object returned') self.assertFalse(result.available) class TestApplicationObjects(BasicSuite): def create_application(self): # This method is used for any API calls that require an application # object. This also adds a cleanup step to automatically delete the # app when the test is finished. No assertions are performed # here. If you want to validate create_application, don't use this # method. 
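        # (unittest runs addCleanup callbacks in LIFO order, so this delete # fires after any cleanups an individual test registers later.)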
self.beanstalk.create_application(application_name=self.app_name) self.addCleanup(partial(self.beanstalk.delete_application, application_name=self.app_name)) def test_create_delete_application_version(self): # This will create an app, create an app version, delete the app # version, and delete the app. For each API call we check that the # return type is what we expect and that a few attributes have the # correct values. app_result = self.beanstalk.create_application(application_name=self.app_name) self.assertIsInstance(app_result, response.CreateApplicationResponse) self.assertEqual(app_result.application.application_name, self.app_name) version_result = self.beanstalk.create_application_version( application_name=self.app_name, version_label=self.app_version) self.assertIsInstance(version_result, response.CreateApplicationVersionResponse) self.assertEqual(version_result.application_version.version_label, self.app_version) result = self.beanstalk.delete_application_version( application_name=self.app_name, version_label=self.app_version) self.assertIsInstance(result, response.DeleteApplicationVersionResponse) result = self.beanstalk.delete_application( application_name=self.app_name ) self.assertIsInstance(result, response.DeleteApplicationResponse) def test_create_configuration_template(self): self.create_application() result = self.beanstalk.create_configuration_template( application_name=self.app_name, template_name=self.template, solution_stack_name='32bit Amazon Linux running Tomcat 6') self.assertIsInstance( result, response.CreateConfigurationTemplateResponse) self.assertEqual(result.solution_stack_name, '32bit Amazon Linux running Tomcat 6') def test_create_storage_location(self): result = self.beanstalk.create_storage_location() self.assertIsInstance(result, response.CreateStorageLocationResponse) def test_update_application(self): self.create_application() result = self.beanstalk.update_application(application_name=self.app_name) self.assertIsInstance(result, response.UpdateApplicationResponse) def test_update_application_version(self): self.create_application() self.beanstalk.create_application_version( application_name=self.app_name, version_label=self.app_version) result = self.beanstalk.update_application_version( application_name=self.app_name, version_label=self.app_version) self.assertIsInstance( result, response.UpdateApplicationVersionResponse) class GetSuite(BasicSuite): def test_describe_applications(self): result = self.beanstalk.describe_applications() self.assertIsInstance(result, response.DescribeApplicationsResponse) def test_describe_application_versions(self): result = self.beanstalk.describe_application_versions() self.assertIsInstance(result, response.DescribeApplicationVersionsResponse) def test_describe_configuration_options(self): result = self.beanstalk.describe_configuration_options() self.assertIsInstance(result, response.DescribeConfigurationOptionsResponse) def test_12_describe_environments(self): result = self.beanstalk.describe_environments() self.assertIsInstance( result, response.DescribeEnvironmentsResponse) def test_14_describe_events(self): result = self.beanstalk.describe_events() self.assertIsInstance(result, response.DescribeEventsResponse) def test_15_list_available_solution_stacks(self): result = self.beanstalk.list_available_solution_stacks() self.assertIsInstance( result, response.ListAvailableSolutionStacksResponse) self.assertIn('32bit Amazon Linux running Tomcat 6', result.solution_stacks) class TestsWithEnvironment(unittest.TestCase): 
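    """Integration tests that need a live Beanstalk environment. Environments take several minutes to provision, so a single application / environment pair is created once in setUpClass, polled via wait_for_env until it reaches 'Ready', and torn down with terminate_env_by_force in tearDownClass, rather than being rebuilt per test."""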
@classmethod def setUpClass(cls): cls.random_id = str(random.randint(1, 1000000)) cls.app_name = 'app-' + cls.random_id cls.environment = 'environment-' + cls.random_id cls.template = 'template-' + cls.random_id cls.beanstalk = Layer1Wrapper() cls.beanstalk.create_application(application_name=cls.app_name) cls.beanstalk.create_configuration_template( application_name=cls.app_name, template_name=cls.template, solution_stack_name='32bit Amazon Linux running Tomcat 6') cls.app_version = 'version-' + cls.random_id cls.beanstalk.create_application_version( application_name=cls.app_name, version_label=cls.app_version) cls.beanstalk.create_environment(cls.app_name, cls.environment, template_name=cls.template) cls.wait_for_env(cls.environment) @classmethod def tearDownClass(cls): cls.beanstalk.delete_application(application_name=cls.app_name, terminate_env_by_force=True) cls.wait_for_env(cls.environment, 'Terminated') @classmethod def wait_for_env(cls, env_name, status='Ready'): while not cls.env_ready(env_name, status): time.sleep(15) @classmethod def env_ready(cls, env_name, desired_status): result = cls.beanstalk.describe_environments( application_name=cls.app_name, environment_names=env_name) status = result.environments[0].status return status == desired_status def test_describe_environment_resources(self): result = self.beanstalk.describe_environment_resources( environment_name=self.environment) self.assertIsInstance( result, response.DescribeEnvironmentResourcesResponse) def test_describe_configuration_settings(self): result = self.beanstalk.describe_configuration_settings( application_name=self.app_name, environment_name=self.environment) self.assertIsInstance( result, response.DescribeConfigurationSettingsResponse) def test_request_environment_info(self): result = self.beanstalk.request_environment_info( environment_name=self.environment, info_type='tail') self.assertIsInstance(result, response.RequestEnvironmentInfoResponse) self.wait_for_env(self.environment) result = self.beanstalk.retrieve_environment_info( environment_name=self.environment, info_type='tail') self.assertIsInstance(result, response.RetrieveEnvironmentInfoResponse) def test_rebuild_environment(self): result = self.beanstalk.rebuild_environment( environment_name=self.environment) self.assertIsInstance(result, response.RebuildEnvironmentResponse) self.wait_for_env(self.environment) def test_restart_app_server(self): result = self.beanstalk.restart_app_server( environment_name=self.environment) self.assertIsInstance(result, response.RestartAppServerResponse) self.wait_for_env(self.environment) def test_update_configuration_template(self): result = self.beanstalk.update_configuration_template( application_name=self.app_name, template_name=self.template) self.assertIsInstance( result, response.UpdateConfigurationTemplateResponse) def test_update_environment(self): result = self.beanstalk.update_environment( environment_name=self.environment) self.assertIsInstance(result, response.UpdateEnvironmentResponse) self.wait_for_env(self.environment) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/cloudformation/000077500000000000000000000000001225267101000212455ustar00rootroot00000000000000boto-2.20.1/tests/integration/cloudformation/__init__.py000066400000000000000000000022271225267101000233610ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
# All Rights Reserved # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/cloudformation/test_cert_verification.py000066400000000000000000000030641225267101000263600ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. 
""" import unittest from tests.integration import ServiceCertVerificationTest import boto.cloudformation class CloudFormationCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): cloudformation = True regions = boto.cloudformation.regions() def sample_service_call(self, conn): conn.describe_stacks() boto-2.20.1/tests/integration/cloudformation/test_connection.py000066400000000000000000000060171225267101000250210ustar00rootroot00000000000000#!/usr/bin/env python import time import json from tests.unit import unittest from boto.cloudformation.connection import CloudFormationConnection BASIC_EC2_TEMPLATE = { "AWSTemplateFormatVersion": "2010-09-09", "Description": "AWS CloudFormation Sample Template EC2InstanceSample", "Parameters": { }, "Mappings": { "RegionMap": { "us-east-1": { "AMI": "ami-7f418316" } } }, "Resources": { "Ec2Instance": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "AMI" ] }, "UserData": { "Fn::Base64": "a" * 15000 } } } }, "Outputs": { "InstanceId": { "Description": "InstanceId of the newly created EC2 instance", "Value": { "Ref": "Ec2Instance" } }, "AZ": { "Description": "Availability Zone of the newly created EC2 instance", "Value": { "Fn::GetAtt": [ "Ec2Instance", "AvailabilityZone" ] } }, "PublicIP": { "Description": "Public IP address of the newly created EC2 instance", "Value": { "Fn::GetAtt": [ "Ec2Instance", "PublicIp" ] } }, "PrivateIP": { "Description": "Private IP address of the newly created EC2 instance", "Value": { "Fn::GetAtt": [ "Ec2Instance", "PrivateIp" ] } }, "PublicDNS": { "Description": "Public DNSName of the newly created EC2 instance", "Value": { "Fn::GetAtt": [ "Ec2Instance", "PublicDnsName" ] } }, "PrivateDNS": { "Description": "Private DNSName of the newly created EC2 instance", "Value": { "Fn::GetAtt": [ "Ec2Instance", "PrivateDnsName" ] } } } } class TestCloudformationConnection(unittest.TestCase): def setUp(self): self.connection = CloudFormationConnection() self.stack_name = 'testcfnstack' + str(int(time.time())) def test_large_template_stack_size(self): # See https://github.com/boto/boto/issues/1037 body = self.connection.create_stack( self.stack_name, template_body=json.dumps(BASIC_EC2_TEMPLATE)) self.addCleanup(self.connection.delete_stack, self.stack_name) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/cloudsearch/000077500000000000000000000000001225267101000205145ustar00rootroot00000000000000boto-2.20.1/tests/integration/cloudsearch/__init__.py000066400000000000000000000022271225267101000226300ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/cloudsearch/test_cert_verification.py000066400000000000000000000030511225267101000256230ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.cloudsearch class CloudSearchCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): cloudsearch = True regions = boto.cloudsearch.regions() def sample_service_call(self, conn): conn.describe_domains() boto-2.20.1/tests/integration/cloudsearch/test_layers.py000066400000000000000000000052661225267101000234350ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Tests for Layer1 of Cloudsearch """ import time from tests.unit import unittest from boto.cloudsearch.layer1 import Layer1 from boto.cloudsearch.layer2 import Layer2 from boto.regioninfo import RegionInfo class CloudSearchLayer1Test(unittest.TestCase): cloudsearch = True def setUp(self): super(CloudSearchLayer1Test, self).setUp() self.layer1 = Layer1() self.domain_name = 'test-%d' % int(time.time()) def test_create_domain(self): resp = self.layer1.create_domain(self.domain_name) self.addCleanup(self.layer1.delete_domain, self.domain_name) self.assertTrue(resp.get('created', False)) class CloudSearchLayer2Test(unittest.TestCase): cloudsearch = True def setUp(self): super(CloudSearchLayer2Test, self).setUp() self.layer2 = Layer2() self.domain_name = 'test-%d' % int(time.time()) def test_create_domain(self): domain = self.layer2.create_domain(self.domain_name) self.addCleanup(domain.delete) self.assertTrue(domain.created, False) self.assertEqual(domain.domain_name, self.domain_name) self.assertEqual(domain.num_searchable_docs, 0) def test_initialization_regression(self): us_west_2 = RegionInfo( name='us-west-2', endpoint='cloudsearch.us-west-2.amazonaws.com' ) self.layer2 = Layer2( region=us_west_2, host='cloudsearch.us-west-2.amazonaws.com' ) self.assertEqual( self.layer2.layer1.host, 'cloudsearch.us-west-2.amazonaws.com' ) boto-2.20.1/tests/integration/cloudtrail/000077500000000000000000000000001225267101000203625ustar00rootroot00000000000000boto-2.20.1/tests/integration/cloudtrail/__init__.py000066400000000000000000000000001225267101000224610ustar00rootroot00000000000000boto-2.20.1/tests/integration/cloudtrail/test_cloudtrail.py000066400000000000000000000057441225267101000241470ustar00rootroot00000000000000import boto from time import time from unittest import TestCase DEFAULT_S3_POLICY = """{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSCloudTrailAclCheck20131101", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::086441151436:root", "arn:aws:iam::113285607260:root" ] }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::" }, { "Sid": "AWSCloudTrailWrite20131101", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::086441151436:root", "arn:aws:iam::113285607260:root" ] }, "Action": "s3:PutObject", "Resource": "arn:aws:s3::://AWSLogs//*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ] }""" class TestCloudTrail(TestCase): def test_cloudtrail(self): cloudtrail = boto.connect_cloudtrail() # Don't delete existing customer data! res = cloudtrail.describe_trails() if len(res['trailList']): self.fail('A trail already exists on this account!') # Who am I? 
iam = boto.connect_iam() response = iam.get_user() account_id = response['get_user_response']['get_user_result'] \ ['user']['user_id'] # Setup a new bucket s3 = boto.connect_s3() bucket_name = 'cloudtrail-integ-{0}'.format(time()) policy = DEFAULT_S3_POLICY.replace('<BucketName>', bucket_name)\ .replace('<CustomerAccountID>', account_id)\ .replace('<Prefix>/', '') b = s3.create_bucket(bucket_name) b.set_policy(policy) # Setup CloudTrail cloudtrail.create_trail(trail={'Name': 'test', 'S3BucketName': bucket_name}) cloudtrail.update_trail(trail={'Name': 'test', 'IncludeGlobalServiceEvents': False}) trails = cloudtrail.describe_trails() self.assertEqual('test', trails['trailList'][0]['Name']) self.assertFalse(trails['trailList'][0]['IncludeGlobalServiceEvents']) cloudtrail.start_logging(name='test') status = cloudtrail.get_trail_status(name='test') self.assertTrue(status['IsLogging']) cloudtrail.stop_logging(name='test') status = cloudtrail.get_trail_status(name='test') self.assertFalse(status['IsLogging']) # Clean up cloudtrail.delete_trail(name='test') for key in b.list(): key.delete() s3.delete_bucket(bucket_name) boto-2.20.1/tests/integration/datapipeline/000077500000000000000000000000001225267101000206575ustar00rootroot00000000000000boto-2.20.1/tests/integration/datapipeline/test_layer1.py000066400000000000000000000125051225267101000234700ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE.
# import time from tests.unit import unittest from boto.datapipeline import layer1 class TestDataPipeline(unittest.TestCase): datapipeline = True def setUp(self): self.connection = layer1.DataPipelineConnection() self.sample_pipeline_objects = [ {'fields': [ {'key': 'workerGroup', 'stringValue': 'MyworkerGroup'}], 'id': 'Default', 'name': 'Default'}, {'fields': [ {'key': 'startDateTime', 'stringValue': '2012-09-25T17:00:00'}, {'key': 'type', 'stringValue': 'Schedule'}, {'key': 'period', 'stringValue': '1 hour'}, {'key': 'endDateTime', 'stringValue': '2012-09-25T18:00:00'}], 'id': 'Schedule', 'name': 'Schedule'}, {'fields': [ {'key': 'type', 'stringValue': 'ShellCommandActivity'}, {'key': 'command', 'stringValue': 'echo hello'}, {'key': 'parent', 'refValue': 'Default'}, {'key': 'schedule', 'refValue': 'Schedule'}], 'id': 'SayHello', 'name': 'SayHello'} ] self.connection.auth_service_name = 'datapipeline' def create_pipeline(self, name, unique_id, description=None): response = self.connection.create_pipeline(name, unique_id, description) pipeline_id = response['pipelineId'] self.addCleanup(self.connection.delete_pipeline, pipeline_id) return pipeline_id def get_pipeline_state(self, pipeline_id): response = self.connection.describe_pipelines([pipeline_id]) for attr in response['pipelineDescriptionList'][0]['fields']: if attr['key'] == '@pipelineState': return attr['stringValue'] def test_can_create_and_delete_a_pipeline(self): response = self.connection.create_pipeline('name', 'unique_id', 'description') self.connection.delete_pipeline(response['pipelineId']) def test_validate_pipeline(self): pipeline_id = self.create_pipeline('name2', 'unique_id2') self.connection.validate_pipeline_definition( self.sample_pipeline_objects, pipeline_id) def test_put_pipeline_definition(self): pipeline_id = self.create_pipeline('name3', 'unique_id3') self.connection.put_pipeline_definition(self.sample_pipeline_objects, pipeline_id) # We should now be able to get the pipeline definition and see # that it matches what we put. 
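# (For context: get_pipeline_definition returns the stored definition under a 'pipelineObjects' key, so the assertions below can compare it field by field against the sample objects passed to put_pipeline_definition.)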
response = self.connection.get_pipeline_definition(pipeline_id) objects = response['pipelineObjects'] self.assertEqual(len(objects), 3) self.assertEqual(objects[0]['id'], 'Default') self.assertEqual(objects[0]['name'], 'Default') self.assertEqual(objects[0]['fields'], [{'key': 'workerGroup', 'stringValue': 'MyworkerGroup'}]) def test_activate_pipeline(self): pipeline_id = self.create_pipeline('name4', 'unique_id4') self.connection.put_pipeline_definition(self.sample_pipeline_objects, pipeline_id) self.connection.activate_pipeline(pipeline_id) attempts = 0 state = self.get_pipeline_state(pipeline_id) while state != 'SCHEDULED' and attempts < 10: time.sleep(10) attempts += 1 state = self.get_pipeline_state(pipeline_id) if state != 'SCHEDULED': self.fail("Pipeline did not become scheduled " "after 10 attempts.") objects = self.connection.describe_objects(['Default'], pipeline_id) field = objects['pipelineObjects'][0]['fields'][0] self.assertDictEqual(field, {'stringValue': 'COMPONENT', 'key': '@sphere'}) def test_list_pipelines(self): pipeline_id = self.create_pipeline('name5', 'unique_id5') pipeline_id_list = [p['id'] for p in self.connection.list_pipelines()['pipelineIdList']] self.assertTrue(pipeline_id in pipeline_id_list) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/directconnect/000077500000000000000000000000001225267101000210445ustar00rootroot00000000000000boto-2.20.1/tests/integration/directconnect/__init__.py000066400000000000000000000000001225267101000231430ustar00rootroot00000000000000boto-2.20.1/tests/integration/directconnect/test_directconnect.py000066400000000000000000000030251225267101000253010ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import boto from unittest import TestCase class DirectConnectTest(TestCase): """ A very basic test to make sure signatures and basic calls work.
""" def test_basic(self): conn = boto.connect_directconnect() response = conn.describe_connections() self.assertTrue(response) self.assertTrue('connections' in response) self.assertIsInstance(response['connections'], list) boto-2.20.1/tests/integration/dynamodb/000077500000000000000000000000001225267101000200155ustar00rootroot00000000000000boto-2.20.1/tests/integration/dynamodb/__init__.py000066400000000000000000000021131225267101000221230ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/dynamodb/test_cert_verification.py000066400000000000000000000030371225267101000251300ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.dynamodb class DynamoDBCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): dynamodb = True regions = boto.dynamodb.regions() def sample_service_call(self, conn): conn.layer1.list_tables() boto-2.20.1/tests/integration/dynamodb/test_layer1.py000066400000000000000000000267601225267101000226360ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests for Layer1 of DynamoDB """ import time import base64 from tests.unit import unittest from boto.dynamodb.exceptions import DynamoDBKeyNotFoundError from boto.dynamodb.exceptions import DynamoDBConditionalCheckFailedError from boto.dynamodb.exceptions import DynamoDBValidationError from boto.dynamodb.layer1 import Layer1 class DynamoDBLayer1Test(unittest.TestCase): dynamodb = True def setUp(self): self.dynamodb = Layer1() self.table_name = 'test-%d' % int(time.time()) self.hash_key_name = 'forum_name' self.hash_key_type = 'S' self.range_key_name = 'subject' self.range_key_type = 'S' self.read_units = 5 self.write_units = 5 self.schema = {'HashKeyElement': {'AttributeName': self.hash_key_name, 'AttributeType': self.hash_key_type}, 'RangeKeyElement': {'AttributeName': self.range_key_name, 'AttributeType': self.range_key_type}} self.provisioned_throughput = {'ReadCapacityUnits': self.read_units, 'WriteCapacityUnits': self.write_units} def tearDown(self): pass def create_table(self, table_name, schema, provisioned_throughput): result = self.dynamodb.create_table(table_name, schema, provisioned_throughput) self.addCleanup(self.dynamodb.delete_table, table_name) return result def test_layer1_basic(self): print '--- running DynamoDB Layer1 tests ---' c = self.dynamodb # First create a table table_name = self.table_name hash_key_name = self.hash_key_name hash_key_type = self.hash_key_type range_key_name = self.range_key_name range_key_type = self.range_key_type read_units = self.read_units write_units = self.write_units schema = self.schema provisioned_throughput = self.provisioned_throughput result = self.create_table(table_name, schema, provisioned_throughput) assert result['TableDescription']['TableName'] == table_name result_schema = result['TableDescription']['KeySchema'] assert result_schema['HashKeyElement']['AttributeName'] == hash_key_name assert result_schema['HashKeyElement']['AttributeType'] == hash_key_type assert result_schema['RangeKeyElement']['AttributeName'] == range_key_name assert result_schema['RangeKeyElement']['AttributeType'] == range_key_type result_thruput = result['TableDescription']['ProvisionedThroughput'] assert result_thruput['ReadCapacityUnits'] == read_units assert result_thruput['WriteCapacityUnits'] == write_units # Wait for table to become active result = c.describe_table(table_name) while result['Table']['TableStatus'] != 'ACTIVE': time.sleep(5) result = c.describe_table(table_name) # List tables and make sure new one is there 
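# (list_tables returns a dict whose 'TableNames' entry holds the table names; accounts with many tables may also see a 'LastEvaluatedTableName' marker for pagination, which this test can safely ignore.)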
result = c.list_tables() assert table_name in result['TableNames'] # Update the table's ProvisionedThroughput new_read_units = 10 new_write_units = 5 new_provisioned_throughput = {'ReadCapacityUnits': new_read_units, 'WriteCapacityUnits': new_write_units} result = c.update_table(table_name, new_provisioned_throughput) # Wait for table to be updated result = c.describe_table(table_name) while result['Table']['TableStatus'] == 'UPDATING': time.sleep(5) result = c.describe_table(table_name) result_thruput = result['Table']['ProvisionedThroughput'] assert result_thruput['ReadCapacityUnits'] == new_read_units assert result_thruput['WriteCapacityUnits'] == new_write_units # Put an item item1_key = 'Amazon DynamoDB' item1_range = 'DynamoDB Thread 1' item1_data = { hash_key_name: {hash_key_type: item1_key}, range_key_name: {range_key_type: item1_range}, 'Message': {'S': 'DynamoDB thread 1 message text'}, 'LastPostedBy': {'S': 'User A'}, 'Views': {'N': '0'}, 'Replies': {'N': '0'}, 'Answered': {'N': '0'}, 'Tags': {'SS': ["index", "primarykey", "table"]}, 'LastPostDateTime': {'S': '12/9/2011 11:36:03 PM'} } result = c.put_item(table_name, item1_data) # Now do a consistent read and check results key1 = {'HashKeyElement': {hash_key_type: item1_key}, 'RangeKeyElement': {range_key_type: item1_range}} result = c.get_item(table_name, key=key1, consistent_read=True) for name in item1_data: assert name in result['Item'] # Try to get an item that does not exist. invalid_key = {'HashKeyElement': {hash_key_type: 'bogus_key'}, 'RangeKeyElement': {range_key_type: item1_range}} self.assertRaises(DynamoDBKeyNotFoundError, c.get_item, table_name, key=invalid_key) # Try retrieving only select attributes attributes = ['Message', 'Views'] result = c.get_item(table_name, key=key1, consistent_read=True, attributes_to_get=attributes) for name in result['Item']: assert name in attributes # Try to delete the item with the wrong Expected value expected = {'Views': {'Value': {'N': '1'}}} self.assertRaises(DynamoDBConditionalCheckFailedError, c.delete_item, table_name, key=key1, expected=expected) # Now update the existing object attribute_updates = {'Views': {'Value': {'N': '5'}, 'Action': 'PUT'}, 'Tags': {'Value': {'SS': ['foobar']}, 'Action': 'ADD'}} result = c.update_item(table_name, key=key1, attribute_updates=attribute_updates) # Try to update an item in a fashion that makes it too large. # The new message text is the item size limit minus 32 bytes and # the current object is larger than 32 bytes.
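# (Background for the arithmetic below: at the time of writing, DynamoDB capped an item at 64 KB, so zfill-padding the message to 64*1024 - 32 bytes guarantees the update overflows the limit once the rest of the item is counted.)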
item_size_overflow_text = 'Text to be padded'.zfill(64*1024-32) attribute_updates = {'Message': {'Value': {'S': item_size_overflow_text}, 'Action': 'PUT'}} self.assertRaises(DynamoDBValidationError, c.update_item, table_name, key=key1, attribute_updates=attribute_updates) # Put a few more items into the table item2_key = 'Amazon DynamoDB' item2_range = 'DynamoDB Thread 2' item2_data = { hash_key_name: {hash_key_type: item2_key}, range_key_name: {range_key_type: item2_range}, 'Message': {'S': 'DynamoDB thread 2 message text'}, 'LastPostedBy': {'S': 'User A'}, 'Views': {'N': '0'}, 'Replies': {'N': '0'}, 'Answered': {'N': '0'}, 'Tags': {'SS': ["index", "primarykey", "table"]}, 'LastPostDateTime': {'S': '12/9/2011 11:36:03 PM'} } result = c.put_item(table_name, item2_data) key2 = {'HashKeyElement': {hash_key_type: item2_key}, 'RangeKeyElement': {range_key_type: item2_range}} item3_key = 'Amazon S3' item3_range = 'S3 Thread 1' item3_data = { hash_key_name: {hash_key_type: item3_key}, range_key_name: {range_key_type: item3_range}, 'Message': {'S': 'S3 Thread 1 message text'}, 'LastPostedBy': {'S': 'User A'}, 'Views': {'N': '0'}, 'Replies': {'N': '0'}, 'Answered': {'N': '0'}, 'Tags': {'SS': ['largeobject', 'multipart upload']}, 'LastPostDateTime': {'S': '12/9/2011 11:36:03 PM'} } result = c.put_item(table_name, item3_data) key3 = {'HashKeyElement': {hash_key_type: item3_key}, 'RangeKeyElement': {range_key_type: item3_range}} # Try a few queries result = c.query(table_name, {'S': 'Amazon DynamoDB'}, {'AttributeValueList': [{'S': 'DynamoDB'}], 'ComparisonOperator': 'BEGINS_WITH'}) assert 'Count' in result assert result['Count'] == 2 # Try a few scans result = c.scan(table_name, {'Tags': {'AttributeValueList':[{'S': 'table'}], 'ComparisonOperator': 'CONTAINS'}}) assert 'Count' in result assert result['Count'] == 2 # Now delete the items result = c.delete_item(table_name, key=key1) result = c.delete_item(table_name, key=key2) result = c.delete_item(table_name, key=key3) print '--- tests completed ---' def test_binary_attributes(self): c = self.dynamodb result = self.create_table(self.table_name, self.schema, self.provisioned_throughput) # Wait for table to become active result = c.describe_table(self.table_name) while result['Table']['TableStatus'] != 'ACTIVE': time.sleep(5) result = c.describe_table(self.table_name) # Put an item item1_key = 'Amazon DynamoDB' item1_range = 'DynamoDB Thread 1' item1_data = { self.hash_key_name: {self.hash_key_type: item1_key}, self.range_key_name: {self.range_key_type: item1_range}, 'Message': {'S': 'DynamoDB thread 1 message text'}, 'LastPostedBy': {'S': 'User A'}, 'Views': {'N': '0'}, 'Replies': {'N': '0'}, 'BinaryData': {'B': base64.b64encode(bytes('\x01\x02\x03\x04'))}, 'Answered': {'N': '0'}, 'Tags': {'SS': ["index", "primarykey", "table"]}, 'LastPostDateTime': {'S': '12/9/2011 11:36:03 PM'} } result = c.put_item(self.table_name, item1_data) # Now do a consistent read and check results key1 = {'HashKeyElement': {self.hash_key_type: item1_key}, 'RangeKeyElement': {self.range_key_type: item1_range}} result = c.get_item(self.table_name, key=key1, consistent_read=True) self.assertEqual(result['Item']['BinaryData'], {'B': base64.b64encode(bytes('\x01\x02\x03\x04'))}) boto-2.20.1/tests/integration/dynamodb/test_layer2.py000066400000000000000000000463201225267101000226310ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests for Layer2 of Amazon DynamoDB """ import unittest import time import uuid from decimal import Decimal from boto.dynamodb.exceptions import DynamoDBKeyNotFoundError from boto.dynamodb.exceptions import DynamoDBConditionalCheckFailedError from boto.dynamodb.layer2 import Layer2 from boto.dynamodb.types import get_dynamodb_type, Binary from boto.dynamodb.condition import BEGINS_WITH, CONTAINS, GT class DynamoDBLayer2Test(unittest.TestCase): dynamodb = True def setUp(self): self.dynamodb = Layer2() self.hash_key_name = 'forum_name' self.hash_key_proto_value = '' self.range_key_name = 'subject' self.range_key_proto_value = '' self.table_name = 'sample_data_%s' % int(time.time()) def create_sample_table(self): schema = self.dynamodb.create_schema( self.hash_key_name, self.hash_key_proto_value, self.range_key_name, self.range_key_proto_value) table = self.create_table(self.table_name, schema, 5, 5) table.refresh(wait_for_active=True) return table def create_table(self, table_name, schema, read_units, write_units): result = self.dynamodb.create_table(table_name, schema, read_units, write_units) self.addCleanup(self.dynamodb.delete_table, result) return result def test_layer2_basic(self): print '--- running Amazon DynamoDB Layer2 tests ---' c = self.dynamodb # First create a schema for the table schema = c.create_schema(self.hash_key_name, self.hash_key_proto_value, self.range_key_name, self.range_key_proto_value) # Create another schema without a range key schema2 = c.create_schema('post_id', '') # Now create a table index = int(time.time()) table_name = 'test-%d' % index read_units = 5 write_units = 5 table = self.create_table(table_name, schema, read_units, write_units) assert table.name == table_name assert table.schema.hash_key_name == self.hash_key_name assert table.schema.hash_key_type == get_dynamodb_type(self.hash_key_proto_value) assert table.schema.range_key_name == self.range_key_name assert table.schema.range_key_type == get_dynamodb_type(self.range_key_proto_value) assert table.read_units == read_units assert table.write_units == write_units assert table.item_count == 0 assert table.size_bytes == 0 # Create the second table table2_name = 'test-%d' % (index + 1) table2 = self.create_table(table2_name, schema2, read_units, write_units) # Wait for table to become active table.refresh(wait_for_active=True) table2.refresh(wait_for_active=True) # List tables and make sure new one is there table_names = c.list_tables() assert table_name in table_names assert 
table2_name in table_names # Update the tables ProvisionedThroughput new_read_units = 10 new_write_units = 5 table.update_throughput(new_read_units, new_write_units) # Wait for table to be updated table.refresh(wait_for_active=True) assert table.read_units == new_read_units assert table.write_units == new_write_units # Put an item item1_key = 'Amazon DynamoDB' item1_range = 'DynamoDB Thread 1' item1_attrs = { 'Message': 'DynamoDB thread 1 message text', 'LastPostedBy': 'User A', 'Views': 0, 'Replies': 0, 'Answered': 0, 'Public': True, 'Tags': set(['index', 'primarykey', 'table']), 'LastPostDateTime': '12/9/2011 11:36:03 PM'} # Test a few corner cases with new_item # Try supplying a hash_key as an arg and as an item in attrs item1_attrs[self.hash_key_name] = 'foo' foobar_item = table.new_item(item1_key, item1_range, item1_attrs) assert foobar_item.hash_key == item1_key # Try supplying a range_key as an arg and as an item in attrs item1_attrs[self.range_key_name] = 'bar' foobar_item = table.new_item(item1_key, item1_range, item1_attrs) assert foobar_item.range_key == item1_range # Try supplying hash and range key in attrs dict foobar_item = table.new_item(attrs=item1_attrs) assert foobar_item.hash_key == 'foo' assert foobar_item.range_key == 'bar' del item1_attrs[self.hash_key_name] del item1_attrs[self.range_key_name] item1 = table.new_item(item1_key, item1_range, item1_attrs) # make sure the put() succeeds try: item1.put() except c.layer1.ResponseError, e: raise Exception("Item put failed: %s" % e) # Try to get an item that does not exist. self.assertRaises(DynamoDBKeyNotFoundError, table.get_item, 'bogus_key', item1_range) # Now do a consistent read and check results item1_copy = table.get_item(item1_key, item1_range, consistent_read=True) assert item1_copy.hash_key == item1.hash_key assert item1_copy.range_key == item1.range_key for attr_name in item1_attrs: val = item1_copy[attr_name] if isinstance(val, (int, long, float, basestring)): assert val == item1[attr_name] # Try retrieving only select attributes attributes = ['Message', 'Views'] item1_small = table.get_item(item1_key, item1_range, attributes_to_get=attributes, consistent_read=True) for attr_name in item1_small: # The item will include the attributes we asked for as # well as the hashkey and rangekey, so filter those out. 
if attr_name not in (item1_small.hash_key_name, item1_small.range_key_name): assert attr_name in attributes self.assertTrue(table.has_item(item1_key, range_key=item1_range, consistent_read=True)) # Try to delete the item with the wrong Expected value expected = {'Views': 1} self.assertRaises(DynamoDBConditionalCheckFailedError, item1.delete, expected_value=expected) # Try to delete a value while expecting a non-existent attribute expected = {'FooBar': True} try: item1.delete(expected_value=expected) except c.layer1.ResponseError, e: pass # Now update the existing object item1.add_attribute('Replies', 2) removed_attr = 'Public' item1.delete_attribute(removed_attr) removed_tag = item1_attrs['Tags'].copy().pop() item1.delete_attribute('Tags', set([removed_tag])) replies_by_set = set(['Adam', 'Arnie']) item1.put_attribute('RepliesBy', replies_by_set) retvals = item1.save(return_values='ALL_OLD') # Need more tests here for variations on return_values assert 'Attributes' in retvals # Check for correct updates item1_updated = table.get_item(item1_key, item1_range, consistent_read=True) assert item1_updated['Replies'] == item1_attrs['Replies'] + 2 self.assertFalse(removed_attr in item1_updated) self.assertTrue(removed_tag not in item1_updated['Tags']) self.assertTrue('RepliesBy' in item1_updated) self.assertTrue(item1_updated['RepliesBy'] == replies_by_set) # Put a few more items into the table item2_key = 'Amazon DynamoDB' item2_range = 'DynamoDB Thread 2' item2_attrs = { 'Message': 'DynamoDB thread 2 message text', 'LastPostedBy': 'User A', 'Views': 0, 'Replies': 0, 'Answered': 0, 'Tags': set(["index", "primarykey", "table"]), 'LastPost2DateTime': '12/9/2011 11:36:03 PM'} item2 = table.new_item(item2_key, item2_range, item2_attrs) item2.put() item3_key = 'Amazon S3' item3_range = 'S3 Thread 1' item3_attrs = { 'Message': 'S3 Thread 1 message text', 'LastPostedBy': 'User A', 'Views': 0, 'Replies': 0, 'Answered': 0, 'Tags': set(['largeobject', 'multipart upload']), 'LastPostDateTime': '12/9/2011 11:36:03 PM' } item3 = table.new_item(item3_key, item3_range, item3_attrs) item3.put() # Put an item into the second table table2_item1_key = uuid.uuid4().hex table2_item1_attrs = { 'DateTimePosted': '25/1/2011 12:34:56 PM', 'Text': 'I think boto rocks and so does DynamoDB' } table2_item1 = table2.new_item(table2_item1_key, attrs=table2_item1_attrs) table2_item1.put() # Try a few queries items = table.query('Amazon DynamoDB', range_key_condition=BEGINS_WITH('DynamoDB')) n = 0 for item in items: n += 1 assert n == 2 assert items.consumed_units > 0 items = table.query('Amazon DynamoDB', range_key_condition=BEGINS_WITH('DynamoDB'), request_limit=1, max_results=1) n = 0 for item in items: n += 1 assert n == 1 assert items.consumed_units > 0 # Try a few scans items = table.scan() n = 0 for item in items: n += 1 assert n == 3 assert items.consumed_units > 0 items = table.scan(scan_filter={'Replies': GT(0)}) n = 0 for item in items: n += 1 assert n == 1 assert items.consumed_units > 0 # Test some integer and float attributes integer_value = 42 float_value = 345.678 item3['IntAttr'] = integer_value item3['FloatAttr'] = float_value # Test booleans item3['TrueBoolean'] = True item3['FalseBoolean'] = False # Test some set values integer_set = set([1, 2, 3, 4, 5]) float_set = set([1.1, 2.2, 3.3, 4.4, 5.5]) mixed_set = set([1, 2, 3.3, 4, 5.555]) str_set = set(['foo', 'bar', 'fie', 'baz']) item3['IntSetAttr'] = integer_set item3['FloatSetAttr'] = float_set item3['MixedSetAttr'] = mixed_set item3['StrSetAttr'] = str_set
item3.put() # Now do a consistent read item4 = table.get_item(item3_key, item3_range, consistent_read=True) assert item4['IntAttr'] == integer_value assert item4['FloatAttr'] == float_value assert bool(item4['TrueBoolean']) is True assert bool(item4['FalseBoolean']) is False # The values will not necessarily be in the same order as when # we wrote them to the DB. for i in item4['IntSetAttr']: assert i in integer_set for i in item4['FloatSetAttr']: assert i in float_set for i in item4['MixedSetAttr']: assert i in mixed_set for i in item4['StrSetAttr']: assert i in str_set # Try a batch get batch_list = c.new_batch_list() batch_list.add_batch(table, [(item2_key, item2_range), (item3_key, item3_range)]) response = batch_list.submit() assert len(response['Responses'][table.name]['Items']) == 2 # Try an empty batch get batch_list = c.new_batch_list() batch_list.add_batch(table, []) response = batch_list.submit() assert response == {} # Try a few batch write operations item4_key = 'Amazon S3' item4_range = 'S3 Thread 2' item4_attrs = { 'Message': 'S3 Thread 2 message text', 'LastPostedBy': 'User A', 'Views': 0, 'Replies': 0, 'Answered': 0, 'Tags': set(['largeobject', 'multipart upload']), 'LastPostDateTime': '12/9/2011 11:36:03 PM' } item5_key = 'Amazon S3' item5_range = 'S3 Thread 3' item5_attrs = { 'Message': 'S3 Thread 3 message text', 'LastPostedBy': 'User A', 'Views': 0, 'Replies': 0, 'Answered': 0, 'Tags': set(['largeobject', 'multipart upload']), 'LastPostDateTime': '12/9/2011 11:36:03 PM' } item4 = table.new_item(item4_key, item4_range, item4_attrs) item5 = table.new_item(item5_key, item5_range, item5_attrs) batch_list = c.new_batch_write_list() batch_list.add_batch(table, puts=[item4, item5]) response = batch_list.submit() # should really check for unprocessed items # Do some generator gymnastics results = table.scan(scan_filter={'Tags': CONTAINS('table')}) assert results.scanned_count == 5 results = table.scan(request_limit=2, max_results=5) assert results.count == 2 for item in results: if results.count == 2: assert results.remaining == 4 results.remaining -= 2 results.next_response() else: assert results.count == 4 assert results.remaining in (0, 1) assert results.count == 4 results = table.scan(request_limit=6, max_results=4) assert len(list(results)) == 4 assert results.count == 4 batch_list = c.new_batch_write_list() batch_list.add_batch(table, deletes=[(item4_key, item4_range), (item5_key, item5_range)]) response = batch_list.submit() # Try queries results = table.query('Amazon DynamoDB', range_key_condition=BEGINS_WITH('DynamoDB')) n = 0 for item in results: n += 1 assert n == 2 # Try to delete the item with the right Expected value expected = {'Views': 0} item1.delete(expected_value=expected) self.assertFalse(table.has_item(item1_key, range_key=item1_range, consistent_read=True)) # Now delete the remaining items ret_vals = item2.delete(return_values='ALL_OLD') # some additional checks here would be useful assert ret_vals['Attributes'][self.hash_key_name] == item2_key assert ret_vals['Attributes'][self.range_key_name] == item2_range item3.delete() table2_item1.delete() print '--- tests completed ---' def test_binary_attrs(self): c = self.dynamodb schema = c.create_schema(self.hash_key_name, self.hash_key_proto_value, self.range_key_name, self.range_key_proto_value) index = int(time.time()) table_name = 'test-%d' % index read_units = 5 write_units = 5 table = self.create_table(table_name, schema, read_units, write_units) table.refresh(wait_for_active=True) item1_key = 'Amazon S3' 
item1_range = 'S3 Thread 1' item1_attrs = { 'Message': 'S3 Thread 1 message text', 'LastPostedBy': 'User A', 'Views': 0, 'Replies': 0, 'Answered': 0, 'BinaryData': Binary('\x01\x02\x03\x04'), 'BinarySequence': set([Binary('\x01\x02'), Binary('\x03\x04')]), 'Tags': set(['largeobject', 'multipart upload']), 'LastPostDateTime': '12/9/2011 11:36:03 PM' } item1 = table.new_item(item1_key, item1_range, item1_attrs) item1.put() retrieved = table.get_item(item1_key, item1_range, consistent_read=True) self.assertEqual(retrieved['Message'], 'S3 Thread 1 message text') self.assertEqual(retrieved['Views'], 0) self.assertEqual(retrieved['Tags'], set(['largeobject', 'multipart upload'])) self.assertEqual(retrieved['BinaryData'], Binary('\x01\x02\x03\x04')) # Also comparable directly to bytes: self.assertEqual(retrieved['BinaryData'], bytes('\x01\x02\x03\x04')) self.assertEqual(retrieved['BinarySequence'], set([Binary('\x01\x02'), Binary('\x03\x04')])) def test_put_decimal_attrs(self): self.dynamodb.use_decimals() table = self.create_sample_table() item = table.new_item('foo', 'bar') item['decimalvalue'] = Decimal('1.12345678912345') item.put() retrieved = table.get_item('foo', 'bar') self.assertEqual(retrieved['decimalvalue'], Decimal('1.12345678912345')) def test_lossy_float_conversion(self): table = self.create_sample_table() item = table.new_item('foo', 'bar') item['floatvalue'] = 1.12345678912345 item.put() retrieved = table.get_item('foo', 'bar')['floatvalue'] # Notice how this is not equal to the original value. self.assertNotEqual(1.12345678912345, retrieved) # Instead, it's truncated: self.assertEqual(1.12345678912, retrieved) def test_large_integers(self): # It's not just floating-point numbers; large integers # can trigger rounding issues. self.dynamodb.use_decimals() table = self.create_sample_table() item = table.new_item('foo', 'bar') item['decimalvalue'] = Decimal('129271300103398600') item.put() retrieved = table.get_item('foo', 'bar') self.assertEqual(retrieved['decimalvalue'], Decimal('129271300103398600')) # Also comparable directly to an int. self.assertEqual(retrieved['decimalvalue'], 129271300103398600) def test_put_single_letter_attr(self): # When an attr that is a single letter is added, if it overlaps with # the built-in "types", the decoding used to fail. Assert that # it now works correctly. table = self.create_sample_table() item = table.new_item('foo', 'foo1') item.put_attribute('b', 4) stored = item.save(return_values='UPDATED_NEW') self.assertEqual(stored['Attributes'], {'b': 4}) boto-2.20.1/tests/integration/dynamodb/test_table.py000066400000000000000000000067411225267101000225230ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import time from tests.unit import unittest from boto.dynamodb.layer2 import Layer2 from boto.dynamodb.table import Table from boto.dynamodb.schema import Schema class TestDynamoDBTable(unittest.TestCase): dynamodb = True def setUp(self): self.dynamodb = Layer2() self.schema = Schema.create(('foo', 'N'), ('bar', 'S')) self.table_name = 'testtable%s' % int(time.time()) def create_table(self, table_name, schema, read_units, write_units): result = self.dynamodb.create_table(table_name, schema, read_units, write_units) self.addCleanup(self.dynamodb.delete_table, result) return result def assertAllEqual(self, *items): first = items[0] for item in items[1:]: self.assertEqual(first, item) def test_table_retrieval_parity(self): created_table = self.dynamodb.create_table( self.table_name, self.schema, 1, 1) created_table.refresh(wait_for_active=True) retrieved_table = self.dynamodb.get_table(self.table_name) constructed_table = self.dynamodb.table_from_schema(self.table_name, self.schema) # All three tables should have the same name # and schema attributes. self.assertAllEqual(created_table.name, retrieved_table.name, constructed_table.name) self.assertAllEqual(created_table.schema, retrieved_table.schema, constructed_table.schema) # However for create_time, status, read/write units, # only the created/retrieved table will have equal # values. self.assertEqual(created_table.create_time, retrieved_table.create_time) self.assertEqual(created_table.status, retrieved_table.status) self.assertEqual(created_table.read_units, retrieved_table.read_units) self.assertEqual(created_table.write_units, retrieved_table.write_units) # The constructed table will have values of None. self.assertIsNone(constructed_table.create_time) self.assertIsNone(constructed_table.status) self.assertIsNone(constructed_table.read_units) self.assertIsNone(constructed_table.write_units) boto-2.20.1/tests/integration/dynamodb2/000077500000000000000000000000001225267101000200775ustar00rootroot00000000000000boto-2.20.1/tests/integration/dynamodb2/__init__.py000066400000000000000000000000001225267101000221760ustar00rootroot00000000000000boto-2.20.1/tests/integration/dynamodb2/test_cert_verification.py000066400000000000000000000030341225267101000252070ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.dynamodb2 class DynamoDB2CertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): dynamodb2 = True regions = boto.dynamodb2.regions() def sample_service_call(self, conn): conn.list_tables() boto-2.20.1/tests/integration/dynamodb2/test_highlevel.py000066400000000000000000000265641225267101000234740ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests for DynamoDB v2 high-level abstractions. """ from __future__ import with_statement import time from tests.unit import unittest from boto.dynamodb2 import exceptions from boto.dynamodb2.fields import HashKey, RangeKey, KeysOnlyIndex from boto.dynamodb2.items import Item from boto.dynamodb2.table import Table from boto.dynamodb2.types import NUMBER class DynamoDBv2Test(unittest.TestCase): dynamodb = True def test_integration(self): # Test creating a full table with all options specified. users = Table.create('users', schema=[ HashKey('username'), RangeKey('friend_count', data_type=NUMBER) ], throughput={ 'read': 5, 'write': 5, }, indexes=[ KeysOnlyIndex('LastNameIndex', parts=[ HashKey('username'), RangeKey('last_name') ]), ]) self.addCleanup(users.delete) self.assertEqual(len(users.schema), 2) self.assertEqual(users.throughput['read'], 5) # Wait for it. time.sleep(60) # Make sure things line up if we're introspecting the table. users_hit_api = Table('users') users_hit_api.describe() self.assertEqual(len(users.schema), len(users_hit_api.schema)) self.assertEqual(users.throughput, users_hit_api.throughput) self.assertEqual(len(users.indexes), len(users_hit_api.indexes)) # Test putting some items individually. 
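# (put_item writes one record at a time; Table.put_item also takes an overwrite flag for replacing an existing item, while the batch_write block below is the bulk path.)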
users.put_item(data={ 'username': 'johndoe', 'first_name': 'John', 'last_name': 'Doe', 'friend_count': 4 }) users.put_item(data={ 'username': 'alice', 'first_name': 'Alice', 'last_name': 'Expert', 'friend_count': 2 }) time.sleep(5) # Test batch writing. with users.batch_write() as batch: batch.put_item({ 'username': 'jane', 'first_name': 'Jane', 'last_name': 'Doe', 'friend_count': 3 }) batch.delete_item(username='alice', friend_count=2) batch.put_item({ 'username': 'bob', 'first_name': 'Bob', 'last_name': 'Smith', 'friend_count': 1 }) time.sleep(5) # Test getting an item & updating it. # This is the "safe" variant (only write if there have been no # changes). jane = users.get_item(username='jane', friend_count=3) self.assertEqual(jane['first_name'], 'Jane') jane['last_name'] = 'Doh' self.assertTrue(jane.save()) # Test strongly consistent getting of an item. # Additionally, test the overwrite behavior. client_1_jane = users.get_item( username='jane', friend_count=3, consistent=True ) self.assertEqual(client_1_jane['first_name'], 'Jane') client_2_jane = users.get_item( username='jane', friend_count=3, consistent=True ) self.assertEqual(client_2_jane['first_name'], 'Jane') # Write & assert the ``first_name`` is gone, then... del client_1_jane['first_name'] self.assertTrue(client_1_jane.save()) check_name = users.get_item( username='jane', friend_count=3, consistent=True ) self.assertEqual(check_name['first_name'], None) # ...overwrite the data with what's in memory. client_2_jane['first_name'] = 'Joan' # Now a write that fails due to default expectations... self.assertRaises(exceptions.JSONResponseError, client_2_jane.save) # ... so we force an overwrite. self.assertTrue(client_2_jane.save(overwrite=True)) check_name_again = users.get_item( username='jane', friend_count=3, consistent=True ) self.assertEqual(check_name_again['first_name'], 'Joan') # Reset it. jane['username'] = 'jane' jane['first_name'] = 'Jane' jane['last_name'] = 'Doe' jane['friend_count'] = 3 self.assertTrue(jane.save(overwrite=True)) # Test the partial update behavior. client_3_jane = users.get_item( username='jane', friend_count=3, consistent=True ) client_4_jane = users.get_item( username='jane', friend_count=3, consistent=True ) client_3_jane['favorite_band'] = 'Feed Me' # No ``overwrite`` needed due to new data. self.assertTrue(client_3_jane.save()) # Expectations are only checked on the ``first_name``, so what wouldn't # have succeeded by default does succeed here. client_4_jane['first_name'] = 'Jacqueline' self.assertTrue(client_4_jane.partial_save()) partial_jane = users.get_item( username='jane', friend_count=3, consistent=True ) self.assertEqual(partial_jane['favorite_band'], 'Feed Me') self.assertEqual(partial_jane['first_name'], 'Jacqueline') # Reset it. jane['username'] = 'jane' jane['first_name'] = 'Jane' jane['last_name'] = 'Doe' jane['friend_count'] = 3 self.assertTrue(jane.save(overwrite=True)) # Ensure that partial saves of a brand-new object work. sadie = Item(users, data={ 'username': 'sadie', 'first_name': 'Sadie', 'favorite_band': 'Zedd', 'friend_count': 7 }) self.assertTrue(sadie.partial_save()) serverside_sadie = users.get_item( username='sadie', friend_count=7, consistent=True ) self.assertEqual(serverside_sadie['first_name'], 'Sadie') # Test the eventually consistent query.
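# (Reads default to eventual consistency; passing consistent=True, as the strongly consistent variant below does, forces a read that reflects all prior writes.)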
results = users.query( username__eq='johndoe', last_name__eq='Doe', index='LastNameIndex', attributes=('username',), reverse=True ) for res in results: self.assertTrue(res['username'] in ['johndoe',]) self.assertEqual(res.keys(), ['username']) # Test the strongly consistent query. c_results = users.query( username__eq='johndoe', last_name__eq='Doe', index='LastNameIndex', reverse=True, consistent=True ) for res in c_results: self.assertTrue(res['username'] in ['johndoe',]) # Test scans without filters. all_users = users.scan(limit=7) self.assertEqual(all_users.next()['username'], 'bob') self.assertEqual(all_users.next()['username'], 'jane') self.assertEqual(all_users.next()['username'], 'johndoe') # Test scans with a filter. filtered_users = users.scan(limit=2, username__beginswith='j') self.assertEqual(filtered_users.next()['username'], 'jane') self.assertEqual(filtered_users.next()['username'], 'johndoe') # Test deleting a single item. johndoe = users.get_item(username='johndoe', friend_count=4) johndoe.delete() # Test the eventually consistent batch get. results = users.batch_get(keys=[ {'username': 'bob', 'friend_count': 1}, {'username': 'jane', 'friend_count': 3} ]) batch_users = [] for res in results: batch_users.append(res) self.assertTrue(res['first_name'] in ['Bob', 'Jane']) self.assertEqual(len(batch_users), 2) # Test the strongly consistent batch get. c_results = users.batch_get(keys=[ {'username': 'bob', 'friend_count': 1}, {'username': 'jane', 'friend_count': 3} ], consistent=True) c_batch_users = [] for res in c_results: c_batch_users.append(res) self.assertTrue(res['first_name'] in ['Bob', 'Jane']) self.assertEqual(len(c_batch_users), 2) # Test count, but only in a weak fashion, because of lag time. self.assertTrue(users.count() > -1) # Test query count count = users.query_count( username__eq='bob', ) self.assertEqual(count, 1) # Test without LSIs (describe calls shouldn't fail). admins = Table.create('admins', schema=[ HashKey('username') ]) self.addCleanup(admins.delete) time.sleep(60) admins.describe() self.assertEqual(admins.throughput['read'], 5) self.assertEqual(admins.indexes, []) # A single query term should fail on a table with *ONLY* a HashKey. self.assertRaises( exceptions.QueryError, admins.query, username__eq='johndoe' ) # But it shouldn't break on more complex tables. res = users.query(username__eq='johndoe') # Test putting with/without sets. mau5_created = users.put_item(data={ 'username': 'mau5', 'first_name': 'dead', 'last_name': 'mau5', 'friend_count': 2, 'friends': set(['skrill', 'penny']), }) self.assertTrue(mau5_created) penny_created = users.put_item(data={ 'username': 'penny', 'first_name': 'Penny', 'friend_count': 0, 'friends': set([]), }) self.assertTrue(penny_created) def test_unprocessed_batch_writes(self): # Create a very limited table w/ low throughput. users = Table.create('slow_users', schema=[ HashKey('user_id'), ], throughput={ 'read': 1, 'write': 1, }) self.addCleanup(users.delete) # Wait for it. time.sleep(60) with users.batch_write() as batch: for i in range(500): batch.put_item(data={ 'user_id': str(i), 'name': 'Droid #{0}'.format(i), }) # Before ``__exit__`` runs, we should have a bunch of unprocessed # items. self.assertTrue(len(batch._unprocessed) > 0) # Post-__exit__, they should all be gone. self.assertEqual(len(batch._unprocessed), 0) boto-2.20.1/tests/integration/dynamodb2/test_layer1.py000066400000000000000000000263201225267101000227100ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
# All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests for Layer1 of DynamoDB v2 """ import time from tests.unit import unittest from boto.dynamodb2 import exceptions from boto.dynamodb2.layer1 import DynamoDBConnection class DynamoDBv2Layer1Test(unittest.TestCase): dynamodb = True def setUp(self): self.dynamodb = DynamoDBConnection() self.table_name = 'test-%d' % int(time.time()) self.hash_key_name = 'username' self.hash_key_type = 'S' self.range_key_name = 'date_joined' self.range_key_type = 'N' self.read_units = 5 self.write_units = 5 self.attributes = [ { 'AttributeName': self.hash_key_name, 'AttributeType': self.hash_key_type, }, { 'AttributeName': self.range_key_name, 'AttributeType': self.range_key_type, } ] self.schema = [ { 'AttributeName': self.hash_key_name, 'KeyType': 'HASH', }, { 'AttributeName': self.range_key_name, 'KeyType': 'RANGE', }, ] self.provisioned_throughput = { 'ReadCapacityUnits': self.read_units, 'WriteCapacityUnits': self.write_units, } self.lsi = [ { 'IndexName': 'MostRecentIndex', 'KeySchema': [ { 'AttributeName': self.hash_key_name, 'KeyType': 'HASH', }, { 'AttributeName': self.range_key_name, 'KeyType': 'RANGE', }, ], 'Projection': { 'ProjectionType': 'KEYS_ONLY', } } ] def create_table(self, table_name, attributes, schema, provisioned_throughput, lsi=None, wait=True): # Note: This is a slightly different ordering that makes less sense. result = self.dynamodb.create_table( attributes, table_name, schema, provisioned_throughput, local_secondary_indexes=lsi ) self.addCleanup(self.dynamodb.delete_table, table_name) if wait: while True: description = self.dynamodb.describe_table(table_name) if description['Table']['TableStatus'].lower() == 'active': return result else: time.sleep(5) else: return result def test_integrated(self): result = self.create_table( self.table_name, self.attributes, self.schema, self.provisioned_throughput, self.lsi ) self.assertEqual( result['TableDescription']['TableName'], self.table_name ) description = self.dynamodb.describe_table(self.table_name) self.assertEqual(description['Table']['ItemCount'], 0) # Create some records. record_1_data = { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, } r1_result = self.dynamodb.put_item(self.table_name, record_1_data) # Get the data. 
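# (Layer1 get_item takes the full primary key, both hash and range attributes, in the low-level {'S': ...}/{'N': ...} wire format; consistent_read=True requests a strongly consistent read.)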
record_1 = self.dynamodb.get_item(self.table_name, key={ 'username': {'S': 'johndoe'}, 'date_joined': {'N': '1366056668'}, }, consistent_read=True) self.assertEqual(record_1['Item']['username']['S'], 'johndoe') self.assertEqual(record_1['Item']['first_name']['S'], 'John') self.assertEqual(record_1['Item']['friends']['SS'], [ 'alice', 'bob', 'jane' ]) # Now in a batch. self.dynamodb.batch_write_item({ self.table_name: [ { 'PutRequest': { 'Item': { 'username': {'S': 'jane'}, 'first_name': {'S': 'Jane'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056789'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['johndoe']}, }, }, }, ] }) # Now a query. lsi_results = self.dynamodb.query( self.table_name, index_name='MostRecentIndex', key_conditions={ 'username': { 'AttributeValueList': [ {'S': 'johndoe'}, ], 'ComparisonOperator': 'EQ', }, }, consistent_read=True ) self.assertEqual(lsi_results['Count'], 1) results = self.dynamodb.query(self.table_name, key_conditions={ 'username': { 'AttributeValueList': [ {'S': 'jane'}, ], 'ComparisonOperator': 'EQ', }, 'date_joined': { 'AttributeValueList': [ {'N': '1366050000'} ], 'ComparisonOperator': 'GT', } }, consistent_read=True) self.assertEqual(results['Count'], 1) # Now a scan. results = self.dynamodb.scan(self.table_name) self.assertEqual(results['Count'], 2) s_items = sorted([res['username']['S'] for res in results['Items']]) self.assertEqual(s_items, ['jane', 'johndoe']) self.dynamodb.delete_item(self.table_name, key={ 'username': {'S': 'johndoe'}, 'date_joined': {'N': '1366056668'}, }) results = self.dynamodb.scan(self.table_name) self.assertEqual(results['Count'], 1) # Parallel scan (minus client-side threading). self.dynamodb.batch_write_item({ self.table_name: [ { 'PutRequest': { 'Item': { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'Johann'}, 'last_name': {'S': 'Does'}, 'date_joined': {'N': '1366058000'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['jane']}, }, }, }, { 'PutRequest': { 'Item': { 'username': {'S': 'alice'}, 'first_name': {'S': 'Alice'}, 'last_name': {'S': 'Expert'}, 'date_joined': {'N': '1366056800'}, 'friend_count': {'N': '2'}, 'friends': {'SS': ['johndoe', 'jane']}, }, }, }, ] }) time.sleep(20) results = self.dynamodb.scan(self.table_name, segment=0, total_segments=2) self.assertTrue(results['Count'] in [1, 2]) results = self.dynamodb.scan(self.table_name, segment=1, total_segments=2) self.assertTrue(results['Count'] in [1, 2]) def test_without_range_key(self): result = self.create_table( self.table_name, [ { 'AttributeName': self.hash_key_name, 'AttributeType': self.hash_key_type, }, ], [ { 'AttributeName': self.hash_key_name, 'KeyType': 'HASH', }, ], self.provisioned_throughput ) self.assertEqual( result['TableDescription']['TableName'], self.table_name ) description = self.dynamodb.describe_table(self.table_name) self.assertEqual(description['Table']['ItemCount'], 0) # Create some records. record_1_data = { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, } r1_result = self.dynamodb.put_item(self.table_name, record_1_data) # Now try a range-less get.
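# (With a hash-only key schema, the key dict passed to get_item contains just the hash attribute; there is no range value to supply.)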
        johndoe = self.dynamodb.get_item(self.table_name, key={
            'username': {'S': 'johndoe'},
        }, consistent_read=True)
        self.assertEqual(johndoe['Item']['username']['S'], 'johndoe')
        self.assertEqual(johndoe['Item']['first_name']['S'], 'John')
        self.assertEqual(johndoe['Item']['friends']['SS'], [
            'alice', 'bob', 'jane'
        ])

    def test_throughput_exceeded_regression(self):
        tiny_tablename = 'TinyThroughput'
        tiny = self.create_table(
            tiny_tablename,
            self.attributes,
            self.schema,
            {
                'ReadCapacityUnits': 1,
                'WriteCapacityUnits': 1,
            }
        )

        self.dynamodb.put_item(tiny_tablename, {
            'username': {'S': 'johndoe'},
            'first_name': {'S': 'John'},
            'last_name': {'S': 'Doe'},
            'date_joined': {'N': '1366056668'},
        })
        self.dynamodb.put_item(tiny_tablename, {
            'username': {'S': 'jane'},
            'first_name': {'S': 'Jane'},
            'last_name': {'S': 'Doe'},
            'date_joined': {'N': '1366056669'},
        })
        self.dynamodb.put_item(tiny_tablename, {
            'username': {'S': 'alice'},
            'first_name': {'S': 'Alice'},
            'last_name': {'S': 'Expert'},
            'date_joined': {'N': '1366057000'},
        })
        time.sleep(20)

        for i in range(100):
            # This would cause an exception due to a non-existent instance
            # variable.
            self.dynamodb.scan(tiny_tablename)
boto-2.20.1/tests/integration/ec2/000077500000000000000000000000001225267101000166715ustar00rootroot00000000000000boto-2.20.1/tests/integration/ec2/__init__.py000066400000000000000000000021201225267101000207750ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
boto-2.20.1/tests/integration/ec2/autoscale/000077500000000000000000000000001225267101000206515ustar00rootroot00000000000000boto-2.20.1/tests/integration/ec2/autoscale/__init__.py000066400000000000000000000021131225267101000227610ustar00rootroot00000000000000# Copyright (c) 2011 Reza Lotun http://reza.lotun.name
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/ec2/autoscale/test_cert_verification.py000066400000000000000000000030471225267101000257650ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.ec2.autoscale class AutoscaleCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): autoscale = True regions = boto.ec2.autoscale.regions() def sample_service_call(self, conn): conn.get_all_groups() boto-2.20.1/tests/integration/ec2/autoscale/test_connection.py000066400000000000000000000146231225267101000244270ustar00rootroot00000000000000# Copyright (c) 2011 Reza Lotun http://reza.lotun.name # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Some unit tests for the AutoscaleConnection """ import unittest import time from boto.ec2.autoscale import AutoScaleConnection from boto.ec2.autoscale.activity import Activity from boto.ec2.autoscale.group import AutoScalingGroup, ProcessType from boto.ec2.autoscale.launchconfig import LaunchConfiguration from boto.ec2.autoscale.policy import AdjustmentType, MetricCollectionTypes, ScalingPolicy from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction from boto.ec2.autoscale.instance import Instance from boto.ec2.autoscale.tag import Tag class AutoscaleConnectionTest(unittest.TestCase): ec2 = True autoscale = True def test_basic(self): # NB: as it says on the tin these are really basic tests that only # (lightly) exercise read-only behaviour - and that's only if you # have any autoscale groups to introspect. It's useful, however, to # catch simple errors print '--- running %s tests ---' % self.__class__.__name__ c = AutoScaleConnection() self.assertTrue(repr(c).startswith('AutoScaleConnection')) groups = c.get_all_groups() for group in groups: self.assertTrue(type(group), AutoScalingGroup) # get activities activities = group.get_activities() for activity in activities: self.assertEqual(type(activity), Activity) # get launch configs configs = c.get_all_launch_configurations() for config in configs: self.assertTrue(type(config), LaunchConfiguration) # get policies policies = c.get_all_policies() for policy in policies: self.assertTrue(type(policy), ScalingPolicy) # get scheduled actions actions = c.get_all_scheduled_actions() for action in actions: self.assertTrue(type(action), ScheduledUpdateGroupAction) # get instances instances = c.get_all_autoscaling_instances() for instance in instances: self.assertTrue(type(instance), Instance) # get all scaling process types ptypes = c.get_all_scaling_process_types() for ptype in ptypes: self.assertTrue(type(ptype), ProcessType) # get adjustment types adjustments = c.get_all_adjustment_types() for adjustment in adjustments: self.assertTrue(type(adjustment), AdjustmentType) # get metrics collection types types = c.get_all_metric_collection_types() self.assertTrue(type(types), MetricCollectionTypes) # create the simplest possible AutoScale group # first create the launch configuration time_string = '%d' % int(time.time()) lc_name = 'lc-%s' % time_string lc = LaunchConfiguration(name=lc_name, image_id='ami-2272864b', instance_type='t1.micro') c.create_launch_configuration(lc) found = False lcs = c.get_all_launch_configurations() for lc in lcs: if lc.name == lc_name: found = True break assert found # now create autoscaling group group_name = 'group-%s' % time_string group = AutoScalingGroup(name=group_name, launch_config=lc, availability_zones=['us-east-1a'], min_size=1, max_size=1) c.create_auto_scaling_group(group) found = False groups = c.get_all_groups() for group in groups: if group.name == group_name: found = True break assert found # now create a tag tag = Tag(key='foo', value='bar', resource_id=group_name, propagate_at_launch=True) c.create_or_update_tags([tag]) found = False tags = c.get_all_tags() for tag in tags: if tag.resource_id == group_name and tag.key == 'foo': found = True break assert found c.delete_tags([tag]) # shutdown instances and wait for them to disappear group.shutdown_instances() instances = True while instances: time.sleep(5) groups = c.get_all_groups() for group in groups: if group.name == group_name: if not group.instances: instances = False group.delete() lc.delete() found = True while found: found = False 
time.sleep(5) tags = c.get_all_tags() for tag in tags: if tag.resource_id == group_name and tag.key == 'foo': found = True assert not found print '--- tests completed ---' def test_ebs_optimized_regression(self): c = AutoScaleConnection() time_string = '%d' % int(time.time()) lc_name = 'lc-%s' % time_string lc = LaunchConfiguration( name=lc_name, image_id='ami-2272864b', instance_type='t1.micro', ebs_optimized=True ) # This failed due to the difference between native Python ``True/False`` # & the expected string variants. c.create_launch_configuration(lc) self.addCleanup(c.delete_launch_configuration, lc_name) boto-2.20.1/tests/integration/ec2/cloudwatch/000077500000000000000000000000001225267101000210265ustar00rootroot00000000000000boto-2.20.1/tests/integration/ec2/cloudwatch/__init__.py000066400000000000000000000021201225267101000231320ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/ec2/cloudwatch/test_cert_verification.py000066400000000000000000000030541225267101000261400ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. 
""" import unittest from tests.integration import ServiceCertVerificationTest import boto.ec2.cloudwatch class CloudWatchCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): cloudwatch = True regions = boto.ec2.cloudwatch.regions() def sample_service_call(self, conn): conn.describe_alarms() boto-2.20.1/tests/integration/ec2/cloudwatch/test_connection.py000066400000000000000000000270131225267101000246010ustar00rootroot00000000000000# Copyright (c) 2010 Hunter Blanks http://artifex.org/~hblanks/ # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Initial, and very limited, unit tests for CloudWatchConnection. """ import datetime import time import unittest from boto.ec2.cloudwatch import CloudWatchConnection from boto.ec2.cloudwatch.metric import Metric # HTTP response body for CloudWatchConnection.describe_alarms DESCRIBE_ALARMS_BODY = """ mynexttoken 2011-11-18T23:43:59.111Z {"version":"1.0","queryDate":"2011-11-18T23:43:59.089+0000","startDate":"2011-11-18T23:30:00.000+0000","statistic":"Maximum","period":60,"recentDatapoints":[1.0,null,null,null,null,null,null,null,null,null,1.0],"threshold":1.0} arn:aws:cloudwatch:us-east-1:1234:alarm:FancyAlarm 2011-11-18T23:43:58.489Z FancyAlarm OK 60 true AcmeCo/Cronjobs 15 1.0 Maximum arn:aws:sns:us-east-1:1234:Alerts Threshold Crossed: 2 datapoints were not less than the threshold (1.0). The most recent datapoints: [1.0, 1.0]. Job ANiceCronJob LessThanThreshold Success 2011-11-19T08:09:20.655Z {"version":"1.0","queryDate":"2011-11-19T08:09:20.633+0000","startDate":"2011-11-19T08:07:00.000+0000","statistic":"Maximum","period":60,"recentDatapoints":[1.0],"threshold":1.0} arn:aws:cloudwatch:us-east-1:1234:alarm:SuprtFancyAlarm 2011-11-19T16:20:19.687Z SuperFancyAlarm OK 60 true AcmeCo/CronJobs 60 1.0 Maximum arn:aws:sns:us-east-1:1234:alerts Threshold Crossed: 1 datapoint (1.0) was not less than the threshold (1.0). 
Job ABadCronJob GreaterThanThreshold Success f621311-1463-11e1-95c3-312389123 """ class CloudWatchConnectionTest(unittest.TestCase): ec2 = True def test_build_list_params(self): c = CloudWatchConnection() params = {} c.build_list_params( params, ['thing1', 'thing2', 'thing3'], 'ThingName%d') expected_params = { 'ThingName1': 'thing1', 'ThingName2': 'thing2', 'ThingName3': 'thing3' } self.assertEqual(params, expected_params) def test_build_put_params_one(self): c = CloudWatchConnection() params = {} c.build_put_params(params, name="N", value=1, dimensions={"D": "V"}) expected_params = { 'MetricData.member.1.MetricName': 'N', 'MetricData.member.1.Value': 1, 'MetricData.member.1.Dimensions.member.1.Name': 'D', 'MetricData.member.1.Dimensions.member.1.Value': 'V', } self.assertEqual(params, expected_params) def test_build_put_params_multiple_metrics(self): c = CloudWatchConnection() params = {} c.build_put_params(params, name=["N", "M"], value=[1, 2], dimensions={"D": "V"}) expected_params = { 'MetricData.member.1.MetricName': 'N', 'MetricData.member.1.Value': 1, 'MetricData.member.1.Dimensions.member.1.Name': 'D', 'MetricData.member.1.Dimensions.member.1.Value': 'V', 'MetricData.member.2.MetricName': 'M', 'MetricData.member.2.Value': 2, 'MetricData.member.2.Dimensions.member.1.Name': 'D', 'MetricData.member.2.Dimensions.member.1.Value': 'V', } self.assertEqual(params, expected_params) def test_build_put_params_multiple_dimensions(self): c = CloudWatchConnection() params = {} c.build_put_params(params, name="N", value=[1, 2], dimensions=[{"D": "V"}, {"D": "W"}]) expected_params = { 'MetricData.member.1.MetricName': 'N', 'MetricData.member.1.Value': 1, 'MetricData.member.1.Dimensions.member.1.Name': 'D', 'MetricData.member.1.Dimensions.member.1.Value': 'V', 'MetricData.member.2.MetricName': 'N', 'MetricData.member.2.Value': 2, 'MetricData.member.2.Dimensions.member.1.Name': 'D', 'MetricData.member.2.Dimensions.member.1.Value': 'W', } self.assertEqual(params, expected_params) def test_build_put_params_multiple_parameter_dimension(self): from collections import OrderedDict self.maxDiff = None c = CloudWatchConnection() params = {} dimensions = [OrderedDict((("D1", "V"), ("D2", "W")))] c.build_put_params(params, name="N", value=[1], dimensions=dimensions) expected_params = { 'MetricData.member.1.MetricName': 'N', 'MetricData.member.1.Value': 1, 'MetricData.member.1.Dimensions.member.1.Name': 'D1', 'MetricData.member.1.Dimensions.member.1.Value': 'V', 'MetricData.member.1.Dimensions.member.2.Name': 'D2', 'MetricData.member.1.Dimensions.member.2.Value': 'W', } self.assertEqual(params, expected_params) def test_build_get_params_multiple_parameter_dimension1(self): from collections import OrderedDict self.maxDiff = None c = CloudWatchConnection() params = {} dimensions = OrderedDict((("D1", "V"), ("D2", "W"))) c.build_dimension_param(dimensions, params) expected_params = { 'Dimensions.member.1.Name': 'D1', 'Dimensions.member.1.Value': 'V', 'Dimensions.member.2.Name': 'D2', 'Dimensions.member.2.Value': 'W', } self.assertEqual(params, expected_params) def test_build_get_params_multiple_parameter_dimension2(self): from collections import OrderedDict self.maxDiff = None c = CloudWatchConnection() params = {} dimensions = OrderedDict((("D1", ["V1", "V2"]), ("D2", "W"), ("D3", None))) c.build_dimension_param(dimensions, params) expected_params = { 'Dimensions.member.1.Name': 'D1', 'Dimensions.member.1.Value': 'V1', 'Dimensions.member.2.Name': 'D1', 'Dimensions.member.2.Value': 'V2', 
'Dimensions.member.3.Name': 'D2', 'Dimensions.member.3.Value': 'W', 'Dimensions.member.4.Name': 'D3', } self.assertEqual(params, expected_params) def test_build_put_params_invalid(self): c = CloudWatchConnection() params = {} try: c.build_put_params(params, name=["N", "M"], value=[1, 2, 3]) except: pass else: self.fail("Should not accept lists of different lengths.") def test_get_metric_statistics(self): c = CloudWatchConnection() m = c.list_metrics()[0] end = datetime.datetime.now() start = end - datetime.timedelta(hours=24*14) c.get_metric_statistics( 3600*24, start, end, m.name, m.namespace, ['Average', 'Sum']) def test_put_metric_data(self): c = CloudWatchConnection() now = datetime.datetime.now() name, namespace = 'unit-test-metric', 'boto-unit-test' c.put_metric_data(namespace, name, 5, now, 'Bytes') # Uncomment the following lines for a slower but more thorough # test. (Hurrah for eventual consistency...) # # metric = Metric(connection=c) # metric.name = name # metric.namespace = namespace # time.sleep(60) # l = metric.query( # now - datetime.timedelta(seconds=60), # datetime.datetime.now(), # 'Average') # assert l # for row in l: # self.assertEqual(row['Unit'], 'Bytes') # self.assertEqual(row['Average'], 5.0) def test_describe_alarms(self): c = CloudWatchConnection() def make_request(*args, **kwargs): class Body(object): def __init__(self): self.status = 200 def read(self): return DESCRIBE_ALARMS_BODY return Body() c.make_request = make_request alarms = c.describe_alarms() self.assertEquals(alarms.next_token, 'mynexttoken') self.assertEquals(alarms[0].name, 'FancyAlarm') self.assertEquals(alarms[0].comparison, '<') self.assertEquals(alarms[0].dimensions, {u'Job': [u'ANiceCronJob']}) self.assertEquals(alarms[1].name, 'SuperFancyAlarm') self.assertEquals(alarms[1].comparison, '>') self.assertEquals(alarms[1].dimensions, {u'Job': [u'ABadCronJob']}) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/ec2/elb/000077500000000000000000000000001225267101000174335ustar00rootroot00000000000000boto-2.20.1/tests/integration/ec2/elb/__init__.py000066400000000000000000000021201225267101000215370ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/ec2/elb/test_cert_verification.py000066400000000000000000000030271225267101000245450ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
# All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.ec2.elb class ELBCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): elb = True regions = boto.ec2.elb.regions() def sample_service_call(self, conn): conn.get_all_load_balancers() boto-2.20.1/tests/integration/ec2/elb/test_connection.py000066400000000000000000000164211225267101000232070ustar00rootroot00000000000000# Copyright (c) 2010 Hunter Blanks http://artifex.org/~hblanks/ # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Initial, and very limited, unit tests for ELBConnection. """ import unittest from boto.ec2.elb import ELBConnection class ELBConnectionTest(unittest.TestCase): ec2 = True def setUp(self): """Creates a named load balancer that can be safely deleted at the end of each test""" self.conn = ELBConnection() self.name = 'elb-boto-unit-test' self.availability_zones = ['us-east-1a'] self.listeners = [(80, 8000, 'HTTP')] self.balancer = self.conn.create_load_balancer(self.name, self.availability_zones, self.listeners) def tearDown(self): """ Deletes the test load balancer after every test. 
It does not delete EVERY load balancer in your account""" self.balancer.delete() def test_build_list_params(self): params = {} self.conn.build_list_params( params, ['thing1', 'thing2', 'thing3'], 'ThingName%d') expected_params = { 'ThingName1': 'thing1', 'ThingName2': 'thing2', 'ThingName3': 'thing3' } self.assertEqual(params, expected_params) # TODO: for these next tests, consider sleeping until our load # balancer comes up, then testing for connectivity to # balancer.dns_name, along the lines of the existing EC2 unit tests. def test_create_load_balancer(self): self.assertEqual(self.balancer.name, self.name) self.assertEqual(self.balancer.availability_zones,\ self.availability_zones) self.assertEqual(self.balancer.listeners, self.listeners) balancers = self.conn.get_all_load_balancers() self.assertEqual([lb.name for lb in balancers], [self.name]) def test_create_load_balancer_listeners(self): more_listeners = [(443, 8001, 'HTTP')] self.conn.create_load_balancer_listeners(self.name, more_listeners) balancers = self.conn.get_all_load_balancers() self.assertEqual([lb.name for lb in balancers], [self.name]) self.assertEqual( sorted(l.get_tuple() for l in balancers[0].listeners), sorted(self.listeners + more_listeners) ) def test_delete_load_balancer_listeners(self): mod_listeners = [(80, 8000, 'HTTP'), (443, 8001, 'HTTP')] mod_name = self.name + "-mod" self.mod_balancer = self.conn.create_load_balancer(mod_name,\ self.availability_zones, mod_listeners) mod_balancers = self.conn.get_all_load_balancers(load_balancer_names=[mod_name]) self.assertEqual([lb.name for lb in mod_balancers], [mod_name]) self.assertEqual( sorted([l.get_tuple() for l in mod_balancers[0].listeners]), sorted(mod_listeners)) self.conn.delete_load_balancer_listeners(self.mod_balancer.name, [443]) mod_balancers = self.conn.get_all_load_balancers(load_balancer_names=[mod_name]) self.assertEqual([lb.name for lb in mod_balancers], [mod_name]) self.assertEqual([l.get_tuple() for l in mod_balancers[0].listeners], mod_listeners[:1]) self.mod_balancer.delete() def test_create_load_balancer_listeners_with_policies(self): more_listeners = [(443, 8001, 'HTTP')] self.conn.create_load_balancer_listeners(self.name, more_listeners) lb_policy_name = 'lb-policy' self.conn.create_lb_cookie_stickiness_policy(1000, self.name, lb_policy_name) self.conn.set_lb_policies_of_listener(self.name, self.listeners[0][0], lb_policy_name) app_policy_name = 'app-policy' self.conn.create_app_cookie_stickiness_policy('appcookie', self.name, app_policy_name) self.conn.set_lb_policies_of_listener(self.name, more_listeners[0][0], app_policy_name) balancers = self.conn.get_all_load_balancers(load_balancer_names=[self.name]) self.assertEqual([lb.name for lb in balancers], [self.name]) self.assertEqual( sorted(l.get_tuple() for l in balancers[0].listeners), sorted(self.listeners + more_listeners) ) # Policy names should be checked here once they are supported # in the Listener object. 
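    # The backend-server test below drives the ELB policy plumbing end to
    # end. For reference, the same calls in isolation look roughly like this
    # (the load balancer name and port are illustrative):
    #
    #   conn = ELBConnection()
    #   conn.create_lb_policy('my-lb', 'enable-proxy-protocol',
    #                         'ProxyProtocolPolicyType',
    #                         {'ProxyProtocol': True})
    #   conn.set_lb_policies_of_backend_server('my-lb', 8081,
    #                                          ['enable-proxy-protocol'])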
def test_create_load_balancer_backend_with_policies(self): other_policy_name = 'enable-proxy-protocol' backend_port = 8081 self.conn.create_lb_policy(self.name, other_policy_name, 'ProxyProtocolPolicyType', {'ProxyProtocol': True}) self.conn.set_lb_policies_of_backend_server(self.name, backend_port, [other_policy_name]) balancers = self.conn.get_all_load_balancers(load_balancer_names=[self.name]) self.assertEqual([lb.name for lb in balancers], [self.name]) self.assertEqual(len(balancers[0].policies.other_policies), 1) self.assertEqual(balancers[0].policies.other_policies[0].policy_name, other_policy_name) self.assertEqual(len(balancers[0].backends), 1) self.assertEqual(balancers[0].backends[0].instance_port, backend_port) self.assertEqual(balancers[0].backends[0].policies[0].policy_name, other_policy_name) self.conn.set_lb_policies_of_backend_server(self.name, backend_port, []) balancers = self.conn.get_all_load_balancers(load_balancer_names=[self.name]) self.assertEqual([lb.name for lb in balancers], [self.name]) self.assertEqual(len(balancers[0].policies.other_policies), 1) self.assertEqual(len(balancers[0].backends), 0) def test_create_load_balancer_complex_listeners(self): complex_listeners = [ (8080, 80, 'HTTP', 'HTTP'), (2525, 25, 'TCP', 'TCP'), ] self.conn.create_load_balancer_listeners( self.name, complex_listeners=complex_listeners ) balancers = self.conn.get_all_load_balancers( load_balancer_names=[self.name] ) self.assertEqual([lb.name for lb in balancers], [self.name]) self.assertEqual( sorted(l.get_complex_tuple() for l in balancers[0].listeners), # We need an extra 'HTTP' here over what ``self.listeners`` uses. sorted([(80, 8000, 'HTTP', 'HTTP')] + complex_listeners) ) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/ec2/test_cert_verification.py000066400000000000000000000030151225267101000240000ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. 
""" import unittest from tests.integration import ServiceCertVerificationTest import boto.ec2 class EC2CertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): ec2 = True regions = boto.ec2.regions() def sample_service_call(self, conn): conn.get_all_reservations() boto-2.20.1/tests/integration/ec2/test_connection.py000066400000000000000000000212741225267101000224470ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2009, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for the EC2Connection """ import unittest import time import telnetlib import socket from nose.plugins.attrib import attr from boto.ec2.connection import EC2Connection from boto.exception import EC2ResponseError class EC2ConnectionTest(unittest.TestCase): ec2 = True @attr('notdefault') def test_launch_permissions(self): # this is my user_id, if you want to run these tests you should # replace this with yours or they won't work user_id = '963068290131' print '--- running EC2Connection tests ---' c = EC2Connection() # get list of private AMI's rs = c.get_all_images(owners=[user_id]) assert len(rs) > 0 # now pick the first one image = rs[0] # temporarily make this image runnable by everyone status = image.set_launch_permissions(group_names=['all']) assert status d = image.get_launch_permissions() assert 'groups' in d assert len(d['groups']) > 0 # now remove that permission status = image.remove_launch_permissions(group_names=['all']) assert status time.sleep(10) d = image.get_launch_permissions() assert 'groups' not in d def test_1_basic(self): # create 2 new security groups c = EC2Connection() group1_name = 'test-%d' % int(time.time()) group_desc = 'This is a security group created during unit testing' group1 = c.create_security_group(group1_name, group_desc) time.sleep(2) group2_name = 'test-%d' % int(time.time()) group_desc = 'This is a security group created during unit testing' group2 = c.create_security_group(group2_name, group_desc) # now get a listing of all security groups and look for our new one rs = c.get_all_security_groups() found = False for g in rs: if g.name == group1_name: found = True assert found # now pass arg to filter results to only our new group rs = c.get_all_security_groups([group1_name]) assert len(rs) == 1 # try some group to group authorizations/revocations # first try the old style status = c.authorize_security_group(group1.name, group2.name, group2.owner_id) assert status status = 
c.revoke_security_group(group1.name, group2.name, group2.owner_id) assert status # now try specifying a specific port status = c.authorize_security_group(group1.name, group2.name, group2.owner_id, 'tcp', 22, 22) assert status status = c.revoke_security_group(group1.name, group2.name, group2.owner_id, 'tcp', 22, 22) assert status # now delete the second security group status = c.delete_security_group(group2_name) # now make sure it's really gone rs = c.get_all_security_groups() found = False for g in rs: if g.name == group2_name: found = True assert not found group = group1 # now try to launch apache image with our new security group rs = c.get_all_images() img_loc = 'ec2-public-images/fedora-core4-apache.manifest.xml' for image in rs: if image.location == img_loc: break reservation = image.run(security_groups=[group.name]) instance = reservation.instances[0] while instance.state != 'running': print '\tinstance is %s' % instance.state time.sleep(30) instance.update() # instance in now running, try to telnet to port 80 t = telnetlib.Telnet() try: t.open(instance.dns_name, 80) except socket.error: pass # now open up port 80 and try again, it should work group.authorize('tcp', 80, 80, '0.0.0.0/0') t.open(instance.dns_name, 80) t.close() # now revoke authorization and try again group.revoke('tcp', 80, 80, '0.0.0.0/0') try: t.open(instance.dns_name, 80) except socket.error: pass # now kill the instance and delete the security group instance.terminate() # check that state and previous_state have updated assert instance.state == 'shutting-down' assert instance.state_code == 32 assert instance.previous_state == 'running' assert instance.previous_state_code == 16 # unfortunately, I can't delete the sg within this script #sg.delete() # create a new key pair key_name = 'test-%d' % int(time.time()) status = c.create_key_pair(key_name) assert status # now get a listing of all key pairs and look for our new one rs = c.get_all_key_pairs() found = False for k in rs: if k.name == key_name: found = True assert found # now pass arg to filter results to only our new key pair rs = c.get_all_key_pairs([key_name]) assert len(rs) == 1 key_pair = rs[0] # now delete the key pair status = c.delete_key_pair(key_name) # now make sure it's really gone rs = c.get_all_key_pairs() found = False for k in rs: if k.name == key_name: found = True assert not found # short test around Paid AMI capability demo_paid_ami_id = 'ami-bd9d78d4' demo_paid_ami_product_code = 'A79EC0DB' l = c.get_all_images([demo_paid_ami_id]) assert len(l) == 1 assert len(l[0].product_codes) == 1 assert l[0].product_codes[0] == demo_paid_ami_product_code print '--- tests completed ---' def test_dry_run(self): c = EC2Connection() dry_run_msg = 'Request would have succeeded, but DryRun flag is set.' try: rs = c.get_all_images(dry_run=True) self.fail("Should have gotten an exception") except EC2ResponseError, e: self.assertTrue(dry_run_msg in str(e)) try: rs = c.run_instances( image_id='ami-a0cd60c9', instance_type='m1.small', dry_run=True ) self.fail("Should have gotten an exception") except EC2ResponseError, e: self.assertTrue(dry_run_msg in str(e)) # Need an actual instance for the rest of this... 
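        # With DryRun the service validates the request and the caller's
        # permissions but refuses to act: "success" comes back as an
        # EC2ResponseError carrying the message asserted above. That is why a
        # real (non-dry-run) instance has to be launched for the stop and
        # terminate checks below.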
        rs = c.run_instances(
            image_id='ami-a0cd60c9',
            instance_type='m1.small'
        )
        time.sleep(120)

        try:
            rs = c.stop_instances(
                instance_ids=[rs.instances[0].id],
                dry_run=True
            )
            self.fail("Should have gotten an exception")
        except EC2ResponseError, e:
            self.assertTrue(dry_run_msg in str(e))

        try:
            rs = c.terminate_instances(
                instance_ids=[rs.instances[0].id],
                dry_run=True
            )
            self.fail("Should have gotten an exception")
        except EC2ResponseError, e:
            self.assertTrue(dry_run_msg in str(e))

        # And kill it.
        rs.instances[0].terminate()
boto-2.20.1/tests/integration/ec2/vpc/000077500000000000000000000000001225267101000174615ustar00rootroot00000000000000boto-2.20.1/tests/integration/ec2/vpc/__init__.py000066400000000000000000000000001225267101000215600ustar00rootroot00000000000000boto-2.20.1/tests/integration/ec2/vpc/test_connection.py000066400000000000000000000132301225267101000232300ustar00rootroot00000000000000#!/usr/bin/env python
# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import unittest
import time

import boto
from boto.ec2.networkinterface import NetworkInterfaceCollection
from boto.ec2.networkinterface import NetworkInterfaceSpecification
from boto.ec2.networkinterface import PrivateIPAddress


class TestVPCConnection(unittest.TestCase):
    def setUp(self):
        self.api = boto.connect_vpc()
        vpc = self.api.create_vpc('10.0.0.0/16')
        self.addCleanup(self.api.delete_vpc, vpc.id)

        # Need time for the VPC to be in place. :/
        time.sleep(5)

        self.subnet = self.api.create_subnet(vpc.id, '10.0.0.0/24')
        self.addCleanup(self.api.delete_subnet, self.subnet.id)

        # Need time for the subnet to be in place.
        time.sleep(10)

    def terminate_instance(self, instance):
        instance.terminate()

        for i in xrange(300):
            instance.update()

            if instance.state == 'terminated':
                # Give it a little more time to settle.
                time.sleep(10)
                return
            else:
                time.sleep(10)

    def test_multi_ip_create(self):
        interface = NetworkInterfaceSpecification(
            device_index=0, subnet_id=self.subnet.id,
            private_ip_address='10.0.0.21',
            description="This is a test interface using boto.",
            delete_on_termination=True, private_ip_addresses=[
                PrivateIPAddress(private_ip_address='10.0.0.22',
                                 primary=False),
                PrivateIPAddress(private_ip_address='10.0.0.23',
                                 primary=False),
                PrivateIPAddress(private_ip_address='10.0.0.24',
                                 primary=False)])
        interfaces = NetworkInterfaceCollection(interface)

        reservation = self.api.run_instances(image_id='ami-a0cd60c9',
                                             instance_type='m1.small',
                                             network_interfaces=interfaces)
        # Give it a few seconds to start up.
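        # (The secondary private IPs are attached at launch, but
        # DescribeInstances can briefly lag behind RunInstances, so the
        # read-back below needs the pause.)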
time.sleep(10) instance = reservation.instances[0] self.addCleanup(self.terminate_instance, instance) retrieved = self.api.get_all_reservations(instance_ids=[instance.id]) self.assertEqual(len(retrieved), 1) retrieved_instances = retrieved[0].instances self.assertEqual(len(retrieved_instances), 1) retrieved_instance = retrieved_instances[0] self.assertEqual(len(retrieved_instance.interfaces), 1) interface = retrieved_instance.interfaces[0] private_ip_addresses = interface.private_ip_addresses self.assertEqual(len(private_ip_addresses), 4) self.assertEqual(private_ip_addresses[0].private_ip_address, '10.0.0.21') self.assertEqual(private_ip_addresses[0].primary, True) self.assertEqual(private_ip_addresses[1].private_ip_address, '10.0.0.22') self.assertEqual(private_ip_addresses[2].private_ip_address, '10.0.0.23') self.assertEqual(private_ip_addresses[3].private_ip_address, '10.0.0.24') def test_associate_public_ip(self): # Supplying basically nothing ought to work. interface = NetworkInterfaceSpecification( associate_public_ip_address=True, subnet_id=self.subnet.id, # Just for testing. delete_on_termination=True ) interfaces = NetworkInterfaceCollection(interface) reservation = self.api.run_instances( image_id='ami-a0cd60c9', instance_type='m1.small', network_interfaces=interfaces ) instance = reservation.instances[0] self.addCleanup(self.terminate_instance, instance) # Give it a **LONG** time to start up. # Because the public IP won't be there right away. time.sleep(60) retrieved = self.api.get_all_reservations( instance_ids=[ instance.id ] ) self.assertEqual(len(retrieved), 1) retrieved_instances = retrieved[0].instances self.assertEqual(len(retrieved_instances), 1) retrieved_instance = retrieved_instances[0] self.assertEqual(len(retrieved_instance.interfaces), 1) interface = retrieved_instance.interfaces[0] # There ought to be a public IP there. # We can't reason about the IP itself, so just make sure it vaguely # resembles an IP (& isn't empty/``None``)... self.assertTrue(interface.publicIp.count('.') >= 3) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/elasticache/000077500000000000000000000000001225267101000204655ustar00rootroot00000000000000boto-2.20.1/tests/integration/elasticache/__init__.py000066400000000000000000000000001225267101000225640ustar00rootroot00000000000000boto-2.20.1/tests/integration/elasticache/test_layer1.py000066400000000000000000000052241225267101000232760ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import time

from tests.unit import unittest
from boto.elasticache import layer1
from boto.exception import BotoServerError


class TestElastiCacheConnection(unittest.TestCase):
    def setUp(self):
        self.elasticache = layer1.ElastiCacheConnection()

    def wait_until_cluster_available(self, cluster_id):
        timeout = time.time() + 600
        while time.time() < timeout:
            response = self.elasticache.describe_cache_clusters(cluster_id)
            status = response['DescribeCacheClustersResponse']\
                ['DescribeCacheClustersResult']\
                ['CacheClusters'][0]['CacheClusterStatus']
            if status == 'available':
                break
            time.sleep(5)
        else:
            self.fail('Timeout waiting for cache cluster %r '
                      'to become available.' % cluster_id)

    def test_create_delete_cache_cluster(self):
        cluster_id = 'cluster-id2'
        self.elasticache.create_cache_cluster(
            cluster_id, 1, 'cache.t1.micro', 'memcached')
        self.wait_until_cluster_available(cluster_id)

        self.elasticache.delete_cache_cluster(cluster_id)
        timeout = time.time() + 600
        while time.time() < timeout:
            try:
                self.elasticache.describe_cache_clusters(cluster_id)
            except BotoServerError:
                break
            time.sleep(5)
        else:
            self.fail('Timeout waiting for cache cluster %s '
                      'to be deleted.' % cluster_id)


if __name__ == '__main__':
    unittest.main()
boto-2.20.1/tests/integration/elastictranscoder/000077500000000000000000000000001225267101000217315ustar00rootroot00000000000000boto-2.20.1/tests/integration/elastictranscoder/__init__.py000066400000000000000000000000001225267101000240300ustar00rootroot00000000000000boto-2.20.1/tests/integration/elastictranscoder/test_cert_verification.py000066400000000000000000000027001225267101000270400ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import unittest
from tests.integration import ServiceCertVerificationTest

import boto.elastictranscoder


class ElasticTranscoderCertVerificationTest(unittest.TestCase,
                                            ServiceCertVerificationTest):
    elastictranscoder = True
    regions = boto.elastictranscoder.regions()

    def sample_service_call(self, conn):
        conn.list_pipelines()
boto-2.20.1/tests/integration/elastictranscoder/test_layer1.py000066400000000000000000000115461225267101000245460ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import unittest
import time

from boto.elastictranscoder.layer1 import ElasticTranscoderConnection
from boto.elastictranscoder.exceptions import ValidationException
import boto.s3
import boto.sns
import boto.iam


class TestETSLayer1PipelineManagement(unittest.TestCase):
    def setUp(self):
        self.api = ElasticTranscoderConnection()
        self.s3 = boto.connect_s3()
        self.sns = boto.connect_sns()
        self.iam = boto.connect_iam()
        self.timestamp = str(int(time.time()))
        self.input_bucket = 'boto-pipeline-%s' % self.timestamp
        self.output_bucket = 'boto-pipeline-out-%s' % self.timestamp
        self.role_name = 'boto-ets-role-%s' % self.timestamp
        self.pipeline_name = 'boto-pipeline-%s' % self.timestamp
        self.s3.create_bucket(self.input_bucket)
        self.s3.create_bucket(self.output_bucket)
        self.addCleanup(self.s3.delete_bucket, self.input_bucket)
        self.addCleanup(self.s3.delete_bucket, self.output_bucket)
        self.role = self.iam.create_role(self.role_name)
        self.role_arn = self.role['create_role_response']['create_role_result']\
                                 ['role']['arn']
        self.addCleanup(self.iam.delete_role, self.role_name)

    def create_pipeline(self):
        pipeline = self.api.create_pipeline(
            self.pipeline_name, self.input_bucket,
            self.output_bucket, self.role_arn,
            {'Progressing': '', 'Completed': '',
             'Warning': '', 'Error': ''})
        pipeline_id = pipeline['Pipeline']['Id']
        self.addCleanup(self.api.delete_pipeline, pipeline_id)
        return pipeline_id

    def test_create_delete_pipeline(self):
        pipeline = self.api.create_pipeline(
            self.pipeline_name, self.input_bucket,
            self.output_bucket, self.role_arn,
            {'Progressing': '', 'Completed': '',
             'Warning': '', 'Error': ''})
        pipeline_id = pipeline['Pipeline']['Id']

        self.api.delete_pipeline(pipeline_id)

    def test_can_retrieve_pipeline_information(self):
        pipeline_id = self.create_pipeline()

        # The pipeline shows up in list_pipelines
        pipelines = self.api.list_pipelines()['Pipelines']
        pipeline_names = [p['Name'] for p in pipelines]
        self.assertIn(self.pipeline_name, pipeline_names)

        # The pipeline shows up in read_pipeline
        response = self.api.read_pipeline(pipeline_id)
        self.assertEqual(response['Pipeline']['Id'], pipeline_id)

    def test_update_pipeline(self):
        pipeline_id = self.create_pipeline()
        self.api.update_pipeline_status(pipeline_id, 'Paused')
        response = self.api.read_pipeline(pipeline_id)
        self.assertEqual(response['Pipeline']['Status'], 'Paused')

    def test_update_pipeline_notification(self):
        pipeline_id = self.create_pipeline()
        response = 
self.sns.create_topic('pipeline-errors') topic_arn = response['CreateTopicResponse']['CreateTopicResult']\ ['TopicArn'] self.addCleanup(self.sns.delete_topic, topic_arn) self.api.update_pipeline_notifications( pipeline_id, {'Progressing': '', 'Completed': '', 'Warning': '', 'Error': topic_arn}) response = self.api.read_pipeline(pipeline_id) self.assertEqual(response['Pipeline']['Notifications']['Error'], topic_arn) def test_list_jobs_by_pipeline(self): pipeline_id = self.create_pipeline() response = self.api.list_jobs_by_pipeline(pipeline_id) self.assertEqual(response['Jobs'], []) def test_proper_error_when_pipeline_does_not_exist(self): with self.assertRaises(ValidationException): self.api.read_pipeline('badpipelineid') boto-2.20.1/tests/integration/emr/000077500000000000000000000000001225267101000170035ustar00rootroot00000000000000boto-2.20.1/tests/integration/emr/__init__.py000066400000000000000000000021201225267101000211070ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/emr/test_cert_verification.py000066400000000000000000000030061225267101000241120ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all EMR endpoints validate. 
""" import unittest from tests.integration import ServiceCertVerificationTest import boto.emr class EMRCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): emr = True regions = boto.emr.regions() def sample_service_call(self, conn): conn.describe_jobflows() boto-2.20.1/tests/integration/glacier/000077500000000000000000000000001225267101000176265ustar00rootroot00000000000000boto-2.20.1/tests/integration/glacier/__init__.py000066400000000000000000000021541225267101000217410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # boto-2.20.1/tests/integration/glacier/test_cert_verification.py000066400000000000000000000030241225267101000247350ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.glacier class GlacierCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): glacier = True regions = boto.glacier.regions() def sample_service_call(self, conn): conn.list_vaults() boto-2.20.1/tests/integration/glacier/test_layer1.py000066400000000000000000000037501225267101000224410ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest from boto.glacier.layer1 import Layer1 class TestGlacierLayer1(unittest.TestCase): glacier = True def delete_vault(self, vault_name): pass def test_initiate_multipart_upload(self): # Create a vault, initiate a multipart upload, # then cancel it. glacier = Layer1() glacier.create_vault('l1testvault') self.addCleanup(glacier.delete_vault, 'l1testvault') upload_id = glacier.initiate_multipart_upload('l1testvault', 4*1024*1024, 'double  spaces  here')['UploadId'] self.addCleanup(glacier.abort_multipart_upload, 'l1testvault', upload_id) response = glacier.list_multipart_uploads('l1testvault')['UploadsList'] self.assertEqual(len(response), 1) self.assertEqual(response[0]['MultipartUploadId'], upload_id) boto-2.20.1/tests/integration/glacier/test_layer2.py000066400000000000000000000037511225267101000224430ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE.
# import time from tests.unit import unittest from boto.glacier.layer2 import Layer1, Layer2 class TestGlacierLayer2(unittest.TestCase): glacier = True def setUp(self): self.layer2 = Layer2() self.vault_name = 'testvault%s' % int(time.time()) def test_create_delete_vault(self): vault = self.layer2.create_vault(self.vault_name) retrieved_vault = self.layer2.get_vault(self.vault_name) self.layer2.delete_vault(self.vault_name) self.assertEqual(vault.name, retrieved_vault.name) self.assertEqual(vault.arn, retrieved_vault.arn) self.assertEqual(vault.creation_date, retrieved_vault.creation_date) self.assertEqual(vault.last_inventory_date, retrieved_vault.last_inventory_date) self.assertEqual(vault.number_of_archives, retrieved_vault.number_of_archives) boto-2.20.1/tests/integration/gs/000077500000000000000000000000001225267101000166315ustar00rootroot00000000000000boto-2.20.1/tests/integration/gs/__init__.py000066400000000000000000000000001225267101000207300ustar00rootroot00000000000000boto-2.20.1/tests/integration/gs/cb_test_harness.py000066400000000000000000000066211225267101000223560ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Test harness that allows us to raise exceptions, change file content, and record the byte transfer callback sequence, to test various resumable upload and download cases. The 'call' method of this harness can be passed as the 'cb' parameter to boto.s3.Key.send_file() and boto.s3.Key.get_file(), allowing testing of various file upload/download conditions. """ import socket import time class CallbackTestHarness(object): def __init__(self, fail_after_n_bytes=0, num_times_to_fail=1, exception=socket.error('mock socket error', 0), fp_to_change=None, fp_change_pos=None, delay_after_change=None): self.fail_after_n_bytes = fail_after_n_bytes self.num_times_to_fail = num_times_to_fail self.exception = exception # If fp_to_change and fp_change_pos are specified, 3 bytes will be # written at that position just before the first exception is thrown. self.fp_to_change = fp_to_change self.fp_change_pos = fp_change_pos self.delay_after_change = delay_after_change self.num_failures = 0 self.transferred_seq_before_first_failure = [] self.transferred_seq_after_first_failure = [] def call(self, total_bytes_transferred, unused_total_size): """ To use this test harness, pass the 'call' method of the instantiated object as the cb param to the set_contents_from_file() or get_contents_to_file() call. """ # Record transfer sequence to allow verification. 
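# The first injected failure splits the callback sequence in two:
# byte counts seen before any failure are appended to
# transferred_seq_before_first_failure, later ones to
# transferred_seq_after_first_failure, so tests can assert that a
# transfer made real progress both before and after a retry.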
if self.num_failures: self.transferred_seq_after_first_failure.append( total_bytes_transferred) else: self.transferred_seq_before_first_failure.append( total_bytes_transferred) if (total_bytes_transferred >= self.fail_after_n_bytes and self.num_failures < self.num_times_to_fail): self.num_failures += 1 if self.fp_to_change and self.fp_change_pos is not None: cur_pos = self.fp_to_change.tell() self.fp_to_change.seek(self.fp_change_pos) self.fp_to_change.write('abc') self.fp_to_change.seek(cur_pos) if self.delay_after_change: time.sleep(self.delay_after_change) self.called = True raise self.exceptionboto-2.20.1/tests/integration/gs/test_basic.py000066400000000000000000000423751225267101000213360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # Copyright (c) 2011, Nexenta Systems, Inc. # Copyright (c) 2012, Google, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some integration tests for the GSConnection """ import os import re import StringIO import xml.sax from boto import handler from boto import storage_uri from boto.gs.acl import ACL from boto.gs.cors import Cors from boto.gs.lifecycle import LifecycleConfig from tests.integration.gs.testcase import GSTestCase CORS_EMPTY = '' CORS_DOC = ('origin1.example.com' 'origin2.example.com' 'GETPUT' 'POST' 'foo' 'bar' '') LIFECYCLE_EMPTY = ('' '') LIFECYCLE_DOC = ('' '' '' '365' '2013-01-15' '3' 'true' '') LIFECYCLE_CONDITIONS = {'Age': '365', 'CreatedBefore': '2013-01-15', 'NumberOfNewerVersions': '3', 'IsLive': 'true'} # Regexp for matching project-private default object ACL. 
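# A freshly created bucket's default object ACL is "project-private":
# FULL_CONTROL for project owners and editors, READ for project
# viewers, which is the shape the pattern below checks for.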
PROJECT_PRIVATE_RE = ('\s*\s*\s*' '\s*[0-9a-fA-F]+' '\s*FULL_CONTROL\s*\s*' '\s*[0-9a-fA-F]+' '\s*FULL_CONTROL\s*\s*' '\s*[0-9a-fA-F]+' '\s*READ\s*' '\s*\s*') class GSBasicTest(GSTestCase): """Tests some basic GCS functionality.""" def test_read_write(self): """Tests basic read/write to keys.""" bucket = self._MakeBucket() bucket_name = bucket.name # now try a get_bucket call and see if it's really there bucket = self._GetConnection().get_bucket(bucket_name) key_name = 'foobar' k = bucket.new_key(key_name) s1 = 'This is a test of file upload and download' k.set_contents_from_string(s1) tmpdir = self._MakeTempDir() fpath = os.path.join(tmpdir, key_name) fp = open(fpath, 'wb') # now get the contents from gcs to a local file k.get_contents_to_file(fp) fp.close() fp = open(fpath) # check to make sure content read from gcs is identical to original self.assertEqual(s1, fp.read()) fp.close() # check to make sure set_contents_from_file is working sfp = StringIO.StringIO('foo') k.set_contents_from_file(sfp) self.assertEqual(k.get_contents_as_string(), 'foo') sfp2 = StringIO.StringIO('foo2') k.set_contents_from_file(sfp2) self.assertEqual(k.get_contents_as_string(), 'foo2') def test_get_all_keys(self): """Tests get_all_keys.""" phony_mimetype = 'application/x-boto-test' headers = {'Content-Type': phony_mimetype} tmpdir = self._MakeTempDir() fpath = os.path.join(tmpdir, 'foobar1') fpath2 = os.path.join(tmpdir, 'foobar') with open(fpath2, 'w') as f: f.write('test-data') bucket = self._MakeBucket() # First load some data for the first one, overriding content type. k = bucket.new_key('foobar') s1 = 'test-contents' s2 = 'test-contents2' k.name = 'foo/bar' k.set_contents_from_string(s1, headers) k.name = 'foo/bas' k.set_contents_from_filename(fpath2) k.name = 'foo/bat' k.set_contents_from_string(s1) k.name = 'fie/bar' k.set_contents_from_string(s1) k.name = 'fie/bas' k.set_contents_from_string(s1) k.name = 'fie/bat' k.set_contents_from_string(s1) # try resetting the contents to another value md5 = k.md5 k.set_contents_from_string(s2) self.assertNotEqual(k.md5, md5) fp2 = open(fpath2, 'rb') k.md5 = None k.base64md5 = None k.set_contents_from_stream(fp2) fp = open(fpath, 'wb') k.get_contents_to_file(fp) fp.close() fp2.seek(0, 0) fp = open(fpath, 'rb') self.assertEqual(fp2.read(), fp.read()) fp.close() fp2.close() all = bucket.get_all_keys() self.assertEqual(len(all), 6) rs = bucket.get_all_keys(prefix='foo') self.assertEqual(len(rs), 3) rs = bucket.get_all_keys(prefix='', delimiter='/') self.assertEqual(len(rs), 2) rs = bucket.get_all_keys(maxkeys=5) self.assertEqual(len(rs), 5) def test_bucket_lookup(self): """Test the bucket lookup method.""" bucket = self._MakeBucket() k = bucket.new_key('foo/bar') phony_mimetype = 'application/x-boto-test' headers = {'Content-Type': phony_mimetype} k.set_contents_from_string('testdata', headers) k = bucket.lookup('foo/bar') self.assertIsInstance(k, bucket.key_class) self.assertEqual(k.content_type, phony_mimetype) k = bucket.lookup('notthere') self.assertIsNone(k) def test_metadata(self): """Test key metadata operations.""" bucket = self._MakeBucket() k = self._MakeKey(bucket=bucket) key_name = k.name s1 = 'This is a test of file upload and download' mdkey1 = 'meta1' mdval1 = 'This is the first metadata value' k.set_metadata(mdkey1, mdval1) mdkey2 = 'meta2' mdval2 = 'This is the second metadata value' k.set_metadata(mdkey2, mdval2) # Test unicode character. 
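# Custom object metadata is carried in x-goog-meta-* HTTP headers, so
# a non-ASCII value has to survive the header round trip intact.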
mdval3 = u'föö' mdkey3 = 'meta3' k.set_metadata(mdkey3, mdval3) k.set_contents_from_string(s1) k = bucket.lookup(key_name) self.assertEqual(k.get_metadata(mdkey1), mdval1) self.assertEqual(k.get_metadata(mdkey2), mdval2) self.assertEqual(k.get_metadata(mdkey3), mdval3) k = bucket.new_key(key_name) k.get_contents_as_string() self.assertEqual(k.get_metadata(mdkey1), mdval1) self.assertEqual(k.get_metadata(mdkey2), mdval2) self.assertEqual(k.get_metadata(mdkey3), mdval3) def test_list_iterator(self): """Test list and iterator.""" bucket = self._MakeBucket() num_iter = len([k for k in bucket.list()]) rs = bucket.get_all_keys() num_keys = len(rs) self.assertEqual(num_iter, num_keys) def test_acl(self): """Test bucket and key ACLs.""" bucket = self._MakeBucket() # try some acl stuff bucket.set_acl('public-read') acl = bucket.get_acl() self.assertEqual(len(acl.entries.entry_list), 2) bucket.set_acl('private') acl = bucket.get_acl() self.assertEqual(len(acl.entries.entry_list), 1) k = self._MakeKey(bucket=bucket) k.set_acl('public-read') acl = k.get_acl() self.assertEqual(len(acl.entries.entry_list), 2) k.set_acl('private') acl = k.get_acl() self.assertEqual(len(acl.entries.entry_list), 1) # Test case-insensitivity of XML ACL parsing. acl_xml = ( '' + 'READ' + '') acl = ACL() h = handler.XmlHandler(acl, bucket) xml.sax.parseString(acl_xml, h) bucket.set_acl(acl) self.assertEqual(len(acl.entries.entry_list), 1) aclstr = k.get_xml_acl() self.assertGreater(aclstr.count('/Entry', 1), 0) def test_logging(self): """Test set/get raw logging subresource.""" bucket = self._MakeBucket() empty_logging_str="" logging_str = ( "" "log-bucket" + "example" + "") bucket.set_subresource('logging', logging_str) self.assertEqual(bucket.get_subresource('logging'), logging_str) # try disable/enable logging bucket.disable_logging() self.assertEqual(bucket.get_subresource('logging'), empty_logging_str) bucket.enable_logging('log-bucket', 'example') self.assertEqual(bucket.get_subresource('logging'), logging_str) def test_copy_key(self): """Test copying a key from one bucket to another.""" # create two new, empty buckets bucket1 = self._MakeBucket() bucket2 = self._MakeBucket() bucket_name_1 = bucket1.name bucket_name_2 = bucket2.name # verify buckets got created bucket1 = self._GetConnection().get_bucket(bucket_name_1) bucket2 = self._GetConnection().get_bucket(bucket_name_2) # create a key in bucket1 and give it some content key_name = 'foobar' k1 = bucket1.new_key(key_name) self.assertIsInstance(k1, bucket1.key_class) k1.name = key_name s = 'This is a test.' 
k1.set_contents_from_string(s) # copy the new key from bucket1 to bucket2 k1.copy(bucket_name_2, key_name) # now copy the contents from bucket2 to a local file k2 = bucket2.lookup(key_name) self.assertIsInstance(k2, bucket2.key_class) tmpdir = self._MakeTempDir() fpath = os.path.join(tmpdir, 'foobar') fp = open(fpath, 'wb') k2.get_contents_to_file(fp) fp.close() fp = open(fpath) # check to make sure content read is identical to original self.assertEqual(s, fp.read()) fp.close() # delete keys bucket1.delete_key(k1) bucket2.delete_key(k2) def test_default_object_acls(self): """Test default object acls.""" # create a new bucket bucket = self._MakeBucket() # get default acl and make sure it's project-private acl = bucket.get_def_acl() self.assertIsNotNone(re.search(PROJECT_PRIVATE_RE, acl.to_xml())) # set default acl to a canned acl and verify it gets set bucket.set_def_acl('public-read') acl = bucket.get_def_acl() # save public-read acl for later test public_read_acl = acl self.assertEqual(acl.to_xml(), ('' 'READ' '')) # back to private acl bucket.set_def_acl('private') acl = bucket.get_def_acl() self.assertEqual(acl.to_xml(), '') # set default acl to an xml acl and verify it gets set bucket.set_def_acl(public_read_acl) acl = bucket.get_def_acl() self.assertEqual(acl.to_xml(), ('' 'READ' '')) # back to private acl bucket.set_def_acl('private') acl = bucket.get_def_acl() self.assertEqual(acl.to_xml(), '') def test_default_object_acls_storage_uri(self): """Test default object acls using storage_uri.""" # create a new bucket bucket = self._MakeBucket() bucket_name = bucket.name uri = storage_uri('gs://' + bucket_name) # get default acl and make sure it's project-private acl = uri.get_def_acl() self.assertIsNotNone(re.search(PROJECT_PRIVATE_RE, acl.to_xml())) # set default acl to a canned acl and verify it gets set uri.set_def_acl('public-read') acl = uri.get_def_acl() # save public-read acl for later test public_read_acl = acl self.assertEqual(acl.to_xml(), ('' 'READ' '')) # back to private acl uri.set_def_acl('private') acl = uri.get_def_acl() self.assertEqual(acl.to_xml(), '') # set default acl to an xml acl and verify it gets set uri.set_def_acl(public_read_acl) acl = uri.get_def_acl() self.assertEqual(acl.to_xml(), ('' 'READ' '')) # back to private acl uri.set_def_acl('private') acl = uri.get_def_acl() self.assertEqual(acl.to_xml(), '') def test_cors_xml_bucket(self): """Test setting and getting of CORS XML documents on Bucket.""" # create a new bucket bucket = self._MakeBucket() bucket_name = bucket.name # now call get_bucket to see if it's really there bucket = self._GetConnection().get_bucket(bucket_name) # get new bucket cors and make sure it's empty cors = re.sub(r'\s', '', bucket.get_cors().to_xml()) self.assertEqual(cors, CORS_EMPTY) # set cors document on new bucket bucket.set_cors(CORS_DOC) cors = re.sub(r'\s', '', bucket.get_cors().to_xml()) self.assertEqual(cors, CORS_DOC) def test_cors_xml_storage_uri(self): """Test setting and getting of CORS XML documents with storage_uri.""" # create a new bucket bucket = self._MakeBucket() bucket_name = bucket.name uri = storage_uri('gs://' + bucket_name) # get new bucket cors and make sure it's empty cors = re.sub(r'\s', '', uri.get_cors().to_xml()) self.assertEqual(cors, CORS_EMPTY) # set cors document on new bucket cors_obj = Cors() h = handler.XmlHandler(cors_obj, None) xml.sax.parseString(CORS_DOC, h) uri.set_cors(cors_obj) cors = re.sub(r'\s', '', uri.get_cors().to_xml()) self.assertEqual(cors, CORS_DOC) def 
test_lifecycle_config_bucket(self): """Test setting and getting of lifecycle config on Bucket.""" # create a new bucket bucket = self._MakeBucket() bucket_name = bucket.name # now call get_bucket to see if it's really there bucket = self._GetConnection().get_bucket(bucket_name) # get lifecycle config and make sure it's empty xml = bucket.get_lifecycle_config().to_xml() self.assertEqual(xml, LIFECYCLE_EMPTY) # set lifecycle config lifecycle_config = LifecycleConfig() lifecycle_config.add_rule('Delete', None, LIFECYCLE_CONDITIONS) bucket.configure_lifecycle(lifecycle_config) xml = bucket.get_lifecycle_config().to_xml() self.assertEqual(xml, LIFECYCLE_DOC) def test_lifecycle_config_storage_uri(self): """Test setting and getting of lifecycle config with storage_uri.""" # create a new bucket bucket = self._MakeBucket() bucket_name = bucket.name uri = storage_uri('gs://' + bucket_name) # get lifecycle config and make sure it's empty xml = uri.get_lifecycle_config().to_xml() self.assertEqual(xml, LIFECYCLE_EMPTY) # set lifecycle config lifecycle_config = LifecycleConfig() lifecycle_config.add_rule('Delete', None, LIFECYCLE_CONDITIONS) uri.configure_lifecycle(lifecycle_config) xml = uri.get_lifecycle_config().to_xml() self.assertEqual(xml, LIFECYCLE_DOC) boto-2.20.1/tests/integration/gs/test_generation_conditionals.py000066400000000000000000000337431225267101000251550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2013, Google, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """Integration tests for GS versioning support.""" import StringIO import os import tempfile from xml import sax from boto import handler from boto.exception import GSResponseError from boto.gs.acl import ACL from tests.integration.gs.testcase import GSTestCase # HTTP Error returned when a generation precondition fails. 
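# GCS answers a conditional request with HTTP 412 (Precondition Failed)
# when if_generation/if_metageneration do not match the object's current
# state; if_generation=0 asserts that the object does not exist yet.
# The idiom these tests exercise repeatedly is roughly:
#
#   key.set_contents_from_string(data, if_generation=0)  # create-only
#   key.set_contents_from_string(data, if_generation=g)  # write-if-unchanged
#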
VERSION_MISMATCH = "412" class GSGenerationConditionalsTest(GSTestCase): def testConditionalSetContentsFromFile(self): b = self._MakeBucket() k = b.new_key("foo") s1 = "test1" fp = StringIO.StringIO(s1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_file(fp, if_generation=999) fp = StringIO.StringIO(s1) k.set_contents_from_file(fp, if_generation=0) g1 = k.generation s2 = "test2" fp = StringIO.StringIO(s2) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_file(fp, if_generation=int(g1)+1) fp = StringIO.StringIO(s2) k.set_contents_from_file(fp, if_generation=g1) self.assertEqual(k.get_contents_as_string(), s2) def testConditionalSetContentsFromString(self): b = self._MakeBucket() k = b.new_key("foo") s1 = "test1" with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_string(s1, if_generation=999) k.set_contents_from_string(s1, if_generation=0) g1 = k.generation s2 = "test2" with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_string(s2, if_generation=int(g1)+1) k.set_contents_from_string(s2, if_generation=g1) self.assertEqual(k.get_contents_as_string(), s2) def testConditionalSetContentsFromFilename(self): s1 = "test1" s2 = "test2" f1 = tempfile.NamedTemporaryFile(prefix="boto-gs-test", delete=False) f2 = tempfile.NamedTemporaryFile(prefix="boto-gs-test", delete=False) fname1 = f1.name fname2 = f2.name f1.write(s1) f1.close() f2.write(s2) f2.close() try: b = self._MakeBucket() k = b.new_key("foo") with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_filename(fname1, if_generation=999) k.set_contents_from_filename(fname1, if_generation=0) g1 = k.generation with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_filename(fname2, if_generation=int(g1)+1) k.set_contents_from_filename(fname2, if_generation=g1) self.assertEqual(k.get_contents_as_string(), s2) finally: os.remove(fname1) os.remove(fname2) def testBucketConditionalSetAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) g1 = k.generation mg1 = k.metageneration self.assertEqual(str(mg1), "1") b.set_acl("public-read", key_name="foo") k = b.get_key("foo") g2 = k.generation mg2 = k.metageneration self.assertEqual(g2, g1) self.assertGreater(mg2, mg1) with self.assertRaisesRegexp(ValueError, ("Received if_metageneration " "argument with no " "if_generation argument")): b.set_acl("bucket-owner-full-control", key_name="foo", if_metageneration=123) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): b.set_acl("bucket-owner-full-control", key_name="foo", if_generation=int(g2) + 1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): b.set_acl("bucket-owner-full-control", key_name="foo", if_generation=g2, if_metageneration=int(mg2) + 1) b.set_acl("bucket-owner-full-control", key_name="foo", if_generation=g2) k = b.get_key("foo") g3 = k.generation mg3 = k.metageneration self.assertEqual(g3, g2) self.assertGreater(mg3, mg2) b.set_acl("public-read", key_name="foo", if_generation=g3, if_metageneration=mg3) def testConditionalSetContentsFromStream(self): b = self._MakeBucket() k = b.new_key("foo") s1 = "test1" fp = StringIO.StringIO(s1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_stream(fp, if_generation=999) fp = StringIO.StringIO(s1) k.set_contents_from_stream(fp, if_generation=0) g1 = k.generation k = b.get_key("foo") s2 = "test2" fp = 
StringIO.StringIO(s2) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_contents_from_stream(fp, if_generation=int(g1)+1) fp = StringIO.StringIO(s2) k.set_contents_from_stream(fp, if_generation=g1) self.assertEqual(k.get_contents_as_string(), s2) def testBucketConditionalSetCannedAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) g1 = k.generation mg1 = k.metageneration self.assertEqual(str(mg1), "1") b.set_canned_acl("public-read", key_name="foo") k = b.get_key("foo") g2 = k.generation mg2 = k.metageneration self.assertEqual(g2, g1) self.assertGreater(mg2, mg1) with self.assertRaisesRegexp(ValueError, ("Received if_metageneration " "argument with no " "if_generation argument")): b.set_canned_acl("bucket-owner-full-control", key_name="foo", if_metageneration=123) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): b.set_canned_acl("bucket-owner-full-control", key_name="foo", if_generation=int(g2) + 1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): b.set_canned_acl("bucket-owner-full-control", key_name="foo", if_generation=g2, if_metageneration=int(mg2) + 1) b.set_canned_acl("bucket-owner-full-control", key_name="foo", if_generation=g2) k = b.get_key("foo") g3 = k.generation mg3 = k.metageneration self.assertEqual(g3, g2) self.assertGreater(mg3, mg2) b.set_canned_acl("public-read", key_name="foo", if_generation=g3, if_metageneration=mg3) def testBucketConditionalSetXmlAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) g1 = k.generation mg1 = k.metageneration self.assertEqual(str(mg1), "1") acl_xml = ( '' + 'READ' + '') acl = ACL() h = handler.XmlHandler(acl, b) sax.parseString(acl_xml, h) acl = acl.to_xml() b.set_xml_acl(acl, key_name="foo") k = b.get_key("foo") g2 = k.generation mg2 = k.metageneration self.assertEqual(g2, g1) self.assertGreater(mg2, mg1) with self.assertRaisesRegexp(ValueError, ("Received if_metageneration " "argument with no " "if_generation argument")): b.set_xml_acl(acl, key_name="foo", if_metageneration=123) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): b.set_xml_acl(acl, key_name="foo", if_generation=int(g2) + 1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): b.set_xml_acl(acl, key_name="foo", if_generation=g2, if_metageneration=int(mg2) + 1) b.set_xml_acl(acl, key_name="foo", if_generation=g2) k = b.get_key("foo") g3 = k.generation mg3 = k.metageneration self.assertEqual(g3, g2) self.assertGreater(mg3, mg2) b.set_xml_acl(acl, key_name="foo", if_generation=g3, if_metageneration=mg3) def testObjectConditionalSetAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") k.set_contents_from_string("test1") g1 = k.generation mg1 = k.metageneration self.assertEqual(str(mg1), "1") k.set_acl("public-read") k = b.get_key("foo") g2 = k.generation mg2 = k.metageneration self.assertEqual(g2, g1) self.assertGreater(mg2, mg1) with self.assertRaisesRegexp(ValueError, ("Received if_metageneration " "argument with no " "if_generation argument")): k.set_acl("bucket-owner-full-control", if_metageneration=123) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_acl("bucket-owner-full-control", if_generation=int(g2) + 1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_acl("bucket-owner-full-control", if_generation=g2, if_metageneration=int(mg2) + 1) k.set_acl("bucket-owner-full-control", if_generation=g2) k = b.get_key("foo") g3 = 
k.generation mg3 = k.metageneration self.assertEqual(g3, g2) self.assertGreater(mg3, mg2) k.set_acl("public-read", if_generation=g3, if_metageneration=mg3) def testObjectConditionalSetCannedAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") k.set_contents_from_string("test1") g1 = k.generation mg1 = k.metageneration self.assertEqual(str(mg1), "1") k.set_canned_acl("public-read") k = b.get_key("foo") g2 = k.generation mg2 = k.metageneration self.assertEqual(g2, g1) self.assertGreater(mg2, mg1) with self.assertRaisesRegexp(ValueError, ("Received if_metageneration " "argument with no " "if_generation argument")): k.set_canned_acl("bucket-owner-full-control", if_metageneration=123) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_canned_acl("bucket-owner-full-control", if_generation=int(g2) + 1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_canned_acl("bucket-owner-full-control", if_generation=g2, if_metageneration=int(mg2) + 1) k.set_canned_acl("bucket-owner-full-control", if_generation=g2) k = b.get_key("foo") g3 = k.generation mg3 = k.metageneration self.assertEqual(g3, g2) self.assertGreater(mg3, mg2) k.set_canned_acl("public-read", if_generation=g3, if_metageneration=mg3) def testObjectConditionalSetXmlAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) g1 = k.generation mg1 = k.metageneration self.assertEqual(str(mg1), "1") acl_xml = ( '' + 'READ' + '') acl = ACL() h = handler.XmlHandler(acl, b) sax.parseString(acl_xml, h) acl = acl.to_xml() k.set_xml_acl(acl) k = b.get_key("foo") g2 = k.generation mg2 = k.metageneration self.assertEqual(g2, g1) self.assertGreater(mg2, mg1) with self.assertRaisesRegexp(ValueError, ("Received if_metageneration " "argument with no " "if_generation argument")): k.set_xml_acl(acl, if_metageneration=123) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_xml_acl(acl, if_generation=int(g2) + 1) with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH): k.set_xml_acl(acl, if_generation=g2, if_metageneration=int(mg2) + 1) k.set_xml_acl(acl, if_generation=g2) k = b.get_key("foo") g3 = k.generation mg3 = k.metageneration self.assertEqual(g3, g2) self.assertGreater(mg3, mg2) k.set_xml_acl(acl, if_generation=g3, if_metageneration=mg3) boto-2.20.1/tests/integration/gs/test_resumable_downloads.py000066400000000000000000000374561225267101000243120ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
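# These tests drive ResumableDownloadHandler through failure, retry
# and restart scenarios, using CallbackTestHarness to inject errors
# at controlled byte offsets.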
""" Tests of resumable downloads. """ import errno import os import re import boto from boto.s3.resumable_download_handler import get_cur_file_size from boto.s3.resumable_download_handler import ResumableDownloadHandler from boto.exception import ResumableTransferDisposition from boto.exception import ResumableDownloadException from cb_test_harness import CallbackTestHarness from tests.integration.gs.testcase import GSTestCase SMALL_KEY_SIZE = 2 * 1024 # 2 KB. LARGE_KEY_SIZE = 500 * 1024 # 500 KB. class ResumableDownloadTests(GSTestCase): """Resumable download test suite.""" def make_small_key(self): small_src_key_as_string = os.urandom(SMALL_KEY_SIZE) small_src_key = self._MakeKey(data=small_src_key_as_string) return small_src_key_as_string, small_src_key def make_tracker_file(self, tmpdir=None): if not tmpdir: tmpdir = self._MakeTempDir() tracker_file = os.path.join(tmpdir, 'tracker') return tracker_file def make_dst_fp(self, tmpdir=None): if not tmpdir: tmpdir = self._MakeTempDir() dst_file = os.path.join(tmpdir, 'dstfile') return open(dst_file, 'w') def test_non_resumable_download(self): """ Tests that non-resumable downloads work """ dst_fp = self.make_dst_fp() small_src_key_as_string, small_src_key = self.make_small_key() small_src_key.get_contents_to_file(dst_fp) self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_download_without_persistent_tracker(self): """ Tests a single resumable download, with no tracker persistence """ res_download_handler = ResumableDownloadHandler() dst_fp = self.make_dst_fp() small_src_key_as_string, small_src_key = self.make_small_key() small_src_key.get_contents_to_file( dst_fp, res_download_handler=res_download_handler) self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_failed_download_with_persistent_tracker(self): """ Tests that failed resumable download leaves a correct tracker file """ harness = CallbackTestHarness() tmpdir = self._MakeTempDir() tracker_file_name = self.make_tracker_file(tmpdir) dst_fp = self.make_dst_fp(tmpdir) res_download_handler = ResumableDownloadHandler( tracker_file_name=tracker_file_name, num_retries=0) small_src_key_as_string, small_src_key = self.make_small_key() try: small_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) self.fail('Did not get expected ResumableDownloadException') except ResumableDownloadException, e: # We'll get a ResumableDownloadException at this point because # of CallbackTestHarness (above). Check that the tracker file was # created correctly. self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS) self.assertTrue(os.path.exists(tracker_file_name)) f = open(tracker_file_name) etag_line = f.readline() self.assertEquals(etag_line.rstrip('\n'), small_src_key.etag.strip('"\'')) def test_retryable_exception_recovery(self): """ Tests handling of a retryable exception """ # Test one of the RETRYABLE_EXCEPTIONS. exception = ResumableDownloadHandler.RETRYABLE_EXCEPTIONS[0] harness = CallbackTestHarness(exception=exception) res_download_handler = ResumableDownloadHandler(num_retries=1) dst_fp = self.make_dst_fp() small_src_key_as_string, small_src_key = self.make_small_key() small_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) # Ensure downloaded object has correct content. 
self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_broken_pipe_recovery(self): """ Tests handling of a Broken Pipe (which interacts with an httplib bug) """ exception = IOError(errno.EPIPE, "Broken pipe") harness = CallbackTestHarness(exception=exception) res_download_handler = ResumableDownloadHandler(num_retries=1) dst_fp = self.make_dst_fp() small_src_key_as_string, small_src_key = self.make_small_key() small_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) # Ensure downloaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_non_retryable_exception_handling(self): """ Tests resumable download that fails with a non-retryable exception """ harness = CallbackTestHarness( exception=OSError(errno.EACCES, 'Permission denied')) res_download_handler = ResumableDownloadHandler(num_retries=1) dst_fp = self.make_dst_fp() small_src_key_as_string, small_src_key = self.make_small_key() try: small_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) self.fail('Did not get expected OSError') except OSError, e: # Ensure the error was re-raised. self.assertEqual(e.errno, 13) def test_failed_and_restarted_download_with_persistent_tracker(self): """ Tests resumable download that fails once and then completes, with tracker file """ harness = CallbackTestHarness() tmpdir = self._MakeTempDir() tracker_file_name = self.make_tracker_file(tmpdir) dst_fp = self.make_dst_fp(tmpdir) small_src_key_as_string, small_src_key = self.make_small_key() res_download_handler = ResumableDownloadHandler( tracker_file_name=tracker_file_name, num_retries=1) small_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) # Ensure downloaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) # Ensure tracker file deleted. self.assertFalse(os.path.exists(tracker_file_name)) def test_multiple_in_process_failures_then_succeed(self): """ Tests resumable download that fails twice in one process, then completes """ res_download_handler = ResumableDownloadHandler(num_retries=3) dst_fp = self.make_dst_fp() small_src_key_as_string, small_src_key = self.make_small_key() small_src_key.get_contents_to_file( dst_fp, res_download_handler=res_download_handler) # Ensure downloaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_multiple_in_process_failures_then_succeed_with_tracker_file(self): """ Tests resumable download that fails completely in one process, then when restarted completes, using a tracker file """ # Set up test harness that causes more failures than a single # ResumableDownloadHandler instance will handle, writing enough data # before the first failure that some of it survives that process run. 
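# Two injected failures against a handler that allows zero retries
# guarantee the first call aborts, leaving a tracker file behind for
# the follow-up call to resume from.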
harness = CallbackTestHarness( fail_after_n_bytes=LARGE_KEY_SIZE/2, num_times_to_fail=2) larger_src_key_as_string = os.urandom(LARGE_KEY_SIZE) larger_src_key = self._MakeKey(data=larger_src_key_as_string) tmpdir = self._MakeTempDir() tracker_file_name = self.make_tracker_file(tmpdir) dst_fp = self.make_dst_fp(tmpdir) res_download_handler = ResumableDownloadHandler( tracker_file_name=tracker_file_name, num_retries=0) try: larger_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) self.fail('Did not get expected ResumableDownloadException') except ResumableDownloadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS) # Ensure a tracker file survived. self.assertTrue(os.path.exists(tracker_file_name)) # Try it one more time; this time should succeed. larger_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) self.assertEqual(LARGE_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(larger_src_key_as_string, larger_src_key.get_contents_as_string()) self.assertFalse(os.path.exists(tracker_file_name)) # Ensure some of the file was downloaded both before and after failure. self.assertTrue( len(harness.transferred_seq_before_first_failure) > 1 and len(harness.transferred_seq_after_first_failure) > 1) def test_download_with_initial_partial_download_before_failure(self): """ Tests resumable download that successfully downloads some content before it fails, then restarts and completes """ # Set up harness to fail download after several hundred KB so download # server will have saved something before we retry. harness = CallbackTestHarness( fail_after_n_bytes=LARGE_KEY_SIZE/2) larger_src_key_as_string = os.urandom(LARGE_KEY_SIZE) larger_src_key = self._MakeKey(data=larger_src_key_as_string) res_download_handler = ResumableDownloadHandler(num_retries=1) dst_fp = self.make_dst_fp() larger_src_key.get_contents_to_file( dst_fp, cb=harness.call, res_download_handler=res_download_handler) # Ensure downloaded object has correct content. self.assertEqual(LARGE_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(larger_src_key_as_string, larger_src_key.get_contents_as_string()) # Ensure some of the file was downloaded both before and after failure. self.assertTrue( len(harness.transferred_seq_before_first_failure) > 1 and len(harness.transferred_seq_after_first_failure) > 1) def test_zero_length_object_download(self): """ Tests downloading a zero-length object (exercises boundary conditions). """ res_download_handler = ResumableDownloadHandler() dst_fp = self.make_dst_fp() k = self._MakeKey() k.get_contents_to_file(dst_fp, res_download_handler=res_download_handler) self.assertEqual(0, get_cur_file_size(dst_fp)) def test_download_with_invalid_tracker_etag(self): """ Tests resumable download with a tracker file containing an invalid etag """ tmp_dir = self._MakeTempDir() dst_fp = self.make_dst_fp(tmp_dir) small_src_key_as_string, small_src_key = self.make_small_key() invalid_etag_tracker_file_name = os.path.join(tmp_dir, 'invalid_etag_tracker') f = open(invalid_etag_tracker_file_name, 'w') f.write('3.14159\n') f.close() res_download_handler = ResumableDownloadHandler( tracker_file_name=invalid_etag_tracker_file_name) # An error should be printed about the invalid tracker, but then it # should complete the download successfully.
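# An etag that cannot match any real object marks the tracker as
# stale; the handler discards it and falls back to a fresh,
# non-resumed download.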
small_src_key.get_contents_to_file( dst_fp, res_download_handler=res_download_handler) self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_download_with_inconsistent_etag_in_tracker(self): """ Tests resumable download with an inconsistent etag in tracker file """ tmp_dir = self._MakeTempDir() dst_fp = self.make_dst_fp(tmp_dir) small_src_key_as_string, small_src_key = self.make_small_key() inconsistent_etag_tracker_file_name = os.path.join(tmp_dir, 'inconsistent_etag_tracker') f = open(inconsistent_etag_tracker_file_name, 'w') good_etag = small_src_key.etag.strip('"\'') new_val_as_list = [] for c in reversed(good_etag): new_val_as_list.append(c) f.write('%s\n' % ''.join(new_val_as_list)) f.close() res_download_handler = ResumableDownloadHandler( tracker_file_name=inconsistent_etag_tracker_file_name) # An error should be printed about the mismatched tracker etag, but then # it should complete the download successfully. small_src_key.get_contents_to_file( dst_fp, res_download_handler=res_download_handler) self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp)) self.assertEqual(small_src_key_as_string, small_src_key.get_contents_as_string()) def test_download_with_unwritable_tracker_file(self): """ Tests resumable download with an unwritable tracker file """ # Make dir where tracker_file lives temporarily unwritable. tmp_dir = self._MakeTempDir() tracker_file_name = os.path.join(tmp_dir, 'tracker') save_mod = os.stat(tmp_dir).st_mode try: os.chmod(tmp_dir, 0) res_download_handler = ResumableDownloadHandler( tracker_file_name=tracker_file_name) except ResumableDownloadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT) self.assertNotEqual( e.message.find('Couldn\'t write URI tracker file'), -1) finally: # Restore original protection of dir where tracker_file lives. os.chmod(tmp_dir, save_mod) boto-2.20.1/tests/integration/gs/test_resumable_uploads.py000066400000000000000000000622751225267101000237600ustar00rootroot00000000000000# Copyright 2010 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests of Google Cloud Storage resumable uploads.
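Each test pairs a ResumableUploadHandler with a CallbackTestHarness
that injects failures at controlled points, verifying retry behaviour,
tracker-URI persistence, and detection of source files that change
mid-upload.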
""" import StringIO import errno import random import os import time import boto from boto import storage_uri from boto.gs.resumable_upload_handler import ResumableUploadHandler from boto.exception import InvalidUriError from boto.exception import ResumableTransferDisposition from boto.exception import ResumableUploadException from cb_test_harness import CallbackTestHarness from tests.integration.gs.testcase import GSTestCase SMALL_KEY_SIZE = 2 * 1024 # 2 KB. LARGE_KEY_SIZE = 500 * 1024 # 500 KB. LARGEST_KEY_SIZE = 1024 * 1024 # 1 MB. class ResumableUploadTests(GSTestCase): """Resumable upload test suite.""" def build_input_file(self, size): buf = [] # I manually construct the random data here instead of calling # os.urandom() because I want to constrain the range of data (in # this case to 0'..'9') so the test # code can easily overwrite part of the StringIO file with # known-to-be-different values. for i in range(size): buf.append(str(random.randint(0, 9))) file_as_string = ''.join(buf) return (file_as_string, StringIO.StringIO(file_as_string)) def make_small_file(self): return self.build_input_file(SMALL_KEY_SIZE) def make_large_file(self): return self.build_input_file(LARGE_KEY_SIZE) def make_tracker_file(self, tmpdir=None): if not tmpdir: tmpdir = self._MakeTempDir() tracker_file = os.path.join(tmpdir, 'tracker') return tracker_file def test_non_resumable_upload(self): """ Tests that non-resumable uploads work """ small_src_file_as_string, small_src_file = self.make_small_file() # Seek to end incase its the first test. small_src_file.seek(0, os.SEEK_END) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file(small_src_file) self.fail("should fail as need to rewind the filepointer") except AttributeError: pass # Now try calling with a proper rewind. dst_key.set_contents_from_file(small_src_file, rewind=True) self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) def test_upload_without_persistent_tracker(self): """ Tests a single resumable upload, with no tracker URI persistence """ res_upload_handler = ResumableUploadHandler() small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, res_upload_handler=res_upload_handler) self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) def test_failed_upload_with_persistent_tracker(self): """ Tests that failed resumable upload leaves a correct tracker URI file """ harness = CallbackTestHarness() tracker_file_name = self.make_tracker_file() res_upload_handler = ResumableUploadHandler( tracker_file_name=tracker_file_name, num_retries=0) small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file( small_src_file, cb=harness.call, res_upload_handler=res_upload_handler) self.fail('Did not get expected ResumableUploadException') except ResumableUploadException, e: # We'll get a ResumableUploadException at this point because # of CallbackTestHarness (above). Check that the tracker file was # created correctly. 
self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS) self.assertTrue(os.path.exists(tracker_file_name)) f = open(tracker_file_name) uri_from_file = f.readline().strip() f.close() self.assertEqual(uri_from_file, res_upload_handler.get_tracker_uri()) def test_retryable_exception_recovery(self): """ Tests handling of a retryable exception """ # Test one of the RETRYABLE_EXCEPTIONS. exception = ResumableUploadHandler.RETRYABLE_EXCEPTIONS[0] harness = CallbackTestHarness(exception=exception) res_upload_handler = ResumableUploadHandler(num_retries=1) small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, cb=harness.call, res_upload_handler=res_upload_handler) # Ensure uploaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) def test_broken_pipe_recovery(self): """ Tests handling of a Broken Pipe (which interacts with an httplib bug) """ exception = IOError(errno.EPIPE, "Broken pipe") harness = CallbackTestHarness(exception=exception) res_upload_handler = ResumableUploadHandler(num_retries=1) small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, cb=harness.call, res_upload_handler=res_upload_handler) # Ensure uploaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) def test_non_retryable_exception_handling(self): """ Tests a resumable upload that fails with a non-retryable exception """ harness = CallbackTestHarness( exception=OSError(errno.EACCES, 'Permission denied')) res_upload_handler = ResumableUploadHandler(num_retries=1) small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file( small_src_file, cb=harness.call, res_upload_handler=res_upload_handler) self.fail('Did not get expected OSError') except OSError, e: # Ensure the error was re-raised. self.assertEqual(e.errno, 13) def test_failed_and_restarted_upload_with_persistent_tracker(self): """ Tests resumable upload that fails once and then completes, with tracker file """ harness = CallbackTestHarness() tracker_file_name = self.make_tracker_file() res_upload_handler = ResumableUploadHandler( tracker_file_name=tracker_file_name, num_retries=1) small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, cb=harness.call, res_upload_handler=res_upload_handler) # Ensure uploaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) # Ensure tracker file deleted. 
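# (A successful transfer must clean up its tracker so a stale upload
# URI cannot bleed into a later, unrelated upload.)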
self.assertFalse(os.path.exists(tracker_file_name)) def test_multiple_in_process_failures_then_succeed(self): """ Tests resumable upload that fails twice in one process, then completes """ res_upload_handler = ResumableUploadHandler(num_retries=3) small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, res_upload_handler=res_upload_handler) # Ensure uploaded object has correct content. self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) def test_multiple_in_process_failures_then_succeed_with_tracker_file(self): """ Tests resumable upload that fails completely in one process, then when restarted completes, using a tracker file """ # Set up test harness that causes more failures than a single # ResumableUploadHandler instance will handle, writing enough data # before the first failure that some of it survives that process run. harness = CallbackTestHarness( fail_after_n_bytes=LARGE_KEY_SIZE/2, num_times_to_fail=2) tracker_file_name = self.make_tracker_file() res_upload_handler = ResumableUploadHandler( tracker_file_name=tracker_file_name, num_retries=1) larger_src_file_as_string, larger_src_file = self.make_large_file() larger_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file( larger_src_file, cb=harness.call, res_upload_handler=res_upload_handler) self.fail('Did not get expected ResumableUploadException') except ResumableUploadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS) # Ensure a tracker file survived. self.assertTrue(os.path.exists(tracker_file_name)) # Try it one more time; this time should succeed. larger_src_file.seek(0) dst_key.set_contents_from_file( larger_src_file, cb=harness.call, res_upload_handler=res_upload_handler) self.assertEqual(LARGE_KEY_SIZE, dst_key.size) self.assertEqual(larger_src_file_as_string, dst_key.get_contents_as_string()) self.assertFalse(os.path.exists(tracker_file_name)) # Ensure some of the file was uploaded both before and after failure. self.assertTrue(len(harness.transferred_seq_before_first_failure) > 1 and len(harness.transferred_seq_after_first_failure) > 1) def test_upload_with_initial_partial_upload_before_failure(self): """ Tests resumable upload that successfully uploads some content before it fails, then restarts and completes """ # Set up harness to fail upload after several hundred KB so upload # server will have saved something before we retry. harness = CallbackTestHarness( fail_after_n_bytes=LARGE_KEY_SIZE/2) res_upload_handler = ResumableUploadHandler(num_retries=1) larger_src_file_as_string, larger_src_file = self.make_large_file() larger_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( larger_src_file, cb=harness.call, res_upload_handler=res_upload_handler) # Ensure uploaded object has correct content. self.assertEqual(LARGE_KEY_SIZE, dst_key.size) self.assertEqual(larger_src_file_as_string, dst_key.get_contents_as_string()) # Ensure some of the file was uploaded both before and after failure. self.assertTrue(len(harness.transferred_seq_before_first_failure) > 1 and len(harness.transferred_seq_after_first_failure) > 1) def test_empty_file_upload(self): """ Tests uploading an empty file (exercises boundary conditions).
""" res_upload_handler = ResumableUploadHandler() empty_src_file = StringIO.StringIO('') empty_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( empty_src_file, res_upload_handler=res_upload_handler) self.assertEqual(0, dst_key.size) def test_upload_retains_metadata(self): """ Tests that resumable upload correctly sets passed metadata """ res_upload_handler = ResumableUploadHandler() headers = {'Content-Type' : 'text/plain', 'x-goog-meta-abc' : 'my meta', 'x-goog-acl' : 'public-read'} small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, headers=headers, res_upload_handler=res_upload_handler) self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) dst_key.open_read() self.assertEqual('text/plain', dst_key.content_type) self.assertTrue('abc' in dst_key.metadata) self.assertEqual('my meta', str(dst_key.metadata['abc'])) acl = dst_key.get_acl() for entry in acl.entries.entry_list: if str(entry.scope) == '': self.assertEqual('READ', str(acl.entries.entry_list[1].permission)) return self.fail('No scope found') def test_upload_with_file_size_change_between_starts(self): """ Tests resumable upload on a file that changes sizes between initial upload start and restart """ harness = CallbackTestHarness( fail_after_n_bytes=LARGE_KEY_SIZE/2) tracker_file_name = self.make_tracker_file() # Set up first process' ResumableUploadHandler not to do any # retries (initial upload request will establish expected size to # upload server). res_upload_handler = ResumableUploadHandler( tracker_file_name=tracker_file_name, num_retries=0) larger_src_file_as_string, larger_src_file = self.make_large_file() larger_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file( larger_src_file, cb=harness.call, res_upload_handler=res_upload_handler) self.fail('Did not get expected ResumableUploadException') except ResumableUploadException, e: # First abort (from harness-forced failure) should be # ABORT_CUR_PROCESS. self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS) # Ensure a tracker file survived. self.assertTrue(os.path.exists(tracker_file_name)) # Try it again, this time with different size source file. # Wait 1 second between retry attempts, to give upload server a # chance to save state so it can respond to changed file size with # 500 response in the next attempt. time.sleep(1) try: largest_src_file = self.build_input_file(LARGEST_KEY_SIZE)[1] largest_src_file.seek(0) dst_key.set_contents_from_file( largest_src_file, res_upload_handler=res_upload_handler) self.fail('Did not get expected ResumableUploadException') except ResumableUploadException, e: # This abort should be a hard abort (file size changing during # transfer). self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT) self.assertNotEqual(e.message.find('file size changed'), -1, e.message) def test_upload_with_file_size_change_during_upload(self): """ Tests resumable upload on a file that changes sizes while upload in progress """ # Create a file we can change during the upload. test_file_size = 500 * 1024 # 500 KB. 
test_file = self.build_input_file(test_file_size)[1] harness = CallbackTestHarness(fp_to_change=test_file, fp_change_pos=test_file_size) res_upload_handler = ResumableUploadHandler(num_retries=1) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file( test_file, cb=harness.call, res_upload_handler=res_upload_handler) self.fail('Did not get expected ResumableUploadException') except ResumableUploadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT) self.assertNotEqual( e.message.find('File changed during upload'), -1) def test_upload_with_file_content_change_during_upload(self): """ Tests resumable upload on a file that changes one byte of content (so, size stays the same) while upload in progress. """ def Execute(): res_upload_handler = ResumableUploadHandler(num_retries=1) dst_key = self._MakeKey(set_contents=False) bucket_uri = storage_uri('gs://' + dst_key.bucket.name) dst_key_uri = bucket_uri.clone_replace_name(dst_key.name) try: dst_key.set_contents_from_file( test_file, cb=harness.call, res_upload_handler=res_upload_handler) return False except ResumableUploadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT) # Ensure the file size didn't change. test_file.seek(0, os.SEEK_END) self.assertEqual(test_file_size, test_file.tell()) self.assertNotEqual( e.message.find('md5 signature doesn\'t match etag'), -1) # Ensure the bad data wasn't left around. try: dst_key_uri.get_key() self.fail('Did not get expected InvalidUriError') except InvalidUriError, e: pass return True test_file_size = 500 * 1024 # 500 KB # The sizes of all the blocks written, except the final block, must be a # multiple of 256K bytes. We need to trigger a failure after the first # 256K bytes have been uploaded so that at least one block of data is # written on the server. # See https://developers.google.com/storage/docs/concepts-techniques#resumable # for more information about chunking of uploads. n_bytes = 300 * 1024 # 300 KB delay = 0 # First, try the test without a delay. If that fails, try it with a # 15-second delay. The first attempt may fail to recognize that the # server has a block if the server hasn't yet committed that block # when we resume the transfer. This would cause a restarted upload # instead of a resumed upload. for attempt in range(2): test_file = self.build_input_file(test_file_size)[1] harness = CallbackTestHarness( fail_after_n_bytes=n_bytes, fp_to_change=test_file, # Write to byte 1, as the CallbackTestHarness writes # 3 bytes. This will result in the data on the server # being different than the local file. fp_change_pos=1, delay_after_change=delay) if Execute(): break if (attempt == 0 and 0 in harness.transferred_seq_after_first_failure): # We can confirm the upload was restarted instead of resumed # by determining if there is an entry of 0 in the # transferred_seq_after_first_failure list. # In that case, try again with a 15 second delay. delay = 15 continue self.fail('Did not get expected ResumableUploadException') def test_upload_with_content_length_header_set(self): """ Tests resumable upload on a file when the user supplies a Content-Length header. This is used by gsutil, for example, to set the content length when gzipping a file. 
""" res_upload_handler = ResumableUploadHandler() small_src_file_as_string, small_src_file = self.make_small_file() small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) try: dst_key.set_contents_from_file( small_src_file, res_upload_handler=res_upload_handler, headers={'Content-Length' : SMALL_KEY_SIZE}) self.fail('Did not get expected ResumableUploadException') except ResumableUploadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT) self.assertNotEqual( e.message.find('Attempt to specify Content-Length header'), -1) def test_upload_with_syntactically_invalid_tracker_uri(self): """ Tests resumable upload with a syntactically invalid tracker URI """ tmp_dir = self._MakeTempDir() syntactically_invalid_tracker_file_name = os.path.join(tmp_dir, 'synt_invalid_uri_tracker') with open(syntactically_invalid_tracker_file_name, 'w') as f: f.write('ftp://example.com') res_upload_handler = ResumableUploadHandler( tracker_file_name=syntactically_invalid_tracker_file_name) small_src_file_as_string, small_src_file = self.make_small_file() # An error should be printed about the invalid URI, but then it # should run the update successfully. small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, res_upload_handler=res_upload_handler) self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) def test_upload_with_invalid_upload_id_in_tracker_file(self): """ Tests resumable upload with invalid upload ID """ invalid_upload_id = ('http://pub.storage.googleapis.com/?upload_id=' 'AyzB2Uo74W4EYxyi5dp_-r68jz8rtbvshsv4TX7srJVkJ57CxTY5Dw2') tmpdir = self._MakeTempDir() invalid_upload_id_tracker_file_name = os.path.join(tmpdir, 'invalid_upload_id_tracker') with open(invalid_upload_id_tracker_file_name, 'w') as f: f.write(invalid_upload_id) res_upload_handler = ResumableUploadHandler( tracker_file_name=invalid_upload_id_tracker_file_name) small_src_file_as_string, small_src_file = self.make_small_file() # An error should occur, but then the tracker URI should be # regenerated and the the update should succeed. small_src_file.seek(0) dst_key = self._MakeKey(set_contents=False) dst_key.set_contents_from_file( small_src_file, res_upload_handler=res_upload_handler) self.assertEqual(SMALL_KEY_SIZE, dst_key.size) self.assertEqual(small_src_file_as_string, dst_key.get_contents_as_string()) self.assertNotEqual(invalid_upload_id, res_upload_handler.get_tracker_uri()) def test_upload_with_unwritable_tracker_file(self): """ Tests resumable upload with an unwritable tracker file """ # Make dir where tracker_file lives temporarily unwritable. tmp_dir = self._MakeTempDir() tracker_file_name = self.make_tracker_file(tmp_dir) save_mod = os.stat(tmp_dir).st_mode try: os.chmod(tmp_dir, 0) res_upload_handler = ResumableUploadHandler( tracker_file_name=tracker_file_name) except ResumableUploadException, e: self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT) self.assertNotEqual( e.message.find('Couldn\'t write URI tracker file'), -1) finally: # Restore original protection of dir where tracker_file lives. os.chmod(tmp_dir, save_mod) boto-2.20.1/tests/integration/gs/test_storage_uri.py000066400000000000000000000146361225267101000225770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2013, Google, Inc. # All rights reserved. 
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

"""Integration tests for StorageUri interface."""

import binascii
import re
import StringIO

from boto import storage_uri
from boto.exception import BotoClientError
from boto.gs.acl import SupportedPermissions as perms
from tests.integration.gs.testcase import GSTestCase


class GSStorageUriTest(GSTestCase):

    def testHasVersion(self):
        uri = storage_uri("gs://bucket/obj")
        self.assertFalse(uri.has_version())
        uri.version_id = "versionid"
        self.assertTrue(uri.has_version())

        uri = storage_uri("gs://bucket/obj")
        # Generation triggers versioning.
        uri.generation = 12345
        self.assertTrue(uri.has_version())
        uri.generation = None
        self.assertFalse(uri.has_version())

        # Zero-generation counts as a version.
        uri = storage_uri("gs://bucket/obj")
        uri.generation = 0
        self.assertTrue(uri.has_version())

    def testCloneReplaceKey(self):
        b = self._MakeBucket()
        k = b.new_key("obj")
        k.set_contents_from_string("stringdata")

        orig_uri = storage_uri("gs://%s/" % b.name)

        uri = orig_uri.clone_replace_key(k)
        self.assertTrue(uri.has_version())
        self.assertRegexpMatches(str(uri.generation), r"[0-9]+")

    def testSetAclXml(self):
        """Ensures that calls to the set_xml_acl functions succeed."""
        b = self._MakeBucket()
        k = b.new_key("obj")
        k.set_contents_from_string("stringdata")
        bucket_uri = storage_uri("gs://%s/" % b.name)

        # Get a valid ACL for an object.
        bucket_uri.object_name = "obj"
        bucket_acl = bucket_uri.get_acl()
        bucket_uri.object_name = None

        # Add a permission to the ACL.
        all_users_read_permission = ("<Entry><Scope type=\"AllUsers\"/>"
                                     "<Permission>READ</Permission></Entry>")
        acl_string = re.sub(r"</Entries>",
                            all_users_read_permission + "</Entries>",
                            bucket_acl.to_xml())

        # Test-generated owner IDs are not currently valid for buckets
        acl_no_owner_string = re.sub(r"<Owner>.*</Owner>", "", acl_string)

        # Set ACL on an object.
        bucket_uri.set_xml_acl(acl_string, "obj")
        # Set ACL on a bucket.
        bucket_uri.set_xml_acl(acl_no_owner_string)
        # Set the default ACL for a bucket.
        bucket_uri.set_def_xml_acl(acl_no_owner_string)

        # Verify all the ACLs were successfully applied.
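        # (GCS tracks three separate ACL resources here: the object's ACL,
        # the bucket's ACL, and the bucket's default object ACL; each is
        # read back independently below.)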
new_obj_acl_string = k.get_acl().to_xml() new_bucket_acl_string = bucket_uri.get_acl().to_xml() new_bucket_def_acl_string = bucket_uri.get_def_acl().to_xml() self.assertRegexpMatches(new_obj_acl_string, r"AllUsers") self.assertRegexpMatches(new_bucket_acl_string, r"AllUsers") self.assertRegexpMatches(new_bucket_def_acl_string, r"AllUsers") def testPropertiesUpdated(self): b = self._MakeBucket() bucket_uri = storage_uri("gs://%s" % b.name) key_uri = bucket_uri.clone_replace_name("obj") key_uri.set_contents_from_string("data1") self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+") k = b.get_key("obj") self.assertEqual(k.generation, key_uri.generation) self.assertEquals(k.get_contents_as_string(), "data1") key_uri.set_contents_from_stream(StringIO.StringIO("data2")) self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+") self.assertGreater(key_uri.generation, k.generation) k = b.get_key("obj") self.assertEqual(k.generation, key_uri.generation) self.assertEquals(k.get_contents_as_string(), "data2") key_uri.set_contents_from_file(StringIO.StringIO("data3")) self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+") self.assertGreater(key_uri.generation, k.generation) k = b.get_key("obj") self.assertEqual(k.generation, key_uri.generation) self.assertEquals(k.get_contents_as_string(), "data3") def testCompose(self): data1 = 'hello ' data2 = 'world!' expected_crc = 1238062967 b = self._MakeBucket() bucket_uri = storage_uri("gs://%s" % b.name) key_uri1 = bucket_uri.clone_replace_name("component1") key_uri1.set_contents_from_string(data1) key_uri2 = bucket_uri.clone_replace_name("component2") key_uri2.set_contents_from_string(data2) # Simple compose. key_uri_composite = bucket_uri.clone_replace_name("composite") components = [key_uri1, key_uri2] key_uri_composite.compose(components, content_type='text/plain') self.assertEquals(key_uri_composite.get_contents_as_string(), data1 + data2) composite_key = key_uri_composite.get_key() cloud_crc32c = binascii.hexlify( composite_key.cloud_hashes['crc32c']) self.assertEquals(cloud_crc32c, hex(expected_crc)[2:]) self.assertEquals(composite_key.content_type, 'text/plain') # Compose disallowed between buckets. key_uri1.bucket_name += '2' try: key_uri_composite.compose(components) self.fail('Composing between buckets didn\'t fail as expected.') except BotoClientError as err: self.assertEquals( err.reason, 'GCS does not support inter-bucket composing') boto-2.20.1/tests/integration/gs/test_versioning.py000066400000000000000000000230721225267101000224310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012, Google, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """Integration tests for GS versioning support.""" from xml import sax from boto import handler from boto.gs import acl from tests.integration.gs.testcase import GSTestCase class GSVersioningTest(GSTestCase): def testVersioningToggle(self): b = self._MakeBucket() self.assertFalse(b.get_versioning_status()) b.configure_versioning(True) self.assertTrue(b.get_versioning_status()) b.configure_versioning(False) self.assertFalse(b.get_versioning_status()) def testDeleteVersionedKey(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) k = b.get_key("foo") g1 = k.generation s2 = "test2" k.set_contents_from_string(s2) k = b.get_key("foo") g2 = k.generation versions = list(b.list_versions()) self.assertEqual(len(versions), 2) self.assertEqual(versions[0].name, "foo") self.assertEqual(versions[1].name, "foo") generations = [k.generation for k in versions] self.assertIn(g1, generations) self.assertIn(g2, generations) # Delete "current" version and make sure that version is no longer # visible from a basic GET call. b.delete_key("foo", generation=None) self.assertIsNone(b.get_key("foo")) # Both old versions should still be there when listed using the versions # query parameter. versions = list(b.list_versions()) self.assertEqual(len(versions), 2) self.assertEqual(versions[0].name, "foo") self.assertEqual(versions[1].name, "foo") generations = [k.generation for k in versions] self.assertIn(g1, generations) self.assertIn(g2, generations) # Delete generation 2 and make sure it's gone. b.delete_key("foo", generation=g2) versions = list(b.list_versions()) self.assertEqual(len(versions), 1) self.assertEqual(versions[0].name, "foo") self.assertEqual(versions[0].generation, g1) # Delete generation 1 and make sure it's gone. 
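        # With versioning enabled, delete_key(name, generation=None) removes
        # only the live version, leaving the archived generations listable as
        # above; passing an explicit generation, as below, permanently
        # removes that archived version.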
b.delete_key("foo", generation=g1) versions = list(b.list_versions()) self.assertEqual(len(versions), 0) def testGetVersionedKey(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) k = b.get_key("foo") g1 = k.generation o1 = k.get_contents_as_string() self.assertEqual(o1, s1) s2 = "test2" k.set_contents_from_string(s2) k = b.get_key("foo") g2 = k.generation self.assertNotEqual(g2, g1) o2 = k.get_contents_as_string() self.assertEqual(o2, s2) k = b.get_key("foo", generation=g1) self.assertEqual(k.get_contents_as_string(), s1) k = b.get_key("foo", generation=g2) self.assertEqual(k.get_contents_as_string(), s2) def testVersionedBucketCannedAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) k = b.get_key("foo") g1 = k.generation s2 = "test2" k.set_contents_from_string(s2) k = b.get_key("foo") g2 = k.generation acl1g1 = b.get_acl("foo", generation=g1) acl1g2 = b.get_acl("foo", generation=g2) owner1g1 = acl1g1.owner.id owner1g2 = acl1g2.owner.id self.assertEqual(owner1g1, owner1g2) entries1g1 = acl1g1.entries.entry_list entries1g2 = acl1g2.entries.entry_list self.assertEqual(len(entries1g1), len(entries1g2)) b.set_acl("public-read", key_name="foo", generation=g1) acl2g1 = b.get_acl("foo", generation=g1) acl2g2 = b.get_acl("foo", generation=g2) entries2g1 = acl2g1.entries.entry_list entries2g2 = acl2g2.entries.entry_list self.assertEqual(len(entries2g2), len(entries1g2)) public_read_entries1 = [e for e in entries2g1 if e.permission == "READ" and e.scope.type == acl.ALL_USERS] public_read_entries2 = [e for e in entries2g2 if e.permission == "READ" and e.scope.type == acl.ALL_USERS] self.assertEqual(len(public_read_entries1), 1) self.assertEqual(len(public_read_entries2), 0) def testVersionedBucketXmlAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) k = b.get_key("foo") g1 = k.generation s2 = "test2" k.set_contents_from_string(s2) k = b.get_key("foo") g2 = k.generation acl1g1 = b.get_acl("foo", generation=g1) acl1g2 = b.get_acl("foo", generation=g2) owner1g1 = acl1g1.owner.id owner1g2 = acl1g2.owner.id self.assertEqual(owner1g1, owner1g2) entries1g1 = acl1g1.entries.entry_list entries1g2 = acl1g2.entries.entry_list self.assertEqual(len(entries1g1), len(entries1g2)) acl_xml = ( '' + 'READ' + '') aclo = acl.ACL() h = handler.XmlHandler(aclo, b) sax.parseString(acl_xml, h) b.set_acl(aclo, key_name="foo", generation=g1) acl2g1 = b.get_acl("foo", generation=g1) acl2g2 = b.get_acl("foo", generation=g2) entries2g1 = acl2g1.entries.entry_list entries2g2 = acl2g2.entries.entry_list self.assertEqual(len(entries2g2), len(entries1g2)) public_read_entries1 = [e for e in entries2g1 if e.permission == "READ" and e.scope.type == acl.ALL_USERS] public_read_entries2 = [e for e in entries2g2 if e.permission == "READ" and e.scope.type == acl.ALL_USERS] self.assertEqual(len(public_read_entries1), 1) self.assertEqual(len(public_read_entries2), 0) def testVersionedObjectCannedAcl(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) k = b.get_key("foo") g1 = k.generation s2 = "test2" k.set_contents_from_string(s2) k = b.get_key("foo") g2 = k.generation acl1g1 = b.get_acl("foo", generation=g1) acl1g2 = b.get_acl("foo", generation=g2) owner1g1 = acl1g1.owner.id owner1g2 = acl1g2.owner.id self.assertEqual(owner1g1, owner1g2) entries1g1 = acl1g1.entries.entry_list entries1g2 = acl1g2.entries.entry_list 
self.assertEqual(len(entries1g1), len(entries1g2)) b.set_acl("public-read", key_name="foo", generation=g1) acl2g1 = b.get_acl("foo", generation=g1) acl2g2 = b.get_acl("foo", generation=g2) entries2g1 = acl2g1.entries.entry_list entries2g2 = acl2g2.entries.entry_list self.assertEqual(len(entries2g2), len(entries1g2)) public_read_entries1 = [e for e in entries2g1 if e.permission == "READ" and e.scope.type == acl.ALL_USERS] public_read_entries2 = [e for e in entries2g2 if e.permission == "READ" and e.scope.type == acl.ALL_USERS] self.assertEqual(len(public_read_entries1), 1) self.assertEqual(len(public_read_entries2), 0) def testCopyVersionedKey(self): b = self._MakeVersionedBucket() k = b.new_key("foo") s1 = "test1" k.set_contents_from_string(s1) k = b.get_key("foo") g1 = k.generation s2 = "test2" k.set_contents_from_string(s2) b2 = self._MakeVersionedBucket() b2.copy_key("foo2", b.name, "foo", src_generation=g1) k2 = b2.get_key("foo2") s3 = k2.get_contents_as_string() self.assertEqual(s3, s1) def testKeyGenerationUpdatesOnSet(self): b = self._MakeVersionedBucket() k = b.new_key("foo") self.assertIsNone(k.generation) k.set_contents_from_string("test1") g1 = k.generation self.assertRegexpMatches(g1, r'[0-9]+') self.assertEqual(k.metageneration, '1') k.set_contents_from_string("test2") g2 = k.generation self.assertNotEqual(g1, g2) self.assertRegexpMatches(g2, r'[0-9]+') self.assertGreater(int(g2), int(g1)) self.assertEqual(k.metageneration, '1') boto-2.20.1/tests/integration/gs/testcase.py000066400000000000000000000107761225267101000210310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2013, Google, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """Base TestCase class for gs integration tests.""" import shutil import tempfile import time from boto.exception import GSResponseError from boto.gs.connection import GSConnection from tests.integration.gs import util from tests.integration.gs.util import retry from tests.unit import unittest @unittest.skipUnless(util.has_google_credentials(), "Google credentials are required to run the Google " "Cloud Storage tests. Update your boto.cfg to run " "these tests.") class GSTestCase(unittest.TestCase): gs = True def setUp(self): self._conn = GSConnection() self._buckets = [] self._tempdirs = [] # Retry with an exponential backoff if a server error is received. This # ensures that we try *really* hard to clean up after ourselves. 
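    # (retry() is the exponential-backoff decorator from
    # tests/integration/gs/util.py, reproduced later in this archive; bucket
    # cleanup can transiently return server errors while versioned objects
    # are still being deleted, hence the retries.)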
@retry(GSResponseError) def tearDown(self): while len(self._tempdirs): tmpdir = self._tempdirs.pop() shutil.rmtree(tmpdir, ignore_errors=True) while(len(self._buckets)): b = self._buckets[-1] bucket = self._conn.get_bucket(b) while len(list(bucket.list_versions())) > 0: for k in bucket.list_versions(): bucket.delete_key(k.name, generation=k.generation) bucket.delete() self._buckets.pop() def _GetConnection(self): """Returns the GSConnection object used to connect to GCS.""" return self._conn def _MakeTempName(self): """Creates and returns a temporary name for testing that is likely to be unique.""" return "boto-gs-test-%s" % repr(time.time()).replace(".", "-") def _MakeBucketName(self): """Creates and returns a temporary bucket name for testing that is likely to be unique.""" b = self._MakeTempName() self._buckets.append(b) return b def _MakeBucket(self): """Creates and returns temporary bucket for testing. After the test, the contents of the bucket and the bucket itself will be deleted.""" b = self._conn.create_bucket(self._MakeBucketName()) return b def _MakeKey(self, data='', bucket=None, set_contents=True): """Creates and returns a Key with provided data. If no bucket is given, a temporary bucket is created.""" if data and not set_contents: # The data and set_contents parameters are mutually exclusive. raise ValueError('MakeKey called with a non-empty data parameter ' 'but set_contents was set to False.') if not bucket: bucket = self._MakeBucket() key_name = self._MakeTempName() k = bucket.new_key(key_name) if set_contents: k.set_contents_from_string(data) return k def _MakeVersionedBucket(self): """Creates and returns temporary versioned bucket for testing. After the test, the contents of the bucket and the bucket itself will be deleted.""" b = self._MakeBucket() b.configure_versioning(True) return b def _MakeTempDir(self): """Creates and returns a temporary directory on disk. After the test, the contents of the directory and the directory itself will be deleted.""" tmpdir = tempfile.mkdtemp(prefix=self._MakeTempName()) self._tempdirs.append(tmpdir) return tmpdir boto-2.20.1/tests/integration/gs/util.py000066400000000000000000000062511225267101000201640ustar00rootroot00000000000000# Copyright (c) 2012, Google, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
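# A minimal usage sketch for the retry decorator defined below (the
# tries/delay/backoff values mirror its defaults; `flaky_call` and `conn`
# are hypothetical):
#
#   @retry(GSResponseError, tries=4, delay=3, backoff=2)
#   def flaky_call():
#       return conn.get_bucket('my-bucket')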
import time from boto.provider import Provider _HAS_GOOGLE_CREDENTIALS = None def has_google_credentials(): global _HAS_GOOGLE_CREDENTIALS if _HAS_GOOGLE_CREDENTIALS is None: provider = Provider('google') if (provider.get_access_key() is None or provider.get_secret_key() is None): _HAS_GOOGLE_CREDENTIALS = False else: _HAS_GOOGLE_CREDENTIALS = True return _HAS_GOOGLE_CREDENTIALS def retry(ExceptionToCheck, tries=4, delay=3, backoff=2, logger=None): """Retry calling the decorated function using an exponential backoff. Taken from: https://github.com/saltycrane/retry-decorator Licensed under BSD: https://github.com/saltycrane/retry-decorator/blob/master/LICENSE :param ExceptionToCheck: the exception to check. may be a tuple of exceptions to check :type ExceptionToCheck: Exception or tuple :param tries: number of times to try (not retry) before giving up :type tries: int :param delay: initial delay between retries in seconds :type delay: int :param backoff: backoff multiplier e.g. value of 2 will double the delay each retry :type backoff: int :param logger: logger to use. If None, print :type logger: logging.Logger instance """ def deco_retry(f): def f_retry(*args, **kwargs): mtries, mdelay = tries, delay try_one_last_time = True while mtries > 1: try: return f(*args, **kwargs) try_one_last_time = False break except ExceptionToCheck, e: msg = "%s, Retrying in %d seconds..." % (str(e), mdelay) if logger: logger.warning(msg) else: print msg time.sleep(mdelay) mtries -= 1 mdelay *= backoff if try_one_last_time: return f(*args, **kwargs) return return f_retry # true decorator return deco_retry boto-2.20.1/tests/integration/iam/000077500000000000000000000000001225267101000167665ustar00rootroot00000000000000boto-2.20.1/tests/integration/iam/__init__.py000066400000000000000000000021121225267101000210730ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/iam/test_cert_verification.py000066400000000000000000000030061225267101000240750ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.iam class IAMCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): iam = True regions = boto.iam.regions() def sample_service_call(self, conn): conn.get_all_users() boto-2.20.1/tests/integration/kinesis/000077500000000000000000000000001225267101000176655ustar00rootroot00000000000000boto-2.20.1/tests/integration/kinesis/__init__.py000066400000000000000000000000001225267101000217640ustar00rootroot00000000000000boto-2.20.1/tests/integration/kinesis/test_kinesis.py000066400000000000000000000055771225267101000227610ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
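# The test below walks the full Kinesis round trip: create_stream(), poll
# describe_stream() until the stream reports ACTIVE, put_record(), then
# drain the shard via get_shard_iterator()/get_records() until the record
# comes back.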
import boto import time from unittest import TestCase class TimeoutError(Exception): pass class TestKinesis(TestCase): def setUp(self): self.kinesis = boto.connect_kinesis() def tearDown(self): # Delete the stream even if there is a failure self.kinesis.delete_stream('test') def test_kinesis(self): kinesis = self.kinesis # Create a new stream kinesis.create_stream('test', 1) # Wait for the stream to be ready tries = 0 while tries < 10: tries += 1 time.sleep(15) response = kinesis.describe_stream('test') if response['StreamDescription']['StreamStatus'] == 'ACTIVE': shard_id = response['StreamDescription']['Shards'][0]['ShardId'] break else: raise TimeoutError('Stream is still not active, aborting...') # Get ready to process some data from the stream response = kinesis.get_shard_iterator('test', shard_id, 'TRIM_HORIZON') shard_iterator = response['ShardIterator'] # Write some data to the stream data = 'Some data ...' response = kinesis.put_record('test', data, data) # Wait for the data to show up tries = 0 while tries < 100: tries += 1 time.sleep(1) response = kinesis.get_records(shard_iterator) shard_iterator = response['NextShardIterator'] if len(response['Records']): break else: raise TimeoutError('No records found, aborting...') # Read the data, which should be the same as what we wrote self.assertEqual(1, len(response['Records'])) self.assertEqual(data, response['Records'][0]['Data']) boto-2.20.1/tests/integration/mws/000077500000000000000000000000001225267101000170265ustar00rootroot00000000000000boto-2.20.1/tests/integration/mws/__init__.py000066400000000000000000000000001225267101000211250ustar00rootroot00000000000000boto-2.20.1/tests/integration/mws/test.py000066400000000000000000000107501225267101000203620ustar00rootroot00000000000000#!/usr/bin/env python try: from tests.unit import unittest except ImportError: import unittest import sys import os import os.path from datetime import datetime, timedelta simple = os.environ.get('MWS_MERCHANT', None) if not simple: print """ Please set the MWS_MERCHANT environmental variable to your Merchant or SellerId to enable MWS tests. 
""" advanced = False isolator = True if __name__ == "__main__": devpath = os.path.relpath(os.path.join('..', '..', '..'), start=os.path.dirname(__file__)) sys.path = [devpath] + sys.path advanced = simple and True or False if advanced: print '>>> advanced MWS tests; using local boto sources' from boto.mws.connection import MWSConnection class MWSTestCase(unittest.TestCase): def setUp(self): self.mws = MWSConnection(Merchant=simple, debug=0) @unittest.skipUnless(simple and isolator, "skipping simple test") def test_feedlist(self): self.mws.get_feed_submission_list() @unittest.skipUnless(simple and isolator, "skipping simple test") def test_inbound_status(self): response = self.mws.get_inbound_service_status() status = response.GetServiceStatusResult.Status self.assertIn(status, ('GREEN', 'GREEN_I', 'YELLOW', 'RED')) @property def marketplace(self): try: return self._marketplace except AttributeError: response = self.mws.list_marketplace_participations() result = response.ListMarketplaceParticipationsResult self._marketplace = result.ListMarketplaces.Marketplace[0] return self.marketplace @property def marketplace_id(self): return self.marketplace.MarketplaceId @unittest.skipUnless(simple and isolator, "skipping simple test") def test_marketplace_participations(self): response = self.mws.list_marketplace_participations() result = response.ListMarketplaceParticipationsResult self.assertTrue(result.ListMarketplaces.Marketplace[0].MarketplaceId) @unittest.skipUnless(simple and isolator, "skipping simple test") def test_get_product_categories_for_asin(self): asin = '144930544X' response = self.mws.get_product_categories_for_asin( MarketplaceId=self.marketplace_id, ASIN=asin) result = response._result self.assertTrue(int(result.Self.ProductCategoryId) == 21) @unittest.skipUnless(simple and isolator, "skipping simple test") def test_list_matching_products(self): response = self.mws.list_matching_products( MarketplaceId=self.marketplace_id, Query='boto') products = response._result.Products self.assertTrue(len(products)) @unittest.skipUnless(simple and isolator, "skipping simple test") def test_get_matching_product(self): asin = 'B001UDRNHO' response = self.mws.get_matching_product( MarketplaceId=self.marketplace_id, ASINList=[asin]) attributes = response._result[0].Product.AttributeSets.ItemAttributes self.assertEqual(attributes[0].Label, 'Serengeti') @unittest.skipUnless(simple and isolator, "skipping simple test") def test_get_matching_product_for_id(self): asins = ['B001UDRNHO', '144930544X'] response = self.mws.get_matching_product_for_id( MarketplaceId=self.marketplace_id, IdType='ASIN', IdList=asins) self.assertEqual(len(response._result), 2) for result in response._result: self.assertEqual(len(result.Products.Product), 1) @unittest.skipUnless(simple and isolator, "skipping simple test") def test_get_lowest_offer_listings_for_asin(self): asin = '144930544X' response = self.mws.get_lowest_offer_listings_for_asin( MarketplaceId=self.marketplace_id, ItemCondition='New', ASINList=[asin]) listings = response._result[0].Product.LowestOfferListings self.assertTrue(len(listings.LowestOfferListing)) @unittest.skipUnless(simple and isolator, "skipping simple test") def test_list_inventory_supply(self): asof = (datetime.today() - timedelta(days=30)).isoformat() response = self.mws.list_inventory_supply(QueryStartDateTime=asof, ResponseGroup='Basic') self.assertTrue(hasattr(response._result, 'InventorySupplyList')) if __name__ == "__main__": unittest.main() 
boto-2.20.1/tests/integration/opsworks/000077500000000000000000000000001225267101000201075ustar00rootroot00000000000000boto-2.20.1/tests/integration/opsworks/__init__.py000066400000000000000000000000001225267101000222060ustar00rootroot00000000000000boto-2.20.1/tests/integration/opsworks/test_layer1.py000066400000000000000000000032551225267101000227220ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import unittest import time from boto.opsworks.layer1 import OpsWorksConnection from boto.opsworks.exceptions import ValidationException class TestOpsWorksConnection(unittest.TestCase): def setUp(self): self.api = OpsWorksConnection() def test_describe_stacks(self): response = self.api.describe_stacks() self.assertIn('Stacks', response) def test_validation_errors(self): with self.assertRaises(ValidationException): self.api.create_stack('testbotostack', 'us-east-1', 'badarn', 'badarn2') boto-2.20.1/tests/integration/rds/000077500000000000000000000000001225267101000170105ustar00rootroot00000000000000boto-2.20.1/tests/integration/rds/__init__.py000066400000000000000000000022271225267101000211240ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
boto-2.20.1/tests/integration/rds/test_cert_verification.py000066400000000000000000000030141225267101000241160ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.rds class RDSCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): rds = True regions = boto.rds.regions() def sample_service_call(self, conn): conn.get_all_dbinstances() boto-2.20.1/tests/integration/rds/test_db_subnet_group.py000066400000000000000000000073761225267101000236170ustar00rootroot00000000000000# Copyright (c) 2013 Franc Carter franc.carter@gmail.com # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
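# The subnet-group test below provisions a throwaway VPC with one /24
# subnet per available AZ, exercises create/modify/delete of the DB subnet
# group against it, then tears the networking back down.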
""" Check that db_subnet_groups behave sanely """ import time import unittest import boto.rds from boto.vpc import VPCConnection from boto.rds import RDSConnection def _is_ok(subnet_group, vpc_id, description, subnets): if subnet_group.vpc_id != vpc_id: print 'vpc_id is ',subnet_group.vpc_id, 'but should be ', vpc_id return 0 if subnet_group.description != description: print "description is '"+subnet_group.description+"' but should be '"+description+"'" return 0 if set(subnet_group.subnet_ids) != set(subnets): subnets_are = ','.join(subnet_group.subnet_ids) should_be = ','.join(subnets) print "subnets are "+subnets_are+" but should be "+should_be return 0 return 1 class DbSubnetGroupTest(unittest.TestCase): rds = True def test_db_subnet_group(self): vpc_api = VPCConnection() rds_api = RDSConnection() vpc = vpc_api.create_vpc('10.0.0.0/16') az_list = vpc_api.get_all_zones(filters={'state':'available'}) subnet = list() n = 0; for az in az_list: try: subnet.append(vpc_api.create_subnet(vpc.id, '10.0.'+str(n)+'.0/24',availability_zone=az.name)) n = n+1 except: pass grp_name = 'db_subnet_group'+str(int(time.time())) subnet_group = rds_api.create_db_subnet_group(grp_name, grp_name, [subnet[0].id,subnet[1].id]) if not _is_ok(subnet_group, vpc.id, grp_name, [subnet[0].id,subnet[1].id]): raise Exception("create_db_subnet_group returned bad values") rds_api.modify_db_subnet_group(grp_name, description='new description') subnet_grps = rds_api.get_all_db_subnet_groups(name=grp_name) if not _is_ok(subnet_grps[0], vpc.id, 'new description', [subnet[0].id,subnet[1].id]): raise Exception("modifying the subnet group desciption returned bad values") rds_api.modify_db_subnet_group(grp_name, subnet_ids=[subnet[1].id,subnet[2].id]) subnet_grps = rds_api.get_all_db_subnet_groups(name=grp_name) if not _is_ok(subnet_grps[0], vpc.id, 'new description', [subnet[1].id,subnet[2].id]): raise Exception("modifying the subnet group subnets returned bad values") rds_api.delete_db_subnet_group(subnet_group.name) try: rds_api.get_all_db_subnet_groups(name=grp_name) raise Exception(subnet_group.name+" still accessible after delete_db_subnet_group") except: pass while n > 0: n = n-1 vpc_api.delete_subnet(subnet[n].id) vpc_api.delete_vpc(vpc.id) boto-2.20.1/tests/integration/rds/test_promote_modify.py000066400000000000000000000124611225267101000234610ustar00rootroot00000000000000# Author: Bruce Pennypacker # # Create a temporary RDS database instance, then create a read-replica of the # instance. Once the replica is available, promote it and verify that the # promotion succeeds, then rename it. Delete the databases upon completion i # of the tests. # # For each step (creating the databases, promoting, etc) we loop for up # to 15 minutes to wait for the instance to become available. It should # never take that long for any of the steps to complete. 
""" Check that promotion of read replicas and renaming instances works as expected """ import unittest import time from boto.rds import RDSConnection class PromoteReadReplicaTest(unittest.TestCase): rds = True def setUp(self): self.conn = RDSConnection() self.masterDB_name = "boto-db-%s" % str(int(time.time())) self.replicaDB_name = "replica-%s" % self.masterDB_name self.renamedDB_name = "renamed-replica-%s" % self.masterDB_name def tearDown(self): instances = self.conn.get_all_dbinstances() for db in [self.masterDB_name, self.replicaDB_name, self.renamedDB_name]: for i in instances: if i.id == db: self.conn.delete_dbinstance(db, skip_final_snapshot=True) def test_promote(self): print '--- running RDS promotion & renaming tests ---' self.masterDB = self.conn.create_dbinstance(self.masterDB_name, 5, 'db.t1.micro', 'root', 'bototestpw') # Wait up to 15 minutes for the masterDB to become available print '--- waiting for "%s" to become available ---' % self.masterDB_name wait_timeout = time.time() + (15 * 60) time.sleep(60) instances = self.conn.get_all_dbinstances(self.masterDB_name) inst = instances[0] while wait_timeout > time.time() and inst.status != 'available': time.sleep(15) instances = self.conn.get_all_dbinstances(self.masterDB_name) inst = instances[0] self.assertTrue(inst.status == 'available') self.replicaDB = self.conn.create_dbinstance_read_replica(self.replicaDB_name, self.masterDB_name) # Wait up to 15 minutes for the replicaDB to become available print '--- waiting for "%s" to become available ---' % self.replicaDB_name wait_timeout = time.time() + (15 * 60) time.sleep(60) instances = self.conn.get_all_dbinstances(self.replicaDB_name) inst = instances[0] while wait_timeout > time.time() and inst.status != 'available': time.sleep(15) instances = self.conn.get_all_dbinstances(self.replicaDB_name) inst = instances[0] self.assertTrue(inst.status == 'available') # Promote the replicaDB and wait for it to become available self.replicaDB = self.conn.promote_read_replica(self.replicaDB_name) # Wait up to 15 minutes for the replicaDB to become available print '--- waiting for "%s" to be promoted and available ---' % self.replicaDB_name wait_timeout = time.time() + (15 * 60) time.sleep(60) instances = self.conn.get_all_dbinstances(self.replicaDB_name) inst = instances[0] while wait_timeout > time.time() and inst.status != 'available': time.sleep(15) instances = self.conn.get_all_dbinstances(self.replicaDB_name) inst = instances[0] # Verify that the replica is now a standalone instance and no longer # functioning as a read replica self.assertTrue(inst) self.assertTrue(inst.status == 'available') self.assertFalse(inst.status_infos) # Verify that the master no longer has any read replicas instances = self.conn.get_all_dbinstances(self.masterDB_name) inst = instances[0] self.assertFalse(inst.read_replica_dbinstance_identifiers) print '--- renaming "%s" to "%s" ---' % ( self.replicaDB_name, self.renamedDB_name ) self.renamedDB = self.conn.modify_dbinstance(self.replicaDB_name, new_instance_id=self.renamedDB_name, apply_immediately=True) # Wait up to 15 minutes for the masterDB to become available print '--- waiting for "%s" to exist ---' % self.renamedDB_name wait_timeout = time.time() + (15 * 60) time.sleep(60) # Wait up to 15 minutes until the new name shows up in the instance table found = False while found == False and wait_timeout > time.time(): instances = self.conn.get_all_dbinstances() for i in instances: if i.id == self.renamedDB_name: found = True if found == False: time.sleep(15) 
self.assertTrue(found) print '--- waiting for "%s" to become available ---' % self.renamedDB_name instances = self.conn.get_all_dbinstances(self.renamedDB_name) inst = instances[0] # Now wait for the renamed instance to become available while wait_timeout > time.time() and inst.status != 'available': time.sleep(15) instances = self.conn.get_all_dbinstances(self.renamedDB_name) inst = instances[0] self.assertTrue(inst.status == 'available') # Since the replica DB was renamed... self.replicaDB = None print '--- tests completed ---' boto-2.20.1/tests/integration/redshift/000077500000000000000000000000001225267101000200305ustar00rootroot00000000000000boto-2.20.1/tests/integration/redshift/__init__.py000066400000000000000000000000001225267101000221270ustar00rootroot00000000000000boto-2.20.1/tests/integration/redshift/test_cert_verification.py000066400000000000000000000026471225267101000251510ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import unittest from tests.integration import ServiceCertVerificationTest import boto.redshift class RedshiftCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): redshift = True regions = boto.redshift.regions() def sample_service_call(self, conn): conn.describe_cluster_versions() boto-2.20.1/tests/integration/redshift/test_layer1.py000066400000000000000000000124101225267101000226340ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import unittest import time from nose.plugins.attrib import attr from boto.redshift.layer1 import RedshiftConnection from boto.redshift.exceptions import ClusterNotFoundFault from boto.redshift.exceptions import ResizeNotFoundFault class TestRedshiftLayer1Management(unittest.TestCase): redshift = True def setUp(self): self.api = RedshiftConnection() self.cluster_prefix = 'boto-redshift-cluster-%s' self.node_type = 'dw.hs1.xlarge' self.master_username = 'mrtest' self.master_password = 'P4ssword' self.db_name = 'simon' # Redshift was taking ~20 minutes to bring clusters up in testing. self.wait_time = 60 * 20 def cluster_id(self): # This need to be unique per-test method. return self.cluster_prefix % str(int(time.time())) def create_cluster(self): cluster_id = self.cluster_id() self.api.create_cluster( cluster_id, self.node_type, self.master_username, self.master_password, db_name=self.db_name, number_of_nodes=3 ) # Wait for it to come up. time.sleep(self.wait_time) self.addCleanup(self.delete_cluster_the_slow_way, cluster_id) return cluster_id def delete_cluster_the_slow_way(self, cluster_id): # Because there might be other operations in progress. :( time.sleep(self.wait_time) self.api.delete_cluster(cluster_id, skip_final_cluster_snapshot=True) @attr('notdefault') def test_create_delete_cluster(self): cluster_id = self.cluster_id() self.api.create_cluster( cluster_id, self.node_type, self.master_username, self.master_password, db_name=self.db_name, number_of_nodes=3 ) # Wait for it to come up. time.sleep(self.wait_time) self.api.delete_cluster(cluster_id, skip_final_cluster_snapshot=True) @attr('notdefault') def test_as_much_as_possible_before_teardown(self): # Per @garnaat, for the sake of suite time, we'll test as much as we # can before we teardown. # Test a non-existent cluster ID. with self.assertRaises(ClusterNotFoundFault): self.api.describe_clusters('badpipelineid') # Now create the cluster & move on. cluster_id = self.create_cluster() # Test never resized. with self.assertRaises(ResizeNotFoundFault): self.api.describe_resize(cluster_id) # The cluster shows up in describe_clusters clusters = self.api.describe_clusters()['DescribeClustersResponse']\ ['DescribeClustersResult']\ ['Clusters'] cluster_ids = [c['ClusterIdentifier'] for c in clusters] self.assertIn(cluster_id, cluster_ids) # The cluster shows up in describe_clusters w/ id response = self.api.describe_clusters(cluster_id) self.assertEqual(response['DescribeClustersResponse']\ ['DescribeClustersResult']['Clusters'][0]\ ['ClusterIdentifier'], cluster_id) snapshot_id = "snap-%s" % cluster_id # Test creating a snapshot. response = self.api.create_cluster_snapshot(snapshot_id, cluster_id) self.assertEqual(response['CreateClusterSnapshotResponse']\ ['CreateClusterSnapshotResult']['Snapshot']\ ['SnapshotIdentifier'], snapshot_id) self.assertEqual(response['CreateClusterSnapshotResponse']\ ['CreateClusterSnapshotResult']['Snapshot']\ ['Status'], 'creating') self.addCleanup(self.api.delete_cluster_snapshot, snapshot_id) # More waiting. :( time.sleep(self.wait_time) # Describe the snapshots. 
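        # The check below takes the most recent snapshot ([-1]) and verifies
        # it is the manual one created above; automated snapshots for the
        # same cluster may also appear in this listing.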
response = self.api.describe_cluster_snapshots( cluster_identifier=cluster_id ) snap = response['DescribeClusterSnapshotsResponse']\ ['DescribeClusterSnapshotsResult']['Snapshots'][-1] self.assertEqual(snap['SnapshotType'], 'manual') self.assertEqual(snap['DBName'], self.db_name) boto-2.20.1/tests/integration/route53/000077500000000000000000000000001225267101000175265ustar00rootroot00000000000000boto-2.20.1/tests/integration/route53/__init__.py000066400000000000000000000021121225267101000216330ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/route53/test_cert_verification.py000066400000000000000000000030351225267101000246370ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.route53 class Route53CertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): route53 = True regions = boto.route53.regions() def sample_service_call(self, conn): conn.get_all_hosted_zones() boto-2.20.1/tests/integration/route53/test_resourcerecordsets.py000066400000000000000000000067101225267101000250700ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import unittest from boto.route53.connection import Route53Connection from boto.route53.record import ResourceRecordSets class TestRoute53ResourceRecordSets(unittest.TestCase): def setUp(self): super(TestRoute53ResourceRecordSets, self).setUp() self.conn = Route53Connection() self.zone = self.conn.create_zone('example.com') def tearDown(self): self.zone.delete() super(TestRoute53ResourceRecordSets, self).tearDown() def test_add_change(self): rrs = ResourceRecordSets(self.conn, self.zone.id) created = rrs.add_change("CREATE", "vpn.example.com.", "A") created.add_value('192.168.0.25') rrs.commit() rrs = ResourceRecordSets(self.conn, self.zone.id) deleted = rrs.add_change('DELETE', "vpn.example.com.", "A") deleted.add_value('192.168.0.25') rrs.commit() def test_record_count(self): rrs = ResourceRecordSets(self.conn, self.zone.id) hosts = 101 for hostid in range(hosts): rec = "test" + str(hostid) + ".example.com" created = rrs.add_change("CREATE", rec, "A") ip = '192.168.0.' + str(hostid) created.add_value(ip) # Max 100 changes per commit if (hostid + 1) % 100 == 0: rrs.commit() rrs = ResourceRecordSets(self.conn, self.zone.id) rrs.commit() all_records = self.conn.get_all_rrsets(self.zone.id) # First time around was always fine i = 0 for rset in all_records: i += 1 # Second time was a failure i = 0 for rset in all_records: i += 1 # Cleanup individual records rrs = ResourceRecordSets(self.conn, self.zone.id) for hostid in range(hosts): rec = "test" + str(hostid) + ".example.com" deleted = rrs.add_change("DELETE", rec, "A") ip = '192.168.0.'
+ str(hostid) deleted.add_value(ip) # Max 100 changes per commit if (hostid + 1) % 100 == 0: rrs.commit() rrs = ResourceRecordSets(self.conn, self.zone.id) rrs.commit() # 2nd count should match the number of hosts plus NS/SOA records records = hosts + 2 self.assertEqual(i, records) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/integration/route53/test_zone.py000066400000000000000000000130631225267101000221150ustar00rootroot00000000000000# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton # www.bluepines.org # Copyright (c) 2012 42 Lines Inc., Jim Browne # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import unittest from boto.route53.connection import Route53Connection from boto.exception import TooManyRecordsException class TestRoute53Zone(unittest.TestCase): @classmethod def setUpClass(self): route53 = Route53Connection() zone = route53.get_zone('example.com') if zone is not None: zone.delete() self.zone = route53.create_zone('example.com') def test_nameservers(self): self.zone.get_nameservers() def test_a(self): self.zone.add_a('example.com', '102.11.23.1', 80) record = self.zone.get_a('example.com') self.assertEquals(record.name, u'example.com.') self.assertEquals(record.resource_records, [u'102.11.23.1']) self.assertEquals(record.ttl, u'80') self.zone.update_a('example.com', '186.143.32.2', '800') record = self.zone.get_a('example.com') self.assertEquals(record.name, u'example.com.') self.assertEquals(record.resource_records, [u'186.143.32.2']) self.assertEquals(record.ttl, u'800') def test_cname(self): self.zone.add_cname('www.example.com', 'webserver.example.com', 200) record = self.zone.get_cname('www.example.com') self.assertEquals(record.name, u'www.example.com.') self.assertEquals(record.resource_records, [u'webserver.example.com.']) self.assertEquals(record.ttl, u'200') self.zone.update_cname('www.example.com', 'web.example.com', 45) record = self.zone.get_cname('www.example.com') self.assertEquals(record.name, u'www.example.com.') self.assertEquals(record.resource_records, [u'web.example.com.']) self.assertEquals(record.ttl, u'45') def test_mx(self): self.zone.add_mx('example.com', ['10 mx1.example.com', '20 mx2.example.com'], 1000) record = self.zone.get_mx('example.com') self.assertEquals(set(record.resource_records), set([u'10 mx1.example.com.', u'20 mx2.example.com.'])) self.assertEquals(record.ttl, u'1000') self.zone.update_mx('example.com', ['10 mail1.example.com', '20 mail2.example.com'], 50) record = self.zone.get_mx('example.com') 
self.assertEquals(set(record.resource_records), set([u'10 mail1.example.com.', '20 mail2.example.com.'])) self.assertEquals(record.ttl, u'50') def test_get_records(self): self.zone.get_records() def test_get_nameservers(self): self.zone.get_nameservers() def test_get_zones(self): route53 = Route53Connection() route53.get_zones() def test_identifiers_wrrs(self): self.zone.add_a('wrr.example.com', '1.2.3.4', identifier=('foo', '20')) self.zone.add_a('wrr.example.com', '5.6.7.8', identifier=('bar', '10')) wrrs = self.zone.find_records('wrr.example.com', 'A', all=True) self.assertEquals(len(wrrs), 2) self.zone.delete_a('wrr.example.com', all=True) def test_identifiers_lbrs(self): self.zone.add_a('lbr.example.com', '4.3.2.1', identifier=('baz', 'us-east-1')) self.zone.add_a('lbr.example.com', '8.7.6.5', identifier=('bam', 'us-west-1')) lbrs = self.zone.find_records('lbr.example.com', 'A', all=True) self.assertEquals(len(lbrs), 2) self.zone.delete_a('lbr.example.com', identifier=('bam', 'us-west-1')) self.zone.delete_a('lbr.example.com', identifier=('baz', 'us-east-1')) def test_toomany_exception(self): self.zone.add_a('exception.example.com', '4.3.2.1', identifier=('baz', 'us-east-1')) self.zone.add_a('exception.example.com', '8.7.6.5', identifier=('bam', 'us-west-1')) with self.assertRaises(TooManyRecordsException): lbrs = self.zone.get_a('exception.example.com') self.zone.delete_a('exception.example.com', all=True) @classmethod def tearDownClass(self): self.zone.delete_a('example.com') self.zone.delete_cname('www.example.com') self.zone.delete_mx('example.com') self.zone.delete() if __name__ == '__main__': unittest.main(verbosity=3) boto-2.20.1/tests/integration/s3/000077500000000000000000000000001225267101000165455ustar00rootroot00000000000000boto-2.20.1/tests/integration/s3/__init__.py000066400000000000000000000021201225267101000206510ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/s3/mock_storage_service.py000066400000000000000000000556061225267101000233300ustar00rootroot00000000000000# Copyright 2010 Google Inc. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Provides basic mocks of core storage service classes, for unit testing: ACL, Key, Bucket, Connection, and StorageUri. We implement a subset of the interfaces defined in the real boto classes, but don't handle most of the optional params (which we indicate with the constant "NOT_IMPL"). """ import copy import boto import base64 import re from boto.utils import compute_md5 from boto.utils import find_matching_headers from boto.utils import merge_headers_by_name from boto.s3.prefix import Prefix try: from hashlib import md5 except ImportError: from md5 import md5 NOT_IMPL = None class MockAcl(object): def __init__(self, parent=NOT_IMPL): pass def startElement(self, name, attrs, connection): pass def endElement(self, name, value, connection): pass def to_xml(self): return '' class MockKey(object): def __init__(self, bucket=None, name=None): self.bucket = bucket self.name = name self.data = None self.etag = None self.size = None self.closed = True self.content_encoding = None self.content_language = None self.content_type = None self.last_modified = 'Wed, 06 Oct 2010 05:11:54 GMT' self.BufferSize = 8192 def __repr__(self): if self.bucket: return '<MockKey: %s,%s>' % (self.bucket.name, self.name) else: return '<MockKey: %s>' % self.name def get_contents_as_string(self, headers=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, torrent=NOT_IMPL, version_id=NOT_IMPL): return self.data def get_contents_to_file(self, fp, headers=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, torrent=NOT_IMPL, version_id=NOT_IMPL, res_download_handler=NOT_IMPL): fp.write(self.data) def get_file(self, fp, headers=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, torrent=NOT_IMPL, version_id=NOT_IMPL, override_num_retries=NOT_IMPL): fp.write(self.data) def _handle_headers(self, headers): if not headers: return if find_matching_headers('Content-Encoding', headers): self.content_encoding = merge_headers_by_name('Content-Encoding', headers) if find_matching_headers('Content-Type', headers): self.content_type = merge_headers_by_name('Content-Type', headers) if find_matching_headers('Content-Language', headers): self.content_language = merge_headers_by_name('Content-Language', headers) # Simplistic partial implementation for headers: Just supports range GETs # of flavor 'Range: bytes=xyz-'.
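# For example (an illustrative sketch, not part of the original mock):
#
#   key.open_read(headers={'Range': 'bytes=5-'})
#   key.read()    # returns key.data[5:]
#
# Bounded ranges ('bytes=0-4') and suffix ranges ('bytes=-5') do not match
# the regex below, so they fall through and reads start from position 0.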
def open_read(self, headers=None, query_args=NOT_IMPL, override_num_retries=NOT_IMPL): if self.closed: self.read_pos = 0 self.closed = False if headers and 'Range' in headers: match = re.match('bytes=([0-9]+)-$', headers['Range']) if match: self.read_pos = int(match.group(1)) def close(self, fast=NOT_IMPL): self.closed = True def read(self, size=0): self.open_read() if size == 0: data = self.data[self.read_pos:] self.read_pos = self.size else: data = self.data[self.read_pos:self.read_pos+size] self.read_pos += size if not data: self.close() return data def set_contents_from_file(self, fp, headers=None, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, md5=NOT_IMPL, res_upload_handler=NOT_IMPL): self.data = fp.read() self.set_etag() self.size = len(self.data) self._handle_headers(headers) def set_contents_from_stream(self, fp, headers=None, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, reduced_redundancy=NOT_IMPL, query_args=NOT_IMPL, size=NOT_IMPL): self.data = '' chunk = fp.read(self.BufferSize) while chunk: self.data += chunk chunk = fp.read(self.BufferSize) self.set_etag() self.size = len(self.data) self._handle_headers(headers) def set_contents_from_string(self, s, headers=NOT_IMPL, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, md5=NOT_IMPL, reduced_redundancy=NOT_IMPL): self.data = copy.copy(s) self.set_etag() self.size = len(s) self._handle_headers(headers) def set_contents_from_filename(self, filename, headers=None, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, md5=NOT_IMPL, res_upload_handler=NOT_IMPL): fp = open(filename, 'rb') self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, res_upload_handler) fp.close() def copy(self, dst_bucket_name, dst_key, metadata=NOT_IMPL, reduced_redundancy=NOT_IMPL, preserve_acl=NOT_IMPL): dst_bucket = self.bucket.connection.get_bucket(dst_bucket_name) return dst_bucket.copy_key(dst_key, self.bucket.name, self.name, metadata) @property def provider(self): provider = None if self.bucket and self.bucket.connection: provider = self.bucket.connection.provider return provider def set_etag(self): """ Set etag attribute by generating hex MD5 checksum on current contents of mock key. """ m = md5() m.update(self.data) hex_md5 = m.hexdigest() self.etag = hex_md5 def compute_md5(self, fp): """ :type fp: file :param fp: File pointer to the file to MD5 hash. The file pointer will be reset to the beginning of the file before the method returns. :rtype: tuple :return: A tuple containing the hex digest version of the MD5 hash as the first element and the base64 encoded version of the plain digest as the second element. """ tup = compute_md5(fp) # Returned values are MD5 hash, base64 encoded MD5 hash, and file size. # The internal implementation of compute_md5() needs to return the # file size but we don't want to return that value to the external # caller because it changes the class interface (i.e. it might # break some code) so we consume the third tuple value here and # return the remainder of the tuple to the caller, thereby preserving # the existing interface. 
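# Concretely (illustrative): for a 5-byte payload, compute_md5(fp) returns
# (hex_md5, base64_md5, 5); the trailing 5 is stored on self.size just
# below, and the caller receives only the (hex_md5, base64_md5) pair.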
self.size = tup[2] return tup[0:2] class MockBucket(object): def __init__(self, connection=None, name=None, key_class=NOT_IMPL): self.name = name self.keys = {} self.acls = {name: MockAcl()} # default object ACLs are one per bucket and not supported for keys self.def_acl = MockAcl() self.subresources = {} self.connection = connection self.logging = False def __repr__(self): return 'MockBucket: %s' % self.name def copy_key(self, new_key_name, src_bucket_name, src_key_name, metadata=NOT_IMPL, src_version_id=NOT_IMPL, storage_class=NOT_IMPL, preserve_acl=NOT_IMPL, encrypt_key=NOT_IMPL, headers=NOT_IMPL, query_args=NOT_IMPL): new_key = self.new_key(key_name=new_key_name) src_key = self.connection.get_bucket( src_bucket_name).get_key(src_key_name) new_key.data = copy.copy(src_key.data) new_key.size = len(new_key.data) return new_key def disable_logging(self): self.logging = False def enable_logging(self, target_bucket_prefix): self.logging = True def get_logging_config(self): return {"Logging": {}} def get_versioning_status(self, headers=NOT_IMPL): return False def get_acl(self, key_name='', headers=NOT_IMPL, version_id=NOT_IMPL): if key_name: # Return ACL for the key. return self.acls[key_name] else: # Return ACL for the bucket. return self.acls[self.name] def get_def_acl(self, key_name=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): # Return default ACL for the bucket. return self.def_acl def get_subresource(self, subresource, key_name=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): if subresource in self.subresources: return self.subresources[subresource] else: return '' def new_key(self, key_name=None): mock_key = MockKey(self, key_name) self.keys[key_name] = mock_key self.acls[key_name] = MockAcl() return mock_key def delete_key(self, key_name, headers=NOT_IMPL, version_id=NOT_IMPL, mfa_token=NOT_IMPL): if key_name not in self.keys: raise boto.exception.StorageResponseError(404, 'Not Found') del self.keys[key_name] def get_all_keys(self, headers=NOT_IMPL): return self.keys.itervalues() def get_key(self, key_name, headers=NOT_IMPL, version_id=NOT_IMPL): # Emulate behavior of boto when get_key called with non-existent key. if key_name not in self.keys: return None return self.keys[key_name] def list(self, prefix='', delimiter='', marker=NOT_IMPL, headers=NOT_IMPL): prefix = prefix or '' # Turn None into '' for prefix match. # Return list instead of using a generator so we don't get # 'dictionary changed size during iteration' error when performing # deletions while iterating (e.g., during test cleanup). result = [] key_name_set = set() for k in self.keys.itervalues(): if k.name.startswith(prefix): k_name_past_prefix = k.name[len(prefix):] if delimiter: pos = k_name_past_prefix.find(delimiter) else: pos = -1 if (pos != -1): key_or_prefix = Prefix( bucket=self, name=k.name[:len(prefix)+pos+1]) else: key_or_prefix = MockKey(bucket=self, name=k.name) if key_or_prefix.name not in key_name_set: key_name_set.add(key_or_prefix.name) result.append(key_or_prefix) return result def set_acl(self, acl_or_str, key_name='', headers=NOT_IMPL, version_id=NOT_IMPL): # We only handle setting ACL XML here; if you pass a canned ACL # the get_acl call will just return that string name. if key_name: # Set ACL for the key. self.acls[key_name] = MockAcl(acl_or_str) else: # Set ACL for the bucket. 
self.acls[self.name] = MockAcl(acl_or_str) def set_def_acl(self, acl_or_str, key_name=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): # We only handle setting ACL XML here; if you pass a canned ACL # the get_acl call will just return that string name. # Set default ACL for the bucket. self.def_acl = acl_or_str def set_subresource(self, subresource, value, key_name=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): self.subresources[subresource] = value class MockProvider(object): def __init__(self, provider): self.provider = provider def get_provider_name(self): return self.provider class MockConnection(object): def __init__(self, aws_access_key_id=NOT_IMPL, aws_secret_access_key=NOT_IMPL, is_secure=NOT_IMPL, port=NOT_IMPL, proxy=NOT_IMPL, proxy_port=NOT_IMPL, proxy_user=NOT_IMPL, proxy_pass=NOT_IMPL, host=NOT_IMPL, debug=NOT_IMPL, https_connection_factory=NOT_IMPL, calling_format=NOT_IMPL, path=NOT_IMPL, provider='s3', bucket_class=NOT_IMPL): self.buckets = {} self.provider = MockProvider(provider) def create_bucket(self, bucket_name, headers=NOT_IMPL, location=NOT_IMPL, policy=NOT_IMPL, storage_class=NOT_IMPL): if bucket_name in self.buckets: raise boto.exception.StorageCreateError( 409, 'BucketAlreadyOwnedByYou', "Your previous request to create the named bucket " "succeeded and you already own it.") mock_bucket = MockBucket(name=bucket_name, connection=self) self.buckets[bucket_name] = mock_bucket return mock_bucket def delete_bucket(self, bucket, headers=NOT_IMPL): if bucket not in self.buckets: raise boto.exception.StorageResponseError( 404, 'NoSuchBucket', 'no such bucket') del self.buckets[bucket] def get_bucket(self, bucket_name, validate=NOT_IMPL, headers=NOT_IMPL): if bucket_name not in self.buckets: raise boto.exception.StorageResponseError(404, 'NoSuchBucket', 'Not Found') return self.buckets[bucket_name] def get_all_buckets(self, headers=NOT_IMPL): return self.buckets.itervalues() # We only mock a single provider/connection. 
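# The assignment just below creates that single shared connection. The
# sketch here (not part of the original mocks; names are examples only)
# shows how the classes in this module compose through it.
def _example_mock_roundtrip():
    """Store and fetch an object purely through the mocks in this module."""
    uri = MockBucketStorageUri('gs', bucket_name='example-bucket')
    # Registers the bucket on the shared module-level mock_connection.
    uri.create_bucket()
    key_uri = uri.clone_replace_name('hello.txt')
    key_uri.new_key().set_contents_from_string('hello world')
    # Reads back through the same shared connection.
    return key_uri.get_key().get_contents_as_string()  # -> 'hello world'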
mock_connection = MockConnection() class MockBucketStorageUri(object): delim = '/' def __init__(self, scheme, bucket_name=None, object_name=None, debug=NOT_IMPL, suppress_consec_slashes=NOT_IMPL, version_id=None, generation=None, is_latest=False): self.scheme = scheme self.bucket_name = bucket_name self.object_name = object_name self.suppress_consec_slashes = suppress_consec_slashes if self.bucket_name and self.object_name: self.uri = ('%s://%s/%s' % (self.scheme, self.bucket_name, self.object_name)) elif self.bucket_name: self.uri = ('%s://%s/' % (self.scheme, self.bucket_name)) else: self.uri = ('%s://' % self.scheme) self.version_id = version_id self.generation = generation and int(generation) self.is_version_specific = (bool(self.generation) or bool(self.version_id)) self.is_latest = is_latest if bucket_name and object_name: self.versionless_uri = '%s://%s/%s' % (scheme, bucket_name, object_name) def __repr__(self): """Returns string representation of URI.""" return self.uri def acl_class(self): return MockAcl def canned_acls(self): return boto.provider.Provider('aws').canned_acls def clone_replace_name(self, new_name): return self.__class__(self.scheme, self.bucket_name, new_name) def clone_replace_key(self, key): return self.__class__( key.provider.get_provider_name(), bucket_name=key.bucket.name, object_name=key.name, suppress_consec_slashes=self.suppress_consec_slashes, version_id=getattr(key, 'version_id', None), generation=getattr(key, 'generation', None), is_latest=getattr(key, 'is_latest', None)) def connect(self, access_key_id=NOT_IMPL, secret_access_key=NOT_IMPL): return mock_connection def create_bucket(self, headers=NOT_IMPL, location=NOT_IMPL, policy=NOT_IMPL, storage_class=NOT_IMPL): return self.connect().create_bucket(self.bucket_name) def delete_bucket(self, headers=NOT_IMPL): return self.connect().delete_bucket(self.bucket_name) def get_versioning_config(self, headers=NOT_IMPL): self.get_bucket().get_versioning_status(headers) def has_version(self): return (issubclass(type(self), MockBucketStorageUri) and ((self.version_id is not None) or (self.generation is not None))) def delete_key(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL, mfa_token=NOT_IMPL): self.get_bucket().delete_key(self.object_name) def disable_logging(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): self.get_bucket().disable_logging() def enable_logging(self, target_bucket, target_prefix, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): self.get_bucket().enable_logging(target_bucket) def get_logging_config(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): return self.get_bucket().get_logging_config() def equals(self, uri): return self.uri == uri.uri def get_acl(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): return self.get_bucket().get_acl(self.object_name) def get_def_acl(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): return self.get_bucket().get_def_acl(self.object_name) def get_subresource(self, subresource, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): return self.get_bucket().get_subresource(subresource, self.object_name) def get_all_buckets(self, headers=NOT_IMPL): return self.connect().get_all_buckets() def get_all_keys(self, validate=NOT_IMPL, headers=NOT_IMPL): return self.get_bucket().get_all_keys(self) def list_bucket(self, prefix='', delimiter='', headers=NOT_IMPL, all_versions=NOT_IMPL): return self.get_bucket().list(prefix=prefix, delimiter=delimiter) def get_bucket(self, 
validate=NOT_IMPL, headers=NOT_IMPL): return self.connect().get_bucket(self.bucket_name) def get_key(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): return self.get_bucket().get_key(self.object_name) def is_file_uri(self): return False def is_cloud_uri(self): return True def names_container(self): return bool(not self.object_name) def names_singleton(self): return bool(self.object_name) def names_directory(self): return False def names_provider(self): return bool(not self.bucket_name) def names_bucket(self): return self.names_container() def names_file(self): return False def names_object(self): return not self.names_container() def is_stream(self): return False def new_key(self, validate=NOT_IMPL, headers=NOT_IMPL): bucket = self.get_bucket() return bucket.new_key(self.object_name) def set_acl(self, acl_or_str, key_name='', validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): self.get_bucket().set_acl(acl_or_str, key_name) def set_def_acl(self, acl_or_str, key_name=NOT_IMPL, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): self.get_bucket().set_def_acl(acl_or_str) def set_subresource(self, subresource, value, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL): self.get_bucket().set_subresource(subresource, value, self.object_name) def copy_key(self, src_bucket_name, src_key_name, metadata=NOT_IMPL, src_version_id=NOT_IMPL, storage_class=NOT_IMPL, preserve_acl=NOT_IMPL, encrypt_key=NOT_IMPL, headers=NOT_IMPL, query_args=NOT_IMPL, src_generation=NOT_IMPL): dst_bucket = self.get_bucket() return dst_bucket.copy_key(new_key_name=self.object_name, src_bucket_name=src_bucket_name, src_key_name=src_key_name) def set_contents_from_string(self, s, headers=NOT_IMPL, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, md5=NOT_IMPL, reduced_redundancy=NOT_IMPL): key = self.new_key() key.set_contents_from_string(s) def set_contents_from_file(self, fp, headers=None, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, md5=NOT_IMPL, size=NOT_IMPL, rewind=NOT_IMPL, res_upload_handler=NOT_IMPL): key = self.new_key() return key.set_contents_from_file(fp, headers=headers) def set_contents_from_stream(self, fp, headers=NOT_IMPL, replace=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL, reduced_redundancy=NOT_IMPL, query_args=NOT_IMPL, size=NOT_IMPL): dst_key = self.new_key() dst_key.set_contents_from_stream(fp) def get_contents_to_file(self, fp, headers=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, torrent=NOT_IMPL, version_id=NOT_IMPL, res_download_handler=NOT_IMPL, response_headers=NOT_IMPL): key = self.get_key() key.get_contents_to_file(fp) def get_contents_to_stream(self, fp, headers=NOT_IMPL, cb=NOT_IMPL, num_cb=NOT_IMPL, version_id=NOT_IMPL): key = self.get_key() return key.get_contents_to_file(fp) boto-2.20.1/tests/integration/s3/other_cacerts.txt000066400000000000000000000067251225267101000221430ustar00rootroot00000000000000# Certificate Authority certificates for validating SSL connections. # # This file contains PEM format certificates generated from # http://mxr.mozilla.org/seamonkey/source/security/nss/lib/ckfw/builtins/certdata.txt # # ***** BEGIN LICENSE BLOCK ***** # Version: MPL 1.1/GPL 2.0/LGPL 2.1 # # The contents of this file are subject to the Mozilla Public License Version # 1.1 (the "License"); you may not use this file except in compliance with # the License.
You may obtain a copy of the License at # http://www.mozilla.org/MPL/ # # Software distributed under the License is distributed on an "AS IS" basis, # WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License # for the specific language governing rights and limitations under the # License. # # The Original Code is the Netscape security libraries. # # The Initial Developer of the Original Code is # Netscape Communications Corporation. # Portions created by the Initial Developer are Copyright (C) 1994-2000 # the Initial Developer. All Rights Reserved. # # Contributor(s): # # Alternatively, the contents of this file may be used under the terms of # either the GNU General Public License Version 2 or later (the "GPL"), or # the GNU Lesser General Public License Version 2.1 or later (the "LGPL"), # in which case the provisions of the GPL or the LGPL are applicable instead # of those above. If you wish to allow use of your version of this file only # under the terms of either the GPL or the LGPL, and not to allow others to # use your version of this file under the terms of the MPL, indicate your # decision by deleting the provisions above and replace them with the notice # and other provisions required by the GPL or the LGPL. If you do not delete # the provisions above, a recipient may use your version of this file under # the terms of any one of the MPL, the GPL or the LGPL. # # ***** END LICENSE BLOCK ***** Comodo CA Limited, CN=Trusted Certificate Services ================================================== -----BEGIN CERTIFICATE----- MIIEQzCCAyugAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJHQjEb MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDElMCMGA1UEAwwcVHJ1c3RlZCBDZXJ0 aWZpY2F0ZSBTZXJ2aWNlczAeFw0wNDAxMDEwMDAwMDBaFw0yODEyMzEyMzU5NTla MH8xCzAJBgNVBAYTAkdCMRswGQYDVQQIDBJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO BgNVBAcMB1NhbGZvcmQxGjAYBgNVBAoMEUNvbW9kbyBDQSBMaW1pdGVkMSUwIwYD VQQDDBxUcnVzdGVkIENlcnRpZmljYXRlIFNlcnZpY2VzMIIBIjANBgkqhkiG9w0B AQEFAAOCAQ8AMIIBCgKCAQEA33FvNlhTWvI2VFeAxHQIIO0Yfyod5jWaHiWsnOWW fnJSoBVC21ndZHoa0Lh73TkVvFVIxO06AOoxEbrycXQaZ7jPM8yoMa+j49d/vzMt TGo87IvDktJTdyR0nAducPy9C1t2ul/y/9c3S0pgePfw+spwtOpZqqPOSC+pw7IL fhdyFgymBwwbOM/JYrc/oJOlh0Hyt3BAd9i+FHzjqMB6juljatEPmsbS9Is6FARW 1O24zG71++IsWL1/T2sr92AkWCTOJu80kTrV44HQsvAEAtdbtz6SrGsSivnkBbA7 kUlcsutT6vifR4buv5XAwAaf0lteERv0xwQ1KdJVXOTt6wIDAQABo4HJMIHGMB0G A1UdDgQWBBTFe1i97doladL3WRaoszLAeydb9DAOBgNVHQ8BAf8EBAMCAQYwDwYD VR0TAQH/BAUwAwEB/zCBgwYDVR0fBHwwejA8oDqgOIY2aHR0cDovL2NybC5jb21v ZG9jYS5jb20vVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMuY3JsMDqgOKA2hjRo dHRwOi8vY3JsLmNvbW9kby5uZXQvVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMu Y3JsMA0GCSqGSIb3DQEBBQUAA4IBAQDIk4E7ibSvuIQSTI3S8NtwuleGFTQQuS9/ HrCoiWChisJ3DFBKmwCL2Iv0QeLQg4pKHBQGsKNoBXAxMKdTmw7pSqBYaWcOrp32 pSxBvzwGa+RZzG0Q8ZZvH9/0BAKkn0U+yNj6NkZEUD+Cl5EfKNsYEYwq5GWDVxIS jBc/lDb+XbDABHcTuPQV1T84zJQ6VdCsmPW6AF/ghhmBeC8owH7TzEIK9a5QoNE+ xqFx7D+gIIxmOom0jtTYsU0lR+4viMi14QVFwL4Ucd56/Y57fU0IlqUSc/Atyjcn dBInTMu2l+nZrghtWjlA3QVHdWpaIbOjGM9O9y5Xt5hwXsjEeLBi -----END CERTIFICATE----- boto-2.20.1/tests/integration/s3/test_bucket.py000066400000000000000000000252411225267101000214370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for the S3 Bucket """ import unittest import time from boto.exception import S3ResponseError from boto.s3.connection import S3Connection from boto.s3.bucketlogging import BucketLogging from boto.s3.lifecycle import Lifecycle from boto.s3.lifecycle import Transition from boto.s3.lifecycle import Rule from boto.s3.acl import Grant from boto.s3.tagging import Tags, TagSet from boto.s3.lifecycle import Lifecycle, Expiration, Transition from boto.s3.website import RedirectLocation class S3BucketTest (unittest.TestCase): s3 = True def setUp(self): self.conn = S3Connection() self.bucket_name = 'bucket-%d' % int(time.time()) self.bucket = self.conn.create_bucket(self.bucket_name) def tearDown(self): for key in self.bucket: key.delete() self.bucket.delete() def test_next_marker(self): expected = ["a/", "b", "c"] for key_name in expected: key = self.bucket.new_key(key_name) key.set_contents_from_string(key_name) # Normal list of first 2 keys will have # no NextMarker set, so we use last key to iterate # last element will be "b" so no issue. rs = self.bucket.get_all_keys(max_keys=2) for element in rs: pass self.assertEqual(element.name, "b") self.assertEqual(rs.next_marker, None) # list using delimiter of first 2 keys will have # a NextMarker set (when truncated). As prefixes # are grouped together at the end, we get "a/" as # last element, but luckily we have next_marker. rs = self.bucket.get_all_keys(max_keys=2, delimiter="/") for element in rs: pass self.assertEqual(element.name, "a/") self.assertEqual(rs.next_marker, "b") # ensure bucket.list() still works by just # popping elements off the front of expected. rs = self.bucket.list() for element in rs: self.assertEqual(element.name, expected.pop(0)) self.assertEqual(expected, []) def test_logging(self): # use self.bucket as the target bucket so that teardown # will delete any log files that make it into the bucket # automatically and all we have to do is delete the # source bucket. 
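# Background note (illustrative, not from the original test): S3 server
# access logging requires the LogDelivery group to hold WRITE and READ_ACP
# on the target bucket; the canned ACL 'log-delivery-write' applied below
# grants exactly those permissions.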
sb_name = "src-" + self.bucket_name sb = self.conn.create_bucket(sb_name) # grant log write perms to target bucket using canned-acl self.bucket.set_acl("log-delivery-write") target_bucket = self.bucket_name target_prefix = u"jp/ログ/" # Check existing status is disabled bls = sb.get_logging_status() self.assertEqual(bls.target, None) # Create a logging status and grant auth users READ PERM authuri = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" authr = Grant(permission="READ", type="Group", uri=authuri) sb.enable_logging(target_bucket, target_prefix=target_prefix, grants=[authr]) # Check the status and confirm its set. bls = sb.get_logging_status() self.assertEqual(bls.target, target_bucket) self.assertEqual(bls.prefix, target_prefix) self.assertEqual(len(bls.grants), 1) self.assertEqual(bls.grants[0].type, "Group") self.assertEqual(bls.grants[0].uri, authuri) # finally delete the src bucket sb.delete() def test_tagging(self): tagging = """ tagkey tagvalue """ self.bucket.set_xml_tags(tagging) response = self.bucket.get_tags() self.assertEqual(response[0][0].key, 'tagkey') self.assertEqual(response[0][0].value, 'tagvalue') self.bucket.delete_tags() try: self.bucket.get_tags() except S3ResponseError, e: self.assertEqual(e.code, 'NoSuchTagSet') except Exception, e: self.fail("Wrong exception raised (expected S3ResponseError): %s" % e) else: self.fail("Expected S3ResponseError, but no exception raised.") def test_tagging_from_objects(self): """Create tags from python objects rather than raw xml.""" t = Tags() tag_set = TagSet() tag_set.add_tag('akey', 'avalue') tag_set.add_tag('anotherkey', 'anothervalue') t.add_tag_set(tag_set) self.bucket.set_tags(t) response = self.bucket.get_tags() self.assertEqual(response[0][0].key, 'akey') self.assertEqual(response[0][0].value, 'avalue') self.assertEqual(response[0][1].key, 'anotherkey') self.assertEqual(response[0][1].value, 'anothervalue') def test_website_configuration(self): response = self.bucket.configure_website('index.html') self.assertTrue(response) config = self.bucket.get_website_configuration() self.assertEqual(config, {'WebsiteConfiguration': {'IndexDocument': {'Suffix': 'index.html'}}}) config2, xml = self.bucket.get_website_configuration_with_xml() self.assertEqual(config, config2) self.assertTrue('index.html' in xml, xml) def test_website_redirect_all_requests(self): response = self.bucket.configure_website( redirect_all_requests_to=RedirectLocation('example.com')) config = self.bucket.get_website_configuration() self.assertEqual(config, { 'WebsiteConfiguration': { 'RedirectAllRequestsTo': { 'HostName': 'example.com'}}}) # Can configure the protocol as well. 
response = self.bucket.configure_website( redirect_all_requests_to=RedirectLocation('example.com', 'https')) config = self.bucket.get_website_configuration() self.assertEqual(config, { 'WebsiteConfiguration': {'RedirectAllRequestsTo': { 'HostName': 'example.com', 'Protocol': 'https', }}} ) def test_lifecycle(self): lifecycle = Lifecycle() lifecycle.add_rule('myid', '', 'Enabled', 30) self.assertTrue(self.bucket.configure_lifecycle(lifecycle)) response = self.bucket.get_lifecycle_config() self.assertEqual(len(response), 1) actual_lifecycle = response[0] self.assertEqual(actual_lifecycle.id, 'myid') self.assertEqual(actual_lifecycle.prefix, '') self.assertEqual(actual_lifecycle.status, 'Enabled') self.assertEqual(actual_lifecycle.transition, None) def test_lifecycle_with_glacier_transition(self): lifecycle = Lifecycle() transition = Transition(days=30, storage_class='GLACIER') rule = Rule('myid', prefix='', status='Enabled', expiration=None, transition=transition) lifecycle.append(rule) self.assertTrue(self.bucket.configure_lifecycle(lifecycle)) response = self.bucket.get_lifecycle_config() transition = response[0].transition self.assertEqual(transition.days, 30) self.assertEqual(transition.storage_class, 'GLACIER') self.assertEqual(transition.date, None) def test_lifecycle_multi(self): date = '2022-10-12T00:00:00.000Z' sc = 'GLACIER' lifecycle = Lifecycle() lifecycle.add_rule("1", "1/", "Enabled", 1) lifecycle.add_rule("2", "2/", "Enabled", Expiration(days=2)) lifecycle.add_rule("3", "3/", "Enabled", Expiration(date=date)) lifecycle.add_rule("4", "4/", "Enabled", None, Transition(days=4, storage_class=sc)) lifecycle.add_rule("5", "5/", "Enabled", None, Transition(date=date, storage_class=sc)) # set the lifecycle self.bucket.configure_lifecycle(lifecycle) # read the lifecycle back readlifecycle = self.bucket.get_lifecycle_config(); for rule in readlifecycle: if rule.id == "1": self.assertEqual(rule.prefix, "1/") self.assertEqual(rule.expiration.days, 1) elif rule.id == "2": self.assertEqual(rule.prefix, "2/") self.assertEqual(rule.expiration.days, 2) elif rule.id == "3": self.assertEqual(rule.prefix, "3/") self.assertEqual(rule.expiration.date, date) elif rule.id == "4": self.assertEqual(rule.prefix, "4/") self.assertEqual(rule.transition.days, 4) self.assertEqual(rule.transition.storage_class, sc) elif rule.id == "5": self.assertEqual(rule.prefix, "5/") self.assertEqual(rule.transition.date, date) self.assertEqual(rule.transition.storage_class, sc) else: self.fail("unexpected id %s" % rule.id) def test_lifecycle_jp(self): # test lifecycle with Japanese prefix name = "Japanese files" prefix = u"日本語/" days = 30 lifecycle = Lifecycle() lifecycle.add_rule(name, prefix, "Enabled", days) # set the lifecycle self.bucket.configure_lifecycle(lifecycle) # read the lifecycle back readlifecycle = self.bucket.get_lifecycle_config(); for rule in readlifecycle: self.assertEqual(rule.id, name) self.assertEqual(rule.expiration.days, days) #Note: Boto seems correct? AWS seems broken? #self.assertEqual(rule.prefix, prefix) boto-2.20.1/tests/integration/s3/test_cert_verification.py000066400000000000000000000027741225267101000236670ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on S3 endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.s3 class S3CertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): s3 = True regions = boto.s3.regions() def sample_service_call(self, conn): conn.get_all_buckets() boto-2.20.1/tests/integration/s3/test_connection.py000066400000000000000000000231331225267101000223170ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE.
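# Note (illustrative, not from the original file): k.generate_url(3600), as
# exercised below, yields a query-string-authenticated URL of roughly the
# form
#
#   https://<bucket>.s3.amazonaws.com/<key>?AWSAccessKeyId=...&Expires=...&Signature=...
#
# which is why these tests can fetch objects with plain urllib, with no boto
# auth machinery involved.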
""" Some unit tests for the S3Connection """ import unittest import time import os import urllib import urlparse import httplib from boto.s3.connection import S3Connection from boto.s3.bucket import Bucket from boto.exception import S3PermissionsError, S3ResponseError class S3ConnectionTest (unittest.TestCase): s3 = True def test_1_basic(self): print '--- running S3Connection tests ---' c = S3Connection() # create a new, empty bucket bucket_name = 'test-%d' % int(time.time()) bucket = c.create_bucket(bucket_name) # now try a get_bucket call and see if it's really there bucket = c.get_bucket(bucket_name) # test logging logging_bucket = c.create_bucket(bucket_name + '-log') logging_bucket.set_as_logging_target() bucket.enable_logging(target_bucket=logging_bucket, target_prefix=bucket.name) bucket.disable_logging() c.delete_bucket(logging_bucket) k = bucket.new_key('foobar') s1 = 'This is a test of file upload and download' s2 = 'This is a second string to test file upload and download' k.set_contents_from_string(s1) fp = open('foobar', 'wb') # now get the contents from s3 to a local file k.get_contents_to_file(fp) fp.close() fp = open('foobar') # check to make sure content read from s3 is identical to original assert s1 == fp.read(), 'corrupted file' fp.close() # test generated URLs url = k.generate_url(3600) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url url = k.generate_url(3600, force_http=True) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url url = k.generate_url(3600, force_http=True, headers={'x-amz-x-token' : 'XYZ'}) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url rh = {'response-content-disposition': 'attachment; filename="foo.txt"'} url = k.generate_url(60, response_headers=rh) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url #test whether amperands and to-be-escaped characters work in header filename rh = {'response-content-disposition': 'attachment; filename="foo&z%20ar&ar&zar&bar.txt"'} url = k.generate_url(60, response_headers=rh, force_http=True) file = urllib.urlopen(url) assert s1 == file.read(), 'invalid URL %s' % url # overwrite foobar contents with a PUT url = k.generate_url(3600, 'PUT', force_http=True, policy='private', reduced_redundancy=True) up = urlparse.urlsplit(url) con = httplib.HTTPConnection(up.hostname, up.port) con.request("PUT", up.path + '?' 
+ up.query, body="hello there") resp = con.getresponse() assert 200 == resp.status assert "hello there" == k.get_contents_as_string() bucket.delete_key(k) # test a few variations on get_all_keys - first load some data # for the first one, let's override the content type phony_mimetype = 'application/x-boto-test' headers = {'Content-Type': phony_mimetype} k.name = 'foo/bar' k.set_contents_from_string(s1, headers) k.name = 'foo/bas' size = k.set_contents_from_filename('foobar') assert size == 42 k.name = 'foo/bat' k.set_contents_from_string(s1) k.name = 'fie/bar' k.set_contents_from_string(s1) k.name = 'fie/bas' k.set_contents_from_string(s1) k.name = 'fie/bat' k.set_contents_from_string(s1) # try resetting the contents to another value md5 = k.md5 k.set_contents_from_string(s2) assert k.md5 != md5 os.unlink('foobar') all = bucket.get_all_keys() assert len(all) == 6 rs = bucket.get_all_keys(prefix='foo') assert len(rs) == 3 rs = bucket.get_all_keys(prefix='', delimiter='/') assert len(rs) == 2 rs = bucket.get_all_keys(maxkeys=5) assert len(rs) == 5 # test the lookup method k = bucket.lookup('foo/bar') assert isinstance(k, bucket.key_class) assert k.content_type == phony_mimetype k = bucket.lookup('notthere') assert k == None # try some metadata stuff k = bucket.new_key('has_metadata') mdkey1 = 'meta1' mdval1 = 'This is the first metadata value' k.set_metadata(mdkey1, mdval1) mdkey2 = 'meta2' mdval2 = 'This is the second metadata value' k.set_metadata(mdkey2, mdval2) # try a unicode metadata value mdval3 = u'föö' mdkey3 = 'meta3' k.set_metadata(mdkey3, mdval3) k.set_contents_from_string(s1) k = bucket.lookup('has_metadata') assert k.get_metadata(mdkey1) == mdval1 assert k.get_metadata(mdkey2) == mdval2 assert k.get_metadata(mdkey3) == mdval3 k = bucket.new_key('has_metadata') k.get_contents_as_string() assert k.get_metadata(mdkey1) == mdval1 assert k.get_metadata(mdkey2) == mdval2 assert k.get_metadata(mdkey3) == mdval3 bucket.delete_key(k) # test list and iterator rs1 = bucket.list() num_iter = 0 for r in rs1: num_iter = num_iter + 1 rs = bucket.get_all_keys() num_keys = len(rs) assert num_iter == num_keys # try a key with a funny character k = bucket.new_key('testnewline\n') k.set_contents_from_string('This is a test') rs = bucket.get_all_keys() assert len(rs) == num_keys + 1 bucket.delete_key(k) rs = bucket.get_all_keys() assert len(rs) == num_keys # try some acl stuff bucket.set_acl('public-read') policy = bucket.get_acl() assert len(policy.acl.grants) == 2 bucket.set_acl('private') policy = bucket.get_acl() assert len(policy.acl.grants) == 1 k = bucket.lookup('foo/bar') k.set_acl('public-read') policy = k.get_acl() assert len(policy.acl.grants) == 2 k.set_acl('private') policy = k.get_acl() assert len(policy.acl.grants) == 1 # try the convenience methods for grants bucket.add_user_grant('FULL_CONTROL', 'c1e724fbfa0979a4448393c59a8c055011f739b6d102fb37a65f26414653cd67') try: bucket.add_email_grant('foobar', 'foo@bar.com') except S3PermissionsError: pass # now try to create an RRS key k = bucket.new_key('reduced_redundancy') k.set_contents_from_string('This key has reduced redundancy', reduced_redundancy=True) # now try to inject a response header data = k.get_contents_as_string(response_headers={'response-content-type' : 'foo/bar'}) assert k.content_type == 'foo/bar' # now delete all keys in bucket for k in bucket: if k.name == 'reduced_redundancy': assert k.storage_class == 'REDUCED_REDUNDANCY' bucket.delete_key(k) # now delete bucket time.sleep(5) c.delete_bucket(bucket) print '--- 
tests completed ---' def test_basic_anon(self): auth_con = S3Connection() # create a new, empty bucket bucket_name = 'test-%d' % int(time.time()) auth_bucket = auth_con.create_bucket(bucket_name) # try read the bucket anonymously anon_con = S3Connection(anon=True) anon_bucket = Bucket(anon_con, bucket_name) try: iter(anon_bucket.list()).next() self.fail("anon bucket list should fail") except S3ResponseError: pass # give bucket anon user access and anon read again auth_bucket.set_acl('public-read') try: iter(anon_bucket.list()).next() self.fail("not expecting contents") except S3ResponseError, e: self.fail("We should have public-read access, but received " "an error: %s" % e) except StopIteration: pass # cleanup auth_con.delete_bucket(auth_bucket) def test_error_code_populated(self): c = S3Connection() try: c.create_bucket('bad$bucket$name') except S3ResponseError, e: self.assertEqual(e.error_code, 'InvalidBucketName') else: self.fail("S3ResponseError not raised.") boto-2.20.1/tests/integration/s3/test_cors.py000066400000000000000000000060441225267101000211300ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
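# Note (illustrative, not from the original file): the CORSConfiguration
# assembled in the test below serializes to XML along these lines:
#
#   <CORSConfiguration>
#     <CORSRule>
#       <ID>foobar_rule</ID>
#       <AllowedOrigin>http://www.example.com</AllowedOrigin>
#       <AllowedMethod>PUT</AllowedMethod>
#       <AllowedMethod>POST</AllowedMethod>
#       <AllowedMethod>DELETE</AllowedMethod>
#       <AllowedHeader>*</AllowedHeader>
#       <MaxAgeSeconds>3000</MaxAgeSeconds>
#       <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
#     </CORSRule>
#   </CORSConfiguration>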
""" Some integration tests for S3 CORS """ import unittest import time from boto.s3.connection import S3Connection from boto.exception import S3ResponseError from boto.s3.cors import CORSConfiguration class S3CORSTest (unittest.TestCase): s3 = True def setUp(self): self.conn = S3Connection() self.bucket_name = 'cors-%d' % int(time.time()) self.bucket = self.conn.create_bucket(self.bucket_name) def tearDown(self): self.bucket.delete() def test_cors(self): self.cfg = CORSConfiguration() self.cfg.add_rule(['PUT', 'POST', 'DELETE'], 'http://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption', id='foobar_rule') assert self.bucket.set_cors(self.cfg) time.sleep(5) cfg = self.bucket.get_cors() for i, rule in enumerate(cfg): self.assertEqual(rule.id, self.cfg[i].id) self.assertEqual(rule.max_age_seconds, self.cfg[i].max_age_seconds) methods = zip(rule.allowed_method, self.cfg[i].allowed_method) for v1, v2 in methods: self.assertEqual(v1, v2) origins = zip(rule.allowed_origin, self.cfg[i].allowed_origin) for v1, v2 in origins: self.assertEqual(v1, v2) headers = zip(rule.allowed_header, self.cfg[i].allowed_header) for v1, v2 in headers: self.assertEqual(v1, v2) headers = zip(rule.expose_header, self.cfg[i].expose_header) for v1, v2 in headers: self.assertEqual(v1, v2) self.bucket.delete_cors() time.sleep(5) try: self.bucket.get_cors() self.fail('CORS configuration should not be there') except S3ResponseError: pass boto-2.20.1/tests/integration/s3/test_encryption.py000066400000000000000000000071671225267101000223630ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
""" Some unit tests for the S3 Encryption """ import unittest import time from boto.s3.connection import S3Connection from boto.exception import S3ResponseError json_policy = """{ "Version":"2008-10-17", "Id":"PutObjPolicy", "Statement":[{ "Sid":"DenyUnEncryptedObjectUploads", "Effect":"Deny", "Principal":{ "AWS":"*" }, "Action":"s3:PutObject", "Resource":"arn:aws:s3:::%s/*", "Condition":{ "StringNotEquals":{ "s3:x-amz-server-side-encryption":"AES256" } } } ] }""" class S3EncryptionTest (unittest.TestCase): s3 = True def test_1_versions(self): print '--- running S3Encryption tests ---' c = S3Connection() # create a new, empty bucket bucket_name = 'encryption-%d' % int(time.time()) bucket = c.create_bucket(bucket_name) # now try a get_bucket call and see if it's really there bucket = c.get_bucket(bucket_name) # create an unencrypted key k = bucket.new_key('foobar') s1 = 'This is unencrypted data' s2 = 'This is encrypted data' k.set_contents_from_string(s1) time.sleep(5) # now get the contents from s3 o = k.get_contents_as_string() # check to make sure content read from s3 is identical to original assert o == s1 # now overwrite that same key with encrypted data k.set_contents_from_string(s2, encrypt_key=True) time.sleep(5) # now retrieve the contents as a string and compare o = k.get_contents_as_string() assert o == s2 # now set bucket policy to require encrypted objects bucket.set_policy(json_policy % bucket.name) time.sleep(5) # now try to write unencrypted key write_failed = False try: k.set_contents_from_string(s1) except S3ResponseError: write_failed = True assert write_failed # now try to write unencrypted key write_failed = False try: k.set_contents_from_string(s1, encrypt_key=True) except S3ResponseError: write_failed = True assert not write_failed # Now do regular delete k.delete() time.sleep(5) # now delete bucket bucket.delete() print '--- tests completed ---' boto-2.20.1/tests/integration/s3/test_https_cert_validation.py000066400000000000000000000136071225267101000245560ustar00rootroot00000000000000# Copyright 2011 Google Inc. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests to validate correct validation of SSL server certificates. Note that this test assumes two external dependencies are available: - A http proxy, which by default is assumed to be at host 'cache' and port 3128. This can be overridden with environment variables PROXY_HOST and PROXY_PORT, respectively. 
- An ssl-enabled web server that will return a valid certificate signed by one of the bundled CAs, and which can be reached by an alternate hostname that does not match the CN in that certificate. By default, this test uses host 'www' (without fully qualified domain). This can be overridden with environment variable INVALID_HOSTNAME_HOST. If no suitable host is already available, such a mapping can be established by temporarily adding an IP address for, say, www.google.com or www.amazon.com to /etc/hosts. """ import os import ssl import unittest from nose.plugins.attrib import attr import boto from boto import exception, https_connection from boto.gs.connection import GSConnection from boto.s3.connection import S3Connection # File 'other_cacerts.txt' contains a valid CA certificate of a CA that is used # by neither S3 nor Google Cloud Storage. Validation against this CA cert should # result in a certificate error. DEFAULT_CA_CERTS_FILE = os.path.join( os.path.dirname(os.path.abspath(__file__ )), 'other_cacerts.txt') PROXY_HOST = os.environ.get('PROXY_HOST', 'cache') PROXY_PORT = os.environ.get('PROXY_PORT', '3128') # This test assumes that this host returns a certificate signed by one of the # trusted CAs, but with a Common Name that won't match host name 'www' (i.e., # the server should return a certificate with CN 'www..com'). INVALID_HOSTNAME_HOST = os.environ.get('INVALID_HOSTNAME_HOST', 'www') @attr('notdefault', 'ssl') class CertValidationTest(unittest.TestCase): def setUp(self): # Clear config for section in boto.config.sections(): boto.config.remove_section(section) # Enable https_validate_certificates. boto.config.add_section('Boto') boto.config.setbool('Boto', 'https_validate_certificates', True) # Set up bogus credentials so that the auth module is willing to go # ahead and make a request; the request should fail with a service-level # error if it does get to the service (S3 or GS). boto.config.add_section('Credentials') boto.config.set('Credentials', 'gs_access_key_id', 'xyz') boto.config.set('Credentials', 'gs_secret_access_key', 'xyz') boto.config.set('Credentials', 'aws_access_key_id', 'xyz') boto.config.set('Credentials', 'aws_secret_access_key', 'xyz') def enableProxy(self): boto.config.set('Boto', 'proxy', PROXY_HOST) boto.config.set('Boto', 'proxy_port', PROXY_PORT) def assertConnectionThrows(self, connection_class, error): conn = connection_class() self.assertRaises(error, conn.get_all_buckets) def do_test_valid_cert(self): # When connecting to actual servers with bundled root certificates, no # cert errors should be thrown; instead we will get "invalid # credentials" errors since the config used does not contain any # credentials. 
        self.assertConnectionThrows(S3Connection, exception.S3ResponseError)
        self.assertConnectionThrows(GSConnection, exception.GSResponseError)

    def test_valid_cert(self):
        self.do_test_valid_cert()

    def test_valid_cert_with_proxy(self):
        self.enableProxy()
        self.do_test_valid_cert()

    def do_test_invalid_signature(self):
        boto.config.set('Boto', 'ca_certificates_file', DEFAULT_CA_CERTS_FILE)
        self.assertConnectionThrows(S3Connection, ssl.SSLError)
        self.assertConnectionThrows(GSConnection, ssl.SSLError)

    def test_invalid_signature(self):
        self.do_test_invalid_signature()

    def test_invalid_signature_with_proxy(self):
        self.enableProxy()
        self.do_test_invalid_signature()

    def do_test_invalid_host(self):
        boto.config.set('Credentials', 'gs_host', INVALID_HOSTNAME_HOST)
        boto.config.set('Credentials', 's3_host', INVALID_HOSTNAME_HOST)
        self.assertConnectionThrows(
            S3Connection, https_connection.InvalidCertificateException)
        self.assertConnectionThrows(
            GSConnection, https_connection.InvalidCertificateException)

    def test_invalid_host(self):
        self.do_test_invalid_host()

    def test_invalid_host_with_proxy(self):
        self.enableProxy()
        self.do_test_invalid_host()
boto-2.20.1/tests/integration/s3/test_key.py000066400000000000000000000354321225267101000207550ustar00rootroot00000000000000# -*- coding: utf-8 -*-
# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
Some unit tests for S3 Key
"""

from tests.unit import unittest
import time
import StringIO
import urllib

from boto.s3.connection import S3Connection
from boto.s3.key import Key
from boto.exception import S3ResponseError


class S3KeyTest(unittest.TestCase):
    s3 = True

    def setUp(self):
        self.conn = S3Connection()
        self.bucket_name = 'keytest-%d' % int(time.time())
        self.bucket = self.conn.create_bucket(self.bucket_name)

    def tearDown(self):
        for key in self.bucket:
            key.delete()
        self.bucket.delete()

    def test_set_contents_from_file_dataloss(self):
        # Create an empty StringIO and write to it.
content = "abcde" sfp = StringIO.StringIO() sfp.write(content) # Try set_contents_from_file() without rewinding sfp k = self.bucket.new_key("k") try: k.set_contents_from_file(sfp) self.fail("forgot to rewind so should fail.") except AttributeError: pass # call with rewind and check if we wrote 5 bytes k.set_contents_from_file(sfp, rewind=True) self.assertEqual(k.size, 5) # check actual contents by getting it. kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, content) # finally, try with a 0 length string sfp = StringIO.StringIO() k = self.bucket.new_key("k") k.set_contents_from_file(sfp) self.assertEqual(k.size, 0) # check actual contents by getting it. kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, "") def test_set_contents_as_file(self): content="01234567890123456789" sfp = StringIO.StringIO(content) # fp is set at 0 for just opened (for read) files. # set_contents should write full content to key. k = self.bucket.new_key("k") k.set_contents_from_file(sfp) self.assertEqual(k.size, 20) kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, content) # set fp to 5 and set contents. this should # set "567890123456789" to the key sfp.seek(5) k = self.bucket.new_key("k") k.set_contents_from_file(sfp) self.assertEqual(k.size, 15) kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, content[5:]) # set fp to 5 and only set 5 bytes. this should # write the value "56789" to the key. sfp.seek(5) k = self.bucket.new_key("k") k.set_contents_from_file(sfp, size=5) self.assertEqual(k.size, 5) self.assertEqual(sfp.tell(), 10) kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, content[5:10]) def test_set_contents_with_md5(self): content="01234567890123456789" sfp = StringIO.StringIO(content) # fp is set at 0 for just opened (for read) files. # set_contents should write full content to key. k = self.bucket.new_key("k") good_md5 = k.compute_md5(sfp) k.set_contents_from_file(sfp, md5=good_md5) kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, content) # set fp to 5 and only set 5 bytes. this should # write the value "56789" to the key. sfp.seek(5) k = self.bucket.new_key("k") good_md5 = k.compute_md5(sfp, size=5) k.set_contents_from_file(sfp, size=5, md5=good_md5) self.assertEqual(sfp.tell(), 10) kn = self.bucket.new_key("k") ks = kn.get_contents_as_string() self.assertEqual(ks, content[5:10]) # let's try a wrong md5 by just altering it. 
k = self.bucket.new_key("k") sfp.seek(0) hexdig, base64 = k.compute_md5(sfp) bad_md5 = (hexdig, base64[3:]) try: k.set_contents_from_file(sfp, md5=bad_md5) self.fail("should fail with bad md5") except S3ResponseError: pass def test_get_contents_with_md5(self): content="01234567890123456789" sfp = StringIO.StringIO(content) k = self.bucket.new_key("k") k.set_contents_from_file(sfp) kn = self.bucket.new_key("k") s = kn.get_contents_as_string() self.assertEqual(kn.md5, k.md5) self.assertEqual(s, content) def test_file_callback(self): def callback(wrote, total): self.my_cb_cnt += 1 self.assertNotEqual(wrote, self.my_cb_last, "called twice with same value") self.my_cb_last = wrote # Zero bytes written => 1 call self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 sfp = StringIO.StringIO("") k.set_contents_from_file(sfp, cb=callback, num_cb=10) self.assertEqual(self.my_cb_cnt, 1) self.assertEqual(self.my_cb_last, 0) sfp.close() # Read back zero bytes => 1 call self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback) self.assertEqual(self.my_cb_cnt, 1) self.assertEqual(self.my_cb_last, 0) content="01234567890123456789" sfp = StringIO.StringIO(content) # expect 2 calls due start/finish self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.set_contents_from_file(sfp, cb=callback, num_cb=10) self.assertEqual(self.my_cb_cnt, 2) self.assertEqual(self.my_cb_last, 20) # Read back all bytes => 2 calls self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback) self.assertEqual(self.my_cb_cnt, 2) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # rewind sfp and try upload again. -1 should call # for every read/write so that should make 11 when bs=2 sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=-1) self.assertEqual(self.my_cb_cnt, 11) self.assertEqual(self.my_cb_last, 20) # Read back all bytes => 11 calls self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=-1) self.assertEqual(self.my_cb_cnt, 11) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 1 times => 2 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=1) self.assertTrue(self.my_cb_cnt <= 2) self.assertEqual(self.my_cb_last, 20) # no more than 1 times => 2 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=1) self.assertTrue(self.my_cb_cnt <= 2) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 2 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=2) self.assertTrue(self.my_cb_cnt <= 2) self.assertEqual(self.my_cb_last, 20) # no more than 2 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=2) self.assertTrue(self.my_cb_cnt <= 2) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 3 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=3) self.assertTrue(self.my_cb_cnt <= 3) self.assertEqual(self.my_cb_last, 20) # no 
more than 3 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=3) self.assertTrue(self.my_cb_cnt <= 3) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 4 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=4) self.assertTrue(self.my_cb_cnt <= 4) self.assertEqual(self.my_cb_last, 20) # no more than 4 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=4) self.assertTrue(self.my_cb_cnt <= 4) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 6 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=6) self.assertTrue(self.my_cb_cnt <= 6) self.assertEqual(self.my_cb_last, 20) # no more than 6 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=6) self.assertTrue(self.my_cb_cnt <= 6) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 10 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=10) self.assertTrue(self.my_cb_cnt <= 10) self.assertEqual(self.my_cb_last, 20) # no more than 10 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=10) self.assertTrue(self.my_cb_cnt <= 10) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) # no more than 1000 times # last time always 20 bytes sfp.seek(0) self.my_cb_cnt = 0 self.my_cb_last = None k = self.bucket.new_key("k") k.BufferSize = 2 k.set_contents_from_file(sfp, cb=callback, num_cb=1000) self.assertTrue(self.my_cb_cnt <= 1000) self.assertEqual(self.my_cb_last, 20) # no more than 1000 times self.my_cb_cnt = 0 self.my_cb_last = None s = k.get_contents_as_string(cb=callback, num_cb=1000) self.assertTrue(self.my_cb_cnt <= 1000) self.assertEqual(self.my_cb_last, 20) self.assertEqual(s, content) def test_website_redirects(self): self.bucket.configure_website('index.html') key = self.bucket.new_key('redirect-key') self.assertTrue(key.set_redirect('http://www.amazon.com/')) self.assertEqual(key.get_redirect(), 'http://www.amazon.com/') self.assertTrue(key.set_redirect('http://aws.amazon.com/')) self.assertEqual(key.get_redirect(), 'http://aws.amazon.com/') def test_website_redirect_none_configured(self): key = self.bucket.new_key('redirect-key') key.set_contents_from_string('') self.assertEqual(key.get_redirect(), None) def test_website_redirect_with_bad_value(self): self.bucket.configure_website('index.html') key = self.bucket.new_key('redirect-key') with self.assertRaises(key.provider.storage_response_error): # Must start with a / or http key.set_redirect('ftp://ftp.example.org') with self.assertRaises(key.provider.storage_response_error): # Must start with a / or http key.set_redirect('') def test_setting_date(self): key = self.bucket.new_key('test_date') # This should actually set x-amz-meta-date & not fail miserably. 
key.set_metadata('date', '20130524T155935Z') key.set_contents_from_string('Some text here.') check = self.bucket.get_key('test_date') self.assertEqual(check.get_metadata('date'), u'20130524T155935Z') self.assertTrue('x-amz-meta-date' in check._get_remote_metadata()) def test_header_casing(self): key = self.bucket.new_key('test_header_case') # Using anything but CamelCase on ``Content-Type`` or ``Content-MD5`` # used to cause a signature error (when using ``s3`` for signing). key.set_metadata('Content-type', 'application/json') key.set_metadata('Content-md5', 'XmUKnus7svY1frWsVskxXg==') key.set_contents_from_string('{"abc": 123}') check = self.bucket.get_key('test_header_case') self.assertEqual(check.content_type, 'application/json') def test_header_encoding(self): key = self.bucket.new_key('test_header_encoding') key.set_metadata('Cache-control', 'public, max-age=500') key.set_metadata('Content-disposition', u'filename=Schöne Zeit.txt') key.set_contents_from_string('foo') check = self.bucket.get_key('test_header_encoding') self.assertEqual(check.cache_control, 'public, max-age=500') self.assertEqual(check.content_disposition, 'filename=Sch%C3%B6ne+Zeit.txt') self.assertEqual( urllib.unquote_plus(check.content_disposition).decode('utf-8'), 'filename=Schöne Zeit.txt'.decode('utf-8') ) boto-2.20.1/tests/integration/s3/test_mfa.py000066400000000000000000000070551225267101000207300ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for S3 MfaDelete with versioning """ import unittest import time from nose.plugins.attrib import attr from boto.s3.connection import S3Connection from boto.exception import S3ResponseError from boto.s3.deletemarker import DeleteMarker @attr('notdefault', 's3mfa') class S3MFATest (unittest.TestCase): def setUp(self): self.conn = S3Connection() self.bucket_name = 'mfa-%d' % int(time.time()) self.bucket = self.conn.create_bucket(self.bucket_name) def tearDown(self): for k in self.bucket.list_versions(): self.bucket.delete_key(k.name, version_id=k.version_id) self.bucket.delete() def test_mfadel(self): # Enable Versioning with MfaDelete mfa_sn = raw_input('MFA S/N: ') mfa_code = raw_input('MFA Code: ') self.bucket.configure_versioning(True, mfa_delete=True, mfa_token=(mfa_sn, mfa_code)) # Check enabling mfa worked. 
i = 0 for i in range(1, 8): time.sleep(2**i) d = self.bucket.get_versioning_status() if d['Versioning'] == 'Enabled' and d['MfaDelete'] == 'Enabled': break self.assertEqual('Enabled', d['Versioning']) self.assertEqual('Enabled', d['MfaDelete']) # Add a key to the bucket k = self.bucket.new_key('foobar') s1 = 'This is v1' k.set_contents_from_string(s1) v1 = k.version_id # Now try to delete v1 without the MFA token try: self.bucket.delete_key('foobar', version_id=v1) self.fail("Must fail if not using MFA token") except S3ResponseError: pass # Now try delete again with the MFA token mfa_code = raw_input('MFA Code: ') self.bucket.delete_key('foobar', version_id=v1, mfa_token=(mfa_sn, mfa_code)) # Next suspend versioning and disable MfaDelete on the bucket mfa_code = raw_input('MFA Code: ') self.bucket.configure_versioning(False, mfa_delete=False, mfa_token=(mfa_sn, mfa_code)) # Lastly, check disabling mfa worked. i = 0 for i in range(1, 8): time.sleep(2**i) d = self.bucket.get_versioning_status() if d['Versioning'] == 'Suspended' and d['MfaDelete'] != 'Enabled': break self.assertEqual('Suspended', d['Versioning']) self.assertNotEqual('Enabled', d['MfaDelete']) boto-2.20.1/tests/integration/s3/test_multidelete.py000066400000000000000000000152371225267101000225030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
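# --- Illustrative sketch (editor's example, not part of the original suite) ---
# Bucket versioning status is eventually consistent, which is why the MFA
# test above polls with exponential backoff. A reusable form of that wait
# loop, assuming the caller passes an already-created boto Bucket:
import time

def wait_for_versioning(bucket, expected='Enabled', max_tries=7):
    for i in range(1, max_tries + 1):
        status = bucket.get_versioning_status()
        if status.get('Versioning') == expected:
            return status
        time.sleep(2 ** i)  # back off: 2, 4, 8, ... seconds
    raise RuntimeError('versioning status never reached %r' % expected)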
""" Some unit tests for the S3 MultiDelete """ import unittest import time from boto.s3.key import Key from boto.s3.deletemarker import DeleteMarker from boto.s3.prefix import Prefix from boto.s3.connection import S3Connection from boto.exception import S3ResponseError class S3MultiDeleteTest(unittest.TestCase): s3 = True def setUp(self): self.conn = S3Connection() self.bucket_name = 'multidelete-%d' % int(time.time()) self.bucket = self.conn.create_bucket(self.bucket_name) def tearDown(self): for key in self.bucket: key.delete() self.bucket.delete() def test_delete_nothing(self): result = self.bucket.delete_keys([]) self.assertEqual(len(result.deleted), 0) self.assertEqual(len(result.errors), 0) def test_delete_illegal(self): result = self.bucket.delete_keys([{"dict":"notallowed"}]) self.assertEqual(len(result.deleted), 0) self.assertEqual(len(result.errors), 1) def test_delete_mix(self): result = self.bucket.delete_keys(["king", ("mice", None), Key(name="regular"), Key(), Prefix(name="folder/"), DeleteMarker(name="deleted"), {"bad":"type"}]) self.assertEqual(len(result.deleted), 4) self.assertEqual(len(result.errors), 3) def test_delete_quietly(self): result = self.bucket.delete_keys(["king"], quiet=True) self.assertEqual(len(result.deleted), 0) self.assertEqual(len(result.errors), 0) def test_delete_must_escape(self): result = self.bucket.delete_keys([Key(name=">_<;")]) self.assertEqual(len(result.deleted), 1) self.assertEqual(len(result.errors), 0) def test_delete_unknown_version(self): no_ver = Key(name="no") no_ver.version_id = "version" result = self.bucket.delete_keys([no_ver]) self.assertEqual(len(result.deleted), 0) self.assertEqual(len(result.errors), 1) def test_delete_kanji(self): result = self.bucket.delete_keys([u"漢字", Key(name=u"日本語")]) self.assertEqual(len(result.deleted), 2) self.assertEqual(len(result.errors), 0) def test_delete_empty_by_list(self): result = self.bucket.delete_keys(self.bucket.list()) self.assertEqual(len(result.deleted), 0) self.assertEqual(len(result.errors), 0) def test_delete_kanji_by_list(self): for key_name in [u"漢字", u"日本語", u"テスト"]: key = self.bucket.new_key(key_name) key.set_contents_from_string('this is a test') result = self.bucket.delete_keys(self.bucket.list()) self.assertEqual(len(result.deleted), 3) self.assertEqual(len(result.errors), 0) def test_delete_with_prefixes(self): for key_name in ["a", "a/b", "b"]: key = self.bucket.new_key(key_name) key.set_contents_from_string('this is a test') # First delete all "files": "a" and "b" result = self.bucket.delete_keys(self.bucket.list(delimiter="/")) self.assertEqual(len(result.deleted), 2) # Using delimiter will cause 1 common prefix to be listed # which will be skipped as an error. self.assertEqual(len(result.errors), 1) self.assertEqual(result.errors[0].key, "a/") # Next delete any remaining objects: "a/b" result = self.bucket.delete_keys(self.bucket.list()) self.assertEqual(len(result.deleted), 1) self.assertEqual(len(result.errors), 0) self.assertEqual(result.deleted[0].key, "a/b") def test_delete_too_many_versions(self): # configure versioning first self.bucket.configure_versioning(True) # Add 1000 initial versions as DMs by deleting them :-) # Adding 1000 objects is painful otherwise... 
key_names = ['key-%03d' % i for i in range(0, 1000)] result = self.bucket.delete_keys(key_names) self.assertEqual(len(result.deleted), 1000) self.assertEqual(len(result.errors), 0) # delete them again to create 1000 more delete markers result = self.bucket.delete_keys(key_names) self.assertEqual(len(result.deleted), 1000) self.assertEqual(len(result.errors), 0) # Sometimes takes AWS sometime to settle time.sleep(10) # delete all versions to delete 2000 objects. # this tests the 1000 limit. result = self.bucket.delete_keys(self.bucket.list_versions()) self.assertEqual(len(result.deleted), 2000) self.assertEqual(len(result.errors), 0) def test_1(self): nkeys = 100 # create a bunch of keynames key_names = ['key-%03d' % i for i in range(0, nkeys)] # create the corresponding keys for key_name in key_names: key = self.bucket.new_key(key_name) key.set_contents_from_string('this is a test') # now count keys in bucket n = 0 for key in self.bucket: n += 1 self.assertEqual(n, nkeys) # now delete them all result = self.bucket.delete_keys(key_names) self.assertEqual(len(result.deleted), nkeys) self.assertEqual(len(result.errors), 0) time.sleep(5) # now count keys in bucket n = 0 for key in self.bucket: n += 1 self.assertEqual(n, 0) boto-2.20.1/tests/integration/s3/test_multipart.py000066400000000000000000000131401225267101000221760ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/ # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for the S3 MultiPartUpload """ # Note: # Multipart uploads require at least one part. If you upload # multiple parts then all parts except the last part has to be # bigger than 5M. Hence we just use 1 part so we can keep # things small and still test logic. 
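# --- Illustrative sketch (editor's example, not part of the original suite) ---
# The note above is the central multipart constraint: every part except
# the last must be at least 5MB. A minimal upload helper under that
# constraint, assuming a non-empty local file; the bucket, key, and path a
# caller passes are placeholders.
import os
from boto.s3.connection import S3Connection

def multipart_upload(bucket_name, key_name, path, part_size=5 * 1024 * 1024):
    bucket = S3Connection().get_bucket(bucket_name)
    mpu = bucket.initiate_multipart_upload(key_name)
    try:
        remaining = os.path.getsize(path)
        part_num = 0
        with open(path, 'rb') as fp:
            while remaining > 0:
                part_num += 1
                # size= bounds how much of fp goes into this part; only
                # the final part may be smaller than part_size.
                size = min(part_size, remaining)
                mpu.upload_part_from_file(fp, part_num=part_num, size=size)
                remaining -= size
        return mpu.complete_upload()
    except Exception:
        mpu.cancel_upload()  # avoid leaving orphaned parts behind
        raise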
import unittest
import time
import StringIO
from boto.s3.connection import S3Connection


class S3MultiPartUploadTest(unittest.TestCase):
    s3 = True

    def setUp(self):
        self.conn = S3Connection(is_secure=False)
        self.bucket_name = 'multipart-%d' % int(time.time())
        self.bucket = self.conn.create_bucket(self.bucket_name)

    def tearDown(self):
        for key in self.bucket:
            key.delete()
        self.bucket.delete()

    def test_abort(self):
        key_name = u"テスト"
        mpu = self.bucket.initiate_multipart_upload(key_name)
        mpu.cancel_upload()

    def test_complete_ascii(self):
        key_name = "test"
        mpu = self.bucket.initiate_multipart_upload(key_name)
        fp = StringIO.StringIO("small file")
        mpu.upload_part_from_file(fp, part_num=1)
        fp.close()
        cmpu = mpu.complete_upload()
        self.assertEqual(cmpu.key_name, key_name)
        self.assertNotEqual(cmpu.etag, None)

    def test_complete_japanese(self):
        key_name = u"テスト"
        mpu = self.bucket.initiate_multipart_upload(key_name)
        fp = StringIO.StringIO("small file")
        mpu.upload_part_from_file(fp, part_num=1)
        fp.close()
        cmpu = mpu.complete_upload()
        self.assertEqual(cmpu.key_name, key_name)
        self.assertNotEqual(cmpu.etag, None)

    def test_list_japanese(self):
        key_name = u"テスト"
        mpu = self.bucket.initiate_multipart_upload(key_name)
        rs = self.bucket.list_multipart_uploads()
        # New bucket, so only one upload expected
        lmpu = iter(rs).next()
        self.assertEqual(lmpu.id, mpu.id)
        self.assertEqual(lmpu.key_name, key_name)
        # Abort using the one returned in the list
        lmpu.cancel_upload()

    def test_list_multipart_uploads(self):
        key_name = u"テスト"
        mpus = []
        mpus.append(self.bucket.initiate_multipart_upload(key_name))
        mpus.append(self.bucket.initiate_multipart_upload(key_name))
        rs = self.bucket.list_multipart_uploads()
        # uploads (for a key) are returned in ascending order of the time
        # they were initiated
        for lmpu in rs:
            ompu = mpus.pop(0)
            self.assertEqual(lmpu.key_name, ompu.key_name)
            self.assertEqual(lmpu.id, ompu.id)
        self.assertEqual(0, len(mpus))

    def test_four_part_file(self):
        key_name = "k"
        contents = "01234567890123456789"
        sfp = StringIO.StringIO(contents)

        # upload 20 bytes in 4 parts of 5 bytes each
        mpu = self.bucket.initiate_multipart_upload(key_name)
        mpu.upload_part_from_file(sfp, part_num=1, size=5)
        mpu.upload_part_from_file(sfp, part_num=2, size=5)
        mpu.upload_part_from_file(sfp, part_num=3, size=5)
        mpu.upload_part_from_file(sfp, part_num=4, size=5)
        sfp.close()

        etags = {}
        pn = 0
        for part in mpu:
            pn += 1
            self.assertEqual(5, part.size)
            etags[pn] = part.etag
        self.assertEqual(pn, 4)
        # etags for 01234
        self.assertEqual(etags[1], etags[3])
        # etags for 56789
        self.assertEqual(etags[2], etags[4])
        # etag 01234 != etag 56789
        self.assertNotEqual(etags[1], etags[2])

        # these parts are too small to complete the upload, as each part
        # other than the last must be a minimum of 5MB, so we'll assume
        # that is enough testing and abort the upload.
        mpu.cancel_upload()

    # mpu.upload_part_from_file() now returns the uploaded part,
    # which makes the etag available.  Confirm the etag is
    # available and equal to the etag returned by the parts list.
    def test_etag_of_parts(self):
        key_name = "etagtest"
        mpu = self.bucket.initiate_multipart_upload(key_name)
        fp = StringIO.StringIO("small file")
        # upload 2 parts and save each part
        uparts = []
        uparts.append(mpu.upload_part_from_file(fp, part_num=1, size=5))
        uparts.append(mpu.upload_part_from_file(fp, part_num=2))
        fp.close()
        # compare uploaded parts etag to listed parts
        pn = 0
        for lpart in mpu:
            self.assertEqual(uparts[pn].etag, lpart.etag)
            pn += 1
        # Can't complete an upload of two sub-5MB parts, so just clean up.
        mpu.cancel_upload()
boto-2.20.1/tests/integration/s3/test_pool.py000066400000000000000000000200211225267101000211250ustar00rootroot00000000000000# Copyright (c) 2011 Brian Beach
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

"""
Some multi-threading tests of boto in a greenlet environment.
"""

import boto
import time
import uuid

from StringIO import StringIO
from threading import Thread

def spawn(function, *args, **kwargs):
    """
    Spawns a new thread.  API is the same as
    gevent.greenlet.Greenlet.spawn.
    """
    t = Thread(target=function, args=args, kwargs=kwargs)
    t.start()
    return t

def put_object(bucket, name):
    bucket.new_key(name).set_contents_from_string(name)

def get_object(bucket, name):
    assert bucket.get_key(name).get_contents_as_string() == name

def test_close_connections():
    """
    A test that exposes the problem where connections are returned to
    the connection pool (and closed) before the caller reads the
    response.

    I couldn't think of a way to test it without greenlets, so this
    test doesn't run as part of the standard test suite.  That way,
    no more dependencies are added to the test suite.
    """

    print "Running test_close_connections"

    # Connect to S3
    s3 = boto.connect_s3()

    # Clean previous tests.
    for b in s3.get_all_buckets():
        if b.name.startswith('test-'):
            for key in b.get_all_keys():
                key.delete()
            b.delete()

    # Make a test bucket
    bucket = s3.create_bucket('test-%d' % int(time.time()))

    # Create 30 threads that each create an object in S3.  The number
    # 30 is chosen because it is larger than the connection pool size
    # (20).
    names = [str(uuid.uuid4()) for _ in range(30)]
    threads = [
        spawn(put_object, bucket, name)
        for name in names
        ]
    for t in threads:
        t.join()

    # Create 30 threads to read the contents of the new objects.  This
    # is where closing the connection early is a problem, because
    # there is a response that needs to be read, and it can't be read
    # if the connection has already been closed.
    threads = [
        spawn(get_object, bucket, name)
        for name in names
        ]
    for t in threads:
        t.join()

# test_reuse_connections needs to read a file that is big enough that
# one read() call on the socket won't read the whole thing.
BIG_SIZE = 10000

class WriteAndCount(object):
    """
    A file-like object that counts the number of characters written.
""" def __init__(self): self.size = 0 def write(self, data): self.size += len(data) time.sleep(0) # yield to other threads def read_big_object(s3, bucket, name, count): for _ in range(count): key = bucket.get_key(name) out = WriteAndCount() key.get_contents_to_file(out) if out.size != BIG_SIZE: print out.size, BIG_SIZE assert out.size == BIG_SIZE print " pool size:", s3._pool.size() class LittleQuerier(object): """ An object that manages a thread that keeps pulling down small objects from S3 and checking the answers until told to stop. """ def __init__(self, bucket, small_names): self.running = True self.bucket = bucket self.small_names = small_names self.thread = spawn(self.run) def stop(self): self.running = False self.thread.join() def run(self): count = 0 while self.running: i = count % 4 key = self.bucket.get_key(self.small_names[i]) expected = str(i) rh = { 'response-content-type' : 'small/' + str(i) } actual = key.get_contents_as_string(response_headers = rh) if expected != actual: print "AHA:", repr(expected), repr(actual) assert expected == actual count += 1 def test_reuse_connections(): """ This test is an attempt to expose problems because of the fact that boto returns connections to the connection pool before reading the response. The strategy is to start a couple big reads from S3, where it will take time to read the response, and then start other requests that will reuse the same connection from the pool while the big response is still being read. The test passes because of an interesting combination of factors. I was expecting that it would fail because two threads would be reading the same connection at the same time. That doesn't happen because httplib catches the problem before it happens and raises an exception. Here's the sequence of events: - Thread 1: Send a request to read a big S3 object. - Thread 1: Returns connection to pool. - Thread 1: Start reading the body if the response. - Thread 2: Get the same connection from the pool. - Thread 2: Send another request on the same connection. - Thread 2: Try to read the response, but HTTPConnection.get_response notices that the previous response isn't done reading yet, and raises a ResponseNotReady exception. - Thread 2: _mexe catches the exception, does not return the connection to the pool, gets a new connection, and retries. - Thread 1: Finish reading the body of its response. - Server: Gets the second request on the connection, and sends a response. This response is ignored because the connection has been dropped on the client end. If you add a print statement in HTTPConnection.get_response at the point where it raises ResponseNotReady, and then run this test, you can see that it's happening. """ print "Running test_reuse_connections" # Connect to S3 s3 = boto.connect_s3() # Make a test bucket bucket = s3.create_bucket('test-%d' % int(time.time())) # Create some small objects in S3. small_names = [str(uuid.uuid4()) for _ in range(4)] for (i, name) in enumerate(small_names): bucket.new_key(name).set_contents_from_string(str(i)) # Wait, clean the connection pool, and make sure it's empty. print " waiting for all connections to become stale" time.sleep(s3._pool.STALE_DURATION + 1) s3._pool.clean() assert s3._pool.size() == 0 print " pool is empty" # Create a big object in S3. big_name = str(uuid.uuid4()) contents = "-" * BIG_SIZE bucket.new_key(big_name).set_contents_from_string(contents) # Start some threads to read it and check that they are reading # the correct thing. Each thread will read the object 40 times. 
threads = [ spawn(read_big_object, s3, bucket, big_name, 20) for _ in range(5) ] # Do some other things that may (incorrectly) re-use the same # connections while the big objects are being read. queriers = [ LittleQuerier(bucket, small_names) for _ in range(5) ] # Clean up. for t in threads: t.join() for q in queriers: q.stop() def main(): test_close_connections() test_reuse_connections() if __name__ == '__main__': main() boto-2.20.1/tests/integration/s3/test_versioning.py000066400000000000000000000137641225267101000223540ustar00rootroot00000000000000# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for the S3 Versioning. """ import unittest import time from boto.s3.connection import S3Connection from boto.exception import S3ResponseError from boto.s3.deletemarker import DeleteMarker class S3VersionTest (unittest.TestCase): def setUp(self): self.conn = S3Connection() self.bucket_name = 'version-%d' % int(time.time()) self.bucket = self.conn.create_bucket(self.bucket_name) def tearDown(self): for k in self.bucket.list_versions(): self.bucket.delete_key(k.name, version_id=k.version_id) self.bucket.delete() def test_1_versions(self): # check versioning off d = self.bucket.get_versioning_status() self.assertFalse('Versioning' in d) # enable versioning self.bucket.configure_versioning(versioning=True) d = self.bucket.get_versioning_status() self.assertEqual('Enabled', d['Versioning']) # create a new key in the versioned bucket k = self.bucket.new_key("foobar") s1 = 'This is v1' k.set_contents_from_string(s1) # remember the version id of this object v1 = k.version_id # now get the contents from s3 o1 = k.get_contents_as_string() # check to make sure content read from k is identical to original self.assertEqual(s1, o1) # now overwrite that same key with new data s2 = 'This is v2' k.set_contents_from_string(s2) v2 = k.version_id # now retrieve latest contents as a string and compare k2 = self.bucket.new_key("foobar") o2 = k2.get_contents_as_string() self.assertEqual(s2, o2) # next retrieve explicit versions and compare o1 = k.get_contents_as_string(version_id=v1) o2 = k.get_contents_as_string(version_id=v2) self.assertEqual(s1, o1) self.assertEqual(s2, o2) # Now list all versions and compare to what we have rs = self.bucket.get_all_versions() self.assertEqual(v2, rs[0].version_id) self.assertEqual(v1, rs[1].version_id) # Now do a regular list command and make sure only the new key 
shows up rs = self.bucket.get_all_keys() self.assertEqual(1, len(rs)) # Now do regular delete self.bucket.delete_key('foobar') # Now list versions and make sure old versions are there # plus the DeleteMarker which is latest. rs = self.bucket.get_all_versions() self.assertEqual(3, len(rs)) self.assertTrue(isinstance(rs[0], DeleteMarker)) # Now delete v1 of the key self.bucket.delete_key('foobar', version_id=v1) # Now list versions again and make sure v1 is not there rs = self.bucket.get_all_versions() versions = [k.version_id for k in rs] self.assertTrue(v1 not in versions) self.assertTrue(v2 in versions) # Now suspend Versioning on the bucket self.bucket.configure_versioning(False) # Allow time for the change to fully propagate. time.sleep(3) d = self.bucket.get_versioning_status() self.assertEqual('Suspended', d['Versioning']) def test_latest_version(self): self.bucket.configure_versioning(versioning=True) # add v1 of an object key_name = "key" kv1 = self.bucket.new_key(key_name) kv1.set_contents_from_string("v1") # read list which should contain latest v1 listed_kv1 = iter(self.bucket.get_all_versions()).next() self.assertEqual(listed_kv1.name, key_name) self.assertEqual(listed_kv1.version_id, kv1.version_id) self.assertEqual(listed_kv1.is_latest, True) # add v2 of the object kv2 = self.bucket.new_key(key_name) kv2.set_contents_from_string("v2") # read 2 versions, confirm v2 is latest i = iter(self.bucket.get_all_versions()) listed_kv2 = i.next() listed_kv1 = i.next() self.assertEqual(listed_kv2.version_id, kv2.version_id) self.assertEqual(listed_kv1.version_id, kv1.version_id) self.assertEqual(listed_kv2.is_latest, True) self.assertEqual(listed_kv1.is_latest, False) # delete key, which creates a delete marker as latest self.bucket.delete_key(key_name) i = iter(self.bucket.get_all_versions()) listed_kv3 = i.next() listed_kv2 = i.next() listed_kv1 = i.next() self.assertNotEqual(listed_kv3.version_id, None) self.assertEqual(listed_kv2.version_id, kv2.version_id) self.assertEqual(listed_kv1.version_id, kv1.version_id) self.assertEqual(listed_kv3.is_latest, True) self.assertEqual(listed_kv2.is_latest, False) self.assertEqual(listed_kv1.is_latest, False) boto-2.20.1/tests/integration/sdb/000077500000000000000000000000001225267101000167705ustar00rootroot00000000000000boto-2.20.1/tests/integration/sdb/__init__.py000066400000000000000000000021201225267101000210740ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
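# --- Illustrative sketch (editor's example, not part of the original suite) ---
# The version round trip exercised by the S3 versioning tests above, in
# isolation: write two versions of one key, then read each back by its
# version_id. The bucket-name pattern and key name are placeholders.
import time
from boto.s3.connection import S3Connection

def version_round_trip():
    conn = S3Connection()
    bucket = conn.create_bucket('version-demo-%d' % int(time.time()))
    bucket.configure_versioning(versioning=True)
    key = bucket.new_key('doc')
    key.set_contents_from_string('v1')
    v1 = key.version_id
    key.set_contents_from_string('v2')
    v2 = key.version_id
    # explicit version reads, as in test_1_versions
    assert key.get_contents_as_string(version_id=v1) == 'v1'
    assert key.get_contents_as_string(version_id=v2) == 'v2'
    # clean up every version, then the bucket itself
    for k in bucket.list_versions():
        bucket.delete_key(k.name, version_id=k.version_id)
    bucket.delete()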
boto-2.20.1/tests/integration/sdb/test_cert_verification.py000066400000000000000000000030101225267101000240720ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.sdb class SDBCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): sdb = True regions = boto.sdb.regions() def sample_service_call(self, conn): conn.get_all_domains() boto-2.20.1/tests/integration/sdb/test_connection.py000066400000000000000000000103551225267101000225440ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for the SDBConnection """ import unittest import time from boto.sdb.connection import SDBConnection from boto.exception import SDBResponseError class SDBConnectionTest (unittest.TestCase): sdb = True def test_1_basic(self): print '--- running SDBConnection tests ---' c = SDBConnection() rs = c.get_all_domains() num_domains = len(rs) # try illegal name try: domain = c.create_domain('bad:domain:name') except SDBResponseError: pass # now create one that should work and should be unique (i.e. 
a new one) domain_name = 'test%d' % int(time.time()) domain = c.create_domain(domain_name) rs = c.get_all_domains() assert len(rs) == num_domains + 1 # now let's a couple of items and attributes item_1 = 'item1' same_value = 'same_value' attrs_1 = {'name1' : same_value, 'name2' : 'diff_value_1'} domain.put_attributes(item_1, attrs_1) item_2 = 'item2' attrs_2 = {'name1' : same_value, 'name2' : 'diff_value_2'} domain.put_attributes(item_2, attrs_2) # try to get the attributes and see if they match item = domain.get_attributes(item_1, consistent_read=True) assert len(item.keys()) == len(attrs_1.keys()) assert item['name1'] == attrs_1['name1'] assert item['name2'] == attrs_1['name2'] # try a search or two query = 'select * from %s where name1="%s"' % (domain_name, same_value) rs = domain.select(query, consistent_read=True) n = 0 for item in rs: n += 1 assert n == 2 query = 'select * from %s where name2="diff_value_2"' % domain_name rs = domain.select(query, consistent_read=True) n = 0 for item in rs: n += 1 assert n == 1 # delete all attributes associated with item_1 stat = domain.delete_attributes(item_1) assert stat # now try a batch put operation on the domain item3 = {'name3_1' : 'value3_1', 'name3_2' : 'value3_2', 'name3_3' : ['value3_3_1', 'value3_3_2']} item4 = {'name4_1' : 'value4_1', 'name4_2' : ['value4_2_1', 'value4_2_2'], 'name4_3' : 'value4_3'} items = {'item3' : item3, 'item4' : item4} domain.batch_put_attributes(items) item = domain.get_attributes('item3', consistent_read=True) assert item['name3_2'] == 'value3_2' # now try a batch delete operation (variation #1) items = {'item3' : item3} stat = domain.batch_delete_attributes(items) item = domain.get_attributes('item3', consistent_read=True) assert not item # now try a batch delete operation (variation #2) stat = domain.batch_delete_attributes({'item4' : None}) item = domain.get_attributes('item4', consistent_read=True) assert not item # now delete the domain stat = c.delete_domain(domain) assert stat print '--- tests completed ---' boto-2.20.1/tests/integration/ses/000077500000000000000000000000001225267101000170125ustar00rootroot00000000000000boto-2.20.1/tests/integration/ses/__init__.py000066400000000000000000000000001225267101000211110ustar00rootroot00000000000000boto-2.20.1/tests/integration/ses/test_cert_verification.py000066400000000000000000000030261225267101000241230ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
Check that all of the certs on all service endpoints validate.
"""
import unittest

from tests.integration import ServiceCertVerificationTest

import boto.ses


class SESCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest):
    ses = True
    regions = boto.ses.regions()

    def sample_service_call(self, conn):
        conn.list_verified_email_addresses()
boto-2.20.1/tests/integration/ses/test_connection.py000066400000000000000000000027771225267101000225630ustar00rootroot00000000000000from __future__ import with_statement

from tests.unit import unittest

from boto.ses.connection import SESConnection
from boto.ses import exceptions


class SESConnectionTest(unittest.TestCase):
    ses = True

    def setUp(self):
        self.ses = SESConnection()

    def test_get_dkim_attributes(self):
        response = self.ses.get_identity_dkim_attributes(['example.com'])
        # Verify we get the structure we expect; we don't care about the
        # values.
        self.assertTrue('GetIdentityDkimAttributesResponse' in response)
        self.assertTrue('GetIdentityDkimAttributesResult' in
                        response['GetIdentityDkimAttributesResponse'])
        self.assertTrue(
            'DkimAttributes' in response['GetIdentityDkimAttributesResponse']\
                ['GetIdentityDkimAttributesResult'])

    def test_set_identity_dkim_enabled(self):
        # This api call should fail because we have not verified the domain,
        # so we can test that it at least fails as we expect.
        with self.assertRaises(exceptions.SESIdentityNotVerifiedError):
            self.ses.set_identity_dkim_enabled('example.com', True)

    def test_verify_domain_dkim(self):
        # This api call should fail because we have not confirmed the domain,
        # so we can test that it at least fails as we expect.
        with self.assertRaises(exceptions.SESDomainNotConfirmedError):
            self.ses.verify_domain_dkim('example.com')


if __name__ == '__main__':
    unittest.main()
boto-2.20.1/tests/integration/sns/000077500000000000000000000000001225267101000170235ustar00rootroot00000000000000boto-2.20.1/tests/integration/sns/__init__.py000066400000000000000000000021201225267101000211300ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
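# --- Illustrative sketch (editor's example, not part of the original suite) ---
# A taste of the flow the SNS tests below exercise: create a topic and
# wire an SQS queue to it. The region and names are placeholders, and the
# response nesting assumed here follows boto's usual Response/Result
# dictionary wrapping.
import boto.sns
import boto.sqs

def fanout_to_queue(topic_name, queue_name, region='us-west-2'):
    sns = boto.sns.connect_to_region(region)
    sqs = boto.sqs.connect_to_region(region)
    response = sns.create_topic(topic_name)
    topic_arn = response['CreateTopicResponse']\
                        ['CreateTopicResult']['TopicArn']
    queue = sqs.create_queue(queue_name)
    # subscribe_sqs_queue() both subscribes the queue to the topic and
    # sets a queue policy that lets the topic deliver to it.
    return sns.subscribe_sqs_queue(topic_arn, queue)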
boto-2.20.1/tests/integration/sns/test_cert_verification.py000066400000000000000000000027771225267101000241470ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on SNS endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.sns class SNSCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): sns = True regions = boto.sns.regions() def sample_service_call(self, conn): conn.get_all_topics() boto-2.20.1/tests/integration/sns/test_connection.py000066400000000000000000000047741225267101000226060ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from __future__ import with_statement import mock import httplib from tests.unit import unittest from boto.sns import connect_to_region class StubResponse(object): status = 403 reason = 'nopenopenope' def getheader(self, val): return '' def read(self): return '' class TestSNSConnection(unittest.TestCase): sns = True def setUp(self): self.connection = connect_to_region('us-west-2') def test_list_platform_applications(self): response = self.connection.list_platform_applications() def test_forced_host(self): # This test asserts that the ``Host`` header is correctly set. 
# On Python 2.5(.6), not having this in place would cause any SigV4 # calls to fail, due to a signature mismatch (the port would be present # when it shouldn't be). https = httplib.HTTPSConnection mpo = mock.patch.object with mpo(https, 'request') as mock_request: with mpo(https, 'getresponse', return_value=StubResponse()): with self.assertRaises(self.connection.ResponseError): self.connection.list_platform_applications() # Now, assert that the ``Host`` was there & correct. call = mock_request.call_args_list[0] headers = call[0][3] self.assertTrue('Host' in headers) self.assertEqual(headers['Host'], 'sns.us-west-2.amazonaws.com') boto-2.20.1/tests/integration/sns/test_sns_sqs_subscription.py000066400000000000000000000073521225267101000247400ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Unit tests for subscribing SQS queues to SNS topics. 
""" import hashlib import time from tests.unit import unittest from boto.compat import json from boto.sqs.connection import SQSConnection from boto.sns.connection import SNSConnection class SNSSubcribeSQSTest(unittest.TestCase): sqs = True sns = True def setUp(self): self.sqsc = SQSConnection() self.snsc = SNSConnection() def get_policy_statements(self, queue): attrs = queue.get_attributes('Policy') policy = json.loads(attrs.get('Policy', "{}")) return policy.get('Statement', {}) def test_correct_sid(self): now = time.time() topic_name = queue_name = "test_correct_sid%d" % (now) timeout = 60 queue = self.sqsc.create_queue(queue_name, timeout) self.addCleanup(self.sqsc.delete_queue, queue, True) queue_arn = queue.arn topic = self.snsc.create_topic(topic_name) topic_arn = topic['CreateTopicResponse']['CreateTopicResult']\ ['TopicArn'] self.addCleanup(self.snsc.delete_topic, topic_arn) expected_sid = hashlib.md5(topic_arn + queue_arn).hexdigest() resp = self.snsc.subscribe_sqs_queue(topic_arn, queue) found_expected_sid = False statements = self.get_policy_statements(queue) for statement in statements: if statement['Sid'] == expected_sid: found_expected_sid = True break self.assertTrue(found_expected_sid) def test_idempotent_subscribe(self): now = time.time() topic_name = queue_name = "test_idempotent_subscribe%d" % (now) timeout = 60 queue = self.sqsc.create_queue(queue_name, timeout) self.addCleanup(self.sqsc.delete_queue, queue, True) initial_statements = self.get_policy_statements(queue) queue_arn = queue.arn topic = self.snsc.create_topic(topic_name) topic_arn = topic['CreateTopicResponse']['CreateTopicResult']\ ['TopicArn'] self.addCleanup(self.snsc.delete_topic, topic_arn) resp = self.snsc.subscribe_sqs_queue(topic_arn, queue) time.sleep(3) first_subscribe_statements = self.get_policy_statements(queue) self.assertEqual(len(first_subscribe_statements), len(initial_statements) + 1) resp2 = self.snsc.subscribe_sqs_queue(topic_arn, queue) time.sleep(3) second_subscribe_statements = self.get_policy_statements(queue) self.assertEqual(len(second_subscribe_statements), len(first_subscribe_statements)) boto-2.20.1/tests/integration/sqs/000077500000000000000000000000001225267101000170265ustar00rootroot00000000000000boto-2.20.1/tests/integration/sqs/__init__.py000066400000000000000000000021201225267101000211320ustar00rootroot00000000000000# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
boto-2.20.1/tests/integration/sqs/test_cert_verification.py000066400000000000000000000027771225267101000241530ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on SQS endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.sqs class SQSCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): sqs = True regions = boto.sqs.regions() def sample_service_call(self, conn): conn.get_all_queues() boto-2.20.1/tests/integration/sqs/test_connection.py000066400000000000000000000241011225267101000225740ustar00rootroot00000000000000# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2010, Eucalyptus Systems, Inc. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
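# Illustrative sketch -- not part of the boto test suite. Each
# test_cert_verification.py module in this tree follows the same mixin recipe:
# pair unittest.TestCase with ServiceCertVerificationTest, list the service's
# regions, and supply one cheap read-only call that the mixin runs against
# every regional endpoint. Extending the pattern to another service is
# mechanical; boto.ec2 is used here purely as an example:

import unittest

from tests.integration import ServiceCertVerificationTest
import boto.ec2

class EC2CertVerificationTest(unittest.TestCase, ServiceCertVerificationTest):
    ec2 = True                    # marker attribute used to select the test
    regions = boto.ec2.regions()  # the mixin iterates over all endpoints

    def sample_service_call(self, conn):
        # Any lightweight call works; it only has to complete a TLS
        # handshake and one signed request per region.
        conn.get_all_zones()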
""" Some unit tests for the SQSConnection """ from __future__ import with_statement import time from threading import Timer from tests.unit import unittest from boto.sqs.connection import SQSConnection from boto.sqs.message import Message from boto.sqs.message import MHMessage from boto.exception import SQSError class SQSConnectionTest(unittest.TestCase): sqs = True def test_1_basic(self): print '--- running SQSConnection tests ---' c = SQSConnection() rs = c.get_all_queues() num_queues = 0 for q in rs: num_queues += 1 # try illegal name try: queue = c.create_queue('bad*queue*name') self.fail('queue name should have been bad') except SQSError: pass # now create one that should work and should be unique (i.e. a new one) queue_name = 'test%d' % int(time.time()) timeout = 60 queue_1 = c.create_queue(queue_name, timeout) self.addCleanup(c.delete_queue, queue_1, True) time.sleep(60) rs = c.get_all_queues() i = 0 for q in rs: i += 1 assert i == num_queues + 1 assert queue_1.count_slow() == 0 # check the visibility timeout t = queue_1.get_timeout() assert t == timeout, '%d != %d' % (t, timeout) # now try to get queue attributes a = q.get_attributes() assert 'ApproximateNumberOfMessages' in a assert 'VisibilityTimeout' in a a = q.get_attributes('ApproximateNumberOfMessages') assert 'ApproximateNumberOfMessages' in a assert 'VisibilityTimeout' not in a a = q.get_attributes('VisibilityTimeout') assert 'ApproximateNumberOfMessages' not in a assert 'VisibilityTimeout' in a # now change the visibility timeout timeout = 45 queue_1.set_timeout(timeout) time.sleep(60) t = queue_1.get_timeout() assert t == timeout, '%d != %d' % (t, timeout) # now add a message message_body = 'This is a test\n' message = queue_1.new_message(message_body) queue_1.write(message) time.sleep(60) assert queue_1.count_slow() == 1 time.sleep(90) # now read the message from the queue with a 10 second timeout message = queue_1.read(visibility_timeout=10) assert message assert message.get_body() == message_body # now immediately try another read, shouldn't find anything message = queue_1.read() assert message == None # now wait 30 seconds and try again time.sleep(30) message = queue_1.read() assert message # now delete the message queue_1.delete_message(message) time.sleep(30) assert queue_1.count_slow() == 0 # try a batch write num_msgs = 10 msgs = [(i, 'This is message %d' % i, 0) for i in range(num_msgs)] queue_1.write_batch(msgs) # try to delete all of the messages using batch delete deleted = 0 while deleted < num_msgs: time.sleep(5) msgs = queue_1.get_messages(num_msgs) if msgs: br = queue_1.delete_message_batch(msgs) deleted += len(br.results) # create another queue so we can test force deletion # we will also test MHMessage with this queue queue_name = 'test%d' % int(time.time()) timeout = 60 queue_2 = c.create_queue(queue_name, timeout) self.addCleanup(c.delete_queue, queue_2, True) queue_2.set_message_class(MHMessage) time.sleep(30) # now add a couple of messages message = queue_2.new_message() message['foo'] = 'bar' queue_2.write(message) message_body = {'fie': 'baz', 'foo': 'bar'} message = queue_2.new_message(body=message_body) queue_2.write(message) time.sleep(30) m = queue_2.read() assert m['foo'] == 'bar' print '--- tests completed ---' def test_sqs_timeout(self): c = SQSConnection() queue_name = 'test_sqs_timeout_%s' % int(time.time()) queue = c.create_queue(queue_name) self.addCleanup(c.delete_queue, queue, True) start = time.time() poll_seconds = 2 response = queue.read(visibility_timeout=None, 
wait_time_seconds=poll_seconds) total_time = time.time() - start self.assertTrue(total_time > poll_seconds, "SQS queue did not block for at least %s seconds: %s" % (poll_seconds, total_time)) self.assertIsNone(response) # Now that there's an element in the queue, we should not block for 2 # seconds. c.send_message(queue, 'test message') start = time.time() poll_seconds = 2 message = c.receive_message( queue, number_messages=1, visibility_timeout=None, attributes=None, wait_time_seconds=poll_seconds)[0] total_time = time.time() - start self.assertTrue(total_time < poll_seconds, "SQS queue blocked longer than %s seconds: %s" % (poll_seconds, total_time)) self.assertEqual(message.get_body(), 'test message') attrs = c.get_queue_attributes(queue, 'ReceiveMessageWaitTimeSeconds') self.assertEqual(attrs['ReceiveMessageWaitTimeSeconds'], '0') def test_sqs_longpoll(self): c = SQSConnection() queue_name = 'test_sqs_longpoll_%s' % int(time.time()) queue = c.create_queue(queue_name) self.addCleanup(c.delete_queue, queue, True) messages = [] # The basic idea is to spawn a timer thread that will put something # on the queue in 5 seconds and verify that our long polling client # sees the message after waiting for approximately that long. def send_message(): messages.append( queue.write(queue.new_message('this is a test message'))) t = Timer(5.0, send_message) t.start() self.addCleanup(t.join) start = time.time() response = queue.read(wait_time_seconds=10) end = time.time() t.join() self.assertEqual(response.id, messages[0].id) self.assertEqual(response.get_body(), messages[0].get_body()) # The timer thread should send the message in 5 seconds, so # we're giving +- .5 seconds for the total time the queue # was blocked on the read call. self.assertTrue(4.5 <= (end - start) <= 5.5) def test_queue_deletion_affects_full_queues(self): conn = SQSConnection() initial_count = len(conn.get_all_queues()) empty = conn.create_queue('empty%d' % int(time.time())) full = conn.create_queue('full%d' % int(time.time())) time.sleep(60) # Make sure they're both around. self.assertEqual(len(conn.get_all_queues()), initial_count + 2) # Put a message in the full queue. m1 = Message() m1.set_body('This is a test message.') full.write(m1) self.assertEqual(full.count(), 1) self.assertTrue(conn.delete_queue(empty)) # Here's the regression for the docs. SQS will delete a queue with # messages in it, no ``force_deletion`` needed. self.assertTrue(conn.delete_queue(full)) # Wait long enough for SQS to finally remove the queues. time.sleep(90) self.assertEqual(len(conn.get_all_queues()), initial_count) def test_get_messages_attributes(self): conn = SQSConnection() current_timestamp = int(time.time()) queue_name = 'test%d' % int(time.time()) test = conn.create_queue(queue_name) self.addCleanup(conn.delete_queue, test) time.sleep(65) # Put a message in the queue. m1 = Message() m1.set_body('This is a test message.') test.write(m1) self.assertEqual(test.count(), 1) # Check all attributes. msgs = test.get_messages( num_messages=1, attributes='All' ) for msg in msgs: self.assertEqual(msg.attributes['ApproximateReceiveCount'], '1') first_rec = msg.attributes['ApproximateFirstReceiveTimestamp'] first_rec = int(first_rec) / 1000 self.assertTrue(first_rec >= current_timestamp) # Put another message in the queue. m2 = Message() m2.set_body('This is another test message.') test.write(m2) self.assertEqual(test.count(), 1) # Check a specific attribute. 
msgs = test.get_messages( num_messages=1, attributes='ApproximateReceiveCount' ) for msg in msgs: self.assertEqual(msg.attributes['ApproximateReceiveCount'], '1') with self.assertRaises(KeyError): msg.attributes['ApproximateFirstReceiveTimestamp'] boto-2.20.1/tests/integration/storage_uri/000077500000000000000000000000001225267101000205435ustar00rootroot00000000000000boto-2.20.1/tests/integration/storage_uri/__init__.py000066400000000000000000000000001225267101000226420ustar00rootroot00000000000000boto-2.20.1/tests/integration/storage_uri/test_storage_uri.py000066400000000000000000000044631225267101000245060ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Some unit tests for StorageUri """ from tests.unit import unittest import time import boto from boto.s3.connection import S3Connection, Location class StorageUriTest(unittest.TestCase): s3 = True def nuke_bucket(self, bucket): for key in bucket: key.delete() bucket.delete() def test_storage_uri_regionless(self): # First, create a bucket in a different region. conn = S3Connection( host='s3-us-west-2.amazonaws.com' ) bucket_name = 'keytest-%d' % int(time.time()) bucket = conn.create_bucket(bucket_name, location=Location.USWest2) self.addCleanup(self.nuke_bucket, bucket) # Now use ``storage_uri`` to try to make a new key. # This would throw a 301 exception. suri = boto.storage_uri('s3://%s/test' % bucket_name) the_key = suri.new_key() the_key.key = 'Test301' the_key.set_contents_from_string( 'This should store in a different region.' ) # Check it a different way. 
alt_conn = boto.connect_s3(host='s3-us-west-2.amazonaws.com') alt_bucket = alt_conn.get_bucket(bucket_name) alt_key = alt_bucket.get_key('Test301') boto-2.20.1/tests/integration/sts/000077500000000000000000000000001225267101000170315ustar00rootroot00000000000000boto-2.20.1/tests/integration/sts/__init__.py000066400000000000000000000021131225267101000211370ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. boto-2.20.1/tests/integration/sts/test_cert_verification.py000066400000000000000000000030121225267101000241350ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. """ import unittest from tests.integration import ServiceCertVerificationTest import boto.sts class STSCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): sts = True regions = boto.sts.regions() def sample_service_call(self, conn): conn.get_session_token() boto-2.20.1/tests/integration/sts/test_session_token.py000066400000000000000000000065051225267101000233330ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests for Session Tokens """ import unittest import time import os from boto.exception import BotoServerError from boto.sts.connection import STSConnection from boto.sts.credentials import Credentials from boto.s3.connection import S3Connection class SessionTokenTest (unittest.TestCase): sts = True def test_session_token(self): print '--- running Session Token tests ---' c = STSConnection() # Create a session token token = c.get_session_token() # Save session token to a file token.save('token.json') # Now load up a copy of that token token_copy = Credentials.load('token.json') assert token_copy.access_key == token.access_key assert token_copy.secret_key == token.secret_key assert token_copy.session_token == token.session_token assert token_copy.expiration == token.expiration assert token_copy.request_id == token.request_id os.unlink('token.json') assert not token.is_expired() # Try using the session token with S3 s3 = S3Connection(aws_access_key_id=token.access_key, aws_secret_access_key=token.secret_key, security_token=token.session_token) buckets = s3.get_all_buckets() print '--- tests completed ---' def test_assume_role_with_web_identity(self): c = STSConnection(anon=True) arn = 'arn:aws:iam::000240903217:role/FederatedWebIdentityRole' wit = 'b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9' try: creds = c.assume_role_with_web_identity( role_arn=arn, role_session_name='guestuser', web_identity_token=wit, provider_id='www.amazon.com', ) except BotoServerError as err: self.assertEqual(err.status, 403) self.assertTrue('Not authorized' in err.body) def test_decode_authorization_message(self): c = STSConnection() try: creds = c.decode_authorization_message('b94d27b9934') except BotoServerError as err: self.assertEqual(err.status, 400) self.assertTrue('Invalid token' in err.body) boto-2.20.1/tests/integration/support/000077500000000000000000000000001225267101000177345ustar00rootroot00000000000000boto-2.20.1/tests/integration/support/__init__.py000066400000000000000000000000001225267101000220330ustar00rootroot00000000000000boto-2.20.1/tests/integration/support/test_cert_verification.py000066400000000000000000000026331225267101000250500ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import unittest from tests.integration import ServiceCertVerificationTest import boto.support class SupportCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): support = True regions = boto.support.regions() def sample_service_call(self, conn): conn.describe_services() boto-2.20.1/tests/integration/support/test_layer1.py000066400000000000000000000057251225267101000225530ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# import unittest import time from boto.support.layer1 import SupportConnection from boto.support import exceptions class TestSupportLayer1Management(unittest.TestCase): support = True def setUp(self): self.api = SupportConnection() self.wait_time = 5 def test_as_much_as_possible_before_teardown(self): cases = self.api.describe_cases() preexisting_count = len(cases.get('cases', [])) services = self.api.describe_services() self.assertTrue('services' in services) service_codes = [serv['code'] for serv in services['services']] self.assertTrue('amazon-cloudsearch' in service_codes) severity = self.api.describe_severity_levels() self.assertTrue('severityLevels' in severity) severity_codes = [sev['code'] for sev in severity['severityLevels']] self.assertTrue('low' in severity_codes) case_1 = self.api.create_case( subject='TEST: I am a test case.', service_code='amazon-cloudsearch', category_code='other', communication_body="This is a test problem", severity_code='low', language='en' ) time.sleep(self.wait_time) case_id = case_1['caseId'] new_cases = self.api.describe_cases() self.assertTrue(len(new_cases['cases']) > preexisting_count) result = self.api.add_communication_to_case( communication_body="This is a test solution.", case_id=case_id ) self.assertTrue(result.get('result', False)) time.sleep(self.wait_time) final_cases = self.api.describe_cases(case_id_list=[case_id]) comms = final_cases['cases'][0]['recentCommunications']\ ['communications'] self.assertEqual(len(comms), 2) close_result = self.api.resolve_case(case_id=case_id) boto-2.20.1/tests/integration/swf/000077500000000000000000000000001225267101000170175ustar00rootroot00000000000000boto-2.20.1/tests/integration/swf/__init__.py000066400000000000000000000000001225267101000211160ustar00rootroot00000000000000boto-2.20.1/tests/integration/swf/test_cert_verification.py000066400000000000000000000030211225267101000241230ustar00rootroot00000000000000# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/ # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Check that all of the certs on all service endpoints validate. 
""" import unittest from tests.integration import ServiceCertVerificationTest import boto.swf class SWFCertVerificationTest(unittest.TestCase, ServiceCertVerificationTest): swf = True regions = boto.swf.regions() def sample_service_call(self, conn): conn.list_domains('REGISTERED') boto-2.20.1/tests/integration/swf/test_layer1.py000066400000000000000000000247671225267101000216450ustar00rootroot00000000000000""" Tests for Layer1 of Simple Workflow """ import os import unittest import time from boto.swf.layer1 import Layer1 from boto.swf import exceptions as swf_exceptions # A standard AWS account is permitted a maximum of 100 of SWF domains, # registered or deprecated. Deleting deprecated domains on demand does # not appear possible. Therefore, these tests reuse a default or # user-named testing domain. This is named by the user via the environment # variable BOTO_SWF_UNITTEST_DOMAIN, if available. Otherwise the default # testing domain is literally "boto-swf-unittest-domain". Do not use # the testing domain for other purposes. BOTO_SWF_UNITTEST_DOMAIN = os.environ.get("BOTO_SWF_UNITTEST_DOMAIN", "boto-swf-unittest-domain") # A standard domain can have a maxiumum of 10,000 workflow types and # activity types, registered or deprecated. Therefore, eventually any # tests which register new workflow types or activity types would begin # to fail with LimitExceeded. Instead of generating new workflow types # and activity types, these tests reuse the existing types. # The consequence of the limits and inability to delete deprecated # domains, workflow types, and activity types is that the tests in # this module will not test for the three register actions: # * register_domain # * register_workflow_type # * register_activity_type # Instead, the setUp of the TestCase create a domain, workflow type, # and activity type, expecting that they may already exist, and the # tests themselves test other things. # If you really want to re-test the register_* functions in their # ability to create things (rather than just reporting that they # already exist), you'll need to use a new BOTO_SWF_UNITTEST_DOMAIN. # But, beware that once you hit 100 domains, you are cannot create any # more, delete existing ones, or rename existing ones. # Some API calls establish resources, but these resources are not instantly # available to the next API call. For testing purposes, it is necessary to # have a short pause to avoid having tests fail for invalid reasons. PAUSE_SECONDS = 4 class SimpleWorkflowLayer1TestBase(unittest.TestCase): """ There are at least two test cases which share this setUp/tearDown and the class-based parameter definitions: * SimpleWorkflowLayer1Test * tests.swf.test_layer1_workflow_execution.SwfL1WorkflowExecutionTest """ swf = True # Some params used throughout the tests... # Domain registration params... _domain = BOTO_SWF_UNITTEST_DOMAIN _workflow_execution_retention_period_in_days = 'NONE' _domain_description = 'test workflow domain' # Type registration params used for workflow type and activity type... _task_list = 'tasklist1' # Workflow type registration params... _workflow_type_name = 'wft1' _workflow_type_version = '1' _workflow_type_description = 'wft1 description' _default_child_policy = 'REQUEST_CANCEL' _default_execution_start_to_close_timeout = '600' _default_task_start_to_close_timeout = '60' # Activity type registration params... 
_activity_type_name = 'at1' _activity_type_version = '1' _activity_type_description = 'at1 description' _default_task_heartbeat_timeout = '30' _default_task_schedule_to_close_timeout = '90' _default_task_schedule_to_start_timeout = '10' _default_task_start_to_close_timeout = '30' def setUp(self): # Create a Layer1 connection for testing. # Tester needs boto config or keys in environment variables. self.conn = Layer1() # Register a domain. Expect None (success) or # SWFDomainAlreadyExistsError. try: r = self.conn.register_domain(self._domain, self._workflow_execution_retention_period_in_days, description=self._domain_description) assert r is None time.sleep(PAUSE_SECONDS) except swf_exceptions.SWFDomainAlreadyExistsError: pass # Register a workflow type. Expect None (success) or # SWFTypeAlreadyExistsError. try: r = self.conn.register_workflow_type(self._domain, self._workflow_type_name, self._workflow_type_version, task_list=self._task_list, default_child_policy=self._default_child_policy, default_execution_start_to_close_timeout= self._default_execution_start_to_close_timeout, default_task_start_to_close_timeout= self._default_task_start_to_close_timeout, description=self._workflow_type_description) assert r is None time.sleep(PAUSE_SECONDS) except swf_exceptions.SWFTypeAlreadyExistsError: pass # Register an activity type. Expect None (success) or # SWFTypeAlreadyExistsError. try: r = self.conn.register_activity_type(self._domain, self._activity_type_name, self._activity_type_version, task_list=self._task_list, default_task_heartbeat_timeout= self._default_task_heartbeat_timeout, default_task_schedule_to_close_timeout= self._default_task_schedule_to_close_timeout, default_task_schedule_to_start_timeout= self._default_task_schedule_to_start_timeout, default_task_start_to_close_timeout= self._default_task_start_to_close_timeout, description=self._activity_type_description) assert r is None time.sleep(PAUSE_SECONDS) except swf_exceptions.SWFTypeAlreadyExistsError: pass def tearDown(self): # Delete what we can... pass class SimpleWorkflowLayer1Test(SimpleWorkflowLayer1TestBase): def test_list_domains(self): # Find the domain. r = self.conn.list_domains('REGISTERED') found = None for info in r['domainInfos']: if info['name'] == self._domain: found = info break self.assertNotEqual(found, None, 'list_domains; test domain not found') # Validate some properties. self.assertEqual(found['description'], self._domain_description, 'list_domains; description does not match') self.assertEqual(found['status'], 'REGISTERED', 'list_domains; status does not match') def test_list_workflow_types(self): # Find the workflow type. r = self.conn.list_workflow_types(self._domain, 'REGISTERED') found = None for info in r['typeInfos']: if ( info['workflowType']['name'] == self._workflow_type_name and info['workflowType']['version'] == self._workflow_type_version ): found = info break self.assertNotEqual(found, None, 'list_workflow_types; test type not found') # Validate some properties. self.assertEqual(found['description'], self._workflow_type_description, 'list_workflow_types; description does not match') self.assertEqual(found['status'], 'REGISTERED', 'list_workflow_types; status does not match') def test_list_activity_types(self): # Find the activity type. 
r = self.conn.list_activity_types(self._domain, 'REGISTERED') found = None for info in r['typeInfos']: if info['activityType']['name'] == self._activity_type_name: found = info break self.assertNotEqual(found, None, 'list_activity_types; test type not found') # Validate some properties. self.assertEqual(found['description'], self._activity_type_description, 'list_activity_types; description does not match') self.assertEqual(found['status'], 'REGISTERED', 'list_activity_types; status does not match') def test_list_closed_workflow_executions(self): # Test various legal ways to call function. latest_date = time.time() oldest_date = time.time() - 3600 # With startTimeFilter... self.conn.list_closed_workflow_executions(self._domain, start_latest_date=latest_date, start_oldest_date=oldest_date) # With closeTimeFilter... self.conn.list_closed_workflow_executions(self._domain, close_latest_date=latest_date, close_oldest_date=oldest_date) # With closeStatusFilter... self.conn.list_closed_workflow_executions(self._domain, close_latest_date=latest_date, close_oldest_date=oldest_date, close_status='COMPLETED') # With tagFilter... self.conn.list_closed_workflow_executions(self._domain, close_latest_date=latest_date, close_oldest_date=oldest_date, tag='ig') # With executionFilter... self.conn.list_closed_workflow_executions(self._domain, close_latest_date=latest_date, close_oldest_date=oldest_date, workflow_id='ig') # With typeFilter... self.conn.list_closed_workflow_executions(self._domain, close_latest_date=latest_date, close_oldest_date=oldest_date, workflow_name='ig', workflow_version='ig') # With reverseOrder... self.conn.list_closed_workflow_executions(self._domain, close_latest_date=latest_date, close_oldest_date=oldest_date, reverse_order=True) def test_list_open_workflow_executions(self): # Test various legal ways to call function. latest_date = time.time() oldest_date = time.time() - 3600 # With required params only... self.conn.list_closed_workflow_executions(self._domain, latest_date, oldest_date) # With tagFilter... self.conn.list_closed_workflow_executions(self._domain, latest_date, oldest_date, tag='ig') # With executionFilter... self.conn.list_closed_workflow_executions(self._domain, latest_date, oldest_date, workflow_id='ig') # With typeFilter... self.conn.list_closed_workflow_executions(self._domain, latest_date, oldest_date, workflow_name='ig', workflow_version='ig') # With reverseOrder... self.conn.list_closed_workflow_executions(self._domain, latest_date, oldest_date, reverse_order=True) boto-2.20.1/tests/integration/swf/test_layer1_workflow_execution.py000066400000000000000000000152741225267101000256530ustar00rootroot00000000000000""" Tests for Layer1 of Simple Workflow """ import time import uuid import json import traceback from boto.swf.layer1_decisions import Layer1Decisions from test_layer1 import SimpleWorkflowLayer1TestBase class SwfL1WorkflowExecutionTest(SimpleWorkflowLayer1TestBase): """ test a simple workflow execution """ swf = True def run_decider(self): """ run one iteration of a simple decision engine """ # Poll for a decision task. tries = 0 while True: dtask = self.conn.poll_for_decision_task(self._domain, self._task_list, reverse_order=True) if dtask.get('taskToken') is not None: # This means a real decision task has arrived. break time.sleep(2) tries += 1 if tries > 10: # Give up if it's taking too long. Probably # means something is broken somewhere else. assert False, 'no decision task occurred' # Get the most recent interesting event. 
ignorable = ( 'DecisionTaskScheduled', 'DecisionTaskStarted', 'DecisionTaskTimedOut', ) event = None for tevent in dtask['events']: if tevent['eventType'] not in ignorable: event = tevent break # Construct the decision response. decisions = Layer1Decisions() if event['eventType'] == 'WorkflowExecutionStarted': activity_id = str(uuid.uuid1()) decisions.schedule_activity_task(activity_id, self._activity_type_name, self._activity_type_version, task_list=self._task_list, input=event['workflowExecutionStartedEventAttributes']['input']) elif event['eventType'] == 'ActivityTaskCompleted': decisions.complete_workflow_execution( result=event['activityTaskCompletedEventAttributes']['result']) elif event['eventType'] == 'ActivityTaskFailed': decisions.fail_workflow_execution( reason=event['activityTaskFailedEventAttributes']['reason'], details=event['activityTaskFailedEventAttributes']['details']) else: decisions.fail_workflow_execution( reason='unhandled decision task type; %r' % (event['eventType'],)) # Send the decision response. r = self.conn.respond_decision_task_completed(dtask['taskToken'], decisions=decisions._data, execution_context=None) assert r is None def run_worker(self): """ run one iteration of a simple worker engine """ # Poll for an activity task. tries = 0 while True: atask = self.conn.poll_for_activity_task(self._domain, self._task_list, identity='test worker') if atask.get('activityId') is not None: # This means a real activity task has arrived. break time.sleep(2) tries += 1 if tries > 10: # Give up if it's taking too long. Probably # means something is broken somewhere else. assert False, 'no activity task occurred' # Do the work or catch a "work exception." reason = None try: result = json.dumps(sum(json.loads(atask['input']))) except: reason = 'an exception was raised' details = traceback.format_exc() if reason is None: r = self.conn.respond_activity_task_completed( atask['taskToken'], result) else: r = self.conn.respond_activity_task_failed( atask['taskToken'], reason=reason, details=details) assert r is None def test_workflow_execution(self): # Start a workflow execution whose activity task will succeed. workflow_id = 'wfid-%.2f' % (time.time(),) r = self.conn.start_workflow_execution(self._domain, workflow_id, self._workflow_type_name, self._workflow_type_version, execution_start_to_close_timeout='20', input='[600, 15]') # Need the run_id to lookup the execution history later. run_id = r['runId'] # Move the workflow execution forward by having the # decider schedule an activity task. self.run_decider() # Run the worker to handle the scheduled activity task. self.run_worker() # Complete the workflow execution by having the # decider close it down. self.run_decider() # Check that the result was stored in the execution history. r = self.conn.get_workflow_execution_history(self._domain, run_id, workflow_id, reverse_order=True)['events'][0] result = r['workflowExecutionCompletedEventAttributes']['result'] assert json.loads(result) == 615 def test_failed_workflow_execution(self): # Start a workflow execution whose activity task will fail. workflow_id = 'wfid-%.2f' % (time.time(),) r = self.conn.start_workflow_execution(self._domain, workflow_id, self._workflow_type_name, self._workflow_type_version, execution_start_to_close_timeout='20', input='[600, "s"]') # Need the run_id to lookup the execution history later. run_id = r['runId'] # Move the workflow execution forward by having the # decider schedule an activity task. 
self.run_decider() # Run the worker to handle the scheduled activity task. self.run_worker() # Complete the workflow execution by having the # decider close it down. self.run_decider() # Check that the failure was stored in the execution history. r = self.conn.get_workflow_execution_history(self._domain, run_id, workflow_id, reverse_order=True)['events'][0] reason = r['workflowExecutionFailedEventAttributes']['reason'] assert reason == 'an exception was raised' boto-2.20.1/tests/mturk/000077500000000000000000000000001225267101000150375ustar00rootroot00000000000000boto-2.20.1/tests/mturk/.gitignore000066400000000000000000000000121225267101000170200ustar00rootroot00000000000000local.py boto-2.20.1/tests/mturk/__init__.py000066400000000000000000000000001225267101000171360ustar00rootroot00000000000000boto-2.20.1/tests/mturk/_init_environment.py000066400000000000000000000016461225267101000211460ustar00rootroot00000000000000import os import functools live_connection = False mturk_host = 'mechanicalturk.sandbox.amazonaws.com' external_url = 'http://www.example.com/' SetHostMTurkConnection = None def config_environment(): global SetHostMTurkConnection try: local = os.path.join(os.path.dirname(__file__), 'local.py') execfile(local) except: pass if live_connection: #TODO: you must set the auth credentials to something valid from boto.mturk.connection import MTurkConnection else: # Here the credentials must be set, but it doesn't matter what # they're set to. os.environ.setdefault('AWS_ACCESS_KEY_ID', 'foo') os.environ.setdefault('AWS_SECRET_ACCESS_KEY', 'bar') from mocks import MTurkConnection SetHostMTurkConnection = functools.partial(MTurkConnection, host=mturk_host) boto-2.20.1/tests/mturk/all_tests.py000066400000000000000000000011061225267101000174010ustar00rootroot00000000000000 import unittest import doctest from glob import glob from create_hit_test import * from create_hit_with_qualifications import * from create_hit_external import * from create_hit_with_qualifications import * from hit_persistence import * doctest_suite = doctest.DocFileSuite( *glob('*.doctest'), **{'optionflags': doctest.REPORT_ONLY_FIRST_FAILURE} ) class Program(unittest.TestProgram): def runTests(self, *args, **kwargs): self.test = unittest.TestSuite([self.test, doctest_suite]) super(Program, self).runTests(*args, **kwargs) if __name__ == '__main__': Program() boto-2.20.1/tests/mturk/cleanup_tests.py000066400000000000000000000026671225267101000202750ustar00rootroot00000000000000import itertools from _init_environment import SetHostMTurkConnection from _init_environment import config_environment def description_filter(substring): return lambda hit: substring in hit.Title def disable_hit(hit): return conn.disable_hit(hit.HITId) def dispose_hit(hit): # assignments must be first approved or rejected for assignment in conn.get_assignments(hit.HITId): if assignment.AssignmentStatus == 'Submitted': conn.approve_assignment(assignment.AssignmentId) return conn.dispose_hit(hit.HITId) def cleanup(): """Remove any boto test related HIT's""" config_environment() global conn conn = SetHostMTurkConnection() is_boto = description_filter('Boto') print 'getting hits...' 
all_hits = list(conn.get_all_hits()) is_reviewable = lambda hit: hit.HITStatus == 'Reviewable' is_not_reviewable = lambda hit: not is_reviewable(hit) hits_to_process = filter(is_boto, all_hits) hits_to_disable = filter(is_not_reviewable, hits_to_process) hits_to_dispose = filter(is_reviewable, hits_to_process) print 'disabling/disposing %d/%d hits' % (len(hits_to_disable), len(hits_to_dispose)) map(disable_hit, hits_to_disable) map(dispose_hit, hits_to_dispose) total_hits = len(all_hits) hits_processed = len(hits_to_process) skipped = total_hits - hits_processed fmt = 'Processed: %(total_hits)d HITs, disabled/disposed: %(hits_processed)d, skipped: %(skipped)d' print fmt % vars() if __name__ == '__main__': cleanup() boto-2.20.1/tests/mturk/common.py000066400000000000000000000034231225267101000167030ustar00rootroot00000000000000import unittest import uuid import datetime from boto.mturk.question import ( Question, QuestionContent, AnswerSpecification, FreeTextAnswer, ) from _init_environment import SetHostMTurkConnection, config_environment class MTurkCommon(unittest.TestCase): def setUp(self): config_environment() self.conn = SetHostMTurkConnection() @staticmethod def get_question(): # create content for a question qn_content = QuestionContent() qn_content.append_field('Title', 'Boto no hit type question content') qn_content.append_field('Text', 'What is a boto no hit type?') # create the question specification qn = Question(identifier=str(uuid.uuid4()), content=qn_content, answer_spec=AnswerSpecification(FreeTextAnswer())) return qn @staticmethod def get_hit_params(): return dict( lifetime=datetime.timedelta(minutes=65), max_assignments=2, title='Boto create_hit title', description='Boto create_hit description', keywords=['boto', 'test'], reward=0.23, duration=datetime.timedelta(minutes=6), approval_delay=60*60, annotation='An annotation from boto create_hit test', response_groups=['Minimal', 'HITDetail', 'HITQuestion', 'HITAssignmentSummary',], ) boto-2.20.1/tests/mturk/create_free_text_question_regex.doctest000066400000000000000000000064451225267101000250700ustar00rootroot00000000000000>>> import uuid >>> import datetime >>> from _init_environment import MTurkConnection, mturk_host >>> from boto.mturk.question import Question, QuestionContent, AnswerSpecification, FreeTextAnswer, RegExConstraint >>> conn = MTurkConnection(host=mturk_host) # create content for a question >>> qn_content = QuestionContent() >>> qn_content.append_field('Title', 'Boto no hit type question content') >>> qn_content.append_field('Text', 'What is a boto no hit type?') # create a free text answer that is not quite so free! >>> constraints = [ ... RegExConstraint( ... "^[12][0-9]{3}-[01]?\d-[0-3]?\d$", ... error_text="You must enter a date with the format yyyy-mm-dd.", ... flags='i', ... )] >>> ft_answer = FreeTextAnswer(constraints=constraints, ... default="This is not a valid format") # create the question specification >>> qn = Question(identifier=str(uuid.uuid4()), ... content=qn_content, ... answer_spec=AnswerSpecification(ft_answer)) # now, create the actual HIT for the question without using a HIT type # NOTE - the response_groups are specified to get back additional information for testing >>> keywords=['boto', 'test', 'doctest'] >>> create_hit_rs = conn.create_hit(question=qn, ... lifetime=60*65, ... max_assignments=2, ... title='Boto create_hit title', ... description='Boto create_hit description', ... keywords=keywords, ... reward=0.23, ... duration=60*6, ... approval_delay=60*60, ... 
annotation='An annotation from boto create_hit test', ... response_groups=['Minimal', ... 'HITDetail', ... 'HITQuestion', ... 'HITAssignmentSummary',]) # this is a valid request >>> create_hit_rs.status True # for the requested hit type id # the HIT Type Id is a unicode string >>> len(create_hit_rs) 1 >>> hit = create_hit_rs[0] >>> hit_type_id = hit.HITTypeId >>> hit_type_id # doctest: +ELLIPSIS u'...' >>> hit.MaxAssignments u'2' >>> hit.AutoApprovalDelayInSeconds u'3600' # expiration should be very close to now + the lifetime in seconds >>> expected_datetime = datetime.datetime.utcnow() + datetime.timedelta(seconds=3900) >>> expiration_datetime = datetime.datetime.strptime(hit.Expiration, '%Y-%m-%dT%H:%M:%SZ') >>> delta = expected_datetime - expiration_datetime >>> abs(delta).seconds < 5 True # duration is as specified for the HIT type >>> hit.AssignmentDurationInSeconds u'360' # the reward has been set correctly (allow for float error here) >>> int(float(hit.Amount) * 100) 23 >>> hit.FormattedPrice u'$0.23' # only US currency supported at present >>> hit.CurrencyCode u'USD' # title is the HIT type title >>> hit.Title u'Boto create_hit title' # title is the HIT type description >>> hit.Description u'Boto create_hit description' # annotation is correct >>> hit.RequesterAnnotation u'An annotation from boto create_hit test' >>> hit.HITReviewStatus u'NotReviewed' boto-2.20.1/tests/mturk/create_hit.doctest000066400000000000000000000056531225267101000205460ustar00rootroot00000000000000>>> import uuid >>> import datetime >>> from _init_environment import MTurkConnection, mturk_host >>> from boto.mturk.question import Question, QuestionContent, AnswerSpecification, FreeTextAnswer >>> conn = MTurkConnection(host=mturk_host) # create content for a question >>> qn_content = QuestionContent() >>> qn_content.append_field('Title', 'Boto no hit type question content') >>> qn_content.append_field('Text', 'What is a boto no hit type?') # create the question specification >>> qn = Question(identifier=str(uuid.uuid4()), ... content=qn_content, ... answer_spec=AnswerSpecification(FreeTextAnswer())) # now, create the actual HIT for the question without using a HIT type # NOTE - the response_groups are specified to get back additional information for testing >>> keywords=['boto', 'test', 'doctest'] >>> lifetime = datetime.timedelta(minutes=65) >>> create_hit_rs = conn.create_hit(question=qn, ... lifetime=lifetime, ... max_assignments=2, ... title='Boto create_hit title', ... description='Boto create_hit description', ... keywords=keywords, ... reward=0.23, ... duration=60*6, ... approval_delay=60*60, ... annotation='An annotation from boto create_hit test', ... response_groups=['Minimal', ... 'HITDetail', ... 'HITQuestion', ... 'HITAssignmentSummary',]) # this is a valid request >>> create_hit_rs.status True >>> len(create_hit_rs) 1 >>> hit = create_hit_rs[0] # for the requested hit type id # the HIT Type Id is a unicode string >>> hit_type_id = hit.HITTypeId >>> hit_type_id # doctest: +ELLIPSIS u'...' 
>>> hit.MaxAssignments u'2' >>> hit.AutoApprovalDelayInSeconds u'3600' # expiration should be very close to now + the lifetime >>> expected_datetime = datetime.datetime.utcnow() + lifetime >>> expiration_datetime = datetime.datetime.strptime(hit.Expiration, '%Y-%m-%dT%H:%M:%SZ') >>> delta = expected_datetime - expiration_datetime >>> abs(delta).seconds < 5 True # duration is as specified for the HIT type >>> hit.AssignmentDurationInSeconds u'360' # the reward has been set correctly (allow for float error here) >>> int(float(hit.Amount) * 100) 23 >>> hit.FormattedPrice u'$0.23' # only US currency supported at present >>> hit.CurrencyCode u'USD' # title is the HIT type title >>> hit.Title u'Boto create_hit title' # title is the HIT type description >>> hit.Description u'Boto create_hit description' # annotation is correct >>> hit.RequesterAnnotation u'An annotation from boto create_hit test' >>> hit.HITReviewStatus u'NotReviewed' boto-2.20.1/tests/mturk/create_hit_binary.doctest000066400000000000000000000061121225267101000221010ustar00rootroot00000000000000>>> import uuid >>> import datetime >>> from _init_environment import MTurkConnection, mturk_host >>> from boto.mturk.question import Question, QuestionContent, AnswerSpecification, FreeTextAnswer, Binary >>> conn = MTurkConnection(host=mturk_host) # create content for a question >>> qn_content = QuestionContent() >>> qn_content.append_field('Title','Boto no hit type question content') >>> qn_content.append_field('Text', 'What is a boto binary hit type?') >>> binary_content = Binary('image', 'jpeg', 'http://www.example.com/test1.jpg', alt_text='image is missing') >>> qn_content.append(binary_content) # create the question specification >>> qn = Question(identifier=str(uuid.uuid4()), ... content=qn_content, ... answer_spec=AnswerSpecification(FreeTextAnswer())) # now, create the actual HIT for the question without using a HIT type # NOTE - the response_groups are specified to get back additional information for testing >>> keywords=['boto', 'test', 'doctest'] >>> lifetime = datetime.timedelta(minutes=65) >>> create_hit_rs = conn.create_hit(question=qn, ... lifetime=lifetime, ... max_assignments=2, ... title='Boto create_hit title', ... description='Boto create_hit description', ... keywords=keywords, ... reward=0.23, ... duration=60*6, ... approval_delay=60*60, ... annotation='An annotation from boto create_hit test', ... response_groups=['Minimal', ... 'HITDetail', ... 'HITQuestion', ... 'HITAssignmentSummary',]) # this is a valid request >>> create_hit_rs.status True >>> len(create_hit_rs) 1 >>> hit = create_hit_rs[0] # for the requested hit type id # the HIT Type Id is a unicode string >>> hit_type_id = hit.HITTypeId >>> hit_type_id # doctest: +ELLIPSIS u'...' 
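
# sanity check: the 65 minute lifetime prints as one hour and five minutes
>>> str(lifetime)
'1:05:00'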
>>> hit.MaxAssignments u'2' >>> hit.AutoApprovalDelayInSeconds u'3600' # expiration should be very close to now + the lifetime >>> expected_datetime = datetime.datetime.utcnow() + lifetime >>> expiration_datetime = datetime.datetime.strptime(hit.Expiration, '%Y-%m-%dT%H:%M:%SZ') >>> delta = expected_datetime - expiration_datetime >>> abs(delta).seconds < 5 True # duration is as specified for the HIT type >>> hit.AssignmentDurationInSeconds u'360' # the reward has been set correctly (allow for float error here) >>> int(float(hit.Amount) * 100) 23 >>> hit.FormattedPrice u'$0.23' # only US currency supported at present >>> hit.CurrencyCode u'USD' # title is the HIT type title >>> hit.Title u'Boto create_hit title' # title is the HIT type description >>> hit.Description u'Boto create_hit description' # annotation is correct >>> hit.RequesterAnnotation u'An annotation from boto create_hit test' >>> hit.HITReviewStatus u'NotReviewed' boto-2.20.1/tests/mturk/create_hit_external.py000066400000000000000000000017041225267101000214240ustar00rootroot00000000000000import unittest import uuid import datetime from boto.mturk.question import ExternalQuestion from _init_environment import SetHostMTurkConnection, external_url, \ config_environment class Test(unittest.TestCase): def setUp(self): config_environment() def test_create_hit_external(self): q = ExternalQuestion(external_url=external_url, frame_height=800) conn = SetHostMTurkConnection() keywords=['boto', 'test', 'doctest'] create_hit_rs = conn.create_hit(question=q, lifetime=60*65, max_assignments=2, title="Boto External Question Test", keywords=keywords, reward = 0.05, duration=60*6, approval_delay=60*60, annotation='An annotation from boto external question test', response_groups=['Minimal', 'HITDetail', 'HITQuestion', 'HITAssignmentSummary',]) assert(create_hit_rs.status == True) if __name__ == "__main__": unittest.main() boto-2.20.1/tests/mturk/create_hit_from_hit_type.doctest000066400000000000000000000062561225267101000234760ustar00rootroot00000000000000>>> import uuid >>> import datetime >>> from _init_environment import MTurkConnection, mturk_host >>> from boto.mturk.question import Question, QuestionContent, AnswerSpecification, FreeTextAnswer >>> >>> conn = MTurkConnection(host=mturk_host) >>> keywords=['boto', 'test', 'doctest'] >>> hit_type_rs = conn.register_hit_type('Boto Test HIT type', ... 'HIT Type for testing Boto', ... 0.12, ... 60*6, ... keywords=keywords, ... approval_delay=60*60) # this was a valid request >>> hit_type_rs.status True # the HIT Type Id is a unicode string >>> hit_type_id = hit_type_rs.HITTypeId >>> hit_type_id # doctest: +ELLIPSIS u'...' # create content for a question >>> qn_content = QuestionContent() >>> qn_content.append_field('Title', 'Boto question content create_hit_from_hit_type') >>> qn_content.append_field('Text', 'What is a boto create_hit_from_hit_type?') # create the question specification >>> qn = Question(identifier=str(uuid.uuid4()), ... content=qn_content, ... answer_spec=AnswerSpecification(FreeTextAnswer())) # now, create the actual HIT for the question using the HIT type # NOTE - the response_groups are specified to get back additional information for testing >>> create_hit_rs = conn.create_hit(hit_type=hit_type_rs.HITTypeId, ... question=qn, ... lifetime=60*65, ... max_assignments=2, ... annotation='An annotation from boto create_hit_from_hit_type test', ... response_groups=['Minimal', ... 'HITDetail', ... 'HITQuestion', ... 
'HITAssignmentSummary',]) # this is a valid request >>> create_hit_rs.status True >>> len(create_hit_rs) 1 >>> hit = create_hit_rs[0] # for the requested hit type id >>> hit.HITTypeId == hit_type_id True # with the correct number of maximum assignments >>> hit.MaxAssignments u'2' # and the approval delay >>> hit.AutoApprovalDelayInSeconds u'3600' # expiration should be very close to now + the lifetime in seconds >>> expected_datetime = datetime.datetime.utcnow() + datetime.timedelta(seconds=3900) >>> expiration_datetime = datetime.datetime.strptime(hit.Expiration, '%Y-%m-%dT%H:%M:%SZ') >>> delta = expected_datetime - expiration_datetime >>> abs(delta).seconds < 5 True # duration is as specified for the HIT type >>> hit.AssignmentDurationInSeconds u'360' # the reward has been set correctly >>> float(hit.Amount) == 0.12 True >>> hit.FormattedPrice u'$0.12' # only US currency supported at present >>> hit.CurrencyCode u'USD' # title is the HIT type title >>> hit.Title u'Boto Test HIT type' # title is the HIT type description >>> hit.Description u'HIT Type for testing Boto' # annotation is correct >>> hit.RequesterAnnotation u'An annotation from boto create_hit_from_hit_type test' # not reviewed yet >>> hit.HITReviewStatus u'NotReviewed' boto-2.20.1/tests/mturk/create_hit_test.py000066400000000000000000000010151225267101000205540ustar00rootroot00000000000000import unittest import os from boto.mturk.question import QuestionForm from common import MTurkCommon class TestHITCreation(MTurkCommon): def testCallCreateHitWithOneQuestion(self): create_hit_rs = self.conn.create_hit( question=self.get_question(), **self.get_hit_params() ) def testCallCreateHitWithQuestionForm(self): create_hit_rs = self.conn.create_hit( questions=QuestionForm([self.get_question()]), **self.get_hit_params() ) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/mturk/create_hit_with_qualifications.py000066400000000000000000000016621225267101000236530ustar00rootroot00000000000000from boto.mturk.connection import MTurkConnection from boto.mturk.question import ExternalQuestion from boto.mturk.qualification import Qualifications, PercentAssignmentsApprovedRequirement def test(): q = ExternalQuestion(external_url="http://websort.net/s/F3481C", frame_height=800) conn = MTurkConnection(host='mechanicalturk.sandbox.amazonaws.com') keywords=['boto', 'test', 'doctest'] qualifications = Qualifications() qualifications.add(PercentAssignmentsApprovedRequirement(comparator="GreaterThan", integer_value="95")) create_hit_rs = conn.create_hit(question=q, lifetime=60*65, max_assignments=2, title="Boto External Question Test", keywords=keywords, reward = 0.05, duration=60*6, approval_delay=60*60, annotation='An annotation from boto external question test', qualifications=qualifications) assert(create_hit_rs.status == True) print create_hit_rs.HITTypeId if __name__ == "__main__": test() boto-2.20.1/tests/mturk/hit_persistence.py000066400000000000000000000014051225267101000206010ustar00rootroot00000000000000import unittest import pickle from common import MTurkCommon class TestHITPersistence(MTurkCommon): def create_hit_result(self): return self.conn.create_hit( question=self.get_question(), **self.get_hit_params() ) def test_pickle_hit_result(self): result = self.create_hit_result() new_result = pickle.loads(pickle.dumps(result)) def test_pickle_deserialized_version(self): """ It seems the technique used to store and reload the object must result in an equivalent object, or subsequent pickles may fail. 
This tests a double-pickle to elicit that error. """ result = self.create_hit_result() new_result = pickle.loads(pickle.dumps(result)) pickle.dumps(new_result) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/mturk/mocks.py000066400000000000000000000006561225267101000165340ustar00rootroot00000000000000from boto.mturk.connection import MTurkConnection as RealMTurkConnection class MTurkConnection(RealMTurkConnection): """ Mock MTurkConnection that doesn't connect, but instead just prepares the request and captures information about its usage. """ def _process_request(self, *args, **kwargs): saved_args = self.__dict__.setdefault('_mock_saved_args', dict()) saved_args['_process_request'] = (args, kwargs) boto-2.20.1/tests/mturk/reviewable_hits.doctest000066400000000000000000000075701225267101000216130ustar00rootroot00000000000000>>> import uuid >>> import datetime >>> from _init_environment import MTurkConnection, mturk_host >>> from boto.mturk.question import Question, QuestionContent, AnswerSpecification, FreeTextAnswer >>> conn = MTurkConnection(host=mturk_host) # create content for a question >>> qn_content = QuestionContent() >>> qn_content.append_field('Title', 'Boto no hit type question content') >>> qn_content.append_field('Text', 'What is a boto no hit type?') # create the question specification >>> qn = Question(identifier=str(uuid.uuid4()), ... content=qn_content, ... answer_spec=AnswerSpecification(FreeTextAnswer())) # now, create the actual HIT for the question without using a HIT type # NOTE - the response_groups are specified to get back additional information for testing >>> keywords=['boto', 'test', 'doctest'] >>> create_hit_rs = conn.create_hit(question=qn, ... lifetime=60*65, ... max_assignments=1, ... title='Boto Hit to be Reviewed', ... description='Boto reviewable_hits description', ... keywords=keywords, ... reward=0.23, ... duration=60*6, ... approval_delay=60*60, ... annotation='An annotation from boto create_hit test', ... response_groups=['Minimal', ... 'HITDetail', ... 'HITQuestion', ... 'HITAssignmentSummary',]) # this is a valid request >>> create_hit_rs.status True >>> len(create_hit_rs) 1 >>> hit = create_hit_rs[0] # for the requested hit type id # the HIT Type Id is a unicode string >>> hit_type_id = hit.HITTypeId >>> hit_type_id # doctest: +ELLIPSIS u'...' 
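
# sanity check: the HIT above was created with lifetime=60*65, i.e.
# 65 minutes expressed in seconds
>>> 60*65
3900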
>>> from selenium_support import complete_hit, has_selenium
>>> if has_selenium(): complete_hit(hit_type_id, response='reviewable_hits_test')
>>> import time

Give Mechanical Turk some time to process the HIT
>>> if has_selenium(): time.sleep(10)

# should have some reviewable HITs returned, especially if returning all HIT types
# NOTE: but only if your account has existing HITs in the reviewable state
>>> reviewable_rs = conn.get_reviewable_hits()

# this is a valid request
>>> reviewable_rs.status
True

>>> len(reviewable_rs) >= 1
True

# should contain at least one HIT object
>>> reviewable_rs # doctest: +ELLIPSIS
[<boto.mturk.connection.HIT object at ...]

>>> hit_id = reviewable_rs[0].HITId

# check that we can retrieve the assignments for a HIT
>>> assignments_rs = conn.get_assignments(hit_id)

# this is a valid request
>>> assignments_rs.status
True

>>> int(assignments_rs.NumResults) >= 1
True

>>> len(assignments_rs) == int(assignments_rs.NumResults)
True

>>> assignments_rs.PageNumber
u'1'

>>> assignments_rs.TotalNumResults >= 1
True

# should contain at least one Assignment object
>>> assignments_rs # doctest: +ELLIPSIS
[<boto.mturk.connection.Assignment object at ...]

>>> assignment = assignments_rs[0]

>>> assignment.HITId == hit_id
True

# should have a valid status
>>> assignment.AssignmentStatus in ['Submitted', 'Approved', 'Rejected']
True

# should have returned at least one answer
>>> len(assignment.answers) > 0
True

# should contain at least one set of QuestionFormAnswer objects
>>> assignment.answers # doctest: +ELLIPSIS
[[<boto.mturk.connection.QuestionFormAnswer object at ...]]

>>> answer = assignment.answers[0][0]

# the answer should have exactly one field
>>> len(answer.fields)
1

>>> qid, text = answer.fields[0]

>>> text # doctest: +ELLIPSIS
u'...'

# question identifier should be a unicode string
>>> qid # doctest: +ELLIPSIS
u'...'
boto-2.20.1/tests/mturk/run-doctest.py000066400000000000000000000004061225267101000176600ustar00rootroot00000000000000import argparse
import doctest

parser = argparse.ArgumentParser(
    description="Run a test by name"
)
parser.add_argument('test_name')

args = parser.parse_args()

doctest.testfile(
    args.test_name,
    optionflags=doctest.REPORT_ONLY_FIRST_FAILURE
)
boto-2.20.1/tests/mturk/search_hits.doctest000066400000000000000000000006161225267101000207250ustar00rootroot00000000000000>>> from _init_environment import MTurkConnection, mturk_host
>>> conn = MTurkConnection(host=mturk_host)

# should have some HITs returned by a search (but only if your account has existing HITs)
>>> search_rs = conn.search_hits()

# this is a valid request
>>> search_rs.status
True

>>> len(search_rs) > 1
True

>>> search_rs # doctest: +ELLIPSIS
[<boto.mturk.connection.HIT object at ...]
boto-2.20.1/tests/mturk/support.py
import sys

if sys.version_info >= (2, 7):
    import unittest
else:
    import unittest2 as unittest
boto-2.20.1/tests/mturk/test_disable_hit.py000066400000000000000000000005041225267101000207160ustar00rootroot00000000000000from tests.mturk.support import unittest
from common import MTurkCommon
from boto.mturk.connection import MTurkRequestError

class TestDisableHITs(MTurkCommon):
    def test_disable_invalid_hit(self):
        self.assertRaises(MTurkRequestError, self.conn.disable_hit, 'foo')

if __name__ == '__main__':
    unittest.main()
boto-2.20.1/tests/test.py000077500000000000000000000046641225267101000152370ustar00rootroot00000000000000#!/usr/bin/env python
# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

import logging
import sys
import unittest
from nose.core import run
import argparse


def main():
    description = ("Runs boto unit and/or integration tests. "
                   "Arguments will be passed on to nosetests. "
                   "See nosetests --help for more information.")
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument('-t', '--service-tests', action="append", default=[],
                        help="Run tests for a given service. This will "
                        "run any test tagged with the specified value, "
                        "e.g. -t s3 -t ec2")
    known_args, remaining_args = parser.parse_known_args()
    attribute_args = []
    for service_attribute in known_args.service_tests:
        attribute_args.extend(['-a', '!notdefault,' + service_attribute])
    if not attribute_args:
        # If the user did not specify any filtering criteria, we at least
        # will filter out any test tagged 'notdefault'.
        attribute_args = ['-a', '!notdefault']
    all_args = [__file__] + attribute_args + remaining_args
    print "nose command:", ' '.join(all_args)
    if run(argv=all_args):
        # run will return True if all the tests pass. We want
        # this to equal a 0 rc
        return 0
    else:
        return 1


if __name__ == "__main__":
    sys.exit(main())
boto-2.20.1/tests/unit/000077500000000000000000000000001225267101000146545ustar00rootroot00000000000000boto-2.20.1/tests/unit/__init__.py000066400000000000000000000060221225267101000167650ustar00rootroot00000000000000try:
    import unittest2 as unittest
except ImportError:
    import unittest

import httplib

from mock import Mock


class AWSMockServiceTestCase(unittest.TestCase):
    """Base class for mocking AWS services."""
    # This param is used by the unittest module to display a full
    # diff when assert*Equal methods produce an error message.
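    # (For example, assertDictEqual on a large request-parameter dict
    # will then print the complete diff instead of truncating it.)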
maxDiff = None connection_class = None def setUp(self): self.https_connection = Mock(spec=httplib.HTTPSConnection) self.https_connection_factory = ( Mock(return_value=self.https_connection), ()) self.service_connection = self.create_service_connection( https_connection_factory=self.https_connection_factory, aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') self.initialize_service_connection() def initialize_service_connection(self): self.actual_request = None self.original_mexe = self.service_connection._mexe self.service_connection._mexe = self._mexe_spy def create_service_connection(self, **kwargs): if self.connection_class is None: raise ValueError("The connection_class class attribute must be " "set to a non-None value.") return self.connection_class(**kwargs) def _mexe_spy(self, request, *args, **kwargs): self.actual_request = request return self.original_mexe(request, *args, **kwargs) def create_response(self, status_code, reason='', header=[], body=None): if body is None: body = self.default_body() response = Mock(spec=httplib.HTTPResponse) response.status = status_code response.read.return_value = body response.reason = reason response.getheaders.return_value = header response.msg = dict(header) def overwrite_header(arg, default=None): header_dict = dict(header) if header_dict.has_key(arg): return header_dict[arg] else: return default response.getheader.side_effect = overwrite_header return response def assert_request_parameters(self, params, ignore_params_values=None): """Verify the actual parameters sent to the service API.""" request_params = self.actual_request.params.copy() if ignore_params_values is not None: for param in ignore_params_values: # We still want to check that the ignore_params_values params # are in the request parameters, we just don't need to check # their value. self.assertIn(param, request_params) del request_params[param] self.assertDictEqual(request_params, params) def set_http_response(self, status_code, reason='', header=[], body=None): http_response = self.create_response(status_code, reason, header, body) self.https_connection.getresponse.return_value = http_response def default_body(self): return '' boto-2.20.1/tests/unit/auth/000077500000000000000000000000001225267101000156155ustar00rootroot00000000000000boto-2.20.1/tests/unit/auth/__init__.py000066400000000000000000000000001225267101000177140ustar00rootroot00000000000000boto-2.20.1/tests/unit/auth/test_query.py000066400000000000000000000062211225267101000203740ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import copy from mock import Mock from tests.unit import unittest from boto.auth import QueryAuthHandler from boto.connection import HTTPRequest class TestQueryAuthHandler(unittest.TestCase): def setUp(self): self.provider = Mock() self.provider.access_key = 'access_key' self.provider.secret_key = 'secret_key' self.request = HTTPRequest( method='GET', protocol='https', host='sts.amazonaws.com', port=443, path='/', auth_path=None, params={ 'Action': 'AssumeRoleWithWebIdentity', 'Version': '2011-06-15', 'RoleSessionName': 'web-identity-federation', 'ProviderId': '2012-06-01', 'WebIdentityToken': 'Atza|IQEBLjAsAhRkcxQ', }, headers={}, body='' ) def test_escape_value(self): auth = QueryAuthHandler('sts.amazonaws.com', Mock(), self.provider) # This should **NOT** get escaped. value = auth._escape_value('Atza|IQEBLjAsAhRkcxQ') self.assertEqual(value, 'Atza|IQEBLjAsAhRkcxQ') def test_build_query_string(self): auth = QueryAuthHandler('sts.amazonaws.com', Mock(), self.provider) query_string = auth._build_query_string(self.request.params) self.assertEqual(query_string, 'Action=AssumeRoleWithWebIdentity' + \ '&ProviderId=2012-06-01&RoleSessionName=web-identity-federation' + \ '&Version=2011-06-15&WebIdentityToken=Atza|IQEBLjAsAhRkcxQ') def test_add_auth(self): auth = QueryAuthHandler('sts.amazonaws.com', Mock(), self.provider) req = copy.copy(self.request) auth.add_auth(req) self.assertEqual(req.path, '/?Action=AssumeRoleWithWebIdentity' + \ '&ProviderId=2012-06-01&RoleSessionName=web-identity-federation' + \ '&Version=2011-06-15&WebIdentityToken=Atza|IQEBLjAsAhRkcxQ') boto-2.20.1/tests/unit/auth/test_sigv4.py000066400000000000000000000230521225267101000202640ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# from mock import Mock from tests.unit import unittest from boto.auth import HmacAuthV4Handler from boto.connection import HTTPRequest class TestSigV4Handler(unittest.TestCase): def setUp(self): self.provider = Mock() self.provider.access_key = 'access_key' self.provider.secret_key = 'secret_key' self.request = HTTPRequest( 'POST', 'https', 'glacier.us-east-1.amazonaws.com', 443, '/-/vaults/foo/archives', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') def test_inner_whitespace_is_collapsed(self): auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com', Mock(), self.provider) self.request.headers['x-amz-archive-description'] = 'two spaces' headers = auth.headers_to_sign(self.request) self.assertEqual(headers, {'Host': 'glacier.us-east-1.amazonaws.com', 'x-amz-archive-description': 'two spaces', 'x-amz-glacier-version': '2012-06-01'}) # Note the single space between the "two spaces". self.assertEqual(auth.canonical_headers(headers), 'host:glacier.us-east-1.amazonaws.com\n' 'x-amz-archive-description:two spaces\n' 'x-amz-glacier-version:2012-06-01') def test_canonical_query_string(self): auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443, '/-/vaults/foo/archives', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') request.params['Foo.1'] = 'aaa' request.params['Foo.10'] = 'zzz' query_string = auth.canonical_query_string(request) self.assertEqual(query_string, 'Foo.1=aaa&Foo.10=zzz') def test_canonical_uri(self): auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443, 'x/./././x .html', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') canonical_uri = auth.canonical_uri(request) # This should be both normalized & urlencoded. self.assertEqual(canonical_uri, 'x/x%20.html') auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443, 'x/./././x/html/', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') canonical_uri = auth.canonical_uri(request) # Trailing slashes should be preserved. self.assertEqual(canonical_uri, 'x/x/html/') request = HTTPRequest( 'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443, '/', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') canonical_uri = auth.canonical_uri(request) # There should not be two-slashes. 
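        # (i.e. the root resource must stay exactly '/'; the normalizer
        # must not drop or double the root slash)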
self.assertEqual(canonical_uri, '/') # Make sure Windows-style slashes are converted properly request = HTTPRequest( 'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443, '\\x\\x.html', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') canonical_uri = auth.canonical_uri(request) self.assertEqual(canonical_uri, '/x/x.html') def test_credential_scope(self): # test the AWS standard regions IAM endpoint auth = HmacAuthV4Handler('iam.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'POST', 'https', 'iam.amazonaws.com', 443, '/', '/', {'Action': 'ListAccountAliases', 'Version': '2010-05-08'}, { 'Content-Length': '44', 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', 'X-Amz-Date': '20130808T013210Z' }, 'Action=ListAccountAliases&Version=2010-05-08') credential_scope = auth.credential_scope(request) region_name = credential_scope.split('/')[1] self.assertEqual(region_name, 'us-east-1') # test the AWS GovCloud region IAM endpoint auth = HmacAuthV4Handler('iam.us-gov.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'POST', 'https', 'iam.us-gov.amazonaws.com', 443, '/', '/', {'Action': 'ListAccountAliases', 'Version': '2010-05-08'}, { 'Content-Length': '44', 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', 'X-Amz-Date': '20130808T013210Z' }, 'Action=ListAccountAliases&Version=2010-05-08') credential_scope = auth.credential_scope(request) region_name = credential_scope.split('/')[1] self.assertEqual(region_name, 'us-gov-west-1') # iam.us-west-1.amazonaws.com does not exist however this # covers the remaining region_name control structure for a # different region name auth = HmacAuthV4Handler('iam.us-west-1.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'POST', 'https', 'iam.us-west-1.amazonaws.com', 443, '/', '/', {'Action': 'ListAccountAliases', 'Version': '2010-05-08'}, { 'Content-Length': '44', 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', 'X-Amz-Date': '20130808T013210Z' }, 'Action=ListAccountAliases&Version=2010-05-08') credential_scope = auth.credential_scope(request) region_name = credential_scope.split('/')[1] self.assertEqual(region_name, 'us-west-1') # Test connections to custom locations, e.g. localhost:8080 auth = HmacAuthV4Handler('localhost', Mock(), self.provider, service_name='iam') request = HTTPRequest( 'POST', 'http', 'localhost', 8080, '/', '/', {'Action': 'ListAccountAliases', 'Version': '2010-05-08'}, { 'Content-Length': '44', 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', 'X-Amz-Date': '20130808T013210Z' }, 'Action=ListAccountAliases&Version=2010-05-08') credential_scope = auth.credential_scope(request) timestamp, region, service, v = credential_scope.split('/') self.assertEqual(region, 'localhost') self.assertEqual(service, 'iam') def test_headers_to_sign(self): auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com', Mock(), self.provider) request = HTTPRequest( 'GET', 'http', 'glacier.us-east-1.amazonaws.com', 80, 'x/./././x .html', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') headers = auth.headers_to_sign(request) # Port 80 & not secure excludes the port. self.assertEqual(headers['Host'], 'glacier.us-east-1.amazonaws.com') request = HTTPRequest( 'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443, 'x/./././x .html', None, {}, {'x-amz-glacier-version': '2012-06-01'}, '') headers = auth.headers_to_sign(request) # SSL port excludes the port. 
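        # (the default port, 443 on https here and 80 on http above, is
        # left out of the signed Host header; only a non-default port
        # such as 8080 below is included)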
        self.assertEqual(headers['Host'], 'glacier.us-east-1.amazonaws.com')

        request = HTTPRequest(
            'GET', 'https', 'glacier.us-east-1.amazonaws.com', 8080,
            'x/./././x .html', None, {},
            {'x-amz-glacier-version': '2012-06-01'}, '')
        headers = auth.headers_to_sign(request)
        # URL should include port.
        self.assertEqual(headers['Host'],
                         'glacier.us-east-1.amazonaws.com:8080')

    def test_region_and_service_can_be_overriden(self):
        auth = HmacAuthV4Handler('queue.amazonaws.com',
                                 Mock(), self.provider)
        self.request.headers['X-Amz-Date'] = '20121121000000'
        auth.region_name = 'us-west-2'
        auth.service_name = 'sqs'
        scope = auth.credential_scope(self.request)
        self.assertEqual(scope, '20121121/us-west-2/sqs/aws4_request')
boto-2.20.1/tests/unit/beanstalk/000077500000000000000000000000001225267101000166205ustar00rootroot00000000000000boto-2.20.1/tests/unit/beanstalk/__init__.py000066400000000000000000000000001225267101000207170ustar00rootroot00000000000000boto-2.20.1/tests/unit/beanstalk/test_layer1.py000066400000000000000000000125051225267101000214310ustar00rootroot00000000000000#!/usr/bin/env python
import json

from tests.unit import AWSMockServiceTestCase
from boto.beanstalk.layer1 import Layer1


# These tests are just checking the basic structure of
# the Elastic Beanstalk code, by picking a few calls
# and verifying we get the expected results with mocked
# responses. The integration tests actually verify the
# API calls interact with the service correctly.
class TestListAvailableSolutionStacks(AWSMockServiceTestCase):
    connection_class = Layer1

    def default_body(self):
        return json.dumps(
            {u'ListAvailableSolutionStacksResponse':
              {u'ListAvailableSolutionStacksResult':
                {u'SolutionStackDetails': [
                  {u'PermittedFileTypes': [u'war', u'zip'],
                   u'SolutionStackName': u'32bit Amazon Linux running Tomcat 7'},
                  {u'PermittedFileTypes': [u'zip'],
                   u'SolutionStackName': u'32bit Amazon Linux running PHP 5.3'}],
                 u'SolutionStacks': [u'32bit Amazon Linux running Tomcat 7',
                                     u'32bit Amazon Linux running PHP 5.3']},
               u'ResponseMetadata': {u'RequestId': u'request_id'}}})

    def test_list_available_solution_stacks(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.list_available_solution_stacks()
        stack_details = api_response['ListAvailableSolutionStacksResponse']\
                                    ['ListAvailableSolutionStacksResult']\
                                    ['SolutionStackDetails']
        solution_stacks = api_response['ListAvailableSolutionStacksResponse']\
                                      ['ListAvailableSolutionStacksResult']\
                                      ['SolutionStacks']
        self.assertEqual(solution_stacks,
                         [u'32bit Amazon Linux running Tomcat 7',
                          u'32bit Amazon Linux running PHP 5.3'])
        # These are the parameters that are actually sent to the Elastic
        # Beanstalk service.
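        # assert_request_parameters, defined on AWSMockServiceTestCase in
        # tests/unit/__init__.py, diffs this expected dict against the
        # params captured by the _mexe spy, so any missing or extra
        # parameter fails the test.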
self.assert_request_parameters({ 'Action': 'ListAvailableSolutionStacks', 'ContentType': 'JSON', 'Version': '2010-12-01', }) class TestCreateApplicationVersion(AWSMockServiceTestCase): connection_class = Layer1 def default_body(self): return json.dumps({ 'CreateApplicationVersionResponse': {u'CreateApplicationVersionResult': {u'ApplicationVersion': {u'ApplicationName': u'application1', u'DateCreated': 1343067094.342, u'DateUpdated': 1343067094.342, u'Description': None, u'SourceBundle': {u'S3Bucket': u'elasticbeanstalk-us-east-1', u'S3Key': u'resources/elasticbeanstalk-sampleapp.war'}, u'VersionLabel': u'version1'}}}}) def test_create_application_version(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_application_version( 'application1', 'version1', s3_bucket='mybucket', s3_key='mykey', auto_create_application=True) app_version = api_response['CreateApplicationVersionResponse']\ ['CreateApplicationVersionResult']\ ['ApplicationVersion'] self.assert_request_parameters({ 'Action': 'CreateApplicationVersion', 'ContentType': 'JSON', 'Version': '2010-12-01', 'ApplicationName': 'application1', 'AutoCreateApplication': 'true', 'SourceBundle.S3Bucket': 'mybucket', 'SourceBundle.S3Key': 'mykey', 'VersionLabel': 'version1', }) self.assertEqual(app_version['ApplicationName'], 'application1') self.assertEqual(app_version['VersionLabel'], 'version1') class TestCreateEnvironment(AWSMockServiceTestCase): connection_class = Layer1 def default_body(self): return json.dumps({}) def test_create_environment(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_environment( 'application1', 'environment1', 'version1', '32bit Amazon Linux running Tomcat 7', option_settings=[ ('aws:autoscaling:launchconfiguration', 'Ec2KeyName', 'mykeypair'), ('aws:elasticbeanstalk:application:environment', 'ENVVAR', 'VALUE1')]) self.assert_request_parameters({ 'Action': 'CreateEnvironment', 'ApplicationName': 'application1', 'EnvironmentName': 'environment1', 'TemplateName': '32bit Amazon Linux running Tomcat 7', 'ContentType': 'JSON', 'Version': '2010-12-01', 'VersionLabel': 'version1', 'OptionSettings.member.1.Namespace': 'aws:autoscaling:launchconfiguration', 'OptionSettings.member.1.OptionName': 'Ec2KeyName', 'OptionSettings.member.1.Value': 'mykeypair', 'OptionSettings.member.2.Namespace': 'aws:elasticbeanstalk:application:environment', 'OptionSettings.member.2.OptionName': 'ENVVAR', 'OptionSettings.member.2.Value': 'VALUE1', }) boto-2.20.1/tests/unit/cloudformation/000077500000000000000000000000001225267101000177015ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudformation/__init__.py000066400000000000000000000000001225267101000220000ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudformation/test_connection.py000077500000000000000000000631331225267101000234620ustar00rootroot00000000000000#!/usr/bin/env python import unittest import httplib from datetime import datetime try: import json except ImportError: import simplejson as json from mock import Mock from tests.unit import AWSMockServiceTestCase from boto.cloudformation.connection import CloudFormationConnection SAMPLE_TEMPLATE = r""" { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Sample template", "Parameters" : { "KeyName" : { "Description" : "key pair", "Type" : "String" } }, "Resources" : { "Ec2Instance" : { "Type" : "AWS::EC2::Instance", "Properties" : { "KeyName" : { "Ref" : "KeyName" }, "ImageId" : "ami-7f418316", "UserData" : { "Fn::Base64" : "80" } 
      }
    }
  },
  "Outputs" : {
    "InstanceId" : {
      "Description" : "InstanceId of the newly created EC2 instance",
      "Value" : { "Ref" : "Ec2Instance" }
    }
  }
}
"""


class CloudFormationConnectionBase(AWSMockServiceTestCase):
    connection_class = CloudFormationConnection

    def setUp(self):
        super(CloudFormationConnectionBase, self).setUp()
        self.stack_id = u'arn:aws:cloudformation:us-east-1:18:stack/Name/id'


class TestCloudFormationCreateStack(CloudFormationConnectionBase):
    def default_body(self):
        return json.dumps(
            {u'CreateStackResponse':
                 {u'CreateStackResult': {u'StackId': self.stack_id},
                  u'ResponseMetadata': {u'RequestId': u'1'}}})

    def test_create_stack_has_correct_request_params(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_stack(
            'stack_name', template_url='http://url',
            template_body=SAMPLE_TEMPLATE,
            parameters=[('KeyName', 'myKeyName')],
            tags={'TagKey': 'TagValue'},
            notification_arns=['arn:notify1', 'arn:notify2'],
            disable_rollback=True,
            timeout_in_minutes=20, capabilities=['CAPABILITY_IAM']
        )
        self.assertEqual(api_response, self.stack_id)
        # These are the parameters that are actually sent to the
        # CloudFormation service.
        self.assert_request_parameters({
            'Action': 'CreateStack',
            'Capabilities.member.1': 'CAPABILITY_IAM',
            'ContentType': 'JSON',
            'DisableRollback': 'true',
            'NotificationARNs.member.1': 'arn:notify1',
            'NotificationARNs.member.2': 'arn:notify2',
            'Parameters.member.1.ParameterKey': 'KeyName',
            'Parameters.member.1.ParameterValue': 'myKeyName',
            'Tags.member.1.Key': 'TagKey',
            'Tags.member.1.Value': 'TagValue',
            'StackName': 'stack_name',
            'Version': '2010-05-15',
            'TimeoutInMinutes': 20,
            'TemplateBody': SAMPLE_TEMPLATE,
            'TemplateURL': 'http://url',
        })

    # The test_create_stack_has_correct_request_params verifies all of the
    # params needed when making a create_stack service call. The rest of the
    # tests for create_stack only verify specific parts of the params sent
    # to CloudFormation.

    def test_create_stack_with_minimum_args(self):
        # This will fail in practice, but the API docs only require stack_name.
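        # (the HTTP layer is mocked via set_http_response below, so the
        # missing template is never actually rejected by a real service)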
self.set_http_response(status_code=200) api_response = self.service_connection.create_stack('stack_name') self.assertEqual(api_response, self.stack_id) self.assert_request_parameters({ 'Action': 'CreateStack', 'ContentType': 'JSON', 'DisableRollback': 'false', 'StackName': 'stack_name', 'Version': '2010-05-15', }) def test_create_stack_fails(self): self.set_http_response(status_code=400, reason='Bad Request', body='Invalid arg.') with self.assertRaises(self.service_connection.ResponseError): api_response = self.service_connection.create_stack( 'stack_name', template_body=SAMPLE_TEMPLATE, parameters=[('KeyName', 'myKeyName')]) class TestCloudFormationUpdateStack(CloudFormationConnectionBase): def default_body(self): return json.dumps( {u'UpdateStackResponse': {u'UpdateStackResult': {u'StackId': self.stack_id}, u'ResponseMetadata': {u'RequestId': u'1'}}}) def test_update_stack_all_args(self): self.set_http_response(status_code=200) api_response = self.service_connection.update_stack( 'stack_name', template_url='http://url', template_body=SAMPLE_TEMPLATE, parameters=[('KeyName', 'myKeyName')], tags={'TagKey': 'TagValue'}, notification_arns=['arn:notify1', 'arn:notify2'], disable_rollback=True, timeout_in_minutes=20 ) self.assert_request_parameters({ 'Action': 'UpdateStack', 'ContentType': 'JSON', 'DisableRollback': 'true', 'NotificationARNs.member.1': 'arn:notify1', 'NotificationARNs.member.2': 'arn:notify2', 'Parameters.member.1.ParameterKey': 'KeyName', 'Parameters.member.1.ParameterValue': 'myKeyName', 'Tags.member.1.Key': 'TagKey', 'Tags.member.1.Value': 'TagValue', 'StackName': 'stack_name', 'Version': '2010-05-15', 'TimeoutInMinutes': 20, 'TemplateBody': SAMPLE_TEMPLATE, 'TemplateURL': 'http://url', }) def test_update_stack_with_minimum_args(self): self.set_http_response(status_code=200) api_response = self.service_connection.update_stack('stack_name') self.assertEqual(api_response, self.stack_id) self.assert_request_parameters({ 'Action': 'UpdateStack', 'ContentType': 'JSON', 'DisableRollback': 'false', 'StackName': 'stack_name', 'Version': '2010-05-15', }) def test_update_stack_fails(self): self.set_http_response(status_code=400, reason='Bad Request', body='Invalid arg.') with self.assertRaises(self.service_connection.ResponseError): api_response = self.service_connection.update_stack( 'stack_name', template_body=SAMPLE_TEMPLATE, parameters=[('KeyName', 'myKeyName')]) class TestCloudFormationDeleteStack(CloudFormationConnectionBase): def default_body(self): return json.dumps( {u'DeleteStackResponse': {u'ResponseMetadata': {u'RequestId': u'1'}}}) def test_delete_stack(self): self.set_http_response(status_code=200) api_response = self.service_connection.delete_stack('stack_name') self.assertEqual(api_response, json.loads(self.default_body())) self.assert_request_parameters({ 'Action': 'DeleteStack', 'ContentType': 'JSON', 'StackName': 'stack_name', 'Version': '2010-05-15', }) def test_delete_stack_fails(self): self.set_http_response(status_code=400) with self.assertRaises(self.service_connection.ResponseError): api_response = self.service_connection.delete_stack('stack_name') class TestCloudFormationDescribeStackResource(CloudFormationConnectionBase): def default_body(self): return json.dumps('fake server response') def test_describe_stack_resource(self): self.set_http_response(status_code=200) api_response = self.service_connection.describe_stack_resource( 'stack_name', 'resource_id') self.assertEqual(api_response, 'fake server response') self.assert_request_parameters({ 'Action': 
'DescribeStackResource', 'ContentType': 'JSON', 'LogicalResourceId': 'resource_id', 'StackName': 'stack_name', 'Version': '2010-05-15', }) def test_describe_stack_resource_fails(self): self.set_http_response(status_code=400) with self.assertRaises(self.service_connection.ResponseError): api_response = self.service_connection.describe_stack_resource( 'stack_name', 'resource_id') class TestCloudFormationGetTemplate(CloudFormationConnectionBase): def default_body(self): return json.dumps('fake server response') def test_get_template(self): self.set_http_response(status_code=200) api_response = self.service_connection.get_template('stack_name') self.assertEqual(api_response, 'fake server response') self.assert_request_parameters({ 'Action': 'GetTemplate', 'ContentType': 'JSON', 'StackName': 'stack_name', 'Version': '2010-05-15', }) def test_get_template_fails(self): self.set_http_response(status_code=400) with self.assertRaises(self.service_connection.ResponseError): api_response = self.service_connection.get_template('stack_name') class TestCloudFormationGetStackevents(CloudFormationConnectionBase): def default_body(self): return """ Event-1-Id arn:aws:cfn:us-east-1:1:stack MyStack MyStack MyStack_One AWS::CloudFormation::Stack 2010-07-27T22:26:28Z CREATE_IN_PROGRESS User initiated Event-2-Id arn:aws:cfn:us-east-1:1:stack MyStack MySG1 MyStack_SG1 AWS::SecurityGroup 2010-07-27T22:28:28Z CREATE_COMPLETE """ def test_describe_stack_events(self): self.set_http_response(status_code=200) first, second = self.service_connection.describe_stack_events('stack_name', next_token='next_token') self.assertEqual(first.event_id, 'Event-1-Id') self.assertEqual(first.logical_resource_id, 'MyStack') self.assertEqual(first.physical_resource_id, 'MyStack_One') self.assertEqual(first.resource_properties, None) self.assertEqual(first.resource_status, 'CREATE_IN_PROGRESS') self.assertEqual(first.resource_status_reason, 'User initiated') self.assertEqual(first.resource_type, 'AWS::CloudFormation::Stack') self.assertEqual(first.stack_id, 'arn:aws:cfn:us-east-1:1:stack') self.assertEqual(first.stack_name, 'MyStack') self.assertIsNotNone(first.timestamp) self.assertEqual(second.event_id, 'Event-2-Id') self.assertEqual(second.logical_resource_id, 'MySG1') self.assertEqual(second.physical_resource_id, 'MyStack_SG1') self.assertEqual(second.resource_properties, None) self.assertEqual(second.resource_status, 'CREATE_COMPLETE') self.assertEqual(second.resource_status_reason, None) self.assertEqual(second.resource_type, 'AWS::SecurityGroup') self.assertEqual(second.stack_id, 'arn:aws:cfn:us-east-1:1:stack') self.assertEqual(second.stack_name, 'MyStack') self.assertIsNotNone(second.timestamp) self.assert_request_parameters({ 'Action': 'DescribeStackEvents', 'NextToken': 'next_token', 'StackName': 'stack_name', 'Version': '2010-05-15', }) class TestCloudFormationDescribeStackResources(CloudFormationConnectionBase): def default_body(self): return """ arn:aws:cfn:us-east-1:1:stack MyStack MyDBInstance MyStack_DB1 AWS::DBInstance 2010-07-27T22:27:28Z CREATE_COMPLETE arn:aws:cfn:us-east-1:1:stack MyStack MyAutoScalingGroup MyStack_ASG1 AWS::AutoScalingGroup 2010-07-27T22:28:28Z CREATE_IN_PROGRESS """ def test_describe_stack_resources(self): self.set_http_response(status_code=200) first, second = self.service_connection.describe_stack_resources( 'stack_name', 'logical_resource_id', 'physical_resource_id') self.assertEqual(first.description, None) self.assertEqual(first.logical_resource_id, 'MyDBInstance') 
self.assertEqual(first.physical_resource_id, 'MyStack_DB1') self.assertEqual(first.resource_status, 'CREATE_COMPLETE') self.assertEqual(first.resource_status_reason, None) self.assertEqual(first.resource_type, 'AWS::DBInstance') self.assertEqual(first.stack_id, 'arn:aws:cfn:us-east-1:1:stack') self.assertEqual(first.stack_name, 'MyStack') self.assertIsNotNone(first.timestamp) self.assertEqual(second.description, None) self.assertEqual(second.logical_resource_id, 'MyAutoScalingGroup') self.assertEqual(second.physical_resource_id, 'MyStack_ASG1') self.assertEqual(second.resource_status, 'CREATE_IN_PROGRESS') self.assertEqual(second.resource_status_reason, None) self.assertEqual(second.resource_type, 'AWS::AutoScalingGroup') self.assertEqual(second.stack_id, 'arn:aws:cfn:us-east-1:1:stack') self.assertEqual(second.stack_name, 'MyStack') self.assertIsNotNone(second.timestamp) self.assert_request_parameters({ 'Action': 'DescribeStackResources', 'LogicalResourceId': 'logical_resource_id', 'PhysicalResourceId': 'physical_resource_id', 'StackName': 'stack_name', 'Version': '2010-05-15', }) class TestCloudFormationDescribeStacks(CloudFormationConnectionBase): def default_body(self): return """ arn:aws:cfn:us-east-1:1:stack CREATE_COMPLETE MyStack My Description 2012-05-16T22:55:31Z CAPABILITY_IAM arn:aws:sns:region-name:account-name:topic-name false MyValue MyKey http://url/ Server URL ServerURL MyTagKey MyTagValue 12345 """ def test_describe_stacks(self): self.set_http_response(status_code=200) stacks = self.service_connection.describe_stacks('MyStack') self.assertEqual(len(stacks), 1) stack = stacks[0] self.assertEqual(stack.creation_time, datetime(2012, 5, 16, 22, 55, 31)) self.assertEqual(stack.description, 'My Description') self.assertEqual(stack.disable_rollback, False) self.assertEqual(stack.stack_id, 'arn:aws:cfn:us-east-1:1:stack') self.assertEqual(stack.stack_status, 'CREATE_COMPLETE') self.assertEqual(stack.stack_name, 'MyStack') self.assertEqual(stack.stack_name_reason, None) self.assertEqual(stack.timeout_in_minutes, None) self.assertEqual(len(stack.outputs), 1) self.assertEqual(stack.outputs[0].description, 'Server URL') self.assertEqual(stack.outputs[0].key, 'ServerURL') self.assertEqual(stack.outputs[0].value, 'http://url/') self.assertEqual(len(stack.parameters), 1) self.assertEqual(stack.parameters[0].key, 'MyKey') self.assertEqual(stack.parameters[0].value, 'MyValue') self.assertEqual(len(stack.capabilities), 1) self.assertEqual(stack.capabilities[0].value, 'CAPABILITY_IAM') self.assertEqual(len(stack.notification_arns), 1) self.assertEqual(stack.notification_arns[0].value, 'arn:aws:sns:region-name:account-name:topic-name') self.assertEqual(len(stack.tags), 1) self.assertEqual(stack.tags['MyTagKey'], 'MyTagValue') self.assert_request_parameters({ 'Action': 'DescribeStacks', 'StackName': 'MyStack', 'Version': '2010-05-15', }) class TestCloudFormationListStackResources(CloudFormationConnectionBase): def default_body(self): return """ CREATE_COMPLETE SampleDB 2011-06-21T20:25:57Z My-db-ycx AWS::RDS::DBInstance CREATE_COMPLETE CPUAlarmHigh 2011-06-21T20:29:23Z MyStack-CPUH-PF AWS::CloudWatch::Alarm 2d06e36c-ac1d-11e0-a958-f9382b6eb86b """ def test_list_stack_resources(self): self.set_http_response(status_code=200) resources = self.service_connection.list_stack_resources('MyStack', next_token='next_token') self.assertEqual(len(resources), 2) self.assertEqual(resources[0].last_updated_time, datetime(2011, 6, 21, 20, 25, 57)) self.assertEqual(resources[0].logical_resource_id, 
'SampleDB') self.assertEqual(resources[0].physical_resource_id, 'My-db-ycx') self.assertEqual(resources[0].resource_status, 'CREATE_COMPLETE') self.assertEqual(resources[0].resource_status_reason, None) self.assertEqual(resources[0].resource_type, 'AWS::RDS::DBInstance') self.assertEqual(resources[1].last_updated_time, datetime(2011, 6, 21, 20, 29, 23)) self.assertEqual(resources[1].logical_resource_id, 'CPUAlarmHigh') self.assertEqual(resources[1].physical_resource_id, 'MyStack-CPUH-PF') self.assertEqual(resources[1].resource_status, 'CREATE_COMPLETE') self.assertEqual(resources[1].resource_status_reason, None) self.assertEqual(resources[1].resource_type, 'AWS::CloudWatch::Alarm') self.assert_request_parameters({ 'Action': 'ListStackResources', 'NextToken': 'next_token', 'StackName': 'MyStack', 'Version': '2010-05-15', }) class TestCloudFormationListStacks(CloudFormationConnectionBase): def default_body(self): return """ arn:aws:cfn:us-east-1:1:stack/Test1/aa CREATE_IN_PROGRESS vpc1 2011-05-23T15:47:44Z My Description. """ def test_list_stacks(self): self.set_http_response(status_code=200) stacks = self.service_connection.list_stacks(['CREATE_IN_PROGRESS'], next_token='next_token') self.assertEqual(len(stacks), 1) self.assertEqual(stacks[0].stack_id, 'arn:aws:cfn:us-east-1:1:stack/Test1/aa') self.assertEqual(stacks[0].stack_status, 'CREATE_IN_PROGRESS') self.assertEqual(stacks[0].stack_name, 'vpc1') self.assertEqual(stacks[0].creation_time, datetime(2011, 5, 23, 15, 47, 44)) self.assertEqual(stacks[0].deletion_time, None) self.assertEqual(stacks[0].template_description, 'My Description.') self.assert_request_parameters({ 'Action': 'ListStacks', 'NextToken': 'next_token', 'StackStatusFilter.member.1': 'CREATE_IN_PROGRESS', 'Version': '2010-05-15', }) class TestCloudFormationValidateTemplate(CloudFormationConnectionBase): def default_body(self): return """ My Description. 
false InstanceType Type of instance to launch m1.small false KeyName EC2 KeyPair 0be7b6e8-e4a0-11e0-a5bd-9f8d5a7dbc91 """ def test_validate_template(self): self.set_http_response(status_code=200) template = self.service_connection.validate_template(template_body=SAMPLE_TEMPLATE, template_url='http://url') self.assertEqual(template.description, 'My Description.') self.assertEqual(len(template.template_parameters), 2) param1, param2 = template.template_parameters self.assertEqual(param1.default_value, 'm1.small') self.assertEqual(param1.description, 'Type of instance to launch') self.assertEqual(param1.no_echo, True) self.assertEqual(param1.parameter_key, 'InstanceType') self.assertEqual(param2.default_value, None) self.assertEqual(param2.description, 'EC2 KeyPair') self.assertEqual(param2.no_echo, True) self.assertEqual(param2.parameter_key, 'KeyName') self.assert_request_parameters({ 'Action': 'ValidateTemplate', 'TemplateBody': SAMPLE_TEMPLATE, 'TemplateURL': 'http://url', 'Version': '2010-05-15', }) class TestCloudFormationCancelUpdateStack(CloudFormationConnectionBase): def default_body(self): return """""" def test_cancel_update_stack(self): self.set_http_response(status_code=200) api_response = self.service_connection.cancel_update_stack('stack_name') self.assertEqual(api_response, True) self.assert_request_parameters({ 'Action': 'CancelUpdateStack', 'StackName': 'stack_name', 'Version': '2010-05-15', }) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/cloudformation/test_stack.py000077500000000000000000000217021225267101000224240ustar00rootroot00000000000000#!/usr/bin/env python import datetime import xml.sax import unittest import boto.handler import boto.resultset import boto.cloudformation SAMPLE_XML = r""" value0 key0 key1 value1 arn:aws:cloudformation:ap-southeast-1:100:stack/Name/id CREATE_COMPLETE Name arn:aws:sns:ap-southeast-1:100:name 2013-01-10T05:04:56Z false value0 output0 key0 value1 output1 key1 1 """ DESCRIBE_STACK_RESOURCE_XML = r""" arn:aws:cloudformation:us-east-1:123456789:stack/MyStack/aaf549a0-a413-11df-adb3-5081b3858e83 MyStack MyDBInstance MyStack_DB1 AWS::DBInstance 2010-07-27T22:27:28Z CREATE_COMPLETE arn:aws:cloudformation:us-east-1:123456789:stack/MyStack/aaf549a0-a413-11df-adb3-5081b3858e83 MyStack MyAutoScalingGroup MyStack_ASG1 AWS::AutoScalingGroup 2010-07-27T22:28:28.123456Z CREATE_IN_PROGRESS """ LIST_STACKS_XML = r""" arn:aws:cloudformation:us-east-1:1234567:stack/TestCreate1/aaaaa CREATE_IN_PROGRESS vpc1 2011-05-23T15:47:44Z Creates one EC2 instance and a load balancer. arn:aws:cloudformation:us-east-1:1234567:stack/TestDelete2/bbbbb DELETE_COMPLETE 2011-03-10T16:20:51.575757Z WP1 2011-03-05T19:57:58.161616Z A simple basic Cloudformation Template. 
""" LIST_STACK_RESOURCES_XML = r""" CREATE_COMPLETE DBSecurityGroup 2011-06-21T20:15:58Z gmarcteststack-dbsecuritygroup-1s5m0ez5lkk6w AWS::RDS::DBSecurityGroup CREATE_COMPLETE SampleDB 2011-06-21T20:25:57.875643Z MyStack-sampledb-ycwhk1v830lx AWS::RDS::DBInstance 2d06e36c-ac1d-11e0-a958-f9382b6eb86b """ class TestStackParse(unittest.TestCase): def test_parse_tags(self): rs = boto.resultset.ResultSet([ ('member', boto.cloudformation.stack.Stack) ]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(SAMPLE_XML, h) tags = rs[0].tags self.assertEqual(tags, {u'key0': u'value0', u'key1': u'value1'}) def test_event_creation_time_with_millis(self): millis_xml = SAMPLE_XML.replace( "2013-01-10T05:04:56Z", "2013-01-10T05:04:56.102342Z" ) rs = boto.resultset.ResultSet([ ('member', boto.cloudformation.stack.Stack) ]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(millis_xml, h) creation_time = rs[0].creation_time self.assertEqual( creation_time, datetime.datetime(2013, 1, 10, 5, 4, 56, 102342) ) def test_resource_time_with_millis(self): rs = boto.resultset.ResultSet([ ('member', boto.cloudformation.stack.StackResource) ]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(DESCRIBE_STACK_RESOURCE_XML, h) timestamp_1 = rs[0].timestamp self.assertEqual( timestamp_1, datetime.datetime(2010, 7, 27, 22, 27, 28) ) timestamp_2 = rs[1].timestamp self.assertEqual( timestamp_2, datetime.datetime(2010, 7, 27, 22, 28, 28, 123456) ) def test_list_stacks_time_with_millis(self): rs = boto.resultset.ResultSet([ ('member', boto.cloudformation.stack.StackSummary) ]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(LIST_STACKS_XML, h) timestamp_1 = rs[0].creation_time self.assertEqual( timestamp_1, datetime.datetime(2011, 5, 23, 15, 47, 44) ) timestamp_2 = rs[1].creation_time self.assertEqual( timestamp_2, datetime.datetime(2011, 3, 5, 19, 57, 58, 161616) ) timestamp_3 = rs[1].deletion_time self.assertEqual( timestamp_3, datetime.datetime(2011, 3, 10, 16, 20, 51, 575757) ) def test_list_stacks_time_with_millis(self): rs = boto.resultset.ResultSet([ ('member', boto.cloudformation.stack.StackResourceSummary) ]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(LIST_STACK_RESOURCES_XML, h) timestamp_1 = rs[0].last_updated_time self.assertEqual( timestamp_1, datetime.datetime(2011, 6, 21, 20, 15, 58) ) timestamp_2 = rs[1].last_updated_time self.assertEqual( timestamp_2, datetime.datetime(2011, 6, 21, 20, 25, 57, 875643) ) def test_disable_rollback_false(self): # SAMPLE_XML defines DisableRollback=="false" rs = boto.resultset.ResultSet([('member', boto.cloudformation.stack.Stack)]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(SAMPLE_XML, h) disable_rollback = rs[0].disable_rollback self.assertFalse(disable_rollback) def test_disable_rollback_false_upper(self): # Should also handle "False" rs = boto.resultset.ResultSet([('member', boto.cloudformation.stack.Stack)]) h = boto.handler.XmlHandler(rs, None) sample_xml_upper = SAMPLE_XML.replace('false', 'False') xml.sax.parseString(sample_xml_upper, h) disable_rollback = rs[0].disable_rollback self.assertFalse(disable_rollback) def test_disable_rollback_true(self): rs = boto.resultset.ResultSet([('member', boto.cloudformation.stack.Stack)]) h = boto.handler.XmlHandler(rs, None) sample_xml_upper = SAMPLE_XML.replace('false', 'true') xml.sax.parseString(sample_xml_upper, h) disable_rollback = rs[0].disable_rollback self.assertTrue(disable_rollback) def test_disable_rollback_true_upper(self): rs = 
boto.resultset.ResultSet([('member', boto.cloudformation.stack.Stack)]) h = boto.handler.XmlHandler(rs, None) sample_xml_upper = SAMPLE_XML.replace('false', 'True') xml.sax.parseString(sample_xml_upper, h) disable_rollback = rs[0].disable_rollback self.assertTrue(disable_rollback) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/cloudfront/000077500000000000000000000000001225267101000170335ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudfront/__init__.py000066400000000000000000000000001225267101000211320ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudfront/test_distribution.py000066400000000000000000000011431225267101000231620ustar00rootroot00000000000000import unittest from boto.cloudfront.distribution import DistributionConfig from boto.cloudfront.logging import LoggingInfo class CloudfrontDistributionTest(unittest.TestCase): cloudfront = True def setUp(self): self.dist = DistributionConfig() def test_logging(self): # Default. self.assertEqual(self.dist.logging, None) # Override. lo = LoggingInfo(bucket='whatever', prefix='override_') dist = DistributionConfig(logging=lo) self.assertEqual(dist.logging.bucket, 'whatever') self.assertEqual(dist.logging.prefix, 'override_') boto-2.20.1/tests/unit/cloudfront/test_invalidation_list.py000066400000000000000000000077051225267101000241710ustar00rootroot00000000000000#!/usr/bin/env python import random import string from tests.unit import unittest import mock import boto RESPONSE_TEMPLATE = r""" %(next_marker)s %(max_items)s %(is_truncated)s %(inval_summaries)s """ INVAL_SUMMARY_TEMPLATE = r""" %(cfid)s %(status)s """ class CFInvalidationListTest(unittest.TestCase): cloudfront = True def setUp(self): self.cf = boto.connect_cloudfront('aws.aws_access_key_id', 'aws.aws_secret_access_key') def _get_random_id(self, length=14): return ''.join([random.choice(string.ascii_letters) for i in range(length)]) def _group_iter(self, iterator, n): accumulator = [] for item in iterator: accumulator.append(item) if len(accumulator) == n: yield accumulator accumulator = [] if len(accumulator) != 0: yield accumulator def _get_mock_responses(self, num, max_items): max_items = min(max_items, 100) cfid_groups = list(self._group_iter([self._get_random_id() for i in range(num)], max_items)) cfg = dict(status='Completed', max_items=max_items, next_marker='') responses = [] is_truncated = 'true' for i, group in enumerate(cfid_groups): next_marker = group[-1] if (i + 1) == len(cfid_groups): is_truncated = 'false' next_marker = '' invals = '' cfg.update(dict(next_marker=next_marker, is_truncated=is_truncated)) for cfid in group: cfg.update(dict(cfid=cfid)) invals += INVAL_SUMMARY_TEMPLATE % cfg cfg.update(dict(inval_summaries=invals)) mock_response = mock.Mock() mock_response.read.return_value = RESPONSE_TEMPLATE % cfg mock_response.status = 200 responses.append(mock_response) return responses def test_manual_pagination(self, num_invals=30, max_items=4): """ Test that paginating manually works properly """ self.assertGreater(num_invals, max_items) responses = self._get_mock_responses(num=num_invals, max_items=max_items) self.cf.make_request = mock.Mock(side_effect=responses) ir = self.cf.get_invalidation_requests('dist-id-here', max_items=max_items) all_invals = list(ir) self.assertEqual(len(all_invals), max_items) while ir.is_truncated: ir = self.cf.get_invalidation_requests('dist-id-here', marker=ir.next_marker, max_items=max_items) invals = list(ir) self.assertLessEqual(len(invals), max_items) all_invals.extend(invals) remainder = 
num_invals % max_items if remainder != 0: self.assertEqual(len(invals), remainder) self.assertEqual(len(all_invals), num_invals) def test_auto_pagination(self, num_invals=1024): """ Test that auto-pagination works properly """ max_items = 100 self.assertGreaterEqual(num_invals, max_items) responses = self._get_mock_responses(num=num_invals, max_items=max_items) self.cf.make_request = mock.Mock(side_effect=responses) ir = self.cf.get_invalidation_requests('dist-id-here') self.assertEqual(len(ir._inval_cache), max_items) self.assertEqual(len(list(ir)), num_invals) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/cloudfront/test_signed_urls.py000066400000000000000000000407321225267101000227700ustar00rootroot00000000000000import tempfile import unittest try: import simplejson as json except ImportError: import json from cStringIO import StringIO from textwrap import dedent from boto.cloudfront.distribution import Distribution class CloudfrontSignedUrlsTest(unittest.TestCase): cloudfront = True notdefault = True def setUp(self): self.pk_str = dedent(""" -----BEGIN RSA PRIVATE KEY----- MIICXQIBAAKBgQDA7ki9gI/lRygIoOjV1yymgx6FYFlzJ+z1ATMaLo57nL57AavW hb68HYY8EA0GJU9xQdMVaHBogF3eiCWYXSUZCWM/+M5+ZcdQraRRScucmn6g4EvY 2K4W2pxbqH8vmUikPxir41EeBPLjMOzKvbzzQy9e/zzIQVREKSp/7y1mywIDAQAB AoGABc7mp7XYHynuPZxChjWNJZIq+A73gm0ASDv6At7F8Vi9r0xUlQe/v0AQS3yc N8QlyR4XMbzMLYk3yjxFDXo4ZKQtOGzLGteCU2srANiLv26/imXA8FVidZftTAtL viWQZBVPTeYIA69ATUYPEq0a5u5wjGyUOij9OWyuy01mbPkCQQDluYoNpPOekQ0Z WrPgJ5rxc8f6zG37ZVoDBiexqtVShIF5W3xYuWhW5kYb0hliYfkq15cS7t9m95h3 1QJf/xI/AkEA1v9l/WN1a1N3rOK4VGoCokx7kR2SyTMSbZgF9IWJNOugR/WZw7HT njipO3c9dy1Ms9pUKwUF46d7049ck8HwdQJARgrSKuLWXMyBH+/l1Dx/I4tXuAJI rlPyo+VmiOc7b5NzHptkSHEPfR9s1OK0VqjknclqCJ3Ig86OMEtEFBzjZQJBAKYz 470hcPkaGk7tKYAgP48FvxRsnzeooptURW5E+M+PQ2W9iDPPOX9739+Xi02hGEWF B0IGbQoTRFdE4VVcPK0CQQCeS84lODlC0Y2BZv2JxW3Osv/WkUQ4dslfAQl1T303 7uwwr7XTroMv8dIFQIPreoPhRKmd/SbJzbiKfS/4QDhU -----END RSA PRIVATE KEY----- """) self.pk_id = "PK123456789754" self.dist = Distribution() self.canned_policy = ( '{"Statement":[{"Resource":' '"http://d604721fxaaqy9.cloudfront.net/horizon.jpg' '?large=yes&license=yes",' '"Condition":{"DateLessThan":{"AWS:EpochTime":1258237200}}}]}') self.custom_policy_1 = ( '{ \n' ' "Statement": [{ \n' ' "Resource":"http://d604721fxaaqy9.cloudfront.net/training/*", \n' ' "Condition":{ \n' ' "IpAddress":{"AWS:SourceIp":"145.168.143.0/24"}, \n' ' "DateLessThan":{"AWS:EpochTime":1258237200} \n' ' } \n' ' }] \n' '}\n') self.custom_policy_2 = ( '{ \n' ' "Statement": [{ \n' ' "Resource":"http://*", \n' ' "Condition":{ \n' ' "IpAddress":{"AWS:SourceIp":"216.98.35.1/32"},\n' ' "DateGreaterThan":{"AWS:EpochTime":1241073790},\n' ' "DateLessThan":{"AWS:EpochTime":1255674716}\n' ' } \n' ' }] \n' '}\n') def test_encode_custom_policy_1(self): """ Test base64 encoding custom policy 1 from Amazon's documentation. """ expected = ("eyAKICAgIlN0YXRlbWVudCI6IFt7IAogICAgICAiUmVzb3VyY2Ui" "OiJodHRwOi8vZDYwNDcyMWZ4YWFxeTkuY2xvdWRmcm9udC5uZXQv" "dHJhaW5pbmcvKiIsIAogICAgICAiQ29uZGl0aW9uIjp7IAogICAg" "ICAgICAiSXBBZGRyZXNzIjp7IkFXUzpTb3VyY2VJcCI6IjE0NS4x" "NjguMTQzLjAvMjQifSwgCiAgICAgICAgICJEYXRlTGVzc1RoYW4i" "OnsiQVdTOkVwb2NoVGltZSI6MTI1ODIzNzIwMH0gICAgICAKICAg" "ICAgfSAKICAgfV0gCn0K") encoded = self.dist._url_base64_encode(self.custom_policy_1) self.assertEqual(expected, encoded) def test_encode_custom_policy_2(self): """ Test base64 encoding custom policy 2 from Amazon's documentation. 
""" expected = ("eyAKICAgIlN0YXRlbWVudCI6IFt7IAogICAgICAiUmVzb3VyY2Ui" "OiJodHRwOi8vKiIsIAogICAgICAiQ29uZGl0aW9uIjp7IAogICAg" "ICAgICAiSXBBZGRyZXNzIjp7IkFXUzpTb3VyY2VJcCI6IjIxNi45" "OC4zNS4xLzMyIn0sCiAgICAgICAgICJEYXRlR3JlYXRlclRoYW4i" "OnsiQVdTOkVwb2NoVGltZSI6MTI0MTA3Mzc5MH0sCiAgICAgICAg" "ICJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTI1NTY3" "NDcxNn0KICAgICAgfSAKICAgfV0gCn0K") encoded = self.dist._url_base64_encode(self.custom_policy_2) self.assertEqual(expected, encoded) def test_sign_canned_policy(self): """ Test signing the canned policy from amazon's cloudfront documentation. """ expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN" "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td" "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j" "t9w2EOwi6sIIqrg_") sig = self.dist._sign_string(self.canned_policy, private_key_string=self.pk_str) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_sign_canned_policy_pk_file(self): """ Test signing the canned policy from amazon's cloudfront documentation with a file object. """ expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN" "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td" "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j" "t9w2EOwi6sIIqrg_") pk_file = tempfile.TemporaryFile() pk_file.write(self.pk_str) pk_file.seek(0) sig = self.dist._sign_string(self.canned_policy, private_key_file=pk_file) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_sign_canned_policy_pk_file_name(self): """ Test signing the canned policy from amazon's cloudfront documentation with a file name. """ expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN" "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td" "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j" "t9w2EOwi6sIIqrg_") pk_file = tempfile.NamedTemporaryFile() pk_file.write(self.pk_str) pk_file.flush() sig = self.dist._sign_string(self.canned_policy, private_key_file=pk_file.name) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_sign_canned_policy_pk_file_like(self): """ Test signing the canned policy from amazon's cloudfront documentation with a file-like object (not a subclass of 'file' type) """ expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN" "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td" "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j" "t9w2EOwi6sIIqrg_") pk_file = StringIO() pk_file.write(self.pk_str) pk_file.seek(0) sig = self.dist._sign_string(self.canned_policy, private_key_file=pk_file) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_sign_canned_policy_unicode(self): """ Test signing the canned policy from amazon's cloudfront documentation. """ expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN" "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td" "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j" "t9w2EOwi6sIIqrg_") unicode_policy = unicode(self.canned_policy) sig = self.dist._sign_string(unicode_policy, private_key_string=self.pk_str) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_sign_custom_policy_1(self): """ Test signing custom policy 1 from amazon's cloudfront documentation. 
""" expected = ("cPFtRKvUfYNYmxek6ZNs6vgKEZP6G3Cb4cyVt~FjqbHOnMdxdT7e" "T6pYmhHYzuDsFH4Jpsctke2Ux6PCXcKxUcTIm8SO4b29~1QvhMl-" "CIojki3Hd3~Unxjw7Cpo1qRjtvrimW0DPZBZYHFZtiZXsaPt87yB" "P9GWnTQoaVysMxQ_") sig = self.dist._sign_string(self.custom_policy_1, private_key_string=self.pk_str) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_sign_custom_policy_2(self): """ Test signing custom policy 2 from amazon's cloudfront documentation. """ expected = ("rc~5Qbbm8EJXjUTQ6Cn0LAxR72g1DOPrTmdtfbWVVgQNw0q~KHUA" "mBa2Zv1Wjj8dDET4XSL~Myh44CLQdu4dOH~N9huH7QfPSR~O4tIO" "S1WWcP~2JmtVPoQyLlEc8YHRCuN3nVNZJ0m4EZcXXNAS-0x6Zco2" "SYx~hywTRxWR~5Q_") sig = self.dist._sign_string(self.custom_policy_2, private_key_string=self.pk_str) encoded_sig = self.dist._url_base64_encode(sig) self.assertEqual(expected, encoded_sig) def test_create_canned_policy(self): """ Test that a canned policy is generated correctly. """ url = "http://1234567.cloudfront.com/test_resource.mp3?dog=true" expires = 999999 policy = self.dist._canned_policy(url, expires) policy = json.loads(policy) self.assertEqual(1, len(policy.keys())) statements = policy["Statement"] self.assertEqual(1, len(statements)) statement = statements[0] resource = statement["Resource"] self.assertEqual(url, resource) condition = statement["Condition"] self.assertEqual(1, len(condition.keys())) date_less_than = condition["DateLessThan"] self.assertEqual(1, len(date_less_than.keys())) aws_epoch_time = date_less_than["AWS:EpochTime"] self.assertEqual(expires, aws_epoch_time) def test_custom_policy_expires_and_policy_url(self): """ Test that a custom policy can be created with an expire time and an arbitrary URL. """ url = "http://1234567.cloudfront.com/*" expires = 999999 policy = self.dist._custom_policy(url, expires=expires) policy = json.loads(policy) self.assertEqual(1, len(policy.keys())) statements = policy["Statement"] self.assertEqual(1, len(statements)) statement = statements[0] resource = statement["Resource"] self.assertEqual(url, resource) condition = statement["Condition"] self.assertEqual(1, len(condition.keys())) date_less_than = condition["DateLessThan"] self.assertEqual(1, len(date_less_than.keys())) aws_epoch_time = date_less_than["AWS:EpochTime"] self.assertEqual(expires, aws_epoch_time) def test_custom_policy_valid_after(self): """ Test that a custom policy can be created with a valid-after time and an arbitrary URL. """ url = "http://1234567.cloudfront.com/*" valid_after = 999999 policy = self.dist._custom_policy(url, valid_after=valid_after) policy = json.loads(policy) self.assertEqual(1, len(policy.keys())) statements = policy["Statement"] self.assertEqual(1, len(statements)) statement = statements[0] resource = statement["Resource"] self.assertEqual(url, resource) condition = statement["Condition"] self.assertEqual(2, len(condition.keys())) date_less_than = condition["DateLessThan"] date_greater_than = condition["DateGreaterThan"] self.assertEqual(1, len(date_greater_than.keys())) aws_epoch_time = date_greater_than["AWS:EpochTime"] self.assertEqual(valid_after, aws_epoch_time) def test_custom_policy_ip_address(self): """ Test that a custom policy can be created with an IP address and an arbitrary URL. 
""" url = "http://1234567.cloudfront.com/*" ip_range = "192.168.0.1" policy = self.dist._custom_policy(url, ip_address=ip_range) policy = json.loads(policy) self.assertEqual(1, len(policy.keys())) statements = policy["Statement"] self.assertEqual(1, len(statements)) statement = statements[0] resource = statement["Resource"] self.assertEqual(url, resource) condition = statement["Condition"] self.assertEqual(2, len(condition.keys())) ip_address = condition["IpAddress"] self.assertTrue("DateLessThan" in condition) self.assertEqual(1, len(ip_address.keys())) source_ip = ip_address["AWS:SourceIp"] self.assertEqual("%s/32" % ip_range, source_ip) def test_custom_policy_ip_range(self): """ Test that a custom policy can be created with an IP address and an arbitrary URL. """ url = "http://1234567.cloudfront.com/*" ip_range = "192.168.0.0/24" policy = self.dist._custom_policy(url, ip_address=ip_range) policy = json.loads(policy) self.assertEqual(1, len(policy.keys())) statements = policy["Statement"] self.assertEqual(1, len(statements)) statement = statements[0] resource = statement["Resource"] self.assertEqual(url, resource) condition = statement["Condition"] self.assertEqual(2, len(condition.keys())) self.assertTrue("DateLessThan" in condition) ip_address = condition["IpAddress"] self.assertEqual(1, len(ip_address.keys())) source_ip = ip_address["AWS:SourceIp"] self.assertEqual(ip_range, source_ip) def test_custom_policy_all(self): """ Test that a custom policy can be created with an IP address and an arbitrary URL. """ url = "http://1234567.cloudfront.com/test.txt" expires = 999999 valid_after = 111111 ip_range = "192.168.0.0/24" policy = self.dist._custom_policy(url, expires=expires, valid_after=valid_after, ip_address=ip_range) policy = json.loads(policy) self.assertEqual(1, len(policy.keys())) statements = policy["Statement"] self.assertEqual(1, len(statements)) statement = statements[0] resource = statement["Resource"] self.assertEqual(url, resource) condition = statement["Condition"] self.assertEqual(3, len(condition.keys())) #check expires condition date_less_than = condition["DateLessThan"] self.assertEqual(1, len(date_less_than.keys())) aws_epoch_time = date_less_than["AWS:EpochTime"] self.assertEqual(expires, aws_epoch_time) #check valid_after condition date_greater_than = condition["DateGreaterThan"] self.assertEqual(1, len(date_greater_than.keys())) aws_epoch_time = date_greater_than["AWS:EpochTime"] self.assertEqual(valid_after, aws_epoch_time) #check source ip address condition ip_address = condition["IpAddress"] self.assertEqual(1, len(ip_address.keys())) source_ip = ip_address["AWS:SourceIp"] self.assertEqual(ip_range, source_ip) def test_params_canned_policy(self): """ Test the correct params are generated for a canned policy. 
""" url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes" expire_time = 1258237200 expected_sig = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyE" "XPDNv0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4" "kXAJK6tdNx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCM" "IYHIaiOB6~5jt9w2EOwi6sIIqrg_") signed_url_params = self.dist._create_signing_params(url, self.pk_id, expire_time, private_key_string=self.pk_str) self.assertEqual(3, len(signed_url_params)) self.assertEqual(signed_url_params["Expires"], "1258237200") self.assertEqual(signed_url_params["Signature"], expected_sig) self.assertEqual(signed_url_params["Key-Pair-Id"], "PK123456789754") def test_canned_policy(self): """ Generate signed url from the Example Canned Policy in Amazon's documentation. """ url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes" expire_time = 1258237200 expected_url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes&Expires=1258237200&Signature=Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDNv0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6tdNx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5jt9w2EOwi6sIIqrg_&Key-Pair-Id=PK123456789754" signed_url = self.dist.create_signed_url( url, self.pk_id, expire_time, private_key_string=self.pk_str) self.assertEqual(expected_url, signed_url) boto-2.20.1/tests/unit/cloudsearch/000077500000000000000000000000001225267101000171505ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudsearch/__init__.py000066400000000000000000000000011225267101000212500ustar00rootroot00000000000000 boto-2.20.1/tests/unit/cloudsearch/test_connection.py000066400000000000000000000205131225267101000227210ustar00rootroot00000000000000#!/usr/bin env python from tests.unit import AWSMockServiceTestCase from boto.cloudsearch.domain import Domain from boto.cloudsearch.layer1 import Layer1 import json class TestCloudSearchCreateDomain(AWSMockServiceTestCase): connection_class = Layer1 def default_body(self): return """ 0 arn:aws:cs:us-east-1:1234567890:search/demo search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com 0 true 1234567890/demo false 0 demo false false arn:aws:cs:us-east-1:1234567890:doc/demo doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com 00000000-0000-0000-0000-000000000000 """ def test_create_domain(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_domain('demo') self.assert_request_parameters({ 'Action': 'CreateDomain', 'DomainName': 'demo', 'Version': '2011-02-01', }) def test_cloudsearch_connect_result_endpoints(self): """Check that endpoints & ARNs are correctly returned from AWS""" self.set_http_response(status_code=200) api_response = self.service_connection.create_domain('demo') domain = Domain(self, api_response) self.assertEqual(domain.doc_service_arn, "arn:aws:cs:us-east-1:1234567890:doc/demo") self.assertEqual( domain.doc_service_endpoint, "doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") self.assertEqual(domain.search_service_arn, "arn:aws:cs:us-east-1:1234567890:search/demo") self.assertEqual( domain.search_service_endpoint, "search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") def test_cloudsearch_connect_result_statuses(self): """Check that domain statuses are correctly returned from AWS""" self.set_http_response(status_code=200) api_response = self.service_connection.create_domain('demo') domain = Domain(self, api_response) self.assertEqual(domain.created, True) self.assertEqual(domain.processing, False) 
self.assertEqual(domain.requires_index_documents, False) self.assertEqual(domain.deleted, False) def test_cloudsearch_connect_result_details(self): """Check that the domain information is correctly returned from AWS""" self.set_http_response(status_code=200) api_response = self.service_connection.create_domain('demo') domain = Domain(self, api_response) self.assertEqual(domain.id, "1234567890/demo") self.assertEqual(domain.name, "demo") def test_cloudsearch_documentservice_creation(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_domain('demo') domain = Domain(self, api_response) document = domain.get_document_service() self.assertEqual( document.endpoint, "doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") def test_cloudsearch_searchservice_creation(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_domain('demo') domain = Domain(self, api_response) search = domain.get_search_service() self.assertEqual( search.endpoint, "search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") class CloudSearchConnectionDeletionTest(AWSMockServiceTestCase): connection_class = Layer1 def default_body(self): return """ 0 arn:aws:cs:us-east-1:1234567890:search/demo search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com 0 true 1234567890/demo false 0 demo false false arn:aws:cs:us-east-1:1234567890:doc/demo doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com 00000000-0000-0000-0000-000000000000 """ def test_cloudsearch_deletion(self): """ Check that the correct arguments are sent to AWS when deleting a cloudsearch domain. """ self.set_http_response(status_code=200) api_response = self.service_connection.delete_domain('demo') self.assert_request_parameters({ 'Action': 'DeleteDomain', 'DomainName': 'demo', 'Version': '2011-02-01', }) class CloudSearchConnectionIndexDocumentTest(AWSMockServiceTestCase): connection_class = Layer1 def default_body(self): return """ average_score brand_id colors context context_owner created_at creator_id description file_size format has_logo has_messaging height image_id ingested_from is_advertising is_photo is_reviewed modified_at subject_date tags title width eb2b2390-6bbd-11e2-ab66-93f3a90dcf2a """ def test_cloudsearch_index_documents(self): """ Check that the correct arguments are sent to AWS when indexing a domain. """ self.set_http_response(status_code=200) api_response = self.service_connection.index_documents('demo') self.assert_request_parameters({ 'Action': 'IndexDocuments', 'DomainName': 'demo', 'Version': '2011-02-01', }) def test_cloudsearch_index_documents_resp(self): """ Check that the AWS response is being parsed correctly when indexing a domain. 
""" self.set_http_response(status_code=200) api_response = self.service_connection.index_documents('demo') self.assertEqual(api_response, ['average_score', 'brand_id', 'colors', 'context', 'context_owner', 'created_at', 'creator_id', 'description', 'file_size', 'format', 'has_logo', 'has_messaging', 'height', 'image_id', 'ingested_from', 'is_advertising', 'is_photo', 'is_reviewed', 'modified_at', 'subject_date', 'tags', 'title', 'width']) boto-2.20.1/tests/unit/cloudsearch/test_document.py000066400000000000000000000261071225267101000224050ustar00rootroot00000000000000#!/usr/bin env python from tests.unit import unittest from httpretty import HTTPretty from mock import MagicMock import urlparse import json from boto.cloudsearch.document import DocumentServiceConnection from boto.cloudsearch.document import CommitMismatchError, EncodingError, \ ContentTooLongError, DocumentServiceConnection import boto class CloudSearchDocumentTest(unittest.TestCase): def setUp(self): HTTPretty.enable() HTTPretty.register_uri( HTTPretty.POST, ("http://doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com/" "2011-02-01/documents/batch"), body=json.dumps(self.response), content_type="application/json") def tearDown(self): HTTPretty.disable() class CloudSearchDocumentSingleTest(CloudSearchDocumentTest): response = { 'status': 'success', 'adds': 1, 'deletes': 0, } def test_cloudsearch_add_basics(self): """ Check that a simple add document actually sends an add document request to AWS. """ document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) document.commit() args = json.loads(HTTPretty.last_request.body)[0] self.assertEqual(args['lang'], 'en') self.assertEqual(args['type'], 'add') def test_cloudsearch_add_single_basic(self): """ Check that a simple add document sends correct document metadata to AWS. """ document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) document.commit() args = json.loads(HTTPretty.last_request.body)[0] self.assertEqual(args['id'], '1234') self.assertEqual(args['version'], 10) self.assertEqual(args['type'], 'add') def test_cloudsearch_add_single_fields(self): """ Check that a simple add document sends the actual document to AWS. """ document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) document.commit() args = json.loads(HTTPretty.last_request.body)[0] self.assertEqual(args['fields']['category'], ['cat_a', 'cat_b', 'cat_c']) self.assertEqual(args['fields']['id'], '1234') self.assertEqual(args['fields']['title'], 'Title 1') def test_cloudsearch_add_single_result(self): """ Check that the reply from adding a single document is correctly parsed. 
""" document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) doc = document.commit() self.assertEqual(doc.status, 'success') self.assertEqual(doc.adds, 1) self.assertEqual(doc.deletes, 0) self.assertEqual(doc.doc_service, document) class CloudSearchDocumentMultipleAddTest(CloudSearchDocumentTest): response = { 'status': 'success', 'adds': 3, 'deletes': 0, } objs = { '1234': { 'version': 10, 'fields': {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}}, '1235': { 'version': 11, 'fields': {"id": "1235", "title": "Title 2", "category": ["cat_b", "cat_c", "cat_d"]}}, '1236': { 'version': 12, 'fields': {"id": "1236", "title": "Title 3", "category": ["cat_e", "cat_f", "cat_g"]}}, } def test_cloudsearch_add_basics(self): """Check that multiple documents are added correctly to AWS""" document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") for (key, obj) in self.objs.items(): document.add(key, obj['version'], obj['fields']) document.commit() args = json.loads(HTTPretty.last_request.body) for arg in args: self.assertTrue(arg['id'] in self.objs) self.assertEqual(arg['version'], self.objs[arg['id']]['version']) self.assertEqual(arg['fields']['id'], self.objs[arg['id']]['fields']['id']) self.assertEqual(arg['fields']['title'], self.objs[arg['id']]['fields']['title']) self.assertEqual(arg['fields']['category'], self.objs[arg['id']]['fields']['category']) def test_cloudsearch_add_results(self): """ Check that the result from adding multiple documents is parsed correctly. """ document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") for (key, obj) in self.objs.items(): document.add(key, obj['version'], obj['fields']) doc = document.commit() self.assertEqual(doc.status, 'success') self.assertEqual(doc.adds, len(self.objs)) self.assertEqual(doc.deletes, 0) class CloudSearchDocumentDelete(CloudSearchDocumentTest): response = { 'status': 'success', 'adds': 0, 'deletes': 1, } def test_cloudsearch_delete(self): """ Test that the request for a single document deletion is done properly. """ document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.delete("5", "10") document.commit() args = json.loads(HTTPretty.last_request.body)[0] self.assertEqual(args['version'], '10') self.assertEqual(args['type'], 'delete') self.assertEqual(args['id'], '5') def test_cloudsearch_delete_results(self): """ Check that the result of a single document deletion is parsed properly. 
""" document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.delete("5", "10") doc = document.commit() self.assertEqual(doc.status, 'success') self.assertEqual(doc.adds, 0) self.assertEqual(doc.deletes, 1) class CloudSearchDocumentDeleteMultiple(CloudSearchDocumentTest): response = { 'status': 'success', 'adds': 0, 'deletes': 2, } def test_cloudsearch_delete_multiples(self): document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.delete("5", "10") document.delete("6", "11") document.commit() args = json.loads(HTTPretty.last_request.body) self.assertEqual(len(args), 2) for arg in args: self.assertEqual(arg['type'], 'delete') if arg['id'] == '5': self.assertEqual(arg['version'], '10') elif arg['id'] == '6': self.assertEqual(arg['version'], '11') else: # Unknown result out of AWS that shouldn't be there self.assertTrue(False) class CloudSearchSDFManipulation(CloudSearchDocumentTest): response = { 'status': 'success', 'adds': 1, 'deletes': 0, } def test_cloudsearch_initial_sdf_is_blank(self): document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") self.assertEqual(document.get_sdf(), '[]') def test_cloudsearch_single_document_sdf(self): document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) self.assertNotEqual(document.get_sdf(), '[]') document.clear_sdf() self.assertEqual(document.get_sdf(), '[]') class CloudSearchBadSDFTesting(CloudSearchDocumentTest): response = { 'status': 'success', 'adds': 1, 'deletes': 0, } def test_cloudsearch_erroneous_sdf(self): original = boto.log.error boto.log.error = MagicMock() document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": None, "category": ["cat_a", "cat_b", "cat_c"]}) document.commit() self.assertNotEqual(len(boto.log.error.call_args_list), 1) boto.log.error = original class CloudSearchDocumentErrorBadUnicode(CloudSearchDocumentTest): response = { 'status': 'error', 'adds': 0, 'deletes': 0, 'errors': [{'message': 'Illegal Unicode character in document'}] } def test_fake_bad_unicode(self): document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) self.assertRaises(EncodingError, document.commit) class CloudSearchDocumentErrorDocsTooBig(CloudSearchDocumentTest): response = { 'status': 'error', 'adds': 0, 'deletes': 0, 'errors': [{'message': 'The Content-Length is too long'}] } def test_fake_docs_too_big(self): document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) self.assertRaises(ContentTooLongError, document.commit) class CloudSearchDocumentErrorMismatch(CloudSearchDocumentTest): response = { 'status': 'error', 'adds': 0, 'deletes': 0, 'errors': [{'message': 'Something went wrong'}] } def test_fake_failure(self): document = DocumentServiceConnection( endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com") document.add("1234", 10, {"id": "1234", "title": "Title 1", "category": ["cat_a", "cat_b", "cat_c"]}) 
self.assertRaises(CommitMismatchError, document.commit) boto-2.20.1/tests/unit/cloudsearch/test_exceptions.py000066400000000000000000000025661225267101000227530ustar00rootroot00000000000000import mock from boto.compat import json from tests.unit import unittest from .test_search import HOSTNAME, CloudSearchSearchBaseTest from boto.cloudsearch.search import SearchConnection, SearchServiceException def fake_loads_value_error(content, *args, **kwargs): """Callable to generate a fake ValueError""" raise ValueError("HAHAHA! Totally not simplejson & you gave me bad JSON.") def fake_loads_json_error(content, *args, **kwargs): """Callable to generate a fake JSONDecodeError""" raise json.JSONDecodeError('Using simplejson & you gave me bad JSON.', '', 0) class CloudSearchJSONExceptionTest(CloudSearchSearchBaseTest): response = '{}' def test_no_simplejson_value_error(self): with mock.patch.object(json, 'loads', fake_loads_value_error): search = SearchConnection(endpoint=HOSTNAME) with self.assertRaisesRegexp(SearchServiceException, 'non-json'): search.search(q='test') @unittest.skipUnless(hasattr(json, 'JSONDecodeError'), 'requires simplejson') def test_simplejson_jsondecodeerror(self): with mock.patch.object(json, 'loads', fake_loads_json_error): search = SearchConnection(endpoint=HOSTNAME) with self.assertRaisesRegexp(SearchServiceException, 'non-json'): search.search(q='test') boto-2.20.1/tests/unit/cloudsearch/test_search.py000066400000000000000000000324431225267101000220340ustar00rootroot00000000000000#!/usr/bin env python from tests.unit import unittest from httpretty import HTTPretty import urlparse import json import mock import requests from boto.cloudsearch.search import SearchConnection, SearchServiceException HOSTNAME = "search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com" FULL_URL = 'http://%s/2011-02-01/search' % HOSTNAME class CloudSearchSearchBaseTest(unittest.TestCase): hits = [ { 'id': '12341', 'title': 'Document 1', }, { 'id': '12342', 'title': 'Document 2', }, { 'id': '12343', 'title': 'Document 3', }, { 'id': '12344', 'title': 'Document 4', }, { 'id': '12345', 'title': 'Document 5', }, { 'id': '12346', 'title': 'Document 6', }, { 'id': '12347', 'title': 'Document 7', }, ] content_type = "text/xml" response_status = 200 def get_args(self, requestline): (_, request, _) = requestline.split(" ") (_, request) = request.split("?", 1) args = urlparse.parse_qs(request) return args def setUp(self): HTTPretty.enable() body = self.response if not isinstance(body, basestring): body = json.dumps(body) HTTPretty.register_uri(HTTPretty.GET, FULL_URL, body=body, content_type=self.content_type, status=self.response_status) def tearDown(self): HTTPretty.disable() class CloudSearchSearchTest(CloudSearchSearchBaseTest): response = { 'rank': '-text_relevance', 'match-expr':"Test", 'hits': { 'found': 30, 'start': 0, 'hit':CloudSearchSearchBaseTest.hits }, 'info': { 'rid':'b7c167f6c2da6d93531b9a7b314ad030b3a74803b4b7797edb905ba5a6a08', 'time-ms': 2, 'cpu-time-ms': 0 } } def test_cloudsearch_qsearch(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test') args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['q'], ["Test"]) self.assertEqual(args['start'], ["0"]) self.assertEqual(args['size'], ["10"]) def test_cloudsearch_bqsearch(self): search = SearchConnection(endpoint=HOSTNAME) search.search(bq="'Test'") args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['bq'], ["'Test'"]) def test_cloudsearch_search_details(self): 
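# Explicit size/start paging values should be forwarded verbatim as query-string parameters.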
search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', size=50, start=20) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['q'], ["Test"]) self.assertEqual(args['size'], ["50"]) self.assertEqual(args['start'], ["20"]) def test_cloudsearch_facet_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', facet=["Author"]) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet'], ["Author"]) def test_cloudsearch_facet_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', facet=["author", "cat"]) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet'], ["author,cat"]) def test_cloudsearch_facet_constraint_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search( q='Test', facet_constraints={'author': "'John Smith','Mark Smith'"}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet-author-constraints'], ["'John Smith','Mark Smith'"]) def test_cloudsearch_facet_constraint_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search( q='Test', facet_constraints={'author': "'John Smith','Mark Smith'", 'category': "'News','Reviews'"}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet-author-constraints'], ["'John Smith','Mark Smith'"]) self.assertEqual(args['facet-category-constraints'], ["'News','Reviews'"]) def test_cloudsearch_facet_sort_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', facet_sort={'author': 'alpha'}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet-author-sort'], ['alpha']) def test_cloudsearch_facet_sort_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', facet_sort={'author': 'alpha', 'cat': 'count'}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet-author-sort'], ['alpha']) self.assertEqual(args['facet-cat-sort'], ['count']) def test_cloudsearch_top_n_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', facet_top_n={'author': 5}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet-author-top-n'], ['5']) def test_cloudsearch_top_n_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', facet_top_n={'author': 5, 'cat': 10}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['facet-author-top-n'], ['5']) self.assertEqual(args['facet-cat-top-n'], ['10']) def test_cloudsearch_rank_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', rank=["date"]) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['rank'], ['date']) def test_cloudsearch_rank_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', rank=["date", "score"]) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['rank'], ['date,score']) def test_cloudsearch_result_fields_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', return_fields=['author']) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['return-fields'], ['author']) def test_cloudsearch_result_fields_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', return_fields=['author', 'title']) args = 
self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['return-fields'], ['author,title']) def test_cloudsearch_t_field_single(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', t={'year':'2001..2007'}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['t-year'], ['2001..2007']) def test_cloudsearch_t_field_multiple(self): search = SearchConnection(endpoint=HOSTNAME) search.search(q='Test', t={'year':'2001..2007', 'score':'10..50'}) args = self.get_args(HTTPretty.last_request.raw_requestline) self.assertEqual(args['t-year'], ['2001..2007']) self.assertEqual(args['t-score'], ['10..50']) def test_cloudsearch_results_meta(self): """Check returned metadata is parsed correctly""" search = SearchConnection(endpoint=HOSTNAME) results = search.search(q='Test') # These rely on the default response which is fed into HTTPretty self.assertEqual(results.rank, "-text_relevance") self.assertEqual(results.match_expression, "Test") def test_cloudsearch_results_info(self): """Check num_pages_needed is calculated correctly""" search = SearchConnection(endpoint=HOSTNAME) results = search.search(q='Test') # This relies on the default response which is fed into HTTPretty self.assertEqual(results.num_pages_needed, 3.0) def test_cloudsearch_results_matched(self): """ Check that information objects are passed back through the API correctly. """ search = SearchConnection(endpoint=HOSTNAME) query = search.build_query(q='Test') results = search(query) self.assertEqual(results.search_service, search) self.assertEqual(results.query, query) def test_cloudsearch_results_hits(self): """Check that documents are parsed properly from AWS""" search = SearchConnection(endpoint=HOSTNAME) results = search.search(q='Test') hits = map(lambda x: x['id'], results.docs) # This relies on the default response which is fed into HTTPretty self.assertEqual( hits, ["12341", "12342", "12343", "12344", "12345", "12346", "12347"]) def test_cloudsearch_results_iterator(self): """Check the results iterator""" search = SearchConnection(endpoint=HOSTNAME) results = search.search(q='Test') results_correct = iter(["12341", "12342", "12343", "12344", "12345", "12346", "12347"]) for x in results: self.assertEqual(x['id'], results_correct.next()) def test_cloudsearch_results_internal_consistency(self): """Check that len(results) matches the number of parsed documents""" search = SearchConnection(endpoint=HOSTNAME) results = search.search(q='Test') self.assertEqual(len(results), len(results.docs)) def test_cloudsearch_search_nextpage(self): """Check next page query is correct""" search = SearchConnection(endpoint=HOSTNAME) query1 = search.build_query(q='Test') query2 = search.build_query(q='Test') results = search(query2) self.assertEqual(results.next_page().query.start, query1.start + query1.size) self.assertEqual(query1.q, query2.q) class CloudSearchSearchFacetTest(CloudSearchSearchBaseTest): response = { 'rank': '-text_relevance', 'match-expr':"Test", 'hits': { 'found': 30, 'start': 0, 'hit':CloudSearchSearchBaseTest.hits }, 'info': { 'rid':'b7c167f6c2da6d93531b9a7b314ad030b3a74803b4b7797edb905ba5a6a08', 'time-ms': 2, 'cpu-time-ms': 0 }, 'facets': { 'tags': {}, 'animals': {'constraints': [{'count': '2', 'value': 'fish'}, {'count': '1', 'value':'lions'}]}, } } def test_cloudsearch_search_facets(self): #self.response['facets'] = {'tags': {}} search = SearchConnection(endpoint=HOSTNAME) results = search.search(q='Test', facet=['tags']) self.assertTrue('tags' not in results.facets) 
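# Facets that come back with constraints are flattened into a simple {value: count} dict; facets with no constraints (like 'tags' above) are omitted from results.facets.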
self.assertEqual(results.facets['animals'], {u'lions': u'1', u'fish': u'2'}) class CloudSearchNonJsonTest(CloudSearchSearchBaseTest): response = '<html><body><h1>500 Internal Server Error</h1></body></html>' response_status = 500 content_type = 'text/xml' def test_response(self): search = SearchConnection(endpoint=HOSTNAME) with self.assertRaises(SearchServiceException): search.search(q='Test') class CloudSearchUnauthorizedTest(CloudSearchSearchBaseTest): response = '<html><body><h1>403 Forbidden</h1></body></html>
foo bar baz' response_status = 403 content_type = 'text/html' def test_response(self): search = SearchConnection(endpoint=HOSTNAME) with self.assertRaisesRegexp(SearchServiceException, 'foo bar baz'): search.search(q='Test') class FakeResponse(object): status_code = 405 content = '' class CloudSearchConnectionTest(unittest.TestCase): cloudsearch = True def setUp(self): super(CloudSearchConnectionTest, self).setUp() self.conn = SearchConnection( endpoint='test-domain.cloudsearch.amazonaws.com' ) def test_expose_additional_error_info(self): mpo = mock.patch.object fake = FakeResponse() fake.content = 'Nopenopenope' # First, in the case of a non-JSON, non-403 error. with mpo(requests, 'get', return_value=fake) as mock_request: with self.assertRaises(SearchServiceException) as cm: self.conn.search(q='not_gonna_happen') self.assertTrue('non-json response' in str(cm.exception)) self.assertTrue('Nopenopenope' in str(cm.exception)) # Then with JSON & an 'error' key within. fake.content = json.dumps({ 'error': "Something went wrong. Oops." }) with mpo(requests, 'get', return_value=fake) as mock_request: with self.assertRaises(SearchServiceException) as cm: self.conn.search(q='no_luck_here') self.assertTrue('Unknown error' in str(cm.exception)) self.assertTrue('went wrong. Oops' in str(cm.exception)) boto-2.20.1/tests/unit/cloudtrail/000077500000000000000000000000001225267101000170165ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudtrail/__init__.py000066400000000000000000000000001225267101000211150ustar00rootroot00000000000000boto-2.20.1/tests/unit/cloudtrail/test_layer1.py000066400000000000000000000047331225267101000216330ustar00rootroot00000000000000#!/usr/bin/env python import json from boto.cloudtrail.layer1 import CloudTrailConnection from tests.unit import AWSMockServiceTestCase class TestDescribeTrails(AWSMockServiceTestCase): connection_class = CloudTrailConnection def default_body(self): return ''' {"trailList": [ { "IncludeGlobalServiceEvents": false, "Name": "test", "SnsTopicName": "cloudtrail-1", "S3BucketName": "cloudtrail-1" } ] }''' def test_describe(self): self.set_http_response(status_code=200) api_response = self.service_connection.describe_trails() self.assertEqual(1, len(api_response['trailList'])) self.assertEqual('test', api_response['trailList'][0]['Name']) self.assert_request_parameters({}) target = self.actual_request.headers['X-Amz-Target'] self.assertTrue('DescribeTrails' in target) def test_describe_name_list(self): self.set_http_response(status_code=200) api_response = self.service_connection.describe_trails( trail_name_list=['test']) self.assertEqual(1, len(api_response['trailList'])) self.assertEqual('test', api_response['trailList'][0]['Name']) self.assertEqual(json.dumps({ 'trailNameList': ['test'] }), self.actual_request.body) target = self.actual_request.headers['X-Amz-Target'] self.assertTrue('DescribeTrails' in target) class TestCreateTrail(AWSMockServiceTestCase): connection_class = CloudTrailConnection def default_body(self): return ''' {"trail": { "IncludeGlobalServiceEvents": false, "Name": "test", "SnsTopicName": "cloudtrail-1", "S3BucketName": "cloudtrail-1" } }''' def test_create(self): self.set_http_response(status_code=200) trail = {'Name': 'test', 'S3BucketName': 'cloudtrail-1', 'SnsTopicName': 'cloudtrail-1', 'IncludeGlobalServiceEvents': False} api_response = self.service_connection.create_trail(trail=trail) self.assertEqual(trail, api_response['trail']) target = self.actual_request.headers['X-Amz-Target'] self.assertTrue('CreateTrail' in 
target) boto-2.20.1/tests/unit/directconnect/000077500000000000000000000000001225267101000175005ustar00rootroot00000000000000boto-2.20.1/tests/unit/directconnect/__init__.py000066400000000000000000000000001225267101000215770ustar00rootroot00000000000000boto-2.20.1/tests/unit/directconnect/test_layer1.py000066400000000000000000000042311225267101000223060ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.directconnect.layer1 import DirectConnectConnection from tests.unit import AWSMockServiceTestCase class TestDescribeTrails(AWSMockServiceTestCase): connection_class = DirectConnectConnection def default_body(self): return ''' { "connections": [ { "bandwidth": "string", "connectionId": "string", "connectionName": "string", "connectionState": "string", "location": "string", "ownerAccount": "string", "partnerName": "string", "region": "string", "vlan": 1 } ] }''' def test_describe(self): self.set_http_response(status_code=200) api_response = self.service_connection.describe_connections() self.assertEqual(1, len(api_response['connections'])) self.assertEqual('string', api_response['connections'][0]['region']) self.assert_request_parameters({}) target = self.actual_request.headers['X-Amz-Target'] self.assertTrue('DescribeConnections' in target) boto-2.20.1/tests/unit/dynamodb/000077500000000000000000000000001225267101000164515ustar00rootroot00000000000000boto-2.20.1/tests/unit/dynamodb/__init__.py000066400000000000000000000000001225267101000205500ustar00rootroot00000000000000boto-2.20.1/tests/unit/dynamodb/test_batch.py000066400000000000000000000077511225267101000211550ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest from boto.dynamodb.batch import Batch from boto.dynamodb.table import Table from boto.dynamodb.layer2 import Layer2 from boto.dynamodb.batch import BatchList DESCRIBE_TABLE_1 = { 'Table': { 'CreationDateTime': 1349910554.478, 'ItemCount': 1, 'KeySchema': {'HashKeyElement': {'AttributeName': u'foo', 'AttributeType': u'S'}}, 'ProvisionedThroughput': {'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}, 'TableName': 'testtable', 'TableSizeBytes': 54, 'TableStatus': 'ACTIVE'} } DESCRIBE_TABLE_2 = { 'Table': { 'CreationDateTime': 1349910554.478, 'ItemCount': 1, 'KeySchema': {'HashKeyElement': {'AttributeName': u'baz', 'AttributeType': u'S'}, 'RangeKeyElement': {'AttributeName': 'myrange', 'AttributeType': 'N'}}, 'ProvisionedThroughput': {'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}, 'TableName': 'testtable2', 'TableSizeBytes': 54, 'TableStatus': 'ACTIVE'} } class TestBatchObjects(unittest.TestCase): maxDiff = None def setUp(self): self.layer2 = Layer2('access_key', 'secret_key') self.table = Table(self.layer2, DESCRIBE_TABLE_1) self.table2 = Table(self.layer2, DESCRIBE_TABLE_2) def test_batch_to_dict(self): b = Batch(self.table, ['k1', 'k2'], attributes_to_get=['foo'], consistent_read=True) self.assertDictEqual( b.to_dict(), {'AttributesToGet': ['foo'], 'Keys': [{'HashKeyElement': {'S': 'k1'}}, {'HashKeyElement': {'S': 'k2'}}], 'ConsistentRead': True} ) def test_batch_consistent_read_defaults_to_false(self): b = Batch(self.table, ['k1']) self.assertDictEqual( b.to_dict(), {'Keys': [{'HashKeyElement': {'S': 'k1'}}], 'ConsistentRead': False} ) def test_batch_list_consistent_read(self): b = BatchList(self.layer2) b.add_batch(self.table, ['k1'], ['foo'], consistent_read=True) b.add_batch(self.table2, [('k2', 54)], ['bar'], consistent_read=False) self.assertDictEqual( b.to_dict(), {'testtable': {'AttributesToGet': ['foo'], 'Keys': [{'HashKeyElement': {'S': 'k1'}}], 'ConsistentRead': True}, 'testtable2': {'AttributesToGet': ['bar'], 'Keys': [{'HashKeyElement': {'S': 'k2'}, 'RangeKeyElement': {'N': '54'}}], 'ConsistentRead': False}}) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/dynamodb/test_layer2.py000066400000000000000000000113301225267101000212560ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest from mock import Mock from boto.dynamodb.layer2 import Layer2 from boto.dynamodb.table import Table, Schema DESCRIBE_TABLE = { "Table": { "CreationDateTime": 1.353526122785E9, "ItemCount":1, "KeySchema": { "HashKeyElement":{"AttributeName": "foo", "AttributeType": "N"}}, "ProvisionedThroughput": { "NumberOfDecreasesToday": 0, "ReadCapacityUnits": 5, "WriteCapacityUnits": 5}, "TableName": "footest", "TableSizeBytes": 21, "TableStatus": "ACTIVE"} } class TestTableConstruction(unittest.TestCase): def setUp(self): self.layer2 = Layer2('access_key', 'secret_key') self.api = Mock() self.layer2.layer1 = self.api def test_get_table(self): self.api.describe_table.return_value = DESCRIBE_TABLE table = self.layer2.get_table('footest') self.assertEqual(table.name, 'footest') self.assertEqual(table.create_time, 1353526122.785) self.assertEqual(table.status, 'ACTIVE') self.assertEqual(table.item_count, 1) self.assertEqual(table.size_bytes, 21) self.assertEqual(table.read_units, 5) self.assertEqual(table.write_units, 5) self.assertEqual(table.schema, Schema.create(hash_key=('foo', 'N'))) def test_create_table_without_api_call(self): table = self.layer2.table_from_schema( name='footest', schema=Schema.create(hash_key=('foo', 'N'))) self.assertEqual(table.name, 'footest') self.assertEqual(table.schema, Schema.create(hash_key=('foo', 'N'))) # describe_table is never called. 
self.assertEqual(self.api.describe_table.call_count, 0) def test_create_schema_with_hash_and_range(self): schema = self.layer2.create_schema('foo', int, 'bar', str) self.assertEqual(schema.hash_key_name, 'foo') self.assertEqual(schema.hash_key_type, 'N') self.assertEqual(schema.range_key_name, 'bar') self.assertEqual(schema.range_key_type, 'S') def test_create_schema_with_hash(self): schema = self.layer2.create_schema('foo', str) self.assertEqual(schema.hash_key_name, 'foo') self.assertEqual(schema.hash_key_type, 'S') self.assertIsNone(schema.range_key_name) self.assertIsNone(schema.range_key_type) class TestSchemaEquality(unittest.TestCase): def test_schema_equal(self): s1 = Schema.create(hash_key=('foo', 'N')) s2 = Schema.create(hash_key=('foo', 'N')) self.assertEqual(s1, s2) def test_schema_not_equal(self): s1 = Schema.create(hash_key=('foo', 'N')) s2 = Schema.create(hash_key=('bar', 'N')) s3 = Schema.create(hash_key=('foo', 'S')) self.assertNotEqual(s1, s2) self.assertNotEqual(s1, s3) def test_equal_with_hash_and_range(self): s1 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S')) s2 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S')) self.assertEqual(s1, s2) def test_schema_with_hash_and_range_not_equal(self): s1 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S')) s2 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'N')) s3 = Schema.create(hash_key=('foo', 'S'), range_key=('baz', 'N')) s4 = Schema.create(hash_key=('bar', 'N'), range_key=('baz', 'N')) self.assertNotEqual(s1, s2) self.assertNotEqual(s1, s3) self.assertNotEqual(s1, s4) self.assertNotEqual(s2, s4) self.assertNotEqual(s3, s4) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/dynamodb/test_types.py000066400000000000000000000072421225267101000212330ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# from decimal import Decimal from tests.unit import unittest from boto.dynamodb import types from boto.dynamodb.exceptions import DynamoDBNumberError class TestDynamizer(unittest.TestCase): def setUp(self): pass def test_encoding_to_dynamodb(self): dynamizer = types.Dynamizer() self.assertEqual(dynamizer.encode('foo'), {'S': 'foo'}) self.assertEqual(dynamizer.encode(54), {'N': '54'}) self.assertEqual(dynamizer.encode(Decimal('1.1')), {'N': '1.1'}) self.assertEqual(dynamizer.encode(set([1, 2, 3])), {'NS': ['1', '2', '3']}) self.assertEqual(dynamizer.encode(set(['foo', 'bar'])), {'SS': ['foo', 'bar']}) self.assertEqual(dynamizer.encode(types.Binary('\x01')), {'B': 'AQ=='}) self.assertEqual(dynamizer.encode(set([types.Binary('\x01')])), {'BS': ['AQ==']}) def test_decoding_to_dynamodb(self): dynamizer = types.Dynamizer() self.assertEqual(dynamizer.decode({'S': 'foo'}), 'foo') self.assertEqual(dynamizer.decode({'N': '54'}), 54) self.assertEqual(dynamizer.decode({'N': '1.1'}), Decimal('1.1')) self.assertEqual(dynamizer.decode({'NS': ['1', '2', '3']}), set([1, 2, 3])) self.assertEqual(dynamizer.decode({'SS': ['foo', 'bar']}), set(['foo', 'bar'])) self.assertEqual(dynamizer.decode({'B': 'AQ=='}), types.Binary('\x01')) self.assertEqual(dynamizer.decode({'BS': ['AQ==']}), set([types.Binary('\x01')])) def test_float_conversion_errors(self): dynamizer = types.Dynamizer() # When supporting decimals, certain floats will work: self.assertEqual(dynamizer.encode(1.25), {'N': '1.25'}) # And some will generate errors, which is why it's best # to just use Decimals directly: with self.assertRaises(DynamoDBNumberError): dynamizer.encode(1.1) def test_lossy_float_conversions(self): dynamizer = types.LossyFloatDynamizer() # Just testing the differences here, specifically float conversions: self.assertEqual(dynamizer.encode(1.1), {'N': '1.1'}) self.assertEqual(dynamizer.decode({'N': '1.1'}), 1.1) self.assertEqual(dynamizer.encode(set([1.1])), {'NS': ['1.1']}) self.assertEqual(dynamizer.decode({'NS': ['1.1', '2.2', '3.3']}), set([1.1, 2.2, 3.3])) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/dynamodb2/000077500000000000000000000000001225267101000165335ustar00rootroot00000000000000boto-2.20.1/tests/unit/dynamodb2/__init__.py000066400000000000000000000000001225267101000206320ustar00rootroot00000000000000boto-2.20.1/tests/unit/dynamodb2/test_layer1.py000066400000000000000000000043431225267101000213450ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. """ Tests for Layer1 of DynamoDB v2 """ from tests.unit import unittest from boto.dynamodb2.layer1 import DynamoDBConnection from boto.regioninfo import RegionInfo class DynamoDBv2Layer1UnitTest(unittest.TestCase): dynamodb = True def test_init_region(self): dynamodb = DynamoDBConnection( aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') self.assertEqual(dynamodb.region.name, 'us-east-1') dynamodb = DynamoDBConnection( region=RegionInfo(name='us-west-2', endpoint='dynamodb.us-west-2.amazonaws.com'), aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key', ) self.assertEqual(dynamodb.region.name, 'us-west-2') def test_init_host_override(self): dynamodb = DynamoDBConnection( aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key', host='localhost', port=8000) self.assertEqual(dynamodb.host, 'localhost') self.assertEqual(dynamodb.port, 8000) boto-2.20.1/tests/unit/dynamodb2/test_table.py000066400000000000000000002326511225267101000212440ustar00rootroot00000000000000import mock import unittest from boto.dynamodb2 import exceptions from boto.dynamodb2.fields import (HashKey, RangeKey, AllIndex, KeysOnlyIndex, IncludeIndex) from boto.dynamodb2.items import Item from boto.dynamodb2.layer1 import DynamoDBConnection from boto.dynamodb2.results import ResultSet, BatchGetResultSet from boto.dynamodb2.table import Table from boto.dynamodb2.types import (STRING, NUMBER, FILTER_OPERATORS, QUERY_OPERATORS) FakeDynamoDBConnection = mock.create_autospec(DynamoDBConnection) class SchemaFieldsTestCase(unittest.TestCase): def test_hash_key(self): hash_key = HashKey('hello') self.assertEqual(hash_key.name, 'hello') self.assertEqual(hash_key.data_type, STRING) self.assertEqual(hash_key.attr_type, 'HASH') self.assertEqual(hash_key.definition(), { 'AttributeName': 'hello', 'AttributeType': 'S' }) self.assertEqual(hash_key.schema(), { 'AttributeName': 'hello', 'KeyType': 'HASH' }) def test_range_key(self): range_key = RangeKey('hello') self.assertEqual(range_key.name, 'hello') self.assertEqual(range_key.data_type, STRING) self.assertEqual(range_key.attr_type, 'RANGE') self.assertEqual(range_key.definition(), { 'AttributeName': 'hello', 'AttributeType': 'S' }) self.assertEqual(range_key.schema(), { 'AttributeName': 'hello', 'KeyType': 'RANGE' }) def test_alternate_type(self): alt_key = HashKey('alt', data_type=NUMBER) self.assertEqual(alt_key.name, 'alt') self.assertEqual(alt_key.data_type, NUMBER) self.assertEqual(alt_key.attr_type, 'HASH') self.assertEqual(alt_key.definition(), { 'AttributeName': 'alt', 'AttributeType': 'N' }) self.assertEqual(alt_key.schema(), { 'AttributeName': 'alt', 'KeyType': 'HASH' }) class IndexFieldTestCase(unittest.TestCase): def test_all_index(self): all_index = AllIndex('AllKeys', parts=[ HashKey('username'), RangeKey('date_joined') ]) self.assertEqual(all_index.name, 'AllKeys') self.assertEqual([part.attr_type for part in all_index.parts], [ 'HASH', 'RANGE' ]) self.assertEqual(all_index.projection_type, 'ALL') self.assertEqual(all_index.definition(), [ {'AttributeName': 'username', 'AttributeType': 'S'}, {'AttributeName': 'date_joined', 'AttributeType': 'S'} ]) self.assertEqual(all_index.schema(), { 'IndexName': 'AllKeys', 'KeySchema': [ { 
'AttributeName': 'username', 'KeyType': 'HASH' }, { 'AttributeName': 'date_joined', 'KeyType': 'RANGE' } ], 'Projection': { 'ProjectionType': 'ALL' } }) def test_keys_only_index(self): keys_only = KeysOnlyIndex('KeysOnly', parts=[ HashKey('username'), RangeKey('date_joined') ]) self.assertEqual(keys_only.name, 'KeysOnly') self.assertEqual([part.attr_type for part in keys_only.parts], [ 'HASH', 'RANGE' ]) self.assertEqual(keys_only.projection_type, 'KEYS_ONLY') self.assertEqual(keys_only.definition(), [ {'AttributeName': 'username', 'AttributeType': 'S'}, {'AttributeName': 'date_joined', 'AttributeType': 'S'} ]) self.assertEqual(keys_only.schema(), { 'IndexName': 'KeysOnly', 'KeySchema': [ { 'AttributeName': 'username', 'KeyType': 'HASH' }, { 'AttributeName': 'date_joined', 'KeyType': 'RANGE' } ], 'Projection': { 'ProjectionType': 'KEYS_ONLY' } }) def test_include_index(self): include_index = IncludeIndex('IncludeKeys', parts=[ HashKey('username'), RangeKey('date_joined') ], includes=[ 'gender', 'friend_count' ]) self.assertEqual(include_index.name, 'IncludeKeys') self.assertEqual([part.attr_type for part in include_index.parts], [ 'HASH', 'RANGE' ]) self.assertEqual(include_index.projection_type, 'INCLUDE') self.assertEqual(include_index.definition(), [ {'AttributeName': 'username', 'AttributeType': 'S'}, {'AttributeName': 'date_joined', 'AttributeType': 'S'} ]) self.assertEqual(include_index.schema(), { 'IndexName': 'IncludeKeys', 'KeySchema': [ { 'AttributeName': 'username', 'KeyType': 'HASH' }, { 'AttributeName': 'date_joined', 'KeyType': 'RANGE' } ], 'Projection': { 'ProjectionType': 'INCLUDE', 'NonKeyAttributes': [ 'gender', 'friend_count', ] } }) class ItemTestCase(unittest.TestCase): def setUp(self): super(ItemTestCase, self).setUp() self.table = Table('whatever', connection=FakeDynamoDBConnection()) self.johndoe = self.create_item({ 'username': 'johndoe', 'first_name': 'John', 'date_joined': 12345, }) def create_item(self, data): return Item(self.table, data=data) def test_initialization(self): empty_item = Item(self.table) self.assertEqual(empty_item.table, self.table) self.assertEqual(empty_item._data, {}) full_item = Item(self.table, data={ 'username': 'johndoe', 'date_joined': 12345, }) self.assertEqual(full_item.table, self.table) self.assertEqual(full_item._data, { 'username': 'johndoe', 'date_joined': 12345, }) # The next couple methods make use of ``sorted(...)`` so we get consistent # ordering everywhere & no erroneous failures. def test_keys(self): self.assertEqual(sorted(self.johndoe.keys()), [ 'date_joined', 'first_name', 'username', ]) def test_values(self): self.assertEqual(sorted(self.johndoe.values()), [ 12345, 'John', 'johndoe', ]) def test_contains(self): self.assertTrue('username' in self.johndoe) self.assertTrue('first_name' in self.johndoe) self.assertTrue('date_joined' in self.johndoe) self.assertFalse('whatever' in self.johndoe) def test_iter(self): self.assertEqual(list(self.johndoe), [ 'johndoe', 'John', 12345, ]) def test_get(self): self.assertEqual(self.johndoe.get('username'), 'johndoe') self.assertEqual(self.johndoe.get('first_name'), 'John') self.assertEqual(self.johndoe.get('date_joined'), 12345) # Test a missing key. No default yields ``None``. self.assertEqual(self.johndoe.get('last_name'), None) # This time with a default. 
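# Item.get() mirrors dict.get(): a missing key returns None by default, or
# the supplied fallback, e.g. self.johndoe.get('last_name', True) -> True.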
self.assertEqual(self.johndoe.get('last_name', True), True) def test_items(self): self.assertEqual(sorted(self.johndoe.items()), [ ('date_joined', 12345), ('first_name', 'John'), ('username', 'johndoe'), ]) def test_attribute_access(self): self.assertEqual(self.johndoe['username'], 'johndoe') self.assertEqual(self.johndoe['first_name'], 'John') self.assertEqual(self.johndoe['date_joined'], 12345) # Test a missing key. self.assertEqual(self.johndoe['last_name'], None) # Set a key. self.johndoe['last_name'] = 'Doe' # Test accessing the new key. self.assertEqual(self.johndoe['last_name'], 'Doe') # Delete a key. del self.johndoe['last_name'] # Test the now-missing-again key. self.assertEqual(self.johndoe['last_name'], None) def test_needs_save(self): self.johndoe.mark_clean() self.assertFalse(self.johndoe.needs_save()) self.johndoe['last_name'] = 'Doe' self.assertTrue(self.johndoe.needs_save()) def test_needs_save_set_changed(self): # First, ensure we're clean. self.johndoe.mark_clean() self.assertFalse(self.johndoe.needs_save()) # Add a friends collection. self.johndoe['friends'] = set(['jane', 'alice']) self.assertTrue(self.johndoe.needs_save()) # Now mark it clean, then change the collection. # This does NOT call ``__setitem__``, so the item used to be # incorrectly appearing to be clean, when it had in fact been changed. self.johndoe.mark_clean() self.assertFalse(self.johndoe.needs_save()) self.johndoe['friends'].add('bob') self.assertTrue(self.johndoe.needs_save()) def test_mark_clean(self): self.johndoe['last_name'] = 'Doe' self.assertTrue(self.johndoe.needs_save()) self.johndoe.mark_clean() self.assertFalse(self.johndoe.needs_save()) def test_load(self): empty_item = Item(self.table) empty_item.load({ 'Item': { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, } }) self.assertEqual(empty_item['username'], 'johndoe') self.assertEqual(empty_item['date_joined'], 1366056668) self.assertEqual(sorted(empty_item['friends']), sorted([ 'alice', 'bob', 'jane' ])) def test_get_keys(self): # Setup the data. self.table.schema = [ HashKey('username'), RangeKey('date_joined'), ] self.assertEqual(self.johndoe.get_keys(), { 'username': 'johndoe', 'date_joined': 12345, }) def test_get_raw_keys(self): # Setup the data. self.table.schema = [ HashKey('username'), RangeKey('date_joined'), ] self.assertEqual(self.johndoe.get_raw_keys(), { 'username': {'S': 'johndoe'}, 'date_joined': {'N': '12345'}, }) def test_build_expects(self): # Pristine. self.assertEqual(self.johndoe.build_expects(), { 'first_name': { 'Exists': False, }, 'username': { 'Exists': False, }, 'date_joined': { 'Exists': False, }, }) # Without modifications. self.johndoe.mark_clean() self.assertEqual(self.johndoe.build_expects(), { 'first_name': { 'Exists': True, 'Value': { 'S': 'John', }, }, 'username': { 'Exists': True, 'Value': { 'S': 'johndoe', }, }, 'date_joined': { 'Exists': True, 'Value': { 'N': '12345', }, }, }) # Change some data. self.johndoe['first_name'] = 'Johann' # Add some data. self.johndoe['last_name'] = 'Doe' # Delete some data. del self.johndoe['date_joined'] # All fields (default). self.assertEqual(self.johndoe.build_expects(), { 'first_name': { 'Exists': True, 'Value': { 'S': 'John', }, }, 'last_name': { 'Exists': False, }, 'username': { 'Exists': True, 'Value': { 'S': 'johndoe', }, }, 'date_joined': { 'Exists': True, 'Value': { 'N': '12345', }, }, }) # Only a subset of the fields. 
self.assertEqual(self.johndoe.build_expects(fields=[ 'first_name', 'last_name', 'date_joined', ]), { 'first_name': { 'Exists': True, 'Value': { 'S': 'John', }, }, 'last_name': { 'Exists': False, }, 'date_joined': { 'Exists': True, 'Value': { 'N': '12345', }, }, }) def test_prepare_full(self): self.assertEqual(self.johndoe.prepare_full(), { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'date_joined': {'N': '12345'} }) self.johndoe['friends'] = set(['jane', 'alice']) self.assertEqual(self.johndoe.prepare_full(), { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'date_joined': {'N': '12345'}, 'friends': {'SS': ['jane', 'alice']}, }) def test_prepare_full_empty_set(self): self.johndoe['friends'] = set() self.assertEqual(self.johndoe.prepare_full(), { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'date_joined': {'N': '12345'} }) def test_prepare_partial(self): self.johndoe.mark_clean() # Change some data. self.johndoe['first_name'] = 'Johann' # Add some data. self.johndoe['last_name'] = 'Doe' # Delete some data. del self.johndoe['date_joined'] final_data, fields = self.johndoe.prepare_partial() self.assertEqual(final_data, { 'date_joined': { 'Action': 'DELETE', }, 'first_name': { 'Action': 'PUT', 'Value': {'S': 'Johann'}, }, 'last_name': { 'Action': 'PUT', 'Value': {'S': 'Doe'}, }, }) self.assertEqual(fields, set([ 'first_name', 'last_name', 'date_joined' ])) def test_prepare_partial_empty_set(self): self.johndoe.mark_clean() # Change some data. self.johndoe['first_name'] = 'Johann' # Add some data. self.johndoe['last_name'] = 'Doe' # Delete some data. del self.johndoe['date_joined'] # Put an empty set on the ``Item``. self.johndoe['friends'] = set() final_data, fields = self.johndoe.prepare_partial() self.assertEqual(final_data, { 'date_joined': { 'Action': 'DELETE', }, 'first_name': { 'Action': 'PUT', 'Value': {'S': 'Johann'}, }, 'last_name': { 'Action': 'PUT', 'Value': {'S': 'Doe'}, }, }) self.assertEqual(fields, set([ 'first_name', 'last_name', 'date_joined' ])) def test_save_no_changes(self): # Unchanged, no save. with mock.patch.object(self.table, '_put_item', return_value=True) \ as mock_put_item: # Pretend we loaded it via ``get_item``... self.johndoe.mark_clean() self.assertFalse(self.johndoe.save()) self.assertFalse(mock_put_item.called) def test_save_with_changes(self): # With changed data. with mock.patch.object(self.table, '_put_item', return_value=True) \ as mock_put_item: self.johndoe.mark_clean() self.johndoe['first_name'] = 'J' self.johndoe['new_attr'] = 'never_seen_before' self.assertTrue(self.johndoe.save()) self.assertFalse(self.johndoe.needs_save()) self.assertTrue(mock_put_item.called) mock_put_item.assert_called_once_with({ 'username': {'S': 'johndoe'}, 'first_name': {'S': 'J'}, 'new_attr': {'S': 'never_seen_before'}, 'date_joined': {'N': '12345'} }, expects={ 'username': { 'Value': { 'S': 'johndoe', }, 'Exists': True, }, 'first_name': { 'Value': { 'S': 'John', }, 'Exists': True, }, 'new_attr': { 'Exists': False, }, 'date_joined': { 'Value': { 'N': '12345', }, 'Exists': True, }, }) def test_save_with_changes_overwrite(self): # With changed data. 
with mock.patch.object(self.table, '_put_item', return_value=True) \ as mock_put_item: self.johndoe['first_name'] = 'J' self.johndoe['new_attr'] = 'never_seen_before' # OVERWRITE ALL THE THINGS self.assertTrue(self.johndoe.save(overwrite=True)) self.assertFalse(self.johndoe.needs_save()) self.assertTrue(mock_put_item.called) mock_put_item.assert_called_once_with({ 'username': {'S': 'johndoe'}, 'first_name': {'S': 'J'}, 'new_attr': {'S': 'never_seen_before'}, 'date_joined': {'N': '12345'} }, expects=None) def test_partial_no_changes(self): # Unchanged, no save. with mock.patch.object(self.table, '_update_item', return_value=True) \ as mock_update_item: # Pretend we loaded it via ``get_item``... self.johndoe.mark_clean() self.assertFalse(self.johndoe.partial_save()) self.assertFalse(mock_update_item.called) def test_partial_with_changes(self): # Setup the data. self.table.schema = [ HashKey('username'), ] # With changed data. with mock.patch.object(self.table, '_update_item', return_value=True) \ as mock_update_item: # Pretend we loaded it via ``get_item``... self.johndoe.mark_clean() # Now... MODIFY!!! self.johndoe['first_name'] = 'J' self.johndoe['last_name'] = 'Doe' del self.johndoe['date_joined'] self.assertTrue(self.johndoe.partial_save()) self.assertFalse(self.johndoe.needs_save()) self.assertTrue(mock_update_item.called) mock_update_item.assert_called_once_with({ 'username': 'johndoe', }, { 'first_name': { 'Action': 'PUT', 'Value': {'S': 'J'}, }, 'last_name': { 'Action': 'PUT', 'Value': {'S': 'Doe'}, }, 'date_joined': { 'Action': 'DELETE', } }, expects={ 'first_name': { 'Value': { 'S': 'John', }, 'Exists': True }, 'last_name': { 'Exists': False }, 'date_joined': { 'Value': { 'N': '12345', }, 'Exists': True }, }) def test_delete(self): # Setup the data. self.table.schema = [ HashKey('username'), RangeKey('date_joined'), ] with mock.patch.object(self.table, 'delete_item', return_value=True) \ as mock_delete_item: self.johndoe.delete() self.assertTrue(mock_delete_item.called) mock_delete_item.assert_called_once_with( username='johndoe', date_joined=12345 ) def test_nonzero(self): self.assertTrue(self.johndoe) self.assertFalse(self.create_item({})) def fake_results(name, greeting='hello', exclusive_start_key=None, limit=None): if exclusive_start_key is None: exclusive_start_key = -1 if limit == 0: raise Exception("Web Service Returns '400 Bad Request'") end_cap = 13 results = [] start_key = exclusive_start_key + 1 for i in range(start_key, start_key + 5): if i < end_cap: results.append("%s %s #%s" % (greeting, name, i)) # Don't return more than limit results if limit < len(results): results = results[:limit] retval = { 'results': results, } if exclusive_start_key + 5 < end_cap: retval['last_key'] = exclusive_start_key + 5 return retval class ResultSetTestCase(unittest.TestCase): def setUp(self): super(ResultSetTestCase, self).setUp() self.results = ResultSet() self.results.to_call(fake_results, 'john', greeting='Hello', limit=20) def test_first_key(self): self.assertEqual(self.results.first_key, 'exclusive_start_key') def test_fetch_more(self): # First "page". self.results.fetch_more() self.assertEqual(self.results._results, [ 'Hello john #0', 'Hello john #1', 'Hello john #2', 'Hello john #3', 'Hello john #4', ]) # Fake in a last key. self.results._last_key_seen = 4 # Second "page". self.results.fetch_more() self.assertEqual(self.results._results, [ 'Hello john #5', 'Hello john #6', 'Hello john #7', 'Hello john #8', 'Hello john #9', ]) # Fake in a last key. 
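# fake_results() above serves five results per page starting just past
# exclusive_start_key, capped at 13 items total, so seeding _last_key_seen
# here stands in for the pagination cursor a real response would carry.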
self.results._last_key_seen = 9 # Last "page". self.results.fetch_more() self.assertEqual(self.results._results, [ 'Hello john #10', 'Hello john #11', 'Hello john #12', ]) # Fake in a key outside the range. self.results._last_key_seen = 15 # Empty "page". Nothing new gets added self.results.fetch_more() self.assertEqual(self.results._results, []) # Make sure we won't check for results in the future. self.assertFalse(self.results._results_left) def test_iteration(self): # First page. self.assertEqual(self.results.next(), 'Hello john #0') self.assertEqual(self.results.next(), 'Hello john #1') self.assertEqual(self.results.next(), 'Hello john #2') self.assertEqual(self.results.next(), 'Hello john #3') self.assertEqual(self.results.next(), 'Hello john #4') self.assertEqual(self.results.call_kwargs['limit'], 15) # Second page. self.assertEqual(self.results.next(), 'Hello john #5') self.assertEqual(self.results.next(), 'Hello john #6') self.assertEqual(self.results.next(), 'Hello john #7') self.assertEqual(self.results.next(), 'Hello john #8') self.assertEqual(self.results.next(), 'Hello john #9') self.assertEqual(self.results.call_kwargs['limit'], 10) # Third page. self.assertEqual(self.results.next(), 'Hello john #10') self.assertEqual(self.results.next(), 'Hello john #11') self.assertEqual(self.results.next(), 'Hello john #12') self.assertRaises(StopIteration, self.results.next) self.assertEqual(self.results.call_kwargs['limit'], 7) def test_limit_smaller_than_first_page(self): results = ResultSet() results.to_call(fake_results, 'john', greeting='Hello', limit=2) self.assertEqual(results.next(), 'Hello john #0') self.assertEqual(results.next(), 'Hello john #1') self.assertRaises(StopIteration, results.next) def test_limit_equals_page(self): results = ResultSet() results.to_call(fake_results, 'john', greeting='Hello', limit=5) # First page self.assertEqual(results.next(), 'Hello john #0') self.assertEqual(results.next(), 'Hello john #1') self.assertEqual(results.next(), 'Hello john #2') self.assertEqual(results.next(), 'Hello john #3') self.assertEqual(results.next(), 'Hello john #4') self.assertRaises(StopIteration, results.next) def test_limit_greater_than_page(self): results = ResultSet() results.to_call(fake_results, 'john', greeting='Hello', limit=6) # First page self.assertEqual(results.next(), 'Hello john #0') self.assertEqual(results.next(), 'Hello john #1') self.assertEqual(results.next(), 'Hello john #2') self.assertEqual(results.next(), 'Hello john #3') self.assertEqual(results.next(), 'Hello john #4') # Second page self.assertEqual(results.next(), 'Hello john #5') self.assertRaises(StopIteration, results.next) def test_iteration_noresults(self): def none(limit=10): return { 'results': [], } results = ResultSet() results.to_call(none, limit=20) self.assertRaises(StopIteration, results.next) def test_iteration_sporadic_pages(self): # Some pages have no/incomplete results but have a ``LastEvaluatedKey`` # (for instance, scans with filters), so we need to accommodate that. def sporadic(): # A dict, because Python closures have read-only access to the # reference itself. count = {'value': -1} def _wrapper(limit=10, exclusive_start_key=None): count['value'] = count['value'] + 1 if count['value'] == 0: # Full page. return { 'results': [ 'Result #0', 'Result #1', 'Result #2', 'Result #3', ], 'last_key': 'page-1' } elif count['value'] == 1: # Empty page but continue. return { 'results': [], 'last_key': 'page-2' } elif count['value'] == 2: # Final page. 
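# Omitting 'last_key' marks this as the final page; the ResultSet
# stops fetching once it sees a response without one.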
return { 'results': [ 'Result #4', 'Result #5', 'Result #6', ], } return _wrapper results = ResultSet() results.to_call(sporadic(), limit=20) # First page self.assertEqual(results.next(), 'Result #0') self.assertEqual(results.next(), 'Result #1') self.assertEqual(results.next(), 'Result #2') self.assertEqual(results.next(), 'Result #3') # Second page (misses!) # Moves on to the third page self.assertEqual(results.next(), 'Result #4') self.assertEqual(results.next(), 'Result #5') self.assertEqual(results.next(), 'Result #6') self.assertRaises(StopIteration, results.next) def test_list(self): self.assertEqual(list(self.results), [ 'Hello john #0', 'Hello john #1', 'Hello john #2', 'Hello john #3', 'Hello john #4', 'Hello john #5', 'Hello john #6', 'Hello john #7', 'Hello john #8', 'Hello john #9', 'Hello john #10', 'Hello john #11', 'Hello john #12' ]) def fake_batch_results(keys): results = [] simulate_unprocessed = True if len(keys) and keys[0] == 'johndoe': simulate_unprocessed = False for key in keys: if simulate_unprocessed and key == 'johndoe': continue results.append("hello %s" % key) retval = { 'results': results, 'last_key': None, } if simulate_unprocessed: retval['unprocessed_keys'] = ['johndoe'] return retval class BatchGetResultSetTestCase(unittest.TestCase): def setUp(self): super(BatchGetResultSetTestCase, self).setUp() self.results = BatchGetResultSet(keys=[ 'alice', 'bob', 'jane', 'johndoe', ]) self.results.to_call(fake_batch_results) def test_fetch_more(self): # First "page". self.results.fetch_more() self.assertEqual(self.results._results, [ 'hello alice', 'hello bob', 'hello jane', ]) self.assertEqual(self.results._keys_left, ['johndoe']) # Second "page". self.results.fetch_more() self.assertEqual(self.results._results, [ 'hello johndoe', ]) # Empty "page". Nothing new gets added self.results.fetch_more() self.assertEqual(self.results._results, []) # Make sure we won't check for results in the future. self.assertFalse(self.results._results_left) def test_iteration(self): # First page. 
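# fake_batch_results() above simulates DynamoDB's UnprocessedKeys behaviour:
# 'johndoe' is skipped on the first call and only served once it is retried,
# so iteration should still yield all four keys in order.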
self.assertEqual(self.results.next(), 'hello alice') self.assertEqual(self.results.next(), 'hello bob') self.assertEqual(self.results.next(), 'hello jane') self.assertEqual(self.results.next(), 'hello johndoe') self.assertRaises(StopIteration, self.results.next) class TableTestCase(unittest.TestCase): def setUp(self): super(TableTestCase, self).setUp() self.users = Table('users', connection=FakeDynamoDBConnection()) self.default_connection = DynamoDBConnection( aws_access_key_id='access_key', aws_secret_access_key='secret_key' ) def test__introspect_schema(self): raw_schema_1 = [ { "AttributeName": "username", "KeyType": "HASH" }, { "AttributeName": "date_joined", "KeyType": "RANGE" } ] schema_1 = self.users._introspect_schema(raw_schema_1) self.assertEqual(len(schema_1), 2) self.assertTrue(isinstance(schema_1[0], HashKey)) self.assertEqual(schema_1[0].name, 'username') self.assertTrue(isinstance(schema_1[1], RangeKey)) self.assertEqual(schema_1[1].name, 'date_joined') raw_schema_2 = [ { "AttributeName": "username", "KeyType": "BTREE" }, ] self.assertRaises( exceptions.UnknownSchemaFieldError, self.users._introspect_schema, raw_schema_2 ) def test__introspect_indexes(self): raw_indexes_1 = [ { "IndexName": "MostRecentlyJoinedIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" }, { "AttributeName": "date_joined", "KeyType": "RANGE" } ], "Projection": { "ProjectionType": "KEYS_ONLY" } }, { "IndexName": "EverybodyIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" }, ], "Projection": { "ProjectionType": "ALL" } }, { "IndexName": "GenderIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" }, { "AttributeName": "date_joined", "KeyType": "RANGE" } ], "Projection": { "ProjectionType": "INCLUDE", "NonKeyAttributes": [ 'gender', ] } } ] indexes_1 = self.users._introspect_indexes(raw_indexes_1) self.assertEqual(len(indexes_1), 3) self.assertTrue(isinstance(indexes_1[0], KeysOnlyIndex)) self.assertEqual(indexes_1[0].name, 'MostRecentlyJoinedIndex') self.assertEqual(len(indexes_1[0].parts), 2) self.assertTrue(isinstance(indexes_1[1], AllIndex)) self.assertEqual(indexes_1[1].name, 'EverybodyIndex') self.assertEqual(len(indexes_1[1].parts), 1) self.assertTrue(isinstance(indexes_1[2], IncludeIndex)) self.assertEqual(indexes_1[2].name, 'GenderIndex') self.assertEqual(len(indexes_1[2].parts), 2) self.assertEqual(indexes_1[2].includes_fields, ['gender']) raw_indexes_2 = [ { "IndexName": "MostRecentlyJoinedIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" }, { "AttributeName": "date_joined", "KeyType": "RANGE" } ], "Projection": { "ProjectionType": "SOMETHING_CRAZY" } }, ] self.assertRaises( exceptions.UnknownIndexFieldError, self.users._introspect_indexes, raw_indexes_2 ) def test_initialization(self): users = Table('users', connection=self.default_connection) self.assertEqual(users.table_name, 'users') self.assertTrue(isinstance(users.connection, DynamoDBConnection)) self.assertEqual(users.throughput['read'], 5) self.assertEqual(users.throughput['write'], 5) self.assertEqual(users.schema, None) self.assertEqual(users.indexes, None) groups = Table('groups', connection=FakeDynamoDBConnection()) self.assertEqual(groups.table_name, 'groups') self.assertTrue(hasattr(groups.connection, 'assert_called_once_with')) def test_create_simple(self): conn = FakeDynamoDBConnection() with mock.patch.object(conn, 'create_table', return_value={}) \ as mock_create_table: retval = Table.create('users', schema=[ HashKey('username'), 
RangeKey('date_joined', data_type=NUMBER) ], connection=conn) self.assertTrue(retval) self.assertTrue(mock_create_table.called) mock_create_table.assert_called_once_with(attribute_definitions=[ { 'AttributeName': 'username', 'AttributeType': 'S' }, { 'AttributeName': 'date_joined', 'AttributeType': 'N' } ], table_name='users', key_schema=[ { 'KeyType': 'HASH', 'AttributeName': 'username' }, { 'KeyType': 'RANGE', 'AttributeName': 'date_joined' } ], provisioned_throughput={ 'WriteCapacityUnits': 5, 'ReadCapacityUnits': 5 }) def test_create_full(self): conn = FakeDynamoDBConnection() with mock.patch.object(conn, 'create_table', return_value={}) \ as mock_create_table: retval = Table.create('users', schema=[ HashKey('username'), RangeKey('date_joined', data_type=NUMBER) ], throughput={ 'read':20, 'write': 10, }, indexes=[ KeysOnlyIndex('FriendCountIndex', parts=[ RangeKey('friend_count') ]), ], connection=conn) self.assertTrue(retval) self.assertTrue(mock_create_table.called) mock_create_table.assert_called_once_with(attribute_definitions=[ { 'AttributeName': 'username', 'AttributeType': 'S' }, { 'AttributeName': 'date_joined', 'AttributeType': 'N' }, { 'AttributeName': 'friend_count', 'AttributeType': 'S' } ], key_schema=[ { 'KeyType': 'HASH', 'AttributeName': 'username' }, { 'KeyType': 'RANGE', 'AttributeName': 'date_joined' } ], table_name='users', provisioned_throughput={ 'WriteCapacityUnits': 10, 'ReadCapacityUnits': 20 }, local_secondary_indexes=[ { 'KeySchema': [ { 'KeyType': 'RANGE', 'AttributeName': 'friend_count' } ], 'IndexName': 'FriendCountIndex', 'Projection': { 'ProjectionType': 'KEYS_ONLY' } } ]) def test_describe(self): expected = { "Table": { "AttributeDefinitions": [ { "AttributeName": "username", "AttributeType": "S" } ], "ItemCount": 5, "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" } ], "LocalSecondaryIndexes": [ { "IndexName": "UsernameIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" } ], "Projection": { "ProjectionType": "KEYS_ONLY" } } ], "ProvisionedThroughput": { "ReadCapacityUnits": 20, "WriteCapacityUnits": 6 }, "TableName": "Thread", "TableStatus": "ACTIVE" } } with mock.patch.object( self.users.connection, 'describe_table', return_value=expected) as mock_describe: self.assertEqual(self.users.throughput['read'], 5) self.assertEqual(self.users.throughput['write'], 5) self.assertEqual(self.users.schema, None) self.assertEqual(self.users.indexes, None) self.users.describe() self.assertEqual(self.users.throughput['read'], 20) self.assertEqual(self.users.throughput['write'], 6) self.assertEqual(len(self.users.schema), 1) self.assertEqual(isinstance(self.users.schema[0], HashKey), 1) self.assertEqual(len(self.users.indexes), 1) mock_describe.assert_called_once_with('users') def test_update(self): with mock.patch.object( self.users.connection, 'update_table', return_value={}) as mock_update: self.assertEqual(self.users.throughput['read'], 5) self.assertEqual(self.users.throughput['write'], 5) self.users.update(throughput={ 'read': 7, 'write': 2, }) self.assertEqual(self.users.throughput['read'], 7) self.assertEqual(self.users.throughput['write'], 2) mock_update.assert_called_once_with('users', { 'WriteCapacityUnits': 2, 'ReadCapacityUnits': 7 }) def test_delete(self): with mock.patch.object( self.users.connection, 'delete_table', return_value={}) as mock_delete: self.assertTrue(self.users.delete()) mock_delete.assert_called_once_with('users') def test_get_item(self): expected = { 'Item': { 'username': {'S': 'johndoe'}, 
'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, } } with mock.patch.object( self.users.connection, 'get_item', return_value=expected) as mock_get_item: item = self.users.get_item(username='johndoe') self.assertEqual(item['username'], 'johndoe') self.assertEqual(item['first_name'], 'John') mock_get_item.assert_called_once_with('users', { 'username': {'S': 'johndoe'} }, consistent_read=False) def test_lookup_hash(self): """Tests the "lookup" function with just a hash key""" expected = { 'Item': { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, } } # Set the Schema self.users.schema = [ HashKey('username'), RangeKey('date_joined', data_type=NUMBER), ] with mock.patch.object( self.users, 'get_item', return_value=expected) as mock_get_item: self.users.lookup('johndoe') mock_get_item.assert_called_once_with( username= 'johndoe') def test_lookup_hash_and_range(self): """Test the "lookup" function with a hash and range key""" expected = { 'Item': { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, } } # Set the Schema self.users.schema = [ HashKey('username'), RangeKey('date_joined', data_type=NUMBER), ] with mock.patch.object( self.users, 'get_item', return_value=expected) as mock_get_item: self.users.lookup('johndoe', 1366056668) mock_get_item.assert_called_once_with( username= 'johndoe', date_joined= 1366056668) def test_put_item(self): with mock.patch.object( self.users.connection, 'put_item', return_value={}) as mock_put_item: self.users.put_item(data={ 'username': 'johndoe', 'last_name': 'Doe', 'date_joined': 12345, }) mock_put_item.assert_called_once_with('users', { 'username': {'S': 'johndoe'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '12345'} }, expected={ 'username': { 'Exists': False, }, 'last_name': { 'Exists': False, }, 'date_joined': { 'Exists': False, } }) def test_private_put_item(self): with mock.patch.object( self.users.connection, 'put_item', return_value={}) as mock_put_item: self.users._put_item({'some': 'data'}) mock_put_item.assert_called_once_with('users', {'some': 'data'}) def test_private_update_item(self): with mock.patch.object( self.users.connection, 'update_item', return_value={}) as mock_update_item: self.users._update_item({ 'username': 'johndoe' }, { 'some': 'data', }) mock_update_item.assert_called_once_with('users', { 'username': {'S': 'johndoe'}, }, { 'some': 'data', }) def test_delete_item(self): with mock.patch.object( self.users.connection, 'delete_item', return_value={}) as mock_delete_item: self.assertTrue(self.users.delete_item(username='johndoe', date_joined=23456)) mock_delete_item.assert_called_once_with('users', { 'username': { 'S': 'johndoe' }, 'date_joined': { 'N': '23456' } }) def test_get_key_fields_no_schema_populated(self): expected = { "Table": { "AttributeDefinitions": [ { "AttributeName": "username", "AttributeType": "S" }, { "AttributeName": "date_joined", "AttributeType": "N" } ], "ItemCount": 5, "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" }, { "AttributeName": "date_joined", "KeyType": "RANGE" } ], "LocalSecondaryIndexes": [ { "IndexName": "UsernameIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": 
"HASH" } ], "Projection": { "ProjectionType": "KEYS_ONLY" } } ], "ProvisionedThroughput": { "ReadCapacityUnits": 20, "WriteCapacityUnits": 6 }, "TableName": "Thread", "TableStatus": "ACTIVE" } } with mock.patch.object( self.users.connection, 'describe_table', return_value=expected) as mock_describe: self.assertEqual(self.users.schema, None) key_fields = self.users.get_key_fields() self.assertEqual(key_fields, ['username', 'date_joined']) self.assertEqual(len(self.users.schema), 2) mock_describe.assert_called_once_with('users') def test_batch_write_no_writes(self): with mock.patch.object( self.users.connection, 'batch_write_item', return_value={}) as mock_batch: with self.users.batch_write() as batch: pass self.assertFalse(mock_batch.called) def test_batch_write(self): with mock.patch.object( self.users.connection, 'batch_write_item', return_value={}) as mock_batch: with self.users.batch_write() as batch: batch.put_item(data={ 'username': 'jane', 'date_joined': 12342547 }) batch.delete_item(username='johndoe') batch.put_item(data={ 'username': 'alice', 'date_joined': 12342888 }) mock_batch.assert_called_once_with({ 'users': [ { 'PutRequest': { 'Item': { 'username': {'S': 'jane'}, 'date_joined': {'N': '12342547'} } } }, { 'PutRequest': { 'Item': { 'username': {'S': 'alice'}, 'date_joined': {'N': '12342888'} } } }, { 'DeleteRequest': { 'Key': { 'username': {'S': 'johndoe'}, } } }, ] }) def test_batch_write_dont_swallow_exceptions(self): with mock.patch.object( self.users.connection, 'batch_write_item', return_value={}) as mock_batch: try: with self.users.batch_write() as batch: raise Exception('OH NOES') except Exception, e: self.assertEqual(str(e), 'OH NOES') self.assertFalse(mock_batch.called) def test_batch_write_flushing(self): with mock.patch.object( self.users.connection, 'batch_write_item', return_value={}) as mock_batch: with self.users.batch_write() as batch: batch.put_item(data={ 'username': 'jane', 'date_joined': 12342547 }) # This would only be enough for one batch. batch.delete_item(username='johndoe1') batch.delete_item(username='johndoe2') batch.delete_item(username='johndoe3') batch.delete_item(username='johndoe4') batch.delete_item(username='johndoe5') batch.delete_item(username='johndoe6') batch.delete_item(username='johndoe7') batch.delete_item(username='johndoe8') batch.delete_item(username='johndoe9') batch.delete_item(username='johndoe10') batch.delete_item(username='johndoe11') batch.delete_item(username='johndoe12') batch.delete_item(username='johndoe13') batch.delete_item(username='johndoe14') batch.delete_item(username='johndoe15') batch.delete_item(username='johndoe16') batch.delete_item(username='johndoe17') batch.delete_item(username='johndoe18') batch.delete_item(username='johndoe19') batch.delete_item(username='johndoe20') batch.delete_item(username='johndoe21') batch.delete_item(username='johndoe22') batch.delete_item(username='johndoe23') # We're only at 24 items. No flushing yet. self.assertEqual(mock_batch.call_count, 0) # This pushes it over the edge. A flush happens then we start # queuing objects again. batch.delete_item(username='johndoe24') self.assertEqual(mock_batch.call_count, 1) # Since we add another, there's enough for a second call to # flush. 
batch.delete_item(username='johndoe25') self.assertEqual(mock_batch.call_count, 2) def test_batch_write_unprocessed_items(self): unprocessed = { 'UnprocessedItems': { 'users': [ { 'PutRequest': { 'username': { 'S': 'jane', }, 'date_joined': { 'N': 12342547 } }, }, ], }, } # Test enqueuing the unprocessed bits. with mock.patch.object( self.users.connection, 'batch_write_item', return_value=unprocessed) as mock_batch: with self.users.batch_write() as batch: self.assertEqual(len(batch._unprocessed), 0) # Trash the ``resend_unprocessed`` method so that we don't # infinite loop forever here. batch.resend_unprocessed = lambda: True batch.put_item(data={ 'username': 'jane', 'date_joined': 12342547 }) batch.delete_item(username='johndoe') batch.put_item(data={ 'username': 'alice', 'date_joined': 12342888 }) self.assertEqual(len(batch._unprocessed), 1) # Now test resending those unprocessed items. with mock.patch.object( self.users.connection, 'batch_write_item', return_value={}) as mock_batch: with self.users.batch_write() as batch: self.assertEqual(len(batch._unprocessed), 0) # Toss in faked unprocessed items, as though a previous batch # had failed. batch._unprocessed = [ { 'PutRequest': { 'username': { 'S': 'jane', }, 'date_joined': { 'N': 12342547 } }, }, ] batch.put_item(data={ 'username': 'jane', 'date_joined': 12342547 }) batch.delete_item(username='johndoe') batch.put_item(data={ 'username': 'alice', 'date_joined': 12342888 }) # Flush, to make sure everything has been processed. # Unprocessed items should still be hanging around. batch.flush() self.assertEqual(len(batch._unprocessed), 1) # Post-exit, this should be emptied. self.assertEqual(len(batch._unprocessed), 0) def test__build_filters(self): filters = self.users._build_filters({ 'username__eq': 'johndoe', 'date_joined__gte': 1234567, 'age__in': [30, 31, 32, 33], 'last_name__between': ['danzig', 'only'], 'first_name__null': False, 'gender__null': True, }, using=FILTER_OPERATORS) self.assertEqual(filters, { 'username': { 'AttributeValueList': [ { 'S': 'johndoe', }, ], 'ComparisonOperator': 'EQ', }, 'date_joined': { 'AttributeValueList': [ { 'N': '1234567', }, ], 'ComparisonOperator': 'GE', }, 'age': { 'AttributeValueList': [ {'N': '30'}, {'N': '31'}, {'N': '32'}, {'N': '33'}, ], 'ComparisonOperator': 'IN', }, 'last_name': { 'AttributeValueList': [{'S': 'danzig'}, {'S': 'only'}], 'ComparisonOperator': 'BETWEEN', }, 'first_name': { 'ComparisonOperator': 'NOT_NULL' }, 'gender': { 'ComparisonOperator': 'NULL' }, }) self.assertRaises(exceptions.UnknownFilterTypeError, self.users._build_filters, { 'darling__die': True, } ) q_filters = self.users._build_filters({ 'username__eq': 'johndoe', 'date_joined__gte': 1234567, 'last_name__between': ['danzig', 'only'], 'gender__beginswith': 'm', }, using=QUERY_OPERATORS) self.assertEqual(q_filters, { 'username': { 'AttributeValueList': [ { 'S': 'johndoe', }, ], 'ComparisonOperator': 'EQ', }, 'date_joined': { 'AttributeValueList': [ { 'N': '1234567', }, ], 'ComparisonOperator': 'GE', }, 'last_name': { 'AttributeValueList': [{'S': 'danzig'}, {'S': 'only'}], 'ComparisonOperator': 'BETWEEN', }, 'gender': { 'AttributeValueList': [{'S': 'm'}], 'ComparisonOperator': 'BEGINS_WITH', }, }) self.assertRaises(exceptions.UnknownFilterTypeError, self.users._build_filters, { 'darling__die': True, }, using=QUERY_OPERATORS ) self.assertRaises(exceptions.UnknownFilterTypeError, self.users._build_filters, { 'first_name__null': True, }, using=QUERY_OPERATORS ) def test_private_query(self): expected = { 
"ConsumedCapacity": { "CapacityUnits": 0.5, "TableName": "users" }, "Count": 4, "Items": [ { 'username': {'S': 'johndoe'}, 'first_name': {'S': 'John'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366056668'}, 'friend_count': {'N': '3'}, 'friends': {'SS': ['alice', 'bob', 'jane']}, }, { 'username': {'S': 'jane'}, 'first_name': {'S': 'Jane'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366057777'}, 'friend_count': {'N': '2'}, 'friends': {'SS': ['alice', 'johndoe']}, }, { 'username': {'S': 'alice'}, 'first_name': {'S': 'Alice'}, 'last_name': {'S': 'Expert'}, 'date_joined': {'N': '1366056680'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['jane']}, }, { 'username': {'S': 'bob'}, 'first_name': {'S': 'Bob'}, 'last_name': {'S': 'Smith'}, 'date_joined': {'N': '1366056888'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['johndoe']}, }, ], "ScannedCount": 4 } with mock.patch.object( self.users.connection, 'query', return_value=expected) as mock_query: results = self.users._query( limit=4, reverse=True, username__between=['aaa', 'mmm'] ) usernames = [res['username'] for res in results['results']] self.assertEqual(usernames, ['johndoe', 'jane', 'alice', 'bob']) self.assertEqual(len(results['results']), 4) self.assertEqual(results['last_key'], None) mock_query.assert_called_once_with('users', consistent_read=False, scan_index_forward=True, index_name=None, attributes_to_get=None, limit=4, key_conditions={ 'username': { 'AttributeValueList': [{'S': 'aaa'}, {'S': 'mmm'}], 'ComparisonOperator': 'BETWEEN', } }, select=None ) # Now alter the expected. expected['LastEvaluatedKey'] = { 'username': { 'S': 'johndoe', }, } with mock.patch.object( self.users.connection, 'query', return_value=expected) as mock_query_2: results = self.users._query( limit=4, reverse=True, username__between=['aaa', 'mmm'], exclusive_start_key={ 'username': 'adam', }, consistent=True ) usernames = [res['username'] for res in results['results']] self.assertEqual(usernames, ['johndoe', 'jane', 'alice', 'bob']) self.assertEqual(len(results['results']), 4) self.assertEqual(results['last_key'], {'username': 'johndoe'}) mock_query_2.assert_called_once_with('users', key_conditions={ 'username': { 'AttributeValueList': [{'S': 'aaa'}, {'S': 'mmm'}], 'ComparisonOperator': 'BETWEEN', } }, index_name=None, attributes_to_get=None, scan_index_forward=True, limit=4, exclusive_start_key={ 'username': { 'S': 'adam', }, }, consistent_read=True, select=None ) def test_private_scan(self): expected = { "ConsumedCapacity": { "CapacityUnits": 0.5, "TableName": "users" }, "Count": 4, "Items": [ { 'username': {'S': 'alice'}, 'first_name': {'S': 'Alice'}, 'last_name': {'S': 'Expert'}, 'date_joined': {'N': '1366056680'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['jane']}, }, { 'username': {'S': 'bob'}, 'first_name': {'S': 'Bob'}, 'last_name': {'S': 'Smith'}, 'date_joined': {'N': '1366056888'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['johndoe']}, }, { 'username': {'S': 'jane'}, 'first_name': {'S': 'Jane'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366057777'}, 'friend_count': {'N': '2'}, 'friends': {'SS': ['alice', 'johndoe']}, }, ], "ScannedCount": 4 } with mock.patch.object( self.users.connection, 'scan', return_value=expected) as mock_scan: results = self.users._scan( limit=2, friend_count__lte=2 ) usernames = [res['username'] for res in results['results']] self.assertEqual(usernames, ['alice', 'bob', 'jane']) self.assertEqual(len(results['results']), 3) self.assertEqual(results['last_key'], None) 
mock_scan.assert_called_once_with('users', scan_filter={ 'friend_count': { 'AttributeValueList': [{'N': '2'}], 'ComparisonOperator': 'LE', } }, limit=2, segment=None, total_segments=None ) # Now alter the expected. expected['LastEvaluatedKey'] = { 'username': { 'S': 'jane', }, } with mock.patch.object( self.users.connection, 'scan', return_value=expected) as mock_scan_2: results = self.users._scan( limit=3, friend_count__lte=2, exclusive_start_key={ 'username': 'adam', }, segment=None, total_segments=None ) usernames = [res['username'] for res in results['results']] self.assertEqual(usernames, ['alice', 'bob', 'jane']) self.assertEqual(len(results['results']), 3) self.assertEqual(results['last_key'], {'username': 'jane'}) mock_scan_2.assert_called_once_with('users', scan_filter={ 'friend_count': { 'AttributeValueList': [{'N': '2'}], 'ComparisonOperator': 'LE', } }, limit=3, exclusive_start_key={ 'username': { 'S': 'adam', }, }, segment=None, total_segments=None ) def test_query(self): items_1 = { 'results': [ Item(self.users, data={ 'username': 'johndoe', 'first_name': 'John', 'last_name': 'Doe', }), Item(self.users, data={ 'username': 'jane', 'first_name': 'Jane', 'last_name': 'Doe', }), ], 'last_key': 'jane', } results = self.users.query(last_name__eq='Doe') self.assertTrue(isinstance(results, ResultSet)) self.assertEqual(len(results._results), 0) self.assertEqual(results.the_callable, self.users._query) with mock.patch.object( results, 'the_callable', return_value=items_1) as mock_query: res_1 = results.next() # Now it should be populated. self.assertEqual(len(results._results), 2) self.assertEqual(res_1['username'], 'johndoe') res_2 = results.next() self.assertEqual(res_2['username'], 'jane') self.assertEqual(mock_query.call_count, 1) items_2 = { 'results': [ Item(self.users, data={ 'username': 'foodoe', 'first_name': 'Foo', 'last_name': 'Doe', }), ], } with mock.patch.object( results, 'the_callable', return_value=items_2) as mock_query_2: res_3 = results.next() # New results should have been found. self.assertEqual(len(results._results), 1) self.assertEqual(res_3['username'], 'foodoe') self.assertRaises(StopIteration, results.next) self.assertEqual(mock_query_2.call_count, 1) def test_query_with_specific_attributes(self): items_1 = { 'results': [ Item(self.users, data={ 'username': 'johndoe', }), Item(self.users, data={ 'username': 'jane', }), ], 'last_key': 'jane', } results = self.users.query(last_name__eq='Doe', attributes=['username']) self.assertTrue(isinstance(results, ResultSet)) self.assertEqual(len(results._results), 0) self.assertEqual(results.the_callable, self.users._query) with mock.patch.object( results, 'the_callable', return_value=items_1) as mock_query: res_1 = results.next() # Now it should be populated. 
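# ResultSet evaluation is lazy: the_callable (Table._query here) only fires
# on the first next() call, not when query() itself is invoked, which is
# why _results was empty until this point.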
self.assertEqual(len(results._results), 2) self.assertEqual(res_1['username'], 'johndoe') self.assertEqual(res_1.keys(), ['username']) res_2 = results.next() self.assertEqual(res_2['username'], 'jane') self.assertEqual(mock_query.call_count, 1) def test_scan(self): items_1 = { 'results': [ Item(self.users, data={ 'username': 'johndoe', 'first_name': 'John', 'last_name': 'Doe', }), Item(self.users, data={ 'username': 'jane', 'first_name': 'Jane', 'last_name': 'Doe', }), ], 'last_key': 'jane', } results = self.users.scan(last_name__eq='Doe') self.assertTrue(isinstance(results, ResultSet)) self.assertEqual(len(results._results), 0) self.assertEqual(results.the_callable, self.users._scan) with mock.patch.object( results, 'the_callable', return_value=items_1) as mock_scan: res_1 = results.next() # Now it should be populated. self.assertEqual(len(results._results), 2) self.assertEqual(res_1['username'], 'johndoe') res_2 = results.next() self.assertEqual(res_2['username'], 'jane') self.assertEqual(mock_scan.call_count, 1) items_2 = { 'results': [ Item(self.users, data={ 'username': 'zoeydoe', 'first_name': 'Zoey', 'last_name': 'Doe', }), ], } with mock.patch.object( results, 'the_callable', return_value=items_2) as mock_scan_2: res_3 = results.next() # New results should have been found. self.assertEqual(len(results._results), 1) self.assertEqual(res_3['username'], 'zoeydoe') self.assertRaises(StopIteration, results.next) self.assertEqual(mock_scan_2.call_count, 1) def test_count(self): expected = { "Table": { "AttributeDefinitions": [ { "AttributeName": "username", "AttributeType": "S" } ], "ItemCount": 5, "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" } ], "LocalSecondaryIndexes": [ { "IndexName": "UsernameIndex", "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" } ], "Projection": { "ProjectionType": "KEYS_ONLY" } } ], "ProvisionedThroughput": { "ReadCapacityUnits": 20, "WriteCapacityUnits": 6 }, "TableName": "Thread", "TableStatus": "ACTIVE" } } with mock.patch.object( self.users, 'describe', return_value=expected) as mock_count: self.assertEqual(self.users.count(), 5) def test_private_batch_get(self): expected = { "ConsumedCapacity": { "CapacityUnits": 0.5, "TableName": "users" }, 'Responses': { 'users': [ { 'username': {'S': 'alice'}, 'first_name': {'S': 'Alice'}, 'last_name': {'S': 'Expert'}, 'date_joined': {'N': '1366056680'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['jane']}, }, { 'username': {'S': 'bob'}, 'first_name': {'S': 'Bob'}, 'last_name': {'S': 'Smith'}, 'date_joined': {'N': '1366056888'}, 'friend_count': {'N': '1'}, 'friends': {'SS': ['johndoe']}, }, { 'username': {'S': 'jane'}, 'first_name': {'S': 'Jane'}, 'last_name': {'S': 'Doe'}, 'date_joined': {'N': '1366057777'}, 'friend_count': {'N': '2'}, 'friends': {'SS': ['alice', 'johndoe']}, }, ], }, "UnprocessedKeys": { }, } with mock.patch.object( self.users.connection, 'batch_get_item', return_value=expected) as mock_batch_get: results = self.users._batch_get(keys=[ {'username': 'alice', 'friend_count': 1}, {'username': 'bob', 'friend_count': 1}, {'username': 'jane'}, ]) usernames = [res['username'] for res in results['results']] self.assertEqual(usernames, ['alice', 'bob', 'jane']) self.assertEqual(len(results['results']), 3) self.assertEqual(results['last_key'], None) self.assertEqual(results['unprocessed_keys'], []) mock_batch_get.assert_called_once_with(request_items={ 'users': { 'Keys': [ { 'username': {'S': 'alice'}, 'friend_count': {'N': '1'} }, { 'username': {'S': 'bob'}, 
'friend_count': {'N': '1'} }, { 'username': {'S': 'jane'}, } ] } }) # Now alter the expected. del expected['Responses']['users'][2] expected['UnprocessedKeys'] = { 'Keys': [ {'username': {'S': 'jane',}}, ], } with mock.patch.object( self.users.connection, 'batch_get_item', return_value=expected) as mock_batch_get_2: results = self.users._batch_get(keys=[ {'username': 'alice', 'friend_count': 1}, {'username': 'bob', 'friend_count': 1}, {'username': 'jane'}, ]) usernames = [res['username'] for res in results['results']] self.assertEqual(usernames, ['alice', 'bob']) self.assertEqual(len(results['results']), 2) self.assertEqual(results['last_key'], None) self.assertEqual(results['unprocessed_keys'], [ {'username': 'jane'} ]) mock_batch_get_2.assert_called_once_with(request_items={ 'users': { 'Keys': [ { 'username': {'S': 'alice'}, 'friend_count': {'N': '1'} }, { 'username': {'S': 'bob'}, 'friend_count': {'N': '1'} }, { 'username': {'S': 'jane'}, } ] } }) def test_batch_get(self): items_1 = { 'results': [ Item(self.users, data={ 'username': 'johndoe', 'first_name': 'John', 'last_name': 'Doe', }), Item(self.users, data={ 'username': 'jane', 'first_name': 'Jane', 'last_name': 'Doe', }), ], 'last_key': None, 'unprocessed_keys': [ 'zoeydoe', ] } results = self.users.batch_get(keys=[ {'username': 'johndoe'}, {'username': 'jane'}, {'username': 'zoeydoe'}, ]) self.assertTrue(isinstance(results, BatchGetResultSet)) self.assertEqual(len(results._results), 0) self.assertEqual(results.the_callable, self.users._batch_get) with mock.patch.object( results, 'the_callable', return_value=items_1) as mock_batch_get: res_1 = results.next() # Now it should be populated. self.assertEqual(len(results._results), 2) self.assertEqual(res_1['username'], 'johndoe') res_2 = results.next() self.assertEqual(res_2['username'], 'jane') self.assertEqual(mock_batch_get.call_count, 1) self.assertEqual(results._keys_left, ['zoeydoe']) items_2 = { 'results': [ Item(self.users, data={ 'username': 'zoeydoe', 'first_name': 'Zoey', 'last_name': 'Doe', }), ], } with mock.patch.object( results, 'the_callable', return_value=items_2) as mock_batch_get_2: res_3 = results.next() # New results should have been found. self.assertEqual(len(results._results), 1) self.assertEqual(res_3['username'], 'zoeydoe') self.assertRaises(StopIteration, results.next) self.assertEqual(mock_batch_get_2.call_count, 1) self.assertEqual(results._keys_left, []) boto-2.20.1/tests/unit/ec2/000077500000000000000000000000001225267101000153255ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/__init__.py000066400000000000000000000000001225267101000174240ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/autoscale/000077500000000000000000000000001225267101000173055ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/autoscale/__init__.py000066400000000000000000000000001225267101000214040ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/autoscale/test_group.py000066400000000000000000000461071225267101000220620ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from datetime import datetime from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.ec2.autoscale import AutoScaleConnection from boto.ec2.autoscale.group import AutoScalingGroup from boto.ec2.autoscale.policy import ScalingPolicy from boto.ec2.autoscale.tag import Tag from boto.ec2.blockdevicemapping import EBSBlockDeviceType, BlockDeviceMapping from boto.ec2.autoscale import launchconfig class TestAutoScaleGroup(AWSMockServiceTestCase): connection_class = AutoScaleConnection def setUp(self): super(TestAutoScaleGroup, self).setUp() def default_body(self): return """ requestid """ def test_autoscaling_group_with_termination_policies(self): self.set_http_response(status_code=200) autoscale = AutoScalingGroup( name='foo', launch_config='lauch_config', min_size=1, max_size=2, termination_policies=['OldestInstance', 'OldestLaunchConfiguration']) self.service_connection.create_auto_scaling_group(autoscale) self.assert_request_parameters({ 'Action': 'CreateAutoScalingGroup', 'AutoScalingGroupName': 'foo', 'LaunchConfigurationName': 'lauch_config', 'MaxSize': 2, 'MinSize': 1, 'TerminationPolicies.member.1': 'OldestInstance', 'TerminationPolicies.member.2': 'OldestLaunchConfiguration', }, ignore_params_values=['Version']) class TestAutoScaleGroupHonorCooldown(AWSMockServiceTestCase): connection_class = AutoScaleConnection def default_body(self): return """ 9fb7e2db-6998-11e2-a985-57c82EXAMPLE """ def test_honor_cooldown(self): self.set_http_response(status_code=200) self.service_connection.set_desired_capacity('foo', 10, True) self.assert_request_parameters({ 'Action': 'SetDesiredCapacity', 'AutoScalingGroupName': 'foo', 'DesiredCapacity': 10, 'HonorCooldown': 'true', }, ignore_params_values=['Version']) class TestScheduledGroup(AWSMockServiceTestCase): connection_class = AutoScaleConnection def setUp(self): super(TestScheduledGroup, self).setUp() def default_body(self): return """ requestid """ def test_scheduled_group_creation(self): self.set_http_response(status_code=200) self.service_connection.create_scheduled_group_action('foo', 'scheduled-foo', desired_capacity=1, start_time=datetime(2013, 1, 1, 22, 55, 31), end_time=datetime(2013, 2, 1, 22, 55, 31), min_size=1, max_size=2, recurrence='0 10 * * *') self.assert_request_parameters({ 'Action': 'PutScheduledUpdateGroupAction', 'AutoScalingGroupName': 'foo', 'ScheduledActionName': 'scheduled-foo', 'MaxSize': 2, 'MinSize': 1, 'DesiredCapacity': 1, 'EndTime': 
'2013-02-01T22:55:31', 'StartTime': '2013-01-01T22:55:31', 'Recurrence': '0 10 * * *', }, ignore_params_values=['Version']) class TestParseAutoScaleGroupResponse(AWSMockServiceTestCase): connection_class = AutoScaleConnection def default_body(self): return """ test_group EC2 2012-09-27T20:19:47.082Z test_launchconfig Healthy us-east-1a i-z118d054 test_launchconfig InService 1 us-east-1c us-east-1a 1 0 300 myarn OldestInstance OldestLaunchConfiguration 2 """ def test_get_all_groups_is_parsed_correctly(self): self.set_http_response(status_code=200) response = self.service_connection.get_all_groups(names=['test_group']) self.assertEqual(len(response), 1, response) as_group = response[0] self.assertEqual(as_group.availability_zones, ['us-east-1c', 'us-east-1a']) self.assertEqual(as_group.default_cooldown, 300) self.assertEqual(as_group.desired_capacity, 1) self.assertEqual(as_group.enabled_metrics, []) self.assertEqual(as_group.health_check_period, 0) self.assertEqual(as_group.health_check_type, 'EC2') self.assertEqual(as_group.launch_config_name, 'test_launchconfig') self.assertEqual(as_group.load_balancers, []) self.assertEqual(as_group.min_size, 1) self.assertEqual(as_group.max_size, 2) self.assertEqual(as_group.name, 'test_group') self.assertEqual(as_group.suspended_processes, []) self.assertEqual(as_group.tags, []) self.assertEqual(as_group.termination_policies, ['OldestInstance', 'OldestLaunchConfiguration']) class TestDescribeTerminationPolicies(AWSMockServiceTestCase): connection_class = AutoScaleConnection def default_body(self): return """ ClosestToNextInstanceHour Default NewestInstance OldestInstance OldestLaunchConfiguration requestid """ def test_autoscaling_group_with_termination_policies(self): self.set_http_response(status_code=200) response = self.service_connection.get_termination_policies() self.assertListEqual( response, ['ClosestToNextInstanceHour', 'Default', 'NewestInstance', 'OldestInstance', 'OldestLaunchConfiguration']) class TestLaunchConfiguration(AWSMockServiceTestCase): connection_class = AutoScaleConnection def default_body(self): # This is a dummy response return """ """ def test_launch_config(self): # This unit test is based on #753 and #1343 self.set_http_response(status_code=200) dev_sdf = EBSBlockDeviceType(snapshot_id='snap-12345') dev_sdg = EBSBlockDeviceType(snapshot_id='snap-12346') bdm = BlockDeviceMapping() bdm['/dev/sdf'] = dev_sdf bdm['/dev/sdg'] = dev_sdg lc = launchconfig.LaunchConfiguration( connection=self.service_connection, name='launch_config', image_id='123456', instance_type = 'm1.large', security_groups = ['group1', 'group2'], spot_price='price', block_device_mappings = [bdm], associate_public_ip_address = True ) response = self.service_connection.create_launch_configuration(lc) self.assert_request_parameters({ 'Action': 'CreateLaunchConfiguration', 'BlockDeviceMappings.member.1.DeviceName': '/dev/sdf', 'BlockDeviceMappings.member.1.Ebs.DeleteOnTermination': 'false', 'BlockDeviceMappings.member.1.Ebs.SnapshotId': 'snap-12345', 'BlockDeviceMappings.member.2.DeviceName': '/dev/sdg', 'BlockDeviceMappings.member.2.Ebs.DeleteOnTermination': 'false', 'BlockDeviceMappings.member.2.Ebs.SnapshotId': 'snap-12346', 'EbsOptimized': 'false', 'LaunchConfigurationName': 'launch_config', 'ImageId': '123456', 'InstanceMonitoring.Enabled': 'false', 'InstanceType': 'm1.large', 'SecurityGroups.member.1': 'group1', 'SecurityGroups.member.2': 'group2', 'SpotPrice': 'price', 'AssociatePublicIpAddress' : 'true' }, ignore_params_values=['Version']) class 
TestCreateAutoScalePolicy(AWSMockServiceTestCase):
    connection_class = AutoScaleConnection

    def setUp(self):
        super(TestCreateAutoScalePolicy, self).setUp()

    def default_body(self):
        return """ arn:aws:autoscaling:us-east-1:803981987763:scaling\ Policy:b0dcf5e8 -02e6-4e31-9719-0675d0dc31ae:autoScalingGroupName/my-test-asg:\ policyName/my-scal eout-policy 3cfc6fef-c08b-11e2-a697-2922EXAMPLE """

    def test_scaling_policy_with_min_adjustment_step(self):
        self.set_http_response(status_code=200)
        policy = ScalingPolicy(
            name='foo', as_name='bar',
            adjustment_type='PercentChangeInCapacity',
            scaling_adjustment=50,
            min_adjustment_step=30)
        self.service_connection.create_scaling_policy(policy)
        self.assert_request_parameters({
            'Action': 'PutScalingPolicy',
            'PolicyName': 'foo',
            'AutoScalingGroupName': 'bar',
            'AdjustmentType': 'PercentChangeInCapacity',
            'ScalingAdjustment': 50,
            'MinAdjustmentStep': 30
        }, ignore_params_values=['Version'])

    def test_scaling_policy_with_wrong_adjustment_type(self):
        self.set_http_response(status_code=200)
        policy = ScalingPolicy(
            name='foo', as_name='bar',
            adjustment_type='ChangeInCapacity',
            scaling_adjustment=50,
            min_adjustment_step=30)
        self.service_connection.create_scaling_policy(policy)
        self.assert_request_parameters({
            'Action': 'PutScalingPolicy',
            'PolicyName': 'foo',
            'AutoScalingGroupName': 'bar',
            'AdjustmentType': 'ChangeInCapacity',
            'ScalingAdjustment': 50
        }, ignore_params_values=['Version'])

    def test_scaling_policy_without_min_adjustment_step(self):
        self.set_http_response(status_code=200)
        policy = ScalingPolicy(
            name='foo', as_name='bar',
            adjustment_type='PercentChangeInCapacity',
            scaling_adjustment=50)
        self.service_connection.create_scaling_policy(policy)
        self.assert_request_parameters({
            'Action': 'PutScalingPolicy',
            'PolicyName': 'foo',
            'AutoScalingGroupName': 'bar',
            'AdjustmentType': 'PercentChangeInCapacity',
            'ScalingAdjustment': 50
        }, ignore_params_values=['Version'])


class TestPutNotificationConfiguration(AWSMockServiceTestCase):
    connection_class = AutoScaleConnection

    def setUp(self):
        super(TestPutNotificationConfiguration, self).setUp()

    def default_body(self):
        return """ requestid """

    def test_autoscaling_group_put_notification_configuration(self):
        self.set_http_response(status_code=200)
        autoscale = AutoScalingGroup(
            name='ana', launch_config='launch_config',
            min_size=1, max_size=2,
            termination_policies=['OldestInstance', 'OldestLaunchConfiguration'])
        self.service_connection.put_notification_configuration(
            autoscale,
            'arn:aws:sns:us-east-1:19890506:AutoScaling-Up',
            ['autoscaling:EC2_INSTANCE_LAUNCH'])
        self.assert_request_parameters({
            'Action': 'PutNotificationConfiguration',
            'AutoScalingGroupName': 'ana',
            'NotificationTypes.member.1': 'autoscaling:EC2_INSTANCE_LAUNCH',
            'TopicARN': 'arn:aws:sns:us-east-1:19890506:AutoScaling-Up',
        }, ignore_params_values=['Version'])


class TestDeleteNotificationConfiguration(AWSMockServiceTestCase):
    connection_class = AutoScaleConnection

    def setUp(self):
        super(TestDeleteNotificationConfiguration, self).setUp()

    def default_body(self):
        return """ requestid """

    def test_autoscaling_group_delete_notification_configuration(self):
        self.set_http_response(status_code=200)
        autoscale = AutoScalingGroup(
            name='ana', launch_config='launch_config',
            min_size=1, max_size=2,
            termination_policies=['OldestInstance', 'OldestLaunchConfiguration'])
        self.service_connection.delete_notification_configuration(
            autoscale,
            'arn:aws:sns:us-east-1:19890506:AutoScaling-Up')
        self.assert_request_parameters({
            'Action': 'DeleteNotificationConfiguration',
            'AutoScalingGroupName':
'ana', 'TopicARN': 'arn:aws:sns:us-east-1:19890506:AutoScaling-Up', }, ignore_params_values=['Version']) class TestAutoScalingTag(AWSMockServiceTestCase): connection_class = AutoScaleConnection def default_body(self): return """ requestId """ def test_create_or_update_tags(self): self.set_http_response(status_code=200) tags = [ Tag( connection=self.service_connection, key='alpha', value='tango', resource_id='sg-00000000', resource_type='auto-scaling-group', propagate_at_launch=True ), Tag( connection=self.service_connection, key='bravo', value='sierra', resource_id='sg-00000000', resource_type='auto-scaling-group', propagate_at_launch=False )] response = self.service_connection.create_or_update_tags(tags) self.assert_request_parameters({ 'Action': 'CreateOrUpdateTags', 'Tags.member.1.ResourceType': 'auto-scaling-group', 'Tags.member.1.ResourceId': 'sg-00000000', 'Tags.member.1.Key': 'alpha', 'Tags.member.1.Value': 'tango', 'Tags.member.1.PropagateAtLaunch': 'true', 'Tags.member.2.ResourceType': 'auto-scaling-group', 'Tags.member.2.ResourceId': 'sg-00000000', 'Tags.member.2.Key': 'bravo', 'Tags.member.2.Value': 'sierra', 'Tags.member.2.PropagateAtLaunch': 'false' }, ignore_params_values=['Version']) def test_endElement(self): for i in [ ('Key', 'mykey', 'key'), ('Value', 'myvalue', 'value'), ('ResourceType', 'auto-scaling-group', 'resource_type'), ('ResourceId', 'sg-01234567', 'resource_id'), ('PropagateAtLaunch', 'true', 'propagate_at_launch')]: self.check_tag_attributes_set(i[0], i[1], i[2]) def check_tag_attributes_set(self, name, value, attr): tag = Tag() tag.endElement(name, value, None) if value == 'true': self.assertEqual(getattr(tag, attr), True) else: self.assertEqual(getattr(tag, attr), value) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/cloudwatch/000077500000000000000000000000001225267101000174625ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/cloudwatch/__init__.py000066400000000000000000000000001225267101000215610ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/cloudwatch/test_connection.py000066400000000000000000000073131225267101000232360ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
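The TestCloudWatchConnection case that follows exercises build_put_params with one Python list per argument, the same calling convention CloudWatchConnection.put_metric_data exposes for publishing several metrics in a single request (it builds its parameters with this same helper). A minimal usage sketch, assuming valid AWS credentials; the region, namespace, and metric values here are illustrative assumptions, not values taken from the tests:

import boto.ec2.cloudwatch

cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')
# One list entry per MetricData.member.N in the serialized request.
cw.put_metric_data(namespace='MyApp',
                   name=['requests', 'errors'],
                   value=[412.0, 3.0],
                   unit=['Count', 'Count'])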
# import datetime from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.ec2.cloudwatch import CloudWatchConnection class TestCloudWatchConnection(AWSMockServiceTestCase): connection_class = CloudWatchConnection def test_build_put_params_multiple_everything(self): # This dictionary gets modified by the method call. # Check to make sure all updates happen appropriately. params = {} # Again, these are rubbish parameters. Pay them no mind, we care more # about the functionality of the method name = ['whatever', 'goeshere'] value = None timestamp = [ datetime.datetime(2013, 5, 13, 9, 2, 35), datetime.datetime(2013, 5, 12, 9, 2, 35), ] unit = ['lbs', 'ft'] dimensions = None statistics = [ { 'maximum': 5, 'minimum': 1, 'samplecount': 3, 'sum': 7, }, { 'maximum': 6, 'minimum': 2, 'samplecount': 4, 'sum': 5, }, ] # The important part is that this shouldn't generate a warning (due # to overwriting a variable) & should have the correct number of # Metrics (2). self.service_connection.build_put_params( params, name=name, value=value, timestamp=timestamp, unit=unit, dimensions=dimensions, statistics=statistics ) self.assertEqual(params, { 'MetricData.member.1.MetricName': 'whatever', 'MetricData.member.1.StatisticValues.Maximum': 5, 'MetricData.member.1.StatisticValues.Minimum': 1, 'MetricData.member.1.StatisticValues.SampleCount': 3, 'MetricData.member.1.StatisticValues.Sum': 7, 'MetricData.member.1.Timestamp': '2013-05-13T09:02:35', 'MetricData.member.1.Unit': 'lbs', 'MetricData.member.2.MetricName': 'goeshere', 'MetricData.member.2.StatisticValues.Maximum': 6, 'MetricData.member.2.StatisticValues.Minimum': 2, 'MetricData.member.2.StatisticValues.SampleCount': 4, 'MetricData.member.2.StatisticValues.Sum': 5, 'MetricData.member.2.Timestamp': '2013-05-12T09:02:35', # If needed, comment this next line to cause a test failure & see # the logging warning. 'MetricData.member.2.Unit': 'ft', }) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/elb/000077500000000000000000000000001225267101000160675ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/elb/__init__.py000066400000000000000000000000001225267101000201660ustar00rootroot00000000000000boto-2.20.1/tests/unit/ec2/elb/test_attribute.py000066400000000000000000000157631225267101000215170ustar00rootroot00000000000000from tests.unit import unittest import mock from boto.ec2.elb import ELBConnection from boto.ec2.elb import LoadBalancer from boto.ec2.elb.attributes import LbAttributes ATTRIBUTE_GET_TRUE_CZL_RESPONSE = r""" true 83c88b9d-12b7-11e3-8b82-87b12EXAMPLE """ ATTRIBUTE_GET_FALSE_CZL_RESPONSE = r""" false 83c88b9d-12b7-11e3-8b82-87b12EXAMPLE """ ATTRIBUTE_SET_RESPONSE = r""" 83c88b9d-12b7-11e3-8b82-87b12EXAMPLE """ # make_request arguments for setting attributes. 
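# (In each tuple, the API_PATH and API_METHOD slots are mock.ANY because
# these tests assert only on the command name and its parameters.)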
# Format: (API_COMMAND, API_PARAMS, API_PATH, API_METHOD) ATTRIBUTE_SET_CZL_TRUE_REQUEST = ( 'ModifyLoadBalancerAttributes', {'LoadBalancerAttributes.CrossZoneLoadBalancing.Enabled': 'true', 'LoadBalancerName': 'test_elb'}, mock.ANY, mock.ANY) ATTRIBUTE_SET_CZL_FALSE_REQUEST = ( 'ModifyLoadBalancerAttributes', {'LoadBalancerAttributes.CrossZoneLoadBalancing.Enabled': 'false', 'LoadBalancerName': 'test_elb'}, mock.ANY, mock.ANY) # Tests to be run on an LbAttributes # Format: # (EC2_RESPONSE_STRING, list( (string_of_attribute_to_test, value) ) ) ATTRIBUTE_TESTS = [ (ATTRIBUTE_GET_TRUE_CZL_RESPONSE, [('cross_zone_load_balancing.enabled', True)]), (ATTRIBUTE_GET_FALSE_CZL_RESPONSE, [('cross_zone_load_balancing.enabled', False)]), ] class TestLbAttributes(unittest.TestCase): """Tests LB Attributes.""" def _setup_mock(self): """Sets up a mock elb request. Returns: response, elb connection and LoadBalancer """ mock_response = mock.Mock() mock_response.status = 200 elb = ELBConnection(aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') elb.make_request = mock.Mock(return_value=mock_response) return mock_response, elb, LoadBalancer(elb, 'test_elb') def _verify_attributes(self, attributes, attr_tests): """Verifies an LbAttributes object.""" for attr, result in attr_tests: attr_result = attributes for sub_attr in attr.split('.'): attr_result = getattr(attr_result, sub_attr, None) self.assertEqual(attr_result, result) def test_get_all_lb_attributes(self): """Tests getting the LbAttributes from the elb.connection.""" mock_response, elb, _ = self._setup_mock() for response, attr_tests in ATTRIBUTE_TESTS: mock_response.read.return_value = response attributes = elb.get_all_lb_attributes('test_elb') self.assertTrue(isinstance(attributes, LbAttributes)) self._verify_attributes(attributes, attr_tests) def test_get_lb_attribute(self): """Tests getting a single attribute from elb.connection.""" mock_response, elb, _ = self._setup_mock() tests = [ ('crossZoneLoadBalancing', True, ATTRIBUTE_GET_TRUE_CZL_RESPONSE), ('crossZoneLoadBalancing', False, ATTRIBUTE_GET_FALSE_CZL_RESPONSE), ] for attr, value, response in tests: mock_response.read.return_value = response status = elb.get_lb_attribute('test_elb', attr) self.assertEqual(status, value) def test_modify_lb_attribute(self): """Tests setting the attributes from elb.connection.""" mock_response, elb, _ = self._setup_mock() tests = [ ('crossZoneLoadBalancing', True, ATTRIBUTE_SET_CZL_TRUE_REQUEST), ('crossZoneLoadBalancing', False, ATTRIBUTE_SET_CZL_FALSE_REQUEST), ] for attr, value, args in tests: mock_response.read.return_value = ATTRIBUTE_SET_RESPONSE result = elb.modify_lb_attribute('test_elb', attr, value) self.assertTrue(result) elb.make_request.assert_called_with(*args) def test_lb_get_attributes(self): """Tests the LbAttributes from the ELB object.""" mock_response, _, lb = self._setup_mock() for response, attr_tests in ATTRIBUTE_TESTS: mock_response.read.return_value = response attributes = lb.get_attributes(force=True) self.assertTrue(isinstance(attributes, LbAttributes)) self._verify_attributes(attributes, attr_tests) def test_lb_is_cross_zone_load_balancing(self): """Tests checking is_cross_zone_load_balancing.""" mock_response, _, lb = self._setup_mock() tests = [ # Format: (method, args, result, response) # Gets a true result. (lb.is_cross_zone_load_balancing, [], True, ATTRIBUTE_GET_TRUE_CZL_RESPONSE), # Returns the previous calls cached value. 
(lb.is_cross_zone_load_balancing, [], True, ATTRIBUTE_GET_FALSE_CZL_RESPONSE), # Gets a false result. (lb.is_cross_zone_load_balancing, [True], False, ATTRIBUTE_GET_FALSE_CZL_RESPONSE), ] for method, args, result, response in tests: mock_response.read.return_value = response self.assertEqual(method(*args), result) def test_lb_enable_cross_zone_load_balancing(self): """Tests enabling cross zone balancing from LoadBalancer.""" mock_response, elb, lb = self._setup_mock() mock_response.read.return_value = ATTRIBUTE_SET_RESPONSE self.assertTrue(lb.enable_cross_zone_load_balancing()) elb.make_request.assert_called_with(*ATTRIBUTE_SET_CZL_TRUE_REQUEST) def test_lb_disable_cross_zone_load_balancing(self): """Tests disabling cross zone balancing from LoadBalancer.""" mock_response, elb, lb = self._setup_mock() mock_response.read.return_value = ATTRIBUTE_SET_RESPONSE self.assertTrue(lb.disable_cross_zone_load_balancing()) elb.make_request.assert_called_with(*ATTRIBUTE_SET_CZL_FALSE_REQUEST) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/elb/test_listener.py000066400000000000000000000065231225267101000213330ustar00rootroot00000000000000#!/usr/bin/env python import xml.sax from tests.unit import unittest import boto.resultset from boto.ec2.elb.loadbalancer import LoadBalancer LISTENERS_RESPONSE = r""" 2013-07-09T19:18:00.520Z elb-boto-unit-test 30 TCP:8000 10 5 2 HTTP 80 HTTP 8000 HTTP 8080 HTTP 80 TCP 2525 TCP 25 us-east-1a elb-boto-unit-test-408121642.us-east-1.elb.amazonaws.com Z3DZXE0Q79N41H internet-facing amazon-elb amazon-elb-sg elb-boto-unit-test-408121642.us-east-1.elb.amazonaws.com 5763d932-e8cc-11e2-a940-11136cceffb8 """ class TestListenerResponseParsing(unittest.TestCase): def test_parse_complex(self): rs = boto.resultset.ResultSet([ ('member', LoadBalancer) ]) h = boto.handler.XmlHandler(rs, None) xml.sax.parseString(LISTENERS_RESPONSE, h) listeners = rs[0].listeners self.assertEqual( sorted([l.get_complex_tuple() for l in listeners]), [ (80, 8000, 'HTTP', 'HTTP'), (2525, 25, 'TCP', 'TCP'), (8080, 80, 'HTTP', 'HTTP'), ] ) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/elb/test_loadbalancer.py000066400000000000000000000116541225267101000221160ustar00rootroot00000000000000#!/usr/bin/env python from tests.unit import unittest from tests.unit import AWSMockServiceTestCase import mock from boto.ec2.elb import ELBConnection from boto.ec2.elb import LoadBalancer DISABLE_RESPONSE = r""" 3be1508e-c444-4fef-89cc-0b1223c4f02fEXAMPLE sample-zone """ class TestInstanceStatusResponseParsing(unittest.TestCase): def test_next_token(self): elb = ELBConnection(aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') mock_response = mock.Mock() mock_response.read.return_value = DISABLE_RESPONSE mock_response.status = 200 elb.make_request = mock.Mock(return_value=mock_response) disabled = elb.disable_availability_zones('mine', ['sample-zone']) self.assertEqual(disabled, ['sample-zone']) DESCRIBE_RESPONSE = r""" 2013-07-09T19:18:00.520Z elb-boto-unit-test AWSConsole-SSLNegotiationPolicy-my-test-loadbalancer EnableProxyProtocol us-east-1a elb-boto-unit-test-408121642.us-east-1.elb.amazonaws.com Z3DZXE0Q79N41H internet-facing amazon-elb amazon-elb-sg elb-boto-unit-test-408121642.us-east-1.elb.amazonaws.com EnableProxyProtocol 80 5763d932-e8cc-11e2-a940-11136cceffb8 """ class TestDescribeLoadBalancers(unittest.TestCase): def test_other_policy(self): elb = ELBConnection(aws_access_key_id='aws_access_key_id', 
aws_secret_access_key='aws_secret_access_key') mock_response = mock.Mock() mock_response.read.return_value = DESCRIBE_RESPONSE mock_response.status = 200 elb.make_request = mock.Mock(return_value=mock_response) load_balancers = elb.get_all_load_balancers() self.assertEqual(len(load_balancers), 1) lb = load_balancers[0] self.assertEqual(len(lb.policies.other_policies), 2) self.assertEqual(lb.policies.other_policies[0].policy_name, 'AWSConsole-SSLNegotiationPolicy-my-test-loadbalancer') self.assertEqual(lb.policies.other_policies[1].policy_name, 'EnableProxyProtocol') self.assertEqual(len(lb.backends), 1) self.assertEqual(len(lb.backends[0].policies), 1) self.assertEqual(lb.backends[0].policies[0].policy_name, 'EnableProxyProtocol') self.assertEqual(lb.backends[0].instance_port, 80) DETACH_RESPONSE = r""" 3be1508e-c444-4fef-89cc-0b1223c4f02fEXAMPLE """ class TestDetachSubnets(unittest.TestCase): def test_detach_subnets(self): elb = ELBConnection(aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') lb = LoadBalancer(elb, "mylb") mock_response = mock.Mock() mock_response.read.return_value = DETACH_RESPONSE mock_response.status = 200 elb.make_request = mock.Mock(return_value=mock_response) lb.detach_subnets("s-xxx") if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/test_address.py000066400000000000000000000034461225267101000203720ustar00rootroot00000000000000import mock import unittest from boto.ec2.address import Address class AddressTest(unittest.TestCase): def setUp(self): self.address = Address() self.address.connection = mock.Mock() self.address.public_ip = "192.168.1.1" def check_that_attribute_has_been_set(self, name, value, attribute): self.address.endElement(name, value, None) self.assertEqual(getattr(self.address, attribute), value) def test_endElement_sets_correct_attributes_with_values(self): for arguments in [("publicIp", "192.168.1.1", "public_ip"), ("instanceId", 1, "instance_id"), ("domain", "some domain", "domain"), ("allocationId", 1, "allocation_id"), ("associationId", 1, "association_id"), ("somethingRandom", "somethingRandom", "somethingRandom")]: self.check_that_attribute_has_been_set(arguments[0], arguments[1], arguments[2]) def test_release_calls_connection_release_address_with_correct_args(self): self.address.release() self.address.connection.release_address.assert_called_with( "192.168.1.1", dry_run=False ) def test_associate_calls_connection_associate_address_with_correct_args(self): self.address.associate(1) self.address.connection.associate_address.assert_called_with( 1, "192.168.1.1", dry_run=False ) def test_disassociate_calls_connection_disassociate_address_with_correct_args(self): self.address.disassociate() self.address.connection.disassociate_address.assert_called_with( "192.168.1.1", dry_run=False ) if __name__ == "__main__": unittest.main() boto-2.20.1/tests/unit/ec2/test_blockdevicemapping.py000066400000000000000000000135751225267101000225770ustar00rootroot00000000000000import mock import unittest from boto.ec2.connection import EC2Connection from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping from tests.unit import AWSMockServiceTestCase class BlockDeviceTypeTests(unittest.TestCase): def setUp(self): self.block_device_type = BlockDeviceType() def check_that_attribute_has_been_set(self, name, value, attribute): self.block_device_type.endElement(name, value, None) self.assertEqual(getattr(self.block_device_type, attribute), value) def 
test_endElement_sets_correct_attributes_with_values(self):
        for arguments in [("volumeId", 1, "volume_id"),
                          ("virtualName", "some name", "ephemeral_name"),
                          ("snapshotId", 1, "snapshot_id"),
                          ("volumeSize", 1, "size"),
                          ("status", "some status", "status"),
                          ("attachTime", 1, "attach_time"),
                          ("somethingRandom", "somethingRandom", "somethingRandom")]:
            self.check_that_attribute_has_been_set(arguments[0],
                                                   arguments[1],
                                                   arguments[2])

    def test_endElement_with_name_NoDevice_value_true(self):
        self.block_device_type.endElement("NoDevice", 'true', None)
        self.assertEqual(self.block_device_type.no_device, True)

    def test_endElement_with_name_NoDevice_value_other(self):
        self.block_device_type.endElement("NoDevice", 'something else', None)
        self.assertEqual(self.block_device_type.no_device, False)

    def test_endElement_with_name_deleteOnTermination_value_true(self):
        self.block_device_type.endElement("deleteOnTermination", "true", None)
        self.assertEqual(self.block_device_type.delete_on_termination, True)

    def test_endElement_with_name_deleteOnTermination_value_other(self):
        self.block_device_type.endElement("deleteOnTermination",
                                          'something else', None)
        self.assertEqual(self.block_device_type.delete_on_termination, False)


class BlockDeviceMappingTests(unittest.TestCase):
    def setUp(self):
        self.block_device_mapping = BlockDeviceMapping()

    def block_device_type_eq(self, b1, b2):
        if isinstance(b1, BlockDeviceType) and isinstance(b2, BlockDeviceType):
            return all([b1.connection == b2.connection,
                        b1.ephemeral_name == b2.ephemeral_name,
                        b1.no_device == b2.no_device,
                        b1.volume_id == b2.volume_id,
                        b1.snapshot_id == b2.snapshot_id,
                        b1.status == b2.status,
                        b1.attach_time == b2.attach_time,
                        b1.delete_on_termination == b2.delete_on_termination,
                        b1.size == b2.size])

    def test_startElement_with_name_ebs_sets_and_returns_current_value(self):
        retval = self.block_device_mapping.startElement("ebs", None, None)
        assert self.block_device_type_eq(
            retval, BlockDeviceType(self.block_device_mapping))

    def test_startElement_with_name_virtualName_sets_and_returns_current_value(self):
        retval = self.block_device_mapping.startElement("virtualName", None, None)
        assert self.block_device_type_eq(
            retval, BlockDeviceType(self.block_device_mapping))

    def test_endElement_with_name_device_sets_current_name(self):
        self.block_device_mapping.endElement("device", "/dev/null", None)
        self.assertEqual(self.block_device_mapping.current_name, "/dev/null")

    def test_endElement_with_name_deviceName_sets_current_name(self):
        self.block_device_mapping.endElement("deviceName", "some device name",
                                             None)
        self.assertEqual(self.block_device_mapping.current_name,
                         "some device name")

    def test_endElement_with_name_item_sets_current_name_key_to_current_value(self):
        self.block_device_mapping.current_name = "some name"
        self.block_device_mapping.current_value = "some value"
        self.block_device_mapping.endElement("item", "some item", None)
        self.assertEqual(self.block_device_mapping["some name"], "some value")


class TestLaunchConfiguration(AWSMockServiceTestCase):
    connection_class = EC2Connection

    def default_body(self):
        # This is a dummy response
        return """ """

    def test_run_instances_block_device_mapping(self):
        # Same as the test in ``unit/ec2/autoscale/test_group.py:TestLaunchConfiguration``,
        # but with modified request parameters (due to a mismatch between EC2 &
        # Autoscaling).
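        # (Concretely: EC2's RunInstances serializes the mapping as
        # 'BlockDeviceMapping.1.DeviceName', while the Autoscaling
        # CreateLaunchConfiguration test asserts
        # 'BlockDeviceMappings.member.1.DeviceName', a plural key with a
        # '.member.' segment.)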
self.set_http_response(status_code=200) dev_sdf = BlockDeviceType(snapshot_id='snap-12345') dev_sdg = BlockDeviceType(snapshot_id='snap-12346') bdm = BlockDeviceMapping() bdm['/dev/sdf'] = dev_sdf bdm['/dev/sdg'] = dev_sdg response = self.service_connection.run_instances( image_id='123456', instance_type='m1.large', security_groups=['group1', 'group2'], block_device_map=bdm ) self.assert_request_parameters({ 'Action': 'RunInstances', 'BlockDeviceMapping.1.DeviceName': '/dev/sdf', 'BlockDeviceMapping.1.Ebs.DeleteOnTermination': 'false', 'BlockDeviceMapping.1.Ebs.SnapshotId': 'snap-12345', 'BlockDeviceMapping.2.DeviceName': '/dev/sdg', 'BlockDeviceMapping.2.Ebs.DeleteOnTermination': 'false', 'BlockDeviceMapping.2.Ebs.SnapshotId': 'snap-12346', 'ImageId': '123456', 'InstanceType': 'm1.large', 'MaxCount': 1, 'MinCount': 1, 'SecurityGroup.1': 'group1', 'SecurityGroup.2': 'group2', }, ignore_params_values=[ 'Version', 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp' ]) if __name__ == "__main__": unittest.main() boto-2.20.1/tests/unit/ec2/test_connection.py000066400000000000000000001547721225267101000211150ustar00rootroot00000000000000#!/usr/bin/env python import httplib from datetime import datetime, timedelta from mock import MagicMock, Mock, patch from tests.unit import unittest from tests.unit import AWSMockServiceTestCase import boto.ec2 from boto.regioninfo import RegionInfo from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping from boto.ec2.connection import EC2Connection from boto.ec2.snapshot import Snapshot from boto.ec2.reservedinstance import ReservedInstancesConfiguration class TestEC2ConnectionBase(AWSMockServiceTestCase): connection_class = EC2Connection def setUp(self): super(TestEC2ConnectionBase, self).setUp() self.ec2 = self.service_connection class TestReservedInstanceOfferings(TestEC2ConnectionBase): def default_body(self): return """ d3253568-edcf-4897-9a3d-fb28e0b3fa38 2964d1bf71d8 c1.medium us-east-1c 94608000 775.0 0.0 product description default USD Heavy Utilization Hourly 0.095 false 0.045 1 2dce26e46889 c1.medium us-east-1c 94608000 775.0 0.0 Linux/UNIX default USD Heavy Utilization Hourly 0.035 false next_token """ def test_get_reserved_instance_offerings(self): self.set_http_response(status_code=200) response = self.ec2.get_all_reserved_instances_offerings() self.assertEqual(len(response), 2) instance = response[0] self.assertEqual(instance.id, '2964d1bf71d8') self.assertEqual(instance.instance_type, 'c1.medium') self.assertEqual(instance.availability_zone, 'us-east-1c') self.assertEqual(instance.duration, 94608000) self.assertEqual(instance.fixed_price, '775.0') self.assertEqual(instance.usage_price, '0.0') self.assertEqual(instance.description, 'product description') self.assertEqual(instance.instance_tenancy, 'default') self.assertEqual(instance.currency_code, 'USD') self.assertEqual(instance.offering_type, 'Heavy Utilization') self.assertEqual(len(instance.recurring_charges), 1) self.assertEqual(instance.recurring_charges[0].frequency, 'Hourly') self.assertEqual(instance.recurring_charges[0].amount, '0.095') self.assertEqual(len(instance.pricing_details), 1) self.assertEqual(instance.pricing_details[0].price, '0.045') self.assertEqual(instance.pricing_details[0].count, '1') def test_get_reserved_instance_offerings_params(self): self.set_http_response(status_code=200) self.ec2.get_all_reserved_instances_offerings( reserved_instances_offering_ids=['id1','id2'], instance_type='t1.micro', availability_zone='us-east-1', 
product_description='description', instance_tenancy='dedicated', offering_type='offering_type', include_marketplace=False, min_duration=100, max_duration=1000, max_instance_count=1, next_token='next_token', max_results=10 ) self.assert_request_parameters({ 'Action': 'DescribeReservedInstancesOfferings', 'ReservedInstancesOfferingId.1': 'id1', 'ReservedInstancesOfferingId.2': 'id2', 'InstanceType': 't1.micro', 'AvailabilityZone': 'us-east-1', 'ProductDescription': 'description', 'InstanceTenancy': 'dedicated', 'OfferingType': 'offering_type', 'IncludeMarketplace': 'false', 'MinDuration': '100', 'MaxDuration': '1000', 'MaxInstanceCount': '1', 'NextToken': 'next_token', 'MaxResults': '10',}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) class TestPurchaseReservedInstanceOffering(TestEC2ConnectionBase): def default_body(self): return """""" def test_serialized_api_args(self): self.set_http_response(status_code=200) response = self.ec2.purchase_reserved_instance_offering( 'offering_id', 1, (100.0, 'USD')) self.assert_request_parameters({ 'Action': 'PurchaseReservedInstancesOffering', 'InstanceCount': 1, 'ReservedInstancesOfferingId': 'offering_id', 'LimitPrice.Amount': '100.0', 'LimitPrice.CurrencyCode': 'USD',}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) class TestCreateImage(TestEC2ConnectionBase): def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE ami-4fa54026 """ def test_minimal(self): self.set_http_response(status_code=200) response = self.ec2.create_image( 'instance_id', 'name') self.assert_request_parameters({ 'Action': 'CreateImage', 'InstanceId': 'instance_id', 'Name': 'name'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) def test_block_device_mapping(self): self.set_http_response(status_code=200) bdm = BlockDeviceMapping() bdm['test'] = BlockDeviceType() response = self.ec2.create_image( 'instance_id', 'name', block_device_mapping=bdm) self.assert_request_parameters({ 'Action': 'CreateImage', 'InstanceId': 'instance_id', 'Name': 'name', 'BlockDeviceMapping.1.DeviceName': 'test', 'BlockDeviceMapping.1.Ebs.DeleteOnTermination': 'false'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) class TestCancelReservedInstancesListing(TestEC2ConnectionBase): def default_body(self): return """ request_id listing_id instance_id 2012-07-12T16:55:28.000Z 2012-07-12T16:55:28.000Z cancelled CANCELLED Available 0 Sold 0 Cancelled 1 Pending 0 5 166.64 USD false 4 133.32 USD false 3 99.99 USD false 2 66.66 USD false 1 33.33 USD false XqJIt1342112125076 """ def test_reserved_instances_listing(self): self.set_http_response(status_code=200) response = self.ec2.cancel_reserved_instances_listing() self.assertEqual(len(response), 1) cancellation = response[0] self.assertEqual(cancellation.status, 'cancelled') self.assertEqual(cancellation.status_message, 'CANCELLED') self.assertEqual(len(cancellation.instance_counts), 4) first = cancellation.instance_counts[0] self.assertEqual(first.state, 'Available') self.assertEqual(first.instance_count, 0) self.assertEqual(len(cancellation.price_schedules), 5) schedule = cancellation.price_schedules[0] self.assertEqual(schedule.term, 5) self.assertEqual(schedule.price, '166.64') self.assertEqual(schedule.currency_code, 'USD') self.assertEqual(schedule.active, False) class 
TestCreateReservedInstancesListing(TestEC2ConnectionBase): def default_body(self): return """ request_id listing_id instance_id 2012-07-17T17:11:09.449Z 2012-07-17T17:11:09.468Z active ACTIVE Available 1 Sold 0 Cancelled 0 Pending 0 11 2.5 USD true 10 2.5 USD false 9 2.5 USD false 8 2.0 USD false 7 2.0 USD false 6 2.0 USD false 5 1.5 USD false 4 1.5 USD false 3 0.7 USD false 2 0.7 USD false 1 0.1 USD false myIdempToken1 """ def test_create_reserved_instances_listing(self): self.set_http_response(status_code=200) response = self.ec2.create_reserved_instances_listing( 'instance_id', 1, [('2.5', 11), ('2.0', 8)], 'client_token') self.assertEqual(len(response), 1) cancellation = response[0] self.assertEqual(cancellation.status, 'active') self.assertEqual(cancellation.status_message, 'ACTIVE') self.assertEqual(len(cancellation.instance_counts), 4) first = cancellation.instance_counts[0] self.assertEqual(first.state, 'Available') self.assertEqual(first.instance_count, 1) self.assertEqual(len(cancellation.price_schedules), 11) schedule = cancellation.price_schedules[0] self.assertEqual(schedule.term, 11) self.assertEqual(schedule.price, '2.5') self.assertEqual(schedule.currency_code, 'USD') self.assertEqual(schedule.active, True) self.assert_request_parameters({ 'Action': 'CreateReservedInstancesListing', 'ReservedInstancesId': 'instance_id', 'InstanceCount': '1', 'ClientToken': 'client_token', 'PriceSchedules.0.Price': '2.5', 'PriceSchedules.0.Term': '11', 'PriceSchedules.1.Price': '2.0', 'PriceSchedules.1.Term': '8',}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) class TestDescribeSpotInstanceRequests(TestEC2ConnectionBase): def default_body(self): return """ requestid sir-id 0.003000 one-time active fulfilled 2012-10-19T18:09:26.000Z Your Spot request is fulfilled. 
mylaunchgroup ami-id mykeypair sg-id groupname t1.micro false i-id 2012-10-19T18:07:05.000Z Linux/UNIX us-east-1d """

    def test_describe_spot_instance_requests(self):
        self.set_http_response(status_code=200)
        response = self.ec2.get_all_spot_instance_requests()
        self.assertEqual(len(response), 1)
        spotrequest = response[0]
        self.assertEqual(spotrequest.id, 'sir-id')
        self.assertEqual(spotrequest.price, 0.003)
        self.assertEqual(spotrequest.type, 'one-time')
        self.assertEqual(spotrequest.state, 'active')
        self.assertEqual(spotrequest.fault, None)
        self.assertEqual(spotrequest.valid_from, None)
        self.assertEqual(spotrequest.valid_until, None)
        self.assertEqual(spotrequest.launch_group, 'mylaunchgroup')
        self.assertEqual(spotrequest.launched_availability_zone, 'us-east-1d')
        self.assertEqual(spotrequest.product_description, 'Linux/UNIX')
        self.assertEqual(spotrequest.availability_zone_group, None)
        self.assertEqual(spotrequest.create_time, '2012-10-19T18:07:05.000Z')
        self.assertEqual(spotrequest.instance_id, 'i-id')
        launch_spec = spotrequest.launch_specification
        self.assertEqual(launch_spec.key_name, 'mykeypair')
        self.assertEqual(launch_spec.instance_type, 't1.micro')
        self.assertEqual(launch_spec.image_id, 'ami-id')
        self.assertEqual(launch_spec.placement, None)
        self.assertEqual(launch_spec.kernel, None)
        self.assertEqual(launch_spec.ramdisk, None)
        self.assertEqual(launch_spec.monitored, False)
        self.assertEqual(launch_spec.subnet_id, None)
        self.assertEqual(launch_spec.block_device_mapping, None)
        self.assertEqual(launch_spec.instance_profile, None)
        self.assertEqual(launch_spec.ebs_optimized, False)
        status = spotrequest.status
        self.assertEqual(status.code, 'fulfilled')
        self.assertEqual(status.update_time, '2012-10-19T18:09:26.000Z')
        self.assertEqual(status.message, 'Your Spot request is fulfilled.')


class TestCopySnapshot(TestEC2ConnectionBase):
    def default_body(self):
        return """ request_id snap-copied-id """

    def test_copy_snapshot(self):
        self.set_http_response(status_code=200)
        snapshot_id = self.ec2.copy_snapshot('us-west-2', 'snap-id',
                                             'description')
        self.assertEqual(snapshot_id, 'snap-copied-id')
        self.assert_request_parameters({
            'Action': 'CopySnapshot',
            'Description': 'description',
            'SourceRegion': 'us-west-2',
            'SourceSnapshotId': 'snap-id'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])


class TestCopyImage(TestEC2ConnectionBase):
    def default_body(self):
        return """ request_id ami-copied-id """

    def test_copy_image(self):
        self.set_http_response(status_code=200)
        copied_ami = self.ec2.copy_image('us-west-2', 'ami-id',
                                         'name', 'description', 'client-token')
        self.assertEqual(copied_ami.image_id, 'ami-copied-id')
        self.assert_request_parameters({
            'Action': 'CopyImage',
            'Description': 'description',
            'Name': 'name',
            'SourceRegion': 'us-west-2',
            'SourceImageId': 'ami-id',
            'ClientToken': 'client-token'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_copy_image_without_name(self):
        self.set_http_response(status_code=200)
        copied_ami = self.ec2.copy_image('us-west-2', 'ami-id',
                                         description='description',
                                         client_token='client-token')
        self.assertEqual(copied_ami.image_id, 'ami-copied-id')
        self.assert_request_parameters({
            'Action': 'CopyImage',
            'Description': 'description',
            'SourceRegion': 'us-west-2',
            'SourceImageId': 'ami-id',
            'ClientToken': 'client-token'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])


class
TestAccountAttributes(TestEC2ConnectionBase): def default_body(self): return """ 6d042e8a-4bc3-43e8-8265-3cbc54753f14 vpc-max-security-groups-per-interface 5 max-instances 50 supported-platforms EC2 VPC default-vpc none """ def test_describe_account_attributes(self): self.set_http_response(status_code=200) parsed = self.ec2.describe_account_attributes() self.assertEqual(len(parsed), 4) self.assertEqual(parsed[0].attribute_name, 'vpc-max-security-groups-per-interface') self.assertEqual(parsed[0].attribute_values, ['5']) self.assertEqual(parsed[-1].attribute_name, 'default-vpc') self.assertEqual(parsed[-1].attribute_values, ['none']) class TestDescribeVPCAttribute(TestEC2ConnectionBase): def default_body(self): return """ request_id vpc-id false """ def test_describe_vpc_attribute(self): self.set_http_response(status_code=200) parsed = self.ec2.describe_vpc_attribute('vpc-id', 'enableDnsHostnames') self.assertEqual(parsed.vpc_id, 'vpc-id') self.assertFalse(parsed.enable_dns_hostnames) self.assert_request_parameters({ 'Action': 'DescribeVpcAttribute', 'VpcId': 'vpc-id', 'Attribute': 'enableDnsHostnames',}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) class TestGetAllNetworkInterfaces(TestEC2ConnectionBase): def default_body(self): return """ fc45294c-006b-457b-bab9-012f5b3b0e40 eni-0f62d866 subnet-c53c87ac vpc-cc3c87a5 ap-southeast-1b 053230519467 false in-use 02:81:60:cb:27:37 10.0.0.146 true sg-3f4b5653 default eni-attach-6537fc0c i-22197876 053230519467 5 attached 2012-07-01T21:45:27.000Z true 10.0.0.146 true 10.0.0.148 false 10.0.0.150 false """ def test_attachment_has_device_index(self): self.set_http_response(status_code=200) parsed = self.ec2.get_all_network_interfaces() self.assertEqual(5, parsed[0].attachment.device_index) class TestGetAllImages(TestEC2ConnectionBase): def default_body(self): return """ e32375e8-4ac3-4099-a8bf-3ec902b9023e ami-abcd1234 111111111111/windows2008r2-hvm-i386-20130702 available 111111111111 false i386 machine windows true Windows Test Windows Test Description bp-6ba54002 ebs /dev/sda1 /dev/sda1 snap-abcd1234 30 true standard xvdb ephemeral0 xvdc ephemeral1 xvdd ephemeral2 xvde ephemeral3 hvm xen """ def test_get_all_images(self): self.set_http_response(status_code=200) parsed = self.ec2.get_all_images() self.assertEquals(1, len(parsed)) self.assertEquals("ami-abcd1234", parsed[0].id) self.assertEquals("111111111111/windows2008r2-hvm-i386-20130702", parsed[0].location) self.assertEquals("available", parsed[0].state) self.assertEquals("111111111111", parsed[0].ownerId) self.assertEquals("111111111111", parsed[0].owner_id) self.assertEquals(False, parsed[0].is_public) self.assertEquals("i386", parsed[0].architecture) self.assertEquals("machine", parsed[0].type) self.assertEquals(None, parsed[0].kernel_id) self.assertEquals(None, parsed[0].ramdisk_id) self.assertEquals(None, parsed[0].owner_alias) self.assertEquals("windows", parsed[0].platform) self.assertEquals("Windows Test", parsed[0].name) self.assertEquals("Windows Test Description", parsed[0].description) self.assertEquals("ebs", parsed[0].root_device_type) self.assertEquals("/dev/sda1", parsed[0].root_device_name) self.assertEquals("hvm", parsed[0].virtualization_type) self.assertEquals("xen", parsed[0].hypervisor) self.assertEquals(None, parsed[0].instance_lifecycle) # 1 billing product parsed into a list self.assertEquals(1, len(parsed[0].billing_products)) self.assertEquals("bp-6ba54002", parsed[0].billing_products[0]) # Just verify length, 
there is already a block_device_mapping test
        self.assertEquals(5, len(parsed[0].block_device_mapping))

        # TODO: No tests for product codes?


class TestModifyInterfaceAttribute(TestEC2ConnectionBase):
    def default_body(self):
        return """ 657a4623-5620-4232-b03b-427e852d71cf true """

    def test_modify_description(self):
        self.set_http_response(status_code=200)
        self.ec2.modify_network_interface_attribute('id', 'description', 'foo')
        self.assert_request_parameters({
            'Action': 'ModifyNetworkInterfaceAttribute',
            'NetworkInterfaceId': 'id',
            'Description.Value': 'foo'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_modify_source_dest_check_bool(self):
        self.set_http_response(status_code=200)
        self.ec2.modify_network_interface_attribute('id', 'sourceDestCheck',
                                                    True)
        self.assert_request_parameters({
            'Action': 'ModifyNetworkInterfaceAttribute',
            'NetworkInterfaceId': 'id',
            'SourceDestCheck.Value': 'true'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_modify_source_dest_check_str(self):
        self.set_http_response(status_code=200)
        self.ec2.modify_network_interface_attribute('id', 'sourceDestCheck',
                                                    'true')
        self.assert_request_parameters({
            'Action': 'ModifyNetworkInterfaceAttribute',
            'NetworkInterfaceId': 'id',
            'SourceDestCheck.Value': 'true'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_modify_source_dest_check_invalid(self):
        self.set_http_response(status_code=200)
        with self.assertRaises(ValueError):
            self.ec2.modify_network_interface_attribute('id',
                                                        'sourceDestCheck',
                                                        123)

    def test_modify_delete_on_termination_bool(self):
        self.set_http_response(status_code=200)
        self.ec2.modify_network_interface_attribute('id',
                                                    'deleteOnTermination',
                                                    True, attachment_id='bar')
        self.assert_request_parameters({
            'Action': 'ModifyNetworkInterfaceAttribute',
            'NetworkInterfaceId': 'id',
            'Attachment.AttachmentId': 'bar',
            'Attachment.DeleteOnTermination': 'true'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_modify_delete_on_termination_str(self):
        self.set_http_response(status_code=200)
        self.ec2.modify_network_interface_attribute('id',
                                                    'deleteOnTermination',
                                                    'false',
                                                    attachment_id='bar')
        self.assert_request_parameters({
            'Action': 'ModifyNetworkInterfaceAttribute',
            'NetworkInterfaceId': 'id',
            'Attachment.AttachmentId': 'bar',
            'Attachment.DeleteOnTermination': 'false'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_modify_delete_on_termination_invalid(self):
        self.set_http_response(status_code=200)
        with self.assertRaises(ValueError):
            self.ec2.modify_network_interface_attribute('id',
                                                        'deleteOnTermination',
                                                        123,
                                                        attachment_id='bar')

    def test_modify_group_set_list(self):
        self.set_http_response(status_code=200)
        self.ec2.modify_network_interface_attribute('id', 'groupSet',
                                                    ['sg-1', 'sg-2'])
        self.assert_request_parameters({
            'Action': 'ModifyNetworkInterfaceAttribute',
            'NetworkInterfaceId': 'id',
            'SecurityGroupId.1': 'sg-1',
            'SecurityGroupId.2': 'sg-2'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])

    def test_modify_group_set_invalid(self):
        self.set_http_response(status_code=200)
        with self.assertRaisesRegexp(TypeError, 'iterable'):
            self.ec2.modify_network_interface_attribute('id', 'groupSet',
                                                        False)

    def test_modify_attr_invalid(self):
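        # An unrecognized attribute name should raise ValueError with the
        # 'Unknown attribute' message asserted below, before any request
        # is serialized.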
self.set_http_response(status_code=200) with self.assertRaisesRegexp(ValueError, 'Unknown attribute'): self.ec2.modify_network_interface_attribute('id', 'invalid', 0) class TestConnectToRegion(unittest.TestCase): def setUp(self): self.https_connection = Mock(spec=httplib.HTTPSConnection) self.https_connection_factory = ( Mock(return_value=self.https_connection), ()) def test_aws_region(self): region = boto.ec2.RegionData.keys()[0] self.ec2 = boto.ec2.connect_to_region(region, https_connection_factory=self.https_connection_factory, aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key' ) self.assertEqual(boto.ec2.RegionData[region], self.ec2.host) def test_non_aws_region(self): self.ec2 = boto.ec2.connect_to_region('foo', https_connection_factory=self.https_connection_factory, aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key', region = RegionInfo(name='foo', endpoint='https://foo.com/bar') ) self.assertEqual('https://foo.com/bar', self.ec2.host) def test_missing_region(self): self.ec2 = boto.ec2.connect_to_region('foo', https_connection_factory=self.https_connection_factory, aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key' ) self.assertEqual(None, self.ec2) class TestTrimSnapshots(TestEC2ConnectionBase): """ Test snapshot trimming functionality by ensuring that expected calls are made when given a known set of volume snapshots. """ def _get_snapshots(self): """ Generate a list of fake snapshots with names and dates. """ snaps = [] # Generate some dates offset by days, weeks, months now = datetime.now() dates = [ now, now - timedelta(days=1), now - timedelta(days=2), now - timedelta(days=7), now - timedelta(days=14), datetime(now.year, now.month, 1) - timedelta(days=30), datetime(now.year, now.month, 1) - timedelta(days=60), datetime(now.year, now.month, 1) - timedelta(days=90) ] for date in dates: # Create a fake snapshot for each date snap = Snapshot(self.ec2) snap.tags['Name'] = 'foo' # Times are expected to be ISO8601 strings snap.start_time = date.strftime('%Y-%m-%dT%H:%M:%S.000Z') snaps.append(snap) return snaps def test_trim_defaults(self): """ Test trimming snapshots with the default arguments, which should keep all monthly backups forever. The result of this test should be that nothing is deleted. """ # Setup mocks orig = { 'get_all_snapshots': self.ec2.get_all_snapshots, 'delete_snapshot': self.ec2.delete_snapshot } snaps = self._get_snapshots() self.ec2.get_all_snapshots = MagicMock(return_value=snaps) self.ec2.delete_snapshot = MagicMock() # Call the tested method self.ec2.trim_snapshots() # Assertions self.assertEqual(True, self.ec2.get_all_snapshots.called) self.assertEqual(False, self.ec2.delete_snapshot.called) # Restore self.ec2.get_all_snapshots = orig['get_all_snapshots'] self.ec2.delete_snapshot = orig['delete_snapshot'] def test_trim_months(self): """ Test trimming monthly snapshots and ensure that older months get deleted properly. The result of this test should be that the two oldest snapshots get deleted. 
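        With monthly_backups=1, only the most recent month-boundary
        snapshot is kept, so the two older monthly fakes are the ones
        expected to be removed.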
""" # Setup mocks orig = { 'get_all_snapshots': self.ec2.get_all_snapshots, 'delete_snapshot': self.ec2.delete_snapshot } snaps = self._get_snapshots() self.ec2.get_all_snapshots = MagicMock(return_value=snaps) self.ec2.delete_snapshot = MagicMock() # Call the tested method self.ec2.trim_snapshots(monthly_backups=1) # Assertions self.assertEqual(True, self.ec2.get_all_snapshots.called) self.assertEqual(2, self.ec2.delete_snapshot.call_count) # Restore self.ec2.get_all_snapshots = orig['get_all_snapshots'] self.ec2.delete_snapshot = orig['delete_snapshot'] class TestModifyReservedInstances(TestEC2ConnectionBase): def default_body(self): return """ bef729b6-0731-4489-8881-2258746ae163 rimod-3aae219d-3d63-47a9-a7e9-e764example """ def test_serialized_api_args(self): self.set_http_response(status_code=200) response = self.ec2.modify_reserved_instances( 'a-token-goes-here', reserved_instance_ids=[ '2567o137-8a55-48d6-82fb-7258506bb497', ], target_configurations=[ ReservedInstancesConfiguration( availability_zone='us-west-2c', platform='EC2-VPC', instance_count=3 ), ] ) self.assert_request_parameters({ 'Action': 'ModifyReservedInstances', 'ClientToken': 'a-token-goes-here', 'ReservedInstancesConfigurationSetItemType.0.AvailabilityZone': 'us-west-2c', 'ReservedInstancesConfigurationSetItemType.0.InstanceCount': 3, 'ReservedInstancesConfigurationSetItemType.0.Platform': 'EC2-VPC', 'ReservedInstancesId.1': '2567o137-8a55-48d6-82fb-7258506bb497' }, ignore_params_values=[ 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version' ]) self.assertEqual(response, 'rimod-3aae219d-3d63-47a9-a7e9-e764example') class TestDescribeReservedInstancesModifications(TestEC2ConnectionBase): def default_body(self): return """ eb4a6e3c-3689-445c-b536-19e38df35898 rimod-49b9433e-fdc7-464a-a6e5-9dabcexample 2567o137-8a55-48d6-82fb-7258506bb497 9d5cb137-5d65-4479-b4ac-8c337example us-east-1b EC2-VPC 1 2013-09-02T21:20:19.637Z 2013-09-02T21:38:24.143Z 2013-09-02T21:00:00.000Z fulfilled token-f5b56c05-09b0-4d17-8d8c-c75d8a67b806 """ def test_serialized_api_args(self): self.set_http_response(status_code=200) response = self.ec2.describe_reserved_instances_modifications( reserved_instances_modification_ids=[ '2567o137-8a55-48d6-82fb-7258506bb497' ], filters={ 'status': 'processing', } ) self.assert_request_parameters({ 'Action': 'DescribeReservedInstancesModifications', 'Filter.1.Name': 'status', 'Filter.1.Value.1': 'processing', 'ReservedInstancesModificationId.1': '2567o137-8a55-48d6-82fb-7258506bb497' }, ignore_params_values=[ 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version' ]) # Make sure the response was parsed correctly. 
self.assertEqual( response[0].modification_id, 'rimod-49b9433e-fdc7-464a-a6e5-9dabcexample' ) self.assertEqual( response[0].create_date, datetime(2013, 9, 2, 21, 20, 19, 637000) ) self.assertEqual( response[0].update_date, datetime(2013, 9, 2, 21, 38, 24, 143000) ) self.assertEqual( response[0].effective_date, datetime(2013, 9, 2, 21, 0, 0, 0) ) self.assertEqual( response[0].status, 'fulfilled' ) self.assertEqual( response[0].status_message, None ) self.assertEqual( response[0].client_token, 'token-f5b56c05-09b0-4d17-8d8c-c75d8a67b806' ) self.assertEqual( response[0].reserved_instances[0].id, '2567o137-8a55-48d6-82fb-7258506bb497' ) self.assertEqual( response[0].modification_results[0].availability_zone, 'us-east-1b' ) self.assertEqual( response[0].modification_results[0].platform, 'EC2-VPC' ) self.assertEqual( response[0].modification_results[0].instance_count, 1 ) self.assertEqual(len(response), 1) class TestRegisterImage(TestEC2ConnectionBase): def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE ami-1a2b3c4d """ def test_vm_type_default(self): self.set_http_response(status_code=200) self.ec2.register_image('name', 'description', image_location='s3://foo') self.assert_request_parameters({ 'Action': 'RegisterImage', 'ImageLocation': 's3://foo', 'Name': 'name', 'Description': 'description', }, ignore_params_values=[ 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version' ]) def test_vm_type_hvm(self): self.set_http_response(status_code=200) self.ec2.register_image('name', 'description', image_location='s3://foo', virtualization_type='hvm') self.assert_request_parameters({ 'Action': 'RegisterImage', 'ImageLocation': 's3://foo', 'Name': 'name', 'Description': 'description', 'VirtualizationType': 'hvm' }, ignore_params_values=[ 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version' ]) class TestTerminateInstances(TestEC2ConnectionBase): def default_body(self): return """ req-59a9ad52-0434-470c-ad48-4f89ded3a03e i-000043a2 16 running 16 running """ def test_terminate_bad_response(self): self.set_http_response(status_code=200) self.ec2.terminate_instances('foo') class TestDescribeInstances(TestEC2ConnectionBase): def default_body(self): return """ """ def test_default_behavior(self): self.set_http_response(status_code=200) self.ec2.get_all_instances() self.assert_request_parameters({ 'Action': 'DescribeInstances'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) def test_max_results(self): self.set_http_response(status_code=200) self.ec2.get_all_instances( max_results=10 ) self.assert_request_parameters({ 'Action': 'DescribeInstances', 'MaxResults': 10}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) class TestDescribeTags(TestEC2ConnectionBase): def default_body(self): return """ """ def test_default_behavior(self): self.set_http_response(status_code=200) self.ec2.get_all_tags() self.assert_request_parameters({ 'Action': 'DescribeTags'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) def test_max_results(self): self.set_http_response(status_code=200) self.ec2.get_all_tags( max_results=10 ) self.assert_request_parameters({ 'Action': 'DescribeTags', 'MaxResults': 10}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) if __name__ == '__main__': unittest.main() 
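The TestTrimSnapshots cases above verify trim_snapshots purely against mocks. As a reference point, here is a minimal, hedged sketch of driving it for real; the region and retention counts are illustrative assumptions:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
# Keep 8 hourly, 7 daily and 4 weekly snapshots; monthly snapshots are
# retained indefinitely by default (monthly_backups=True), which is the
# behavior test_trim_defaults relies on.
conn.trim_snapshots(hourly_backups=8, daily_backups=7, weekly_backups=4)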
boto-2.20.1/tests/unit/ec2/test_instance.py000066400000000000000000000233141225267101000205450ustar00rootroot00000000000000#!/usr/bin/env python from tests.unit import unittest from tests.unit import AWSMockServiceTestCase import mock from boto.ec2.connection import EC2Connection DESCRIBE_INSTANCE_VPC = r""" c6132c74-b524-4884-87f5-0f4bde4a9760 r-72ef4a0a 184906166255 i-instance ami-1624987f 16 running mykeypair 0 m1.small 2012-12-14T23:48:37.000Z us-east-1d default aki-88aa75e1 disabled subnet-0dc60667 vpc-id 10.0.0.67 true sg-id WebServerSG x86_64 ebs /dev/sda1 /dev/sda1 vol-id attached 2012-12-14T23:48:43.000Z true paravirtual foo Name xen eni-id subnet-id vpc-id Primary network interface ownerid in-use 10.0.0.67 true sg-id WebServerSG eni-attach-id 0 attached 2012-12-14T23:48:37.000Z true 10.0.0.67 true 10.0.0.54 false 10.0.0.55 false false """ RUN_INSTANCE_RESPONSE = r""" ad4b83c2-f606-4c39-90c6-5dcc5be823e1 r-c5cef7a7 ownerid sg-id SSH i-ff0f1299 ami-ed65ba84 0 pending awskeypair 0 t1.micro 2012-05-30T19:21:18.000Z us-east-1a default aki-b6aa75df disabled sg-99a710f1 SSH pending pending i386 ebs /dev/sda1 paravirtual xen arn:aws:iam::ownerid:instance-profile/myinstanceprofile iamid """ class TestRunInstanceResponseParsing(unittest.TestCase): def testIAMInstanceProfileParsedCorrectly(self): ec2 = EC2Connection(aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') mock_response = mock.Mock() mock_response.read.return_value = RUN_INSTANCE_RESPONSE mock_response.status = 200 ec2.make_request = mock.Mock(return_value=mock_response) reservation = ec2.run_instances(image_id='ami-12345') self.assertEqual(len(reservation.instances), 1) instance = reservation.instances[0] self.assertEqual(instance.image_id, 'ami-ed65ba84') # iamInstanceProfile has an ID element, so we want to make sure # that this does not map to instance.id (which should be the # id of the ec2 instance). 
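        # The profile itself is exposed as instance.instance_profile, a
        # plain dict with 'arn' and 'id' keys, as the assertions below
        # check.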
self.assertEqual(instance.id, 'i-ff0f1299') self.assertDictEqual( instance.instance_profile, {'arn': ('arn:aws:iam::ownerid:' 'instance-profile/myinstanceprofile'), 'id': 'iamid'}) class TestDescribeInstances(AWSMockServiceTestCase): connection_class = EC2Connection def default_body(self): return DESCRIBE_INSTANCE_VPC def test_multiple_private_ip_addresses(self): self.set_http_response(status_code=200) api_response = self.service_connection.get_all_reservations() self.assertEqual(len(api_response), 1) instances = api_response[0].instances self.assertEqual(len(instances), 1) instance = instances[0] self.assertEqual(len(instance.interfaces), 1) interface = instance.interfaces[0] self.assertEqual(len(interface.private_ip_addresses), 3) addresses = interface.private_ip_addresses self.assertEqual(addresses[0].private_ip_address, '10.0.0.67') self.assertTrue(addresses[0].primary) self.assertEqual(addresses[1].private_ip_address, '10.0.0.54') self.assertFalse(addresses[1].primary) self.assertEqual(addresses[2].private_ip_address, '10.0.0.55') self.assertFalse(addresses[2].primary) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/test_instancestatus.py000066400000000000000000000020731225267101000220100ustar00rootroot00000000000000#!/usr/bin/env python from tests.unit import unittest from tests.unit import AWSMockServiceTestCase import mock from boto.ec2.connection import EC2Connection INSTANCE_STATUS_RESPONSE = r""" 3be1508e-c444-4fef-89cc-0b1223c4f02fEXAMPLE page-2 """ class TestInstanceStatusResponseParsing(unittest.TestCase): def test_next_token(self): ec2 = EC2Connection(aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key') mock_response = mock.Mock() mock_response.read.return_value = INSTANCE_STATUS_RESPONSE mock_response.status = 200 ec2.make_request = mock.Mock(return_value=mock_response) all_statuses = ec2.get_all_instance_status() self.assertEqual(all_statuses.next_token, 'page-2') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/test_networkinterface.py000066400000000000000000000230451225267101000223140ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
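#
# Illustrative sketch (not part of the original test suite): the typical
# caller-side flow that the serialization tests below verify.  The subnet
# ID and device index are made-up placeholder values.

from boto.ec2.networkinterface import (NetworkInterfaceCollection,
                                       NetworkInterfaceSpecification)


def _example_build_launch_params():
    # Describe a single interface on device index 0 in a hypothetical
    # subnet; see the tests below for the full set of supported kwargs.
    spec = NetworkInterfaceSpecification(device_index=0,
                                         subnet_id='subnet-11111111',
                                         delete_on_termination=True)
    params = {}
    # Serializes into the flat EC2 query-parameter form, e.g.
    # 'NetworkInterface.0.DeviceIndex' -> '0'.
    NetworkInterfaceCollection(spec).build_list_params(params)
    return params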
# from tests.unit import unittest from boto.exception import BotoClientError from boto.ec2.networkinterface import NetworkInterfaceCollection from boto.ec2.networkinterface import NetworkInterfaceSpecification from boto.ec2.networkinterface import PrivateIPAddress class TestNetworkInterfaceCollection(unittest.TestCase): maxDiff = None def setUp(self): self.private_ip_address1 = PrivateIPAddress( private_ip_address='10.0.0.10', primary=False) self.private_ip_address2 = PrivateIPAddress( private_ip_address='10.0.0.11', primary=False) self.network_interfaces_spec1 = NetworkInterfaceSpecification( device_index=1, subnet_id='subnet_id', description='description1', private_ip_address='10.0.0.54', delete_on_termination=False, private_ip_addresses=[self.private_ip_address1, self.private_ip_address2] ) self.private_ip_address3 = PrivateIPAddress( private_ip_address='10.0.1.10', primary=False) self.private_ip_address4 = PrivateIPAddress( private_ip_address='10.0.1.11', primary=False) self.network_interfaces_spec2 = NetworkInterfaceSpecification( device_index=2, subnet_id='subnet_id2', description='description2', groups=['group_id1', 'group_id2'], private_ip_address='10.0.1.54', delete_on_termination=False, private_ip_addresses=[self.private_ip_address3, self.private_ip_address4] ) self.network_interfaces_spec3 = NetworkInterfaceSpecification( device_index=0, subnet_id='subnet_id2', description='description2', groups=['group_id1', 'group_id2'], private_ip_address='10.0.1.54', delete_on_termination=False, private_ip_addresses=[self.private_ip_address3, self.private_ip_address4], associate_public_ip_address=True ) def test_param_serialization(self): collection = NetworkInterfaceCollection(self.network_interfaces_spec1, self.network_interfaces_spec2) params = {} collection.build_list_params(params) self.assertDictEqual(params, { 'NetworkInterface.0.DeviceIndex': '1', 'NetworkInterface.0.DeleteOnTermination': 'false', 'NetworkInterface.0.Description': 'description1', 'NetworkInterface.0.PrivateIpAddress': '10.0.0.54', 'NetworkInterface.0.SubnetId': 'subnet_id', 'NetworkInterface.0.PrivateIpAddresses.0.Primary': 'false', 'NetworkInterface.0.PrivateIpAddresses.0.PrivateIpAddress': '10.0.0.10', 'NetworkInterface.0.PrivateIpAddresses.1.Primary': 'false', 'NetworkInterface.0.PrivateIpAddresses.1.PrivateIpAddress': '10.0.0.11', 'NetworkInterface.1.DeviceIndex': '2', 'NetworkInterface.1.Description': 'description2', 'NetworkInterface.1.DeleteOnTermination': 'false', 'NetworkInterface.1.PrivateIpAddress': '10.0.1.54', 'NetworkInterface.1.SubnetId': 'subnet_id2', 'NetworkInterface.1.SecurityGroupId.0': 'group_id1', 'NetworkInterface.1.SecurityGroupId.1': 'group_id2', 'NetworkInterface.1.PrivateIpAddresses.0.Primary': 'false', 'NetworkInterface.1.PrivateIpAddresses.0.PrivateIpAddress': '10.0.1.10', 'NetworkInterface.1.PrivateIpAddresses.1.Primary': 'false', 'NetworkInterface.1.PrivateIpAddresses.1.PrivateIpAddress': '10.0.1.11', }) def test_add_prefix_to_serialization(self): collection = NetworkInterfaceCollection(self.network_interfaces_spec1, self.network_interfaces_spec2) params = {} collection.build_list_params(params, prefix='LaunchSpecification.') # We already tested the actual serialization previously, so # we're just checking a few keys to make sure we get the proper # prefix. 
self.assertDictEqual(params, { 'LaunchSpecification.NetworkInterface.0.DeviceIndex': '1', 'LaunchSpecification.NetworkInterface.0.DeleteOnTermination': 'false', 'LaunchSpecification.NetworkInterface.0.Description': 'description1', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddress': '10.0.0.54', 'LaunchSpecification.NetworkInterface.0.SubnetId': 'subnet_id', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.0.Primary': 'false', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.0.PrivateIpAddress': '10.0.0.10', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.1.Primary': 'false', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.1.PrivateIpAddress': '10.0.0.11', 'LaunchSpecification.NetworkInterface.1.DeviceIndex': '2', 'LaunchSpecification.NetworkInterface.1.Description': 'description2', 'LaunchSpecification.NetworkInterface.1.DeleteOnTermination': 'false', 'LaunchSpecification.NetworkInterface.1.PrivateIpAddress': '10.0.1.54', 'LaunchSpecification.NetworkInterface.1.SubnetId': 'subnet_id2', 'LaunchSpecification.NetworkInterface.1.SecurityGroupId.0': 'group_id1', 'LaunchSpecification.NetworkInterface.1.SecurityGroupId.1': 'group_id2', 'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.0.Primary': 'false', 'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.0.PrivateIpAddress': '10.0.1.10', 'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.1.Primary': 'false', 'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.1.PrivateIpAddress': '10.0.1.11', }) def test_cant_use_public_ip(self): collection = NetworkInterfaceCollection(self.network_interfaces_spec3, self.network_interfaces_spec1) params = {} # First, verify we can't incorrectly create multiple interfaces with # on having a public IP. with self.assertRaises(BotoClientError): collection.build_list_params(params, prefix='LaunchSpecification.') # Next, ensure it can't be on device index 1. self.network_interfaces_spec3.device_index = 1 collection = NetworkInterfaceCollection(self.network_interfaces_spec3) params = {} with self.assertRaises(BotoClientError): collection.build_list_params(params, prefix='LaunchSpecification.') def test_public_ip(self): # With public IP. 
collection = NetworkInterfaceCollection(self.network_interfaces_spec3) params = {} collection.build_list_params(params, prefix='LaunchSpecification.') self.assertDictEqual(params, { 'LaunchSpecification.NetworkInterface.0.AssociatePublicIpAddress': 'true', 'LaunchSpecification.NetworkInterface.0.DeviceIndex': '0', 'LaunchSpecification.NetworkInterface.0.DeleteOnTermination': 'false', 'LaunchSpecification.NetworkInterface.0.Description': 'description2', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddress': '10.0.1.54', 'LaunchSpecification.NetworkInterface.0.SubnetId': 'subnet_id2', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.0.Primary': 'false', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.0.PrivateIpAddress': '10.0.1.10', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.1.Primary': 'false', 'LaunchSpecification.NetworkInterface.0.PrivateIpAddresses.1.PrivateIpAddress': '10.0.1.11', 'LaunchSpecification.NetworkInterface.0.SecurityGroupId.0': 'group_id1', 'LaunchSpecification.NetworkInterface.0.SecurityGroupId.1': 'group_id2', }) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/ec2/test_securitygroup.py000066400000000000000000000205641225267101000216710ustar00rootroot00000000000000#!/usr/bin/env python from tests.unit import unittest from tests.unit import AWSMockServiceTestCase import mock from boto.ec2.connection import EC2Connection from boto.ec2.securitygroup import SecurityGroup DESCRIBE_SECURITY_GROUP = r""" 59dbff89-35bd-4eac-99ed-be587EXAMPLE 111122223333 sg-1a2b3c4d WebServers Web Servers tcp 80 80 0.0.0.0/0 111122223333 sg-2a2b3c4d RangedPortsBySource Group A tcp 6000 7000 111122223333 sg-3a2b3c4d Group B """ DESCRIBE_INSTANCES = r""" c6132c74-b524-4884-87f5-0f4bde4a9760 r-72ef4a0a 184906166255 i-instance ami-1624987f 16 running mykeypair 0 m1.small 2012-12-14T23:48:37.000Z us-east-1d default aki-88aa75e1 disabled subnet-0dc60667 vpc-id 10.0.0.67 true sg-1a2b3c4d WebServerSG x86_64 ebs /dev/sda1 /dev/sda1 vol-id attached 2012-12-14T23:48:43.000Z true paravirtual foo Name xen eni-id subnet-id vpc-id Primary network interface ownerid in-use 10.0.0.67 true sg-id WebServerSG eni-attach-id 0 attached 2012-12-14T23:48:37.000Z true 10.0.0.67 true 10.0.0.54 false 10.0.0.55 false false """ class TestDescribeSecurityGroups(AWSMockServiceTestCase): connection_class = EC2Connection def test_get_instances(self): self.set_http_response(status_code=200, body=DESCRIBE_SECURITY_GROUP) groups = self.service_connection.get_all_security_groups() self.set_http_response(status_code=200, body=DESCRIBE_INSTANCES) instances = groups[0].instances() self.assertEqual(1, len(instances)) self.assertEqual(groups[0].id, instances[0].groups[0].id) class SecurityGroupTest(unittest.TestCase): def test_add_rule(self): sg = SecurityGroup() self.assertEqual(len(sg.rules), 0) # Regression: ``dry_run`` was being passed (but unhandled) before. 
sg.add_rule( ip_protocol='http', from_port='80', to_port='8080', src_group_name='groupy', src_group_owner_id='12345', cidr_ip='10.0.0.1', src_group_group_id='54321', dry_run=False ) self.assertEqual(len(sg.rules), 1) def test_remove_rule_on_empty_group(self): # Remove a rule from a group with no rules sg = SecurityGroup() with self.assertRaises(ValueError): sg.remove_rule('ip', 80, 80, None, None, None, None) boto-2.20.1/tests/unit/ec2/test_snapshot.py000066400000000000000000000051031225267101000205740ustar00rootroot00000000000000from tests.unit import AWSMockServiceTestCase from boto.ec2.connection import EC2Connection from boto.ec2.snapshot import Snapshot class TestDescribeSnapshots(AWSMockServiceTestCase): connection_class = EC2Connection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE snap-1a2b3c4d vol-1a2b3c4d pending YYYY-MM-DDTHH:MM:SS.SSSZ 30% 111122223333 15 Daily Backup Purpose demo_db_14_backup """ def test_cancel_spot_instance_requests(self): self.set_http_response(status_code=200) response = self.service_connection.get_all_snapshots(['snap-1a2b3c4d', 'snap-9f8e7d6c'], owner=['self', '111122223333'], restorable_by='999988887777', filters={'status': 'pending', 'tag-value': '*db_*'}) self.assert_request_parameters({ 'Action': 'DescribeSnapshots', 'SnapshotId.1': 'snap-1a2b3c4d', 'SnapshotId.2': 'snap-9f8e7d6c', 'Owner.1': 'self', 'Owner.2': '111122223333', 'RestorableBy.1': '999988887777', 'Filter.1.Name': 'status', 'Filter.1.Value.1': 'pending', 'Filter.2.Name': 'tag-value', 'Filter.2.Value.1': '*db_*'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(len(response), 1) self.assertIsInstance(response[0], Snapshot) self.assertEqual(response[0].id, 'snap-1a2b3c4d') boto-2.20.1/tests/unit/ec2/test_spotinstance.py000066400000000000000000000034471225267101000214600ustar00rootroot00000000000000from tests.unit import AWSMockServiceTestCase from boto.ec2.connection import EC2Connection class TestCancelSpotInstanceRequests(AWSMockServiceTestCase): connection_class = EC2Connection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE sir-1a2b3c4d cancelled sir-9f8e7d6c cancelled """ def test_cancel_spot_instance_requests(self): self.set_http_response(status_code=200) response = self.service_connection.cancel_spot_instance_requests(['sir-1a2b3c4d', 'sir-9f8e7d6c']) self.assert_request_parameters({ 'Action': 'CancelSpotInstanceRequests', 'SpotInstanceRequestId.1': 'sir-1a2b3c4d', 'SpotInstanceRequestId.2': 'sir-9f8e7d6c'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(len(response), 2) self.assertEqual(response[0].id, 'sir-1a2b3c4d') self.assertEqual(response[0].state, 'cancelled') self.assertEqual(response[1].id, 'sir-9f8e7d6c') self.assertEqual(response[1].state, 'cancelled') boto-2.20.1/tests/unit/ec2/test_volume.py000066400000000000000000000261221225267101000202500ustar00rootroot00000000000000import mock from tests.unit import unittest from boto.ec2.snapshot import Snapshot from boto.ec2.tag import Tag, TagSet from boto.ec2.volume import Volume, AttachmentSet, VolumeAttribute class VolumeTests(unittest.TestCase): def setUp(self): self.attach_data = AttachmentSet() self.attach_data.id = 1 self.attach_data.instance_id = 2 self.attach_data.status = "some status" self.attach_data.attach_time = 5 self.attach_data.device = "/dev/null" self.volume_one = Volume() self.volume_one.id = 1 
self.volume_one.create_time = 5 self.volume_one.status = "one_status" self.volume_one.size = "one_size" self.volume_one.snapshot_id = 1 self.volume_one.attach_data = self.attach_data self.volume_one.zone = "one_zone" self.volume_two = Volume() self.volume_two.connection = mock.Mock() self.volume_two.id = 1 self.volume_two.create_time = 6 self.volume_two.status = "two_status" self.volume_two.size = "two_size" self.volume_two.snapshot_id = 2 self.volume_two.attach_data = None self.volume_two.zone = "two_zone" @mock.patch("boto.ec2.volume.TaggedEC2Object.startElement") def test_startElement_calls_TaggedEC2Object_startElement_with_correct_args(self, startElement): volume = Volume() volume.startElement("some name", "some attrs", None) startElement.assert_called_with( volume, "some name", "some attrs", None ) @mock.patch("boto.ec2.volume.TaggedEC2Object.startElement") def test_startElement_retval_not_None_returns_correct_thing(self, startElement): tag_set = mock.Mock(TagSet) startElement.return_value = tag_set volume = Volume() retval = volume.startElement(None, None, None) self.assertEqual(retval, tag_set) @mock.patch("boto.ec2.volume.TaggedEC2Object.startElement") @mock.patch("boto.resultset.ResultSet") def test_startElement_with_name_tagSet_calls_ResultSet(self, ResultSet, startElement): startElement.return_value = None result_set = mock.Mock(ResultSet([("item", Tag)])) volume = Volume() volume.tags = result_set retval = volume.startElement("tagSet", None, None) self.assertEqual(retval, volume.tags) @mock.patch("boto.ec2.volume.TaggedEC2Object.startElement") def test_startElement_with_name_attachmentSet_returns_AttachmentSet(self, startElement): startElement.return_value = None attach_data = AttachmentSet() volume = Volume() volume.attach_data = attach_data retval = volume.startElement("attachmentSet", None, None) self.assertEqual(retval, volume.attach_data) @mock.patch("boto.ec2.volume.TaggedEC2Object.startElement") def test_startElement_else_returns_None(self, startElement): startElement.return_value = None volume = Volume() retval = volume.startElement("not tagSet or attachmentSet", None, None) self.assertEqual(retval, None) def check_that_attribute_has_been_set(self, name, value, attribute): volume = Volume() volume.endElement(name, value, None) self.assertEqual(getattr(volume, attribute), value) def test_endElement_sets_correct_attributes_with_values(self): for arguments in [("volumeId", "some value", "id"), ("createTime", "some time", "create_time"), ("status", "some status", "status"), ("size", 5, "size"), ("snapshotId", 1, "snapshot_id"), ("availabilityZone", "some zone", "zone"), ("someName", "some value", "someName")]: self.check_that_attribute_has_been_set(arguments[0], arguments[1], arguments[2]) def test_endElement_with_name_status_and_empty_string_value_doesnt_set_status(self): volume = Volume() volume.endElement("status", "", None) self.assertNotEqual(volume.status, "") def test_update_with_result_set_greater_than_0_updates_dict(self): self.volume_two.connection.get_all_volumes.return_value = [self.volume_one] self.volume_two.update() assert all([self.volume_two.create_time == 5, self.volume_two.status == "one_status", self.volume_two.size == "one_size", self.volume_two.snapshot_id == 1, self.volume_two.attach_data == self.attach_data, self.volume_two.zone == "one_zone"]) def test_update_with_validate_true_raises_value_error(self): self.volume_one.connection = mock.Mock() self.volume_one.connection.get_all_volumes.return_value = [] with self.assertRaisesRegexp(ValueError, "^1 
is not a valid Volume ID$"): self.volume_one.update(True) def test_update_returns_status(self): self.volume_one.connection = mock.Mock() self.volume_one.connection.get_all_volumes.return_value = [self.volume_two] retval = self.volume_one.update() self.assertEqual(retval, "two_status") def test_delete_calls_delete_volume(self): self.volume_one.connection = mock.Mock() self.volume_one.delete() self.volume_one.connection.delete_volume.assert_called_with( 1, dry_run=False ) def test_attach_calls_attach_volume(self): self.volume_one.connection = mock.Mock() self.volume_one.attach("instance_id", "/dev/null") self.volume_one.connection.attach_volume.assert_called_with( 1, "instance_id", "/dev/null", dry_run=False ) def test_detach_calls_detach_volume(self): self.volume_one.connection = mock.Mock() self.volume_one.detach() self.volume_one.connection.detach_volume.assert_called_with( 1, 2, "/dev/null", False, dry_run=False) def test_detach_with_no_attach_data(self): self.volume_two.connection = mock.Mock() self.volume_two.detach() self.volume_two.connection.detach_volume.assert_called_with( 1, None, None, False, dry_run=False) def test_detach_with_force_calls_detach_volume_with_force(self): self.volume_one.connection = mock.Mock() self.volume_one.detach(True) self.volume_one.connection.detach_volume.assert_called_with( 1, 2, "/dev/null", True, dry_run=False) def test_create_snapshot_calls_connection_create_snapshot(self): self.volume_one.connection = mock.Mock() self.volume_one.create_snapshot() self.volume_one.connection.create_snapshot.assert_called_with( 1, None, dry_run=False ) def test_create_snapshot_with_description(self): self.volume_one.connection = mock.Mock() self.volume_one.create_snapshot("some description") self.volume_one.connection.create_snapshot.assert_called_with( 1, "some description", dry_run=False ) def test_volume_state_returns_status(self): retval = self.volume_one.volume_state() self.assertEqual(retval, "one_status") def test_attachment_state_returns_state(self): retval = self.volume_one.attachment_state() self.assertEqual(retval, "some status") def test_attachment_state_no_attach_data_returns_None(self): retval = self.volume_two.attachment_state() self.assertEqual(retval, None) def test_snapshots_returns_snapshots(self): snapshot_one = Snapshot() snapshot_one.volume_id = 1 snapshot_two = Snapshot() snapshot_two.volume_id = 2 self.volume_one.connection = mock.Mock() self.volume_one.connection.get_all_snapshots.return_value = [snapshot_one, snapshot_two] retval = self.volume_one.snapshots() self.assertEqual(retval, [snapshot_one]) def test_snapshots__with_owner_and_restorable_by(self): self.volume_one.connection = mock.Mock() self.volume_one.connection.get_all_snapshots.return_value = [] self.volume_one.snapshots("owner", "restorable_by") self.volume_one.connection.get_all_snapshots.assert_called_with( owner="owner", restorable_by="restorable_by", dry_run=False) class AttachmentSetTests(unittest.TestCase): def check_that_attribute_has_been_set(self, name, value, attribute): attachment_set = AttachmentSet() attachment_set.endElement(name, value, None) self.assertEqual(getattr(attachment_set, attribute), value) def test_endElement_with_name_volumeId_sets_id(self): return self.check_that_attribute_has_been_set("volumeId", "some value", "id") def test_endElement_with_name_instanceId_sets_instance_id(self): return self.check_that_attribute_has_been_set("instanceId", 1, "instance_id") def test_endElement_with_name_status_sets_status(self): return 
self.check_that_attribute_has_been_set("status", "some value", "status") def test_endElement_with_name_attachTime_sets_attach_time(self): return self.check_that_attribute_has_been_set("attachTime", 5, "attach_time") def test_endElement_with_name_device_sets_device(self): return self.check_that_attribute_has_been_set("device", "/dev/null", "device") def test_endElement_with_other_name_sets_other_name_attribute(self): return self.check_that_attribute_has_been_set("someName", "some value", "someName") class VolumeAttributeTests(unittest.TestCase): def setUp(self): self.volume_attribute = VolumeAttribute() self.volume_attribute._key_name = "key_name" self.volume_attribute.attrs = {"key_name": False} def test_startElement_with_name_autoEnableIO_sets_key_name(self): self.volume_attribute.startElement("autoEnableIO", None, None) self.assertEqual(self.volume_attribute._key_name, "autoEnableIO") def test_startElement_without_name_autoEnableIO_returns_None(self): retval = self.volume_attribute.startElement("some name", None, None) self.assertEqual(retval, None) def test_endElement_with_name_value_and_value_true_sets_attrs_key_name_True(self): self.volume_attribute.endElement("value", "true", None) self.assertEqual(self.volume_attribute.attrs['key_name'], True) def test_endElement_with_name_value_and_value_false_sets_attrs_key_name_False(self): self.volume_attribute._key_name = "other_key_name" self.volume_attribute.endElement("value", "false", None) self.assertEqual(self.volume_attribute.attrs['other_key_name'], False) def test_endElement_with_name_volumeId_sets_id(self): self.volume_attribute.endElement("volumeId", "some_value", None) self.assertEqual(self.volume_attribute.id, "some_value") def test_endElement_with_other_name_sets_other_name_attribute(self): self.volume_attribute.endElement("someName", "some value", None) self.assertEqual(self.volume_attribute.someName, "some value") if __name__ == "__main__": unittest.main() boto-2.20.1/tests/unit/elasticache/000077500000000000000000000000001225267101000171215ustar00rootroot00000000000000boto-2.20.1/tests/unit/elasticache/__init__.py000066400000000000000000000000001225267101000212200ustar00rootroot00000000000000boto-2.20.1/tests/unit/elasticache/test_api_interface.py000066400000000000000000000013701225267101000233240ustar00rootroot00000000000000from boto.elasticache.layer1 import ElastiCacheConnection from tests.unit import AWSMockServiceTestCase class TestAPIInterface(AWSMockServiceTestCase): connection_class = ElastiCacheConnection def test_required_launch_params(self): """ Make sure only the AWS required params are required by boto """ name = 'test_cache_cluster' self.set_http_response(status_code=200, body='{}') self.service_connection.create_cache_cluster(name) self.assert_request_parameters({ 'Action': 'CreateCacheCluster', 'CacheClusterId': name, }, ignore_params_values=[ 'Version', 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'ContentType', ]) boto-2.20.1/tests/unit/emr/000077500000000000000000000000001225267101000154375ustar00rootroot00000000000000boto-2.20.1/tests/unit/emr/test_connection.py000066400000000000000000000225071225267101000212150ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. # All rights reserved. 
# # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. from __future__ import with_statement import boto.utils from datetime import datetime from tests.unit import AWSMockServiceTestCase from boto.emr.connection import EmrConnection from boto.emr.emrobject import JobFlowStepList # These tests are just checking the basic structure of # the Elastic MapReduce code, by picking a few calls # and verifying we get the expected results with mocked # responses. The integration tests actually verify the # API calls interact with the service correctly. class TestListClusters(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_list_clusters(self): self.set_http_response(status_code=200) response = self.service_connection.list_clusters() self.assert_request_parameters({ 'Action': 'ListClusters', 'Version': '2009-03-31', }) def test_list_clusters_created_before(self): self.set_http_response(status_code=200) date = datetime.now() response = self.service_connection.list_clusters(created_before=date) self.assert_request_parameters({ 'Action': 'ListClusters', 'CreatedBefore': date.strftime(boto.utils.ISO8601), 'Version': '2009-03-31' }) def test_list_clusters_created_after(self): self.set_http_response(status_code=200) date = datetime.now() response = self.service_connection.list_clusters(created_after=date) self.assert_request_parameters({ 'Action': 'ListClusters', 'CreatedAfter': date.strftime(boto.utils.ISO8601), 'Version': '2009-03-31' }) def test_list_clusters_states(self): self.set_http_response(status_code=200) response = self.service_connection.list_clusters(cluster_states=[ 'RUNNING', 'WAITING' ]) self.assert_request_parameters({ 'Action': 'ListClusters', 'ClusterStates.member.1': 'RUNNING', 'ClusterStates.member.2': 'WAITING', 'Version': '2009-03-31' }) class TestListInstanceGroups(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_list_instance_groups(self): self.set_http_response(200) with self.assertRaises(TypeError): self.service_connection.list_instance_groups() response = self.service_connection.list_instance_groups(cluster_id='j-123') self.assert_request_parameters({ 'Action': 'ListInstanceGroups', 'ClusterId': 'j-123', 'Version': '2009-03-31' }) class TestListInstances(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_list_instances(self): self.set_http_response(200) with self.assertRaises(TypeError): self.service_connection.list_instances() response = 
self.service_connection.list_instances(cluster_id='j-123') self.assert_request_parameters({ 'Action': 'ListInstances', 'ClusterId': 'j-123', 'Version': '2009-03-31' }) def test_list_instances_with_group_id(self): self.set_http_response(200) response = self.service_connection.list_instances( cluster_id='j-123', instance_group_id='abc') self.assert_request_parameters({ 'Action': 'ListInstances', 'ClusterId': 'j-123', 'InstanceGroupId': 'abc', 'Version': '2009-03-31' }) def test_list_instances_with_types(self): self.set_http_response(200) response = self.service_connection.list_instances( cluster_id='j-123', instance_group_types=[ 'MASTER', 'TASK' ]) self.assert_request_parameters({ 'Action': 'ListInstances', 'ClusterId': 'j-123', 'InstanceGroupTypeList.member.1': 'MASTER', 'InstanceGroupTypeList.member.2': 'TASK', 'Version': '2009-03-31' }) class TestListSteps(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_list_steps(self): self.set_http_response(200) with self.assertRaises(TypeError): self.service_connection.list_steps() response = self.service_connection.list_steps(cluster_id='j-123') self.assert_request_parameters({ 'Action': 'ListSteps', 'ClusterId': 'j-123', 'Version': '2009-03-31' }) def test_list_steps_with_states(self): self.set_http_response(200) response = self.service_connection.list_steps( cluster_id='j-123', step_states=[ 'COMPLETED', 'FAILED' ]) self.assert_request_parameters({ 'Action': 'ListSteps', 'ClusterId': 'j-123', 'StepStateList.member.1': 'COMPLETED', 'StepStateList.member.2': 'FAILED', 'Version': '2009-03-31' }) class TestListBootstrapActions(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_list_bootstrap_actions(self): self.set_http_response(200) with self.assertRaises(TypeError): self.service_connection.list_bootstrap_actions() response = self.service_connection.list_bootstrap_actions(cluster_id='j-123') self.assert_request_parameters({ 'Action': 'ListBootstrapActions', 'ClusterId': 'j-123', 'Version': '2009-03-31' }) class TestDescribeCluster(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_describe_cluster(self): self.set_http_response(200) with self.assertRaises(TypeError): self.service_connection.describe_cluster() response = self.service_connection.describe_cluster(cluster_id='j-123') self.assert_request_parameters({ 'Action': 'DescribeCluster', 'ClusterId': 'j-123', 'Version': '2009-03-31' }) class TestDescribeStep(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """""" def test_describe_step(self): self.set_http_response(200) with self.assertRaises(TypeError): self.service_connection.describe_step() with self.assertRaises(TypeError): self.service_connection.describe_step(cluster_id='j-123') with self.assertRaises(TypeError): self.service_connection.describe_step(step_id='abc') response = self.service_connection.describe_step( cluster_id='j-123', step_id='abc') self.assert_request_parameters({ 'Action': 'DescribeStep', 'ClusterId': 'j-123', 'StepId': 'abc', 'Version': '2009-03-31' }) class TestAddJobFlowSteps(AWSMockServiceTestCase): connection_class = EmrConnection def default_body(self): return """ Foo Bar """ def test_add_jobflow_steps(self): self.set_http_response(200) response = self.service_connection.add_jobflow_steps( jobflow_id='j-123', steps=[]) # Make sure the correct object is returned, as this was # previously set to incorrectly return an 
empty instance # of RunJobFlowResponse. self.assertTrue(isinstance(response, JobFlowStepList)) self.assertEqual(response.stepids[0].value, 'Foo') self.assertEqual(response.stepids[1].value, 'Bar') boto-2.20.1/tests/unit/emr/test_emr_responses.py000066400000000000000000000405271225267101000217440ustar00rootroot00000000000000# Copyright (c) 2010 Jeremy Thurgood # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # NOTE: These tests only cover the very simple cases I needed to test # for the InstanceGroup fix. import xml.sax import unittest from boto import handler from boto.emr import emrobject from boto.resultset import ResultSet JOB_FLOW_EXAMPLE = """ 2009-01-28T21:49:16Z 2009-01-28T21:49:16Z STARTING MyJobFlowName mybucket/subdir/ 2009-01-28T21:49:16Z PENDING MyJarFile MyMailClass arg1 arg2 MyStepName CONTINUE j-3UN6WX5RRO2AG us-east-1a m1.small m1.small myec2keyname 4 true 9cea3229-ed85-11dd-9877-6fad448a8419 """ JOB_FLOW_COMPLETED = """ 2010-10-21T01:00:25Z Steps completed 2010-10-21T01:03:59Z 2010-10-21T01:03:59Z COMPLETED 2010-10-21T01:44:18Z RealJobFlowName s3n://example.emrtest.scripts/jobflow_logs/ s3n://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar s3n://us-east-1.elasticmapreduce/libs/state-pusher/0.1/fetch Setup Hadoop Debugging TERMINATE_JOB_FLOW 2010-10-21T01:00:25Z 2010-10-21T01:03:59Z COMPLETED 2010-10-21T01:04:22Z /home/hadoop/contrib/streaming/hadoop-0.20-streaming.jar -mapper s3://example.emrtest.scripts/81d8-5a9d3df4a86c-InitialMapper.py -reducer s3://example.emrtest.scripts/81d8-5a9d3df4a86c-InitialReducer.py -input s3://example.emrtest.data/raw/2010/10/20/* -input s3://example.emrtest.data/raw/2010/10/19/* -input s3://example.emrtest.data/raw/2010/10/18/* -input s3://example.emrtest.data/raw/2010/10/17/* -input s3://example.emrtest.data/raw/2010/10/16/* -input s3://example.emrtest.data/raw/2010/10/15/* -input s3://example.emrtest.data/raw/2010/10/14/* -output s3://example.emrtest.crunched/ testjob_Initial TERMINATE_JOB_FLOW 2010-10-21T01:00:25Z 2010-10-21T01:04:22Z COMPLETED 2010-10-21T01:36:18Z /home/hadoop/contrib/streaming/hadoop-0.20-streaming.jar -mapper s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step1Mapper.py -reducer s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step1Reducer.py -input s3://example.emrtest.crunched/* -output s3://example.emrtest.step1/ testjob_step1 TERMINATE_JOB_FLOW 2010-10-21T01:00:25Z 2010-10-21T01:36:18Z COMPLETED 2010-10-21T01:37:51Z /home/hadoop/contrib/streaming/hadoop-0.20-streaming.jar -mapper 
s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step2Mapper.py -reducer s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step2Reducer.py -input s3://example.emrtest.crunched/* -output s3://example.emrtest.step2/ testjob_step2 TERMINATE_JOB_FLOW 2010-10-21T01:00:25Z 2010-10-21T01:37:51Z COMPLETED 2010-10-21T01:39:32Z /home/hadoop/contrib/streaming/hadoop-0.20-streaming.jar -mapper s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step3Mapper.py -reducer s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step3Reducer.py -input s3://example.emrtest.step1/* -output s3://example.emrtest.step3/ testjob_step3 TERMINATE_JOB_FLOW 2010-10-21T01:00:25Z 2010-10-21T01:39:32Z COMPLETED 2010-10-21T01:41:22Z /home/hadoop/contrib/streaming/hadoop-0.20-streaming.jar -mapper s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step4Mapper.py -reducer s3://example.emrtest.scripts/81d8-5a9d3df4a86c-step4Reducer.py -input s3://example.emrtest.step1/* -output s3://example.emrtest.step4/ testjob_step4 TERMINATE_JOB_FLOW 2010-10-21T01:00:25Z 2010-10-21T01:41:22Z COMPLETED 2010-10-21T01:43:03Z j-3H3Q13JPFLU22 m1.large i-64c21609 us-east-1b 2010-10-21T01:00:25Z 0 2010-10-21T01:02:09Z 2010-10-21T01:03:03Z ENDED 2010-10-21T01:44:18Z 1 m1.large ON_DEMAND Job flow terminated MASTER ig-EVMHOZJ2SCO8 master 2010-10-21T01:00:25Z 0 2010-10-21T01:03:59Z 2010-10-21T01:03:59Z ENDED 2010-10-21T01:44:18Z 9 m1.large ON_DEMAND Job flow terminated CORE ig-YZHDYVITVHKB slave 40 0.20 m1.large ec2-184-72-153-139.compute-1.amazonaws.com myubersecurekey 10 false c31e701d-dcb4-11df-b5d9-337fc7fe4773 """ class TestEMRResponses(unittest.TestCase): def _parse_xml(self, body, markers): rs = ResultSet(markers) h = handler.XmlHandler(rs, None) xml.sax.parseString(body, h) return rs def _assert_fields(self, response, **fields): for field, expected in fields.items(): actual = getattr(response, field) self.assertEquals(expected, actual, "Field %s: %r != %r" % (field, expected, actual)) def test_JobFlows_example(self): [jobflow] = self._parse_xml(JOB_FLOW_EXAMPLE, [('member', emrobject.JobFlow)]) self._assert_fields(jobflow, creationdatetime='2009-01-28T21:49:16Z', startdatetime='2009-01-28T21:49:16Z', state='STARTING', instancecount='4', jobflowid='j-3UN6WX5RRO2AG', loguri='mybucket/subdir/', name='MyJobFlowName', availabilityzone='us-east-1a', slaveinstancetype='m1.small', masterinstancetype='m1.small', ec2keyname='myec2keyname', keepjobflowalivewhennosteps='true') def test_JobFlows_completed(self): [jobflow] = self._parse_xml(JOB_FLOW_COMPLETED, [('member', emrobject.JobFlow)]) self._assert_fields(jobflow, creationdatetime='2010-10-21T01:00:25Z', startdatetime='2010-10-21T01:03:59Z', enddatetime='2010-10-21T01:44:18Z', state='COMPLETED', instancecount='10', jobflowid='j-3H3Q13JPFLU22', loguri='s3n://example.emrtest.scripts/jobflow_logs/', name='RealJobFlowName', availabilityzone='us-east-1b', slaveinstancetype='m1.large', masterinstancetype='m1.large', ec2keyname='myubersecurekey', keepjobflowalivewhennosteps='false') self.assertEquals(6, len(jobflow.steps)) self.assertEquals(2, len(jobflow.instancegroups)) boto-2.20.1/tests/unit/emr/test_instance_group_args.py000066400000000000000000000037661225267101000231200ustar00rootroot00000000000000#!/usr/bin/env python # Author: Charlie Schluting # # Test to ensure initalization of InstanceGroup object emits appropriate errors # if bidprice is not specified, but allows float, int, Decimal. 
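#
# Illustrative sketch (not part of the original tests): constructing a spot
# instance group as a caller would.  The bid price and group name are
# made-up example values.

from boto.emr.instance_group import InstanceGroup


def _example_spot_group():
    # bidprice may be given as a string, float, or Decimal; it is
    # normalized to a string internally (see the tests below).
    return InstanceGroup(2, 'CORE', 'm1.small', 'SPOT', 'core-group',
                         bidprice='0.15')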
import unittest from decimal import Decimal from boto.emr.instance_group import InstanceGroup class TestInstanceGroupArgs(unittest.TestCase): def test_bidprice_missing_spot(self): """ Test InstanceGroup init raises ValueError when market==spot and bidprice is not specified. """ with self.assertRaisesRegexp(ValueError, 'bidprice must be specified'): InstanceGroup(1, 'MASTER', 'm1.small', 'SPOT', 'master') def test_bidprice_missing_ondemand(self): """ Test InstanceGroup init accepts a missing bidprice arg, when market is ON_DEMAND. """ instance_group = InstanceGroup(1, 'MASTER', 'm1.small', 'ON_DEMAND', 'master') def test_bidprice_Decimal(self): """ Test InstanceGroup init works with bidprice type = Decimal. """ instance_group = InstanceGroup(1, 'MASTER', 'm1.small', 'SPOT', 'master', bidprice=Decimal(1.10)) self.assertEquals('1.10', instance_group.bidprice[:4]) def test_bidprice_float(self): """ Test InstanceGroup init works with bidprice type = float. """ instance_group = InstanceGroup(1, 'MASTER', 'm1.small', 'SPOT', 'master', bidprice=1.1) self.assertEquals('1.1', instance_group.bidprice) def test_bidprice_string(self): """ Test InstanceGroup init works with bidprice type = string. """ instance_group = InstanceGroup(1, 'MASTER', 'm1.small', 'SPOT', 'master', bidprice='1.1') self.assertEquals('1.1', instance_group.bidprice) if __name__ == "__main__": unittest.main() boto-2.20.1/tests/unit/glacier/000077500000000000000000000000001225267101000162625ustar00rootroot00000000000000boto-2.20.1/tests/unit/glacier/__init__.py000066400000000000000000000000001225267101000203610ustar00rootroot00000000000000boto-2.20.1/tests/unit/glacier/test_concurrent.py000066400000000000000000000161711225267101000220630ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
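#
# Illustrative sketch (not part of the original tests): the intended
# caller-side use of ConcurrentUploader.  The vault name and file path are
# made-up placeholders, and ``layer1`` stands in for a real
# boto.glacier.layer1.Layer1 connection.

def _example_concurrent_upload(layer1):
    from boto.glacier.concurrent import ConcurrentUploader
    # Upload the file in 4 MB parts across the default pool of ten
    # worker threads; returns the new archive's ID on success.
    uploader = ConcurrentUploader(layer1, 'my-vault',
                                  part_size=4 * 1024 * 1024)
    return uploader.upload('/tmp/backup.tar')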
# import tempfile from Queue import Queue import mock from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.glacier.concurrent import ConcurrentUploader, ConcurrentDownloader from boto.glacier.concurrent import UploadWorkerThread from boto.glacier.concurrent import _END_SENTINEL class FakeThreadedConcurrentUploader(ConcurrentUploader): def _start_upload_threads(self, results_queue, upload_id, worker_queue, filename): self.results_queue = results_queue self.worker_queue = worker_queue self.upload_id = upload_id def _wait_for_upload_threads(self, hash_chunks, result_queue, total_parts): for i in xrange(total_parts): hash_chunks[i] = 'foo' class FakeThreadedConcurrentDownloader(ConcurrentDownloader): def _start_download_threads(self, results_queue, worker_queue): self.results_queue = results_queue self.worker_queue = worker_queue def _wait_for_download_threads(self, filename, result_queue, total_parts): pass class TestConcurrentUploader(unittest.TestCase): def setUp(self): super(TestConcurrentUploader, self).setUp() self.stat_patch = mock.patch('os.stat') self.stat_mock = self.stat_patch.start() # Give a default value for tests that don't care # what the file size is. self.stat_mock.return_value.st_size = 1024 * 1024 * 8 def tearDown(self): self.stat_mock = self.stat_patch.start() def test_calculate_required_part_size(self): self.stat_mock.return_value.st_size = 1024 * 1024 * 8 uploader = ConcurrentUploader(mock.Mock(), 'vault_name') total_parts, part_size = uploader._calculate_required_part_size( 1024 * 1024 * 8) self.assertEqual(total_parts, 2) self.assertEqual(part_size, 4 * 1024 * 1024) def test_calculate_required_part_size_too_small(self): too_small = 1 * 1024 * 1024 self.stat_mock.return_value.st_size = 1024 * 1024 * 1024 uploader = ConcurrentUploader(mock.Mock(), 'vault_name', part_size=too_small) total_parts, part_size = uploader._calculate_required_part_size( 1024 * 1024 * 1024) self.assertEqual(total_parts, 256) # Part size if 4MB not the passed in 1MB. self.assertEqual(part_size, 4 * 1024 * 1024) def test_work_queue_is_correctly_populated(self): uploader = FakeThreadedConcurrentUploader(mock.MagicMock(), 'vault_name') uploader.upload('foofile') q = uploader.worker_queue items = [q.get() for i in xrange(q.qsize())] self.assertEqual(items[0], (0, 4 * 1024 * 1024)) self.assertEqual(items[1], (1, 4 * 1024 * 1024)) # 2 for the parts, 10 for the end sentinels (10 threads). self.assertEqual(len(items), 12) def test_correct_low_level_api_calls(self): api_mock = mock.MagicMock() uploader = FakeThreadedConcurrentUploader(api_mock, 'vault_name') uploader.upload('foofile') # The threads call the upload_part, so we're just verifying the # initiate/complete multipart API calls. api_mock.initiate_multipart_upload.assert_called_with( 'vault_name', 4 * 1024 * 1024, None) api_mock.complete_multipart_upload.assert_called_with( 'vault_name', mock.ANY, mock.ANY, 8 * 1024 * 1024) def test_downloader_work_queue_is_correctly_populated(self): job = mock.MagicMock() job.archive_size = 8 * 1024 * 1024 downloader = FakeThreadedConcurrentDownloader(job) downloader.download('foofile') q = downloader.worker_queue items = [q.get() for i in xrange(q.qsize())] self.assertEqual(items[0], (0, 4 * 1024 * 1024)) self.assertEqual(items[1], (1, 4 * 1024 * 1024)) # 2 for the parts, 10 for the end sentinels (10 threads). 
self.assertEqual(len(items), 12) class TestUploaderThread(unittest.TestCase): def setUp(self): self.fileobj = tempfile.NamedTemporaryFile() self.filename = self.fileobj.name def test_fileobj_closed_when_thread_shuts_down(self): thread = UploadWorkerThread(mock.Mock(), 'vault_name', self.filename, 'upload_id', Queue(), Queue()) fileobj = thread._fileobj self.assertFalse(fileobj.closed) # By settings should_continue to False, it should immediately # exit, and we can still verify cleanup behavior. thread.should_continue = False thread.run() self.assertTrue(fileobj.closed) def test_upload_errors_have_exception_messages(self): api = mock.Mock() job_queue = Queue() result_queue = Queue() upload_thread = UploadWorkerThread( api, 'vault_name', self.filename, 'upload_id', job_queue, result_queue, num_retries=1, time_between_retries=0) api.upload_part.side_effect = Exception("exception message") job_queue.put((0, 1024)) job_queue.put(_END_SENTINEL) upload_thread.run() result = result_queue.get(timeout=1) self.assertIn("exception message", str(result)) def test_num_retries_is_obeyed(self): # total attempts is 1 + num_retries so if I have num_retries of 2, # I'll attempt the upload once, and if that fails I'll retry up to # 2 more times for a total of 3 attempts. api = mock.Mock() job_queue = Queue() result_queue = Queue() upload_thread = UploadWorkerThread( api, 'vault_name', self.filename, 'upload_id', job_queue, result_queue, num_retries=2, time_between_retries=0) api.upload_part.side_effect = Exception() job_queue.put((0, 1024)) job_queue.put(_END_SENTINEL) upload_thread.run() self.assertEqual(api.upload_part.call_count, 3) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/glacier/test_job.py000066400000000000000000000051761225267101000204560ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
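#
# Illustrative sketch (not part of the original tests): downloading job
# output with checksum validation enabled.  ``job`` stands in for a
# boto.glacier.job.Job obtained from a vault; the byte range is a made-up
# example.

def _example_validated_download(job):
    from boto.glacier.exceptions import TreeHashDoesNotMatchError
    try:
        # Raises TreeHashDoesNotMatchError when the SHA-256 tree hash
        # computed over the downloaded bytes does not match the value
        # the service reports.
        return job.get_output(byte_range=(0, 1024 * 1024 - 1),
                              validate_checksum=True)
    except TreeHashDoesNotMatchError:
        # Retrying the range is a reasonable recovery strategy here.
        raise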
# from tests.unit import unittest import mock from boto.glacier.job import Job from boto.glacier.layer1 import Layer1 from boto.glacier.response import GlacierResponse from boto.glacier.exceptions import TreeHashDoesNotMatchError class TestJob(unittest.TestCase): def setUp(self): self.api = mock.Mock(spec=Layer1) self.vault = mock.Mock() self.vault.layer1 = self.api self.job = Job(self.vault) def test_get_job_validate_checksum_success(self): response = GlacierResponse(mock.Mock(), None) response['TreeHash'] = 'tree_hash' self.api.get_job_output.return_value = response with mock.patch('boto.glacier.job.tree_hash_from_str') as t: t.return_value = 'tree_hash' self.job.get_output(byte_range=(1, 1024), validate_checksum=True) def test_get_job_validation_fails(self): response = GlacierResponse(mock.Mock(), None) response['TreeHash'] = 'tree_hash' self.api.get_job_output.return_value = response with mock.patch('boto.glacier.job.tree_hash_from_str') as t: t.return_value = 'BAD_TREE_HASH_VALUE' with self.assertRaises(TreeHashDoesNotMatchError): # With validate_checksum set to True, this call fails. self.job.get_output(byte_range=(1, 1024), validate_checksum=True) # With validate_checksum set to False, this call succeeds. self.job.get_output(byte_range=(1, 1024), validate_checksum=False) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/glacier/test_layer1.py000066400000000000000000000102231225267101000210660ustar00rootroot00000000000000import json import copy import tempfile from tests.unit import AWSMockServiceTestCase from boto.glacier.layer1 import Layer1 class GlacierLayer1ConnectionBase(AWSMockServiceTestCase): connection_class = Layer1 def setUp(self): super(GlacierLayer1ConnectionBase, self).setUp() self.json_header = [('Content-Type', 'application/json')] self.vault_name = u'examplevault' self.vault_arn = 'arn:aws:glacier:us-east-1:012345678901:vaults/' + \ self.vault_name self.vault_info = {u'CreationDate': u'2012-03-16T22:22:47.214Z', u'LastInventoryDate': u'2012-03-21T22:06:51.218Z', u'NumberOfArchives': 2, u'SizeInBytes': 12334, u'VaultARN': self.vault_arn, u'VaultName': self.vault_name} class GlacierVaultsOperations(GlacierLayer1ConnectionBase): def test_create_vault_parameters(self): self.set_http_response(status_code=201) self.service_connection.create_vault(self.vault_name) def test_list_vaults(self): content = {u'Marker': None, u'RequestId': None, u'VaultList': [self.vault_info]} self.set_http_response(status_code=200, header=self.json_header, body=json.dumps(content)) api_response = self.service_connection.list_vaults() self.assertDictEqual(content, api_response) def test_describe_vaults(self): content = copy.copy(self.vault_info) content[u'RequestId'] = None self.set_http_response(status_code=200, header=self.json_header, body=json.dumps(content)) api_response = self.service_connection.describe_vault(self.vault_name) self.assertDictEqual(content, api_response) def test_delete_vault(self): self.set_http_response(status_code=204) self.service_connection.delete_vault(self.vault_name) class GlacierJobOperations(GlacierLayer1ConnectionBase): def setUp(self): super(GlacierJobOperations, self).setUp() self.job_content = 'abc' * 1024 def test_initiate_archive_job(self): content = {u'Type': u'archive-retrieval', u'ArchiveId': u'AAABZpJrTyioDC_HsOmHae8EZp_uBSJr6cnGOLKp_XJCl-Q', u'Description': u'Test Archive', u'SNSTopic': u'Topic', u'JobId': None, u'Location': None, u'RequestId': None} self.set_http_response(status_code=202, header=self.json_header, 
body=json.dumps(content)) api_response = self.service_connection.initiate_job(self.vault_name, self.job_content) self.assertDictEqual(content, api_response) def test_get_archive_output(self): header = [('Content-Type', 'application/octet-stream')] self.set_http_response(status_code=200, header=header, body=self.job_content) response = self.service_connection.get_job_output(self.vault_name, 'example-job-id') self.assertEqual(self.job_content, response.read()) class GlacierUploadArchiveResets(GlacierLayer1ConnectionBase): def test_upload_archive(self): fake_data = tempfile.NamedTemporaryFile() fake_data.write('foobarbaz') # First seek to a non zero offset. fake_data.seek(2) self.set_http_response(status_code=201) # Simulate reading the request body when we send the request. self.service_connection.connection.request.side_effect = \ lambda *args: fake_data.read() self.service_connection.upload_archive('vault_name', fake_data, 'linear_hash', 'tree_hash') # Verify that we seek back to the original offset after making # a request. This ensures that if we need to resend the request we're # back at the correct location within the file. self.assertEqual(fake_data.tell(), 2) boto-2.20.1/tests/unit/glacier/test_layer2.py000066400000000000000000000322461225267101000211000ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/ # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
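#
# Illustrative sketch (not part of the original tests): the end-to-end
# Layer2 flow that these unit tests exercise with mocks.  The credentials,
# vault name, and file path are made-up placeholders; this helper performs
# real network calls, so it is never invoked by the test module.

def _example_layer2_usage():
    from boto.glacier.layer2 import Layer2
    layer2 = Layer2(aws_access_key_id='AKIA...EXAMPLE',     # placeholder
                    aws_secret_access_key='...EXAMPLEKEY')  # placeholder
    vault = layer2.create_vault('examplevault')
    return vault.upload_archive('/tmp/backup.tar')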
# from tests.unit import unittest from mock import call, Mock, patch, sentinel from boto.glacier.layer1 import Layer1 from boto.glacier.layer2 import Layer2 import boto.glacier.vault from boto.glacier.vault import Vault from boto.glacier.vault import Job from StringIO import StringIO # Some fixture data from the Glacier docs FIXTURE_VAULT = { "CreationDate" : "2012-02-20T17:01:45.198Z", "LastInventoryDate" : "2012-03-20T17:03:43.221Z", "NumberOfArchives" : 192, "SizeInBytes" : 78088912, "VaultARN" : "arn:aws:glacier:us-east-1:012345678901:vaults/examplevault", "VaultName" : "examplevault" } FIXTURE_VAULTS = { 'RequestId': 'vuXO7SHTw-luynJ0Zu31AYjR3TcCn7X25r7ykpuulxY2lv8', 'VaultList': [{'SizeInBytes': 0, 'LastInventoryDate': None, 'VaultARN': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault0', 'VaultName': 'vault0', 'NumberOfArchives': 0, 'CreationDate': '2013-05-17T02:38:39.049Z'}, {'SizeInBytes': 0, 'LastInventoryDate': None, 'VaultARN': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault3', 'VaultName': 'vault3', 'NumberOfArchives': 0, 'CreationDate': '2013-05-17T02:31:18.659Z'}]} FIXTURE_PAGINATED_VAULTS = { 'Marker': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault2', 'RequestId': 'vuXO7SHTw-luynJ0Zu31AYjR3TcCn7X25r7ykpuulxY2lv8', 'VaultList': [{'SizeInBytes': 0, 'LastInventoryDate': None, 'VaultARN': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault0', 'VaultName': 'vault0', 'NumberOfArchives': 0, 'CreationDate': '2013-05-17T02:38:39.049Z'}, {'SizeInBytes': 0, 'LastInventoryDate': None, 'VaultARN': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault1', 'VaultName': 'vault1', 'NumberOfArchives': 0, 'CreationDate': '2013-05-17T02:31:18.659Z'}]} FIXTURE_PAGINATED_VAULTS_CONT = { 'Marker': None, 'RequestId': 'vuXO7SHTw-luynJ0Zu31AYjR3TcCn7X25r7ykpuulxY2lv8', 'VaultList': [{'SizeInBytes': 0, 'LastInventoryDate': None, 'VaultARN': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault2', 'VaultName': 'vault2', 'NumberOfArchives': 0, 'CreationDate': '2013-05-17T02:38:39.049Z'}, {'SizeInBytes': 0, 'LastInventoryDate': None, 'VaultARN': 'arn:aws:glacier:us-east-1:686406519478:vaults/vault3', 'VaultName': 'vault3', 'NumberOfArchives': 0, 'CreationDate': '2013-05-17T02:31:18.659Z'}]} FIXTURE_ARCHIVE_JOB = { "Action": "ArchiveRetrieval", "ArchiveId": ("NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-TjhqG6eGoOY9Z8i1_AUyUs" "uhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqr" "EXAMPLEArchiveId"), "ArchiveSizeInBytes": 16777216, "Completed": False, "CreationDate": "2012-05-15T17:21:39.339Z", "CompletionDate": "2012-05-15T17:21:43.561Z", "InventorySizeInBytes": None, "JobDescription": "My ArchiveRetrieval Job", "JobId": ("HkF9p6o7yjhFx-K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5v" "P54ZShjoQzQVVh7vEXAMPLEjobID"), "SHA256TreeHash": ("beb0fe31a1c7ca8c6c04d574ea906e3f97b31fdca7571defb5b44dc" "a89b5af60"), "SNSTopic": "arn:aws:sns:us-east-1:012345678901:mytopic", "StatusCode": "InProgress", "StatusMessage": "Operation in progress.", "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/examplevault" } EXAMPLE_PART_LIST_RESULT_PAGE_1 = { "ArchiveDescription": "archive description 1", "CreationDate": "2012-03-20T17:03:43.221Z", "Marker": "MfgsKHVjbQ6EldVl72bn3_n5h2TaGZQUO-Qb3B9j3TITf7WajQ", "MultipartUploadId": "OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-khxOjyEXAMPLE", "PartSizeInBytes": 4194304, "Parts": [ { "RangeInBytes": "4194304-8388607", "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4" }], "VaultARN": 
"arn:aws:glacier:us-east-1:012345678901:vaults/demo1-vault" } # The documentation doesn't say whether the non-Parts fields are defined in # future pages, so assume they are not. EXAMPLE_PART_LIST_RESULT_PAGE_2 = { "ArchiveDescription": None, "CreationDate": None, "Marker": None, "MultipartUploadId": None, "PartSizeInBytes": None, "Parts": [ { "RangeInBytes": "0-4194303", "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4" }], "VaultARN": None } EXAMPLE_PART_LIST_COMPLETE = { "ArchiveDescription": "archive description 1", "CreationDate": "2012-03-20T17:03:43.221Z", "Marker": None, "MultipartUploadId": "OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-khxOjyEXAMPLE", "PartSizeInBytes": 4194304, "Parts": [ { "RangeInBytes": "4194304-8388607", "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4" }, { "RangeInBytes": "0-4194303", "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4" }], "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/demo1-vault" } class GlacierLayer2Base(unittest.TestCase): def setUp(self): self.mock_layer1 = Mock(spec=Layer1) class TestGlacierLayer2Connection(GlacierLayer2Base): def setUp(self): GlacierLayer2Base.setUp(self) self.layer2 = Layer2(layer1=self.mock_layer1) def test_create_vault(self): self.mock_layer1.describe_vault.return_value = FIXTURE_VAULT self.layer2.create_vault("My Vault") self.mock_layer1.create_vault.assert_called_with("My Vault") def test_get_vault(self): self.mock_layer1.describe_vault.return_value = FIXTURE_VAULT vault = self.layer2.get_vault("examplevault") self.assertEqual(vault.layer1, self.mock_layer1) self.assertEqual(vault.name, "examplevault") self.assertEqual(vault.size, 78088912) self.assertEqual(vault.number_of_archives, 192) def test_list_vaults(self): self.mock_layer1.list_vaults.return_value = FIXTURE_VAULTS vaults = self.layer2.list_vaults() self.assertEqual(vaults[0].name, "vault0") self.assertEqual(len(vaults), 2) def test_list_vaults_paginated(self): resps = [FIXTURE_PAGINATED_VAULTS, FIXTURE_PAGINATED_VAULTS_CONT] def return_paginated_vaults_resp(marker=None, limit=None): return resps.pop(0) self.mock_layer1.list_vaults = Mock(side_effect = return_paginated_vaults_resp) vaults = self.layer2.list_vaults() self.assertEqual(vaults[0].name, "vault0") self.assertEqual(vaults[3].name, "vault3") self.assertEqual(len(vaults), 4) class TestVault(GlacierLayer2Base): def setUp(self): GlacierLayer2Base.setUp(self) self.vault = Vault(self.mock_layer1, FIXTURE_VAULT) # TODO: Tests for the other methods of uploading def test_create_archive_writer(self): self.mock_layer1.initiate_multipart_upload.return_value = { "UploadId": "UPLOADID"} writer = self.vault.create_archive_writer(description="stuff") self.mock_layer1.initiate_multipart_upload.assert_called_with( "examplevault", self.vault.DefaultPartSize, "stuff") self.assertEqual(writer.vault, self.vault) self.assertEqual(writer.upload_id, "UPLOADID") def test_delete_vault(self): self.vault.delete_archive("archive") self.mock_layer1.delete_archive.assert_called_with("examplevault", "archive") def test_get_job(self): self.mock_layer1.describe_job.return_value = FIXTURE_ARCHIVE_JOB job = self.vault.get_job( "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-TjhqG6eGoOY9Z8i1_AUyUsuhPA" "dTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs01MNGntHEQL8MBfGlqrEXAMPLEA" "rchiveId") self.assertEqual(job.action, "ArchiveRetrieval") def test_list_jobs(self): self.mock_layer1.list_jobs.return_value = { "JobList": [FIXTURE_ARCHIVE_JOB]} jobs = self.vault.list_jobs(False, "InProgress") 
self.mock_layer1.list_jobs.assert_called_with("examplevault", False, "InProgress") self.assertEqual(jobs[0].archive_id, "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-TjhqG6eGoOY9Z" "8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs0" "1MNGntHEQL8MBfGlqrEXAMPLEArchiveId") def test_list_all_parts_one_page(self): self.mock_layer1.list_parts.return_value = ( dict(EXAMPLE_PART_LIST_COMPLETE)) # take a copy parts_result = self.vault.list_all_parts(sentinel.upload_id) expected = [call('examplevault', sentinel.upload_id)] self.assertEquals(expected, self.mock_layer1.list_parts.call_args_list) self.assertEquals(EXAMPLE_PART_LIST_COMPLETE, parts_result) def test_list_all_parts_two_pages(self): self.mock_layer1.list_parts.side_effect = [ # take copies dict(EXAMPLE_PART_LIST_RESULT_PAGE_1), dict(EXAMPLE_PART_LIST_RESULT_PAGE_2) ] parts_result = self.vault.list_all_parts(sentinel.upload_id) expected = [call('examplevault', sentinel.upload_id), call('examplevault', sentinel.upload_id, marker=EXAMPLE_PART_LIST_RESULT_PAGE_1['Marker'])] self.assertEquals(expected, self.mock_layer1.list_parts.call_args_list) self.assertEquals(EXAMPLE_PART_LIST_COMPLETE, parts_result) @patch('boto.glacier.vault.resume_file_upload') def test_resume_archive_from_file(self, mock_resume_file_upload): part_size = 4 mock_list_parts = Mock() mock_list_parts.return_value = { 'PartSizeInBytes': part_size, 'Parts': [{ 'RangeInBytes': '0-3', 'SHA256TreeHash': '12', }, { 'RangeInBytes': '4-6', 'SHA256TreeHash': '34', }, ]} self.vault.list_all_parts = mock_list_parts self.vault.resume_archive_from_file( sentinel.upload_id, file_obj=sentinel.file_obj) mock_resume_file_upload.assert_called_once_with( self.vault, sentinel.upload_id, part_size, sentinel.file_obj, {0: '12'.decode('hex'), 1: '34'.decode('hex')}) class TestJob(GlacierLayer2Base): def setUp(self): GlacierLayer2Base.setUp(self) self.vault = Vault(self.mock_layer1, FIXTURE_VAULT) self.job = Job(self.vault, FIXTURE_ARCHIVE_JOB) def test_get_job_output(self): self.mock_layer1.get_job_output.return_value = "TEST_OUTPUT" self.job.get_output((0,100)) self.mock_layer1.get_job_output.assert_called_with( "examplevault", "HkF9p6o7yjhFx-K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP" "54ZShjoQzQVVh7vEXAMPLEjobID", (0,100)) class TestRangeStringParsing(unittest.TestCase): def test_simple_range(self): self.assertEquals( Vault._range_string_to_part_index('0-3', 4), 0) def test_range_one_too_big(self): # Off-by-one bug in Amazon's Glacier implementation # See: https://forums.aws.amazon.com/thread.jspa?threadID=106866&tstart=0 # Workaround is to assume that if a (start, end] range appears to be # returned then that is what it is. self.assertEquals( Vault._range_string_to_part_index('0-4', 4), 0) def test_range_too_big(self): self.assertRaises( AssertionError, Vault._range_string_to_part_index, '0-5', 4) def test_range_start_mismatch(self): self.assertRaises( AssertionError, Vault._range_string_to_part_index, '1-3', 4) def test_range_end_mismatch(self): # End mismatch is OK, since the last part might be short self.assertEquals( Vault._range_string_to_part_index('0-2', 4), 0) boto-2.20.1/tests/unit/glacier/test_utils.py000066400000000000000000000116441225267101000210410ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import time import logging from hashlib import sha256 from tests.unit import unittest from boto.glacier.utils import minimum_part_size, chunk_hashes, tree_hash, \ bytes_to_hex class TestPartSizeCalculations(unittest.TestCase): def test_small_values_still_use_default_part_size(self): self.assertEqual(minimum_part_size(1), 4 * 1024 * 1024) def test_under_the_maximum_value(self): # If we're under the maximum, we can use 4MB part sizes. self.assertEqual(minimum_part_size(8 * 1024 * 1024), 4 * 1024 * 1024) def test_gigabyte_size(self): # If we're over the maximum default part size, we go up to the next # power of two until we find a part size that keeps us under 10,000 # parts. self.assertEqual(minimum_part_size(8 * 1024 * 1024 * 10000), 8 * 1024 * 1024) def test_terabyte_size(self): # For a 4 TB file we need at least a 512 MB part size. self.assertEqual(minimum_part_size(4 * 1024 * 1024 * 1024 * 1024), 512 * 1024 * 1024) def test_file_size_too_large(self): with self.assertRaises(ValueError): minimum_part_size((40000 * 1024 * 1024 * 1024) + 1) def test_default_part_size_can_be_specified(self): default_part_size = 2 * 1024 * 1024 self.assertEqual(minimum_part_size(8 * 1024 * 1024, default_part_size), default_part_size) class TestChunking(unittest.TestCase): def test_chunk_hashes_exact(self): chunks = chunk_hashes('a' * (2 * 1024 * 1024)) self.assertEqual(len(chunks), 2) self.assertEqual(chunks[0], sha256('a' * 1024 * 1024).digest()) def test_chunks_with_leftovers(self): bytestring = 'a' * (2 * 1024 * 1024 + 20) chunks = chunk_hashes(bytestring) self.assertEqual(len(chunks), 3) self.assertEqual(chunks[0], sha256('a' * 1024 * 1024).digest()) self.assertEqual(chunks[1], sha256('a' * 1024 * 1024).digest()) self.assertEqual(chunks[2], sha256('a' * 20).digest()) def test_less_than_one_chunk(self): chunks = chunk_hashes('aaaa') self.assertEqual(len(chunks), 1) self.assertEqual(chunks[0], sha256('aaaa').digest()) class TestTreeHash(unittest.TestCase): # For these tests, a set of reference tree hashes were computed. # This will at least catch any regressions to the tree hash # calculations. 
    def calculate_tree_hash(self, bytestring):
        start = time.time()
        calculated = bytes_to_hex(tree_hash(chunk_hashes(bytestring)))
        end = time.time()
        logging.debug("Tree hash calc time for length %s: %s",
                      len(bytestring), end - start)
        return calculated

    def test_tree_hash_calculations(self):
        one_meg_bytestring = 'a' * (1 * 1024 * 1024)
        two_meg_bytestring = 'a' * (2 * 1024 * 1024)
        four_meg_bytestring = 'a' * (4 * 1024 * 1024)
        bigger_bytestring = four_meg_bytestring + 'a' * 20

        self.assertEqual(
            self.calculate_tree_hash(one_meg_bytestring),
            '9bc1b2a288b26af7257a36277ae3816a7d4f16e89c1e7e77d0a5c48bad62b360')
        self.assertEqual(
            self.calculate_tree_hash(two_meg_bytestring),
            '560c2c9333c719cb00cfdffee3ba293db17f58743cdd1f7e4055373ae6300afa')
        self.assertEqual(
            self.calculate_tree_hash(four_meg_bytestring),
            '9491cb2ed1d4e7cd53215f4017c23ec4ad21d7050a1e6bb636c4f67e8cddb844')
        self.assertEqual(
            self.calculate_tree_hash(bigger_bytestring),
            '12f3cbd6101b981cde074039f6f728071da8879d6f632de8afc7cdf00661b08f')

    def test_empty_tree_hash(self):
        self.assertEqual(
            self.calculate_tree_hash(''),
            'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855')
boto-2.20.1/tests/unit/glacier/test_vault.py000066400000000000000000000162621225267101000210350ustar00rootroot00000000000000#!/usr/bin/env python
# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
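# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original suite): the tree-hash tests
# above hash the payload in 1 MB chunks and then pair SHA-256 digests level
# by level.  The same helpers can be composed directly; this restates the
# 1 MB reference value from test_tree_hash_calculations.  The function name
# is hypothetical.
def _sketch_tree_hash():
    from boto.glacier.utils import bytes_to_hex, chunk_hashes, tree_hash
    data = 'a' * (1024 * 1024)  # exactly one 1 MB chunk
    # With a single chunk, the tree hash is just that chunk's SHA-256.
    hex_digest = bytes_to_hex(tree_hash(chunk_hashes(data)))
    assert hex_digest == ('9bc1b2a288b26af7257a36277ae3816a'
                          '7d4f16e89c1e7e77d0a5c48bad62b360')
# ---------------------------------------------------------------------------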
# import unittest from cStringIO import StringIO import mock from mock import ANY from boto.glacier import vault from boto.glacier.job import Job from boto.glacier.response import GlacierResponse class TestVault(unittest.TestCase): def setUp(self): self.size_patch = mock.patch('os.path.getsize') self.getsize = self.size_patch.start() self.api = mock.Mock() self.vault = vault.Vault(self.api, None) self.vault.name = 'myvault' self.mock_open = mock.mock_open() stringio = StringIO('content') self.mock_open.return_value.read = stringio.read def tearDown(self): self.size_patch.stop() def test_upload_archive_small_file(self): self.getsize.return_value = 1 self.api.upload_archive.return_value = {'ArchiveId': 'archive_id'} with mock.patch('boto.glacier.vault.open', self.mock_open, create=True): archive_id = self.vault.upload_archive( 'filename', 'my description') self.assertEqual(archive_id, 'archive_id') self.api.upload_archive.assert_called_with( 'myvault', self.mock_open.return_value, mock.ANY, mock.ANY, 'my description') def test_small_part_size_is_obeyed(self): self.vault.DefaultPartSize = 2 * 1024 * 1024 self.vault.create_archive_writer = mock.Mock() self.getsize.return_value = 1 with mock.patch('boto.glacier.vault.open', self.mock_open, create=True): self.vault.create_archive_from_file('myfile') # The write should be created with the default part size of the # instance (2 MB). self.vault.create_archive_writer.assert_called_with( description=mock.ANY, part_size=self.vault.DefaultPartSize) def test_large_part_size_is_obeyed(self): self.vault.DefaultPartSize = 8 * 1024 * 1024 self.vault.create_archive_writer = mock.Mock() self.getsize.return_value = 1 with mock.patch('boto.glacier.vault.open', self.mock_open, create=True): self.vault.create_archive_from_file('myfile') # The write should be created with the default part size of the # instance (8 MB). self.vault.create_archive_writer.assert_called_with( description=mock.ANY, part_size=self.vault.DefaultPartSize) def test_part_size_needs_to_be_adjusted(self): # If we have a large file (400 GB) self.getsize.return_value = 400 * 1024 * 1024 * 1024 self.vault.create_archive_writer = mock.Mock() # When we try to upload the file. with mock.patch('boto.glacier.vault.open', self.mock_open, create=True): self.vault.create_archive_from_file('myfile') # We should automatically bump up the part size used to # 64 MB. 
expected_part_size = 64 * 1024 * 1024 self.vault.create_archive_writer.assert_called_with( description=mock.ANY, part_size=expected_part_size) def test_retrieve_inventory(self): class FakeResponse(object): status = 202 def getheader(self, key, default=None): if key == 'x-amz-job-id': return 'HkF9p6' elif key == 'Content-Type': return 'application/json' return 'something' def read(self, amt=None): return """{ "Action": "ArchiveRetrieval", "ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-EXAMPLEArchiveId", "ArchiveSizeInBytes": 16777216, "ArchiveSHA256TreeHash": "beb0fe31a1c7ca8c6c04d574ea906e3f97", "Completed": false, "CreationDate": "2012-05-15T17:21:39.339Z", "CompletionDate": "2012-05-15T17:21:43.561Z", "InventorySizeInBytes": null, "JobDescription": "My ArchiveRetrieval Job", "JobId": "HkF9p6", "RetrievalByteRange": "0-16777215", "SHA256TreeHash": "beb0fe31a1c7ca8c6c04d574ea906e3f97b31fd", "SNSTopic": "arn:aws:sns:us-east-1:012345678901:mytopic", "StatusCode": "InProgress", "StatusMessage": "Operation in progress.", "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/examplevault" }""" raw_resp = FakeResponse() init_resp = GlacierResponse(raw_resp, [('x-amz-job-id', 'JobId')]) raw_resp_2 = FakeResponse() desc_resp = GlacierResponse(raw_resp_2, []) with mock.patch.object(self.vault.layer1, 'initiate_job', return_value=init_resp): with mock.patch.object(self.vault.layer1, 'describe_job', return_value=desc_resp): # The old/back-compat variant of the call. self.assertEqual(self.vault.retrieve_inventory(), 'HkF9p6') # The variant the returns a full ``Job`` object. job = self.vault.retrieve_inventory_job() self.assertTrue(isinstance(job, Job)) self.assertEqual(job.id, 'HkF9p6') class TestConcurrentUploads(unittest.TestCase): def test_concurrent_upload_file(self): v = vault.Vault(None, None) with mock.patch('boto.glacier.vault.ConcurrentUploader') as c: c.return_value.upload.return_value = 'archive_id' archive_id = v.concurrent_create_archive_from_file( 'filename', 'my description') c.return_value.upload.assert_called_with('filename', 'my description') self.assertEqual(archive_id, 'archive_id') def test_concurrent_upload_forwards_kwargs(self): v = vault.Vault(None, None) with mock.patch('boto.glacier.vault.ConcurrentUploader') as c: c.return_value.upload.return_value = 'archive_id' archive_id = v.concurrent_create_archive_from_file( 'filename', 'my description', num_threads=10, part_size=1024 * 1024 * 1024 * 8) c.assert_called_with(None, None, num_threads=10, part_size=1024 * 1024 * 1024 * 8) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/glacier/test_writer.py000066400000000000000000000202401225267101000212050ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
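# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original suite): every part that
# Writer uploads is identified by two digests -- a linear SHA-256 of the
# whole part and a tree hash of the part's chunks -- which is what
# calculate_mock_vault_calls() below rebuilds when checking the mocked
# upload_part calls.  The function name is hypothetical.
def _sketch_part_digests(part_data, chunk_size=1024 * 1024):
    from hashlib import sha256
    from boto.glacier.utils import bytes_to_hex, chunk_hashes, tree_hash
    # Linear hash over the raw part bytes.
    linear_hash = sha256(part_data).hexdigest()
    # Tree hash over the part's fixed-size chunks.
    part_tree_hash = bytes_to_hex(tree_hash(chunk_hashes(part_data,
                                                         chunk_size)))
    return linear_hash, part_tree_hash
# ---------------------------------------------------------------------------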
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from hashlib import sha256 import itertools from StringIO import StringIO from tests.unit import unittest from mock import ( call, Mock, patch, sentinel, ) from nose.tools import assert_equal from boto.glacier.layer1 import Layer1 from boto.glacier.vault import Vault from boto.glacier.writer import Writer, resume_file_upload from boto.glacier.utils import bytes_to_hex, chunk_hashes, tree_hash def create_mock_vault(): vault = Mock(spec=Vault) vault.layer1 = Mock(spec=Layer1) vault.layer1.complete_multipart_upload.return_value = dict( ArchiveId=sentinel.archive_id) vault.name = sentinel.vault_name return vault def partify(data, part_size): for i in itertools.count(0): start = i * part_size part = data[start:start+part_size] if part: yield part else: return def calculate_mock_vault_calls(data, part_size, chunk_size): upload_part_calls = [] data_tree_hashes = [] for i, data_part in enumerate(partify(data, part_size)): start = i * part_size end = start + len(data_part) data_part_tree_hash_blob = tree_hash( chunk_hashes(data_part, chunk_size)) data_part_tree_hash = bytes_to_hex(data_part_tree_hash_blob) data_part_linear_hash = sha256(data_part).hexdigest() upload_part_calls.append( call.layer1.upload_part( sentinel.vault_name, sentinel.upload_id, data_part_linear_hash, data_part_tree_hash, (start, end - 1), data_part)) data_tree_hashes.append(data_part_tree_hash_blob) return upload_part_calls, data_tree_hashes def check_mock_vault_calls(vault, upload_part_calls, data_tree_hashes, data_len): vault.layer1.upload_part.assert_has_calls( upload_part_calls, any_order=True) assert_equal( len(upload_part_calls), vault.layer1.upload_part.call_count) data_tree_hash = bytes_to_hex(tree_hash(data_tree_hashes)) vault.layer1.complete_multipart_upload.assert_called_once_with( sentinel.vault_name, sentinel.upload_id, data_tree_hash, data_len) class TestWriter(unittest.TestCase): def setUp(self): super(TestWriter, self).setUp() self.vault = create_mock_vault() self.chunk_size = 2 # power of 2 self.part_size = 4 # power of 2 upload_id = sentinel.upload_id self.writer = Writer( self.vault, upload_id, self.part_size, self.chunk_size) def check_write(self, write_list): for write_data in write_list: self.writer.write(write_data) self.writer.close() data = ''.join(write_list) upload_part_calls, data_tree_hashes = calculate_mock_vault_calls( data, self.part_size, self.chunk_size) check_mock_vault_calls( self.vault, upload_part_calls, data_tree_hashes, len(data)) def test_single_byte_write(self): self.check_write(['1']) def test_one_part_write(self): self.check_write(['1234']) def test_split_write_1(self): self.check_write(['1', '234']) def test_split_write_2(self): self.check_write(['12', '34']) def test_split_write_3(self): self.check_write(['123', '4']) def test_one_part_plus_one_write(self): self.check_write(['12345']) def test_returns_archive_id(self): self.writer.write('1') self.writer.close() self.assertEquals(sentinel.archive_id, self.writer.get_archive_id()) def test_current_tree_hash(self): self.writer.write('1234') self.writer.write('567') hash_1 = 
self.writer.current_tree_hash self.assertEqual(hash_1, '\x0e\xb0\x11Z\x1d\x1f\n\x10|\xf76\xa6\xf5' + '\x83\xd1\xd5"bU\x0c\x95\xa8<\xf5\x81\xef\x0e\x0f\x95\n\xb7k' ) # This hash will be different, since the content has changed. self.writer.write('22i3uy') hash_2 = self.writer.current_tree_hash self.assertEqual(hash_2, '\x7f\xf4\x97\x82U]\x81R\x05#^\xe8\x1c\xd19' + '\xe8\x1f\x9e\xe0\x1aO\xaad\xe5\x06"\xa5\xc0\xa8AdL' ) self.writer.close() # Check the final tree hash, post-close. final_hash = self.writer.current_tree_hash self.assertEqual(final_hash, ';\x1a\xb8!=\xf0\x14#\x83\x11\xd5\x0b\x0f' + '\xc7D\xe4\x8e\xd1W\x99z\x14\x06\xb9D\xd0\xf0*\x93\xa2\x8e\xf9' ) # Then assert we don't get a different one on a subsequent call. self.assertEqual(final_hash, self.writer.current_tree_hash) def test_current_uploaded_size(self): self.writer.write('1234') self.writer.write('567') size_1 = self.writer.current_uploaded_size self.assertEqual(size_1, 4) # This hash will be different, since the content has changed. self.writer.write('22i3uy') size_2 = self.writer.current_uploaded_size self.assertEqual(size_2, 12) self.writer.close() # Get the final size, post-close. final_size = self.writer.current_uploaded_size self.assertEqual(final_size, 13) # Then assert we don't get a different one on a subsequent call. self.assertEqual(final_size, self.writer.current_uploaded_size) def test_upload_id(self): self.assertEquals(sentinel.upload_id, self.writer.upload_id) class TestResume(unittest.TestCase): def setUp(self): super(TestResume, self).setUp() self.vault = create_mock_vault() self.chunk_size = 2 # power of 2 self.part_size = 4 # power of 2 def check_no_resume(self, data, resume_set=set()): fobj = StringIO(data) part_hash_map = {} for part_index in resume_set: start = self.part_size * part_index end = start + self.part_size part_data = data[start:end] part_hash_map[part_index] = tree_hash( chunk_hashes(part_data, self.chunk_size)) resume_file_upload( self.vault, sentinel.upload_id, self.part_size, fobj, part_hash_map, self.chunk_size) upload_part_calls, data_tree_hashes = calculate_mock_vault_calls( data, self.part_size, self.chunk_size) resume_upload_part_calls = [ call for part_index, call in enumerate(upload_part_calls) if part_index not in resume_set] check_mock_vault_calls( self.vault, resume_upload_part_calls, data_tree_hashes, len(data)) def test_one_part_no_resume(self): self.check_no_resume('1234') def test_two_parts_no_resume(self): self.check_no_resume('12345678') def test_one_part_resume(self): self.check_no_resume('1234', resume_set=set([0])) def test_two_parts_one_resume(self): self.check_no_resume('12345678', resume_set=set([1])) def test_returns_archive_id(self): archive_id = resume_file_upload( self.vault, sentinel.upload_id, self.part_size, StringIO('1'), {}, self.chunk_size) self.assertEquals(sentinel.archive_id, archive_id) boto-2.20.1/tests/unit/iam/000077500000000000000000000000001225267101000154225ustar00rootroot00000000000000boto-2.20.1/tests/unit/iam/__init__.py000066400000000000000000000000001225267101000175210ustar00rootroot00000000000000boto-2.20.1/tests/unit/iam/test_connection.py000066400000000000000000000145411225267101000211770ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
from tests.unit import unittest
from boto.iam.connection import IAMConnection
from tests.unit import AWSMockServiceTestCase


class TestCreateSamlProvider(AWSMockServiceTestCase):
    connection_class = IAMConnection

    def default_body(self):
        return """
        <CreateSAMLProviderResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
          <CreateSAMLProviderResult>
            <SAMLProviderArn>arn</SAMLProviderArn>
          </CreateSAMLProviderResult>
          <ResponseMetadata>
            <RequestId>29f47818-99f5-11e1-a4c3-27EXAMPLE804</RequestId>
          </ResponseMetadata>
        </CreateSAMLProviderResponse>
        """

    def test_create_saml_provider(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.create_saml_provider('document',
                                                                'name')
        self.assert_request_parameters(
            {'Action': 'CreateSAMLProvider',
             'SAMLMetadataDocument': 'document',
             'Name': 'name'},
            ignore_params_values=['Version'])
        self.assertEqual(response['create_saml_provider_response']\
                                 ['create_saml_provider_result']\
                                 ['saml_provider_arn'], 'arn')


class TestListSamlProviders(AWSMockServiceTestCase):
    connection_class = IAMConnection

    def default_body(self):
        return """
        <ListSAMLProvidersResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
          <ListSAMLProvidersResult>
            <SAMLProviderList>
              <member>
                <Arn>arn:aws:iam::123456789012:instance-profile/application_abc/component_xyz/Database</Arn>
                <ValidUntil>2032-05-09T16:27:11Z</ValidUntil>
                <CreateDate>2012-05-09T16:27:03Z</CreateDate>
              </member>
              <member>
                <Arn>arn:aws:iam::123456789012:instance-profile/application_abc/component_xyz/Webserver</Arn>
                <ValidUntil>2015-03-11T13:11:02Z</ValidUntil>
                <CreateDate>2012-05-09T16:27:11Z</CreateDate>
              </member>
            </SAMLProviderList>
          </ListSAMLProvidersResult>
          <ResponseMetadata>
            <RequestId>fd74fa8d-99f3-11e1-a4c3-27EXAMPLE804</RequestId>
          </ResponseMetadata>
        </ListSAMLProvidersResponse>
        """

    def test_list_saml_providers(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.list_saml_providers()
        self.assert_request_parameters(
            {'Action': 'ListSAMLProviders'},
            ignore_params_values=['Version'])


class TestGetSamlProvider(AWSMockServiceTestCase):
    connection_class = IAMConnection

    def default_body(self):
        return """
        <GetSAMLProviderResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
          <GetSAMLProviderResult>
            <CreateDate>2012-05-09T16:27:11Z</CreateDate>
            <ValidUntil>2015-12-31T211:59:59Z</ValidUntil>
            <SAMLMetadataDocument>Pd9fexDssTkRgGNqs...DxptfEs==</SAMLMetadataDocument>
          </GetSAMLProviderResult>
          <ResponseMetadata>
            <RequestId>29f47818-99f5-11e1-a4c3-27EXAMPLE804</RequestId>
          </ResponseMetadata>
        </GetSAMLProviderResponse>
        """

    def test_get_saml_provider(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.get_saml_provider('arn')
        self.assert_request_parameters(
            {
                'Action': 'GetSAMLProvider',
                'SAMLProviderArn': 'arn'
            },
            ignore_params_values=['Version'])


class TestUpdateSamlProvider(AWSMockServiceTestCase):
    connection_class = IAMConnection

    def default_body(self):
        return """
        <UpdateSAMLProviderResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
          <UpdateSAMLProviderResult>
            <SAMLProviderArn>arn:aws:iam::123456789012:saml-metadata/MyUniversity</SAMLProviderArn>
          </UpdateSAMLProviderResult>
          <ResponseMetadata>
            <RequestId>29f47818-99f5-11e1-a4c3-27EXAMPLE804</RequestId>
          </ResponseMetadata>
        </UpdateSAMLProviderResponse>
        """

    def test_update_saml_provider(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.update_saml_provider('arn', 'doc')
        self.assert_request_parameters(
            {
                'Action': 'UpdateSAMLProvider',
                'SAMLMetadataDocument': 'doc',
                'SAMLProviderArn': 'arn'
            },
            ignore_params_values=['Version'])


class TestDeleteSamlProvider(AWSMockServiceTestCase):
    connection_class = IAMConnection

    def default_body(self):
return "" def test_delete_saml_provider(self): self.set_http_response(status_code=200) response = self.service_connection.delete_saml_provider('arn') self.assert_request_parameters( { 'Action': 'DeleteSAMLProvider', 'SAMLProviderArn': 'arn' }, ignore_params_values=['Version']) boto-2.20.1/tests/unit/manage/000077500000000000000000000000001225267101000161045ustar00rootroot00000000000000boto-2.20.1/tests/unit/manage/__init__.py000066400000000000000000000000001225267101000202030ustar00rootroot00000000000000boto-2.20.1/tests/unit/manage/test_ssh.py000066400000000000000000000037311225267101000203160ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # try: import paramiko from boto.manage.cmdshell import SSHClient except ImportError: paramiko = None SSHClient = None import mock from tests.unit import unittest class TestSSHTimeout(unittest.TestCase): @unittest.skipIf(not paramiko, 'Paramiko missing') def test_timeout(self): client_tmp = paramiko.SSHClient def client_mock(): client = client_tmp() client.connect = mock.Mock(name='connect') return client paramiko.SSHClient = client_mock paramiko.RSAKey.from_private_key_file = mock.Mock() server = mock.Mock() test = SSHClient(server) self.assertEqual(test._ssh_client.connect.call_args[1]['timeout'], None) test2 = SSHClient(server, timeout=30) self.assertEqual(test2._ssh_client.connect.call_args[1]['timeout'], 30) boto-2.20.1/tests/unit/mws/000077500000000000000000000000001225267101000154625ustar00rootroot00000000000000boto-2.20.1/tests/unit/mws/__init__.py000066400000000000000000000000001225267101000175610ustar00rootroot00000000000000boto-2.20.1/tests/unit/mws/test_connection.py000077500000000000000000000065341225267101000212450ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from boto.mws.connection import MWSConnection, api_call_map from tests.unit import AWSMockServiceTestCase class TestMWSConnection(AWSMockServiceTestCase): connection_class = MWSConnection mws = True def default_body(self): return """ 2YgYW55IGNhcm5hbCBwbGVhc3VyZS4= true 2291326430 _POST_PRODUCT_DATA_ 2009-02-20T02:10:35+00:00 _SUBMITTED_ 1105b931-6f1c-4480-8e97-f3b467840a9e """ def test_built_api_call_map(self): # Ensure that the map is populated. # It starts empty, but the decorators should add to it as they're # applied. As of 2013/10/21, there were 52 calls (with more likely # to be added), so let's simply ensure there are enough there. self.assertTrue(len(api_call_map.keys()) > 50) def test_method_for(self): # First, ensure that the map is in "right enough" state. self.assertTrue('GetFeedSubmissionList' in api_call_map) # Make sure we can find the correct method. func = self.service_connection.method_for('GetFeedSubmissionList') # Ensure the right name was found. self.assertTrue(callable(func)) self.assertEqual(func, self.service_connection.get_feed_submission_list) # Check a non-existent action. func = self.service_connection.method_for('NotHereNorThere') self.assertEqual(func, None) def test_get_service_status(self): with self.assertRaises(AttributeError) as err: self.service_connection.get_service_status() self.assertTrue('products,' in str(err.exception)) self.assertTrue('inventory,' in str(err.exception)) self.assertTrue('feeds,' in str(err.exception)) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/mws/test_response.py000077500000000000000000000207021225267101000207350ustar00rootroot00000000000000#!/usr/bin/env python from boto.mws.connection import MWSConnection from boto.mws.response import (ResponseFactory, ResponseElement, Element, MemberList, ElementList, SimpleList) from tests.unit import AWSMockServiceTestCase class TestMWSResponse(AWSMockServiceTestCase): connection_class = MWSConnection mws = True def test_parsing_nested_elements(self): class Test9one(ResponseElement): Nest = Element() Zoom = Element() class Test9Result(ResponseElement): Item = Element(Test9one) text = """ Bar Zap Zoo Bam """ obj = self.issue_test('Test9', Test9Result, text) Item = obj._result.Item useful = lambda x: not x[0].startswith('_') nest = dict(filter(useful, Item.Nest.__dict__.items())) self.assertEqual(nest, dict(Zip='Zap', Zam='Zoo')) useful = lambda x: not x[0].startswith('_') and not x[0] == 'Nest' item = dict(filter(useful, Item.__dict__.items())) self.assertEqual(item, dict(Foo='Bar', Bif='Bam', Zoom=None)) def test_parsing_member_list_specification(self): class Test8extra(ResponseElement): Foo = SimpleList() class Test8Result(ResponseElement): Item = MemberList(SimpleList) Extra = MemberList(Test8extra) text = """ 0 1 2 3 45 67 """ obj = self.issue_test('Test8', Test8Result, text) self.assertSequenceEqual( map(int, obj._result.Item), range(4), ) self.assertSequenceEqual( map(lambda x: map(int, x.Foo), obj._result.Extra), [[4, 5], [], [6, 7]], ) def test_parsing_nested_lists(self): class Test7Result(ResponseElement): 
Item = MemberList(Nest=MemberList(), List=ElementList(Simple=SimpleList())) text = """ One 2 4 6 Two 1 3 5 4 5 6 7 8 9 Six Foo 1 2 3 Bar """ obj = self.issue_test('Test7', Test7Result, text) item = obj._result.Item self.assertEqual(len(item), 3) nests = [z.Nest for z in filter(lambda x: x.Nest, item)] self.assertSequenceEqual( [[y.Data for y in nest] for nest in nests], [[u'2', u'4', u'6'], [u'1', u'3', u'5']], ) self.assertSequenceEqual( [element.Simple for element in item[1].List], [[u'4', u'5', u'6'], [u'7', u'8', u'9']], ) self.assertSequenceEqual( item[-1].List[0].Simple, ['1', '2', '3'], ) self.assertEqual(item[-1].List[1].Simple, []) self.assertSequenceEqual( [e.Value for e in obj._result.Item], ['One', 'Two', 'Six'], ) def test_parsing_member_list(self): class Test6Result(ResponseElement): Item = MemberList() text = """ One Two Four Six """ obj = self.issue_test('Test6', Test6Result, text) self.assertSequenceEqual( [e.Value for e in obj._result.Item], ['One', 'Two', 'Six'], ) self.assertTrue(obj._result.Item[1].Error == 'Four') with self.assertRaises(AttributeError) as e: obj._result.Item[2].Error def test_parsing_empty_member_list(self): class Test5Result(ResponseElement): Item = MemberList(Nest=MemberList()) text = """ """ obj = self.issue_test('Test5', Test5Result, text) self.assertSequenceEqual(obj._result.Item, []) def test_parsing_missing_member_list(self): class Test4Result(ResponseElement): Item = MemberList(NestedItem=MemberList()) text = """ """ obj = self.issue_test('Test4', Test4Result, text) self.assertSequenceEqual(obj._result.Item, []) def test_parsing_element_lists(self): class Test1Result(ResponseElement): Item = ElementList() text = """ Bar Bif Baz Zoo """ obj = self.issue_test('Test1', Test1Result, text) self.assertTrue(len(obj._result.Item) == 3) elements = lambda x: getattr(x, 'Foo', getattr(x, 'Zip', '?')) elements = map(elements, obj._result.Item) self.assertSequenceEqual(elements, ['Bar', 'Bif', 'Baz']) def test_parsing_missing_lists(self): class Test2Result(ResponseElement): Item = ElementList() text = """ """ obj = self.issue_test('Test2', Test2Result, text) self.assertEqual(obj._result.Item, []) def test_parsing_simple_lists(self): class Test3Result(ResponseElement): Item = SimpleList() text = """ Bar Bif Baz """ obj = self.issue_test('Test3', Test3Result, text) self.assertSequenceEqual(obj._result.Item, ['Bar', 'Bif', 'Baz']) def issue_test(self, action, klass, text): cls = ResponseFactory(action, force=klass) return self.service_connection._parse_response(cls, text) if __name__ == "__main__": unittest.main() boto-2.20.1/tests/unit/provider/000077500000000000000000000000001225267101000165065ustar00rootroot00000000000000boto-2.20.1/tests/unit/provider/__init__.py000066400000000000000000000000001225267101000206050ustar00rootroot00000000000000boto-2.20.1/tests/unit/provider/test_provider.py000066400000000000000000000154431225267101000217600ustar00rootroot00000000000000#!/usr/bin/env python from datetime import datetime, timedelta from tests.unit import unittest import mock from boto import provider INSTANCE_CONFIG = { 'allowall': { u'AccessKeyId': u'iam_access_key', u'Code': u'Success', u'Expiration': u'2012-09-01T03:57:34Z', u'LastUpdated': u'2012-08-31T21:43:40Z', u'SecretAccessKey': u'iam_secret_key', u'Token': u'iam_token', u'Type': u'AWS-HMAC' } } class TestProvider(unittest.TestCase): def setUp(self): self.environ = {} self.config = {} self.metadata_patch = mock.patch('boto.utils.get_instance_metadata') self.config_patch = 
mock.patch('boto.provider.config.get', self.get_config) self.has_config_patch = mock.patch('boto.provider.config.has_option', self.has_config) self.environ_patch = mock.patch('os.environ', self.environ) self.get_instance_metadata = self.metadata_patch.start() self.config_patch.start() self.has_config_patch.start() self.environ_patch.start() def tearDown(self): self.metadata_patch.stop() self.config_patch.stop() self.has_config_patch.stop() self.environ_patch.stop() def has_config(self, section_name, key): try: self.config[section_name][key] return True except KeyError: return False def get_config(self, section_name, key): try: return self.config[section_name][key] except KeyError: return None def test_passed_in_values_are_used(self): p = provider.Provider('aws', 'access_key', 'secret_key', 'security_token') self.assertEqual(p.access_key, 'access_key') self.assertEqual(p.secret_key, 'secret_key') self.assertEqual(p.security_token, 'security_token') def test_environment_variables_are_used(self): self.environ['AWS_ACCESS_KEY_ID'] = 'env_access_key' self.environ['AWS_SECRET_ACCESS_KEY'] = 'env_secret_key' p = provider.Provider('aws') self.assertEqual(p.access_key, 'env_access_key') self.assertEqual(p.secret_key, 'env_secret_key') self.assertIsNone(p.security_token) def test_config_values_are_used(self): self.config = { 'Credentials': { 'aws_access_key_id': 'cfg_access_key', 'aws_secret_access_key': 'cfg_secret_key', } } p = provider.Provider('aws') self.assertEqual(p.access_key, 'cfg_access_key') self.assertEqual(p.secret_key, 'cfg_secret_key') self.assertIsNone(p.security_token) def test_keyring_is_used(self): self.config = { 'Credentials': { 'aws_access_key_id': 'cfg_access_key', 'keyring': 'test', } } import sys try: import keyring imported = True except ImportError: sys.modules['keyring'] = keyring = type(mock)('keyring', '') imported = False try: with mock.patch('keyring.get_password', create=True): keyring.get_password.side_effect = ( lambda kr, login: kr+login+'pw') p = provider.Provider('aws') self.assertEqual(p.access_key, 'cfg_access_key') self.assertEqual(p.secret_key, 'testcfg_access_keypw') self.assertIsNone(p.security_token) finally: if not imported: del sys.modules['keyring'] def test_env_vars_beat_config_values(self): self.environ['AWS_ACCESS_KEY_ID'] = 'env_access_key' self.environ['AWS_SECRET_ACCESS_KEY'] = 'env_secret_key' self.config = { 'Credentials': { 'aws_access_key_id': 'cfg_access_key', 'aws_secret_access_key': 'cfg_secret_key', } } p = provider.Provider('aws') self.assertEqual(p.access_key, 'env_access_key') self.assertEqual(p.secret_key, 'env_secret_key') self.assertIsNone(p.security_token) def test_metadata_server_credentials(self): self.get_instance_metadata.return_value = INSTANCE_CONFIG p = provider.Provider('aws') self.assertEqual(p.access_key, 'iam_access_key') self.assertEqual(p.secret_key, 'iam_secret_key') self.assertEqual(p.security_token, 'iam_token') self.assertEqual( self.get_instance_metadata.call_args[1]['data'], 'meta-data/iam/security-credentials/') def test_refresh_credentials(self): now = datetime.now() first_expiration = (now + timedelta(seconds=10)).strftime( "%Y-%m-%dT%H:%M:%SZ") credentials = { u'AccessKeyId': u'first_access_key', u'Code': u'Success', u'Expiration': first_expiration, u'LastUpdated': u'2012-08-31T21:43:40Z', u'SecretAccessKey': u'first_secret_key', u'Token': u'first_token', u'Type': u'AWS-HMAC' } instance_config = {'allowall': credentials} self.get_instance_metadata.return_value = instance_config p = provider.Provider('aws') 
self.assertEqual(p.access_key, 'first_access_key') self.assertEqual(p.secret_key, 'first_secret_key') self.assertEqual(p.security_token, 'first_token') self.assertIsNotNone(p._credential_expiry_time) # Now set the expiration to something in the past. expired = now - timedelta(seconds=20) p._credential_expiry_time = expired credentials['AccessKeyId'] = 'second_access_key' credentials['SecretAccessKey'] = 'second_secret_key' credentials['Token'] = 'second_token' self.get_instance_metadata.return_value = instance_config # Now upon attribute access, the credentials should be updated. self.assertEqual(p.access_key, 'second_access_key') self.assertEqual(p.secret_key, 'second_secret_key') self.assertEqual(p.security_token, 'second_token') @mock.patch('boto.provider.config.getint') @mock.patch('boto.provider.config.getfloat') def test_metadata_config_params(self, config_float, config_int): config_int.return_value = 10 config_float.return_value = 4.0 self.get_instance_metadata.return_value = INSTANCE_CONFIG p = provider.Provider('aws') self.assertEqual(p.access_key, 'iam_access_key') self.assertEqual(p.secret_key, 'iam_secret_key') self.assertEqual(p.security_token, 'iam_token') self.get_instance_metadata.assert_called_with( timeout=4.0, num_retries=10, data='meta-data/iam/security-credentials/') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/rds/000077500000000000000000000000001225267101000154445ustar00rootroot00000000000000boto-2.20.1/tests/unit/rds/__init__.py000066400000000000000000000000001225267101000175430ustar00rootroot00000000000000boto-2.20.1/tests/unit/rds/test_connection.py000066400000000000000000000646611225267101000212310ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.ec2.securitygroup import SecurityGroup from boto.rds import RDSConnection from boto.rds.vpcsecuritygroupmembership import VPCSecurityGroupMembership from boto.rds.parametergroup import ParameterGroup class TestRDSConnection(AWSMockServiceTestCase): connection_class = RDSConnection def setUp(self): super(TestRDSConnection, self).setUp() def default_body(self): return """ 2000 1 false backing-up mydbinstance2 10:30-11:00 wed:06:30-wed:07:00 default:mysql-5-5 in-sync us-west-2b mysql general-public-license in-sync default.mysql5.5 3306
mydbinstance2.c0hjqouvn9mf.us-west-2.rds.amazonaws.com
5.5.27 active default sg-1 active mydb2 true 2012-10-03T22:01:51.047Z 200 db.m1.large awsuser true replicating read replication 990524496922 Complete My modified DBSubnetGroup mydbsubnetgroup Active subnet-7c5b4115 us-east-1c Active subnet-7b5b4112 us-east-1b Active subnet-3ea6bd57 us-east-1d
""" def test_get_all_db_instances(self): self.set_http_response(status_code=200) response = self.service_connection.get_all_dbinstances('instance_id') self.assertEqual(len(response), 1) self.assert_request_parameters({ 'Action': 'DescribeDBInstances', 'DBInstanceIdentifier': 'instance_id', }, ignore_params_values=['Version']) db = response[0] self.assertEqual(db.id, 'mydbinstance2') self.assertEqual(db.create_time, '2012-10-03T22:01:51.047Z') self.assertEqual(db.engine, 'mysql') self.assertEqual(db.status, 'backing-up') self.assertEqual(db.allocated_storage, 200) self.assertEqual( db.endpoint, (u'mydbinstance2.c0hjqouvn9mf.us-west-2.rds.amazonaws.com', 3306)) self.assertEqual(db.instance_class, 'db.m1.large') self.assertEqual(db.master_username, 'awsuser') self.assertEqual(db.availability_zone, 'us-west-2b') self.assertEqual(db.backup_retention_period, 1) self.assertEqual(db.preferred_backup_window, '10:30-11:00') self.assertEqual(db.preferred_maintenance_window, 'wed:06:30-wed:07:00') self.assertEqual(db.latest_restorable_time, None) self.assertEqual(db.multi_az, False) self.assertEqual(db.iops, 2000) self.assertEqual(db.pending_modified_values, {}) self.assertEqual(db.parameter_group.name, 'default.mysql5.5') self.assertEqual(db.parameter_group.description, None) self.assertEqual(db.parameter_group.engine, None) self.assertEqual(db.security_group.owner_id, None) self.assertEqual(db.security_group.name, 'default') self.assertEqual(db.security_group.description, None) self.assertEqual(db.security_group.ec2_groups, []) self.assertEqual(db.security_group.ip_ranges, []) self.assertEqual(len(db.status_infos), 1) self.assertEqual(db.status_infos[0].message, '') self.assertEqual(db.status_infos[0].normal, True) self.assertEqual(db.status_infos[0].status, 'replicating') self.assertEqual(db.status_infos[0].status_type, 'read replication') self.assertEqual(db.vpc_security_groups[0].status, 'active') self.assertEqual(db.vpc_security_groups[0].vpc_group, 'sg-1') self.assertEqual(db.license_model, 'general-public-license') self.assertEqual(db.engine_version, '5.5.27') self.assertEqual(db.auto_minor_version_upgrade, True) self.assertEqual(db.subnet_group.name, 'mydbsubnetgroup') class TestRDSCCreateDBInstance(AWSMockServiceTestCase): connection_class = RDSConnection def setUp(self): super(TestRDSCCreateDBInstance, self).setUp() def default_body(self): return """ mysql **** 0 false general-public-license 990524496922 Complete description subnet_grp1 Active subnet-7c5b4115 us-east-1c Active subnet-7b5b4112 us-east-1b Active subnet-3ea6bd57 us-east-1d creating 5.1.50 simcoprod01 in-sync default.mysql5.1 active default 00:00-00:30 true sat:07:30-sat:08:00 10 db.m1.large master 2e5d4270-8501-11e0-bd9b-a7b1ece36d51 """ def test_create_db_instance_param_group_name(self): self.set_http_response(status_code=200) db = self.service_connection.create_dbinstance( 'SimCoProd01', 10, 'db.m1.large', 'master', 'Password01', param_group='default.mysql5.1', db_subnet_group_name='dbSubnetgroup01', backup_retention_period=0) self.assert_request_parameters({ 'Action': 'CreateDBInstance', 'AllocatedStorage': 10, 'AutoMinorVersionUpgrade': 'true', 'BackupRetentionPeriod': 0, 'DBInstanceClass': 'db.m1.large', 'DBInstanceIdentifier': 'SimCoProd01', 'DBParameterGroupName': 'default.mysql5.1', 'DBSubnetGroupName': 'dbSubnetgroup01', 'Engine': 'MySQL5.1', 'MasterUsername': 'master', 'MasterUserPassword': 'Password01', 'Port': 3306 }, ignore_params_values=['Version']) self.assertEqual(db.id, 'simcoprod01') 
self.assertEqual(db.engine, 'mysql') self.assertEqual(db.status, 'creating') self.assertEqual(db.allocated_storage, 10) self.assertEqual(db.instance_class, 'db.m1.large') self.assertEqual(db.master_username, 'master') self.assertEqual(db.multi_az, False) self.assertEqual(db.pending_modified_values, {'MasterUserPassword': '****'}) self.assertEqual(db.parameter_group.name, 'default.mysql5.1') self.assertEqual(db.parameter_group.description, None) self.assertEqual(db.parameter_group.engine, None) self.assertEqual(db.backup_retention_period, 0) def test_create_db_instance_param_group_instance(self): self.set_http_response(status_code=200) param_group = ParameterGroup() param_group.name = 'default.mysql5.1' db = self.service_connection.create_dbinstance( 'SimCoProd01', 10, 'db.m1.large', 'master', 'Password01', param_group=param_group, db_subnet_group_name='dbSubnetgroup01') self.assert_request_parameters({ 'Action': 'CreateDBInstance', 'AllocatedStorage': 10, 'AutoMinorVersionUpgrade': 'true', 'DBInstanceClass': 'db.m1.large', 'DBInstanceIdentifier': 'SimCoProd01', 'DBParameterGroupName': 'default.mysql5.1', 'DBSubnetGroupName': 'dbSubnetgroup01', 'Engine': 'MySQL5.1', 'MasterUsername': 'master', 'MasterUserPassword': 'Password01', 'Port': 3306, }, ignore_params_values=['Version']) self.assertEqual(db.id, 'simcoprod01') self.assertEqual(db.engine, 'mysql') self.assertEqual(db.status, 'creating') self.assertEqual(db.allocated_storage, 10) self.assertEqual(db.instance_class, 'db.m1.large') self.assertEqual(db.master_username, 'master') self.assertEqual(db.multi_az, False) self.assertEqual(db.pending_modified_values, {'MasterUserPassword': '****'}) self.assertEqual(db.parameter_group.name, 'default.mysql5.1') self.assertEqual(db.parameter_group.description, None) self.assertEqual(db.parameter_group.engine, None) class TestRDSConnectionRestoreDBInstanceFromPointInTime(AWSMockServiceTestCase): connection_class = RDSConnection def setUp(self): super(TestRDSConnectionRestoreDBInstanceFromPointInTime, self).setUp() def default_body(self): return """ mysql 1 false general-public-license creating 5.1.50 restored-db in-sync default.mysql5.1 active default 00:00-00:30 true sat:07:30-sat:08:00 10 db.m1.large master 1ef546bc-850b-11e0-90aa-eb648410240d """ def test_restore_dbinstance_from_point_in_time(self): self.set_http_response(status_code=200) db = self.service_connection.restore_dbinstance_from_point_in_time( 'simcoprod01', 'restored-db', True) self.assert_request_parameters({ 'Action': 'RestoreDBInstanceToPointInTime', 'SourceDBInstanceIdentifier': 'simcoprod01', 'TargetDBInstanceIdentifier': 'restored-db', 'UseLatestRestorableTime': 'true', }, ignore_params_values=['Version']) self.assertEqual(db.id, 'restored-db') self.assertEqual(db.engine, 'mysql') self.assertEqual(db.status, 'creating') self.assertEqual(db.allocated_storage, 10) self.assertEqual(db.instance_class, 'db.m1.large') self.assertEqual(db.master_username, 'master') self.assertEqual(db.multi_az, False) self.assertEqual(db.parameter_group.name, 'default.mysql5.1') self.assertEqual(db.parameter_group.description, None) self.assertEqual(db.parameter_group.engine, None) def test_restore_dbinstance_from_point_in_time__db_subnet_group_name(self): self.set_http_response(status_code=200) db = self.service_connection.restore_dbinstance_from_point_in_time( 'simcoprod01', 'restored-db', True, db_subnet_group_name='dbsubnetgroup') self.assert_request_parameters({ 'Action': 'RestoreDBInstanceToPointInTime', 'SourceDBInstanceIdentifier': 
'simcoprod01', 'TargetDBInstanceIdentifier': 'restored-db', 'UseLatestRestorableTime': 'true', 'DBSubnetGroupName': 'dbsubnetgroup', }, ignore_params_values=['Version']) def test_create_db_instance_vpc_sg_str(self): self.set_http_response(status_code=200) vpc_security_groups = [ VPCSecurityGroupMembership(self.service_connection, 'active', 'sg-1'), VPCSecurityGroupMembership(self.service_connection, None, 'sg-2')] db = self.service_connection.create_dbinstance( 'SimCoProd01', 10, 'db.m1.large', 'master', 'Password01', param_group='default.mysql5.1', db_subnet_group_name='dbSubnetgroup01', vpc_security_groups=vpc_security_groups) self.assert_request_parameters({ 'Action': 'CreateDBInstance', 'AllocatedStorage': 10, 'AutoMinorVersionUpgrade': 'true', 'DBInstanceClass': 'db.m1.large', 'DBInstanceIdentifier': 'SimCoProd01', 'DBParameterGroupName': 'default.mysql5.1', 'DBSubnetGroupName': 'dbSubnetgroup01', 'Engine': 'MySQL5.1', 'MasterUsername': 'master', 'MasterUserPassword': 'Password01', 'Port': 3306, 'VpcSecurityGroupIds.member.1': 'sg-1', 'VpcSecurityGroupIds.member.2': 'sg-2' }, ignore_params_values=['Version']) def test_create_db_instance_vpc_sg_obj(self): self.set_http_response(status_code=200) sg1 = SecurityGroup(name='sg-1') sg2 = SecurityGroup(name='sg-2') vpc_security_groups = [ VPCSecurityGroupMembership(self.service_connection, 'active', sg1.name), VPCSecurityGroupMembership(self.service_connection, None, sg2.name)] db = self.service_connection.create_dbinstance( 'SimCoProd01', 10, 'db.m1.large', 'master', 'Password01', param_group='default.mysql5.1', db_subnet_group_name='dbSubnetgroup01', vpc_security_groups=vpc_security_groups) self.assert_request_parameters({ 'Action': 'CreateDBInstance', 'AllocatedStorage': 10, 'AutoMinorVersionUpgrade': 'true', 'DBInstanceClass': 'db.m1.large', 'DBInstanceIdentifier': 'SimCoProd01', 'DBParameterGroupName': 'default.mysql5.1', 'DBSubnetGroupName': 'dbSubnetgroup01', 'Engine': 'MySQL5.1', 'MasterUsername': 'master', 'MasterUserPassword': 'Password01', 'Port': 3306, 'VpcSecurityGroupIds.member.1': 'sg-1', 'VpcSecurityGroupIds.member.2': 'sg-2' }, ignore_params_values=['Version']) class TestRDSOptionGroups(AWSMockServiceTestCase): connection_class = RDSConnection def setUp(self): super(TestRDSOptionGroups, self).setUp() def default_body(self): return """ 11.2 myoptiongroup oracle-se1 Test option group 11.2 default:oracle-se1-11-2 oracle-se1 Default Option Group. 
e4b234d9-84d5-11e1-87a6-71059839a52b """ def test_describe_option_groups(self): self.set_http_response(status_code=200) response = self.service_connection.describe_option_groups() self.assertEqual(len(response), 2) options = response[0] self.assertEqual(options.name, 'myoptiongroup') self.assertEqual(options.description, 'Test option group') self.assertEqual(options.engine_name, 'oracle-se1') self.assertEqual(options.major_engine_version, '11.2') options = response[1] self.assertEqual(options.name, 'default:oracle-se1-11-2') self.assertEqual(options.description, 'Default Option Group.') self.assertEqual(options.engine_name, 'oracle-se1') self.assertEqual(options.major_engine_version, '11.2') class TestRDSOptionGroupOptions(AWSMockServiceTestCase): connection_class = RDSConnection def setUp(self): super(TestRDSOptionGroupOptions, self).setUp() def default_body(self): return """ 11.2 true Oracle Enterprise Manager 1158 OEM oracle-se1 0.2.v3 false false d9c8f6a1-84c7-11e1-a264-0b23c28bc344 """ def test_describe_option_group_options(self): self.set_http_response(status_code=200) response = self.service_connection.describe_option_group_options() self.assertEqual(len(response), 1) options = response[0] self.assertEqual(options.name, 'OEM') self.assertEqual(options.description, 'Oracle Enterprise Manager') self.assertEqual(options.engine_name, 'oracle-se1') self.assertEqual(options.major_engine_version, '11.2') self.assertEqual(options.min_minor_engine_version, '0.2.v3') self.assertEqual(options.port_required, True) self.assertEqual(options.default_port, 1158) self.assertEqual(options.permanent, False) self.assertEqual(options.persistent, False) self.assertEqual(options.depends_on, []) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/rds/test_snapshot.py000066400000000000000000000341551225267101000207240ustar00rootroot00000000000000from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.rds import RDSConnection from boto.rds.dbsnapshot import DBSnapshot from boto.rds import DBInstance class TestDescribeDBSnapshots(AWSMockServiceTestCase): connection_class = RDSConnection def default_body(self): return """ 3306 2011-05-23T06:29:03.483Z mysql available us-east-1a general-public-license 2011-05-23T06:06:43.110Z 10 simcoprod01 5.1.50 mydbsnapshot manual master myoptiongroupname 1000 100 eu-west-1 myvpc 3306 2011-03-11T07:20:24.082Z mysql available us-east-1a general-public-license 2010-08-04T23:27:36.420Z 50 mydbinstance 5.1.49 mysnapshot1 manual sa myoptiongroupname 1000 3306 2012-04-02T00:01:24.082Z mysql available us-east-1d general-public-license 2010-07-16T00:06:59.107Z 60 simcoprod01 5.1.47 rds:simcoprod01-2012-04-02-00-01 automated master myoptiongroupname 1000 c4191173-8506-11e0-90aa-eb648410240d """ def test_describe_dbinstances_by_instance(self): self.set_http_response(status_code=200) response = self.service_connection.get_all_dbsnapshots(instance_id='simcoprod01') self.assert_request_parameters({ 'Action': 'DescribeDBSnapshots', 'DBInstanceIdentifier': 'simcoprod01' }, ignore_params_values=['Version']) self.assertEqual(len(response), 3) self.assertIsInstance(response[0], DBSnapshot) self.assertEqual(response[0].id, 'mydbsnapshot') self.assertEqual(response[0].status, 'available') self.assertEqual(response[0].instance_id, 'simcoprod01') self.assertEqual(response[0].engine_version, '5.1.50') self.assertEqual(response[0].license_model, 'general-public-license') self.assertEqual(response[0].iops, 1000) 
self.assertEqual(response[0].option_group_name, 'myoptiongroupname') self.assertEqual(response[0].percent_progress, 100) self.assertEqual(response[0].snapshot_type, 'manual') self.assertEqual(response[0].source_region, 'eu-west-1') self.assertEqual(response[0].vpc_id, 'myvpc') class TestCreateDBSnapshot(AWSMockServiceTestCase): connection_class = RDSConnection def default_body(self): return """ 3306 mysql creating us-east-1a general-public-license 2011-05-23T06:06:43.110Z 10 simcoprod01 5.1.50 mydbsnapshot manual master c4181d1d-8505-11e0-90aa-eb648410240d """ def test_create_dbinstance(self): self.set_http_response(status_code=200) response = self.service_connection.create_dbsnapshot('mydbsnapshot', 'simcoprod01') self.assert_request_parameters({ 'Action': 'CreateDBSnapshot', 'DBSnapshotIdentifier': 'mydbsnapshot', 'DBInstanceIdentifier': 'simcoprod01' }, ignore_params_values=['Version']) self.assertIsInstance(response, DBSnapshot) self.assertEqual(response.id, 'mydbsnapshot') self.assertEqual(response.instance_id, 'simcoprod01') self.assertEqual(response.status, 'creating') class TestCopyDBSnapshot(AWSMockServiceTestCase): connection_class = RDSConnection def default_body(self): return """ 3306 mysql available us-east-1a general-public-license 2011-05-23T06:06:43.110Z 10 simcoprod01 5.1.50 mycopieddbsnapshot manual master c4181d1d-8505-11e0-90aa-eb648410240d """ def test_copy_dbinstance(self): self.set_http_response(status_code=200) response = self.service_connection.copy_dbsnapshot('myautomaticdbsnapshot', 'mycopieddbsnapshot') self.assert_request_parameters({ 'Action': 'CopyDBSnapshot', 'SourceDBSnapshotIdentifier': 'myautomaticdbsnapshot', 'TargetDBSnapshotIdentifier': 'mycopieddbsnapshot' }, ignore_params_values=['Version']) self.assertIsInstance(response, DBSnapshot) self.assertEqual(response.id, 'mycopieddbsnapshot') self.assertEqual(response.status, 'available') class TestDeleteDBSnapshot(AWSMockServiceTestCase): connection_class = RDSConnection def default_body(self): return """ 3306 2011-03-11T07:20:24.082Z mysql deleted us-east-1d general-public-license 2010-07-16T00:06:59.107Z 60 simcoprod01 5.1.47 mysnapshot2 manual master 627a43a1-8507-11e0-bd9b-a7b1ece36d51 """ def test_delete_dbinstance(self): self.set_http_response(status_code=200) response = self.service_connection.delete_dbsnapshot('mysnapshot2') self.assert_request_parameters({ 'Action': 'DeleteDBSnapshot', 'DBSnapshotIdentifier': 'mysnapshot2' }, ignore_params_values=['Version']) self.assertIsInstance(response, DBSnapshot) self.assertEqual(response.id, 'mysnapshot2') self.assertEqual(response.status, 'deleted') class TestRestoreDBInstanceFromDBSnapshot(AWSMockServiceTestCase): connection_class = RDSConnection def default_body(self): return """ mysql 1 false general-public-license creating 5.1.50 myrestoreddbinstance in-sync default.mysql5.1 active default 00:00-00:30 true sat:07:30-sat:08:00 10 db.m1.large master 7ca622e8-8508-11e0-bd9b-a7b1ece36d51 """ def test_restore_dbinstance_from_dbsnapshot(self): self.set_http_response(status_code=200) response = self.service_connection.restore_dbinstance_from_dbsnapshot('mydbsnapshot', 'myrestoreddbinstance', 'db.m1.large', '3306', 'us-east-1a', 'false', 'true') self.assert_request_parameters({ 'Action': 'RestoreDBInstanceFromDBSnapshot', 'DBSnapshotIdentifier': 'mydbsnapshot', 'DBInstanceIdentifier': 'myrestoreddbinstance', 'DBInstanceClass': 'db.m1.large', 'Port': '3306', 'AvailabilityZone': 'us-east-1a', 'MultiAZ': 'false', 'AutoMinorVersionUpgrade': 'true' }, 
ignore_params_values=['Version']) self.assertIsInstance(response, DBInstance) self.assertEqual(response.id, 'myrestoreddbinstance') self.assertEqual(response.status, 'creating') self.assertEqual(response.instance_class, 'db.m1.large') self.assertEqual(response.multi_az, False) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/route53/000077500000000000000000000000001225267101000161625ustar00rootroot00000000000000boto-2.20.1/tests/unit/route53/__init__.py000066400000000000000000000000001225267101000202610ustar00rootroot00000000000000boto-2.20.1/tests/unit/route53/test_connection.py000066400000000000000000000055641225267101000217440ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # import mock from boto.exception import BotoServerError from boto.route53.connection import Route53Connection from boto.route53.exception import DNSServerError from tests.unit import unittest from tests.unit import AWSMockServiceTestCase class TestRoute53Connection(AWSMockServiceTestCase): connection_class = Route53Connection def setUp(self): super(TestRoute53Connection, self).setUp() self.calls = { 'count': 0, } def default_body(self): return """ It failed. """ def test_typical_400(self): self.set_http_response(status_code=400, header=[ ['Code', 'Throttling'], ]) with self.assertRaises(DNSServerError) as err: self.service_connection.get_all_hosted_zones() self.assertTrue('It failed.' in str(err.exception)) @mock.patch('time.sleep') def test_retryable_400(self, sleep_mock): self.set_http_response(status_code=400, header=[ ['Code', 'PriorRequestNotComplete'], ]) def incr_retry_handler(func): def _wrapper(*args, **kwargs): self.calls['count'] += 1 return func(*args, **kwargs) return _wrapper # Patch. orig_retry = self.service_connection._retry_handler self.service_connection._retry_handler = incr_retry_handler( orig_retry ) self.assertEqual(self.calls['count'], 0) # Retries get exhausted. with self.assertRaises(BotoServerError): self.service_connection.get_all_hosted_zones() self.assertEqual(self.calls['count'], 7) # Unpatch. 
self.service_connection._retry_handler = orig_retry boto-2.20.1/tests/unit/s3/000077500000000000000000000000001225267101000152015ustar00rootroot00000000000000boto-2.20.1/tests/unit/s3/__init__.py000066400000000000000000000000001225267101000173000ustar00rootroot00000000000000boto-2.20.1/tests/unit/s3/test_bucket.py000066400000000000000000000103061225267101000200670ustar00rootroot00000000000000# -*- coding: utf-8 -*- from mock import patch from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.s3.connection import S3Connection from boto.s3.bucket import Bucket class TestS3Bucket(AWSMockServiceTestCase): connection_class = S3Connection def setUp(self): super(TestS3Bucket, self).setUp() def test_bucket_create_bucket(self): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket_create') self.assertEqual(bucket.name, 'mybucket_create') def test_bucket_constructor(self): self.set_http_response(status_code=200) bucket = Bucket(self.service_connection, 'mybucket_constructor') self.assertEqual(bucket.name, 'mybucket_constructor') def test_bucket_basics(self): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket') self.assertEqual(bucket.__repr__(), '') def test_bucket_new_key(self): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket') key = bucket.new_key('mykey') self.assertEqual(key.bucket, bucket) self.assertEqual(key.key, 'mykey') def test_bucket_new_key_missing_name(self): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket') with self.assertRaises(ValueError): key = bucket.new_key('') def test_bucket_delete_key_missing_name(self): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket') with self.assertRaises(ValueError): key = bucket.delete_key('') def test_bucket_kwargs_misspelling(self): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket') with self.assertRaises(TypeError): bucket.get_all_keys(delimeter='foo') def test__get_all_query_args(self): bukket = Bucket() # Default. qa = bukket._get_all_query_args({}) self.assertEqual(qa, '') # Default with initial. qa = bukket._get_all_query_args({}, 'initial=1') self.assertEqual(qa, 'initial=1') # Single param. qa = bukket._get_all_query_args({ 'foo': 'true' }) self.assertEqual(qa, 'foo=true') # Single param with initial. qa = bukket._get_all_query_args({ 'foo': 'true' }, 'initial=1') self.assertEqual(qa, 'initial=1&foo=true') # Multiple params with all the weird cases. multiple_params = { 'foo': 'true', # Ensure Unicode chars get encoded. 'bar': '☃', # Underscores are bad, m'kay? 'some_other': 'thing', # Change the variant of ``max-keys``. 'maxkeys': 0, # ``None`` values get excluded. 'notthere': None, # Empty values also get excluded. 'notpresenteither': '', } qa = bukket._get_all_query_args(multiple_params) self.assertEqual( qa, 'bar=%E2%98%83&max-keys=0&foo=true&some-other=thing' ) # Multiple params with initial. 
qa = bukket._get_all_query_args(multiple_params, 'initial=1') self.assertEqual( qa, 'initial=1&bar=%E2%98%83&max-keys=0&foo=true&some-other=thing' ) @patch.object(Bucket, 'get_all_keys') def test_bucket_copy_key_no_validate(self, mock_get_all_keys): self.set_http_response(status_code=200) bucket = self.service_connection.create_bucket('mybucket') self.assertFalse(mock_get_all_keys.called) self.service_connection.get_bucket('mybucket', validate=True) self.assertTrue(mock_get_all_keys.called) mock_get_all_keys.reset_mock() self.assertFalse(mock_get_all_keys.called) try: bucket.copy_key('newkey', 'srcbucket', 'srckey', preserve_acl=True) except: # Will throw because of empty response. pass self.assertFalse(mock_get_all_keys.called) boto-2.20.1/tests/unit/s3/test_cors_configuration.py000066400000000000000000000046351225267101000225170ustar00rootroot00000000000000#!/usr/bin/env python import unittest from boto.s3.cors import CORSConfiguration CORS_BODY_1 = ( '' '' 'PUT' 'POST' 'DELETE' 'http://www.example.com' '*' 'x-amz-server-side-encryption' '3000' 'foobar_rule' '' '') CORS_BODY_2 = ( '' '' 'PUT' 'POST' 'DELETE' 'http://www.example.com' '*' 'x-amz-server-side-encryption' '3000' '' '' 'GET' '*' '*' '3000' '' '') CORS_BODY_3 = ( '' '' 'GET' '*' '' '') class TestCORSConfiguration(unittest.TestCase): def test_one_rule_with_id(self): cfg = CORSConfiguration() cfg.add_rule(['PUT', 'POST', 'DELETE'], 'http://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption', id='foobar_rule') self.assertEqual(cfg.to_xml(), CORS_BODY_1) def test_two_rules(self): cfg = CORSConfiguration() cfg.add_rule(['PUT', 'POST', 'DELETE'], 'http://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption') cfg.add_rule('GET', '*', allowed_header='*', max_age_seconds=3000) self.assertEqual(cfg.to_xml(), CORS_BODY_2) def test_minimal(self): cfg = CORSConfiguration() cfg.add_rule('GET', '*') self.assertEqual(cfg.to_xml(), CORS_BODY_3) if __name__ == "__main__": unittest.main() boto-2.20.1/tests/unit/s3/test_key.py000066400000000000000000000135531225267101000174110ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
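The CORS tests above exercise CORSConfiguration.add_rule() and to_xml(). A minimal usage sketch follows; the rule values are taken from the tests, while the final set_cors() call and bucket object are illustrative assumptions:

from boto.s3.cors import CORSConfiguration

cors = CORSConfiguration()
cors.add_rule(['PUT', 'POST', 'DELETE'], 'http://www.example.com',
              allowed_header='*', max_age_seconds=3000,
              expose_header='x-amz-server-side-encryption')
print cors.to_xml()  # Serializes the rule set, as asserted in the tests above.
# In live code the configuration would be applied with bucket.set_cors(cors).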
#
from __future__ import with_statement

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

import mock

from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.exception import BotoServerError
from boto.s3.connection import S3Connection
from boto.s3.bucket import Bucket
from boto.s3.key import Key


class TestS3Key(AWSMockServiceTestCase):
    connection_class = S3Connection

    def setUp(self):
        super(TestS3Key, self).setUp()

    def default_body(self):
        return "default body"

    def test_when_no_restore_header_present(self):
        self.set_http_response(status_code=200)
        b = Bucket(self.service_connection, 'mybucket')
        k = b.get_key('myglacierkey')
        self.assertIsNone(k.ongoing_restore)
        self.assertIsNone(k.expiry_date)

    def test_restore_header_with_ongoing_restore(self):
        self.set_http_response(
            status_code=200,
            header=[('x-amz-restore', 'ongoing-request="true"')])
        b = Bucket(self.service_connection, 'mybucket')
        k = b.get_key('myglacierkey')
        self.assertTrue(k.ongoing_restore)
        self.assertIsNone(k.expiry_date)

    def test_restore_completed(self):
        self.set_http_response(
            status_code=200,
            header=[('x-amz-restore',
                     'ongoing-request="false", '
                     'expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"')])
        b = Bucket(self.service_connection, 'mybucket')
        k = b.get_key('myglacierkey')
        self.assertFalse(k.ongoing_restore)
        self.assertEqual(k.expiry_date, 'Fri, 21 Dec 2012 00:00:00 GMT')

    def test_delete_key_return_key(self):
        self.set_http_response(status_code=204, body='')
        b = Bucket(self.service_connection, 'mybucket')
        key = b.delete_key('fookey')
        self.assertIsNotNone(key)


def counter(fn):
    def _wrapper(*args, **kwargs):
        _wrapper.count += 1
        return fn(*args, **kwargs)
    _wrapper.count = 0
    return _wrapper


class TestS3KeyRetries(AWSMockServiceTestCase):
    connection_class = S3Connection

    @mock.patch('time.sleep')
    def test_500_retry(self, sleep_mock):
        self.set_http_response(status_code=500)
        b = Bucket(self.service_connection, 'mybucket')
        k = b.new_key('test_failure')
        fail_file = StringIO('This will attempt to retry.')

        with self.assertRaises(BotoServerError):
            k.send_file(fail_file)

    @mock.patch('time.sleep')
    def test_400_timeout(self, sleep_mock):
        weird_timeout_body = "<Error><Code>RequestTimeout</Code></Error>"
        self.set_http_response(status_code=400, body=weird_timeout_body)
        b = Bucket(self.service_connection, 'mybucket')
        k = b.new_key('test_failure')
        fail_file = StringIO('This will pretend to be chunk-able.')

        k.should_retry = counter(k.should_retry)
        self.assertEqual(k.should_retry.count, 0)
        with self.assertRaises(BotoServerError):
            k.send_file(fail_file)
        # should_retry must have been consulted at least once.
        self.assertTrue(k.should_retry.count >= 1)

    @mock.patch('time.sleep')
    def test_502_bad_gateway(self, sleep_mock):
        weird_timeout_body = "<Error><Code>BadGateway</Code></Error>"
        self.set_http_response(status_code=502, body=weird_timeout_body)
        b = Bucket(self.service_connection, 'mybucket')
        k = b.new_key('test_failure')
        fail_file = StringIO('This will pretend to be chunk-able.')

        k.should_retry = counter(k.should_retry)
        self.assertEqual(k.should_retry.count, 0)
        with self.assertRaises(BotoServerError):
            k.send_file(fail_file)
        self.assertTrue(k.should_retry.count >= 1)

    @mock.patch('time.sleep')
    def test_504_gateway_timeout(self, sleep_mock):
        weird_timeout_body = "<Error><Code>GatewayTimeout</Code></Error>"
        self.set_http_response(status_code=504, body=weird_timeout_body)
        b = Bucket(self.service_connection, 'mybucket')
        k = b.new_key('test_failure')
        fail_file = StringIO('This will pretend to be chunk-able.')

        k.should_retry = counter(k.should_retry)
        self.assertEqual(k.should_retry.count, 0)
        with self.assertRaises(BotoServerError):
            k.send_file(fail_file)
        self.assertTrue(k.should_retry.count >= 1)


class TestFileError(unittest.TestCase):
    def test_file_error(self):
        key = Key()

        class CustomException(Exception):
            pass

        key.get_contents_to_file = mock.Mock(
            side_effect=CustomException('File blew up!'))

        # Ensure our exception gets raised instead of a file or IO error.
        with self.assertRaises(CustomException):
            key.get_contents_to_filename('foo.txt')


if __name__ == '__main__':
    unittest.main()
boto-2.20.1/tests/unit/s3/test_keyfile.py000066400000000000000000000076041225267101000202510ustar00rootroot00000000000000# Copyright 2013 Google Inc.
# Copyright 2011, Nexenta Systems Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

import os
import unittest

from boto.s3.keyfile import KeyFile
from tests.integration.s3.mock_storage_service import MockConnection
from tests.integration.s3.mock_storage_service import MockBucket


class KeyfileTest(unittest.TestCase):

    def setUp(self):
        service_connection = MockConnection()
        self.contents = '0123456789'
        bucket = MockBucket(service_connection, 'mybucket')
        key = bucket.new_key('mykey')
        key.set_contents_from_string(self.contents)
        self.keyfile = KeyFile(key)

    def tearDown(self):
        self.keyfile.close()

    def testReadFull(self):
        self.assertEqual(self.keyfile.read(len(self.contents)), self.contents)

    def testReadPartial(self):
        self.assertEqual(self.keyfile.read(5), self.contents[:5])
        self.assertEqual(self.keyfile.read(5), self.contents[5:])

    def testTell(self):
        self.assertEqual(self.keyfile.tell(), 0)
        self.keyfile.read(4)
        self.assertEqual(self.keyfile.tell(), 4)
        self.keyfile.read(6)
        self.assertEqual(self.keyfile.tell(), 10)
        self.keyfile.close()
        try:
            self.keyfile.tell()
            self.fail('tell() on a closed KeyFile should raise ValueError')
        except ValueError, e:
            self.assertEqual(str(e), 'I/O operation on closed file')

    def testSeek(self):
        self.assertEqual(self.keyfile.read(4), self.contents[:4])
        self.keyfile.seek(0)
        self.assertEqual(self.keyfile.read(4), self.contents[:4])
        self.keyfile.seek(5)
        self.assertEqual(self.keyfile.read(5), self.contents[5:])

        # Seeking negative should raise.
        try:
            self.keyfile.seek(-5)
            self.fail('seek() to a negative offset should raise IOError')
        except IOError, e:
            self.assertEqual(str(e), 'Invalid argument')

        # Reading past end of file is supposed to return empty string.
        self.keyfile.read(10)
        self.assertEqual(self.keyfile.read(20), '')

        # Seeking past end of file is supposed to silently work.
        self.keyfile.seek(50)
        self.assertEqual(self.keyfile.tell(), 50)
        self.assertEqual(self.keyfile.read(1), '')

    def testSeekEnd(self):
        self.assertEqual(self.keyfile.read(4), self.contents[:4])
        self.keyfile.seek(0, os.SEEK_END)
        self.assertEqual(self.keyfile.read(1), '')
        self.keyfile.seek(-1, os.SEEK_END)
        self.assertEqual(self.keyfile.tell(), 9)
        self.assertEqual(self.keyfile.read(1), '9')

        # Test attempt to seek backwards past the start from the end.
        try:
            self.keyfile.seek(-100, os.SEEK_END)
            self.fail('seek() before the start of the file should raise IOError')
        except IOError, e:
            self.assertEqual(str(e), 'Invalid argument')

    def testSeekCur(self):
        self.assertEqual(self.keyfile.read(1), self.contents[0])
        self.keyfile.seek(1, os.SEEK_CUR)
        self.assertEqual(self.keyfile.tell(), 2)
        self.assertEqual(self.keyfile.read(4), self.contents[2:6])
boto-2.20.1/tests/unit/s3/test_lifecycle.py000066400000000000000000000075521225267101000205600ustar00rootroot00000000000000# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
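For reference, a self-contained sketch of the file-like KeyFile interface exercised by KeyfileTest above, built on the same mock storage service the tests import (the bucket and key names here are illustrative):

import os

from boto.s3.keyfile import KeyFile
from tests.integration.s3.mock_storage_service import MockConnection
from tests.integration.s3.mock_storage_service import MockBucket

bucket = MockBucket(MockConnection(), 'examplebucket')
key = bucket.new_key('examplekey')
key.set_contents_from_string('0123456789')

kf = KeyFile(key)
assert kf.read(4) == '0123'   # sequential reads advance the offset
kf.seek(-1, os.SEEK_END)      # seek relative to the end, like a real file
assert kf.read(1) == '9'
kf.close()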
# from tests.unit import AWSMockServiceTestCase from boto.s3.connection import S3Connection from boto.s3.bucket import Bucket from boto.s3.lifecycle import Rule, Lifecycle, Transition class TestS3LifeCycle(AWSMockServiceTestCase): connection_class = S3Connection def default_body(self): return """ rule-1 prefix/foo Enabled 30 GLACIER 365 rule-2 prefix/bar Disabled 2012-12-31T00:00:000Z GLACIER """ def test_parse_lifecycle_response(self): self.set_http_response(status_code=200) bucket = Bucket(self.service_connection, 'mybucket') response = bucket.get_lifecycle_config() self.assertEqual(len(response), 2) rule = response[0] self.assertEqual(rule.id, 'rule-1') self.assertEqual(rule.prefix, 'prefix/foo') self.assertEqual(rule.status, 'Enabled') self.assertEqual(rule.expiration.days, 365) self.assertIsNone(rule.expiration.date) transition = rule.transition self.assertEqual(transition.days, 30) self.assertEqual(transition.storage_class, 'GLACIER') self.assertEqual(response[1].transition.date, '2012-12-31T00:00:000Z') def test_expiration_with_no_transition(self): lifecycle = Lifecycle() lifecycle.add_rule('myid', 'prefix', 'Enabled', 30) xml = lifecycle.to_xml() self.assertIn('30', xml) def test_expiration_is_optional(self): t = Transition(days=30, storage_class='GLACIER') r = Rule('myid', 'prefix', 'Enabled', expiration=None, transition=t) xml = r.to_xml() self.assertIn( 'GLACIER30', xml) def test_expiration_with_expiration_and_transition(self): t = Transition(date='2012-11-30T00:00:000Z', storage_class='GLACIER') r = Rule('myid', 'prefix', 'Enabled', expiration=30, transition=t) xml = r.to_xml() self.assertIn( 'GLACIER' '2012-11-30T00:00:000Z', xml) self.assertIn('30', xml) boto-2.20.1/tests/unit/s3/test_tagging.py000066400000000000000000000027661225267101000202450ustar00rootroot00000000000000from tests.unit import AWSMockServiceTestCase from boto.s3.connection import S3Connection from boto.s3.bucket import Bucket from boto.s3.tagging import Tag class TestS3Tagging(AWSMockServiceTestCase): connection_class = S3Connection def default_body(self): return """ Project Project One User jsmith """ def test_parse_tagging_response(self): self.set_http_response(status_code=200) b = Bucket(self.service_connection, 'mybucket') api_response = b.get_tags() # The outer list is a list of tag sets. self.assertEqual(len(api_response), 1) # The inner list is a list of tags. self.assertEqual(len(api_response[0]), 2) self.assertEqual(api_response[0][0].key, 'Project') self.assertEqual(api_response[0][0].value, 'Project One') self.assertEqual(api_response[0][1].key, 'User') self.assertEqual(api_response[0][1].value, 'jsmith') def test_tag_equality(self): t1 = Tag('foo', 'bar') t2 = Tag('foo', 'bar') t3 = Tag('foo', 'baz') t4 = Tag('baz', 'bar') self.assertEqual(t1, t2) self.assertNotEqual(t1, t3) self.assertNotEqual(t1, t4) boto-2.20.1/tests/unit/s3/test_uri.py000066400000000000000000000302521225267101000174130ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Google, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import boto import tempfile import urllib from boto.exception import InvalidUriError from boto import storage_uri from boto.s3.keyfile import KeyFile from tests.integration.s3.mock_storage_service import MockBucket from tests.integration.s3.mock_storage_service import MockBucketStorageUri from tests.integration.s3.mock_storage_service import MockConnection from tests.unit import unittest """Unit tests for StorageUri interface.""" class UriTest(unittest.TestCase): def test_provider_uri(self): for prov in ('gs', 's3'): uri_str = '%s://' % prov uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual(prov, uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertFalse(hasattr(uri, 'versionless_uri')) self.assertEqual('', uri.bucket_name) self.assertEqual('', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(None, uri.generation) self.assertEqual(uri.names_provider(), True) self.assertEqual(uri.names_container(), True) self.assertEqual(uri.names_bucket(), False) self.assertEqual(uri.names_object(), False) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, False) def test_bucket_uri_no_trailing_slash(self): for prov in ('gs', 's3'): uri_str = '%s://bucket' % prov uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual(prov, uri.scheme) self.assertEqual('%s/' % uri_str, uri.uri) self.assertFalse(hasattr(uri, 'versionless_uri')) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(None, uri.generation) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_container(), True) self.assertEqual(uri.names_bucket(), True) self.assertEqual(uri.names_object(), False) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, False) def test_bucket_uri_with_trailing_slash(self): for prov in ('gs', 's3'): uri_str = '%s://bucket/' % prov uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual(prov, uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertFalse(hasattr(uri, 'versionless_uri')) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(None, uri.generation) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_container(), True) self.assertEqual(uri.names_bucket(), True) self.assertEqual(uri.names_object(), False) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, False) def test_non_versioned_object_uri(self): for prov in ('gs', 's3'): uri_str = '%s://bucket/obj/a/b' % prov uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual(prov, uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertEqual(uri_str, uri.versionless_uri) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('obj/a/b', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(None, uri.generation) self.assertEqual(uri.names_provider(), False) 
self.assertEqual(uri.names_container(), False) self.assertEqual(uri.names_bucket(), False) self.assertEqual(uri.names_object(), True) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, False) def test_versioned_gs_object_uri(self): uri_str = 'gs://bucket/obj/a/b#1359908801674000' uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('gs', uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertEqual('gs://bucket/obj/a/b', uri.versionless_uri) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('obj/a/b', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(1359908801674000, uri.generation) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_container(), False) self.assertEqual(uri.names_bucket(), False) self.assertEqual(uri.names_object(), True) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, True) def test_versioned_gs_object_uri_with_legacy_generation_value(self): uri_str = 'gs://bucket/obj/a/b#1' uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('gs', uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertEqual('gs://bucket/obj/a/b', uri.versionless_uri) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('obj/a/b', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(1, uri.generation) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_container(), False) self.assertEqual(uri.names_bucket(), False) self.assertEqual(uri.names_object(), True) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, True) def test_roundtrip_versioned_gs_object_uri_parsed(self): uri_str = 'gs://bucket/obj#1359908801674000' uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) roundtrip_uri = boto.storage_uri(uri.uri, validate=False, suppress_consec_slashes=False) self.assertEqual(uri.uri, roundtrip_uri.uri) self.assertEqual(uri.is_version_specific, True) def test_versioned_s3_object_uri(self): uri_str = 's3://bucket/obj/a/b#eMuM0J15HkJ9QHlktfNP5MfA.oYR2q6S' uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('s3', uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertEqual('s3://bucket/obj/a/b', uri.versionless_uri) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('obj/a/b', uri.object_name) self.assertEqual('eMuM0J15HkJ9QHlktfNP5MfA.oYR2q6S', uri.version_id) self.assertEqual(None, uri.generation) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_container(), False) self.assertEqual(uri.names_bucket(), False) self.assertEqual(uri.names_object(), True) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, True) def test_explicit_file_uri(self): tmp_dir = tempfile.tempdir uri_str = 'file://%s' % urllib.pathname2url(tmp_dir) uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('file', uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertFalse(hasattr(uri, 'versionless_uri')) self.assertEqual('', 
uri.bucket_name) self.assertEqual(tmp_dir, uri.object_name) self.assertFalse(hasattr(uri, 'version_id')) self.assertFalse(hasattr(uri, 'generation')) self.assertFalse(hasattr(uri, 'is_version_specific')) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_bucket(), False) # Don't check uri.names_container(), uri.names_directory(), # uri.names_file(), or uri.names_object(), because for file URIs these # functions look at the file system and apparently unit tests run # chroot'd. self.assertEqual(uri.is_stream(), False) def test_implicit_file_uri(self): tmp_dir = tempfile.tempdir uri_str = '%s' % urllib.pathname2url(tmp_dir) uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('file', uri.scheme) self.assertEqual('file://%s' % tmp_dir, uri.uri) self.assertFalse(hasattr(uri, 'versionless_uri')) self.assertEqual('', uri.bucket_name) self.assertEqual(tmp_dir, uri.object_name) self.assertFalse(hasattr(uri, 'version_id')) self.assertFalse(hasattr(uri, 'generation')) self.assertFalse(hasattr(uri, 'is_version_specific')) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_bucket(), False) # Don't check uri.names_container(), uri.names_directory(), # uri.names_file(), or uri.names_object(), because for file URIs these # functions look at the file system and apparently unit tests run # chroot'd. self.assertEqual(uri.is_stream(), False) def test_gs_object_uri_contains_sharp_not_matching_version_syntax(self): uri_str = 'gs://bucket/obj#13a990880167400' uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('gs', uri.scheme) self.assertEqual(uri_str, uri.uri) self.assertEqual('gs://bucket/obj#13a990880167400', uri.versionless_uri) self.assertEqual('bucket', uri.bucket_name) self.assertEqual('obj#13a990880167400', uri.object_name) self.assertEqual(None, uri.version_id) self.assertEqual(None, uri.generation) self.assertEqual(uri.names_provider(), False) self.assertEqual(uri.names_container(), False) self.assertEqual(uri.names_bucket(), False) self.assertEqual(uri.names_object(), True) self.assertEqual(uri.names_directory(), False) self.assertEqual(uri.names_file(), False) self.assertEqual(uri.is_stream(), False) self.assertEqual(uri.is_version_specific, False) def test_file_containing_colon(self): uri_str = 'abc:def' uri = boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) self.assertEqual('file', uri.scheme) self.assertEqual('file://%s' % uri_str, uri.uri) def test_invalid_scheme(self): uri_str = 'mars://bucket/object' try: boto.storage_uri(uri_str, validate=False, suppress_consec_slashes=False) except InvalidUriError as e: self.assertIn('Unrecognized scheme', e.message) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/s3/test_website.py000066400000000000000000000217631225267101000202650ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest import xml.dom.minidom import xml.sax from boto.s3.website import WebsiteConfiguration from boto.s3.website import RedirectLocation from boto.s3.website import RoutingRules from boto.s3.website import Condition from boto.s3.website import RoutingRules from boto.s3.website import RoutingRule from boto.s3.website import Redirect from boto import handler def pretty_print_xml(text): text = ''.join(t.strip() for t in text.splitlines()) x = xml.dom.minidom.parseString(text) return x.toprettyxml() class TestS3WebsiteConfiguration(unittest.TestCase): maxDiff = None def setUp(self): pass def tearDown(self): pass def test_suffix_only(self): config = WebsiteConfiguration(suffix='index.html') xml = config.to_xml() self.assertIn( 'index.html', xml) def test_suffix_and_error(self): config = WebsiteConfiguration(suffix='index.html', error_key='error.html') xml = config.to_xml() self.assertIn( 'error.html', xml) def test_redirect_all_request_to_with_just_host(self): location = RedirectLocation(hostname='example.com') config = WebsiteConfiguration(redirect_all_requests_to=location) xml = config.to_xml() self.assertIn( ('' 'example.com'), xml) def test_redirect_all_requests_with_protocol(self): location = RedirectLocation(hostname='example.com', protocol='https') config = WebsiteConfiguration(redirect_all_requests_to=location) xml = config.to_xml() self.assertIn( ('' 'example.comhttps' ''), xml) def test_routing_rules_key_prefix(self): x = pretty_print_xml # This rule redirects requests for docs/* to documentation/* rules = RoutingRules() condition = Condition(key_prefix='docs/') redirect = Redirect(replace_key_prefix='documents/') rules.add_rule(RoutingRule(condition, redirect)) config = WebsiteConfiguration(suffix='index.html', routing_rules=rules) xml = config.to_xml() expected_xml = """ index.html docs/ documents/ """ self.assertEqual(x(expected_xml), x(xml)) def test_routing_rules_to_host_on_404(self): x = pretty_print_xml # Another example from the docs: # Redirect requests to a specific host in the event of a 404. # Also, the redirect inserts a report-404/. 
For example, # if you request a page ExamplePage.html and it results # in a 404, the request is routed to a page report-404/ExamplePage.html rules = RoutingRules() condition = Condition(http_error_code=404) redirect = Redirect(hostname='example.com', replace_key_prefix='report-404/') rules.add_rule(RoutingRule(condition, redirect)) config = WebsiteConfiguration(suffix='index.html', routing_rules=rules) xml = config.to_xml() expected_xml = """ index.html 404 example.com report-404/ """ self.assertEqual(x(expected_xml), x(xml)) def test_key_prefix(self): x = pretty_print_xml rules = RoutingRules() condition = Condition(key_prefix="images/") redirect = Redirect(replace_key='folderdeleted.html') rules.add_rule(RoutingRule(condition, redirect)) config = WebsiteConfiguration(suffix='index.html', routing_rules=rules) xml = config.to_xml() expected_xml = """ index.html images/ folderdeleted.html """ self.assertEqual(x(expected_xml), x(xml)) def test_builders(self): x = pretty_print_xml # This is a more declarative way to create rules. # First the long way. rules = RoutingRules() condition = Condition(http_error_code=404) redirect = Redirect(hostname='example.com', replace_key_prefix='report-404/') rules.add_rule(RoutingRule(condition, redirect)) xml = rules.to_xml() # Then the more concise way. rules2 = RoutingRules().add_rule( RoutingRule.when(http_error_code=404).then_redirect( hostname='example.com', replace_key_prefix='report-404/')) xml2 = rules2.to_xml() self.assertEqual(x(xml), x(xml2)) def test_parse_xml(self): x = pretty_print_xml xml_in = """ index.html error.html docs/ https www.example.com documents/ 302 404 example.com report-404/ """ webconfig = WebsiteConfiguration() h = handler.XmlHandler(webconfig, None) xml.sax.parseString(xml_in, h) xml_out = webconfig.to_xml() self.assertEqual(x(xml_in), x(xml_out)) boto-2.20.1/tests/unit/ses/000077500000000000000000000000001225267101000154465ustar00rootroot00000000000000boto-2.20.1/tests/unit/ses/__init__.py000066400000000000000000000000001225267101000175450ustar00rootroot00000000000000boto-2.20.1/tests/unit/ses/test_identity.py000066400000000000000000000062141225267101000207130ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
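As test_builders above demonstrates, the declarative builder form is equivalent to assembling Condition and Redirect objects by hand; a minimal sketch using only calls from those tests:

from boto.s3.website import RoutingRules, RoutingRule, WebsiteConfiguration

rules = RoutingRules().add_rule(
    RoutingRule.when(http_error_code=404).then_redirect(
        hostname='example.com', replace_key_prefix='report-404/'))
config = WebsiteConfiguration(suffix='index.html', routing_rules=rules)
print config.to_xml()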
# from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.jsonresponse import ListElement from boto.ses.connection import SESConnection class TestSESIdentity(AWSMockServiceTestCase): connection_class = SESConnection def setUp(self): super(TestSESIdentity, self).setUp() def default_body(self): return """ amazon.com true Success vvjuipp74whm76gqoni7qmwwn4w4qusjiainivf6f 3frqe7jn4obpuxjpwpolz6ipb3k5nvt2nhjpik2oy wrqplteh7oodxnad7hsl4mixg2uavzneazxv5sxi2 bb5a105d-c468-11e1-82eb-dff885ccc06a """ def test_ses_get_identity_dkim_list(self): self.set_http_response(status_code=200) response = self.service_connection\ .get_identity_dkim_attributes(['test@amazon.com']) response = response['GetIdentityDkimAttributesResponse'] result = response['GetIdentityDkimAttributesResult'] attributes = result['DkimAttributes']['entry']['value'] tokens = attributes['DkimTokens'] self.assertEqual(ListElement, type(tokens)) self.assertEqual(3, len(tokens)) self.assertEqual('vvjuipp74whm76gqoni7qmwwn4w4qusjiainivf6f', tokens[0]) self.assertEqual('3frqe7jn4obpuxjpwpolz6ipb3k5nvt2nhjpik2oy', tokens[1]) self.assertEqual('wrqplteh7oodxnad7hsl4mixg2uavzneazxv5sxi2', tokens[2]) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/sns/000077500000000000000000000000001225267101000154575ustar00rootroot00000000000000boto-2.20.1/tests/unit/sns/__init__.py000066400000000000000000000000001225267101000175560ustar00rootroot00000000000000boto-2.20.1/tests/unit/sns/test_connection.py000066400000000000000000000217521225267101000212360ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
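The SES test above also documents the nested shape of the parsed response; traversing it looks like the following sketch, where conn is a hypothetical, already-constructed SESConnection (not shown here):

# 'conn' is assumed to be an existing SESConnection instance.
response = conn.get_identity_dkim_attributes(['test@amazon.com'])
result = (response['GetIdentityDkimAttributesResponse']
                  ['GetIdentityDkimAttributesResult'])
tokens = result['DkimAttributes']['entry']['value']['DkimTokens']
for token in tokens:
    print token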
# import json from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from mock import Mock from boto.sns.connection import SNSConnection QUEUE_POLICY = { u'Policy': (u'{"Version":"2008-10-17","Id":"arn:aws:sqs:us-east-1:' 'idnum:testqueuepolicy/SQSDefaultPolicy","Statement":' '[{"Sid":"sidnum","Effect":"Allow","Principal":{"AWS":"*"},' '"Action":"SQS:GetQueueUrl","Resource":' '"arn:aws:sqs:us-east-1:idnum:testqueuepolicy"}]}')} class TestSNSConnection(AWSMockServiceTestCase): connection_class = SNSConnection def setUp(self): super(TestSNSConnection, self).setUp() def default_body(self): return "{}" def test_sqs_with_existing_policy(self): self.set_http_response(status_code=200) queue = Mock() queue.get_attributes.return_value = QUEUE_POLICY queue.arn = 'arn:aws:sqs:us-east-1:idnum:queuename' self.service_connection.subscribe_sqs_queue('topic_arn', queue) self.assert_request_parameters({ 'Action': 'Subscribe', 'ContentType': 'JSON', 'Endpoint': 'arn:aws:sqs:us-east-1:idnum:queuename', 'Protocol': 'sqs', 'TopicArn': 'topic_arn', 'Version': '2010-03-31', }, ignore_params_values=[]) # Verify that the queue policy was properly updated. actual_policy = json.loads(queue.set_attribute.call_args[0][1]) self.assertEqual(actual_policy['Version'], '2008-10-17') # A new statement should be appended to the end of the statement list. self.assertEqual(len(actual_policy['Statement']), 2) self.assertEqual(actual_policy['Statement'][1]['Action'], 'SQS:SendMessage') def test_sqs_with_no_previous_policy(self): self.set_http_response(status_code=200) queue = Mock() queue.get_attributes.return_value = {} queue.arn = 'arn:aws:sqs:us-east-1:idnum:queuename' self.service_connection.subscribe_sqs_queue('topic_arn', queue) self.assert_request_parameters({ 'Action': 'Subscribe', 'ContentType': 'JSON', 'Endpoint': 'arn:aws:sqs:us-east-1:idnum:queuename', 'Protocol': 'sqs', 'TopicArn': 'topic_arn', 'Version': '2010-03-31', }, ignore_params_values=[]) actual_policy = json.loads(queue.set_attribute.call_args[0][1]) # Only a single statement should be part of the policy. 
        self.assertEqual(len(actual_policy['Statement']), 1)

    def test_publish_with_positional_args(self):
        self.set_http_response(status_code=200)

        self.service_connection.publish('topic', 'message', 'subject')
        self.assert_request_parameters({
            'Action': 'Publish',
            'TopicArn': 'topic',
            'Subject': 'subject',
            'Message': 'message',
        }, ignore_params_values=['Version', 'ContentType'])

    def test_publish_with_kwargs(self):
        self.set_http_response(status_code=200)

        self.service_connection.publish(topic='topic', message='message',
                                        subject='subject')
        self.assert_request_parameters({
            'Action': 'Publish',
            'TopicArn': 'topic',
            'Subject': 'subject',
            'Message': 'message',
        }, ignore_params_values=['Version', 'ContentType'])

    def test_publish_with_target_arn(self):
        self.set_http_response(status_code=200)

        self.service_connection.publish(target_arn='target_arn',
                                        message='message',
                                        subject='subject')
        self.assert_request_parameters({
            'Action': 'Publish',
            'TargetArn': 'target_arn',
            'Subject': 'subject',
            'Message': 'message',
        }, ignore_params_values=['Version', 'ContentType'])

    def test_create_platform_application(self):
        self.set_http_response(status_code=200)

        self.service_connection.create_platform_application(
            name='MyApp',
            platform='APNS',
            attributes={
                'PlatformPrincipal': 'a ssl certificate',
                'PlatformCredential': 'a private key'
            }
        )
        self.assert_request_parameters({
            'Action': 'CreatePlatformApplication',
            'Name': 'MyApp',
            'Platform': 'APNS',
            'Attributes.entry.1.key': 'PlatformCredential',
            'Attributes.entry.1.value': 'a private key',
            'Attributes.entry.2.key': 'PlatformPrincipal',
            'Attributes.entry.2.value': 'a ssl certificate',
        }, ignore_params_values=['Version', 'ContentType'])

    def test_set_platform_application_attributes(self):
        self.set_http_response(status_code=200)

        self.service_connection.set_platform_application_attributes(
            platform_application_arn='arn:myapp',
            attributes={'PlatformPrincipal': 'a ssl certificate',
                        'PlatformCredential': 'a private key'})
        self.assert_request_parameters({
            'Action': 'SetPlatformApplicationAttributes',
            'PlatformApplicationArn': 'arn:myapp',
            'Attributes.entry.1.key': 'PlatformCredential',
            'Attributes.entry.1.value': 'a private key',
            'Attributes.entry.2.key': 'PlatformPrincipal',
            'Attributes.entry.2.value': 'a ssl certificate',
        }, ignore_params_values=['Version', 'ContentType'])

    def test_create_platform_endpoint(self):
        self.set_http_response(status_code=200)

        self.service_connection.create_platform_endpoint(
            platform_application_arn='arn:myapp',
            token='abcde12345',
            custom_user_data='john',
            attributes={'Enabled': False})
        self.assert_request_parameters({
            'Action': 'CreatePlatformEndpoint',
            'PlatformApplicationArn': 'arn:myapp',
            'Token': 'abcde12345',
            'CustomUserData': 'john',
            'Attributes.entry.1.key': 'Enabled',
            'Attributes.entry.1.value': False,
        }, ignore_params_values=['Version', 'ContentType'])

    def test_set_endpoint_attributes(self):
        self.set_http_response(status_code=200)

        self.service_connection.set_endpoint_attributes(
            endpoint_arn='arn:myendpoint',
            attributes={'CustomUserData': 'john', 'Enabled': False})
        self.assert_request_parameters({
            'Action': 'SetEndpointAttributes',
            'EndpointArn': 'arn:myendpoint',
            'Attributes.entry.1.key': 'CustomUserData',
            'Attributes.entry.1.value': 'john',
            'Attributes.entry.2.key': 'Enabled',
            'Attributes.entry.2.value': False,
        }, ignore_params_values=['Version', 'ContentType'])

    def test_message_is_required(self):
        self.set_http_response(status_code=200)

        with self.assertRaises(TypeError):
            self.service_connection.publish(topic='topic', subject='subject')

    def test_publish_with_json(self):
        self.set_http_response(status_code=200)

        self.service_connection.publish(
            message=json.dumps({
                'default': 'Ignored.',
                'GCM': {
                    'data': 'goes here',
                }
            }),
            message_structure='json',
            subject='subject',
            target_arn='target_arn'
        )
        self.assert_request_parameters({
            'Action': 'Publish',
            'TargetArn': 'target_arn',
            'Subject': 'subject',
            'Message': '{"default": "Ignored.", "GCM": {"data": "goes here"}}',
            'MessageStructure': 'json',
        }, ignore_params_values=['Version', 'ContentType'])


if __name__ == '__main__':
    unittest.main()
boto-2.20.1/tests/unit/sqs/000077500000000000000000000000001225267101000154625ustar00rootroot00000000000000boto-2.20.1/tests/unit/sqs/__init__.py000066400000000000000000000000001225267101000175610ustar00rootroot00000000000000boto-2.20.1/tests/unit/sqs/test_connection.py000066400000000000000000000111021225267101000212270ustar00rootroot00000000000000#!/usr/bin/env python
# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.sqs.connection import SQSConnection
from boto.sqs.regioninfo import SQSRegionInfo


class SQSAuthParams(AWSMockServiceTestCase):
    connection_class = SQSConnection

    def setUp(self):
        super(SQSAuthParams, self).setUp()

    def default_body(self):
        return """<CreateQueueResponse>
            <CreateQueueResult>
                <QueueUrl>https://queue.amazonaws.com/599169622985/myqueue1</QueueUrl>
            </CreateQueueResult>
            <ResponseMetadata>
                <RequestId>54d4c94d-2307-54a8-bb27-806a682a5abd</RequestId>
            </ResponseMetadata>
        </CreateQueueResponse>"""

    def test_auth_service_name_override(self):
        self.set_http_response(status_code=200)
        # We can use the auth_service_name to change what service
        # name to use for the credential scope for sigv4.
        self.service_connection.auth_service_name = 'service_override'

        self.service_connection.create_queue('my_queue')

        # Note the service_override value instead.
self.assertIn('us-east-1/service_override/aws4_request', self.actual_request.headers['Authorization']) def test_class_attribute_can_set_service_name(self): self.set_http_response(status_code=200) # The SQS class has an 'AuthServiceName' param of 'sqs': self.assertEqual(self.service_connection.AuthServiceName, 'sqs') self.service_connection.create_queue('my_queue') # And because of this, the value of 'sqs' will be used instead of # 'queue' for the credential scope: self.assertIn('us-east-1/sqs/aws4_request', self.actual_request.headers['Authorization']) def test_auth_region_name_is_automatically_updated(self): region = SQSRegionInfo(name='us-west-2', endpoint='us-west-2.queue.amazonaws.com') self.service_connection = SQSConnection( https_connection_factory=self.https_connection_factory, aws_access_key_id='aws_access_key_id', aws_secret_access_key='aws_secret_access_key', region=region) self.initialize_service_connection() self.set_http_response(status_code=200) self.service_connection.create_queue('my_queue') # Note the region name below is 'us-west-2'. self.assertIn('us-west-2/sqs/aws4_request', self.actual_request.headers['Authorization']) def test_set_get_auth_service_and_region_names(self): self.service_connection.auth_service_name = 'service_name' self.service_connection.auth_region_name = 'region_name' self.assertEqual(self.service_connection.auth_service_name, 'service_name') self.assertEqual(self.service_connection.auth_region_name, 'region_name') def test_get_queue_with_owner_account_id_returns_queue(self): self.set_http_response(status_code=200) self.service_connection.create_queue('my_queue') self.service_connection.get_queue('my_queue', '599169622985') assert 'QueueOwnerAWSAccountId' in self.actual_request.params.keys() self.assertEquals(self.actual_request.params['QueueOwnerAWSAccountId'], '599169622985') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/sqs/test_message.py000066400000000000000000000051161225267101000205220ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
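As test_auth_region_name_is_automatically_updated above shows, SQSRegionInfo carries the region name into the SigV4 credential scope. Constructing a connection that way looks like the following sketch (the credential strings are placeholders):

from boto.sqs.connection import SQSConnection
from boto.sqs.regioninfo import SQSRegionInfo

region = SQSRegionInfo(name='us-west-2',
                       endpoint='us-west-2.queue.amazonaws.com')
conn = SQSConnection(aws_access_key_id='aws_access_key_id',
                     aws_secret_access_key='aws_secret_access_key',
                     region=region)
# Requests signed by conn now use 'us-west-2/sqs/aws4_request' in the
# Authorization header's credential scope, per the test above.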
# from tests.unit import unittest from boto.sqs.message import MHMessage from boto.sqs.message import RawMessage from boto.exception import SQSDecodeError class TestMHMessage(unittest.TestCase): def test_contains(self): msg = MHMessage() msg.update({'hello': 'world'}) self.assertTrue('hello' in msg) class DecodeExceptionRaisingMessage(RawMessage): def decode(self, message): raise SQSDecodeError('Sample decode error', self) class TestEncodeMessage(unittest.TestCase): def test_message_id_available(self): import xml.sax from boto.resultset import ResultSet from boto.handler import XmlHandler sample_value = 'abcdef' body = """ %s %s %s """ % tuple([sample_value] * 3) rs = ResultSet([('Message', DecodeExceptionRaisingMessage)]) h = XmlHandler(rs, None) with self.assertRaises(SQSDecodeError) as context: xml.sax.parseString(body, h) message = context.exception.message self.assertEquals(message.id, sample_value) self.assertEquals(message.receipt_handle, sample_value) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/sqs/test_queue.py000066400000000000000000000030721225267101000202210ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest from mock import Mock from boto.sqs.queue import Queue class TestQueue(unittest.TestCase): def test_queue_arn(self): connection = Mock() connection.region.name = 'us-east-1' q = Queue( connection=connection, url='https://sqs.us-east-1.amazonaws.com/id/queuename') self.assertEqual(q.arn, 'arn:aws:sqs:us-east-1:id:queuename') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/sts/000077500000000000000000000000001225267101000154655ustar00rootroot00000000000000boto-2.20.1/tests/unit/sts/__init__.py000066400000000000000000000000001225267101000175640ustar00rootroot00000000000000boto-2.20.1/tests/unit/sts/test_connection.py000066400000000000000000000175071225267101000212470ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from tests.unit import unittest from boto.sts.connection import STSConnection from tests.unit import AWSMockServiceTestCase class TestSTSConnection(AWSMockServiceTestCase): connection_class = STSConnection def setUp(self): super(TestSTSConnection, self).setUp() def default_body(self): return """<AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/"> <AssumeRoleResult> <AssumedRoleUser> <Arn>arn:role</Arn> <AssumedRoleId>roleid:myrolesession</AssumedRoleId> </AssumedRoleUser> <Credentials> <SessionToken>session_token</SessionToken> <SecretAccessKey>secretkey</SecretAccessKey> <Expiration>2012-10-18T10:18:14.789Z</Expiration> <AccessKeyId>accesskey</AccessKeyId> </Credentials> </AssumeRoleResult> <ResponseMetadata> <RequestId>8b7418cb-18a8-11e2-a706-4bd22ca68ab7</RequestId> </ResponseMetadata> </AssumeRoleResponse>""" def test_assume_role(self): self.set_http_response(status_code=200) response = self.service_connection.assume_role('arn:role', 'mysession') self.assert_request_parameters( {'Action': 'AssumeRole', 'RoleArn': 'arn:role', 'RoleSessionName': 'mysession'}, ignore_params_values=['Timestamp', 'AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Version']) self.assertEqual(response.credentials.access_key, 'accesskey') self.assertEqual(response.credentials.secret_key, 'secretkey') self.assertEqual(response.credentials.session_token, 'session_token') self.assertEqual(response.user.arn, 'arn:role') self.assertEqual(response.user.assume_role_id, 'roleid:myrolesession') class TestSTSWebIdentityConnection(AWSMockServiceTestCase): connection_class = STSConnection def setUp(self): super(TestSTSWebIdentityConnection, self).setUp() def default_body(self): return """<AssumeRoleWithWebIdentityResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/"> <AssumeRoleWithWebIdentityResult> <SubjectFromWebIdentityToken>amzn1.account.AF6RHO7KZU5XRVQJGXK6HB56KR2A</SubjectFromWebIdentityToken> <AssumedRoleUser> <Arn>arn:aws:sts::000240903217:assumed-role/FederatedWebIdentityRole/app1</Arn> <AssumedRoleId>AROACLKWSDQRAOFQC3IDI:app1</AssumedRoleId> </AssumedRoleUser> <Credentials> <SessionToken>AQoDYXdzEE0a8ANXXXXXXXXNO1ewxE5TijQyp+IPfnyowF</SessionToken> <SecretAccessKey>secretkey</SecretAccessKey> <Expiration>2013-05-14T23:00:23Z</Expiration> <AccessKeyId>accesskey</AccessKeyId> </Credentials> </AssumeRoleWithWebIdentityResult> <ResponseMetadata> <RequestId>ad4156e9-bce1-11e2-82e6-6b6ef249e618</RequestId> </ResponseMetadata> </AssumeRoleWithWebIdentityResponse>""" def test_assume_role_with_web_identity(self): arn = 'arn:aws:iam::000240903217:role/FederatedWebIdentityRole' wit = 'b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9' self.set_http_response(status_code=200) response = self.service_connection.assume_role_with_web_identity( role_arn=arn, role_session_name='guestuser', web_identity_token=wit, provider_id='www.amazon.com', ) self.assert_request_parameters({ 'RoleSessionName': 'guestuser', 'AWSAccessKeyId': 'aws_access_key_id', 'RoleArn': arn, 'WebIdentityToken': wit, 'ProviderId': 'www.amazon.com', 'Action': 'AssumeRoleWithWebIdentity' }, ignore_params_values=[ 'SignatureMethod', 'Timestamp', 'SignatureVersion', 'Version', ]) self.assertEqual( response.credentials.access_key.strip(), 'accesskey' ) self.assertEqual( response.credentials.secret_key.strip(), 'secretkey' ) self.assertEqual( response.credentials.session_token.strip(),
'AQoDYXdzEE0a8ANXXXXXXXXNO1ewxE5TijQyp+IPfnyowF' ) self.assertEqual( response.user.arn.strip(), 'arn:aws:sts::000240903217:assumed-role/FederatedWebIdentityRole/app1' ) self.assertEqual( response.user.assume_role_id.strip(), 'AROACLKWSDQRAOFQC3IDI:app1' ) class TestSTSSAMLConnection(AWSMockServiceTestCase): connection_class = STSConnection def setUp(self): super(TestSTSSAMLConnection, self).setUp() def default_body(self): return """<AssumeRoleWithSAMLResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/"> <AssumeRoleWithSAMLResult> <Credentials> <SessionToken>session_token</SessionToken> <SecretAccessKey>secretkey</SecretAccessKey> <Expiration>2011-07-15T23:28:33.359Z</Expiration> <AccessKeyId>accesskey</AccessKeyId> </Credentials> <AssumedRoleUser> <Arn>arn:role</Arn> <AssumedRoleId>roleid:myrolesession</AssumedRoleId> </AssumedRoleUser> <PackedPolicySize>6</PackedPolicySize> </AssumeRoleWithSAMLResult> <ResponseMetadata> <RequestId>c6104cbe-af31-11e0-8154-cbc7ccf896c7</RequestId> </ResponseMetadata> </AssumeRoleWithSAMLResponse>""" def test_assume_role_with_saml(self): arn = 'arn:aws:iam::000240903217:role/Test' principal = 'arn:aws:iam::000240903217:role/Principal' assertion = 'test' self.set_http_response(status_code=200) response = self.service_connection.assume_role_with_saml( role_arn=arn, principal_arn=principal, saml_assertion=assertion ) self.assert_request_parameters({ 'RoleArn': arn, 'PrincipalArn': principal, 'SAMLAssertion': assertion, 'Action': 'AssumeRoleWithSAML' }, ignore_params_values=[ 'AWSAccessKeyId', 'SignatureMethod', 'Timestamp', 'SignatureVersion', 'Version', ]) self.assertEqual(response.credentials.access_key, 'accesskey') self.assertEqual(response.credentials.secret_key, 'secretkey') self.assertEqual(response.credentials.session_token, 'session_token') self.assertEqual(response.user.arn, 'arn:role') self.assertEqual(response.user.assume_role_id, 'roleid:myrolesession') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/sts/test_credentials.py000066400000000000000000000020711225267101000213730ustar00rootroot00000000000000import unittest from boto.sts.credentials import Credentials class STSCredentialsTest(unittest.TestCase): sts = True def setUp(self): super(STSCredentialsTest, self).setUp() self.creds = Credentials() def test_to_dict(self): # This would fail miserably if ``Credentials.request_id`` hadn't been # explicitly set (no default). # Default. self.assertEqual(self.creds.to_dict(), { 'access_key': None, 'expiration': None, 'request_id': None, 'secret_key': None, 'session_token': None }) # Override.
creds = Credentials() creds.access_key = 'something' creds.secret_key = 'crypto' creds.session_token = 'this' creds.expiration = 'way' creds.request_id = 'comes' self.assertEqual(creds.to_dict(), { 'access_key': 'something', 'expiration': 'way', 'request_id': 'comes', 'secret_key': 'crypto', 'session_token': 'this' }) boto-2.20.1/tests/unit/swf/000077500000000000000000000000001225267101000154535ustar00rootroot00000000000000boto-2.20.1/tests/unit/swf/__init__.py000066400000000000000000000000001225267101000175520ustar00rootroot00000000000000boto-2.20.1/tests/unit/swf/test_layer2_actors.py000066400000000000000000000077111225267101000216430ustar00rootroot00000000000000import boto.swf.layer2 from boto.swf.layer2 import Decider, ActivityWorker from tests.unit import unittest from mock import Mock class TestActors(unittest.TestCase): def setUp(self): boto.swf.layer2.Layer1 = Mock() self.worker = ActivityWorker(name='test-worker', domain='test', task_list='test_list') self.decider = Decider(name='test-worker', domain='test', task_list='test_list') self.worker._swf = Mock() self.decider._swf = Mock() def test_decider_pass_tasktoken(self): self.decider._swf.poll_for_decision_task.return_value = { 'events': [{'eventId': 1, 'eventTimestamp': 1379019427.953, 'eventType': 'WorkflowExecutionStarted', 'workflowExecutionStartedEventAttributes': { 'childPolicy': 'TERMINATE', 'executionStartToCloseTimeout': '3600', 'parentInitiatedEventId': 0, 'taskList': {'name': 'test_list'}, 'taskStartToCloseTimeout': '123', 'workflowType': {'name': 'test_workflow_name', 'version': 'v1'}}}, {'decisionTaskScheduledEventAttributes': {'startToCloseTimeout': '123', 'taskList': {'name': 'test_list'}}, 'eventId': 2, 'eventTimestamp': 1379019427.953, 'eventType': 'DecisionTaskScheduled'}, {'decisionTaskStartedEventAttributes': {'scheduledEventId': 2}, 'eventId': 3, 'eventTimestamp': 1379019495.585, 'eventType': 'DecisionTaskStarted'}], 'previousStartedEventId': 0, 'startedEventId': 3, 'taskToken': 'my_specific_task_token', 'workflowExecution': {'runId': 'fwr243dsa324132jmflkfu0943tr09=', 'workflowId': 'test_workflow_name-v1-1379019427'}, 'workflowType': {'name': 'test_workflow_name', 'version': 'v1'}} self.decider.poll() self.decider.complete() self.decider._swf.respond_decision_task_completed.assert_called_with('my_specific_task_token', None) self.assertEqual('my_specific_task_token', self.decider.last_tasktoken) def test_worker_pass_tasktoken(self): task_token = 'worker_task_token' self.worker._swf.poll_for_activity_task.return_value = { 'activityId': 'SomeActivity-1379020713', 'activityType': {'name': 'SomeActivity', 'version': '1.0'}, 'startedEventId': 6, 'taskToken': task_token, 'workflowExecution': {'runId': '12T026NzGK5c4eMti06N9O3GHFuTDaNyA+8LFtoDkAwfE=', 'workflowId': 'MyWorkflow-1.0-1379020705'}} self.worker.poll() self.worker.cancel(details='Cancelling!') self.worker.complete(result='Done!') self.worker.fail(reason='Failure!') self.worker.heartbeat() self.worker._swf.respond_activity_task_canceled.assert_called_with(task_token, 'Cancelling!') self.worker._swf.respond_activity_task_completed.assert_called_with(task_token, 'Done!') self.worker._swf.respond_activity_task_failed.assert_called_with(task_token, None, 'Failure!') self.worker._swf.record_activity_task_heartbeat.assert_called_with(task_token, None) def test_actor_poll_without_tasklist_override(self): self.worker.poll() self.decider.poll() self.worker._swf.poll_for_activity_task.assert_called_with('test', 'test_list') 
self.decider._swf.poll_for_decision_task.assert_called_with('test', 'test_list') def test_worker_override_tasklist(self): self.worker.poll(task_list='some_other_tasklist') self.worker._swf.poll_for_activity_task.assert_called_with('test', 'some_other_tasklist') def test_decider_override_tasklist(self): self.decider.poll(task_list='some_other_tasklist') self.decider._swf.poll_for_decision_task.assert_called_with('test', 'some_other_tasklist') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/swf/test_layer2_domain.py000066400000000000000000000131661225267101000216200ustar00rootroot00000000000000import boto.swf.layer2 from boto.swf.layer2 import Domain, ActivityType, WorkflowType, WorkflowExecution from tests.unit import unittest from mock import Mock class TestDomain(unittest.TestCase): def setUp(self): boto.swf.layer2.Layer1 = Mock() self.domain = Domain(name='test-domain', description='My test domain') self.domain.aws_access_key_id = 'inheritable access key' self.domain.aws_secret_access_key = 'inheritable secret key' def test_domain_instantiation(self): self.assertEquals('test-domain', self.domain.name) self.assertEquals('My test domain', self.domain.description) def test_domain_list_activities(self): self.domain._swf.list_activity_types.return_value = { 'typeInfos': [{'activityType': {'name': 'DeleteLocalFile', 'version': '1.0'}, 'creationDate': 1332853651.235, 'status': 'REGISTERED'}, {'activityType': {'name': 'DoUpdate', 'version': 'test'}, 'creationDate': 1333463734.528, 'status': 'REGISTERED'}, {'activityType': {'name': 'GrayscaleTransform', 'version': '1.0'}, 'creationDate': 1332853651.18, 'status': 'REGISTERED'}, {'activityType': {'name': 'S3Download', 'version': '1.0'}, 'creationDate': 1332853651.264, 'status': 'REGISTERED'}, {'activityType': {'name': 'S3Upload', 'version': '1.0'}, 'creationDate': 1332853651.314, 'status': 'REGISTERED'}, {'activityType': {'name': 'SepiaTransform', 'version': '1.1'}, 'creationDate': 1333373797.734, 'status': 'REGISTERED'}]} expected_names = ('DeleteLocalFile', 'GrayscaleTransform', 'S3Download', 'S3Upload', 'SepiaTransform', 'DoUpdate') activity_types = self.domain.activities() self.assertEquals(6, len(activity_types)) for activity_type in activity_types: self.assertIsInstance(activity_type, ActivityType) self.assertTrue(activity_type.name in expected_names) def test_domain_list_workflows(self): self.domain._swf.list_workflow_types.return_value = { 'typeInfos': [{'creationDate': 1332853651.136, 'description': 'Image processing sample workflow type', 'status': 'REGISTERED', 'workflowType': {'name': 'ProcessFile', 'version': '1.0'}}, {'creationDate': 1333551719.89, 'status': 'REGISTERED', 'workflowType': {'name': 'test_workflow_name', 'version': 'v1'}}]} expected_names = ('ProcessFile', 'test_workflow_name') workflow_types = self.domain.workflows() self.assertEquals(2, len(workflow_types)) for workflow_type in workflow_types: self.assertIsInstance(workflow_type, WorkflowType) self.assertTrue(workflow_type.name in expected_names) self.assertEquals(self.domain.aws_access_key_id, workflow_type.aws_access_key_id) self.assertEquals(self.domain.aws_secret_access_key, workflow_type.aws_secret_access_key) self.assertEquals(self.domain.name, workflow_type.domain) def test_domain_list_executions(self): self.domain._swf.list_open_workflow_executions.return_value = { 'executionInfos': [{'cancelRequested': False, 'execution': {'runId': '12OeDTyoD27TDaafViz/QIlCHrYzspZmDgj0coIfjm868=', 'workflowId': 'ProcessFile-1.0-1378933928'}, 
'executionStatus': 'OPEN', 'startTimestamp': 1378933928.676, 'workflowType': {'name': 'ProcessFile', 'version': '1.0'}}, {'cancelRequested': False, 'execution': {'runId': '12GwBkx4hH6t2yaIh8LYxy5HyCM6HcyhDKePJCg0/ciJk=', 'workflowId': 'ProcessFile-1.0-1378933927'}, 'executionStatus': 'OPEN', 'startTimestamp': 1378933927.919, 'workflowType': {'name': 'ProcessFile', 'version': '1.0'}}, {'cancelRequested': False, 'execution': {'runId': '12oRG3vEWrQ7oYBV+Bqi33Fht+ZRCYTt+tOdn5kLVcwKI=', 'workflowId': 'ProcessFile-1.0-1378933926'}, 'executionStatus': 'OPEN', 'startTimestamp': 1378933927.04, 'workflowType': {'name': 'ProcessFile', 'version': '1.0'}}, {'cancelRequested': False, 'execution': {'runId': '12qrdcpYmad2cjnqJcM4Njm3qrCGvmRFR1wwQEt+a2ako=', 'workflowId': 'ProcessFile-1.0-1378933874'}, 'executionStatus': 'OPEN', 'startTimestamp': 1378933874.956, 'workflowType': {'name': 'ProcessFile', 'version': '1.0'}}]} executions = self.domain.executions() self.assertEquals(4, len(executions)) for wf_execution in executions: self.assertIsInstance(wf_execution, WorkflowExecution) self.assertEquals(self.domain.aws_access_key_id, wf_execution.aws_access_key_id) self.assertEquals(self.domain.aws_secret_access_key, wf_execution.aws_secret_access_key) self.assertEquals(self.domain.name, wf_execution.domain) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/swf/test_layer2_types.py000066400000000000000000000033131225267101000215060ustar00rootroot00000000000000import boto.swf.layer2 from boto.swf.layer2 import ActivityType, WorkflowType, WorkflowExecution from tests.unit import unittest from mock import Mock, ANY class TestTypes(unittest.TestCase): def setUp(self): boto.swf.layer2.Layer1 = Mock() def test_workflow_type_register_defaults(self): wf_type = WorkflowType(name='name', domain='test', version='1') wf_type.register() wf_type._swf.register_workflow_type.assert_called_with('test', 'name', '1', default_execution_start_to_close_timeout=ANY, default_task_start_to_close_timeout=ANY, default_child_policy=ANY ) def test_activity_type_register_defaults(self): act_type = ActivityType(name='name', domain='test', version='1') act_type.register() act_type._swf.register_activity_type.assert_called_with('test', 'name', '1', default_task_heartbeat_timeout=ANY, default_task_schedule_to_close_timeout=ANY, default_task_schedule_to_start_timeout=ANY, default_task_start_to_close_timeout=ANY ) def test_workflow_type_start_execution(self): wf_type = WorkflowType(name='name', domain='test', version='1') run_id = '122aJcg6ic7MRAkjDRzLBsqU/R49qt5D0LPHycT/6ArN4=' wf_type._swf.start_workflow_execution.return_value = {'runId': run_id} execution = wf_type.start(task_list='hello_world') self.assertIsInstance(execution, WorkflowExecution) self.assertEquals(wf_type.name, execution.name) self.assertEquals(wf_type.version, execution.version) self.assertEquals(run_id, execution.runId) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/test_connection.py000066400000000000000000000447571225267101000204450ustar00rootroot00000000000000# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. 
All Rights Reserved # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # from __future__ import with_statement import os import urlparse from tests.unit import unittest from httpretty import HTTPretty from boto.connection import AWSQueryConnection, AWSAuthConnection from boto.exception import BotoServerError from boto.regioninfo import RegionInfo from boto.compat import json class TestListParamsSerialization(unittest.TestCase): maxDiff = None def setUp(self): self.connection = AWSQueryConnection('access_key', 'secret_key') def test_complex_list_serialization(self): # This example is taken from the doc string of # build_complex_list_params. params = {} self.connection.build_complex_list_params( params, [('foo', 'bar', 'baz'), ('foo2', 'bar2', 'baz2')], 'ParamName.member', ('One', 'Two', 'Three')) self.assertDictEqual({ 'ParamName.member.1.One': 'foo', 'ParamName.member.1.Two': 'bar', 'ParamName.member.1.Three': 'baz', 'ParamName.member.2.One': 'foo2', 'ParamName.member.2.Two': 'bar2', 'ParamName.member.2.Three': 'baz2', }, params) def test_simple_list_serialization(self): params = {} self.connection.build_list_params( params, ['foo', 'bar', 'baz'], 'ParamName.member') self.assertDictEqual({ 'ParamName.member.1': 'foo', 'ParamName.member.2': 'bar', 'ParamName.member.3': 'baz', }, params) class MockAWSService(AWSQueryConnection): """ Fake AWS Service This is used to test the AWSQueryConnection object is behaving properly. """ APIVersion = '2012-01-01' def _required_auth_capability(self): return ['sign-v2'] def __init__(self, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, host=None, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', api_version=None, security_token=None, validate_certs=True): self.region = region if host is None: host = self.region.endpoint AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key, is_secure, port, proxy, proxy_port, proxy_user, proxy_pass, host, debug, https_connection_factory, path, security_token, validate_certs=validate_certs) class TestAWSAuthConnection(unittest.TestCase): def test_get_path(self): conn = AWSAuthConnection( 'mockservice.cc-zone-1.amazonaws.com', aws_access_key_id='access_key', aws_secret_access_key='secret', suppress_consec_slashes=False ) # Test some sample paths for mangling. 
self.assertEqual(conn.get_path('/'), '/') self.assertEqual(conn.get_path('image.jpg'), '/image.jpg') self.assertEqual(conn.get_path('folder/image.jpg'), '/folder/image.jpg') self.assertEqual(conn.get_path('folder//image.jpg'), '/folder//image.jpg') # Ensure leading slashes aren't removed. # See https://github.com/boto/boto/issues/1387 self.assertEqual(conn.get_path('/folder//image.jpg'), '/folder//image.jpg') self.assertEqual(conn.get_path('/folder////image.jpg'), '/folder////image.jpg') self.assertEqual(conn.get_path('///folder////image.jpg'), '///folder////image.jpg') def test_connection_behind_proxy(self): os.environ['http_proxy'] = "http://john.doe:p4ssw0rd@127.0.0.1:8180" conn = AWSAuthConnection( 'mockservice.cc-zone-1.amazonaws.com', aws_access_key_id='access_key', aws_secret_access_key='secret', suppress_consec_slashes=False ) self.assertEqual(conn.proxy, '127.0.0.1') self.assertEqual(conn.proxy_user, 'john.doe') self.assertEqual(conn.proxy_pass, 'p4ssw0rd') self.assertEqual(conn.proxy_port, '8180') del os.environ['http_proxy'] def test_connection_behind_proxy_without_explicit_port(self): os.environ['http_proxy'] = "http://127.0.0.1" conn = AWSAuthConnection( 'mockservice.cc-zone-1.amazonaws.com', aws_access_key_id='access_key', aws_secret_access_key='secret', suppress_consec_slashes=False, port=8180 ) self.assertEqual(conn.proxy, '127.0.0.1') self.assertEqual(conn.proxy_port, 8180) del os.environ['http_proxy'] # this tests the proper setting of the host_header in v4 signing def test_host_header_with_nonstandard_port(self): # test standard port first conn = V4AuthConnection( 'testhost', aws_access_key_id='access_key', aws_secret_access_key='secret') request = conn.build_base_http_request(method='POST', path='/', auth_path=None, params=None, headers=None, data='', host=None) conn.set_host_header(request) self.assertEqual(request.headers['Host'], 'testhost') # next, test non-standard port conn = V4AuthConnection( 'testhost', aws_access_key_id='access_key', aws_secret_access_key='secret', port=8773) request = conn.build_base_http_request(method='POST', path='/', auth_path=None, params=None, headers=None, data='', host=None) conn.set_host_header(request) self.assertEqual(request.headers['Host'], 'testhost:8773') class V4AuthConnection(AWSAuthConnection): def __init__(self, host, aws_access_key_id, aws_secret_access_key, port=443): AWSAuthConnection.__init__(self, host, aws_access_key_id, aws_secret_access_key, port=port) def _required_auth_capability(self): return ['hmac-v4'] class TestAWSQueryConnection(unittest.TestCase): def setUp(self): self.region = RegionInfo(name='cc-zone-1', endpoint='mockservice.cc-zone-1.amazonaws.com', connection_cls=MockAWSService) HTTPretty.enable() def tearDown(self): HTTPretty.disable() class TestAWSQueryConnectionSimple(TestAWSQueryConnection): def test_query_connection_basis(self): HTTPretty.register_uri(HTTPretty.POST, 'https://%s/' % self.region.endpoint, json.dumps({'test': 'secure'}), content_type='application/json') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') self.assertEqual(conn.host, 'mockservice.cc-zone-1.amazonaws.com') def test_query_connection_noproxy(self): HTTPretty.register_uri(HTTPretty.POST, 'https://%s/' % self.region.endpoint, json.dumps({'test': 'secure'}), content_type='application/json') os.environ['no_proxy'] = self.region.endpoint conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret', proxy="NON_EXISTENT_HOSTNAME", proxy_port="3128") resp = 
conn.make_request('myCmd', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") del os.environ['no_proxy'] args = urlparse.parse_qs(HTTPretty.last_request.body) self.assertEqual(args['AWSAccessKeyId'], ['access_key']) def test_query_connection_noproxy_nosecure(self): HTTPretty.register_uri(HTTPretty.POST, 'https://%s/' % self.region.endpoint, json.dumps({'test': 'insecure'}), content_type='application/json') os.environ['no_proxy'] = self.region.endpoint conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret', proxy="NON_EXISTENT_HOSTNAME", proxy_port="3128", is_secure = False) resp = conn.make_request('myCmd', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") del os.environ['no_proxy'] args = urlparse.parse_qs(HTTPretty.last_request.body) self.assertEqual(args['AWSAccessKeyId'], ['access_key']) def test_single_command(self): HTTPretty.register_uri(HTTPretty.POST, 'https://%s/' % self.region.endpoint, json.dumps({'test': 'secure'}), content_type='application/json') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') resp = conn.make_request('myCmd', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") args = urlparse.parse_qs(HTTPretty.last_request.body) self.assertEqual(args['AWSAccessKeyId'], ['access_key']) self.assertEqual(args['SignatureMethod'], ['HmacSHA256']) self.assertEqual(args['Version'], [conn.APIVersion]) self.assertEqual(args['par1'], ['foo']) self.assertEqual(args['par2'], ['baz']) self.assertEqual(resp.read(), '{"test": "secure"}') def test_multi_commands(self): """Check connection re-use""" HTTPretty.register_uri(HTTPretty.POST, 'https://%s/' % self.region.endpoint, json.dumps({'test': 'secure'}), content_type='application/json') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') resp1 = conn.make_request('myCmd1', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") body1 = urlparse.parse_qs(HTTPretty.last_request.body) resp2 = conn.make_request('myCmd2', {'par3': 'bar', 'par4': 'narf'}, "/", "POST") body2 = urlparse.parse_qs(HTTPretty.last_request.body) self.assertEqual(body1['par1'], ['foo']) self.assertEqual(body1['par2'], ['baz']) with self.assertRaises(KeyError): body1['par3'] self.assertEqual(body2['par3'], ['bar']) self.assertEqual(body2['par4'], ['narf']) with self.assertRaises(KeyError): body2['par1'] self.assertEqual(resp1.read(), '{"test": "secure"}') self.assertEqual(resp2.read(), '{"test": "secure"}') def test_non_secure(self): HTTPretty.register_uri(HTTPretty.POST, 'http://%s/' % self.region.endpoint, json.dumps({'test': 'normal'}), content_type='application/json') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret', is_secure=False) resp = conn.make_request('myCmd1', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") self.assertEqual(resp.read(), '{"test": "normal"}') def test_alternate_port(self): HTTPretty.register_uri(HTTPretty.POST, 'http://%s:8080/' % self.region.endpoint, json.dumps({'test': 'alternate'}), content_type='application/json') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret', port=8080, is_secure=False) resp = conn.make_request('myCmd1', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") self.assertEqual(resp.read(), '{"test": "alternate"}') def test_temp_failure(self): responses = [HTTPretty.Response(body="{'test': 'fail'}", status=500), HTTPretty.Response(body="{'test': 'success'}", status=200)] HTTPretty.register_uri(HTTPretty.POST, 'https://%s/temp_fail/' % self.region.endpoint, 
responses=responses) conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') resp = conn.make_request('myCmd1', {'par1': 'foo', 'par2': 'baz'}, '/temp_fail/', 'POST') self.assertEqual(resp.read(), "{'test': 'success'}") def test_connection_close(self): """Check connection re-use after close header is received""" HTTPretty.register_uri(HTTPretty.POST, 'https://%s/' % self.region.endpoint, json.dumps({'test': 'secure'}), content_type='application/json', connection='close') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') def mock_put_conn(*args, **kwargs): raise Exception('put_http_connection should not be called!') conn.put_http_connection = mock_put_conn resp1 = conn.make_request('myCmd1', {'par1': 'foo', 'par2': 'baz'}, "/", "POST") # If we've gotten this far then no exception was raised # by attempting to put the connection back into the pool # Now let's just confirm the close header was actually # set or we have another problem. self.assertEqual(resp1.getheader('connection'), 'close') def test_port_pooling(self): conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret', port=8080) # Pick a connection, then put it back con1 = conn.get_http_connection(conn.host, conn.port, conn.is_secure) conn.put_http_connection(conn.host, conn.port, conn.is_secure, con1) # Pick another connection, which hopefully is the same yet again con2 = conn.get_http_connection(conn.host, conn.port, conn.is_secure) conn.put_http_connection(conn.host, conn.port, conn.is_secure, con2) self.assertEqual(con1, con2) # Change the port and make sure a new connection is made conn.port = 8081 con3 = conn.get_http_connection(conn.host, conn.port, conn.is_secure) conn.put_http_connection(conn.host, conn.port, conn.is_secure, con3) self.assertNotEqual(con1, con3) class TestAWSQueryStatus(TestAWSQueryConnection): def test_get_status(self): HTTPretty.register_uri(HTTPretty.GET, 'https://%s/status' % self.region.endpoint, 'ok', content_type='text/xml') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') resp = conn.get_status('getStatus', {'par1': 'foo', 'par2': 'baz'}, 'status') self.assertEqual(resp, "ok") def test_get_status_blank_error(self): HTTPretty.register_uri(HTTPretty.GET, 'https://%s/status' % self.region.endpoint, '', content_type='text/xml') conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') with self.assertRaises(BotoServerError): resp = conn.get_status('getStatus', {'par1': 'foo', 'par2': 'baz'}, 'status') def test_get_status_error(self): HTTPretty.register_uri(HTTPretty.GET, 'https://%s/status' % self.region.endpoint, 'error', content_type='text/xml', status=400) conn = self.region.connect(aws_access_key_id='access_key', aws_secret_access_key='secret') with self.assertRaises(BotoServerError): resp = conn.get_status('getStatus', {'par1': 'foo', 'par2': 'baz'}, 'status') if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/test_exception.py000066400000000000000000000121411225267101000202620ustar00rootroot00000000000000from tests.unit import unittest from boto.exception import BotoServerError, S3CreateError, JSONResponseError from httpretty import HTTPretty, httprettified class TestBotoServerError(unittest.TestCase): def test_botoservererror_basics(self): bse = BotoServerError('400', 'Bad Request') self.assertEqual(bse.status, '400') self.assertEqual(bse.reason, 'Bad Request') def test_message_elb_xml(self): # This 
test XML response comes from #509 xml = """ Sender LoadBalancerNotFound Cannot find Load Balancer webapp-balancer2 093f80d0-4473-11e1-9234-edce8ec08e2d """ bse = BotoServerError('400', 'Bad Request', body=xml) self.assertEqual(bse.error_message, 'Cannot find Load Balancer webapp-balancer2') self.assertEqual(bse.error_message, bse.message) self.assertEqual(bse.request_id, '093f80d0-4473-11e1-9234-edce8ec08e2d') self.assertEqual(bse.error_code, 'LoadBalancerNotFound') self.assertEqual(bse.status, '400') self.assertEqual(bse.reason, 'Bad Request') def test_message_sd_xml(self): # Sample XML response from: https://forums.aws.amazon.com/thread.jspa?threadID=87393 xml = """ AuthorizationFailure Session does not have permission to perform (sdb:CreateDomain) on resource (arn:aws:sdb:us-east-1:xxxxxxx:domain/test_domain). Contact account owner. 0.0055590278 e73bb2bb-63e3-9cdc-f220-6332de66dbbe """ bse = BotoServerError('403', 'Forbidden', body=xml) self.assertEqual(bse.error_message, 'Session does not have permission to perform (sdb:CreateDomain) on ' 'resource (arn:aws:sdb:us-east-1:xxxxxxx:domain/test_domain). ' 'Contact account owner.') self.assertEqual(bse.error_message, bse.message) self.assertEqual(bse.box_usage, '0.0055590278') self.assertEqual(bse.error_code, 'AuthorizationFailure') self.assertEqual(bse.status, '403') self.assertEqual(bse.reason, 'Forbidden') @httprettified def test_xmlns_not_loaded(self): xml = '' bse = BotoServerError('403', 'Forbidden', body=xml) self.assertEqual([], HTTPretty.latest_requests) @httprettified def test_xml_entity_not_loaded(self): xml = ']>error:&xxe;' bse = BotoServerError('403', 'Forbidden', body=xml) self.assertEqual([], HTTPretty.latest_requests) def test_message_storage_create_error(self): # This test value comes from https://answers.launchpad.net/duplicity/+question/150801 xml = """ BucketAlreadyOwnedByYou Your previous request to create the named bucket succeeded and you already own it. 
cmsbk FF8B86A32CC3FE4F 6ENGL3DT9f0n7Tkv4qdKIs/uBNCMMA6QUFapw265WmodFDluP57esOOkecp55qhh """ s3ce = S3CreateError('409', 'Conflict', body=xml) self.assertEqual(s3ce.bucket, 'cmsbk') self.assertEqual(s3ce.error_code, 'BucketAlreadyOwnedByYou') self.assertEqual(s3ce.status, '409') self.assertEqual(s3ce.reason, 'Conflict') self.assertEqual(s3ce.error_message, 'Your previous request to create the named bucket succeeded ' 'and you already own it.') self.assertEqual(s3ce.error_message, s3ce.message) self.assertEqual(s3ce.request_id, 'FF8B86A32CC3FE4F') def test_message_json_response_error(self): # This test comes from https://forums.aws.amazon.com/thread.jspa?messageID=374936 body = { '__type': 'com.amazon.coral.validate#ValidationException', 'message': 'The attempted filter operation is not supported ' 'for the provided filter argument count'} jre = JSONResponseError('400', 'Bad Request', body=body) self.assertEqual(jre.status, '400') self.assertEqual(jre.reason, 'Bad Request') self.assertEqual(jre.error_message, body['message']) self.assertEqual(jre.error_message, jre.message) self.assertEqual(jre.code, 'ValidationException') self.assertEqual(jre.code, jre.error_code) def test_message_not_xml(self): body = 'This is not XML' bse = BotoServerError('400', 'Bad Request', body=body) self.assertEqual(bse.error_message, 'This is not XML') def test_getters(self): body = "This is the body" bse = BotoServerError('400', 'Bad Request', body=body) self.assertEqual(bse.code, bse.error_code) self.assertEqual(bse.message, bse.error_message) boto-2.20.1/tests/unit/utils/000077500000000000000000000000001225267101000160145ustar00rootroot00000000000000boto-2.20.1/tests/unit/utils/__init__.py000066400000000000000000000000001225267101000201130ustar00rootroot00000000000000boto-2.20.1/tests/unit/utils/test_utils.py000066400000000000000000000204321225267101000205660ustar00rootroot00000000000000# Copyright (c) 2010 Robert Mela # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, dis- # tribute, sublicense, and/or sell copies of the Software, and to permit # persons to whom the Software is furnished to do so, subject to the fol- # lowing conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. 
# try: import unittest2 as unittest except ImportError: import unittest import hashlib import hmac import mock import boto.utils from boto.utils import Password from boto.utils import pythonize_name from boto.utils import _build_instance_metadata_url from boto.utils import get_instance_userdata from boto.utils import retry_url from boto.utils import LazyLoadMetadata from boto.compat import json class TestPassword(unittest.TestCase): """Test basic password functionality""" def clstest(self, cls): """Insure that password.__eq__ hashes test value before compare.""" password = cls('foo') self.assertNotEquals(password, 'foo') password.set('foo') hashed = str(password) self.assertEquals(password, 'foo') self.assertEquals(password.str, hashed) password = cls(hashed) self.assertNotEquals(password.str, 'foo') self.assertEquals(password, 'foo') self.assertEquals(password.str, hashed) def test_aaa_version_1_9_default_behavior(self): self.clstest(Password) def test_custom_hashclass(self): class SHA224Password(Password): hashfunc = hashlib.sha224 password = SHA224Password() password.set('foo') self.assertEquals(hashlib.sha224('foo').hexdigest(), str(password)) def test_hmac(self): def hmac_hashfunc(cls, msg): return hmac.new('mysecretkey', msg) class HMACPassword(Password): hashfunc = hmac_hashfunc self.clstest(HMACPassword) password = HMACPassword() password.set('foo') self.assertEquals(str(password), hmac.new('mysecretkey', 'foo').hexdigest()) def test_constructor(self): hmac_hashfunc = lambda msg: hmac.new('mysecretkey', msg) password = Password(hashfunc=hmac_hashfunc) password.set('foo') self.assertEquals(password.str, hmac.new('mysecretkey', 'foo').hexdigest()) class TestPythonizeName(unittest.TestCase): def test_empty_string(self): self.assertEqual(pythonize_name(''), '') def test_all_lower_case(self): self.assertEqual(pythonize_name('lowercase'), 'lowercase') def test_all_upper_case(self): self.assertEqual(pythonize_name('UPPERCASE'), 'uppercase') def test_camel_case(self): self.assertEqual(pythonize_name('OriginallyCamelCased'), 'originally_camel_cased') def test_already_pythonized(self): self.assertEqual(pythonize_name('already_pythonized'), 'already_pythonized') def test_multiple_upper_cased_letters(self): self.assertEqual(pythonize_name('HTTPRequest'), 'http_request') self.assertEqual(pythonize_name('RequestForHTTP'), 'request_for_http') def test_string_with_numbers(self): self.assertEqual(pythonize_name('HTTPStatus200Ok'), 'http_status_200_ok') class TestBuildInstanceMetadataURL(unittest.TestCase): def test_normal(self): # This is the all-defaults case. 
self.assertEqual(_build_instance_metadata_url( 'http://169.254.169.254', 'latest', 'meta-data/' ), 'http://169.254.169.254/latest/meta-data/' ) def test_custom_path(self): self.assertEqual(_build_instance_metadata_url( 'http://169.254.169.254', 'latest', 'dynamic/' ), 'http://169.254.169.254/latest/dynamic/' ) def test_custom_version(self): self.assertEqual(_build_instance_metadata_url( 'http://169.254.169.254', '1.0', 'meta-data/' ), 'http://169.254.169.254/1.0/meta-data/' ) def test_custom_url(self): self.assertEqual(_build_instance_metadata_url( 'http://10.0.1.5', 'latest', 'meta-data/' ), 'http://10.0.1.5/latest/meta-data/' ) def test_all_custom(self): self.assertEqual(_build_instance_metadata_url( 'http://10.0.1.5', '2013-03-22', 'user-data' ), 'http://10.0.1.5/2013-03-22/user-data' ) class TestRetryURL(unittest.TestCase): def setUp(self): self.urlopen_patch = mock.patch('urllib2.urlopen') self.opener_patch = mock.patch('urllib2.build_opener') self.urlopen = self.urlopen_patch.start() self.opener = self.opener_patch.start() def tearDown(self): self.urlopen_patch.stop() self.opener_patch.stop() def set_normal_response(self, response): fake_response = mock.Mock() fake_response.read.return_value = response self.urlopen.return_value = fake_response def set_no_proxy_allowed_response(self, response): fake_response = mock.Mock() fake_response.read.return_value = response self.opener.return_value.open.return_value = fake_response def test_retry_url_uses_proxy(self): self.set_normal_response('normal response') self.set_no_proxy_allowed_response('no proxy response') response = retry_url('http://10.10.10.10/foo', num_retries=1) self.assertEqual(response, 'no proxy response') class TestLazyLoadMetadata(unittest.TestCase): def setUp(self): self.retry_url_patch = mock.patch('boto.utils.retry_url') boto.utils.retry_url = self.retry_url_patch.start() def tearDown(self): self.retry_url_patch.stop() def set_normal_response(self, data): # here "data" should be a list of return values in some order fake_response = mock.Mock() fake_response.side_effect = data boto.utils.retry_url = fake_response def test_meta_data_with_invalid_json_format_happened_once(self): # here "key_data" will be stored in the "self._leaves" # when the class "LazyLoadMetadata" initialized key_data = "test" invalid_data = '{"invalid_json_format" : true,}' valid_data = '{ "%s" : {"valid_json_format": true}}' % key_data url = "/".join(["http://169.254.169.254", key_data]) num_retries = 2 self.set_normal_response([key_data, invalid_data, valid_data]) response = LazyLoadMetadata(url, num_retries) self.assertEqual(response.values()[0], json.loads(valid_data)) def test_meta_data_with_invalid_json_format_happened_twice(self): key_data = "test" invalid_data = '{"invalid_json_format" : true,}' valid_data = '{ "%s" : {"valid_json_format": true}}' % key_data url = "/".join(["http://169.254.169.254", key_data]) num_retries = 2 self.set_normal_response([key_data, invalid_data, invalid_data]) response = LazyLoadMetadata(url, num_retries) with self.assertRaises(ValueError): response.values()[0] def test_user_data(self): self.set_normal_response(['foo']) userdata = get_instance_userdata() self.assertEqual('foo', userdata) boto.utils.retry_url.assert_called_with( 'http://169.254.169.254/latest/user-data', retry_on_404=False) if __name__ == '__main__': unittest.main() 
boto-2.20.1/tests/unit/vpc/000077500000000000000000000000001225267101000154445ustar00rootroot00000000000000boto-2.20.1/tests/unit/vpc/__init__.py000066400000000000000000000000351225267101000175530ustar00rootroot00000000000000""" Test package for VPC """ boto-2.20.1/tests/unit/vpc/test_customergateway.py000066400000000000000000000107151225267101000223040ustar00rootroot00000000000000from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.vpc import VPCConnection, CustomerGateway class TestDescribeCustomerGateways(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE cgw-b4dc3961 available ipsec.1 12.1.2.3 65534 """ def test_get_all_customer_gateways(self): self.set_http_response(status_code=200) api_response = self.service_connection.get_all_customer_gateways( 'cgw-b4dc3961', filters=[('state', ['pending', 'available']), ('ip-address', '12.1.2.3')]) self.assert_request_parameters({ 'Action': 'DescribeCustomerGateways', 'CustomerGatewayId.1': 'cgw-b4dc3961', 'Filter.1.Name': 'state', 'Filter.1.Value.1': 'pending', 'Filter.1.Value.2': 'available', 'Filter.2.Name': 'ip-address', 'Filter.2.Value.1': '12.1.2.3'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(len(api_response), 1) self.assertIsInstance(api_response[0], CustomerGateway) self.assertEqual(api_response[0].id, 'cgw-b4dc3961') class TestCreateCustomerGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE cgw-b4dc3961 pending ipsec.1 12.1.2.3 65534 """ def test_create_customer_gateway(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_customer_gateway( 'ipsec.1', '12.1.2.3', 65534) self.assert_request_parameters({ 'Action': 'CreateCustomerGateway', 'Type': 'ipsec.1', 'IpAddress': '12.1.2.3', 'BgpAsn': 65534}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertIsInstance(api_response, CustomerGateway) self.assertEquals(api_response.id, 'cgw-b4dc3961') self.assertEquals(api_response.state, 'pending') self.assertEquals(api_response.type, 'ipsec.1') self.assertEquals(api_response.ip_address, '12.1.2.3') self.assertEquals(api_response.bgp_asn, 65534) class TestDeleteCustomerGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE true """ def test_delete_customer_gateway(self): self.set_http_response(status_code=200) api_response = self.service_connection.delete_customer_gateway('cgw-b4dc3961') self.assert_request_parameters({ 'Action': 'DeleteCustomerGateway', 'CustomerGatewayId': 'cgw-b4dc3961'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/vpc/test_dhcpoptions.py000066400000000000000000000210731225267101000214120ustar00rootroot00000000000000from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.vpc import VPCConnection, DhcpOptions class TestDescribeDhcpOptions(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE dopt-7a8b9c2d domain-name example.com domain-name-servers 10.2.5.1 domain-name-servers 10.2.5.2 """ def 
test_get_all_dhcp_options(self): self.set_http_response(status_code=200) api_response = self.service_connection.get_all_dhcp_options(['dopt-7a8b9c2d'], [('key', 'domain-name')]) self.assert_request_parameters({ 'Action': 'DescribeDhcpOptions', 'DhcpOptionsId.1': 'dopt-7a8b9c2d', 'Filter.1.Name': 'key', 'Filter.1.Value.1': 'domain-name'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(len(api_response), 1) self.assertIsInstance(api_response[0], DhcpOptions) self.assertEquals(api_response[0].id, 'dopt-7a8b9c2d') self.assertEquals(api_response[0].options['domain-name'], ['example.com']) self.assertEquals(api_response[0].options['domain-name-servers'], ['10.2.5.1', '10.2.5.2']) class TestCreateDhcpOptions(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE dopt-7a8b9c2d domain-name example.com domain-name-servers 10.2.5.1 10.2.5.2 ntp-servers 10.12.12.1 10.12.12.2 netbios-name-servers 10.20.20.1 netbios-node-type 2 """ def test_create_dhcp_options(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_dhcp_options( domain_name='example.com', domain_name_servers=['10.2.5.1', '10.2.5.2'], ntp_servers=('10.12.12.1', '10.12.12.2'), netbios_name_servers='10.20.20.1', netbios_node_type='2') self.assert_request_parameters({ 'Action': 'CreateDhcpOptions', 'DhcpConfiguration.1.Key': 'domain-name', 'DhcpConfiguration.1.Value.1': 'example.com', 'DhcpConfiguration.2.Key': 'domain-name-servers', 'DhcpConfiguration.2.Value.1': '10.2.5.1', 'DhcpConfiguration.2.Value.2': '10.2.5.2', 'DhcpConfiguration.3.Key': 'ntp-servers', 'DhcpConfiguration.3.Value.1': '10.12.12.1', 'DhcpConfiguration.3.Value.2': '10.12.12.2', 'DhcpConfiguration.4.Key': 'netbios-name-servers', 'DhcpConfiguration.4.Value.1': '10.20.20.1', 'DhcpConfiguration.5.Key': 'netbios-node-type', 'DhcpConfiguration.5.Value.1': '2'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertIsInstance(api_response, DhcpOptions) self.assertEquals(api_response.id, 'dopt-7a8b9c2d') self.assertEquals(api_response.options['domain-name'], ['example.com']) self.assertEquals(api_response.options['domain-name-servers'], ['10.2.5.1', '10.2.5.2']) self.assertEquals(api_response.options['ntp-servers'], ['10.12.12.1', '10.12.12.2']) self.assertEquals(api_response.options['netbios-name-servers'], ['10.20.20.1']) self.assertEquals(api_response.options['netbios-node-type'], ['2']) class TestDeleteDhcpOptions(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE true """ def test_delete_dhcp_options(self): self.set_http_response(status_code=200) api_response = self.service_connection.delete_dhcp_options('dopt-7a8b9c2d') self.assert_request_parameters({ 'Action': 'DeleteDhcpOptions', 'DhcpOptionsId': 'dopt-7a8b9c2d'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) class TestAssociateDhcpOptions(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE true """ def test_associate_dhcp_options(self): self.set_http_response(status_code=200) api_response = self.service_connection.associate_dhcp_options( 'dopt-7a8b9c2d', 'vpc-1a2b3c4d') self.assert_request_parameters({ 'Action': 
'AssociateDhcpOptions', 'DhcpOptionsId': 'dopt-7a8b9c2d', 'VpcId': 'vpc-1a2b3c4d'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/vpc/test_internetgateway.py000066400000000000000000000136731225267101000223010ustar00rootroot00000000000000from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.vpc import VPCConnection, InternetGateway class TestDescribeInternetGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE igw-eaad4883EXAMPLE vpc-11ad4878 available """ def test_describe_internet_gateway(self): self.set_http_response(status_code=200) api_response = self.service_connection.get_all_internet_gateways( 'igw-eaad4883EXAMPLE', filters=[('attachment.state', ['available', 'pending'])]) self.assert_request_parameters({ 'Action': 'DescribeInternetGateways', 'InternetGatewayId.1': 'igw-eaad4883EXAMPLE', 'Filter.1.Name': 'attachment.state', 'Filter.1.Value.1': 'available', 'Filter.1.Value.2': 'pending'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(len(api_response), 1) self.assertIsInstance(api_response[0], InternetGateway) self.assertEqual(api_response[0].id, 'igw-eaad4883EXAMPLE') class TestCreateInternetGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE igw-eaad4883 """ def test_create_internet_gateway(self): self.set_http_response(status_code=200) api_response = self.service_connection.create_internet_gateway() self.assert_request_parameters({ 'Action': 'CreateInternetGateway'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertIsInstance(api_response, InternetGateway) self.assertEqual(api_response.id, 'igw-eaad4883') class TestDeleteInternetGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE true """ def test_delete_internet_gateway(self): self.set_http_response(status_code=200) api_response = self.service_connection.delete_internet_gateway('igw-eaad4883') self.assert_request_parameters({ 'Action': 'DeleteInternetGateway', 'InternetGatewayId': 'igw-eaad4883'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) class TestAttachInternetGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE true """ def test_attach_internet_gateway(self): self.set_http_response(status_code=200) api_response = self.service_connection.attach_internet_gateway( 'igw-eaad4883', 'vpc-11ad4878') self.assert_request_parameters({ 'Action': 'AttachInternetGateway', 'InternetGatewayId': 'igw-eaad4883', 'VpcId': 'vpc-11ad4878'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) class TestDetachInternetGateway(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE true """ def test_detach_internet_gateway(self): self.set_http_response(status_code=200) api_response = 
self.service_connection.detach_internet_gateway( 'igw-eaad4883', 'vpc-11ad4878') self.assert_request_parameters({ 'Action': 'DetachInternetGateway', 'InternetGatewayId': 'igw-eaad4883', 'VpcId': 'vpc-11ad4878'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) if __name__ == '__main__': unittest.main() boto-2.20.1/tests/unit/vpc/test_networkacl.py000066400000000000000000000455101225267101000212330ustar00rootroot00000000000000from tests.unit import unittest from tests.unit import AWSMockServiceTestCase from boto.vpc import VPCConnection class TestDescribeNetworkAcls(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE acl-5566953c vpc-5266953b true 100 all allow true 0.0.0.0/0 32767 all deny true 0.0.0.0/0 100 all allow false 0.0.0.0/0 32767 all deny false 0.0.0.0/0 acl-5d659634 vpc-5266953b false 110 6 allow true 0.0.0.0/0 49152 65535 32767 all deny true 0.0.0.0/0 110 6 allow false 0.0.0.0/0 80 80 120 6 allow false 0.0.0.0/0 443 443 32767 all deny false 0.0.0.0/0 aclassoc-5c659635 acl-5d659634 subnet-ff669596 aclassoc-c26596ab acl-5d659634 subnet-f0669599 """ def test_get_all_network_acls(self): self.set_http_response(status_code=200) response = self.service_connection.get_all_network_acls(['acl-5566953c', 'acl-5d659634'], [('vpc-id', 'vpc-5266953b')]) self.assert_request_parameters({ 'Action': 'DescribeNetworkAcls', 'NetworkAclId.1': 'acl-5566953c', 'NetworkAclId.2': 'acl-5d659634', 'Filter.1.Name': 'vpc-id', 'Filter.1.Value.1': 'vpc-5266953b'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(len(response), 2) class TestReplaceNetworkAclAssociation(AWSMockServiceTestCase): connection_class = VPCConnection get_all_network_acls_vpc_body = """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE acl-5566953c vpc-5266953b true 100 all allow true 0.0.0.0/0 32767 all deny true 0.0.0.0/0 100 all allow false 0.0.0.0/0 32767 all deny false 0.0.0.0/0 """ get_all_network_acls_subnet_body = """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE acl-5d659634 vpc-5266953b false 110 6 allow true 0.0.0.0/0 49152 65535 aclassoc-c26596ab acl-5d659634 subnet-f0669599 aclassoc-5c659635 acl-5d659634 subnet-ff669596 """ def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE aclassoc-17b85d7e """ def test_associate_network_acl(self): self.https_connection.getresponse.side_effect = [ self.create_response(status_code=200, body=self.get_all_network_acls_subnet_body), self.create_response(status_code=200) ] response = self.service_connection.associate_network_acl('acl-5fb85d36', 'subnet-ff669596') # Note: Not testing proper call to get_all_network_acls! 
self.assert_request_parameters({ 'Action': 'ReplaceNetworkAclAssociation', 'NetworkAclId': 'acl-5fb85d36', 'AssociationId': 'aclassoc-5c659635'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(response, 'aclassoc-17b85d7e') def test_disassociate_network_acl(self): self.https_connection.getresponse.side_effect = [ self.create_response(status_code=200, body=self.get_all_network_acls_vpc_body), self.create_response(status_code=200, body=self.get_all_network_acls_subnet_body), self.create_response(status_code=200) ] response = self.service_connection.disassociate_network_acl('subnet-ff669596', 'vpc-5266953b') # Note: Not testing proper call to either call to get_all_network_acls! self.assert_request_parameters({ 'Action': 'ReplaceNetworkAclAssociation', 'NetworkAclId': 'acl-5566953c', 'AssociationId': 'aclassoc-5c659635'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(response, 'aclassoc-17b85d7e') class TestCreateNetworkAcl(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE acl-5fb85d36 vpc-11ad4878 false 32767 all deny true 0.0.0.0/0 32767 all deny false 0.0.0.0/0 """ def test_create_network_acl(self): self.set_http_response(status_code=200) response = self.service_connection.create_network_acl('vpc-11ad4878') self.assert_request_parameters({ 'Action': 'CreateNetworkAcl', 'VpcId': 'vpc-11ad4878'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(response.id, 'acl-5fb85d36') class DeleteCreateNetworkAcl(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE true """ def test_delete_network_acl(self): self.set_http_response(status_code=200) response = self.service_connection.delete_network_acl('acl-2cb85d45') self.assert_request_parameters({ 'Action': 'DeleteNetworkAcl', 'NetworkAclId': 'acl-2cb85d45'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(response, True) class TestCreateNetworkAclEntry(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE true """ def test_create_network_acl(self): self.set_http_response(status_code=200) response = self.service_connection.create_network_acl_entry( 'acl-2cb85d45', 110, 'udp', 'allow', '0.0.0.0/0', egress=False, port_range_from=53, port_range_to=53) self.assert_request_parameters({ 'Action': 'CreateNetworkAclEntry', 'NetworkAclId': 'acl-2cb85d45', 'RuleNumber': 110, 'Protocol': 'udp', 'RuleAction': 'allow', 'Egress': 'false', 'CidrBlock': '0.0.0.0/0', 'PortRange.From': 53, 'PortRange.To': 53}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEqual(response, True) def test_create_network_acl_icmp(self): self.set_http_response(status_code=200) response = self.service_connection.create_network_acl_entry( 'acl-2cb85d45', 110, 'udp', 'allow', '0.0.0.0/0', egress='true', icmp_code=-1, icmp_type=8) self.assert_request_parameters({ 'Action': 'CreateNetworkAclEntry', 'NetworkAclId': 'acl-2cb85d45', 'RuleNumber': 110, 'Protocol': 'udp', 'RuleAction': 'allow', 'Egress': 'true', 'CidrBlock': '0.0.0.0/0', 'Icmp.Code': -1, 'Icmp.Type': 8}, ignore_params_values=['AWSAccessKeyId', 
class TestReplaceNetworkAclEntry(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            59dbff89-35bd-4eac-99ed-be587EXAMPLE
            true
        """

    def test_replace_network_acl_entry(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.replace_network_acl_entry(
            'acl-2cb85d45', 110, 'tcp', 'deny', '0.0.0.0/0', egress=False,
            port_range_from=139, port_range_to=139)
        self.assert_request_parameters({
            'Action': 'ReplaceNetworkAclEntry',
            'NetworkAclId': 'acl-2cb85d45',
            'RuleNumber': 110,
            'Protocol': 'tcp',
            'RuleAction': 'deny',
            'Egress': 'false',
            'CidrBlock': '0.0.0.0/0',
            'PortRange.From': 139,
            'PortRange.To': 139},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(response, True)

    def test_replace_network_acl_entry_icmp(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.replace_network_acl_entry(
            'acl-2cb85d45', 110, 'tcp', 'deny', '0.0.0.0/0',
            icmp_code=-1, icmp_type=8)
        self.assert_request_parameters({
            'Action': 'ReplaceNetworkAclEntry',
            'NetworkAclId': 'acl-2cb85d45',
            'RuleNumber': 110,
            'Protocol': 'tcp',
            'RuleAction': 'deny',
            'CidrBlock': '0.0.0.0/0',
            'Icmp.Code': -1,
            'Icmp.Type': 8},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(response, True)


class TestDeleteNetworkAclEntry(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            59dbff89-35bd-4eac-99ed-be587EXAMPLE
            true
        """

    def test_delete_network_acl_entry(self):
        self.set_http_response(status_code=200)
        response = self.service_connection.delete_network_acl_entry(
            'acl-2cb85d45', 100, egress=False)
        self.assert_request_parameters({
            'Action': 'DeleteNetworkAclEntry',
            'NetworkAclId': 'acl-2cb85d45',
            'RuleNumber': 100,
            'Egress': 'false'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(response, True)


if __name__ == '__main__':
    unittest.main()

boto-2.20.1/tests/unit/vpc/test_routetable.py

from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.vpc import VPCConnection, RouteTable


class TestDescribeRouteTables(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            6f570b0b-9c18-4b07-bdec-73740dcf861a
            rtb-13ad487a vpc-11ad4878
            10.0.0.0/22 local active CreateRouteTable
            rtbassoc-12ad487b rtb-13ad487a true
            rtb-f9ad4890 vpc-11ad4878
            10.0.0.0/22 local active CreateRouteTable
            0.0.0.0/0 igw-eaad4883 active
            rtbassoc-faad4893 rtb-f9ad4890 subnet-15ad487c
        """
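
    # The canned body above describes two tables: rtb-13ad487a carries the
    # VPC's main (implicit) association with no subnet id, while rtb-f9ad4890
    # is explicitly associated with subnet-15ad487c; the assertions below
    # check exactly that split.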
""" def test_get_all_route_tables(self): self.set_http_response(status_code=200) api_response = self.service_connection.get_all_route_tables( ['rtb-13ad487a', 'rtb-f9ad4890'], filters=[('route.state', 'active')]) self.assert_request_parameters({ 'Action': 'DescribeRouteTables', 'RouteTableId.1': 'rtb-13ad487a', 'RouteTableId.2': 'rtb-f9ad4890', 'Filter.1.Name': 'route.state', 'Filter.1.Value.1': 'active'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(len(api_response), 2) self.assertIsInstance(api_response[0], RouteTable) self.assertEquals(api_response[0].id, 'rtb-13ad487a') self.assertEquals(len(api_response[0].routes), 1) self.assertEquals(api_response[0].routes[0].destination_cidr_block, '10.0.0.0/22') self.assertEquals(api_response[0].routes[0].gateway_id, 'local') self.assertEquals(api_response[0].routes[0].state, 'active') self.assertEquals(len(api_response[0].associations), 1) self.assertEquals(api_response[0].associations[0].id, 'rtbassoc-12ad487b') self.assertEquals(api_response[0].associations[0].route_table_id, 'rtb-13ad487a') self.assertIsNone(api_response[0].associations[0].subnet_id) self.assertEquals(api_response[0].associations[0].main, True) self.assertEquals(api_response[1].id, 'rtb-f9ad4890') self.assertEquals(len(api_response[1].routes), 2) self.assertEquals(api_response[1].routes[0].destination_cidr_block, '10.0.0.0/22') self.assertEquals(api_response[1].routes[0].gateway_id, 'local') self.assertEquals(api_response[1].routes[0].state, 'active') self.assertEquals(api_response[1].routes[1].destination_cidr_block, '0.0.0.0/0') self.assertEquals(api_response[1].routes[1].gateway_id, 'igw-eaad4883') self.assertEquals(api_response[1].routes[1].state, 'active') self.assertEquals(len(api_response[1].associations), 1) self.assertEquals(api_response[1].associations[0].id, 'rtbassoc-faad4893') self.assertEquals(api_response[1].associations[0].route_table_id, 'rtb-f9ad4890') self.assertEquals(api_response[1].associations[0].subnet_id, 'subnet-15ad487c') self.assertEquals(api_response[1].associations[0].main, False) class TestAssociateRouteTable(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE rtbassoc-f8ad4891 """ def test_associate_route_table(self): self.set_http_response(status_code=200) api_response = self.service_connection.associate_route_table( 'rtb-e4ad488d', 'subnet-15ad487c') self.assert_request_parameters({ 'Action': 'AssociateRouteTable', 'RouteTableId': 'rtb-e4ad488d', 'SubnetId': 'subnet-15ad487c'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, 'rtbassoc-f8ad4891') class TestDisassociateRouteTable(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE true """ def test_disassociate_route_table(self): self.set_http_response(status_code=200) api_response = self.service_connection.disassociate_route_table('rtbassoc-fdad4894') self.assert_request_parameters({ 'Action': 'DisassociateRouteTable', 'AssociationId': 'rtbassoc-fdad4894'}, ignore_params_values=['AWSAccessKeyId', 'SignatureMethod', 'SignatureVersion', 'Timestamp', 'Version']) self.assertEquals(api_response, True) class TestCreateRouteTable(AWSMockServiceTestCase): connection_class = VPCConnection def default_body(self): return """ 59dbff89-35bd-4eac-99ed-be587EXAMPLE rtb-f9ad4890 vpc-11ad4878 
class TestReplaceRouteTableAssociation(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            59dbff89-35bd-4eac-99ed-be587EXAMPLE
            rtbassoc-faad4893
        """

    def test_replace_route_table_assocation(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.replace_route_table_assocation(
            'rtbassoc-faad4893', 'rtb-f9ad4890')
        self.assert_request_parameters({
            'Action': 'ReplaceRouteTableAssociation',
            'AssociationId': 'rtbassoc-faad4893',
            'RouteTableId': 'rtb-f9ad4890'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)

    def test_replace_route_table_association_with_assoc(self):
        self.set_http_response(status_code=200)
        api_response = \
            self.service_connection.replace_route_table_association_with_assoc(
                'rtbassoc-faad4893', 'rtb-f9ad4890')
        self.assert_request_parameters({
            'Action': 'ReplaceRouteTableAssociation',
            'AssociationId': 'rtbassoc-faad4893',
            'RouteTableId': 'rtb-f9ad4890'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, 'rtbassoc-faad4893')


class TestCreateRoute(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            59dbff89-35bd-4eac-99ed-be587EXAMPLE
            true
        """

    def test_create_route_gateway(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_route(
            'rtb-e4ad488d', '0.0.0.0/0', gateway_id='igw-eaad4883')
        self.assert_request_parameters({
            'Action': 'CreateRoute',
            'RouteTableId': 'rtb-e4ad488d',
            'DestinationCidrBlock': '0.0.0.0/0',
            'GatewayId': 'igw-eaad4883'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)

    def test_create_route_instance(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_route(
            'rtb-g8ff4ea2', '0.0.0.0/0', instance_id='i-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'CreateRoute',
            'RouteTableId': 'rtb-g8ff4ea2',
            'DestinationCidrBlock': '0.0.0.0/0',
            'InstanceId': 'i-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)

    def test_create_route_interface(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_route(
            'rtb-g8ff4ea2', '0.0.0.0/0', interface_id='eni-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'CreateRoute',
            'RouteTableId': 'rtb-g8ff4ea2',
            'DestinationCidrBlock': '0.0.0.0/0',
            'NetworkInterfaceId': 'eni-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)
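
# create_route, like replace_route below, targets one of an internet gateway,
# an instance, or a network interface; whichever keyword is supplied selects
# the corresponding wire parameter (GatewayId, InstanceId or
# NetworkInterfaceId).
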
class TestReplaceRoute(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            59dbff89-35bd-4eac-99ed-be587EXAMPLE
            true
        """

    def test_replace_route_gateway(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.replace_route(
            'rtb-e4ad488d', '0.0.0.0/0', gateway_id='igw-eaad4883')
        self.assert_request_parameters({
            'Action': 'ReplaceRoute',
            'RouteTableId': 'rtb-e4ad488d',
            'DestinationCidrBlock': '0.0.0.0/0',
            'GatewayId': 'igw-eaad4883'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)

    def test_replace_route_instance(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.replace_route(
            'rtb-g8ff4ea2', '0.0.0.0/0', instance_id='i-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'ReplaceRoute',
            'RouteTableId': 'rtb-g8ff4ea2',
            'DestinationCidrBlock': '0.0.0.0/0',
            'InstanceId': 'i-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)

    def test_replace_route_interface(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.replace_route(
            'rtb-g8ff4ea2', '0.0.0.0/0', interface_id='eni-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'ReplaceRoute',
            'RouteTableId': 'rtb-g8ff4ea2',
            'DestinationCidrBlock': '0.0.0.0/0',
            'NetworkInterfaceId': 'eni-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


class TestDeleteRoute(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            59dbff89-35bd-4eac-99ed-be587EXAMPLE
            true
        """

    def test_delete_route(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.delete_route(
            'rtb-e4ad488d', '172.16.1.0/24')
        self.assert_request_parameters({
            'Action': 'DeleteRoute',
            'RouteTableId': 'rtb-e4ad488d',
            'DestinationCidrBlock': '172.16.1.0/24'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


if __name__ == '__main__':
    unittest.main()

boto-2.20.1/tests/unit/vpc/test_subnet.py

from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.vpc import VPCConnection, Subnet


class TestDescribeSubnets(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            subnet-9d4a7b6c available vpc-1a2b3c4d 10.0.1.0/24
            251 us-east-1a false false
            subnet-6e7f829e available vpc-1a2b3c4d 10.0.0.0/24
            251 us-east-1a false false
        """
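
    # Filters are (name, value) tuples; a list value fans out into
    # Filter.N.Value.1, Filter.N.Value.2, ... in the query string, as the
    # vpc-id filter below demonstrates.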
    def test_get_all_subnets(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.get_all_subnets(
            ['subnet-9d4a7b6c', 'subnet-6e7f829e'],
            filters=[('state', 'available'),
                     ('vpc-id', ['subnet-9d4a7b6c', 'subnet-6e7f829e'])])
        self.assert_request_parameters({
            'Action': 'DescribeSubnets',
            'SubnetId.1': 'subnet-9d4a7b6c',
            'SubnetId.2': 'subnet-6e7f829e',
            'Filter.1.Name': 'state',
            'Filter.1.Value.1': 'available',
            'Filter.2.Name': 'vpc-id',
            'Filter.2.Value.1': 'subnet-9d4a7b6c',
            'Filter.2.Value.2': 'subnet-6e7f829e'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(len(api_response), 2)
        self.assertIsInstance(api_response[0], Subnet)
        self.assertEqual(api_response[0].id, 'subnet-9d4a7b6c')
        self.assertEqual(api_response[1].id, 'subnet-6e7f829e')


class TestCreateSubnet(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            subnet-9d4a7b6c pending vpc-1a2b3c4d 10.0.1.0/24
            251 us-east-1a
        """

    def test_create_subnet(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_subnet(
            'vpc-1a2b3c4d', '10.0.1.0/24', 'us-east-1a')
        self.assert_request_parameters({
            'Action': 'CreateSubnet',
            'VpcId': 'vpc-1a2b3c4d',
            'CidrBlock': '10.0.1.0/24',
            'AvailabilityZone': 'us-east-1a'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertIsInstance(api_response, Subnet)
        self.assertEquals(api_response.id, 'subnet-9d4a7b6c')
        self.assertEquals(api_response.state, 'pending')
        self.assertEquals(api_response.vpc_id, 'vpc-1a2b3c4d')
        self.assertEquals(api_response.cidr_block, '10.0.1.0/24')
        self.assertEquals(api_response.available_ip_address_count, 251)
        self.assertEquals(api_response.availability_zone, 'us-east-1a')


class TestDeleteSubnet(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            true
        """

    def test_delete_subnet(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.delete_subnet(
            'subnet-9d4a7b6c')
        self.assert_request_parameters({
            'Action': 'DeleteSubnet',
            'SubnetId': 'subnet-9d4a7b6c'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


if __name__ == '__main__':
    unittest.main()

boto-2.20.1/tests/unit/vpc/test_vpc.py

# -*- coding: UTF-8 -*-
from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.vpc import VPCConnection, VPC

DESCRIBE_VPCS = r'''
    623040d1-b51c-40bc-8080-93486f38d03d
    vpc-12345678 available 172.16.0.0/16
    dopt-12345678 default false
'''


class TestDescribeVPCs(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return DESCRIBE_VPCS

    def test_get_vpcs(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.get_all_vpcs()
        self.assertEqual(len(api_response), 1)
        vpc = api_response[0]
        self.assertFalse(vpc.is_default)
        self.assertEqual(vpc.instance_tenancy, 'default')


class TestCreateVpc(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            vpc-1a2b3c4d pending 10.0.0.0/16
            dopt-1a2b3c4d2 default
        """

    def test_create_vpc(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_vpc(
            '10.0.0.0/16', 'default')
        self.assert_request_parameters({
            'Action': 'CreateVpc',
            'InstanceTenancy': 'default',
            'CidrBlock': '10.0.0.0/16'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertIsInstance(api_response, VPC)
        self.assertEquals(api_response.id, 'vpc-1a2b3c4d')
        self.assertEquals(api_response.state, 'pending')
        self.assertEquals(api_response.cidr_block, '10.0.0.0/16')
        self.assertEquals(api_response.dhcp_options_id, 'dopt-1a2b3c4d2')
        self.assertEquals(api_response.instance_tenancy, 'default')


class TestDeleteVpc(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            true
        """

    def test_delete_vpc(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.delete_vpc('vpc-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'DeleteVpc',
            'VpcId': 'vpc-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)
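
# modify_vpc_attribute takes each DNS switch as a Python boolean and marshals
# it to a nested .Value query parameter ('true'/'false'); the two attributes
# are toggled one per request, as the pair of tests below shows.
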
class TestModifyVpcAttribute(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            true
        """

    def test_modify_vpc_attribute_dns_support(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.modify_vpc_attribute(
            'vpc-1a2b3c4d', enable_dns_support=True)
        self.assert_request_parameters({
            'Action': 'ModifyVpcAttribute',
            'VpcId': 'vpc-1a2b3c4d',
            'EnableDnsSupport.Value': 'true'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)

    def test_modify_vpc_attribute_dns_hostnames(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.modify_vpc_attribute(
            'vpc-1a2b3c4d', enable_dns_hostnames=True)
        self.assert_request_parameters({
            'Action': 'ModifyVpcAttribute',
            'VpcId': 'vpc-1a2b3c4d',
            'EnableDnsHostnames.Value': 'true'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


if __name__ == '__main__':
    unittest.main()

boto-2.20.1/tests/unit/vpc/test_vpnconnection.py

# -*- coding: UTF-8 -*-
from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.vpc import VPCConnection, VpnConnection

DESCRIBE_VPNCONNECTIONS = r'''
    12345678-asdf-ghjk-zxcv-0987654321nb
    vpn-12qw34er56ty available
    <?xml version="1.0" encoding="UTF-8"?>
    ipsec.1 cgw-1234qwe9 vgw-lkjh1234
    Name VPN 1
    123.45.67.89 DOWN 2013-03-19T19:20:34.000Z 0
    123.45.67.90 UP 2013-03-20T08:00:14.000Z 0
    true
    192.168.0.0/24 static available
    vpn-qwerty12 pending
    <?xml version="1.0" encoding="UTF-8"?>
    ipsec.1 cgw-01234567 vgw-asdfghjk
    134.56.78.78 UP 2013-03-20T01:46:30.000Z 0
    134.56.78.79 UP 2013-03-19T19:23:59.000Z 0
    true
    10.0.0.0/16 static pending
'''


class TestDescribeVPNConnections(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return DESCRIBE_VPNCONNECTIONS

    def test_get_all_vpn_connections(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.get_all_vpn_connections(
            ['vpn-12qw34er56ty', 'vpn-qwerty12'],
            filters=[('state', ['pending', 'available'])])
        self.assert_request_parameters({
            'Action': 'DescribeVpnConnections',
            'VpnConnectionId.1': 'vpn-12qw34er56ty',
            'VpnConnectionId.2': 'vpn-qwerty12',
            'Filter.1.Name': 'state',
            'Filter.1.Value.1': 'pending',
            'Filter.1.Value.2': 'available'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(len(api_response), 2)
        vpn0 = api_response[0]
        self.assertEqual(vpn0.type, 'ipsec.1')
        self.assertEqual(vpn0.customer_gateway_id, 'cgw-1234qwe9')
        self.assertEqual(vpn0.vpn_gateway_id, 'vgw-lkjh1234')
        self.assertEqual(len(vpn0.tunnels), 2)
        self.assertDictEqual(vpn0.tags, {'Name': 'VPN 1'})
        vpn1 = api_response[1]
        self.assertEqual(vpn1.state, 'pending')
        self.assertEqual(len(vpn1.static_routes), 1)
        self.assertTrue(vpn1.options.static_routes_only)
        self.assertEqual(vpn1.tunnels[0].status, 'UP')
        self.assertEqual(vpn1.tunnels[1].status, 'UP')
        self.assertDictEqual(vpn1.tags, {})
        self.assertEqual(vpn1.static_routes[0].source, 'static')
        self.assertEqual(vpn1.static_routes[0].state, 'pending')
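
# The '<?xml ...?>' fragments embedded above are the per-connection
# customerGatewayConfiguration payload, itself an XML document AWS returns
# for configuring the on-premises device; the assertions above do not
# inspect it.
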
class TestCreateVPNConnection(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            5cc7891f-1f3b-4fc4-a626-bdea8f63ff5a
            vpn-83ad48ea pending
            <?xml version="1.0" encoding="UTF-8"?>
            cgw-b4dc3961 vgw-8db04f81
            true
        """

    def test_create_vpn_connection(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_vpn_connection(
            'ipsec.1', 'cgw-b4dc3961', 'vgw-8db04f81',
            static_routes_only=True)
        self.assert_request_parameters({
            'Action': 'CreateVpnConnection',
            'Type': 'ipsec.1',
            'CustomerGatewayId': 'cgw-b4dc3961',
            'VpnGatewayId': 'vgw-8db04f81',
            'Options.StaticRoutesOnly': 'true'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertIsInstance(api_response, VpnConnection)
        self.assertEquals(api_response.id, 'vpn-83ad48ea')
        self.assertEquals(api_response.customer_gateway_id, 'cgw-b4dc3961')
        self.assertEquals(api_response.options.static_routes_only, True)
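
# Passing static_routes_only=True adds 'Options.StaticRoutesOnly': 'true' to
# the request, and the parsed VpnConnection mirrors it back as
# options.static_routes_only, as asserted above.
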
class TestDeleteVPNConnection(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            true
        """

    def test_delete_vpn_connection(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.delete_vpn_connection(
            'vpn-44a8938f')
        self.assert_request_parameters({
            'Action': 'DeleteVpnConnection',
            'VpnConnectionId': 'vpn-44a8938f'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


class TestCreateVPNConnectionRoute(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            4f35a1b2-c2c3-4093-b51f-abb9d7311990
            true
        """

    def test_create_vpn_connection_route(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_vpn_connection_route(
            '11.12.0.0/16', 'vpn-83ad48ea')
        self.assert_request_parameters({
            'Action': 'CreateVpnConnectionRoute',
            'DestinationCidrBlock': '11.12.0.0/16',
            'VpnConnectionId': 'vpn-83ad48ea'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


class TestDeleteVPNConnectionRoute(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            4f35a1b2-c2c3-4093-b51f-abb9d7311990
            true
        """

    def test_delete_vpn_connection_route(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.delete_vpn_connection_route(
            '11.12.0.0/16', 'vpn-83ad48ea')
        self.assert_request_parameters({
            'Action': 'DeleteVpnConnectionRoute',
            'DestinationCidrBlock': '11.12.0.0/16',
            'VpnConnectionId': 'vpn-83ad48ea'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEquals(api_response, True)


if __name__ == '__main__':
    unittest.main()

boto-2.20.1/tests/unit/vpc/test_vpngateway.py

# -*- coding: UTF-8 -*-
from tests.unit import unittest
from tests.unit import AWSMockServiceTestCase

from boto.vpc import VPCConnection, VpnGateway, Attachment


class TestDescribeVpnGateways(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            vgw-8db04f81 available ipsec.1 us-east-1a
            vpc-1a2b3c4d attached
        """

    def test_get_all_vpn_gateways(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.get_all_vpn_gateways(
            'vgw-8db04f81',
            filters=[('state', ['pending', 'available']),
                     ('availability-zone', 'us-east-1a')])
        self.assert_request_parameters({
            'Action': 'DescribeVpnGateways',
            'VpnGatewayId.1': 'vgw-8db04f81',
            'Filter.1.Name': 'state',
            'Filter.1.Value.1': 'pending',
            'Filter.1.Value.2': 'available',
            'Filter.2.Name': 'availability-zone',
            'Filter.2.Value.1': 'us-east-1a'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(len(api_response), 1)
        self.assertIsInstance(api_response[0], VpnGateway)
        self.assertEqual(api_response[0].id, 'vgw-8db04f81')


class TestCreateVpnGateway(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            vgw-8db04f81 pending ipsec.1 us-east-1a
        """

    def test_create_vpn_gateway(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.create_vpn_gateway(
            'ipsec.1', 'us-east-1a')
        self.assert_request_parameters({
            'Action': 'CreateVpnGateway',
            'AvailabilityZone': 'us-east-1a',
            'Type': 'ipsec.1'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertIsInstance(api_response, VpnGateway)
        self.assertEquals(api_response.id, 'vgw-8db04f81')


class TestDeleteVpnGateway(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            true
        """

    def test_delete_vpn_gateway(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.delete_vpn_gateway(
            'vgw-8db04f81')
        self.assert_request_parameters({
            'Action': 'DeleteVpnGateway',
            'VpnGatewayId': 'vgw-8db04f81'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(api_response, True)


class TestAttachVpnGateway(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            vpc-1a2b3c4d attaching
        """

    def test_attach_vpn_gateway(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.attach_vpn_gateway(
            'vgw-8db04f81', 'vpc-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'AttachVpnGateway',
            'VpnGatewayId': 'vgw-8db04f81',
            'VpcId': 'vpc-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertIsInstance(api_response, Attachment)
        self.assertEquals(api_response.vpc_id, 'vpc-1a2b3c4d')
        self.assertEquals(api_response.state, 'attaching')
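
# attach_vpn_gateway parses the response into an Attachment (vpc_id plus
# state), whereas detach_vpn_gateway below simply reports success as a
# boolean.
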
class TestDetachVpnGateway(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            7a62c49f-347e-4fc4-9331-6e8eEXAMPLE
            true
        """

    def test_detach_vpn_gateway(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.detach_vpn_gateway(
            'vgw-8db04f81', 'vpc-1a2b3c4d')
        self.assert_request_parameters({
            'Action': 'DetachVpnGateway',
            'VpnGatewayId': 'vgw-8db04f81',
            'VpcId': 'vpc-1a2b3c4d'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(api_response, True)


class TestDisableVgwRoutePropagation(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            4f35a1b2-c2c3-4093-b51f-abb9d7311990
            true
        """

    def test_disable_vgw_route_propagation(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.disable_vgw_route_propagation(
            'rtb-c98a35a0', 'vgw-d8e09e8a')
        self.assert_request_parameters({
            'Action': 'DisableVgwRoutePropagation',
            'GatewayId': 'vgw-d8e09e8a',
            'RouteTableId': 'rtb-c98a35a0'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(api_response, True)


class TestEnableVgwRoutePropagation(AWSMockServiceTestCase):
    connection_class = VPCConnection

    def default_body(self):
        return """
            4f35a1b2-c2c3-4093-b51f-abb9d7311990
            true
        """

    def test_enable_vgw_route_propagation(self):
        self.set_http_response(status_code=200)
        api_response = self.service_connection.enable_vgw_route_propagation(
            'rtb-c98a35a0', 'vgw-d8e09e8a')
        self.assert_request_parameters({
            'Action': 'EnableVgwRoutePropagation',
            'GatewayId': 'vgw-d8e09e8a',
            'RouteTableId': 'rtb-c98a35a0'},
            ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
                                  'SignatureVersion', 'Timestamp',
                                  'Version'])
        self.assertEqual(api_response, True)


if __name__ == '__main__':
    unittest.main()

boto-2.20.1/tox.ini

[tox]
envlist = py26,py27

[testenv]
commands =
    pip install -qr requirements.txt
    python tests/test.py tests/unit
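# The commands run once per interpreter in envlist: install dependencies from
# requirements.txt, then run the unit suite under tests/unit.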