Metadata-Version: 1.1
Name: awscli
Version: 1.10.1
Summary: Universal Command Line Environment for AWS.
Home-page: http://aws.amazon.com/cli/
Author: Amazon Web Services
Author-email: UNKNOWN
License: Apache License 2.0
Description: =======
aws-cli
=======
.. image:: https://travis-ci.org/aws/aws-cli.png?branch=develop
:target: https://travis-ci.org/aws/aws-cli
:alt: Build Status
.. image:: https://coveralls.io/repos/aws/aws-cli/badge.png
:target: https://coveralls.io/r/aws/aws-cli
This package provides a unified command line interface to Amazon Web Services.
The aws-cli package works on Python versions:
* 2.6.5 and greater
* 2.7.x and greater
* 3.3.x and greater
* 3.4.x and greater
.. attention::
We recommend that all customers regularly monitor the
`Amazon Web Services Security Bulletins website`_ for any important security bulletins related to
aws-cli.
------------
Installation
------------
The easiest way to install aws-cli is to use `pip`_::
$ pip install awscli
or, if you are not installing in a ``virtualenv``::
$ sudo pip install awscli
If you have the aws-cli installed and want to upgrade to the latest version
you can run::
$ pip install --upgrade awscli
This will install the aws-cli package as well as all dependencies. You can
also just `download the tarball`_. Once you have the
awscli directory structure on your workstation, you can just run::
$ cd
$ python setup.py install
If you want to run the ``develop`` branch of the CLI, see the
"CLI Dev Version" section below.
------------
CLI Releases
------------
The release notes for the AWS CLI can be found `here `__.
You can also find a `CHANGELOG `__
in the github repo.
------------------
Command Completion
------------------
The aws-cli package includes a very useful command completion feature.
This feature is not automatically installed, so you need to configure it manually.
To enable tab completion for bash either use the built-in command ``complete``::
$ complete -C aws_completer aws
Or add ``bin/aws_bash_completer`` file under ``/etc/bash_completion.d``,
``/usr/local/etc/bash_completion.d`` or any other ``bash_completion.d`` location.
For tcsh::
$ complete aws 'p/*/`aws_completer`/'
You should add this to your startup scripts to enable it for future sessions.
For zsh please refer to ``bin/aws_zsh_completer.sh``. Source that file, e.g.
from your ``~/.zshrc``, and make sure you run ``compinit`` before::
$ source bin/aws_zsh_completer.sh
For now the bash compatibility auto completion (bashcompinit) is used.
For further details please refer to the top of bin/aws_zsh_completer.sh.
---------------
Getting Started
---------------
Before using aws-cli, you need to tell it about your AWS credentials. You
can do this in several ways:
* Environment variables
* Config file
* IAM Role
The quickest way to get started is to run the ``aws configure`` command::
$ aws configure
AWS Access Key ID: foo
AWS Secret Access Key: bar
Default region name [us-west-2]: us-west-2
Default output format [None]: json
To use environment variables, do the following::
$ export AWS_ACCESS_KEY_ID=
$ export AWS_SECRET_ACCESS_KEY=
To use a config file, create a configuration file like this::
[default]
aws_access_key_id=
aws_secret_access_key=
# Optional, to define default region for this profile.
region=us-west-1
[profile testing]
aws_access_key_id=
aws_secret_access_key=
region=us-west-2
and place it in ``~/.aws/config`` (or in ``%UserProfile%\.aws\config`` on Windows).
As you can see, you can have multiple ``profiles`` defined in this
configuration file and specify which profile to use by using the ``--profile``
option. If no profile is specified the ``default`` profile is used. Except
for the default profile, you **must** prefix each config section of a profile
group with ``profile``. For example, if you have a profile named "testing" the
section header would be ``[profile testing]``.
If you wish to place the config file in a different location than the one
specified above, you need to tell aws-cli where to find it. Do this by setting
the appropriate environment variable::
$ export AWS_CONFIG_FILE=/path/to/config_file
The final option for credentials is highly recommended if you are
using aws-cli on an EC2 instance. IAM Roles are
a great way to have credentials installed automatically on your
instance. If you are using IAM Roles, aws-cli will find them and use
them automatically.
----------------------------
Other Configurable Variables
----------------------------
In addition to credentials, a number of other variables can be
configured either with environment variables, configuration file
entries or both. The following table documents these.
=========== =========== ===================== ===================== ============================
Variable Option Config Entry Environment Variable Description
=========== =========== ===================== ===================== ============================
profile --profile profile AWS_DEFAULT_PROFILE Default profile name
----------- ----------- --------------------- --------------------- ----------------------------
region --region region AWS_DEFAULT_REGION Default AWS Region
----------- ----------- --------------------- --------------------- ----------------------------
config_file AWS_CONFIG_FILE Alternate location of config
----------- ----------- --------------------- --------------------- ----------------------------
output --output output AWS_DEFAULT_OUTPUT Default output style
----------- ----------- --------------------- --------------------- ----------------------------
ca_bundle --ca-bundle ca_bundle AWS_CA_BUNDLE CA Certificate Bundle
----------- ----------- --------------------- --------------------- ----------------------------
access_key aws_access_key_id AWS_ACCESS_KEY_ID AWS Access Key
----------- ----------- --------------------- --------------------- ----------------------------
secret_key aws_secret_access_key AWS_SECRET_ACCESS_KEY AWS Secret Key
----------- ----------- --------------------- --------------------- ----------------------------
token aws_session_token AWS_SESSION_TOKEN AWS Token (temp credentials)
=========== =========== ===================== ===================== ============================
^^^^^^^^
Examples
^^^^^^^^
If you get tired of specifying a ``--region`` option on the command line
all of the time, you can specify a default region to use whenever no
explicit ``--region`` option is included using the ``region`` variable.
To specify this using an environment variable::
$ export AWS_DEFAULT_REGION=us-west-2
To include it in your config file::
[default]
aws_access_key_id=
aws_secret_access_key=
region=us-west-1
Similarly, the ``profile`` variable can be used to specify which profile to use
if one is not explicitly specified on the command line via the
``--profile`` option. To set this via environment variable::
$ export AWS_DEFAULT_PROFILE=testing
The ``profile`` variable cannot be specified in the configuration file,
since it would have to be associated with a profile, which would defeat
the purpose.
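The lookup order these variables follow (an explicit command-line option wins, then the environment variable, then the config-file entry) can be sketched in a few lines of Python. This is a simplified model for illustration, not the CLI's actual resolver, and the helper name ``resolve`` is invented::

    import os

    def resolve(option_value, env_name, config, config_key):
        # Simplified precedence model: option > environment > config file.
        if option_value is not None:
            return option_value
        if env_name in os.environ:
            return os.environ[env_name]
        return config.get(config_key)

    config = {"region": "us-west-1"}
    os.environ["AWS_DEFAULT_REGION"] = "us-west-2"

    # Environment variable overrides the config entry:
    print(resolve(None, "AWS_DEFAULT_REGION", config, "region"))        # us-west-2
    # An explicit option overrides both:
    print(resolve("eu-west-1", "AWS_DEFAULT_REGION", config, "region")) # eu-west-1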
----------------------------------------
Accessing Services With Global Endpoints
----------------------------------------
Some services, such as AWS Identity and Access Management (IAM)
have a single, global endpoint rather than different endpoints for
each region.
To make access to these services simpler, aws-cli will automatically
use the global endpoint unless you explicitly supply a region (using
the ``--region`` option) or a profile (using the ``--profile`` option).
Therefore, the following::
$ aws iam list-users
will automatically use the global endpoint for the IAM service
regardless of the value of the ``AWS_DEFAULT_REGION`` environment
variable or the ``region`` variable specified in your profile.
--------------------
JSON Parameter Input
--------------------
Many options that need to be provided are simple string or numeric
values. However, some operations require JSON data structures
as input parameters either on the command line or in files.
For example, consider the command to authorize access to an EC2
security group. In this case, we will add ingress access to port 22
for all IP addresses::
$ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \
--ip-permissions '{"FromPort":22,"ToPort":22,"IpProtocol":"tcp","IpRanges":[{"CidrIp": "0.0.0.0/0"}]}'
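Quoting nested JSON by hand on the command line is error-prone. One way to avoid shell-quoting mistakes is to generate the argument with a short Python script first (a sketch using only the standard library; the structure shown matches the ingress rule above)::

    import json

    # Build the ingress rule as a Python dict, then serialize it.
    rule = {
        "FromPort": 22,
        "ToPort": 22,
        "IpProtocol": "tcp",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }
    print(json.dumps(rule))

The printed string can then be pasted (inside single quotes) as the value of ``--ip-permissions``.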
--------------------------
File-based Parameter Input
--------------------------
Some parameter values are so large or so complex that it would be easier
to place the parameter value in a file and refer to that file rather than
entering the value directly on the command line.
Let's use the ``authorize-security-group-ingress`` command shown above.
Rather than provide the value of the ``--ip-permissions`` parameter directly
in the command, you could first store the values in a file. Let's call
the file ip_perms.json::
{"FromPort":22,
"ToPort":22,
"IpProtocol":"tcp",
"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}
Then, we could make the same call as above like this::
$ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \
--ip-permissions file://ip_perms.json
The ``file://`` prefix on the parameter value signals that the parameter value
is actually a reference to a file that contains the actual parameter value.
aws-cli will open the file, read the value, and use that value as the
parameter value.
This is also useful when the parameter really refers to file-based data,
for example the ``--user-data`` option of the ``aws ec2 run-instances``
command or the ``--public-key-material`` parameter of the
``aws ec2 import-key-pair`` command.
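The behavior described above can be modeled in a few lines of Python. This is a hedged sketch of the idea, not the CLI's actual implementation, and the helper name ``resolve_param`` is hypothetical::

    import tempfile

    def resolve_param(value):
        # Hypothetical helper: if the value starts with file://,
        # return the file's contents; otherwise return it unchanged.
        if value.startswith("file://"):
            with open(value[len("file://"):]) as f:
                return f.read()
        return value

    # Demonstrate with a temporary file standing in for ip_perms.json.
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        f.write('{"FromPort": 22}')
        path = f.name

    print(resolve_param("file://" + path))  # contents of the file
    print(resolve_param("plain-value"))     # unchanged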
-------------------------
URI-based Parameter Input
-------------------------
Similar to the file-based input described above, aws-cli also includes a
way to use data from a URI as the value of a parameter. The idea is exactly
the same except the prefix used is ``https://`` or ``http://``::
$ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \
--ip-permissions http://mybucket.s3.amazonaws.com/ip_perms.json
--------------
Command Output
--------------
The default output for commands is currently JSON. You can use the
``--query`` option to extract the output elements from this JSON document.
For more information on the expression language used for the ``--query``
argument, you can read the
`JMESPath Tutorial `__.
^^^^^^^^
Examples
^^^^^^^^
Get a list of IAM user names::
$ aws iam list-users --query Users[].UserName
Get a list of key names and their sizes in an S3 bucket::
$ aws s3api list-objects --bucket b --query Contents[].[Key,Size]
Get a list of all EC2 instances and include their Instance ID, State Name,
and their Name (if they've been tagged with a Name)::
$ aws ec2 describe-instances --query \
'Reservations[].Instances[].[InstanceId,State.Name,Tags[?Key==`Name`] | [0].Value]'
You may also find the `jq `_ tool useful in
processing the JSON output for other uses.
There is also an ASCII table format available. You can select this style with
the ``--output table`` option or you can make this style your default output
style via environment variable or config file entry as described above.
Try adding ``--output table`` to the above commands.
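To see what a query like ``Users[].UserName`` does, the same projection can be reproduced on sample output with a plain list comprehension (a sketch using only the standard library; the sample data is invented for illustration)::

    import json

    # Invented sample resembling `aws iam list-users` output.
    doc = json.loads('{"Users": [{"UserName": "alice"}, {"UserName": "bob"}]}')

    # Equivalent of --query 'Users[].UserName'
    print([u["UserName"] for u in doc["Users"]])  # ['alice', 'bob']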
---------------
CLI Dev Version
---------------
If you are just interested in using the latest released version of the AWS CLI,
please see the "Installation" section above. This section is for anyone that
wants to install the development version of the CLI. You normally would not
need to do this unless:
* You are developing a feature for the CLI and plan on submitting a Pull
Request.
* You want to test the latest changes of the CLI before they make it into an
official release.
The latest changes to the CLI are in the ``develop`` branch on github. This is
the default branch when you clone the git repository.
Additionally, there are several other packages that are developed in tandem
with the CLI. This includes:
* `botocore `__
* `jmespath `__
If you just want to install a snapshot of the latest development version of
the CLI, you can use the ``requirements.txt`` file included in this repo.
This file points to the development version of the above packages::
cd
pip install -r requirements.txt
pip install -e .
However, to keep up to date, you will need to re-run
``pip install -r requirements.txt`` periodically to pull in the latest
changes from the develop branches of botocore, jmespath, etc.
You can optionally clone each of those repositories and run "pip install -e ."
for each repository::
git clone && cd jmespath/
pip install -e . && cd ..
git clone && cd botocore/
pip install -e . && cd ..
git clone && cd aws-cli/
pip install -e .
.. _`Amazon Web Services Security Bulletins website`: https://aws.amazon.com/security/security-bulletins
.. _pip: http://www.pip-installer.org/en/latest/
.. _`download the tarball`: https://pypi.python.org/pypi/awscli
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: Natural Language :: English
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
LICENSE.txt
MANIFEST.in
README.rst
requirements.txt
setup.cfg
setup.py
awscli/__init__.py
awscli/argparser.py
awscli/argprocess.py
awscli/arguments.py
awscli/clidocs.py
awscli/clidriver.py
awscli/compat.py
awscli/completer.py
awscli/errorhandler.py
awscli/formatter.py
awscli/handlers.py
awscli/help.py
awscli/paramfile.py
awscli/plugin.py
awscli/schema.py
awscli/shorthand.py
awscli/table.py
awscli/testutils.py
awscli/text.py
awscli/topictags.py
awscli/utils.py
awscli.egg-info/PKG-INFO
awscli.egg-info/SOURCES.txt
awscli.egg-info/dependency_links.txt
awscli.egg-info/requires.txt
awscli.egg-info/top_level.txt
awscli/customizations/__init__.py
awscli/customizations/addexamples.py
awscli/customizations/argrename.py
awscli/customizations/arguments.py
awscli/customizations/assumerole.py
awscli/customizations/awslambda.py
awscli/customizations/cliinputjson.py
awscli/customizations/cloudfront.py
awscli/customizations/cloudsearch.py
awscli/customizations/cloudsearchdomain.py
awscli/customizations/codecommit.py
awscli/customizations/commands.py
awscli/customizations/ec2addcount.py
awscli/customizations/ec2bundleinstance.py
awscli/customizations/ec2decryptpassword.py
awscli/customizations/ec2protocolarg.py
awscli/customizations/ec2runinstances.py
awscli/customizations/ec2secgroupsimplify.py
awscli/customizations/ecr.py
awscli/customizations/flatten.py
awscli/customizations/generatecliskeleton.py
awscli/customizations/globalargs.py
awscli/customizations/iamvirtmfa.py
awscli/customizations/iot.py
awscli/customizations/iot_data.py
awscli/customizations/kms.py
awscli/customizations/opsworks.py
awscli/customizations/paginate.py
awscli/customizations/preview.py
awscli/customizations/putmetricdata.py
awscli/customizations/rds.py
awscli/customizations/removals.py
awscli/customizations/route53.py
awscli/customizations/s3endpoint.py
awscli/customizations/s3errormsg.py
awscli/customizations/scalarparse.py
awscli/customizations/sessendemail.py
awscli/customizations/streamingoutputarg.py
awscli/customizations/toplevelbool.py
awscli/customizations/utils.py
awscli/customizations/waiters.py
awscli/customizations/cloudtrail/__init__.py
awscli/customizations/cloudtrail/subscribe.py
awscli/customizations/cloudtrail/utils.py
awscli/customizations/cloudtrail/validation.py
awscli/customizations/codedeploy/__init__.py
awscli/customizations/codedeploy/codedeploy.py
awscli/customizations/codedeploy/deregister.py
awscli/customizations/codedeploy/install.py
awscli/customizations/codedeploy/locationargs.py
awscli/customizations/codedeploy/push.py
awscli/customizations/codedeploy/register.py
awscli/customizations/codedeploy/systems.py
awscli/customizations/codedeploy/uninstall.py
awscli/customizations/codedeploy/utils.py
awscli/customizations/configservice/__init__.py
awscli/customizations/configservice/getstatus.py
awscli/customizations/configservice/putconfigurationrecorder.py
awscli/customizations/configservice/rename_cmd.py
awscli/customizations/configservice/subscribe.py
awscli/customizations/configure/__init__.py
awscli/customizations/configure/addmodel.py
awscli/customizations/datapipeline/__init__.py
awscli/customizations/datapipeline/constants.py
awscli/customizations/datapipeline/createdefaultroles.py
awscli/customizations/datapipeline/listrunsformatter.py
awscli/customizations/datapipeline/translator.py
awscli/customizations/emr/__init__.py
awscli/customizations/emr/addinstancegroups.py
awscli/customizations/emr/addsteps.py
awscli/customizations/emr/addtags.py
awscli/customizations/emr/applicationutils.py
awscli/customizations/emr/argumentschema.py
awscli/customizations/emr/command.py
awscli/customizations/emr/config.py
awscli/customizations/emr/configutils.py
awscli/customizations/emr/constants.py
awscli/customizations/emr/createcluster.py
awscli/customizations/emr/createdefaultroles.py
awscli/customizations/emr/describecluster.py
awscli/customizations/emr/emr.py
awscli/customizations/emr/emrfsutils.py
awscli/customizations/emr/emrutils.py
awscli/customizations/emr/exceptions.py
awscli/customizations/emr/hbase.py
awscli/customizations/emr/hbaseutils.py
awscli/customizations/emr/helptext.py
awscli/customizations/emr/installapplications.py
awscli/customizations/emr/instancegroupsutils.py
awscli/customizations/emr/listclusters.py
awscli/customizations/emr/modifyclusterattributes.py
awscli/customizations/emr/ssh.py
awscli/customizations/emr/sshutils.py
awscli/customizations/emr/steputils.py
awscli/customizations/emr/terminateclusters.py
awscli/customizations/s3/__init__.py
awscli/customizations/s3/comparator.py
awscli/customizations/s3/executor.py
awscli/customizations/s3/fileformat.py
awscli/customizations/s3/filegenerator.py
awscli/customizations/s3/fileinfo.py
awscli/customizations/s3/fileinfobuilder.py
awscli/customizations/s3/filters.py
awscli/customizations/s3/s3.py
awscli/customizations/s3/s3handler.py
awscli/customizations/s3/subcommands.py
awscli/customizations/s3/tasks.py
awscli/customizations/s3/transferconfig.py
awscli/customizations/s3/utils.py
awscli/customizations/s3/syncstrategy/__init__.py
awscli/customizations/s3/syncstrategy/base.py
awscli/customizations/s3/syncstrategy/delete.py
awscli/customizations/s3/syncstrategy/exacttimestamps.py
awscli/customizations/s3/syncstrategy/register.py
awscli/customizations/s3/syncstrategy/sizeonly.py
awscli/data/cli.json
awscli/examples/acm/delete-certificate.rst
awscli/examples/acm/describe-certificate.rst
awscli/examples/acm/get-certificate.rst
awscli/examples/acm/list-certificates.rst
awscli/examples/acm/request-certificate.rst
awscli/examples/acm/resend-validation-email.rst
awscli/examples/autoscaling/attach-instances.rst
awscli/examples/autoscaling/attach-load-balancers.rst
awscli/examples/autoscaling/complete-lifecycle-action.rst
awscli/examples/autoscaling/create-auto-scaling-group.rst
awscli/examples/autoscaling/create-launch-configuration.rst
awscli/examples/autoscaling/create-or-update-tags.rst
awscli/examples/autoscaling/delete-auto-scaling-group.rst
awscli/examples/autoscaling/delete-launch-configuration.rst
awscli/examples/autoscaling/delete-lifecycle-hook.rst
awscli/examples/autoscaling/delete-notification-configuration.rst
awscli/examples/autoscaling/delete-policy.rst
awscli/examples/autoscaling/delete-scheduled-action.rst
awscli/examples/autoscaling/delete-tags.rst
awscli/examples/autoscaling/describe-account-limits.rst
awscli/examples/autoscaling/describe-adjustment-types.rst
awscli/examples/autoscaling/describe-auto-scaling-groups.rst
awscli/examples/autoscaling/describe-auto-scaling-instances.rst
awscli/examples/autoscaling/describe-auto-scaling-notification-types.rst
awscli/examples/autoscaling/describe-launch-configurations.rst
awscli/examples/autoscaling/describe-lifecycle-hook-types.rst
awscli/examples/autoscaling/describe-lifecycle-hooks.rst
awscli/examples/autoscaling/describe-load-balancers.rst
awscli/examples/autoscaling/describe-metric-collection-types.rst
awscli/examples/autoscaling/describe-notification-configurations.rst
awscli/examples/autoscaling/describe-policies.rst
awscli/examples/autoscaling/describe-scaling-activities.rst
awscli/examples/autoscaling/describe-scaling-process-types.rst
awscli/examples/autoscaling/describe-scheduled-actions.rst
awscli/examples/autoscaling/describe-tags.rst
awscli/examples/autoscaling/describe-termination-policy-types.rst
awscli/examples/autoscaling/detach-instances.rst
awscli/examples/autoscaling/detach-load-balancers.rst
awscli/examples/autoscaling/disable-metrics-collection.rst
awscli/examples/autoscaling/enable-metrics-collection.rst
awscli/examples/autoscaling/enter-standby.rst
awscli/examples/autoscaling/execute-policy.rst
awscli/examples/autoscaling/exit-standby.rst
awscli/examples/autoscaling/put-lifecycle-hook.rst
awscli/examples/autoscaling/put-notification-configuration.rst
awscli/examples/autoscaling/put-scaling-policy.rst
awscli/examples/autoscaling/put-scheduled-update-group-action.rst
awscli/examples/autoscaling/record-lifecycle-action-heartbeat.rst
awscli/examples/autoscaling/resume-processes.rst
awscli/examples/autoscaling/set-desired-capacity.rst
awscli/examples/autoscaling/set-instance-health.rst
awscli/examples/autoscaling/set-instance-protection.rst
awscli/examples/autoscaling/suspend-processes.rst
awscli/examples/autoscaling/terminate-instance-in-auto-scaling-group.rst
awscli/examples/autoscaling/update-auto-scaling-group.rst
awscli/examples/cloudformation/cancel-update-stack.rst
awscli/examples/cloudformation/create-stack.rst
awscli/examples/cloudformation/describe-stacks.rst
awscli/examples/cloudformation/get-template.rst
awscli/examples/cloudformation/list-stacks.rst
awscli/examples/cloudformation/update-stack.rst
awscli/examples/cloudformation/validate-template.rst
awscli/examples/cloudfront/create-distribution.rst
awscli/examples/cloudfront/create-invalidation.rst
awscli/examples/cloudfront/delete-distribution.rst
awscli/examples/cloudfront/get-distribution-config.rst
awscli/examples/cloudfront/get-distribution.rst
awscli/examples/cloudfront/get-invalidation.rst
awscli/examples/cloudfront/list-distributions.rst
awscli/examples/cloudfront/list-invalidations.rst
awscli/examples/cloudfront/update-distribution.rst
awscli/examples/cloudwatch/delete-alarms.rst
awscli/examples/cloudwatch/describe-alarm-history.rst
awscli/examples/cloudwatch/describe-alarms-for-metric.rst
awscli/examples/cloudwatch/describe-alarms.rst
awscli/examples/cloudwatch/disable-alarm-actions.rst
awscli/examples/cloudwatch/enable-alarm-actions.rst
awscli/examples/cloudwatch/get-metric-statistics.rst
awscli/examples/cloudwatch/list-metrics.rst
awscli/examples/cloudwatch/put-metric-alarm.rst
awscli/examples/cloudwatch/put-metric-data.rst
awscli/examples/cloudwatch/set-alarm-state.rst
awscli/examples/codecommit/batch-get-repositories.rst
awscli/examples/codecommit/create-branch.rst
awscli/examples/codecommit/create-repository.rst
awscli/examples/codecommit/delete-branch.rst
awscli/examples/codecommit/delete-repository.rst
awscli/examples/codecommit/get-branch.rst
awscli/examples/codecommit/get-repository.rst
awscli/examples/codecommit/list-branches.rst
awscli/examples/codecommit/list-repositories.rst
awscli/examples/codecommit/update-default-branch.rst
awscli/examples/codecommit/update-repository-description.rst
awscli/examples/codecommit/update-repository-name.rst
awscli/examples/codepipeline/acknowledge-job.rst
awscli/examples/codepipeline/create-custom-action-type.rst
awscli/examples/codepipeline/create-pipeline.rst
awscli/examples/codepipeline/delete-custom-action-type.rst
awscli/examples/codepipeline/delete-pipeline.rst
awscli/examples/codepipeline/disable-stage-transition.rst
awscli/examples/codepipeline/enable-stage-transition.rst
awscli/examples/codepipeline/get-job-details.rst
awscli/examples/codepipeline/get-pipeline-state.rst
awscli/examples/codepipeline/get-pipeline.rst
awscli/examples/codepipeline/list-action-types.rst
awscli/examples/codepipeline/list-pipelines.rst
awscli/examples/codepipeline/poll-for-jobs.rst
awscli/examples/codepipeline/start-pipeline-execution.rst
awscli/examples/codepipeline/update-pipeline.rst
awscli/examples/configservice/delete-config-rule.rst
awscli/examples/configservice/delete-delivery-channel.rst
awscli/examples/configservice/deliver-config-snapshot.rst
awscli/examples/configservice/describe-compliance-by-config-rule.rst
awscli/examples/configservice/describe-compliance-by-resource.rst
awscli/examples/configservice/describe-config-rule-evaluation-status.rst
awscli/examples/configservice/describe-config-rules.rst
awscli/examples/configservice/describe-configuration-recorder-status.rst
awscli/examples/configservice/describe-configuration-recorders.rst
awscli/examples/configservice/describe-delivery-channel-status.rst
awscli/examples/configservice/describe-delivery-channels.rst
awscli/examples/configservice/get-compliance-details-by-config-rule.rst
awscli/examples/configservice/get-compliance-details-by-resource.rst
awscli/examples/configservice/get-compliance-summary-by-config-rule.rst
awscli/examples/configservice/get-compliance-summary-by-resource-type.rst
awscli/examples/configservice/get-resource-config-history.rst
awscli/examples/configservice/get-status.rst
awscli/examples/configservice/list-discovered-resources.rst
awscli/examples/configservice/put-config-rule.rst
awscli/examples/configservice/put-configuration-recorder.rst
awscli/examples/configservice/put-delivery-channel.rst
awscli/examples/configservice/start-configuration-recorder.rst
awscli/examples/configservice/stop-configuration-recorder.rst
awscli/examples/configservice/subscribe.rst
awscli/examples/configure/_description.rst
awscli/examples/configure/get/_description.rst
awscli/examples/configure/get/_examples.rst
awscli/examples/configure/set/_description.rst
awscli/examples/configure/set/_examples.rst
awscli/examples/datapipeline/activate-pipeline.rst
awscli/examples/datapipeline/add-tags.rst
awscli/examples/datapipeline/create-pipeline.rst
awscli/examples/datapipeline/deactivate-pipeline.rst
awscli/examples/datapipeline/delete-pipeline.rst
awscli/examples/datapipeline/describe-pipelines.rst
awscli/examples/datapipeline/get-pipeline-definition.rst
awscli/examples/datapipeline/list-pipelines.rst
awscli/examples/datapipeline/list-runs.rst
awscli/examples/datapipeline/put-pipeline-definition.rst
awscli/examples/datapipeline/remove-tags.rst
awscli/examples/deploy/add-tags-to-on-premises-instances.rst
awscli/examples/deploy/batch-get-applications.rst
awscli/examples/deploy/batch-get-deployments.rst
awscli/examples/deploy/batch-get-on-premises-instances.rst
awscli/examples/deploy/create-application.rst
awscli/examples/deploy/create-deployment-config.rst
awscli/examples/deploy/create-deployment-group.rst
awscli/examples/deploy/create-deployment.rst
awscli/examples/deploy/delete-application.rst
awscli/examples/deploy/delete-deployment-config.rst
awscli/examples/deploy/delete-deployment-group.rst
awscli/examples/deploy/deregister-on-premises-instance.rst
awscli/examples/deploy/deregister.rst
awscli/examples/deploy/get-application-revision.rst
awscli/examples/deploy/get-application.rst
awscli/examples/deploy/get-deployment-config.rst
awscli/examples/deploy/get-deployment-group.rst
awscli/examples/deploy/get-deployment-instance.rst
awscli/examples/deploy/get-deployment.rst
awscli/examples/deploy/get-on-premises-instance.rst
awscli/examples/deploy/install.rst
awscli/examples/deploy/list-application-revisions.rst
awscli/examples/deploy/list-applications.rst
awscli/examples/deploy/list-deployment-configs.rst
awscli/examples/deploy/list-deployment-groups.rst
awscli/examples/deploy/list-deployment-instances.rst
awscli/examples/deploy/list-deployments.rst
awscli/examples/deploy/list-on-premises-instances.rst
awscli/examples/deploy/push.rst
awscli/examples/deploy/register-application-revision.rst
awscli/examples/deploy/register-on-premises-instance.rst
awscli/examples/deploy/register.rst
awscli/examples/deploy/remove-tags-from-on-premises-instances.rst
awscli/examples/deploy/stop-deployment.rst
awscli/examples/deploy/uninstall.rst
awscli/examples/deploy/update-application.rst
awscli/examples/deploy/update-deployment-group.rst
awscli/examples/dynamodb/batch-get-item.rst
awscli/examples/dynamodb/batch-write-item.rst
awscli/examples/dynamodb/create-table.rst
awscli/examples/dynamodb/delete-item.rst
awscli/examples/dynamodb/delete-table.rst
awscli/examples/dynamodb/describe-table.rst
awscli/examples/dynamodb/get-item.rst
awscli/examples/dynamodb/list-tables.rst
awscli/examples/dynamodb/put-item.rst
awscli/examples/dynamodb/query.rst
awscli/examples/dynamodb/scan.rst
awscli/examples/dynamodb/update-item.rst
awscli/examples/dynamodb/update-table.rst
awscli/examples/ec2/accept-vpc-peering-connection.rst
awscli/examples/ec2/allocate-address.rst
awscli/examples/ec2/allocate-hosts.rst
awscli/examples/ec2/assign-private-ip-addresses.rst
awscli/examples/ec2/associate-address.rst
awscli/examples/ec2/associate-dhcp-options.rst
awscli/examples/ec2/associate-route-table.rst
awscli/examples/ec2/attach-classic-link-vpc.rst
awscli/examples/ec2/attach-internet-gateway.rst
awscli/examples/ec2/attach-network-interface.rst
awscli/examples/ec2/attach-volume.rst
awscli/examples/ec2/attach-vpn-gateway.rst
awscli/examples/ec2/authorize-security-group-egress.rst
awscli/examples/ec2/authorize-security-group-ingress.rst
awscli/examples/ec2/bundle-instance.rst
awscli/examples/ec2/cancel-bundle-task.rst
awscli/examples/ec2/cancel-conversion-task.rst
awscli/examples/ec2/cancel-export-task.rst
awscli/examples/ec2/cancel-spot-fleet-requests.rst
awscli/examples/ec2/cancel-spot-instance-requests.rst
awscli/examples/ec2/confirm-product-instance.rst
awscli/examples/ec2/copy-image.rst
awscli/examples/ec2/copy-snapshot.rst
awscli/examples/ec2/create-customer-gateway.rst
awscli/examples/ec2/create-dhcp-options.rst
awscli/examples/ec2/create-flow-logs.rst
awscli/examples/ec2/create-image.rst
awscli/examples/ec2/create-instance-export-task.rst
awscli/examples/ec2/create-internet-gateway.rst
awscli/examples/ec2/create-key-pair.rst
awscli/examples/ec2/create-nat-gateway.rst
awscli/examples/ec2/create-network-acl-entry.rst
awscli/examples/ec2/create-network-acl.rst
awscli/examples/ec2/create-network-interface.rst
awscli/examples/ec2/create-placement-group.rst
awscli/examples/ec2/create-route-table.rst
awscli/examples/ec2/create-route.rst
awscli/examples/ec2/create-security-group.rst
awscli/examples/ec2/create-snapshot.rst
awscli/examples/ec2/create-spot-datafeed-subscription.rst
awscli/examples/ec2/create-subnet.rst
awscli/examples/ec2/create-tags.rst
awscli/examples/ec2/create-volume.rst
awscli/examples/ec2/create-vpc-endpoint.rst
awscli/examples/ec2/create-vpc-peering-connection.rst
awscli/examples/ec2/create-vpc.rst
awscli/examples/ec2/create-vpn-connection-route.rst
awscli/examples/ec2/create-vpn-connection.rst
awscli/examples/ec2/create-vpn-gateway.rst
awscli/examples/ec2/delete-customer-gateway.rst
awscli/examples/ec2/delete-dhcp-options.rst
awscli/examples/ec2/delete-flow-logs.rst
awscli/examples/ec2/delete-internet-gateway.rst
awscli/examples/ec2/delete-key-pair.rst
awscli/examples/ec2/delete-nat-gateway.rst
awscli/examples/ec2/delete-network-acl-entry.rst
awscli/examples/ec2/delete-network-acl.rst
awscli/examples/ec2/delete-network-interface.rst
awscli/examples/ec2/delete-placement-group.rst
awscli/examples/ec2/delete-route-table.rst
awscli/examples/ec2/delete-route.rst
awscli/examples/ec2/delete-security-group.rst
awscli/examples/ec2/delete-snapshot.rst
awscli/examples/ec2/delete-spot-datafeed-subscription.rst
awscli/examples/ec2/delete-subnet.rst
awscli/examples/ec2/delete-tags.rst
awscli/examples/ec2/delete-volume.rst
awscli/examples/ec2/delete-vpc-endpoints.rst
awscli/examples/ec2/delete-vpc-peering-connection.rst
awscli/examples/ec2/delete-vpc.rst
awscli/examples/ec2/delete-vpn-connection-route.rst
awscli/examples/ec2/delete-vpn-connection.rst
awscli/examples/ec2/delete-vpn-gateway.rst
awscli/examples/ec2/deregister-image.rst
awscli/examples/ec2/describe-account-attributes.rst
awscli/examples/ec2/describe-addresses.rst
awscli/examples/ec2/describe-availability-zones.rst
awscli/examples/ec2/describe-bundle-tasks.rst
awscli/examples/ec2/describe-classic-link-instances.rst
awscli/examples/ec2/describe-conversion-tasks.rst
awscli/examples/ec2/describe-customer-gateways.rst
awscli/examples/ec2/describe-dhcp-options.rst
awscli/examples/ec2/describe-export-tasks.rst
awscli/examples/ec2/describe-flow-logs.rst
awscli/examples/ec2/describe-hosts.rst
awscli/examples/ec2/describe-id-format.rst
awscli/examples/ec2/describe-image-attribute.rst
awscli/examples/ec2/describe-images.rst
awscli/examples/ec2/describe-instance-attribute.rst
awscli/examples/ec2/describe-instance-status.rst
awscli/examples/ec2/describe-instances.rst
awscli/examples/ec2/describe-internet-gateways.rst
awscli/examples/ec2/describe-key-pairs.rst
awscli/examples/ec2/describe-moving-addresses.rst
awscli/examples/ec2/describe-nat-gateways.rst
awscli/examples/ec2/describe-network-acls.rst
awscli/examples/ec2/describe-network-interface-attribute.rst
awscli/examples/ec2/describe-network-interfaces.rst
awscli/examples/ec2/describe-placement-groups.rst
awscli/examples/ec2/describe-prefix-lists.rst
awscli/examples/ec2/describe-regions.rst
awscli/examples/ec2/describe-reserved-instances-modifications.rst
awscli/examples/ec2/describe-reserved-instances-offerings.rst
awscli/examples/ec2/describe-reserved-instances.rst
awscli/examples/ec2/describe-route-tables.rst
awscli/examples/ec2/describe-security-groups.rst
awscli/examples/ec2/describe-snapshot-attribute.rst
awscli/examples/ec2/describe-snapshots.rst
awscli/examples/ec2/describe-spot-datafeed-subscription.rst
awscli/examples/ec2/describe-spot-fleet-instances.rst
awscli/examples/ec2/describe-spot-fleet-request-history.rst
awscli/examples/ec2/describe-spot-fleet-requests.rst
awscli/examples/ec2/describe-spot-instance-requests.rst
awscli/examples/ec2/describe-spot-price-history.rst
awscli/examples/ec2/describe-subnets.rst
awscli/examples/ec2/describe-tags.rst
awscli/examples/ec2/describe-volume-attribute.rst
awscli/examples/ec2/describe-volume-status.rst
awscli/examples/ec2/describe-volumes.rst
awscli/examples/ec2/describe-vpc-attribute.rst
awscli/examples/ec2/describe-vpc-classic-link-dns-support.rst
awscli/examples/ec2/describe-vpc-classic-link.rst
awscli/examples/ec2/describe-vpc-endpoint-services.rst
awscli/examples/ec2/describe-vpc-endpoints.rst
awscli/examples/ec2/describe-vpc-peering-connections.rst
awscli/examples/ec2/describe-vpcs.rst
awscli/examples/ec2/describe-vpn-connections.rst
awscli/examples/ec2/describe-vpn-gateways.rst
awscli/examples/ec2/detach-classic-link-vpc.rst
awscli/examples/ec2/detach-internet-gateway.rst
awscli/examples/ec2/detach-network-interface.rst
awscli/examples/ec2/detach-volume.rst
awscli/examples/ec2/detach-vpn-gateway.rst
awscli/examples/ec2/disable-vgw-route-propagation.rst
awscli/examples/ec2/disable-vpc-classic-link-dns-support.rst
awscli/examples/ec2/disable-vpc-classic-link.rst
awscli/examples/ec2/disassociate-address.rst
awscli/examples/ec2/disassociate-route-table.rst
awscli/examples/ec2/enable-vgw-route-propagation.rst
awscli/examples/ec2/enable-vpc-classic-link-dns-support.rst
awscli/examples/ec2/enable-vpc-classic-link.rst
awscli/examples/ec2/get-console-output.rst
awscli/examples/ec2/get-password-data.rst
awscli/examples/ec2/import-key-pair.rst
awscli/examples/ec2/modify-hosts.rst
awscli/examples/ec2/modify-id-format.rst
awscli/examples/ec2/modify-image-attribute.rst
awscli/examples/ec2/modify-instance-attribute.rst
awscli/examples/ec2/modify-instance-placement.rst
awscli/examples/ec2/modify-network-interface-attribute.rst
awscli/examples/ec2/modify-reserved-instances.rst
awscli/examples/ec2/modify-snapshot-attribute.rst
awscli/examples/ec2/modify-spot-fleet-request.rst
awscli/examples/ec2/modify-subnet-attribute.rst
awscli/examples/ec2/modify-volume-attribute.rst
awscli/examples/ec2/modify-vpc-attribute.rst
awscli/examples/ec2/modify-vpc-endpoint.rst
awscli/examples/ec2/monitor-instances.rst
awscli/examples/ec2/move-address-to-vpc.rst
awscli/examples/ec2/purchase-reserved-instances-offering.rst
awscli/examples/ec2/reboot-instances.rst
awscli/examples/ec2/register-image.rst
awscli/examples/ec2/reject-vpc-peering-connection.rst
awscli/examples/ec2/release-address.rst
awscli/examples/ec2/release-hosts.rst
awscli/examples/ec2/replace-network-acl-association.rst
awscli/examples/ec2/replace-network-acl-entry.rst
awscli/examples/ec2/replace-route-table-association.rst
awscli/examples/ec2/replace-route.rst
awscli/examples/ec2/report-instance-status.rst
awscli/examples/ec2/request-spot-fleet.rst
awscli/examples/ec2/request-spot-instances.rst
awscli/examples/ec2/reset-image-attribute.rst
awscli/examples/ec2/reset-instance-attribute.rst
awscli/examples/ec2/reset-snapshot-attribute.rst
awscli/examples/ec2/restore-address-to-classic.rst
awscli/examples/ec2/revoke-security-group-egress.rst
awscli/examples/ec2/revoke-security-group-ingress.rst
awscli/examples/ec2/run-instances.rst
awscli/examples/ec2/start-instances.rst
awscli/examples/ec2/stop-instances.rst
awscli/examples/ec2/terminate-instances.rst
awscli/examples/ec2/unassign-private-ip-addresses.rst
awscli/examples/ec2/unmonitor-instances.rst
awscli/examples/ecr/batch-delete-image.rst
awscli/examples/ecr/batch-get-image.rst
awscli/examples/ecr/create-repository.rst
awscli/examples/ecr/delete-repository.rst
awscli/examples/ecr/describe-repositories.rst
awscli/examples/ecr/get-authorization-token.rst
awscli/examples/ecr/get-login.rst
awscli/examples/ecr/get-login_description.rst
awscli/examples/ecs/create-cluster.rst
awscli/examples/ecs/create-service.rst
awscli/examples/ecs/delete-cluster.rst
awscli/examples/ecs/delete-service.rst
awscli/examples/ecs/deregister-container-instance.rst
awscli/examples/ecs/deregister-task-definition.rst
awscli/examples/ecs/describe-clusters.rst
awscli/examples/ecs/describe-container-instances.rst
awscli/examples/ecs/describe-services.rst
awscli/examples/ecs/describe-task-definition.rst
awscli/examples/ecs/describe-tasks.rst
awscli/examples/ecs/list-clusters.rst
awscli/examples/ecs/list-container-instances.rst
awscli/examples/ecs/list-services.rst
awscli/examples/ecs/list-task-definition-families.rst
awscli/examples/ecs/list-task-definitions.rst
awscli/examples/ecs/list-tasks.rst
awscli/examples/ecs/register-task-definition.rst
awscli/examples/ecs/run-task.rst
awscli/examples/ecs/update-container-agent.rst
awscli/examples/ecs/update-service.rst
awscli/examples/elasticbeanstalk/abort-environment-update.rst
awscli/examples/elasticbeanstalk/check-dns-availability.rst
awscli/examples/elasticbeanstalk/create-application-version.rst
awscli/examples/elasticbeanstalk/create-application.rst
awscli/examples/elasticbeanstalk/create-configuration-template.rst
awscli/examples/elasticbeanstalk/create-environment.rst
awscli/examples/elasticbeanstalk/create-storage-location.rst
awscli/examples/elasticbeanstalk/delete-application-version.rst
awscli/examples/elasticbeanstalk/delete-application.rst
awscli/examples/elasticbeanstalk/delete-configuration-template.rst
awscli/examples/elasticbeanstalk/delete-environment-configuration.rst
awscli/examples/elasticbeanstalk/describe-application-versions.rst
awscli/examples/elasticbeanstalk/describe-applications.rst
awscli/examples/elasticbeanstalk/describe-configuration-options.rst
awscli/examples/elasticbeanstalk/describe-configuration-settings.rst
awscli/examples/elasticbeanstalk/describe-environment-health.rst
awscli/examples/elasticbeanstalk/describe-environment-resources.rst
awscli/examples/elasticbeanstalk/describe-environments.rst
awscli/examples/elasticbeanstalk/describe-events.rst
awscli/examples/elasticbeanstalk/describe-instances-health.rst
awscli/examples/elasticbeanstalk/list-available-solution-stacks.rst
awscli/examples/elasticbeanstalk/rebuild-environment.rst
awscli/examples/elasticbeanstalk/request-environment-info.rst
awscli/examples/elasticbeanstalk/restart-app-server.rst
awscli/examples/elasticbeanstalk/retrieve-environment-info.rst
awscli/examples/elasticbeanstalk/swap-environment-cnames.rst
awscli/examples/elasticbeanstalk/terminate-environment.rst
awscli/examples/elasticbeanstalk/update-application-version.rst
awscli/examples/elasticbeanstalk/update-application.rst
awscli/examples/elasticbeanstalk/update-configuration-template.rst
awscli/examples/elasticbeanstalk/update-environment.rst
awscli/examples/elasticbeanstalk/validate-configuration-settings.rst
awscli/examples/elb/add-tags.rst
awscli/examples/elb/apply-security-groups-to-load-balancer.rst
awscli/examples/elb/attach-load-balancer-to-subnets.rst
awscli/examples/elb/configure-health-check.rst
awscli/examples/elb/create-app-cookie-stickiness-policy.rst
awscli/examples/elb/create-lb-cookie-stickiness-policy.rst
awscli/examples/elb/create-load-balancer-listeners.rst
awscli/examples/elb/create-load-balancer-policy.rst
awscli/examples/elb/create-load-balancer.rst
awscli/examples/elb/delete-load-balancer-listeners.rst
awscli/examples/elb/delete-load-balancer-policy.rst
awscli/examples/elb/delete-load-balancer.rst
awscli/examples/elb/deregister-instances-from-load-balancer.rst
awscli/examples/elb/describe-instance-health.rst
awscli/examples/elb/describe-load-balancer-attributes.rst
awscli/examples/elb/describe-load-balancer-policies.rst
awscli/examples/elb/describe-load-balancer-policy-types.rst
awscli/examples/elb/describe-load-balancers.rst
awscli/examples/elb/describe-tags.rst
awscli/examples/elb/detach-load-balancer-from-subnets.rst
awscli/examples/elb/disable-availability-zones-for-load-balancer.rst
awscli/examples/elb/enable-availability-zones-for-load-balancer.rst
awscli/examples/elb/modify-load-balancer-attributes.rst
awscli/examples/elb/register-instances-with-load-balancer.rst
awscli/examples/elb/remove-tags.rst
awscli/examples/elb/set-load-balancer-listener-ssl-certificate.rst
awscli/examples/elb/set-load-balancer-policies-for-backend-server.rst
awscli/examples/elb/set-load-balancer-policies-of-listener.rst
awscli/examples/emr/add-steps.rst
awscli/examples/emr/add-tags.rst
awscli/examples/emr/create-cluster-examples.rst
awscli/examples/emr/create-cluster-synopsis.rst
awscli/examples/emr/create-default-roles.rst
awscli/examples/emr/describe-cluster.rst
awscli/examples/emr/describe-step.rst
awscli/examples/emr/get.rst
awscli/examples/emr/list-clusters.rst
awscli/examples/emr/list-instances.rst
awscli/examples/emr/list-steps.rst
awscli/examples/emr/modify-cluster-attributes.rst
awscli/examples/emr/put.rst
awscli/examples/emr/remove-tags.rst
awscli/examples/emr/schedule-hbase-backup.rst
awscli/examples/emr/socks.rst
awscli/examples/emr/ssh.rst
awscli/examples/emr/wait.rst
awscli/examples/glacier/abort-multipart-upload.rst
awscli/examples/glacier/add-tags-to-vault.rst
awscli/examples/glacier/complete-multipart-upload.rst
awscli/examples/glacier/create-vault.rst
awscli/examples/glacier/delete-vault.rst
awscli/examples/glacier/describe-job.rst
awscli/examples/glacier/describe-vault.rst
awscli/examples/glacier/get-data-retrieval-policy.rst
awscli/examples/glacier/get-job-output.rst
awscli/examples/glacier/get-vault-notifications.rst
awscli/examples/glacier/initiate-job.rst
awscli/examples/glacier/initiate-multipart-upload.rst
awscli/examples/glacier/list-jobs.rst
awscli/examples/glacier/list-multipart-uploads.rst
awscli/examples/glacier/list-parts.rst
awscli/examples/glacier/list-tags-for-vault.rst
awscli/examples/glacier/list-vaults.rst
awscli/examples/glacier/remove-tags-from-vault.rst
awscli/examples/glacier/set-data-retrieval-policy.rst
awscli/examples/glacier/set-vault-notifications.rst
awscli/examples/glacier/upload-archive.rst
awscli/examples/glacier/upload-multipart-part.rst
awscli/examples/iam/add-client-id-to-open-id-connect-provider.rst
awscli/examples/iam/add-role-to-instance-profile.rst
awscli/examples/iam/add-user-to-group.rst
awscli/examples/iam/attach-group-policy.rst
awscli/examples/iam/attach-role-policy.rst
awscli/examples/iam/attach-user-policy.rst
awscli/examples/iam/change-password.rst
awscli/examples/iam/create-access-key.rst
awscli/examples/iam/create-account-alias.rst
awscli/examples/iam/create-group.rst
awscli/examples/iam/create-instance-profile.rst
awscli/examples/iam/create-login-profile.rst
awscli/examples/iam/create-open-id-connect-provider.rst
awscli/examples/iam/create-policy-version.rst
awscli/examples/iam/create-policy.rst
awscli/examples/iam/create-role.rst
awscli/examples/iam/create-saml-provider.rst
awscli/examples/iam/create-user.rst
awscli/examples/iam/create-virtual-mfa-device.rst
awscli/examples/iam/deactivate-mfa-device.rst
awscli/examples/iam/delete-access-key.rst
awscli/examples/iam/delete-account-alias.rst
awscli/examples/iam/delete-account-password-policy.rst
awscli/examples/iam/delete-group-policy.rst
awscli/examples/iam/delete-group.rst
awscli/examples/iam/delete-instance-profile.rst
awscli/examples/iam/delete-login-profile.rst
awscli/examples/iam/delete-open-id-connect-provider.rst
awscli/examples/iam/delete-policy-version.rst
awscli/examples/iam/delete-policy.rst
awscli/examples/iam/delete-role-policy.rst
awscli/examples/iam/delete-role.rst
awscli/examples/iam/delete-saml-provider.rst
awscli/examples/iam/delete-signing-certificate.rst
awscli/examples/iam/delete-user-policy.rst
awscli/examples/iam/delete-user.rst
awscli/examples/iam/delete-virtual-mfa-device.rst
awscli/examples/iam/detach-group-policy.rst
awscli/examples/iam/detach-role-policy.rst
awscli/examples/iam/detach-user-policy.rst
awscli/examples/iam/enable-mfa-device.rst
awscli/examples/iam/generate-credential-report.rst
awscli/examples/iam/get-access-key-last-used.rst
awscli/examples/iam/get-account-authorization-details.rst
awscli/examples/iam/get-account-password-policy.rst
awscli/examples/iam/get-account-summary.rst
awscli/examples/iam/get-credential-report.rst
awscli/examples/iam/get-group-policy.rst
awscli/examples/iam/get-group.rst
awscli/examples/iam/get-instance-profile.rst
awscli/examples/iam/get-login-profile.rst
awscli/examples/iam/get-open-id-connect-provider.rst
awscli/examples/iam/get-policy-version.rst
awscli/examples/iam/get-policy.rst
awscli/examples/iam/get-role-policy.rst
awscli/examples/iam/get-role.rst
awscli/examples/iam/get-saml-provider.rst
awscli/examples/iam/get-user-policy.rst
awscli/examples/iam/get-user.rst
awscli/examples/iam/list-access-keys.rst
awscli/examples/iam/list-account-aliases.rst
awscli/examples/iam/list-attached-group-policies.rst
awscli/examples/iam/list-attached-role-policies.rst
awscli/examples/iam/list-attached-user-policies.rst
awscli/examples/iam/list-entities-for-policy.rst
awscli/examples/iam/list-group-policies.rst
awscli/examples/iam/list-groups-for-user.rst
awscli/examples/iam/list-groups.rst
awscli/examples/iam/list-instance-profiles-for-role.rst
awscli/examples/iam/list-instance-profiles.rst
awscli/examples/iam/list-mfa-devices.rst
awscli/examples/iam/list-open-id-connect-providers.rst
awscli/examples/iam/list-policies.rst
awscli/examples/iam/list-policy-versions.rst
awscli/examples/iam/list-role-policies.rst
awscli/examples/iam/list-roles.rst
awscli/examples/iam/list-saml-providers.rst
awscli/examples/iam/list-signing-certificates.rst
awscli/examples/iam/list-user-policies.rst
awscli/examples/iam/list-users.rst
awscli/examples/iam/list-virtual-mfa-devices.rst
awscli/examples/iam/put-group-policy.rst
awscli/examples/iam/put-role-policy.rst
awscli/examples/iam/put-user-policy.rst
awscli/examples/iam/remove-client-id-from-open-id-connect-provider.rst
awscli/examples/iam/remove-role-from-instance-profile.rst
awscli/examples/iam/remove-user-from-group.rst
awscli/examples/iam/resync-mfa-device.rst
awscli/examples/iam/set-default-policy-version.rst
awscli/examples/iam/update-access-key.rst
awscli/examples/iam/update-account-password-policy.rst
awscli/examples/iam/update-assume-role-policy.rst
awscli/examples/iam/update-group.rst
awscli/examples/iam/update-login-profile.rst
awscli/examples/iam/update-open-id-connect-provider-thumbprint.rst
awscli/examples/iam/update-saml-provider.rst
awscli/examples/iam/update-signing-certificate.rst
awscli/examples/iam/update-user.rst
awscli/examples/iam/upload-server-certificate.rst
awscli/examples/iam/upload-signing-certificate.rst
awscli/examples/importexport/cancel-job.rst
awscli/examples/importexport/create-job.rst
awscli/examples/importexport/get-shipping-label.rst
awscli/examples/importexport/get-status.rst
awscli/examples/importexport/list-jobs.rst
awscli/examples/importexport/update-job.rst
awscli/examples/iot/create-certificate-from-csr.rst
awscli/examples/kms/create-alias.rst
awscli/examples/kms/decrypt.rst
awscli/examples/kms/encrypt.rst
awscli/examples/logs/create-log-group.rst
awscli/examples/logs/create-log-stream.rst
awscli/examples/logs/delete-log-group.rst
awscli/examples/logs/delete-log-stream.rst
awscli/examples/logs/delete-retention-policy.rst
awscli/examples/logs/describe-log-groups.rst
awscli/examples/logs/describe-log-streams.rst
awscli/examples/logs/get-log-events.rst
awscli/examples/logs/put-log-events.rst
awscli/examples/logs/put-retention-policy.rst
awscli/examples/opsworks/assign-instance.rst
awscli/examples/opsworks/assign-volume.rst
awscli/examples/opsworks/associate-elastic-ip.rst
awscli/examples/opsworks/attach-elastic-load-balancer.rst
awscli/examples/opsworks/create-app.rst
awscli/examples/opsworks/create-deployment.rst
awscli/examples/opsworks/create-instance.rst
awscli/examples/opsworks/create-layer.rst
awscli/examples/opsworks/create-stack.rst
awscli/examples/opsworks/create-user-profile.rst
awscli/examples/opsworks/delete-app.rst
awscli/examples/opsworks/delete-instance.rst
awscli/examples/opsworks/delete-layer.rst
awscli/examples/opsworks/delete-stack.rst
awscli/examples/opsworks/delete-user-profile.rst
awscli/examples/opsworks/deregister-elastic-ip.rst
awscli/examples/opsworks/deregister-instance.rst
awscli/examples/opsworks/deregister-rds-db-instance.rst
awscli/examples/opsworks/deregister-volume.rst
awscli/examples/opsworks/describe-apps.rst
awscli/examples/opsworks/describe-commands.rst
awscli/examples/opsworks/describe-deployments.rst
awscli/examples/opsworks/describe-elastic-ips.rst
awscli/examples/opsworks/describe-elastic-load-balancers.rst
awscli/examples/opsworks/describe-instances.rst
awscli/examples/opsworks/describe-layers.rst
awscli/examples/opsworks/describe-load-based-auto-scaling.rst
awscli/examples/opsworks/describe-my-user-profile.rst
awscli/examples/opsworks/describe-permissions.rst
awscli/examples/opsworks/describe-raid-arrays.rst
awscli/examples/opsworks/describe-rds-db-instances.rst
awscli/examples/opsworks/describe-stack-summary.rst
awscli/examples/opsworks/describe-stacks.rst
awscli/examples/opsworks/describe-timebased-auto-scaling.rst
awscli/examples/opsworks/describe-user-profiles.rst
awscli/examples/opsworks/describe-volumes.rst
awscli/examples/opsworks/detach-elastic-load-balancer.rst
awscli/examples/opsworks/disassociate-elastic-ip.rst
awscli/examples/opsworks/get-hostname-suggestion.rst
awscli/examples/opsworks/reboot-instance.rst
awscli/examples/opsworks/register-elastic-ip.rst
awscli/examples/opsworks/register-rds-db-instance.rst
awscli/examples/opsworks/register-volume.rst
awscli/examples/opsworks/register.rst
awscli/examples/opsworks/set-load-based-auto-scaling.rst
awscli/examples/opsworks/set-permission.rst
awscli/examples/opsworks/set-time-based-auto-scaling.rst
awscli/examples/opsworks/start-instance.rst
awscli/examples/opsworks/start-stack.rst
awscli/examples/opsworks/stop-instance.rst
awscli/examples/opsworks/stop-stack.rst
awscli/examples/opsworks/unassign-instance.rst
awscli/examples/opsworks/unassign-volume.rst
awscli/examples/opsworks/update-app.rst
awscli/examples/opsworks/update-elastic-ip.rst
awscli/examples/opsworks/update-instance.rst
awscli/examples/opsworks/update-layer.rst
awscli/examples/opsworks/update-my-user-profile.rst
awscli/examples/opsworks/update-rds-db-instance.rst
awscli/examples/opsworks/update-volume.rst
awscli/examples/rds/add-tag-to-resource.rst
awscli/examples/rds/create-db-instance.rst
awscli/examples/rds/create-db-security-group.rst
awscli/examples/rds/create-option-group.rst
awscli/examples/rds/describe-db-instances.rst
awscli/examples/rds/download-db-log-file-portion.rst
awscli/examples/redshift/authorize-cluster-security-group-ingress.rst
awscli/examples/redshift/authorize-snapshot-access.rst
awscli/examples/redshift/copy-cluster-snapshot.rst
awscli/examples/redshift/create-cluster-parameter-group.rst
awscli/examples/redshift/create-cluster-security-group.rst
awscli/examples/redshift/create-cluster-snapshot.rst
awscli/examples/redshift/create-cluster-subnet-group.rst
awscli/examples/redshift/create-cluster.rst
awscli/examples/redshift/delete-cluster-parameter-group.rst
awscli/examples/redshift/delete-cluster-security-group.rst
awscli/examples/redshift/delete-cluster-snapshot.rst
awscli/examples/redshift/delete-cluster-subnet-group.rst
awscli/examples/redshift/delete-cluster.rst
awscli/examples/redshift/describe-cluster-parameter-groups.rst
awscli/examples/redshift/describe-cluster-parameters.rst
awscli/examples/redshift/describe-cluster-security-groups.rst
awscli/examples/redshift/describe-cluster-snapshots.rst
awscli/examples/redshift/describe-cluster-subnet-groups.rst
awscli/examples/redshift/describe-cluster-versions.rst
awscli/examples/redshift/describe-clusters.rst
awscli/examples/redshift/describe-default-cluster-parameters.rst
awscli/examples/redshift/describe-events.rst
awscli/examples/redshift/describe-orderable-cluster-options.rst
awscli/examples/redshift/describe-reserved-node-offerings.rst
awscli/examples/redshift/describe-reserved-nodes.rst
awscli/examples/redshift/describe-resize.rst
awscli/examples/redshift/modify-cluster-parameter-group.rst
awscli/examples/redshift/modify-cluster-subnet-group.rst
awscli/examples/redshift/modify-cluster.rst
awscli/examples/redshift/purchase-reserved-node-offering.rst
awscli/examples/redshift/reboot-cluster.rst
awscli/examples/redshift/reset-cluster-parameter-group.rst
awscli/examples/redshift/restore-from-cluster-snapshot.rst
awscli/examples/redshift/revoke-cluster-security-group-ingress.rst
awscli/examples/redshift/revoke-snapshot-access.rst
awscli/examples/route53/change-resource-record-sets.rst
awscli/examples/route53/create-health-check.rst
awscli/examples/route53/create-hosted-zone.rst
awscli/examples/route53/delete-health-check.rst
awscli/examples/route53/delete-hosted-zone.rst
awscli/examples/route53/get-change.rst
awscli/examples/route53/get-health-check.rst
awscli/examples/route53/get-hosted-zone.rst
awscli/examples/route53/list-health-checks.rst
awscli/examples/route53/list-hosted-zones-by-name.rst
awscli/examples/route53/list-hosted-zones.rst
awscli/examples/route53/list-resource-record-sets.rst
awscli/examples/s3/_concepts.rst
awscli/examples/s3/cp.rst
awscli/examples/s3/ls.rst
awscli/examples/s3/mb.rst
awscli/examples/s3/mv.rst
awscli/examples/s3/rb.rst
awscli/examples/s3/rm.rst
awscli/examples/s3/sync.rst
awscli/examples/s3/website.rst
awscli/examples/s3api/abort-multipart-upload.rst
awscli/examples/s3api/complete-multipart-upload.rst
awscli/examples/s3api/copy-object.rst
awscli/examples/s3api/create-bucket.rst
awscli/examples/s3api/create-multipart-upload.rst
awscli/examples/s3api/delete-bucket-cors.rst
awscli/examples/s3api/delete-bucket-lifecycle.rst
awscli/examples/s3api/delete-bucket-policy.rst
awscli/examples/s3api/delete-bucket-replication.rst
awscli/examples/s3api/delete-bucket-tagging.rst
awscli/examples/s3api/delete-bucket-website.rst
awscli/examples/s3api/delete-bucket.rst
awscli/examples/s3api/delete-object.rst
awscli/examples/s3api/delete-objects.rst
awscli/examples/s3api/get-bucket-acl.rst
awscli/examples/s3api/get-bucket-cors.rst
awscli/examples/s3api/get-bucket-lifecycle-configuration.rst
awscli/examples/s3api/get-bucket-lifecycle.rst
awscli/examples/s3api/get-bucket-location.rst
awscli/examples/s3api/get-bucket-notification-configuration.rst
awscli/examples/s3api/get-bucket-notification.rst
awscli/examples/s3api/get-bucket-policy.rst
awscli/examples/s3api/get-bucket-replication.rst
awscli/examples/s3api/get-bucket-tagging.rst
awscli/examples/s3api/get-bucket-versioning.rst
awscli/examples/s3api/get-bucket-website.rst
awscli/examples/s3api/get-object-acl.rst
awscli/examples/s3api/get-object-torrent.rst
awscli/examples/s3api/get-object.rst
awscli/examples/s3api/head-bucket.rst
awscli/examples/s3api/head-object.rst
awscli/examples/s3api/list-buckets.rst
awscli/examples/s3api/list-multipart-uploads.rst
awscli/examples/s3api/list-object-versions.rst
awscli/examples/s3api/list-objects.rst
awscli/examples/s3api/list-parts.rst
awscli/examples/s3api/put-bucket-acl.rst
awscli/examples/s3api/put-bucket-cors.rst
awscli/examples/s3api/put-bucket-lifecycle-configuration.rst
awscli/examples/s3api/put-bucket-lifecycle.rst
awscli/examples/s3api/put-bucket-logging.rst
awscli/examples/s3api/put-bucket-notification-configuration.rst
awscli/examples/s3api/put-bucket-notification.rst
awscli/examples/s3api/put-bucket-policy.rst
awscli/examples/s3api/put-bucket-replication.rst
awscli/examples/s3api/put-bucket-tagging.rst
awscli/examples/s3api/put-bucket-versioning.rst
awscli/examples/s3api/put-bucket-website.rst
awscli/examples/s3api/put-object-acl.rst
awscli/examples/s3api/put-object.rst
awscli/examples/s3api/upload-part.rst
awscli/examples/ses/delete-identity.rst
awscli/examples/ses/get-identity-dkim-attributes.rst
awscli/examples/ses/get-identity-notification-attributes.rst
awscli/examples/ses/get-identity-verification-attributes.rst
awscli/examples/ses/get-send-quota.rst
awscli/examples/ses/get-send-statistics.rst
awscli/examples/ses/list-identities.rst
awscli/examples/ses/send-email.rst
awscli/examples/ses/send-raw-email.rst
awscli/examples/ses/set-identity-dkim-enabled.rst
awscli/examples/ses/set-identity-feedback-forwarding-enabled.rst
awscli/examples/ses/set-identity-notification-topic.rst
awscli/examples/ses/verify-domain-dkim.rst
awscli/examples/ses/verify-domain-identity.rst
awscli/examples/ses/verify-email-identity.rst
awscli/examples/sns/confirm-subscription.rst
awscli/examples/sns/create-topic.rst
awscli/examples/sns/delete-topic.rst
awscli/examples/sns/get-subscription-attributes.rst
awscli/examples/sns/get-topic-attributes.rst
awscli/examples/sns/list-subscriptions-by-topic.rst
awscli/examples/sns/list-subscriptions.rst
awscli/examples/sns/list-topics.rst
awscli/examples/sns/publish.rst
awscli/examples/sns/subscribe.rst
awscli/examples/sns/unsubscribe.rst
awscli/examples/sqs/add-permission.rst
awscli/examples/sqs/change-message-visibility-batch.rst
awscli/examples/sqs/change-message-visibility.rst
awscli/examples/sqs/create-queue.rst
awscli/examples/sqs/delete-message-batch.rst
awscli/examples/sqs/delete-message.rst
awscli/examples/sqs/delete-queue.rst
awscli/examples/sqs/get-queue-attributes.rst
awscli/examples/sqs/get-queue-url.rst
awscli/examples/sqs/list-dead-letter-source-queues.rst
awscli/examples/sqs/list-queues.rst
awscli/examples/sqs/purge-queue.rst
awscli/examples/sqs/receive-message.rst
awscli/examples/sqs/remove-permission.rst
awscli/examples/sqs/send-message-batch.rst
awscli/examples/sqs/send-message.rst
awscli/examples/sqs/set-queue-attributes.rst
awscli/examples/ssm/create-association-batch.rst
awscli/examples/ssm/create-association.rst
awscli/examples/ssm/create-document.rst
awscli/examples/ssm/delete-association.rst
awscli/examples/ssm/delete-document.rst
awscli/examples/ssm/describe-association.rst
awscli/examples/ssm/describe-document.rst
awscli/examples/ssm/get-document.rst
awscli/examples/ssm/list-associations.rst
awscli/examples/ssm/list-documents.rst
awscli/examples/ssm/update-association-status.rst
awscli/examples/storagegateway/describe-gateway-information.rst
awscli/examples/storagegateway/list-gateways.rst
awscli/examples/storagegateway/list-volumes.rst
awscli/examples/swf/count-closed-workflow-executions.rst
awscli/examples/swf/count-open-workflow-executions.rst
awscli/examples/swf/deprecate-domain.rst
awscli/examples/swf/describe-domain.rst
awscli/examples/swf/list-activity-types.rst
awscli/examples/swf/list-domains.rst
awscli/examples/swf/list-workflow-types.rst
awscli/examples/swf/register-domain.rst
awscli/examples/swf/register-workflow-type.rst
awscli/examples/workspaces/create-workspaces.rst
awscli/examples/workspaces/describe-workspace-bundles.rst
awscli/examples/workspaces/describe-workspace-directories.rst
awscli/examples/workspaces/describe-workspaces.rst
awscli/examples/workspaces/terminate-workspaces.rst
awscli/topics/config-vars.rst
awscli/topics/return-codes.rst
awscli/topics/s3-config.rst
awscli/topics/topic-tags.json
bin/aws
bin/aws.cmd
bin/aws_bash_completer
bin/aws_completer
bin/aws_zsh_completer.sh

awscli-1.10.1/awscli.egg-info/requires.txt:
botocore==1.3.23
colorama>=0.2.5,<=0.3.3
docutils>=0.10
rsa>=3.1.2,<=3.3.0
[:python_version=="2.6"]
argparse>=1.1

awscli-1.10.1/awscli.egg-info/top_level.txt:
awscli
awscli-1.10.1/bin/aws_bash_completer:
# Typically this file would be added under one of the following paths:
# - /etc/bash_completion.d
# - /usr/local/etc/bash_completion.d
# - /usr/share/bash-completion/completions
complete -C aws_completer aws
awscli-1.10.1/bin/aws.cmd:
@echo OFF
REM="""
setlocal
set PythonExe=""
set PythonExeFlags=
for %%i in (cmd bat exe) do (
    for %%j in (python.%%i) do (
        call :SetPythonExe "%%~$PATH:j"
    )
)
for /f "tokens=2 delims==" %%i in ('assoc .py') do (
    for /f "tokens=2 delims==" %%j in ('ftype %%i') do (
        for /f "tokens=1" %%k in ("%%j") do (
            call :SetPythonExe %%k
        )
    )
)
%PythonExe% -x %PythonExeFlags% "%~f0" %*
goto :EOF
:SetPythonExe
if not ["%~1"]==[""] (
    if [%PythonExe%]==[""] (
        set PythonExe="%~1"
    )
)
goto :EOF
"""
# ===================================================
# Python script starts here
# ===================================================
#!/usr/bin/env python
# Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import awscli.clidriver
import sys
def main():
return awscli.clidriver.main()
if __name__ == '__main__':
sys.exit(main())
awscli-1.10.1/bin/aws 0000777 4542626 0000144 00000001462 12652514124 015423 0 ustar pysdk-ci amazon 0000000 0000000 #!/usr/bin/env python
# Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
import os
if os.environ.get('LC_CTYPE', '') == 'UTF-8':
os.environ['LC_CTYPE'] = 'en_US.UTF-8'
import awscli.clidriver
def main():
return awscli.clidriver.main()
if __name__ == '__main__':
sys.exit(main())
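The `LC_CTYPE` guard in the wrapper above works around terminals (notably on macOS) that export the bare value `UTF-8`, which Python's locale machinery rejects; it is remapped to a full locale name before any locale-sensitive imports run. The same check, sketched on a plain dict for illustration:

```python
# Illustrative only: mirror of the LC_CTYPE workaround in bin/aws,
# applied to a plain dict instead of os.environ.
env = {'LC_CTYPE': 'UTF-8'}
if env.get('LC_CTYPE', '') == 'UTF-8':
    # A bare 'UTF-8' is not a valid locale name; use a full one.
    env['LC_CTYPE'] = 'en_US.UTF-8'
print(env['LC_CTYPE'])
```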
awscli-1.10.1/bin/aws_completer 0000777 4542626 0000144 00000002163 12652514124 017474 0 ustar pysdk-ci amazon 0000000 0000000 #!/usr/bin/env python
# Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
if os.environ.get('LC_CTYPE', '') == 'UTF-8':
os.environ['LC_CTYPE'] = 'en_US.UTF-8'
import awscli.completer
if __name__ == '__main__':
# bash exports COMP_LINE and COMP_POINT, tcsh COMMAND_LINE only
cline = os.environ.get('COMP_LINE') or os.environ.get('COMMAND_LINE') or ''
cpoint = int(os.environ.get('COMP_POINT') or len(cline))
try:
awscli.completer.complete(cline, cpoint)
except KeyboardInterrupt:
# If the user hits Ctrl+C, we don't want to print
# a traceback to the user.
pass
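The completer's environment handling above can be exercised standalone: bash exports `COMP_LINE` and `COMP_POINT` when invoking a completion command, while tcsh exports only `COMMAND_LINE`, so the cursor position falls back to the end of the line. A minimal sketch of that fallback logic:

```python
import os

def read_completion_env():
    # bash exports COMP_LINE and COMP_POINT; tcsh exports COMMAND_LINE
    # only, so default the cursor position to the end of the line.
    cline = os.environ.get('COMP_LINE') or os.environ.get('COMMAND_LINE') or ''
    cpoint = int(os.environ.get('COMP_POINT') or len(cline))
    return cline, cpoint

# Simulate what bash sets up when the user presses TAB after "aws e":
os.environ['COMP_LINE'] = 'aws e'
os.environ.pop('COMP_POINT', None)
print(read_completion_env())
```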
awscli-1.10.1/bin/aws_zsh_completer.sh 0000666 4542626 0000144 00000003573 12652514124 020774 0 ustar pysdk-ci amazon 0000000 0000000 # Source this file to activate auto completion for zsh using the bash
# compatibility helper. Make sure `compinit` has been run beforehand; most
# zsh setups already do this.
#
# % source /path/to/aws_zsh_completer.sh
#
# Typically that would be called somewhere in your .zshrc.
#
# Note, the overwrite of _bash_complete() is to export COMP_LINE and COMP_POINT
# That is only required for zsh <= edab1d3dbe61da7efe5f1ac0e40444b2ec9b9570
#
# https://github.com/zsh-users/zsh/commit/edab1d3dbe61da7efe5f1ac0e40444b2ec9b9570
#
# zsh releases prior to that commit do not export the required environment variables!
#
# It is planned to write a proper zsh auto completion soon. Please talk
# to Frank Becker .
autoload -Uz bashcompinit
bashcompinit -i
_bash_complete() {
local ret=1
local -a suf matches
local -x COMP_POINT COMP_CWORD
local -a COMP_WORDS COMPREPLY BASH_VERSINFO
local -x COMP_LINE="$words"
local -A savejobstates savejobtexts
(( COMP_POINT = 1 + ${#${(j. .)words[1,CURRENT]}} + $#QIPREFIX + $#IPREFIX + $#PREFIX ))
(( COMP_CWORD = CURRENT - 1))
COMP_WORDS=( $words )
BASH_VERSINFO=( 2 05b 0 1 release )
savejobstates=( ${(kv)jobstates} )
savejobtexts=( ${(kv)jobtexts} )
[[ ${argv[${argv[(I)nospace]:-0}-1]} = -o ]] && suf=( -S '' )
matches=( ${(f)"$(compgen $@ -- ${words[CURRENT]})"} )
if [[ -n $matches ]]; then
if [[ ${argv[${argv[(I)filenames]:-0}-1]} = -o ]]; then
compset -P '*/' && matches=( ${matches##*/} )
compset -S '/*' && matches=( ${matches%%/*} )
compadd -Q -f "${suf[@]}" -a matches && ret=0
else
compadd -Q "${suf[@]}" -a matches && ret=0
fi
fi
if (( ret )); then
if [[ ${argv[${argv[(I)default]:-0}-1]} = -o ]]; then
_default "${suf[@]}" && ret=0
elif [[ ${argv[${argv[(I)dirnames]:-0}-1]} = -o ]]; then
_directories "${suf[@]}" && ret=0
fi
fi
return ret
}
complete -C aws_completer aws
awscli-1.10.1/awscli/ 0000777 4542626 0000144 00000000000 12652514126 015414 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/clidriver.py 0000666 4542626 0000144 00000062712 12652514124 017757 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
import signal
import logging
import botocore.session
from botocore import __version__ as botocore_version
from botocore.hooks import HierarchicalEmitter
from botocore import xform_name
from botocore.compat import copy_kwargs, OrderedDict
from botocore.exceptions import NoCredentialsError
from botocore.exceptions import NoRegionError
from awscli import EnvironmentVariables, __version__
from awscli.formatter import get_formatter
from awscli.plugin import load_plugins
from awscli.argparser import MainArgParser
from awscli.argparser import ServiceArgParser
from awscli.argparser import ArgTableArgParser
from awscli.argparser import USAGE
from awscli.help import ProviderHelpCommand
from awscli.help import ServiceHelpCommand
from awscli.help import OperationHelpCommand
from awscli.arguments import CustomArgument
from awscli.arguments import ListArgument
from awscli.arguments import BooleanArgument
from awscli.arguments import CLIArgument
from awscli.arguments import UnknownArgumentError
from awscli.argprocess import unpack_argument
LOG = logging.getLogger('awscli.clidriver')
LOG_FORMAT = (
'%(asctime)s - %(threadName)s - %(name)s - %(levelname)s - %(message)s')
def main():
driver = create_clidriver()
return driver.main()
def create_clidriver():
emitter = HierarchicalEmitter()
session = botocore.session.Session(EnvironmentVariables, emitter)
_set_user_agent_for_session(session)
load_plugins(session.full_config.get('plugins', {}),
event_hooks=emitter)
driver = CLIDriver(session=session)
return driver
def _set_user_agent_for_session(session):
session.user_agent_name = 'aws-cli'
session.user_agent_version = __version__
session.user_agent_extra = 'botocore/%s' % botocore_version
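botocore assembles the three attributes set above into the final User-Agent header, roughly as `name/version extra` (the exact assembly lives in botocore, so this is illustrative only, using the versions pinned by this release):

```python
# Illustrative sketch of how the user-agent pieces combine; the real
# string is built inside botocore's Session.user_agent().
name, version, extra = 'aws-cli', '1.10.1', 'botocore/1.3.23'
user_agent = '%s/%s %s' % (name, version, extra)
print(user_agent)
```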
class CLIDriver(object):
def __init__(self, session=None):
if session is None:
self.session = botocore.session.get_session(EnvironmentVariables)
_set_user_agent_for_session(self.session)
else:
self.session = session
self._cli_data = None
self._command_table = None
self._argument_table = None
def _get_cli_data(self):
# Not crazy about this but the data in here is needed in
# several places (e.g. MainArgParser, ProviderHelp) so
# we load it here once.
if self._cli_data is None:
self._cli_data = self.session.get_data('cli')
return self._cli_data
def _get_command_table(self):
if self._command_table is None:
self._command_table = self._build_command_table()
return self._command_table
def _get_argument_table(self):
if self._argument_table is None:
self._argument_table = self._build_argument_table()
return self._argument_table
def _build_command_table(self):
"""
Create the command table containing all built-in commands.
:rtype: ``OrderedDict``
:return: Mapping of command names to ``CLICommand`` objects.
"""
command_table = self._build_builtin_commands(self.session)
self.session.emit('building-command-table.main',
command_table=command_table,
session=self.session,
command_object=self)
return command_table
def _build_builtin_commands(self, session):
commands = OrderedDict()
services = session.get_available_services()
for service_name in services:
commands[service_name] = ServiceCommand(cli_name=service_name,
session=self.session,
service_name=service_name)
return commands
def _build_argument_table(self):
argument_table = OrderedDict()
cli_data = self._get_cli_data()
cli_arguments = cli_data.get('options', None)
for option in cli_arguments:
option_params = copy_kwargs(cli_arguments[option])
cli_argument = self._create_cli_argument(option, option_params)
cli_argument.add_to_arg_table(argument_table)
# Then the final step is to send out an event so handlers
# can add extra arguments or modify existing arguments.
self.session.emit('building-top-level-params',
argument_table=argument_table)
return argument_table
def _create_cli_argument(self, option_name, option_params):
return CustomArgument(
option_name, help_text=option_params.get('help', ''),
dest=option_params.get('dest'),
default=option_params.get('default'),
action=option_params.get('action'),
required=option_params.get('required'),
choices=option_params.get('choices'),
cli_type_name=option_params.get('type'))
def create_help_command(self):
cli_data = self._get_cli_data()
return ProviderHelpCommand(self.session, self._get_command_table(),
self._get_argument_table(),
cli_data.get('description', None),
cli_data.get('synopsis', None),
cli_data.get('help_usage', None))
def _create_parser(self):
# Also add a 'help' command.
command_table = self._get_command_table()
command_table['help'] = self.create_help_command()
cli_data = self._get_cli_data()
parser = MainArgParser(
command_table, self.session.user_agent(),
cli_data.get('description', None),
self._get_argument_table())
return parser
def main(self, args=None):
"""
:param args: List of arguments, with the 'aws' removed. For example,
the command "aws s3 list-objects --bucket foo" will have an
args list of ``['s3', 'list-objects', '--bucket', 'foo']``.
"""
if args is None:
args = sys.argv[1:]
parser = self._create_parser()
command_table = self._get_command_table()
parsed_args, remaining = parser.parse_known_args(args)
try:
# Because _handle_top_level_args emits events, it's possible
# that exceptions can be raised, which should have the same
# general exception handling logic as calling into the
# command table. This is why it's in the try/except clause.
self._handle_top_level_args(parsed_args)
self._emit_session_event()
return command_table[parsed_args.command](remaining, parsed_args)
except UnknownArgumentError as e:
sys.stderr.write("usage: %s\n" % USAGE)
sys.stderr.write(str(e))
sys.stderr.write("\n")
return 255
except NoRegionError as e:
msg = ('%s You can also configure your region by running '
'"aws configure".' % e)
self._show_error(msg)
return 255
except NoCredentialsError as e:
msg = ('%s. You can configure credentials by running '
'"aws configure".' % e)
self._show_error(msg)
return 255
except KeyboardInterrupt:
# Shell standard for signals that terminate
# the process is to return 128 + signum, in this case
# SIGINT=2, so we'll have an RC of 130.
sys.stdout.write("\n")
return 128 + signal.SIGINT
except Exception as e:
LOG.debug("Exception caught in main()", exc_info=True)
LOG.debug("Exiting with rc 255")
sys.stderr.write("\n")
sys.stderr.write("%s\n" % e)
return 255
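The `KeyboardInterrupt` branch above follows the shell convention that a process terminated by a signal exits with status 128 plus the signal number:

```python
import signal

# Shell convention: signal-terminated processes exit with 128 + signum.
# SIGINT is signal number 2, so Ctrl+C maps to an exit status of 130.
rc = 128 + signal.SIGINT
print(rc)
```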
def _emit_session_event(self):
# This event is guaranteed to run after the session has been
# initialized and a profile has been set. This was previously
# problematic because if something in CLIDriver caused the
# session components to be reset (such as session.profile = foo)
# then all the prior registered components would be removed.
self.session.emit('session-initialized', session=self.session)
def _show_error(self, msg):
LOG.debug(msg, exc_info=True)
sys.stderr.write(msg)
sys.stderr.write('\n')
def _handle_top_level_args(self, args):
self.session.emit(
'top-level-args-parsed', parsed_args=args, session=self.session)
if args.profile:
self.session.set_config_variable('profile', args.profile)
if args.debug:
# TODO:
# Unfortunately, by setting debug mode here, we miss out
# on all of the debug events prior to this such as the
# loading of plugins, etc.
self.session.set_stream_logger('botocore', logging.DEBUG,
format_string=LOG_FORMAT)
self.session.set_stream_logger('awscli', logging.DEBUG,
format_string=LOG_FORMAT)
LOG.debug("CLI version: %s", self.session.user_agent())
LOG.debug("Arguments entered to CLI: %s", sys.argv[1:])
else:
self.session.set_stream_logger(logger_name='awscli',
log_level=logging.ERROR)
class CLICommand(object):
"""Interface for a CLI command.
This class represents a top level CLI command
(``aws ec2``, ``aws s3``, ``aws config``).
"""
@property
def name(self):
# Subclasses must implement a name.
raise NotImplementedError("name")
@name.setter
def name(self, value):
# Subclasses must implement setting/changing the cmd name.
raise NotImplementedError("name")
@property
def lineage(self):
# Represents how to get to a specific command using the CLI.
# It includes all commands that came before it and itself in
# a list.
return [self]
@property
def lineage_names(self):
# Represents the lineage of a command in terms of command ``name``
return [cmd.name for cmd in self.lineage]
def __call__(self, args, parsed_globals):
"""Invoke CLI operation.
:type args: list
:param args: The remaining command line args.
:type parsed_globals: ``argparse.Namespace``
:param parsed_globals: The parsed arguments so far.
:rtype: int
:return: The return code of the operation. This will be used
as the RC code for the ``aws`` process.
"""
# Subclasses are expected to implement this method.
pass
def create_help_command(self):
# Subclasses are expected to implement this method if they want
# help docs.
return None
@property
def arg_table(self):
return {}
class ServiceCommand(CLICommand):
"""A service command for the CLI.
For example, for ``aws ec2 ...`` we'd create a ServiceCommand
object that represents the ec2 service.
"""
def __init__(self, cli_name, session, service_name=None):
# The cli_name is the name the user types, the name we show
# in doc, etc.
# The service_name is the name we used internally with botocore.
# For example, we have the 's3api' as the cli_name for the service
# but this is actually bound to the 's3' service name in botocore,
# i.e. we load s3.json from the botocore data dir. Most of
# the time these are the same thing but in the case of renames,
# we want users/external things to be able to rename the cli name
# but *not* the service name, as this has to be exactly what
# botocore expects.
self._name = cli_name
self.session = session
self._command_table = None
if service_name is None:
# Then default to using the cli name.
self._service_name = cli_name
else:
self._service_name = service_name
self._lineage = [self]
self._service_model = None
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def service_model(self):
return self._get_service_model()
@property
def lineage(self):
return self._lineage
@lineage.setter
def lineage(self, value):
self._lineage = value
def _get_command_table(self):
if self._command_table is None:
self._command_table = self._create_command_table()
return self._command_table
def _get_service_model(self):
if self._service_model is None:
self._service_model = self.session.get_service_model(
self._service_name)
return self._service_model
def __call__(self, args, parsed_globals):
# Once we know we're trying to call a service for this operation
# we can go ahead and create the parser for it. We
# can also grab the Service object from botocore.
service_parser = self._create_parser()
parsed_args, remaining = service_parser.parse_known_args(args)
command_table = self._get_command_table()
return command_table[parsed_args.operation](remaining, parsed_globals)
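The two-stage dispatch above relies on `parse_known_args`: the service-level parser consumes only the operation name and hands the unrecognized remainder to the operation's own parser. A minimal stdlib sketch of that pattern:

```python
import argparse

# First-stage parser: knows only the positional operation name; all
# operation-specific flags are left in `remaining` for the next stage.
service_parser = argparse.ArgumentParser()
service_parser.add_argument('operation')
parsed, remaining = service_parser.parse_known_args(
    ['list-objects', '--bucket', 'foo'])
print(parsed.operation, remaining)
```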
def _create_command_table(self):
command_table = OrderedDict()
service_model = self._get_service_model()
for operation_name in service_model.operation_names:
cli_name = xform_name(operation_name, '-')
operation_model = service_model.operation_model(operation_name)
command_table[cli_name] = ServiceOperation(
name=cli_name,
parent_name=self._name,
session=self.session,
operation_model=operation_model,
operation_caller=CLIOperationCaller(self.session),
)
self.session.emit('building-command-table.%s' % self._name,
command_table=command_table,
session=self.session,
command_object=self)
self._add_lineage(command_table)
return command_table
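The command table keys above come from `xform_name(operation_name, '-')`, which turns CamelCase API operation names into dashed CLI names. A simplified regex sketch of that conversion (botocore's real `xform_name` handles additional edge cases):

```python
import re

def dashed_name(operation_name):
    # Simplified sketch of botocore's xform_name: break a CamelCase
    # operation name into lowercase words joined by the separator.
    s = re.sub('(.)([A-Z][a-z]+)', r'\1-\2', operation_name)
    return re.sub('([a-z0-9])([A-Z])', r'\1-\2', s).lower()

print(dashed_name('DescribeInstances'))
```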
def _add_lineage(self, command_table):
for command in command_table:
command_obj = command_table[command]
command_obj.lineage = self.lineage + [command_obj]
def create_help_command(self):
command_table = self._get_command_table()
return ServiceHelpCommand(session=self.session,
obj=self._get_service_model(),
command_table=command_table,
arg_table=None,
event_class='.'.join(self.lineage_names),
name=self._name)
def _create_parser(self):
command_table = self._get_command_table()
# Also add a 'help' command.
command_table['help'] = self.create_help_command()
return ServiceArgParser(
operations_table=command_table, service_name=self._name)
class ServiceOperation(object):
"""A single operation of a service.
This class represents a single operation for a service, for
example ``ec2.DescribeInstances``.
"""
ARG_TYPES = {
'list': ListArgument,
'boolean': BooleanArgument,
}
DEFAULT_ARG_CLASS = CLIArgument
def __init__(self, name, parent_name, operation_caller,
operation_model, session):
"""
:type name: str
:param name: The name of the operation/subcommand.
:type parent_name: str
:param parent_name: The name of the parent command.
:type operation_model: ``botocore.model.OperationModel``
:param operation_model: The operation model
associated with this subcommand.
:type operation_caller: ``CLIOperationCaller``
:param operation_caller: An object that can properly call the
operation.
:type session: ``botocore.session.Session``
:param session: The session object.
"""
self._arg_table = None
self._name = name
# This is used so we can figure out what the proper event
# name should be.
self._parent_name = parent_name
self._operation_caller = operation_caller
self._lineage = [self]
self._operation_model = operation_model
self._session = session
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def lineage(self):
return self._lineage
@lineage.setter
def lineage(self, value):
self._lineage = value
@property
def lineage_names(self):
# Represents the lineage of a command in terms of command ``name``
return [cmd.name for cmd in self.lineage]
@property
def arg_table(self):
if self._arg_table is None:
self._arg_table = self._create_argument_table()
return self._arg_table
def __call__(self, args, parsed_globals):
# Once we know we're trying to call a particular operation
# of a service we can go ahead and load the parameters.
event = 'before-building-argument-table-parser.%s.%s' % \
(self._parent_name, self._name)
self._emit(event, argument_table=self.arg_table, args=args,
session=self._session)
operation_parser = self._create_operation_parser(self.arg_table)
self._add_help(operation_parser)
parsed_args, remaining = operation_parser.parse_known_args(args)
if parsed_args.help == 'help':
op_help = self.create_help_command()
return op_help(remaining, parsed_globals)
elif parsed_args.help:
remaining.append(parsed_args.help)
if remaining:
raise UnknownArgumentError(
"Unknown options: %s" % ', '.join(remaining))
event = 'operation-args-parsed.%s.%s' % (self._parent_name,
self._name)
self._emit(event, parsed_args=parsed_args,
parsed_globals=parsed_globals)
call_parameters = self._build_call_parameters(parsed_args,
self.arg_table)
event = 'calling-command.%s.%s' % (self._parent_name,
self._name)
override = self._emit_first_non_none_response(
event,
call_parameters=call_parameters,
parsed_args=parsed_args,
parsed_globals=parsed_globals
)
# There are two possible values for override. It can be some type
# of exception that will be raised if detected or it can represent
# the desired return code. Note that a return code of 0 represents
# a success.
if override is not None:
if isinstance(override, Exception):
# If the override value provided back is an exception then
# raise the exception
raise override
else:
# This is the value usually returned by the ``invoke()``
# method of the operation caller. It represents the return
# code of the operation.
return override
else:
# No override value was supplied.
return self._operation_caller.invoke(
self._operation_model.service_model.service_name,
self._operation_model.name,
call_parameters, parsed_globals)
def create_help_command(self):
return OperationHelpCommand(
self._session,
operation_model=self._operation_model,
arg_table=self.arg_table,
name=self._name, event_class='.'.join(self.lineage_names))
def _add_help(self, parser):
# The 'help' output is processed a little differently from
# the operation help because the arg_table has
# CLIArguments for values.
parser.add_argument('help', nargs='?')
def _build_call_parameters(self, args, arg_table):
# We need to convert the args specified on the command
# line as valid **kwargs we can hand to botocore.
service_params = {}
# args is an argparse.Namespace object so we're using vars()
# so we can iterate over the parsed key/values.
parsed_args = vars(args)
for arg_object in arg_table.values():
py_name = arg_object.py_name
if py_name in parsed_args:
value = parsed_args[py_name]
value = self._unpack_arg(arg_object, value)
arg_object.add_to_params(service_params, value)
return service_params
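The `vars()` call above is what lets the code iterate over parsed arguments: `argparse.Namespace` is not itself iterable, but its `__dict__` is an ordinary dict of the parsed key/value pairs. A small illustration (names are made up for the example):

```python
import argparse

# argparse.Namespace is not iterable; vars() exposes its attributes as
# a dict so the parsed values can be walked and filtered.
ns = argparse.Namespace(bucket='foo', max_items=None)
params = {k: v for k, v in vars(ns).items() if v is not None}
print(params)
```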
def _unpack_arg(self, cli_argument, value):
# Unpacks a commandline argument into a Python value by firing the
# load-cli-arg.service-name.operation-name event.
session = self._session
service_name = self._operation_model.service_model.endpoint_prefix
operation_name = xform_name(self._name, '-')
return unpack_argument(session, service_name, operation_name,
cli_argument, value)
def _create_argument_table(self):
argument_table = OrderedDict()
input_shape = self._operation_model.input_shape
required_arguments = []
arg_dict = {}
if input_shape is not None:
required_arguments = input_shape.required_members
arg_dict = input_shape.members
for arg_name, arg_shape in arg_dict.items():
cli_arg_name = xform_name(arg_name, '-')
arg_class = self.ARG_TYPES.get(arg_shape.type_name,
self.DEFAULT_ARG_CLASS)
is_required = arg_name in required_arguments
event_emitter = self._session.get_component('event_emitter')
arg_object = arg_class(
name=cli_arg_name,
argument_model=arg_shape,
is_required=is_required,
operation_model=self._operation_model,
serialized_name=arg_name,
event_emitter=event_emitter)
arg_object.add_to_arg_table(argument_table)
LOG.debug(argument_table)
self._emit('building-argument-table.%s.%s' % (self._parent_name,
self._name),
operation_model=self._operation_model,
session=self._session,
command=self,
argument_table=argument_table)
return argument_table
def _emit(self, name, **kwargs):
return self._session.emit(name, **kwargs)
def _emit_first_non_none_response(self, name, **kwargs):
return self._session.emit_first_non_none_response(
name, **kwargs)
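The override mechanism used in `__call__` hinges on `emit_first_non_none_response`: handlers are invoked in order, and the first one to return something other than `None` wins. A toy stand-in for botocore's emitter (not the real implementation) showing those semantics:

```python
class MiniEmitter:
    # Toy stand-in for botocore's event emitter, illustrating the
    # "first non-None response wins" behavior relied on above.
    def __init__(self):
        self._handlers = {}

    def register(self, name, handler):
        self._handlers.setdefault(name, []).append(handler)

    def emit_first_non_none_response(self, name, **kwargs):
        for handler in self._handlers.get(name, []):
            response = handler(**kwargs)
            if response is not None:
                return response
        return None

emitter = MiniEmitter()
emitter.register('calling-command.s3.ls', lambda **kw: None)  # declines
emitter.register('calling-command.s3.ls', lambda **kw: 0)     # overrides rc
print(emitter.emit_first_non_none_response('calling-command.s3.ls'))
```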
def _create_operation_parser(self, arg_table):
parser = ArgTableArgParser(arg_table)
return parser
class CLIOperationCaller(object):
"""Call an AWS operation and format the response."""
def __init__(self, session):
self._session = session
def invoke(self, service_name, operation_name, parameters, parsed_globals):
"""Invoke an operation and format the response.
:type service_name: str
:param service_name: The name of the service. Note this is the service name,
not the endpoint prefix (e.g. ``ses`` not ``email``).
:type operation_name: str
:param operation_name: The operation name of the service. The casing
of the operation name should match the exact casing used by the service,
e.g. ``DescribeInstances``, not ``describe-instances`` or
``describe_instances``.
:type parameters: dict
:param parameters: The parameters for the operation call. Again, these values
have the same casing used by the service.
:type parsed_globals: Namespace
:param parsed_globals: The parsed globals from the command line.
:return: None, the result is displayed through a formatter, but no
value is returned.
"""
client = self._session.create_client(
service_name, region_name=parsed_globals.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
py_operation_name = xform_name(operation_name)
if client.can_paginate(py_operation_name) and parsed_globals.paginate:
paginator = client.get_paginator(py_operation_name)
response = paginator.paginate(**parameters)
else:
response = getattr(client, xform_name(operation_name))(
**parameters)
self._display_response(operation_name, response, parsed_globals)
return 0
def _display_response(self, command_name, response,
parsed_globals):
output = parsed_globals.output
if output is None:
output = self._session.get_config_variable('output')
formatter = get_formatter(output, parsed_globals)
formatter(command_name, response)
awscli-1.10.1/awscli/data/ 0000777 4542626 0000144 00000000000 12652514126 016325 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/data/cli.json 0000666 4542626 0000144 00000005637 12652514124 020000 0 ustar pysdk-ci amazon 0000000 0000000 {
"description": "The AWS Command Line Interface is a unified tool to manage your AWS services.",
"synopsis": "aws [options] [parameters]",
"help_usage": "Use *aws command help* for information on a specific command. Use *aws help topics* to view a list of available help topics. The synopsis for each command shows its parameters and their usage. Optional parameters are shown in square brackets.",
"options": {
"debug": {
"action": "store_true",
"help": "
Turn on debug logging.
"
},
"endpoint-url": {
"help": "
Override command's default URL with the given URL.
By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.
The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout.
"
}
}
}
awscli-1.10.1/awscli/text.py 0000666 4542626 0000144 00000010274 12652514124 016754 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.compat import six
def format_text(data, stream):
_format_text(data, stream)
def _format_text(item, stream, identifier=None, scalar_keys=None):
if isinstance(item, dict):
_format_dict(scalar_keys, item, identifier, stream)
elif isinstance(item, list):
_format_list(item, identifier, stream)
else:
# If it's not a list or a dict, we just write the scalar
# value out directly.
stream.write(six.text_type(item))
stream.write('\n')
def _format_list(item, identifier, stream):
if not item:
return
if any(isinstance(el, dict) for el in item):
all_keys = _all_scalar_keys(item)
for element in item:
_format_text(element, stream=stream, identifier=identifier,
scalar_keys=all_keys)
elif any(isinstance(el, list) for el in item):
scalar_elements, non_scalars = _partition_list(item)
if scalar_elements:
_format_scalar_list(scalar_elements, identifier, stream)
for non_scalar in non_scalars:
_format_text(non_scalar, stream=stream,
identifier=identifier)
else:
_format_scalar_list(item, identifier, stream)
def _partition_list(item):
scalars = []
non_scalars = []
for element in item:
if isinstance(element, (list, dict)):
non_scalars.append(element)
else:
scalars.append(element)
return scalars, non_scalars
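`_partition_list` above separates scalar elements from nested containers so each group can be formatted independently. A self-contained copy demonstrating the split:

```python
def partition_list(item):
    # Mirror of awscli.text._partition_list: scalars are printed on one
    # tab-separated line, while nested lists/dicts recurse separately.
    scalars, non_scalars = [], []
    for element in item:
        if isinstance(element, (list, dict)):
            non_scalars.append(element)
        else:
            scalars.append(element)
    return scalars, non_scalars

print(partition_list(['a', 1, ['x'], {'k': 'v'}]))
```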
def _format_scalar_list(elements, identifier, stream):
if identifier is not None:
for item in elements:
stream.write('%s\t%s\n' % (identifier.upper(),
item))
else:
# For a bare list, just print the contents.
stream.write('\t'.join([six.text_type(item) for item in elements]))
stream.write('\n')
def _format_dict(scalar_keys, item, identifier, stream):
scalars, non_scalars = _partition_dict(item, scalar_keys=scalar_keys)
if scalars:
if identifier is not None:
scalars.insert(0, identifier.upper())
stream.write('\t'.join(scalars))
stream.write('\n')
for new_identifier, non_scalar in non_scalars:
_format_text(item=non_scalar, stream=stream,
identifier=new_identifier)
def _all_scalar_keys(list_of_dicts):
keys_seen = set()
for item_dict in list_of_dicts:
for key, value in item_dict.items():
if not isinstance(value, (dict, list)):
keys_seen.add(key)
return list(sorted(keys_seen))
def _partition_dict(item_dict, scalar_keys):
# Given a dictionary, partition it into two list based on the
# values associated with the keys.
# {'foo': 'scalar', 'bar': 'scalar', 'baz': ['not', 'scalar']}
# scalar = [('foo', 'scalar'), ('bar', 'scalar')]
# non_scalar = [('baz', ['not', 'scalar'])]
scalar = []
non_scalar = []
if scalar_keys is None:
# scalar_keys can have more than just the keys in the item_dict,
# but if user does not provide scalar_keys, we'll grab the keys
# from the current item_dict
for key, value in sorted(item_dict.items()):
if isinstance(value, (dict, list)):
non_scalar.append((key, value))
else:
scalar.append(six.text_type(value))
else:
for key in scalar_keys:
scalar.append(six.text_type(item_dict.get(key, '')))
remaining_keys = sorted(set(item_dict.keys()) - set(scalar_keys))
for remaining_key in remaining_keys:
non_scalar.append((remaining_key, item_dict[remaining_key]))
return scalar, non_scalar
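The `scalar_keys is None` branch of `_partition_dict` above can be checked in isolation; this standalone sketch reproduces it (using `str` in place of `six.text_type`) and confirms the behavior described in the function's comment:

```python
def partition_dict(item_dict):
    # Sketch of the scalar_keys=None branch of _partition_dict: scalar
    # values become strings, containers become (key, value) pairs.
    scalar, non_scalar = [], []
    for key, value in sorted(item_dict.items()):
        if isinstance(value, (dict, list)):
            non_scalar.append((key, value))
        else:
            scalar.append(str(value))
    return scalar, non_scalar

print(partition_dict({'foo': 'scalar', 'bar': 'scalar',
                      'baz': ['not', 'scalar']}))
```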
awscli-1.10.1/awscli/clidocs.py 0000666 4542626 0000144 00000063171 12652514124 017414 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
from botocore import xform_name
from botocore.docs.bcdoc.docevents import DOC_EVENTS
from botocore.model import StringShape
from awscli import SCALAR_TYPES
from awscli.argprocess import ParamShorthandDocGen
from awscli.topictags import TopicTagDB
LOG = logging.getLogger(__name__)
class CLIDocumentEventHandler(object):
def __init__(self, help_command):
self.help_command = help_command
self.register(help_command.session, help_command.event_class)
self.help_command.doc.translation_map = self.build_translation_map()
self._arg_groups = self._build_arg_table_groups(help_command)
self._documented_arg_groups = []
def _build_arg_table_groups(self, help_command):
arg_groups = {}
for name, arg in help_command.arg_table.items():
if arg.group_name is not None:
arg_groups.setdefault(arg.group_name, []).append(arg)
return arg_groups
def build_translation_map(self):
return dict()
def _map_handlers(self, session, event_class, mapfn):
for event in DOC_EVENTS:
event_handler_name = event.replace('-', '_')
if hasattr(self, event_handler_name):
event_handler = getattr(self, event_handler_name)
format_string = DOC_EVENTS[event]
num_args = len(format_string.split('.')) - 2
format_args = (event_class,) + ('*',) * num_args
event_string = event + format_string % format_args
unique_id = event_class + event_handler_name
mapfn(event_string, event_handler, unique_id)
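The mapping above is reflection-driven: an event name such as `doc-title` is turned into a method name by replacing dashes with underscores, and the handler is registered only if the object actually defines that method. A minimal illustration (class and return value are made up):

```python
class TitleHandler:
    # Handlers opt in to a doc event simply by defining a method whose
    # name is the event name with dashes replaced by underscores.
    def doc_title(self, **kwargs):
        return 'Title rendered'

event = 'doc-title'
handler = TitleHandler()
method_name = event.replace('-', '_')
if hasattr(handler, method_name):
    print(getattr(handler, method_name)())
```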
def register(self, session, event_class):
"""
The default register iterates through all of the
available document events and looks for a corresponding
handler method defined in the object. If it's there, that
handler method will be registered for the all events of
that type for the specified ``event_class``.
"""
self._map_handlers(session, event_class, session.register)
def unregister(self):
"""
The default unregister iterates through all of the
available document events and looks for a corresponding
handler method defined in the object. If it's there, that
handler method will be unregistered for the all events of
that type for the specified ``event_class``.
"""
self._map_handlers(self.help_command.session,
self.help_command.event_class,
self.help_command.session.unregister)
# These are default doc handlers that apply in the general case.
def doc_breadcrumbs(self, help_command, **kwargs):
doc = help_command.doc
if doc.target != 'man':
cmd_names = help_command.event_class.split('.')
doc.write('[ ')
            doc.write(':ref:`aws <cli:aws>`')
full_cmd_list = ['aws']
for cmd in cmd_names[:-1]:
doc.write(' . ')
full_cmd_list.append(cmd)
full_cmd_name = ' '.join(full_cmd_list)
                doc.write(':ref:`%s <cli:%s>`' % (cmd, full_cmd_name))
doc.write(' ]')
def doc_title(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
reference = help_command.event_class.replace('.', ' ')
if reference != 'aws':
reference = 'aws ' + reference
doc.writeln('.. _cli:%s:' % reference)
doc.style.h1(help_command.name)
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Description')
doc.include_doc_string(help_command.description)
doc.style.new_paragraph()
def doc_synopsis_start(self, help_command, **kwargs):
self._documented_arg_groups = []
doc = help_command.doc
doc.style.h2('Synopsis')
doc.style.start_codeblock()
doc.writeln('%s' % help_command.name)
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
doc = help_command.doc
argument = help_command.arg_table[arg_name]
if argument.group_name in self._arg_groups:
if argument.group_name in self._documented_arg_groups:
# This arg is already documented so we can move on.
return
option_str = ' | '.join(
[a.cli_name for a in
self._arg_groups[argument.group_name]])
self._documented_arg_groups.append(argument.group_name)
else:
            option_str = '%s <value>' % argument.cli_name
if not argument.required:
option_str = '[%s]' % option_str
doc.writeln('%s' % option_str)
def doc_synopsis_end(self, help_command, **kwargs):
doc = help_command.doc
doc.style.end_codeblock()
# Reset the documented arg groups for other sections
# that may document args (the detailed docs following
# the synopsis).
self._documented_arg_groups = []
def doc_options_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Options')
if not help_command.arg_table:
doc.write('*None*\n')
def doc_option(self, arg_name, help_command, **kwargs):
doc = help_command.doc
argument = help_command.arg_table[arg_name]
if argument.group_name in self._arg_groups:
if argument.group_name in self._documented_arg_groups:
# This arg is already documented so we can move on.
return
name = ' | '.join(
['``%s``' % a.cli_name for a in
self._arg_groups[argument.group_name]])
self._documented_arg_groups.append(argument.group_name)
else:
name = '``%s``' % argument.cli_name
doc.write('%s (%s)\n' % (name, argument.cli_type_name))
doc.style.indent()
doc.include_doc_string(argument.documentation)
self._document_enums(argument, doc)
doc.style.dedent()
doc.style.new_paragraph()
def doc_relateditems_start(self, help_command, **kwargs):
if help_command.related_items:
doc = help_command.doc
doc.style.h2('See Also')
def doc_relateditem(self, help_command, related_item, **kwargs):
doc = help_command.doc
doc.write('* ')
doc.style.sphinx_reference_label(
label='cli:%s' % related_item,
text=related_item
)
doc.write('\n')
def _document_enums(self, argument, doc):
"""Documents top-level parameter enums"""
if hasattr(argument, 'argument_model'):
model = argument.argument_model
if isinstance(model, StringShape):
if model.enum:
doc.style.new_paragraph()
doc.write('Possible values:')
doc.style.start_ul()
for enum in model.enum:
doc.style.li('``%s``' % enum)
doc.style.end_ul()
class ProviderDocumentEventHandler(CLIDocumentEventHandler):
def doc_breadcrumbs(self, help_command, event_name, **kwargs):
pass
def doc_synopsis_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Synopsis')
doc.style.codeblock(help_command.synopsis)
doc.include_doc_string(help_command.help_usage)
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
def doc_options_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Options')
def doc_option(self, arg_name, help_command, **kwargs):
doc = help_command.doc
argument = help_command.arg_table[arg_name]
doc.writeln('``%s`` (%s)' % (argument.cli_name,
argument.cli_type_name))
doc.include_doc_string(argument.documentation)
if argument.choices:
doc.style.start_ul()
for choice in argument.choices:
doc.style.li(choice)
doc.style.end_ul()
def doc_subitems_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Available Services')
doc.style.toctree()
def doc_subitem(self, command_name, help_command, **kwargs):
doc = help_command.doc
file_name = '%s/index' % command_name
doc.style.tocitem(command_name, file_name=file_name)
class ServiceDocumentEventHandler(CLIDocumentEventHandler):
def build_translation_map(self):
d = {}
service_model = self.help_command.obj
for operation_name in service_model.operation_names:
d[operation_name] = xform_name(operation_name, '-')
return d
# A service document has no synopsis.
def doc_synopsis_start(self, help_command, **kwargs):
pass
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
pass
# A service document has no option section.
def doc_options_start(self, help_command, **kwargs):
pass
def doc_option(self, arg_name, help_command, **kwargs):
pass
def doc_option_example(self, arg_name, help_command, **kwargs):
pass
def doc_options_end(self, help_command, **kwargs):
pass
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
service_model = help_command.obj
doc.style.h2('Description')
# TODO: need a documentation attribute.
doc.include_doc_string(service_model.documentation)
def doc_subitems_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Available Commands')
doc.style.toctree()
def doc_subitem(self, command_name, help_command, **kwargs):
doc = help_command.doc
subcommand = help_command.command_table[command_name]
subcommand_table = getattr(subcommand, 'subcommand_table', {})
# If the subcommand table has commands in it,
# direct the subitem to the command's index because
# it has more subcommands to be documented.
if (len(subcommand_table) > 0):
file_name = '%s/index' % command_name
doc.style.tocitem(command_name, file_name=file_name)
else:
doc.style.tocitem(command_name)
class OperationDocumentEventHandler(CLIDocumentEventHandler):
def build_translation_map(self):
operation_model = self.help_command.obj
d = {}
for cli_name, cli_argument in self.help_command.arg_table.items():
if cli_argument.argument_model is not None:
d[cli_argument.argument_model.name] = cli_name
for operation_name in operation_model.service_model.operation_names:
d[operation_name] = xform_name(operation_name, '-')
return d
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
operation_model = help_command.obj
doc.style.h2('Description')
doc.include_doc_string(operation_model.documentation)
def _json_example_value_name(self, argument_model, include_enum_values=True):
# If include_enum_values is True, then the valid enum values
# are included as the sample JSON value.
if isinstance(argument_model, StringShape):
if argument_model.enum and include_enum_values:
choices = argument_model.enum
return '|'.join(['"%s"' % c for c in choices])
else:
return '"string"'
elif argument_model.type_name == 'boolean':
return 'true|false'
else:
return '%s' % argument_model.type_name
def _json_example(self, doc, argument_model, stack):
if argument_model.name in stack:
# Document the recursion once, otherwise just
# note the fact that it's recursive and return.
if stack.count(argument_model.name) > 1:
if argument_model.type_name == 'structure':
doc.write('{ ... recursive ... }')
return
stack.append(argument_model.name)
try:
self._do_json_example(doc, argument_model, stack)
finally:
stack.pop()
def _do_json_example(self, doc, argument_model, stack):
if argument_model.type_name == 'list':
doc.write('[')
if argument_model.member.type_name in SCALAR_TYPES:
doc.write('%s, ...' % self._json_example_value_name(argument_model.member))
else:
doc.style.indent()
doc.style.new_line()
self._json_example(doc, argument_model.member, stack)
doc.style.new_line()
doc.write('...')
doc.style.dedent()
doc.style.new_line()
doc.write(']')
elif argument_model.type_name == 'map':
doc.write('{')
doc.style.indent()
key_string = self._json_example_value_name(argument_model.key)
doc.write('%s: ' % key_string)
if argument_model.value.type_name in SCALAR_TYPES:
doc.write(self._json_example_value_name(argument_model.value))
else:
doc.style.indent()
self._json_example(doc, argument_model.value, stack)
doc.style.dedent()
doc.style.new_line()
doc.write('...')
doc.style.dedent()
doc.write('}')
elif argument_model.type_name == 'structure':
doc.write('{')
doc.style.indent()
doc.style.new_line()
self._doc_input_structure_members(doc, argument_model, stack)
def _doc_input_structure_members(self, doc, argument_model, stack):
members = argument_model.members
for i, member_name in enumerate(members):
member_model = members[member_name]
member_type_name = member_model.type_name
if member_type_name in SCALAR_TYPES:
doc.write('"%s": %s' % (member_name,
self._json_example_value_name(member_model)))
elif member_type_name == 'structure':
doc.write('"%s": ' % member_name)
self._json_example(doc, member_model, stack)
elif member_type_name == 'map':
doc.write('"%s": ' % member_name)
self._json_example(doc, member_model, stack)
elif member_type_name == 'list':
doc.write('"%s": ' % member_name)
self._json_example(doc, member_model, stack)
if i < len(members) - 1:
doc.write(',')
doc.style.new_line()
else:
doc.style.dedent()
doc.style.new_line()
doc.write('}')
def doc_option_example(self, arg_name, help_command, **kwargs):
doc = help_command.doc
cli_argument = help_command.arg_table[arg_name]
if cli_argument.group_name in self._arg_groups:
if cli_argument.group_name in self._documented_arg_groups:
# Args with group_names (boolean args) don't
# need to generate example syntax.
return
argument_model = cli_argument.argument_model
docgen = ParamShorthandDocGen()
if docgen.supports_shorthand(cli_argument.argument_model):
example_shorthand_syntax = docgen.generate_shorthand_example(
cli_argument.cli_name, cli_argument.argument_model)
if example_shorthand_syntax is None:
# If the shorthand syntax returns a value of None,
# this indicates to us that there is no example
# needed for this param so we can immediately
# return.
return
if example_shorthand_syntax:
doc.style.new_paragraph()
doc.write('Shorthand Syntax')
doc.style.start_codeblock()
for example_line in example_shorthand_syntax.splitlines():
doc.writeln(example_line)
doc.style.end_codeblock()
if argument_model is not None and argument_model.type_name == 'list' and \
argument_model.member.type_name in SCALAR_TYPES:
# A list of scalars is special. While you *can* use
# JSON ( ["foo", "bar", "baz"] ), you can also just
# use the argparse behavior of space separated lists.
# "foo" "bar" "baz". In fact we don't even want to
# document the JSON syntax in this case.
member = argument_model.member
doc.style.new_paragraph()
doc.write('Syntax')
doc.style.start_codeblock()
example_type = self._json_example_value_name(
member, include_enum_values=False)
doc.write('%s %s ...' % (example_type, example_type))
if isinstance(member, StringShape) and member.enum:
# If we have enum values, we can tell the user
# exactly what valid values they can provide.
self._write_valid_enums(doc, member.enum)
doc.style.end_codeblock()
doc.style.new_paragraph()
elif cli_argument.cli_type_name not in SCALAR_TYPES:
doc.style.new_paragraph()
doc.write('JSON Syntax')
doc.style.start_codeblock()
self._json_example(doc, argument_model, stack=[])
doc.style.end_codeblock()
doc.style.new_paragraph()
def _write_valid_enums(self, doc, enum_values):
doc.style.new_paragraph()
doc.write("Where valid values are:\n")
for value in enum_values:
doc.write(" %s\n" % value)
doc.write("\n")
def doc_output(self, help_command, event_name, **kwargs):
doc = help_command.doc
doc.style.h2('Output')
operation_model = help_command.obj
output_shape = operation_model.output_shape
if output_shape is None:
doc.write('None')
else:
for member_name, member_shape in output_shape.members.items():
self._doc_member_for_output(doc, member_name, member_shape, stack=[])
def _doc_member_for_output(self, doc, member_name, member_shape, stack):
if member_shape.name in stack:
# Document the recursion once, otherwise just
# note the fact that it's recursive and return.
if stack.count(member_shape.name) > 1:
if member_shape.type_name == 'structure':
doc.write('( ... recursive ... )')
return
stack.append(member_shape.name)
try:
self._do_doc_member_for_output(doc, member_name,
member_shape, stack)
finally:
stack.pop()
def _do_doc_member_for_output(self, doc, member_name, member_shape, stack):
docs = member_shape.documentation
if member_name:
doc.write('%s -> (%s)' % (member_name, member_shape.type_name))
else:
doc.write('(%s)' % member_shape.type_name)
doc.style.indent()
doc.style.new_paragraph()
doc.include_doc_string(docs)
doc.style.new_paragraph()
member_type_name = member_shape.type_name
if member_type_name == 'structure':
for sub_name, sub_shape in member_shape.members.items():
self._doc_member_for_output(doc, sub_name, sub_shape, stack)
elif member_type_name == 'map':
key_shape = member_shape.key
key_name = key_shape.serialization.get('name', 'key')
self._doc_member_for_output(doc, key_name, key_shape, stack)
value_shape = member_shape.value
value_name = value_shape.serialization.get('name', 'value')
self._doc_member_for_output(doc, value_name, value_shape, stack)
elif member_type_name == 'list':
self._doc_member_for_output(doc, '', member_shape.member, stack)
doc.style.dedent()
doc.style.new_paragraph()
class TopicListerDocumentEventHandler(CLIDocumentEventHandler):
DESCRIPTION = (
'This is the AWS CLI Topic Guide. It gives access to a set '
'of topics that provide a deeper understanding of the CLI. To access '
'the list of topics from the command line, run ``aws help topics``. '
'To access a specific topic from the command line, run '
'``aws help [topicname]``, where ``topicname`` is the name of the '
'topic as it appears in the output from ``aws help topics``.')
def __init__(self, help_command):
self.help_command = help_command
self.register(help_command.session, help_command.event_class)
self.help_command.doc.translation_map = self.build_translation_map()
self._topic_tag_db = TopicTagDB()
self._topic_tag_db.load_json_index()
def doc_breadcrumbs(self, help_command, **kwargs):
doc = help_command.doc
if doc.target != 'man':
doc.write('[ ')
doc.style.sphinx_reference_label(label='cli:aws', text='aws')
doc.write(' ]')
def doc_title(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
doc.style.link_target_definition(
refname='cli:aws help %s' % self.help_command.name,
link='')
doc.style.h1('AWS CLI Topic Guide')
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Description')
doc.include_doc_string(self.DESCRIPTION)
doc.style.new_paragraph()
def doc_synopsis_start(self, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
pass
def doc_options_start(self, help_command, **kwargs):
pass
def doc_options_end(self, help_command, **kwargs):
pass
def doc_subitems_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Available Topics')
categories = self._topic_tag_db.query('category')
topic_names = self._topic_tag_db.get_all_topic_names()
# Sort the categories
category_names = sorted(categories.keys())
for category_name in category_names:
doc.style.h3(category_name)
doc.style.new_paragraph()
# Write out the topic and a description for each topic under
# each category.
for topic_name in sorted(categories[category_name]):
description = self._topic_tag_db.get_tag_single_value(
topic_name, 'description')
doc.write('* ')
doc.style.sphinx_reference_label(
label='cli:aws help %s' % topic_name,
text=topic_name
)
doc.write(': %s\n' % description)
# Add a hidden toctree to make sure everything is connected in
# the document.
doc.style.hidden_toctree()
for topic_name in topic_names:
doc.style.hidden_tocitem(topic_name)
class TopicDocumentEventHandler(TopicListerDocumentEventHandler):
def doc_breadcrumbs(self, help_command, **kwargs):
doc = help_command.doc
if doc.target != 'man':
doc.write('[ ')
doc.style.sphinx_reference_label(label='cli:aws', text='aws')
doc.write(' . ')
doc.style.sphinx_reference_label(
label='cli:aws help topics',
text='topics'
)
doc.write(' ]')
def doc_title(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
doc.style.link_target_definition(
refname='cli:aws help %s' % self.help_command.name,
link='')
title = self._topic_tag_db.get_tag_single_value(
help_command.name, 'title')
doc.style.h1(title)
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
topic_filename = os.path.join(self._topic_tag_db.topic_dir,
help_command.name + '.rst')
contents = self._remove_tags_from_content(topic_filename)
doc.writeln(contents)
doc.style.new_paragraph()
def _remove_tags_from_content(self, filename):
with open(filename, 'r') as f:
lines = f.readlines()
content_begin_index = 0
for i, line in enumerate(lines):
# If a line is encountered that does not begin with the tag
# end the search for tags and mark where tags end.
if not self._line_has_tag(line):
content_begin_index = i
break
# Join all of the non-tagged lines back together.
return ''.join(lines[content_begin_index:])
def _line_has_tag(self, line):
for tag in self._topic_tag_db.valid_tags:
if line.startswith(':' + tag + ':'):
return True
return False
def doc_subitems_start(self, help_command, **kwargs):
pass
# ===== awscli-1.10.1/awscli/utils.py =====
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import csv
import signal
import datetime
import contextlib
from awscli.compat import six
def split_on_commas(value):
if not any(char in value for char in ['"', '\\', "'", ']', '[']):
# No quotes or escaping, just use a simple split.
return value.split(',')
elif not any(char in value for char in ['"', "'", '[', ']']):
# Simple escaping, let the csv module handle it.
return list(csv.reader(six.StringIO(value), escapechar='\\'))[0]
else:
# If there's quotes for the values, we have to handle this
# ourselves.
return _split_with_quotes(value)
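The middle branch of ``split_on_commas`` defers to the ``csv`` module for backslash escapes. A minimal standalone sketch of that behavior (using ``io.StringIO`` in place of ``six.StringIO``):

```python
import csv
import io

def split_escaped_commas(value):
    # escapechar='\\' makes the csv reader treat '\,' as a literal comma
    # rather than a field separator.
    return list(csv.reader(io.StringIO(value), escapechar='\\'))[0]

print(split_escaped_commas('a,b\\,c,d'))  # ['a', 'b,c', 'd']
```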
def _split_with_quotes(value):
try:
parts = list(csv.reader(six.StringIO(value), escapechar='\\'))[0]
except csv.Error:
raise ValueError("Bad csv value: %s" % value)
iter_parts = iter(parts)
new_parts = []
for part in iter_parts:
# Find the first quote
quote_char = _find_quote_char_in_part(part)
# Find an opening list bracket
list_start = part.find('=[')
if list_start >= 0 and value.find(']') != -1 and \
(quote_char is None or part.find(quote_char) > list_start):
# This is a list, eat all the items until the end
if ']' in part:
# Short circuit for only one item
new_chunk = part
else:
new_chunk = _eat_items(value, iter_parts, part, ']')
list_items = _split_with_quotes(new_chunk[list_start + 2:-1])
new_chunk = new_chunk[:list_start + 1] + ','.join(list_items)
new_parts.append(new_chunk)
continue
elif quote_char is None:
new_parts.append(part)
continue
elif part.count(quote_char) == 2:
# Starting and ending quote are in this part.
# While it's not needed right now, this will
# break down if we ever need to escape quotes while
# quoting a value.
new_parts.append(part.replace(quote_char, ''))
continue
# Now that we've found a starting quote char, we
# need to combine the parts until we encounter an end quote.
new_chunk = _eat_items(value, iter_parts, part, quote_char, quote_char)
new_parts.append(new_chunk)
return new_parts
def _eat_items(value, iter_parts, part, end_char, replace_char=''):
"""
Eat items from an iterator, optionally replacing characters with
a blank and stopping when the end_char has been reached.
"""
current = part
chunks = [current.replace(replace_char, '')]
while True:
try:
current = six.advance_iterator(iter_parts)
except StopIteration:
raise ValueError(value)
chunks.append(current.replace(replace_char, ''))
if current.endswith(end_char):
break
return ','.join(chunks)
def _find_quote_char_in_part(part):
if '"' not in part and "'" not in part:
return
quote_char = None
double_quote = part.find('"')
single_quote = part.find("'")
if double_quote >= 0 and single_quote == -1:
quote_char = '"'
elif single_quote >= 0 and double_quote == -1:
quote_char = "'"
elif double_quote < single_quote:
quote_char = '"'
elif single_quote < double_quote:
quote_char = "'"
return quote_char
def json_encoder(obj):
"""JSON encoder that formats datetimes as ISO8601 format."""
if isinstance(obj, datetime.datetime):
return obj.isoformat()
else:
return obj
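``json_encoder`` is written to be passed as the ``default`` hook of ``json.dumps``, which is only consulted for objects the serializer cannot handle natively. A usage sketch with an illustrative payload:

```python
import datetime
import json

def json_encoder(obj):
    """JSON encoder that formats datetimes as ISO8601 format."""
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    else:
        return obj

# json.dumps calls json_encoder only for the datetime, which it cannot
# serialize on its own.
record = {'LaunchTime': datetime.datetime(2016, 1, 28, 12, 30)}
print(json.dumps(record, default=json_encoder))
# {"LaunchTime": "2016-01-28T12:30:00"}
```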
@contextlib.contextmanager
def ignore_ctrl_c():
original = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
yield
finally:
signal.signal(signal.SIGINT, original)
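The ``ignore_ctrl_c`` context manager above swaps in ``SIG_IGN`` and restores the previous handler on exit; the swap and restore can be checked directly with ``signal.getsignal``:

```python
import contextlib
import signal

@contextlib.contextmanager
def ignore_ctrl_c():
    original = signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, original)

previous = signal.getsignal(signal.SIGINT)
with ignore_ctrl_c():
    # Inside the block, Ctrl-C (SIGINT) is ignored.
    assert signal.getsignal(signal.SIGINT) is signal.SIG_IGN
# On exit the original handler is back, even if the body raised.
assert signal.getsignal(signal.SIGINT) is previous
print('handler restored')
```

Note that ``signal.signal`` may only be called from the main thread.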
# ===== awscli-1.10.1/awscli/plugin.py =====
# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from botocore.hooks import HierarchicalEmitter
log = logging.getLogger('awscli.plugin')
BUILTIN_PLUGINS = {'__builtin__': 'awscli.handlers'}
def load_plugins(plugin_mapping, event_hooks=None, include_builtins=True):
"""
:type plugin_mapping: dict
:param plugin_mapping: A dict of plugin name to import path,
e.g. ``{"plugingName": "package.modulefoo"}``.
:type event_hooks: ``EventHooks``
    :param event_hooks: Event hook emitter. If one is not provided,
an emitter will be created and returned. Otherwise, the
passed in ``event_hooks`` will be used to initialize plugins.
:type include_builtins: bool
:param include_builtins: If True, the builtin awscli plugins (specified in
``BUILTIN_PLUGINS``) will be included in the list of plugins to load.
:rtype: HierarchicalEmitter
:return: An event emitter object.
"""
if include_builtins:
plugin_mapping.update(BUILTIN_PLUGINS)
modules = _import_plugins(plugin_mapping)
if event_hooks is None:
event_hooks = HierarchicalEmitter()
for name, plugin in zip(plugin_mapping.keys(), modules):
log.debug("Initializing plugin %s: %s", name, plugin)
plugin.awscli_initialize(event_hooks)
return event_hooks
def _import_plugins(plugin_names):
plugins = []
for name, path in plugin_names.items():
log.debug("Importing plugin %s: %s", name, path)
if '.' not in path:
plugins.append(__import__(path))
else:
package, module = path.rsplit('.', 1)
module = __import__(path, fromlist=[module])
plugins.append(module)
return plugins
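``_import_plugins`` uses the ``fromlist`` argument of ``__import__`` so that a dotted path returns the leaf module rather than the top-level package. A sketch of the same logic against a stdlib module:

```python
def import_module_by_path(path):
    # Without fromlist, __import__('json.decoder') returns the top-level
    # 'json' package; passing the leaf name in fromlist returns the
    # 'json.decoder' submodule itself, as _import_plugins relies on.
    if '.' not in path:
        return __import__(path)
    package, module = path.rsplit('.', 1)
    return __import__(path, fromlist=[module])

print(import_module_by_path('json.decoder').__name__)  # json.decoder
print(import_module_by_path('json').__name__)          # json
```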
# ===== awscli-1.10.1/awscli/completer.py =====
# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import awscli.clidriver
import sys
import logging
import copy
LOG = logging.getLogger(__name__)
class Completer(object):
def __init__(self):
self.driver = awscli.clidriver.create_clidriver()
self.main_hc = self.driver.create_help_command()
self.main_options = self._documented(self.main_hc.arg_table)
self.cmdline = None
self.point = None
self.command_hc = None
self.subcommand_hc = None
self.command_name = None
self.subcommand_name = None
self.current_word = None
self.previous_word = None
self.non_options = None
def _complete_option(self, option_name):
if option_name == '--endpoint-url':
return []
if option_name == '--output':
cli_data = self.driver.session.get_data('cli')
return cli_data['options']['output']['choices']
if option_name == '--profile':
return self.driver.session.available_profiles
return []
def _complete_provider(self):
retval = []
if self.current_word.startswith('-'):
cw = self.current_word.lstrip('-')
l = ['--' + n for n in self.main_options
if n.startswith(cw)]
retval = l
elif self.current_word == 'aws':
retval = self._documented(self.main_hc.command_table)
else:
# Otherwise, see if they have entered a partial command name
retval = self._documented(self.main_hc.command_table,
startswith=self.current_word)
return retval
def _complete_command(self):
retval = []
if self.current_word == self.command_name:
if self.command_hc:
retval = self._documented(self.command_hc.command_table)
elif self.current_word.startswith('-'):
retval = self._find_possible_options()
else:
# See if they have entered a partial command name
if self.command_hc:
retval = self._documented(self.command_hc.command_table,
startswith=self.current_word)
return retval
def _documented(self, table, startswith=None):
names = []
for key, command in table.items():
if getattr(command, '_UNDOCUMENTED', False):
# Don't tab complete undocumented commands/params
continue
if startswith is not None and not key.startswith(startswith):
continue
if getattr(command, 'positional_arg', False):
continue
names.append(key)
return names
def _complete_subcommand(self):
retval = []
if self.current_word == self.subcommand_name:
retval = []
elif self.current_word.startswith('-'):
retval = self._find_possible_options()
return retval
def _find_possible_options(self):
all_options = copy.copy(self.main_options)
if self.subcommand_hc:
all_options = all_options + self._documented(self.subcommand_hc.arg_table)
for opt in self.options:
# Look thru list of options on cmdline. If there are
# options that have already been specified and they are
# not the current word, remove them from list of possibles.
if opt != self.current_word:
stripped_opt = opt.lstrip('-')
if stripped_opt in all_options:
all_options.remove(stripped_opt)
cw = self.current_word.lstrip('-')
possibles = ['--' + n for n in all_options if n.startswith(cw)]
if len(possibles) == 1 and possibles[0] == self.current_word:
return self._complete_option(possibles[0])
return possibles
def _process_command_line(self):
# Process the command line and try to find:
# - command_name
# - subcommand_name
# - words
# - current_word
# - previous_word
# - non_options
# - options
self.command_name = None
self.subcommand_name = None
self.words = self.cmdline[0:self.point].split()
self.current_word = self.words[-1]
if len(self.words) >= 2:
self.previous_word = self.words[-2]
else:
self.previous_word = None
self.non_options = [w for w in self.words if not w.startswith('-')]
self.options = [w for w in self.words if w.startswith('-')]
# Look for a command name in the non_options
for w in self.non_options:
if w in self.main_hc.command_table:
self.command_name = w
cmd_obj = self.main_hc.command_table[self.command_name]
self.command_hc = cmd_obj.create_help_command()
if self.command_hc and self.command_hc.command_table:
# Look for subcommand name
for w in self.non_options:
if w in self.command_hc.command_table:
self.subcommand_name = w
cmd_obj = self.command_hc.command_table[self.subcommand_name]
self.subcommand_hc = cmd_obj.create_help_command()
break
break
def complete(self, cmdline, point):
self.cmdline = cmdline
self.command_name = None
if point is None:
point = len(cmdline)
self.point = point
self._process_command_line()
if not self.command_name:
# If we didn't find any command names in the cmdline
# lets try to complete provider options
return self._complete_provider()
if self.command_name and not self.subcommand_name:
return self._complete_command()
return self._complete_subcommand()
def complete(cmdline, point):
choices = Completer().complete(cmdline, point)
print(' \n'.join(choices))
if __name__ == '__main__':
if len(sys.argv) == 3:
cmdline = sys.argv[1]
point = int(sys.argv[2])
elif len(sys.argv) == 2:
cmdline = sys.argv[1]
else:
        print('usage: %s <cmdline> <point>' % sys.argv[0])
sys.exit(1)
print(complete(cmdline, point))
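The shape of the completion protocol above — bash hands the completer the command line and the cursor position, and the completer prints newline-separated candidates — can be sketched with a hard-coded command table (the table and prefix handling here are a toy stand-in, not the real ``Completer``):

```python
# Illustrative subset of service command names.
COMMAND_TABLE = ['s3', 's3api', 'ses', 'sns', 'sqs']

def complete(cmdline, point):
    # Only the text up to the cursor matters; the last word is the
    # partial token being completed.
    words = cmdline[:point].split()
    current_word = words[-1] if len(words) > 1 else ''
    return [name for name in COMMAND_TABLE if name.startswith(current_word)]

print(' \n'.join(complete('aws s3', 6)))
```

With the cursor after ``aws s3``, both ``s3`` and ``s3api`` remain valid completions, which is why the real completer keeps offering choices until the token is unambiguous.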
# ===== awscli-1.10.1/awscli/errorhandler.py =====
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
LOG = logging.getLogger(__name__)
class BaseOperationError(Exception):
MSG_TEMPLATE = ("A {error_type} error ({error_code}) occurred "
"when calling the {operation_name} operation: "
"{error_message}")
def __init__(self, error_code, error_message, error_type, operation_name,
http_status_code):
msg = self.MSG_TEMPLATE.format(
error_code=error_code, error_message=error_message,
error_type=error_type, operation_name=operation_name)
super(BaseOperationError, self).__init__(msg)
self.error_code = error_code
self.error_message = error_message
self.error_type = error_type
self.operation_name = operation_name
self.http_status_code = http_status_code
class ClientError(BaseOperationError):
pass
class ServerError(BaseOperationError):
pass
class ErrorHandler(object):
"""
This class is responsible for handling any HTTP errors that occur
when a service operation is called. It is registered for the
``after-call`` event and will have the opportunity to inspect
all operation calls. If the HTTP response contains an error
``status_code`` an appropriate error message will be printed and
the handler will short-circuit all further processing by exiting
with an appropriate error code.
"""
def __call__(self, http_response, parsed, model, **kwargs):
LOG.debug('HTTP Response Code: %d', http_response.status_code)
error_type = None
error_class = None
if http_response.status_code >= 500:
error_type = 'server'
error_class = ServerError
elif http_response.status_code >= 400 or http_response.status_code == 301:
error_type = 'client'
error_class = ClientError
if error_class is not None:
code, message = self._get_error_code_and_message(parsed)
raise error_class(
error_code=code, error_message=message,
error_type=error_type, operation_name=model.name,
http_status_code=http_response.status_code)
def _get_error_code_and_message(self, response):
code = 'Unknown'
message = 'Unknown'
if 'Error' in response:
error = response['Error']
return error.get('Code', code), error.get('Message', message)
return (code, message)
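The status-code dispatch in ``ErrorHandler.__call__`` can be exercised in isolation. A minimal sketch — the ``classify`` helper is a hypothetical name that mirrors the branch logic above, not part of the module:

```python
def classify(status_code):
    # Mirrors ErrorHandler.__call__: 5xx responses map to ServerError,
    # 4xx responses (and 301 redirects) map to ClientError, and
    # everything else passes through unhandled.
    if status_code >= 500:
        return 'server'
    elif status_code >= 400 or status_code == 301:
        return 'client'
    return None
```

Note that 301 is singled out even though it is below 400; some services (e.g. S3 when a bucket lives in another region) report an error condition via a redirect status.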
awscli-1.10.1/awscli/help.py
# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
import platform
import shlex
from subprocess import Popen, PIPE
from docutils.core import publish_string
from docutils.writers import manpage
from botocore.docs.bcdoc import docevents
from botocore.docs.bcdoc.restdoc import ReSTDocument
from botocore.docs.bcdoc.textwriter import TextWriter
from awscli.clidocs import ProviderDocumentEventHandler
from awscli.clidocs import ServiceDocumentEventHandler
from awscli.clidocs import OperationDocumentEventHandler
from awscli.clidocs import TopicListerDocumentEventHandler
from awscli.clidocs import TopicDocumentEventHandler
from awscli.argprocess import ParamShorthand
from awscli.argparser import ArgTableArgParser
from awscli.topictags import TopicTagDB
from awscli.utils import ignore_ctrl_c
LOG = logging.getLogger('awscli.help')
class ExecutableNotFoundError(Exception):
def __init__(self, executable_name):
super(ExecutableNotFoundError, self).__init__(
'Could not find executable named "%s"' % executable_name)
def get_renderer():
"""
Return the appropriate HelpRenderer implementation for the
current platform.
"""
if platform.system() == 'Windows':
return WindowsHelpRenderer()
else:
return PosixHelpRenderer()
class PagingHelpRenderer(object):
"""
Interface for a help renderer.
The renderer is responsible for displaying the help content on
a particular platform.
"""
PAGER = None
def get_pager_cmdline(self):
pager = self.PAGER
if 'MANPAGER' in os.environ:
pager = os.environ['MANPAGER']
elif 'PAGER' in os.environ:
pager = os.environ['PAGER']
return shlex.split(pager)
def render(self, contents):
"""
Each implementation of HelpRenderer must implement this
render method.
"""
converted_content = self._convert_doc_content(contents)
self._send_output_to_pager(converted_content)
def _send_output_to_pager(self, output):
cmdline = self.get_pager_cmdline()
LOG.debug("Running command: %s", cmdline)
p = self._popen(cmdline, stdin=PIPE)
p.communicate(input=output)
def _popen(self, *args, **kwargs):
return Popen(*args, **kwargs)
def _convert_doc_content(self, contents):
return contents
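The pager-resolution order in ``get_pager_cmdline`` — ``MANPAGER`` wins over ``PAGER``, which wins over the class default — can be sketched as a standalone function; ``resolve_pager`` is a hypothetical name used here for illustration:

```python
import os
import shlex

def resolve_pager(default_pager, environ=None):
    # Mirrors PagingHelpRenderer.get_pager_cmdline: MANPAGER takes
    # precedence over PAGER; otherwise fall back to the given default.
    # The resolved string is tokenized so flags like 'less -R' survive.
    environ = os.environ if environ is None else environ
    pager = default_pager
    if 'MANPAGER' in environ:
        pager = environ['MANPAGER']
    elif 'PAGER' in environ:
        pager = environ['PAGER']
    return shlex.split(pager)
```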
class PosixHelpRenderer(PagingHelpRenderer):
"""
Render help content on a Posix-like system. This includes
Linux and Mac OS X.
"""
PAGER = 'less -R'
def _convert_doc_content(self, contents):
man_contents = publish_string(contents, writer=manpage.Writer())
if not self._exists_on_path('groff'):
raise ExecutableNotFoundError('groff')
cmdline = ['groff', '-man', '-T', 'ascii']
LOG.debug("Running command: %s", cmdline)
p3 = self._popen(cmdline, stdin=PIPE, stdout=PIPE, stderr=PIPE)
groff_output = p3.communicate(input=man_contents)[0]
return groff_output
def _send_output_to_pager(self, output):
cmdline = self.get_pager_cmdline()
LOG.debug("Running command: %s", cmdline)
with ignore_ctrl_c():
# We can't rely on the KeyboardInterrupt from
# the CLIDriver being caught because when we
# send the output to a pager it will use various
# control characters that need to be cleaned
# up gracefully. Otherwise if we simply catch
# the Ctrl-C and exit, it will likely leave the
# users' terminals in a bad state and they'll need
# to manually run ``reset`` to fix this issue.
# Ignoring Ctrl-C solves this issue. It's also
# the default behavior of less (you can't ctrl-c
# out of a manpage).
p = self._popen(cmdline, stdin=PIPE)
p.communicate(input=output)
def _exists_on_path(self, name):
# Since we're only dealing with POSIX systems, we can
# ignore things like PATHEXT.
return any([os.path.exists(os.path.join(p, name))
for p in os.environ.get('PATH', '').split(os.pathsep)])
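The PATH scan in ``_exists_on_path`` can be reproduced standalone; the explicit ``path`` parameter here is an addition for testability, not part of the original method:

```python
import os

def exists_on_path(name, path=None):
    # Mirrors PosixHelpRenderer._exists_on_path: check each PATH entry
    # for a file of the given name (PATHEXT is ignored on POSIX).
    path = os.environ.get('PATH', '') if path is None else path
    return any(os.path.exists(os.path.join(p, name))
               for p in path.split(os.pathsep))
```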
class WindowsHelpRenderer(PagingHelpRenderer):
"""Render help content on a Windows platform."""
PAGER = 'more'
def _convert_doc_content(self, contents):
text_output = publish_string(contents,
writer=TextWriter())
return text_output
def _popen(self, *args, **kwargs):
# Also set the shell value to True. To get any of the
# piping to a pager to work, we need to use shell=True.
kwargs['shell'] = True
return Popen(*args, **kwargs)
class HelpCommand(object):
"""
HelpCommand Interface
---------------------
A HelpCommand object acts as the interface between objects in the
CLI (e.g. Providers, Services, Operations, etc.) and the documentation
system (bcdoc).
A HelpCommand object wraps the object from the CLI space and provides
a consistent interface to critical information needed by the
documentation pipeline such as the object's name, description, etc.
The HelpCommand object is passed to the component of the
documentation pipeline that fires documentation events. It is
then passed on to each document event handler that has registered
for the events.
All HelpCommand objects contain the following attributes:
+ ``session`` - A ``botocore`` ``Session`` object.
+ ``obj`` - The object that is being documented.
+ ``command_table`` - A dict mapping command names to
callable objects.
+ ``arg_table`` - A dict mapping argument names to callable objects.
+ ``doc`` - A ``Document`` object that is used to collect the
generated documentation.
In addition, please note the `properties` defined below which are
required to allow the object to be used in the document pipeline.
Implementations of HelpCommand are provided here for Provider,
Service and Operation objects. Other implementations for other
types of objects might be needed for customization in plugins.
As long as the implementations conform to this basic interface
it should be possible to pass them to the documentation system
and generate interactive and static help files.
"""
EventHandlerClass = None
"""
Each subclass should define this class variable to point to the
EventHandler class used by this HelpCommand.
"""
def __init__(self, session, obj, command_table, arg_table):
self.session = session
self.obj = obj
if command_table is None:
command_table = {}
self.command_table = command_table
if arg_table is None:
arg_table = {}
self.arg_table = arg_table
self._subcommand_table = {}
self._related_items = []
self.renderer = get_renderer()
self.doc = ReSTDocument(target='man')
@property
def event_class(self):
"""
Return the ``event_class`` for this object.
The ``event_class`` is used by the documentation pipeline
when generating documentation events. For the event below::
doc-title.<event_class>.<name>
The document pipeline would use this property to determine
the ``event_class`` value.
"""
pass
@property
def name(self):
"""
Return the name of the wrapped object.
This would be called by the document pipeline to determine
the ``name`` to be inserted into the event, as shown above.
"""
pass
@property
def subcommand_table(self):
"""These are the commands that may follow after the help command"""
return self._subcommand_table
@property
def related_items(self):
"""This is list of items that are related to the help command"""
return self._related_items
def __call__(self, args, parsed_globals):
if args:
subcommand_parser = ArgTableArgParser({}, self.subcommand_table)
parsed, remaining = subcommand_parser.parse_known_args(args)
if getattr(parsed, 'subcommand', None) is not None:
return self.subcommand_table[parsed.subcommand](remaining,
parsed_globals)
# Create an event handler for a Provider Document
instance = self.EventHandlerClass(self)
# Now generate all of the events for a Provider document.
# We pass ourselves along so that we can, in turn, get passed
# to all event handlers.
docevents.generate_events(self.session, self)
self.renderer.render(self.doc.getvalue())
instance.unregister()
class ProviderHelpCommand(HelpCommand):
"""Implements top level help command.
This is what is called when ``aws help`` is run.
"""
EventHandlerClass = ProviderDocumentEventHandler
def __init__(self, session, command_table, arg_table,
description, synopsis, usage):
HelpCommand.__init__(self, session, None,
command_table, arg_table)
self.description = description
self.synopsis = synopsis
self.help_usage = usage
self._subcommand_table = None
self._topic_tag_db = None
self._related_items = ['aws help topics']
@property
def event_class(self):
return 'aws'
@property
def name(self):
return 'aws'
@property
def subcommand_table(self):
if self._subcommand_table is None:
if self._topic_tag_db is None:
self._topic_tag_db = TopicTagDB()
self._topic_tag_db.load_json_index()
self._subcommand_table = self._create_subcommand_table()
return self._subcommand_table
def _create_subcommand_table(self):
subcommand_table = {}
# Add the ``aws help topics`` command to the ``topic_table``
topic_lister_command = TopicListerCommand(self.session)
subcommand_table['topics'] = topic_lister_command
topic_names = self._topic_tag_db.get_all_topic_names()
# Add all of the possible topics to the ``topic_table``
for topic_name in topic_names:
topic_help_command = TopicHelpCommand(self.session, topic_name)
subcommand_table[topic_name] = topic_help_command
return subcommand_table
class ServiceHelpCommand(HelpCommand):
"""Implements service level help.
This is the object invoked whenever a service command
help is implemented, e.g. ``aws ec2 help``.
"""
EventHandlerClass = ServiceDocumentEventHandler
def __init__(self, session, obj, command_table, arg_table, name,
event_class):
super(ServiceHelpCommand, self).__init__(session, obj, command_table,
arg_table)
self._name = name
self._event_class = event_class
@property
def event_class(self):
return self._event_class
@property
def name(self):
return self._name
class OperationHelpCommand(HelpCommand):
"""Implements operation level help.
This is the object invoked whenever help for an operation is requested,
e.g. ``aws ec2 describe-instances help``.
"""
EventHandlerClass = OperationDocumentEventHandler
def __init__(self, session, operation_model, arg_table, name,
event_class):
HelpCommand.__init__(self, session, operation_model, None, arg_table)
self.param_shorthand = ParamShorthand()
self._name = name
self._event_class = event_class
@property
def event_class(self):
return self._event_class
@property
def name(self):
return self._name
class TopicListerCommand(HelpCommand):
EventHandlerClass = TopicListerDocumentEventHandler
def __init__(self, session):
super(TopicListerCommand, self).__init__(session, None, {}, {})
@property
def event_class(self):
return 'topics'
@property
def name(self):
return 'topics'
class TopicHelpCommand(HelpCommand):
EventHandlerClass = TopicDocumentEventHandler
def __init__(self, session, topic_name):
super(TopicHelpCommand, self).__init__(session, None, {}, {})
self._topic_name = topic_name
@property
def event_class(self):
return 'topics.' + self.name
@property
def name(self):
return self._topic_name
awscli-1.10.1/awscli/topictags.py
# Copyright (c) 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import os
import json
import docutils.core
class TopicTagDB(object):
"""This class acts like a database for the tags of all available topics.
A tag is an element in a topic reStructuredText file that contains
information about a topic. Information can range from titles to even
related CLI commands. Here are all of the currently supported tags:
Tag Meaning Required?
--- ------- ---------
:title: The title of the topic Yes
:description: Sentence description of topic Yes
:category: Category topic falls under Yes
:related topic: A related topic No
:related command: A related command No
To see examples of how to specify tags, look in the directory
awscli/topics. Note that tags can have multiple values by delimiting
values with commas. All tags must be on their own line in the file.
This class can load a JSON index representing all topics and their tags,
scan all of the topics and store the values of their tags, retrieve the
tag value for a particular topic, query for all the topics with a specific
tag and/or value, and save the loaded data back out to a JSON index.
The structure of the database can be viewed as a python dictionary:
{'topic-name-1': {
'title': ['My First Topic Title'],
'description': ['This describes my first topic'],
'category': ['General Topics', 'S3'],
'related command': ['aws s3'],
'related topic': ['topic-name-2']
},
'topic-name-2': { .....
}
The keys of the dictionary are the CLI command names of the topics. These
names are based on the name of the reStructuredText file that corresponds
to the topic. The value of these keys are dictionaries of tags, where the
tags are keys and their value is a list of values for that tag. Note
that all tag values for a specific tag of a specific topic are unique.
"""
VALID_TAGS = ['category', 'description', 'title', 'related topic',
'related command']
# The default directory to look for topics.
TOPIC_DIR = os.path.join(
os.path.dirname(
os.path.abspath(__file__)), 'topics')
# The default JSON index to load.
JSON_INDEX = os.path.join(TOPIC_DIR, 'topic-tags.json')
def __init__(self, tag_dictionary=None, index_file=JSON_INDEX,
topic_dir=TOPIC_DIR):
"""
:param index_file: The path to a specific JSON index to load.
If nothing is specified it will default to the default JSON
index at ``JSON_INDEX``.
:param topic_dir: The path to the directory from which to
retrieve the topic source files. Note that if you store your
index in this directory, you must supply the full path to the
JSON index via the ``index_file`` argument so that it can be
ignored when listing topic source files. If nothing is
specified it will default to the default directory at
``TOPIC_DIR``.
"""
self._tag_dictionary = tag_dictionary
if self._tag_dictionary is None:
self._tag_dictionary = {}
self._index_file = index_file
self._topic_dir = topic_dir
@property
def index_file(self):
return self._index_file
@index_file.setter
def index_file(self, value):
self._index_file = value
@property
def topic_dir(self):
return self._topic_dir
@topic_dir.setter
def topic_dir(self, value):
self._topic_dir = value
@property
def valid_tags(self):
return self.VALID_TAGS
def load_json_index(self):
"""Loads a JSON file into the tag dictionary."""
with open(self.index_file, 'r') as f:
self._tag_dictionary = json.load(f)
def save_to_json_index(self):
"""Writes the loaded data back out to the JSON index."""
with open(self.index_file, 'w') as f:
f.write(json.dumps(self._tag_dictionary, indent=4, sort_keys=True))
def get_all_topic_names(self):
"""Retrieves all of the topic names of the loaded JSON index"""
return list(self._tag_dictionary)
def get_all_topic_src_files(self):
"""Retrieves the file paths of all the topics in directory"""
topic_full_paths = []
topic_names = os.listdir(self.topic_dir)
for topic_name in topic_names:
# Do not try to load hidden files.
if not topic_name.startswith('.'):
topic_full_path = os.path.join(self.topic_dir, topic_name)
# Ignore the JSON Index as it is stored with topic files.
if topic_full_path != self.index_file:
topic_full_paths.append(topic_full_path)
return topic_full_paths
def scan(self, topic_files):
"""Scan in the tags of a list of topics into memory.
Note that if there are existing values in an entry in the database
of tags, they will not be overwritten. Any new values will be
appended to original values.
:param topic_files: A list of paths to topics to scan into memory.
"""
for topic_file in topic_files:
with open(topic_file, 'r') as f:
# Parse out the name of the topic
topic_name = self._find_topic_name(topic_file)
# Add the topic to the dictionary if it does not exist
self._add_topic_name_to_dict(topic_name)
topic_content = f.read()
# Record the tags and the values
self._add_tag_and_values_from_content(
topic_name, topic_content)
def _find_topic_name(self, topic_src_file):
# Get the name of each of these files
topic_name_with_ext = os.path.basename(topic_src_file)
# Strip off the .rst extension from the file name
return topic_name_with_ext[:-4]
def _add_tag_and_values_from_content(self, topic_name, content):
# Retrieves tags and values and adds from content of topic file
# to the dictionary.
doctree = docutils.core.publish_doctree(content).asdom()
fields = doctree.getElementsByTagName('field')
for field in fields:
field_name = field.getElementsByTagName('field_name')[0]
field_body = field.getElementsByTagName('field_body')[0]
# Get the tag.
tag = field_name.firstChild.nodeValue
if tag in self.VALID_TAGS:
# Get the value of the tag.
values = field_body.childNodes[0].firstChild.nodeValue
# Separate values into a list by splitting at commas
tag_values = values.split(',')
# Strip the white space around each of these values.
for i in range(len(tag_values)):
tag_values[i] = tag_values[i].strip()
self._add_tag_to_dict(topic_name, tag, tag_values)
else:
raise ValueError(
"Tag %s found under topic %s is not supported."
% (tag, topic_name)
)
def _add_topic_name_to_dict(self, topic_name):
# This method adds a topic name to the dictionary if it does not
# already exist
# Check if the topic is in the topic tag dictionary
if self._tag_dictionary.get(topic_name, None) is None:
self._tag_dictionary[topic_name] = {}
def _add_tag_to_dict(self, topic_name, tag, values):
# This method adds a tag to the dictionary given its tag and value
# If there are existing values associated to the tag it will add
# only values that previously did not exist in the list.
# Add topic to the topic tag dictionary if needed.
self._add_topic_name_to_dict(topic_name)
# Get all of a topics tags
topic_tags = self._tag_dictionary[topic_name]
self._add_key_values(topic_tags, tag, values)
def _add_key_values(self, dictionary, key, values):
# This method adds a value to a dictionary given a key.
# If there are existing values associated to the key it will add
# only values that previously did not exist in the list. All values
# in the dictionary should be lists
if dictionary.get(key, None) is None:
dictionary[key] = []
for value in values:
if value not in dictionary[key]:
dictionary[key].append(value)
def query(self, tag, values=None):
"""Groups topics by a specific tag and/or tag value.
:param tag: The name of the tag to query for.
:param values: A list of tag values to only include in query.
If no value is provided, all possible tag values will be returned
:rtype: dictionary
:returns: A dictionary whose keys are all possible tag values and the
keys' values are all of the topic names that had that tag value
in its source file. For example, if ``topic-name-1`` had the tag
``:category: foo, bar`` and ``topic-name-2`` had the tag
``:category: foo`` and we queried based on ``:category:``,
the returned dictionary would be:
{
'foo': ['topic-name-1', 'topic-name-2'],
'bar': ['topic-name-1']
}
"""
query_dict = {}
for topic_name in self._tag_dictionary.keys():
# Get the tag values for a specified tag of the topic
if self._tag_dictionary[topic_name].get(tag, None) is not None:
tag_values = self._tag_dictionary[topic_name][tag]
for tag_value in tag_values:
# Add the values to dictionary to be returned if
# no value constraints are provided or if the tag value
# falls in the allowed tag values.
if values is None or tag_value in values:
self._add_key_values(query_dict,
key=tag_value,
values=[topic_name])
return query_dict
def get_tag_value(self, topic_name, tag, default_value=None):
"""Get a value of a tag for a topic
:param topic_name: The name of the topic
:param tag: The name of the tag to retrieve
:param default_value: The value to return if the topic and/or tag
does not exist.
"""
if topic_name in self._tag_dictionary:
return self._tag_dictionary[topic_name].get(tag, default_value)
return default_value
def get_tag_single_value(self, topic_name, tag):
"""Get the value of a tag for a topic (i.e. not wrapped in a list)
:param topic_name: The name of the topic
:param tag: The name of the tag to retrieve
:raises ValueError: Raised if there is not exactly one value
in the list value.
"""
value = self.get_tag_value(topic_name, tag)
if value is not None:
if len(value) != 1:
raise ValueError(
'Tag %s for topic %s has value %s. Expected a single '
'element in list.' % (tag, topic_name, value)
)
value = value[0]
return value
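The grouping performed by ``query`` — inverting the topic→tag-values mapping into tag-value→topics — can be shown against a toy tag dictionary. This standalone ``query`` mirrors the method above rather than importing it:

```python
def query(tag_dictionary, tag, values=None):
    # Mirrors TopicTagDB.query: group topic names by the values they
    # carry for the given tag, optionally filtered to allowed values.
    grouped = {}
    for topic_name, tags in tag_dictionary.items():
        for tag_value in tags.get(tag, []):
            if values is None or tag_value in values:
                grouped.setdefault(tag_value, []).append(topic_name)
    return grouped
```

Run against the docstring's example data, ``query(db, 'category')`` yields ``{'foo': [...both topics...], 'bar': ['topic-name-1']}``.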
awscli-1.10.1/awscli/table.py
# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import sys
import struct
import colorama
from awscli.compat import six
def determine_terminal_width(default_width=80):
# If we can't detect the terminal width, the default_width is returned.
try:
from termios import TIOCGWINSZ
from fcntl import ioctl
except ImportError:
return default_width
try:
height, width = struct.unpack('hhhh', ioctl(sys.stdout,
TIOCGWINSZ, '\000' * 8))[0:2]
except Exception:
return default_width
else:
return width
def is_a_tty():
try:
return os.isatty(sys.stdout.fileno())
except Exception:
return False
def center_text(text, length=80, left_edge='|', right_edge='|',
text_length=None):
"""Center text with specified edge chars.
You can pass in the length of the text as an arg, otherwise it is computed
automatically for you. This can allow you to center a string not based
on its literal length (useful if you're using ANSI codes).
"""
# postcondition: len(returned_text) == length
if text_length is None:
text_length = len(text)
output = []
char_start = (length // 2) - (text_length // 2) - 1
output.append(left_edge + ' ' * char_start + text)
length_so_far = len(left_edge) + char_start + text_length
right_side_spaces = length - len(right_edge) - length_so_far
output.append(' ' * right_side_spaces)
output.append(right_edge)
final = ''.join(output)
return final
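The postcondition noted in the comment above — ``len(returned_text) == length`` — follows from how the right-side padding is computed. A self-contained copy of the function makes it easy to check:

```python
def center_text(text, length=80, left_edge='|', right_edge='|',
                text_length=None):
    # Same algorithm as awscli.table.center_text: pad on the left to
    # center the text, then pad on the right so the total width is
    # exactly ``length`` characters including both edges.
    if text_length is None:
        text_length = len(text)
    char_start = (length // 2) - (text_length // 2) - 1
    length_so_far = len(left_edge) + char_start + text_length
    right_side_spaces = length - len(right_edge) - length_so_far
    return (left_edge + ' ' * char_start + text +
            ' ' * right_side_spaces + right_edge)
```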
def align_left(text, length, left_edge='|', right_edge='|', text_length=None,
left_padding=2):
"""Left align text."""
# postcondition: len(returned_text) == length
if text_length is None:
text_length = len(text)
computed_length = (
text_length + left_padding + len(left_edge) + len(right_edge))
if length - computed_length >= 0:
padding = left_padding
else:
padding = 0
output = []
length_so_far = 0
output.append(left_edge)
length_so_far += len(left_edge)
output.append(' ' * padding)
length_so_far += padding
output.append(text)
length_so_far += text_length
output.append(' ' * (length - length_so_far - len(right_edge)))
output.append(right_edge)
return ''.join(output)
def convert_to_vertical_table(sections):
# Any section that only has a single row is
# inverted, so:
# header1 | header2 | header3
# val1 | val2 | val3
#
# becomes:
#
# header1 | val1
# header2 | val2
# header3 | val3
for i, section in enumerate(sections):
if len(section.rows) == 1 and section.headers:
headers = section.headers
new_section = Section()
new_section.title = section.title
new_section.indent_level = section.indent_level
for header, element in zip(headers, section.rows[0]):
new_section.add_row([header, element])
sections[i] = new_section
class IndentedStream(object):
def __init__(self, stream, indent_level, left_indent_char='|',
right_indent_char='|'):
self._stream = stream
self._indent_level = indent_level
self._left_indent_char = left_indent_char
self._right_indent_char = right_indent_char
def write(self, text):
self._stream.write(self._left_indent_char * self._indent_level)
if text.endswith('\n'):
self._stream.write(text[:-1])
self._stream.write(self._right_indent_char * self._indent_level)
self._stream.write('\n')
else:
self._stream.write(text)
def __getattr__(self, attr):
return getattr(self._stream, attr)
class Styler(object):
def style_title(self, text):
return text
def style_header_column(self, text):
return text
def style_row_element(self, text):
return text
def style_indentation_char(self, text):
return text
class ColorizedStyler(Styler):
def __init__(self):
# autoreset allows us to not have to send
# reset sequences for every string.
colorama.init(autoreset=True)
def style_title(self, text):
# Originally bold + underline
return text
#return colorama.Style.BOLD + text + colorama.Style.RESET_ALL
def style_header_column(self, text):
# Originally underline
return text
def style_row_element(self, text):
return (colorama.Style.BRIGHT + colorama.Fore.BLUE +
text + colorama.Style.RESET_ALL)
def style_indentation_char(self, text):
return (colorama.Style.DIM + colorama.Fore.YELLOW +
text + colorama.Style.RESET_ALL)
class MultiTable(object):
def __init__(self, terminal_width=None, initial_section=True,
column_separator='|', terminal=None,
styler=None, auto_reformat=True):
self._auto_reformat = auto_reformat
if initial_section:
self._current_section = Section()
self._sections = [self._current_section]
else:
self._current_section = None
self._sections = []
if styler is None:
# Move out to factory.
if is_a_tty():
self._styler = ColorizedStyler()
else:
self._styler = Styler()
else:
self._styler = styler
self._rendering_index = 0
self._column_separator = column_separator
if terminal_width is None:
terminal_width = determine_terminal_width()
self._terminal_width = terminal_width
def add_title(self, title):
self._current_section.add_title(title)
def add_row_header(self, headers):
self._current_section.add_header(headers)
def add_row(self, row_elements):
self._current_section.add_row(row_elements)
def new_section(self, title, indent_level=0):
self._current_section = Section()
self._sections.append(self._current_section)
self._current_section.add_title(title)
self._current_section.indent_level = indent_level
def render(self, stream):
max_width = self._calculate_max_width()
should_convert_table = self._determine_conversion_needed(max_width)
if should_convert_table:
convert_to_vertical_table(self._sections)
max_width = self._calculate_max_width()
stream.write('-' * max_width + '\n')
for section in self._sections:
self._render_section(section, max_width, stream)
def _determine_conversion_needed(self, max_width):
# If we don't know the width of the controlling terminal,
# then we don't try to resize the table.
if max_width > self._terminal_width:
return self._auto_reformat
def _calculate_max_width(self):
max_width = max(s.total_width(padding=4, with_border=True,
outer_padding=s.indent_level)
for s in self._sections)
return max_width
def _render_section(self, section, max_width, stream):
stream = IndentedStream(stream, section.indent_level,
self._styler.style_indentation_char('|'),
self._styler.style_indentation_char('|'))
max_width -= (section.indent_level * 2)
self._render_title(section, max_width, stream)
self._render_column_titles(section, max_width, stream)
self._render_rows(section, max_width, stream)
def _render_title(self, section, max_width, stream):
# The title consists of:
# title : | This is the title |
# bottom_border: ----------------------------
if section.title:
title = self._styler.style_title(section.title)
stream.write(center_text(title, max_width, '|', '|',
len(section.title)) + '\n')
if not section.headers and not section.rows:
stream.write('+%s+' % ('-' * (max_width - 2)) + '\n')
def _render_column_titles(self, section, max_width, stream):
if not section.headers:
return
# In order to render the column titles we need to know
# the width of each of the columns.
widths = section.calculate_column_widths(padding=4,
max_width=max_width)
# TODO: Build a list instead of using +=; it's more efficient.
current = ''
length_so_far = 0
# The first cell needs both left and right edges '| foo |'
# while subsequent cells only need right edges ' foo |'.
first = True
for width, header in zip(widths, section.headers):
stylized_header = self._styler.style_header_column(header)
if first:
left_edge = '|'
first = False
else:
left_edge = ''
current += center_text(text=stylized_header, length=width,
left_edge=left_edge, right_edge='|',
text_length=len(header))
length_so_far += width
self._write_line_break(stream, widths)
stream.write(current + '\n')
def _write_line_break(self, stream, widths):
# Write out something like:
# +-------+---------+---------+
parts = []
first = True
for width in widths:
if first:
parts.append('+%s+' % ('-' * (width - 2)))
first = False
else:
parts.append('%s+' % ('-' * (width - 1)))
parts.append('\n')
stream.write(''.join(parts))
def _render_rows(self, section, max_width, stream):
if not section.rows:
return
widths = section.calculate_column_widths(padding=4,
max_width=max_width)
if not widths:
return
self._write_line_break(stream, widths)
for row in section.rows:
# TODO: Build the string in a list then join instead of using +=,
# it's more efficient.
current = ''
length_so_far = 0
first = True
for width, element in zip(widths, row):
if first:
left_edge = '|'
first = False
else:
left_edge = ''
stylized = self._styler.style_row_element(element)
current += align_left(text=stylized, length=width,
left_edge=left_edge,
right_edge=self._column_separator,
text_length=len(element))
length_so_far += width
stream.write(current + '\n')
self._write_line_break(stream, widths)
class Section(object):
def __init__(self):
self.title = ''
self.headers = []
self.rows = []
self.indent_level = 0
self._num_cols = None
self._max_widths = []
def __repr__(self):
return ("Section(title=%s, headers=%s, indent_level=%s, num_rows=%s)" %
(self.title, self.headers, self.indent_level, len(self.rows)))
def calculate_column_widths(self, padding=0, max_width=None):
# postcondition: sum(widths) == max_width
unscaled_widths = [w + padding for w in self._max_widths]
if max_width is None:
return unscaled_widths
if not unscaled_widths:
return unscaled_widths
else:
# Compute scale factor for max_width.
scale_factor = max_width / float(sum(unscaled_widths))
scaled = [int(round(scale_factor * w)) for w in unscaled_widths]
# Once we've scaled the columns, we may be slightly over/under
# the amount we need so we have to adjust the columns.
off_by = sum(scaled) - max_width
while off_by != 0:
iter_order = range(len(scaled))
if off_by < 0:
iter_order = reversed(iter_order)
for i in iter_order:
if off_by > 0:
scaled[i] -= 1
off_by -= 1
else:
scaled[i] += 1
off_by += 1
if off_by == 0:
break
return scaled
def total_width(self, padding=0, with_border=False, outer_padding=0):
total = 0
# One border char on each side adds 2 chars total to the width.
border_padding = 2
for w in self.calculate_column_widths():
total += w + padding
if with_border:
total += border_padding
total += outer_padding + outer_padding
return max(len(self.title) + border_padding + outer_padding +
outer_padding, total)
def add_title(self, title):
self.title = title
def add_header(self, headers):
self._update_max_widths(headers)
if self._num_cols is None:
self._num_cols = len(headers)
self.headers = self._format_headers(headers)
def _format_headers(self, headers):
return headers
def add_row(self, row):
if self._num_cols is None:
self._num_cols = len(row)
if len(row) != self._num_cols:
raise ValueError("Row should have %s elements, instead "
"it has %s" % (self._num_cols, len(row)))
row = self._format_row(row)
self.rows.append(row)
self._update_max_widths(row)
def _format_row(self, row):
return [six.text_type(r) for r in row]
def _update_max_widths(self, row):
if not self._max_widths:
self._max_widths = [len(el) for el in row]
else:
for i, el in enumerate(row):
self._max_widths[i] = max(len(el), self._max_widths[i])
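The proportional scaling inside ``calculate_column_widths`` can be sketched as a free function (a hypothetical helper, for illustration only) that guarantees the returned widths sum exactly to ``max_width``, assuming a non-empty list of positive widths:

```python
def scale_widths(unscaled_widths, max_width):
    """Scale column widths so they sum exactly to max_width."""
    scale_factor = max_width / float(sum(unscaled_widths))
    scaled = [int(round(scale_factor * w)) for w in unscaled_widths]
    # Rounding may leave us slightly over or under max_width, so nudge
    # columns one character at a time until the sum is exact.
    off_by = sum(scaled) - max_width
    while off_by != 0:
        iter_order = range(len(scaled))
        if off_by < 0:
            iter_order = reversed(iter_order)
        for i in iter_order:
            if off_by > 0:
                scaled[i] -= 1
                off_by -= 1
            else:
                scaled[i] += 1
                off_by += 1
            if off_by == 0:
                break
    return scaled
```

For example, scaling ``[3, 3, 3]`` to a total of 10 rounds every column to 3, then widens the last column by one to absorb the shortfall.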
awscli-1.10.1/awscli/compat.py 0000666 4542626 0000144 00000007223 12652514124 017253 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
import os
import zipfile
from botocore.compat import six
# If you ever want to import from the vendored six, add it here and then
# import from awscli.compat. Also try to keep this list in alphabetical
# order. It may get large.
advance_iterator = six.advance_iterator
PY3 = six.PY3
queue = six.moves.queue
shlex_quote = six.moves.shlex_quote
StringIO = six.StringIO
urlopen = six.moves.urllib.request.urlopen
# Most, but not all, python installations will have zlib. This is required to
# compress any files we send via a push. If we can't compress, we can still
# package the files in a zip container.
try:
import zlib
ZIP_COMPRESSION_MODE = zipfile.ZIP_DEFLATED
except ImportError:
ZIP_COMPRESSION_MODE = zipfile.ZIP_STORED
class BinaryStdout(object):
def __enter__(self):
if sys.platform == "win32":
import msvcrt
self.previous_mode = msvcrt.setmode(sys.stdout.fileno(),
os.O_BINARY)
return sys.stdout
def __exit__(self, type, value, traceback):
if sys.platform == "win32":
import msvcrt
msvcrt.setmode(sys.stdout.fileno(), self.previous_mode)
if six.PY3:
import locale
import urllib.parse as urlparse
from urllib.error import URLError
raw_input = input
def get_stdout_text_writer():
return sys.stdout
def compat_open(filename, mode='r', encoding=None):
"""Back-port open() that accepts an encoding argument.
In python3 this uses the built in open() and in python2 this
uses the io.open() function.
If the file is not being opened in binary mode, then we'll
use locale.getpreferredencoding() to find the preferred
encoding.
"""
if 'b' not in mode:
encoding = locale.getpreferredencoding()
return open(filename, mode, encoding=encoding)
else:
import codecs
import locale
import io
import urlparse
from urllib2 import URLError
raw_input = raw_input
def get_stdout_text_writer():
# In python3, all the sys.stdout/sys.stderr streams are in text
# mode. This means they expect unicode, and will encode the
# unicode automatically before actually writing to stdout/stderr.
# In python2, that's not the case. In order to provide a consistent
# interface, we can create a wrapper around sys.stdout that will take
# unicode, and automatically encode it to the preferred encoding.
# That way consumers can just call get_stdout_text_writer() and write
# unicode to the returned stream. Note that get_stdout_text_writer
# just returns sys.stdout in the PY3 section above because python3
# handles this.
return codecs.getwriter(locale.getpreferredencoding())(sys.stdout)
def compat_open(filename, mode='r', encoding=None):
# See docstring for compat_open in the PY3 section above.
if 'b' not in mode:
encoding = locale.getpreferredencoding()
return io.open(filename, mode, encoding=encoding)
awscli-1.10.1/awscli/testutils.py 0000666 4542626 0000144 00000065125 12652514124 020035 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Test utilities for the AWS CLI.
This module includes various classes/functions that help in writing
CLI unit/integration tests. This module should not be imported by
any module **except** for test code. This is included in the CLI
package so that code that is not part of the CLI can still take
advantage of all the testing utilities we provide.
"""
import os
import sys
import copy
import shutil
import time
import json
import random
import logging
import tempfile
import platform
import contextlib
import string
from pprint import pformat
from subprocess import Popen, PIPE
from awscli.compat import StringIO
try:
import mock
except ImportError as e:
# In the off chance something imports this module
# that's not supposed to, we should not stop the CLI
# by raising an ImportError. Now if anything actually
# *uses* this module that isn't supposed to, that's a
# different story.
mock = None
from awscli.compat import six
from botocore.hooks import HierarchicalEmitter
from botocore.session import Session
from botocore.exceptions import ClientError
import botocore.loaders
from botocore.vendored import requests
import awscli.clidriver
from awscli.plugin import load_plugins
from awscli.clidriver import CLIDriver
from awscli import EnvironmentVariables
# The unittest module got a significant overhaul
# in 2.7, so if we're in 2.6 we can use the backported
# version unittest2.
if sys.version_info[:2] == (2, 6):
import unittest2 as unittest
else:
import unittest
# In python 3, order matters when calling assertEqual to
# compare lists and dictionaries with lists. Therefore,
# assertItemsEqual needs to be used but it is renamed to
# assertCountEqual in python 3.
if six.PY2:
unittest.TestCase.assertCountEqual = unittest.TestCase.assertItemsEqual
_LOADER = botocore.loaders.Loader()
INTEG_LOG = logging.getLogger('awscli.tests.integration')
AWS_CMD = None
def skip_if_windows(reason):
"""Decorator to skip tests that should not be run on windows.
Example usage:
@skip_if_windows("Not valid")
def test_some_non_windows_stuff(self):
self.assertEqual(...)
"""
def decorator(func):
return unittest.skipIf(
platform.system() not in ['Darwin', 'Linux'], reason)(func)
return decorator
def create_clidriver():
driver = awscli.clidriver.create_clidriver()
session = driver.session
data_path = session.get_config_variable('data_path').split(os.pathsep)
if not data_path:
data_path = []
_LOADER.search_paths.extend(data_path)
session.register_component('data_loader', _LOADER)
return driver
def get_aws_cmd():
global AWS_CMD
import awscli
if AWS_CMD is None:
# Try /bin/aws
repo_root = os.path.dirname(os.path.abspath(awscli.__file__))
aws_cmd = os.path.join(repo_root, 'bin', 'aws')
if not os.path.isfile(aws_cmd):
aws_cmd = _search_path_for_cmd('aws')
if aws_cmd is None:
raise ValueError('Could not find "aws" executable. Either '
'make sure it is on your PATH, or you can '
'explicitly set this value using '
'"set_aws_cmd()"')
AWS_CMD = aws_cmd
return AWS_CMD
def _search_path_for_cmd(cmd_name):
for path in os.environ.get('PATH', '').split(os.pathsep):
full_cmd_path = os.path.join(path, cmd_name)
if os.path.isfile(full_cmd_path):
return full_cmd_path
return None
def set_aws_cmd(aws_cmd):
global AWS_CMD
AWS_CMD = aws_cmd
@contextlib.contextmanager
def temporary_file(mode):
"""This is a cross platform temporary file creation.
tempfile.NamedTemporaryFile on Windows creates a secure temp file
that can't be read by other processes and can't be opened a second time.
For tests, we generally *want* them to be read multiple times.
The test fixture writes the temp file contents, the test reads the
temp file.
"""
temporary_directory = tempfile.mkdtemp()
basename = 'tmpfile-%s-%s' % (int(time.time()), random.randint(1, 1000))
full_filename = os.path.join(temporary_directory, basename)
open(full_filename, 'w').close()
try:
with open(full_filename, mode) as f:
yield f
finally:
shutil.rmtree(temporary_directory)
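A usage sketch of the pattern above — a minimal re-statement of ``temporary_file()`` followed by the write-then-reopen sequence the docstring describes:

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def temporary_file(mode):
    # Minimal re-statement of the helper above, for illustration only.
    temporary_directory = tempfile.mkdtemp()
    full_filename = os.path.join(temporary_directory, 'tmpfile')
    open(full_filename, 'w').close()
    try:
        with open(full_filename, mode) as f:
            yield f
    finally:
        shutil.rmtree(temporary_directory)

# The fixture writes the contents; a second open() reads them back from
# the same path -- something NamedTemporaryFile cannot do on Windows.
with temporary_file('w') as f:
    f.write('fixture data')
    f.flush()
    reread = open(f.name).read()
```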
def create_bucket(session, name=None, region=None):
"""
Creates a bucket
:returns: the name of the bucket created
"""
if not region:
region = 'us-west-2'
client = session.create_client('s3', region_name=region)
if name:
bucket_name = name
else:
rand1 = ''.join(random.sample(string.ascii_lowercase + string.digits,
10))
bucket_name = 'awscli-s3test-' + str(rand1)
params = {'Bucket': bucket_name}
if region != 'us-east-1':
params['CreateBucketConfiguration'] = {'LocationConstraint': region}
try:
# To disable the (obsolete) awscli.errorhandler.ClientError behavior
client.meta.events.unregister(
'after-call', unique_id='awscli-error-handler')
client.create_bucket(**params)
except ClientError as e:
if e.response['Error'].get('Code') == 'BucketAlreadyOwnedByYou':
# This can happen in the retried request, when the first one
# succeeded on S3 but somehow the response never comes back.
# We still got a bucket ready for test anyway.
pass
else:
raise
return bucket_name
class BaseCLIDriverTest(unittest.TestCase):
"""Base unittest that use clidriver.
This will load all the default plugins as well so it
will simulate the behavior the user will see.
"""
def setUp(self):
self.environ = {
'AWS_DATA_PATH': os.environ['AWS_DATA_PATH'],
'AWS_DEFAULT_REGION': 'us-east-1',
'AWS_ACCESS_KEY_ID': 'access_key',
'AWS_SECRET_ACCESS_KEY': 'secret_key',
'AWS_CONFIG_FILE': '',
}
self.environ_patch = mock.patch('os.environ', self.environ)
self.environ_patch.start()
emitter = HierarchicalEmitter()
session = Session(EnvironmentVariables, emitter)
session.register_component('data_loader', _LOADER)
load_plugins({}, event_hooks=emitter)
driver = CLIDriver(session=session)
self.session = session
self.driver = driver
def tearDown(self):
self.environ_patch.stop()
class BaseAWSHelpOutputTest(BaseCLIDriverTest):
def setUp(self):
super(BaseAWSHelpOutputTest, self).setUp()
self.renderer_patch = mock.patch('awscli.help.get_renderer')
self.renderer_mock = self.renderer_patch.start()
self.renderer = CapturedRenderer()
self.renderer_mock.return_value = self.renderer
def tearDown(self):
super(BaseAWSHelpOutputTest, self).tearDown()
self.renderer_patch.stop()
def assert_contains(self, contains):
if contains not in self.renderer.rendered_contents:
self.fail("The expected contents:\n%s\nwere not in the "
"actual rendered contents:\n%s" % (
contains, self.renderer.rendered_contents))
def assert_contains_with_count(self, contains, count):
r_count = self.renderer.rendered_contents.count(contains)
if r_count != count:
self.fail("The expected contents:\n%s\n, with the "
"count:\n%d\nwere not in the actual rendered "
" contents:\n%s\nwith count:\n%d" % (
contains, count, self.renderer.rendered_contents, r_count))
def assert_not_contains(self, contents):
if contents in self.renderer.rendered_contents:
self.fail("The contents:\n%s\nwere not suppose to be in the "
"actual rendered contents:\n%s" % (
contents, self.renderer.rendered_contents))
def assert_text_order(self, *args, **kwargs):
# First we need to find where the starting_from section begins.
starting_from = kwargs.pop('starting_from')
args = list(args)
contents = self.renderer.rendered_contents
self.assertIn(starting_from, contents)
start_index = contents.find(starting_from)
arg_indices = [contents.find(arg, start_index) for arg in args]
previous = arg_indices[0]
for i, index in enumerate(arg_indices[1:], 1):
if index == -1:
self.fail('The string %r was not found in the contents: %s'
% (args[i], contents))
if index < previous:
self.fail('The string %r came before %r, but was supposed to come '
'after it.\n%s' % (args[i], args[i - 1], contents))
previous = index
class CapturedRenderer(object):
def __init__(self):
self.rendered_contents = ''
def render(self, contents):
self.rendered_contents = contents.decode('utf-8')
class CapturedOutput(object):
def __init__(self, stdout, stderr):
self.stdout = stdout
self.stderr = stderr
@contextlib.contextmanager
def capture_output():
stderr = six.StringIO()
stdout = six.StringIO()
with mock.patch('sys.stderr', stderr):
with mock.patch('sys.stdout', stdout):
yield CapturedOutput(stdout, stderr)
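A usage sketch of ``capture_output()``, re-stated here with the standard library's ``unittest.mock`` standing in for the vendored ``mock`` package (an assumption made only so the example is self-contained):

```python
import contextlib
from io import StringIO
from unittest import mock

class CapturedOutput(object):
    def __init__(self, stdout, stderr):
        self.stdout = stdout
        self.stderr = stderr

@contextlib.contextmanager
def capture_output():
    # Patch both streams so anything the code under test prints is
    # collected instead of reaching the real terminal.
    stderr = StringIO()
    stdout = StringIO()
    with mock.patch('sys.stderr', stderr):
        with mock.patch('sys.stdout', stdout):
            yield CapturedOutput(stdout, stderr)

with capture_output() as captured:
    print('hello from the CLI')
out = captured.stdout.getvalue()
```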
class BaseAWSCommandParamsTest(unittest.TestCase):
maxDiff = None
def setUp(self):
self.last_params = {}
# awscli/__init__.py injects AWS_DATA_PATH at import time
# so that we can find cli.json. This might be fixed in the
# future, but for now we just grab that value out of the real
# os.environ so the patched os.environ has this data and
# the CLI works.
self.environ = {
'AWS_DATA_PATH': os.environ['AWS_DATA_PATH'],
'AWS_DEFAULT_REGION': 'us-east-1',
'AWS_ACCESS_KEY_ID': 'access_key',
'AWS_SECRET_ACCESS_KEY': 'secret_key',
}
self.environ_patch = mock.patch('os.environ', self.environ)
self.environ_patch.start()
self.http_response = requests.models.Response()
self.http_response.status_code = 200
self.parsed_response = {}
self.make_request_patch = mock.patch('botocore.endpoint.Endpoint.make_request')
self.make_request_is_patched = False
self.operations_called = []
self.parsed_responses = None
self.driver = create_clidriver()
def tearDown(self):
# This clears all the previous registrations.
self.environ_patch.stop()
if self.make_request_is_patched:
self.make_request_patch.stop()
self.make_request_is_patched = False
def before_call(self, params, **kwargs):
self._store_params(params)
def _store_params(self, params):
self.last_request_dict = params
self.last_params = params['body']
def patch_make_request(self):
# If you do not stop a previously started patch, it can never be
# stopped if you call start() on the same patch again...
# So stop the current patch before calling start() on it again.
if self.make_request_is_patched:
self.make_request_patch.stop()
self.make_request_is_patched = False
make_request_patch = self.make_request_patch.start()
if self.parsed_responses is not None:
make_request_patch.side_effect = lambda *args, **kwargs: \
(self.http_response, self.parsed_responses.pop(0))
else:
make_request_patch.return_value = (self.http_response, self.parsed_response)
self.make_request_is_patched = True
def assert_params_for_cmd(self, cmd, params=None, expected_rc=0,
stderr_contains=None, ignore_params=None):
stdout, stderr, rc = self.run_cmd(cmd, expected_rc)
if stderr_contains is not None:
self.assertIn(stderr_contains, stderr)
if params is not None:
# The last kwargs of Operation.call() in botocore.
last_kwargs = copy.copy(self.last_kwargs)
if ignore_params is not None:
for key in ignore_params:
try:
del last_kwargs[key]
except KeyError:
pass
if params != last_kwargs:
self.fail("Actual params did not match expected params.\n"
"Expected:\n\n"
"%s\n"
"Actual:\n\n%s\n" % (
pformat(params), pformat(last_kwargs)))
return stdout, stderr, rc
def before_parameter_build(self, params, model, **kwargs):
self.last_kwargs = params
self.operations_called.append((model, params))
def run_cmd(self, cmd, expected_rc=0):
logging.debug("Calling cmd: %s", cmd)
self.patch_make_request()
self.driver.session.register('before-call', self.before_call)
self.driver.session.register('before-parameter-build',
self.before_parameter_build)
if not isinstance(cmd, list):
cmdlist = cmd.split()
else:
cmdlist = cmd
with capture_output() as captured:
try:
rc = self.driver.main(cmdlist)
except SystemExit as e:
# We need to catch SystemExit so that we
# can get a proper rc and still present the
# stdout/stderr to the test runner so we can
# figure out what went wrong.
rc = e.code
stderr = captured.stderr.getvalue()
stdout = captured.stdout.getvalue()
self.assertEqual(
rc, expected_rc,
"Unexpected rc (expected: %s, actual: %s) for command: %s\n"
"stdout:\n%sstderr:\n%s" % (
expected_rc, rc, cmd, stdout, stderr))
return stdout, stderr, rc
class BaseAWSPreviewCommandParamsTest(BaseAWSCommandParamsTest):
def setUp(self):
self.preview_patch = mock.patch(
'awscli.customizations.preview.mark_as_preview')
self.preview_patch.start()
super(BaseAWSPreviewCommandParamsTest, self).setUp()
def tearDown(self):
self.preview_patch.stop()
super(BaseAWSPreviewCommandParamsTest, self).tearDown()
class FileCreator(object):
def __init__(self):
self.rootdir = tempfile.mkdtemp()
def remove_all(self):
shutil.rmtree(self.rootdir)
def create_file(self, filename, contents, mtime=None, mode='w'):
"""Creates a file in a tmpdir
``filename`` should be a relative path, e.g. "foo/bar/baz.txt"
It will be translated into a full path in a tmp dir.
If the ``mtime`` argument is provided, then the file's
mtime will be set to the provided value (must be an epoch time).
Otherwise the mtime is left untouched.
``mode`` is the mode with which the file should be opened, either
``w`` or ``wb``.
Returns the full path to the file.
"""
full_path = os.path.join(self.rootdir, filename)
if not os.path.isdir(os.path.dirname(full_path)):
os.makedirs(os.path.dirname(full_path))
with open(full_path, mode) as f:
f.write(contents)
current_time = os.path.getmtime(full_path)
# Subtract a few years off the last modification date.
os.utime(full_path, (current_time, current_time - 100000000))
if mtime is not None:
os.utime(full_path, (mtime, mtime))
return full_path
def append_file(self, filename, contents):
"""Append contents to a file
``filename`` should be a relative path, e.g. "foo/bar/baz.txt"
It will be translated into a full path in a tmp dir.
Returns the full path to the file.
"""
full_path = os.path.join(self.rootdir, filename)
if not os.path.isdir(os.path.dirname(full_path)):
os.makedirs(os.path.dirname(full_path))
with open(full_path, 'a') as f:
f.write(contents)
return full_path
def full_path(self, filename):
"""Translate relative path to full path in temp dir.
f.full_path('foo/bar.txt') -> /tmp/asdfasd/foo/bar.txt
"""
return os.path.join(self.rootdir, filename)
class ProcessTerminatedError(Exception):
pass
class Result(object):
def __init__(self, rc, stdout, stderr, memory_usage=None):
self.rc = rc
self.stdout = stdout
self.stderr = stderr
INTEG_LOG.debug("rc: %s", rc)
INTEG_LOG.debug("stdout: %s", stdout)
INTEG_LOG.debug("stderr: %s", stderr)
if memory_usage is None:
memory_usage = []
self.memory_usage = memory_usage
@property
def json(self):
return json.loads(self.stdout)
def _escape_quotes(command):
# For Windows we have different rules for escaping.
# First, double quotes must be escaped.
command = command.replace('"', '\\"')
# Second, single quotes do nothing, to quote a value we need
# to use double quotes.
command = command.replace("'", '"')
return command
def aws(command, collect_memory=False, env_vars=None,
wait_for_finish=True, input_data=None, input_file=None):
"""Run an aws command.
This helper function abstracts away the differences in running the "aws"
command on different platforms.
If collect_memory is ``True`` then the Result object will have a list
of memory usage taken at 2 second intervals. The memory usage
will be in bytes.
If env_vars is not None, it will be used as the environment
variables for the aws process.
If wait_for_finish is False, then the Process object is returned
to the caller. It is then the caller's responsibility to ensure
proper cleanup. This can be useful if you want to test timeouts
or how the CLI responds to various signals.
:type input_data: string
:param input_data: This string will be communicated to the process through
the stdin of the process. It essentially allows the user to
avoid having to use a file handle to pass information to the process.
Note that this string is not passed on creation of the process, but
rather communicated to the process.
:type input_file: a file handle
:param input_file: This is a file handle that will act as the
stdin of the process immediately on creation. Essentially
any data written to the file will be read from stdin of the
process. This is needed if you plan to stream data into stdin while
collecting memory.
"""
if platform.system() == 'Windows':
command = _escape_quotes(command)
if 'AWS_TEST_COMMAND' in os.environ:
aws_command = os.environ['AWS_TEST_COMMAND']
else:
aws_command = 'python %s' % get_aws_cmd()
full_command = '%s %s' % (aws_command, command)
stdout_encoding = get_stdout_encoding()
if isinstance(full_command, six.text_type) and not six.PY3:
full_command = full_command.encode(stdout_encoding)
INTEG_LOG.debug("Running command: %s", full_command)
env = os.environ.copy()
env['AWS_DEFAULT_REGION'] = "us-east-1"
if env_vars is not None:
env = env_vars
if input_file is None:
input_file = PIPE
process = Popen(full_command, stdout=PIPE, stderr=PIPE, stdin=input_file,
shell=True, env=env)
if not wait_for_finish:
return process
memory = None
if not collect_memory:
kwargs = {}
if input_data:
kwargs = {'input': input_data}
stdout, stderr = process.communicate(**kwargs)
else:
stdout, stderr, memory = _wait_and_collect_mem(process)
return Result(process.returncode,
stdout.decode(stdout_encoding),
stderr.decode(stdout_encoding),
memory)
def get_stdout_encoding():
encoding = getattr(sys.__stdout__, 'encoding', None)
if encoding is None:
encoding = 'utf-8'
return encoding
def _wait_and_collect_mem(process):
# We only know how to collect memory on mac/linux.
if platform.system() == 'Darwin':
get_memory = _get_memory_with_ps
elif platform.system() == 'Linux':
get_memory = _get_memory_with_ps
else:
raise ValueError(
"Can't collect memory for process on platform %s." %
platform.system())
memory = []
while process.poll() is None:
try:
current = get_memory(process.pid)
except ProcessTerminatedError:
# It's possible the process terminated between .poll()
# and get_memory().
break
memory.append(current)
stdout, stderr = process.communicate()
return stdout, stderr, memory
def _get_memory_with_ps(pid):
# It's probably possible to do with proc_pidinfo and ctypes on a Mac,
# but we'll do it the easy way with parsing ps output.
command_list = 'ps u -p'.split()
command_list.append(str(pid))
p = Popen(command_list, stdout=PIPE)
stdout = p.communicate()[0]
if not p.returncode == 0:
raise ProcessTerminatedError(str(pid))
else:
# Get the RSS from output that looks like this:
# USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
# user 47102 0.0 0.1 2437000 4496 s002 S+ 7:04PM 0:00.12 python2.6
return int(stdout.splitlines()[1].split()[5]) * 1024
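The RSS-parsing step above can be split out (hypothetical helper name, for illustration) and exercised against canned ``ps`` output:

```python
def parse_rss_bytes(ps_output):
    # The RSS column (index 5) of `ps u` output is in KiB; convert
    # to bytes. Header row:
    # USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
    return int(ps_output.splitlines()[1].split()[5]) * 1024

sample = (
    "USER   PID %CPU %MEM     VSZ  RSS   TT  STAT STARTED    TIME COMMAND\n"
    "user 47102  0.0  0.1 2437000 4496 s002  S+   7:04PM 0:00.12 python2.6\n"
)
rss = parse_rss_bytes(sample)
```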
class BaseS3CLICommand(unittest.TestCase):
"""Base class for aws s3 command.
This contains convenience functions to make writing these tests easier
and more streamlined.
"""
def setUp(self):
self.files = FileCreator()
self.session = botocore.session.get_session()
self.regions = {}
self.region = 'us-west-2'
self.client = self.session.create_client('s3', region_name=self.region)
self.extra_setup()
def extra_setup(self):
# Subclasses can use this to define extra setup steps.
pass
def tearDown(self):
self.files.remove_all()
self.extra_teardown()
def extra_teardown(self):
# Subclasses can use this to define extra teardown steps.
pass
def assert_key_contents_equal(self, bucket, key, expected_contents):
if isinstance(expected_contents, six.BytesIO):
expected_contents = expected_contents.getvalue().decode('utf-8')
actual_contents = self.get_key_contents(bucket, key)
# The contents can be huge so we try to give helpful error messages
# without necessarily printing the actual contents.
self.assertEqual(len(actual_contents), len(expected_contents))
if actual_contents != expected_contents:
self.fail("Contents for %s/%s do not match (but they "
"have the same length)" % (bucket, key))
def create_bucket(self, name=None, region=None):
if not region:
region = self.region
bucket_name = create_bucket(self.session, name, region)
self.regions[bucket_name] = region
self.addCleanup(self.delete_bucket, bucket_name)
return bucket_name
def put_object(self, bucket_name, key_name, contents='', extra_args=None):
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
call_args = {
'Bucket': bucket_name,
'Key': key_name, 'Body': contents
}
if extra_args is not None:
call_args.update(extra_args)
response = client.put_object(**call_args)
self.addCleanup(self.delete_key, bucket_name, key_name)
def delete_bucket(self, bucket_name):
self.remove_all_objects(bucket_name)
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
response = client.delete_bucket(Bucket=bucket_name)
del self.regions[bucket_name]
def remove_all_objects(self, bucket_name):
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
paginator = client.get_paginator('list_objects')
pages = paginator.paginate(Bucket=bucket_name)
key_names = []
for page in pages:
key_names += [obj['Key'] for obj in page.get('Contents', [])]
for key_name in key_names:
self.delete_key(bucket_name, key_name)
def delete_key(self, bucket_name, key_name):
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
response = client.delete_object(Bucket=bucket_name, Key=key_name)
def get_key_contents(self, bucket_name, key_name):
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
response = client.get_object(Bucket=bucket_name, Key=key_name)
return response['Body'].read().decode('utf-8')
def key_exists(self, bucket_name, key_name):
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
try:
client.head_object(Bucket=bucket_name, Key=key_name)
return True
except ClientError:
return False
def list_buckets(self):
response = self.client.list_buckets()
return response['Buckets']
def content_type_for_key(self, bucket_name, key_name):
parsed = self.head_object(bucket_name, key_name)
return parsed['ContentType']
def head_object(self, bucket_name, key_name):
client = self.session.create_client(
's3', region_name=self.regions[bucket_name])
response = client.head_object(Bucket=bucket_name, Key=key_name)
return response
def assert_no_errors(self, p):
self.assertEqual(
p.rc, 0,
"Non zero rc (%s) received: %s" % (p.rc, p.stdout + p.stderr))
self.assertNotIn("Error:", p.stderr)
self.assertNotIn("failed:", p.stderr)
self.assertNotIn("client error", p.stderr)
self.assertNotIn("server error", p.stderr)
class StringIOWithFileNo(StringIO):
def fileno(self):
return 0
awscli-1.10.1/awscli/examples/ 0000777 4542626 0000144 00000000000 12652514126 017232 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/elb/ 0000777 4542626 0000144 00000000000 12652514126 017774 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/elb/describe-tags.rst 0000666 4542626 0000144 00000001143 12652514124 023237 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the tags assigned to a load balancer**
This example describes the tags assigned to the specified load balancer.
Command::
aws elb describe-tags --load-balancer-name my-load-balancer
Output::
{
"TagDescriptions": [
{
"Tags": [
{
"Value": "lima",
"Key": "project"
},
{
"Value": "digital-media",
"Key": "department"
}
],
"LoadBalancerName": "my-load-balancer"
}
]
}
awscli-1.10.1/awscli/examples/elb/add-tags.rst 0000666 4542626 0000144 00000000347 12652514124 022214 0 ustar pysdk-ci amazon 0000000 0000000 **To add a tag to a load balancer**
This example adds tags to the specified load balancer.
Command::
aws elb add-tags --load-balancer-name my-load-balancer --tags "Key=project,Value=lima" "Key=department,Value=digital-media"
awscli-1.10.1/awscli/examples/elb/create-app-cookie-stickiness-policy.rst 0000666 4542626 0000144 00000000535 12652514124 027471 0 ustar pysdk-ci amazon 0000000 0000000 **To generate a stickiness policy for your HTTPS load balancer**
This example generates a stickiness policy that follows the sticky session lifetimes of the application-generated cookie.
Command::
aws elb create-app-cookie-stickiness-policy --load-balancer-name my-load-balancer --policy-name my-app-cookie-policy --cookie-name my-app-cookie
awscli-1.10.1/awscli/examples/elb/enable-availability-zones-for-load-balancer.rst 0000666 4542626 0000144 00000000606 12652514124 031026 0 ustar pysdk-ci amazon 0000000 0000000 **To enable Availability Zones for a load balancer**
This example adds the specified Availability Zone to the specified load balancer.
Command::
aws elb enable-availability-zones-for-load-balancer --load-balancer-name my-load-balancer --availability-zones us-west-2b
Output::
{
"AvailabilityZones": [
"us-west-2a",
"us-west-2b"
]
}
awscli-1.10.1/awscli/examples/elb/delete-load-balancer.rst 0000666 4542626 0000144 00000000246 12652514124 024452 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a load balancer**
This example deletes the specified load balancer.
Command::
aws elb delete-load-balancer --load-balancer-name my-load-balancer
awscli-1.10.1/awscli/examples/elb/disable-availability-zones-for-load-balancer.rst 0000666 4542626 0000144 00000000625 12652514124 031204 0 ustar pysdk-ci amazon 0000000 0000000 **To disable Availability Zones for a load balancer**
This example removes the specified Availability Zone from the set of Availability Zones for the specified load balancer.
Command::
aws elb disable-availability-zones-for-load-balancer --load-balancer-name my-load-balancer --availability-zones us-west-2a
Output::
{
"AvailabilityZones": [
"us-west-2b"
]
}
awscli-1.10.1/awscli/examples/elb/set-load-balancer-policies-of-listener.rst 0000666 4542626 0000144 00000001331 12652514124 030031 0 ustar pysdk-ci amazon 0000000 0000000 **To replace the policies associated with a listener**
This example replaces the policies that are currently associated with the specified listener.
Command::
aws elb set-load-balancer-policies-of-listener --load-balancer-name my-load-balancer --load-balancer-port 443 --policy-names my-SSLNegotiation-policy
**To remove all policies associated with your listener**
This example removes all policies that are currently associated with the specified listener.
Command::
aws elb set-load-balancer-policies-of-listener --load-balancer-name my-load-balancer --load-balancer-port 443 --policy-names []
To confirm that the policies are removed from the load balancer, use the ``describe-load-balancer-policies`` command.
awscli-1.10.1/awscli/examples/elb/create-lb-cookie-stickiness-policy.rst 0000666 4542626 0000144 00000000562 12652514124 027306 0 ustar pysdk-ci amazon 0000000 0000000 **To generate a duration-based stickiness policy for your HTTPS load balancer**
This example generates a stickiness policy with sticky session lifetimes controlled by the specified expiration period.
Command::
aws elb create-lb-cookie-stickiness-policy --load-balancer-name my-load-balancer --policy-name my-duration-cookie-policy --cookie-expiration-period 60
**To describe the attributes of a load balancer**
This example describes the attributes of the specified load balancer.
Command::
aws elb describe-load-balancer-attributes --load-balancer-name my-load-balancer
Output::
{
"LoadBalancerAttributes": {
"ConnectionDraining": {
"Enabled": false,
"Timeout": 300
},
"CrossZoneLoadBalancing": {
"Enabled": true
},
"ConnectionSettings": {
"IdleTimeout": 30
},
"AccessLog": {
"Enabled": false
}
}
}
**To delete a policy from your load balancer**
This example deletes the specified policy from the specified load balancer. The policy must not be enabled on any listener.
Command::
aws elb delete-load-balancer-policy --load-balancer-name my-load-balancer --policy-name my-duration-cookie-policy
**To register instances with a load balancer**
This example registers the specified instance with the specified load balancer.
Command::
aws elb register-instances-with-load-balancer --load-balancer-name my-load-balancer --instances i-d6f6fae3
Output::
{
"Instances": [
{
"InstanceId": "i-d6f6fae3"
},
{
"InstanceId": "i-207d9717"
},
{
"InstanceId": "i-afefb49b"
}
]
}
**To replace the policies associated with a port for a backend instance**
This example replaces the policies that are currently associated with the specified port.
Command::
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-load-balancer --instance-port 80 --policy-names my-ProxyProtocol-policy
**To remove all policies that are currently associated with a port on your backend instance**
This example removes all policies associated with the specified port.
Command::
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-load-balancer --instance-port 80 --policy-names []
To confirm that the policies are removed, use the ``describe-load-balancer-policies`` command.
**To detach load balancers from subnets**
This example detaches the specified load balancer from the specified subnet.
Command::
aws elb detach-load-balancer-from-subnets --load-balancer-name my-load-balancer --subnets subnet-0ecac448
Output::
{
"Subnets": [
"subnet-15aaab61"
]
}
**To modify the attributes of a load balancer**
This example modifies the ``CrossZoneLoadBalancing`` attribute of the specified load balancer.
Command::
aws elb modify-load-balancer-attributes --load-balancer-name my-load-balancer --load-balancer-attributes "{\"CrossZoneLoadBalancing\":{\"Enabled\":true}}"
Output::
{
"LoadBalancerAttributes": {
"CrossZoneLoadBalancing": {
"Enabled": true
}
},
"LoadBalancerName": "my-load-balancer"
}
This example modifies the ``ConnectionDraining`` attribute of the specified load balancer.
Command::
aws elb modify-load-balancer-attributes --load-balancer-name my-load-balancer --load-balancer-attributes "{\"ConnectionDraining\":{\"Enabled\":true,\"Timeout\":300}}"
Output::
{
"LoadBalancerAttributes": {
"ConnectionDraining": {
"Enabled": true,
"Timeout": 300
}
},
"LoadBalancerName": "my-load-balancer"
}
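The backslash-escaped quotes in the commands above are what the Windows command prompt requires; in a POSIX shell, wrapping the JSON in single quotes avoids the escaping entirely. A quoting sketch (not a full command):

```shell
# Same attribute JSON, single-quoted for POSIX shells; pass it to
# --load-balancer-attributes in place of the escaped form above.
attrs='{"ConnectionDraining":{"Enabled":true,"Timeout":300}}'
printf '%s\n' "$attrs"
```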
**To create listeners for a load balancer**
This example creates a listener for your load balancer at port 80 using the HTTP protocol.
Command::
aws elb create-load-balancer-listeners --load-balancer-name my-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80"
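The shorthand ``--listeners`` value above corresponds to the JSON structure sketched below; the CLI generally accepts either form for structured list parameters. Shown as a shell snippet so the quoting stays intact (``python3`` in the check is an assumption about the local environment):

```shell
# JSON equivalent of the shorthand listener specification above.
listeners='[{"Protocol": "HTTP", "LoadBalancerPort": 80, "InstanceProtocol": "HTTP", "InstancePort": 80}]'
printf '%s\n' "$listeners"
```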
**To attach subnets to a load balancer**
This example adds the specified subnet to the set of configured subnets for the specified load balancer.
Command::
aws elb attach-load-balancer-to-subnets --load-balancer-name my-load-balancer --subnets subnet-0ecac448
Output::
{
"Subnets": [
"subnet-15aaab61",
"subnet-0ecac448"
]
}
**To deregister instances from a load balancer**
This example deregisters the specified instance from the specified load balancer.
Command::
aws elb deregister-instances-from-load-balancer --load-balancer-name my-load-balancer --instances i-d6f6fae3
Output::
{
"Instances": [
{
"InstanceId": "i-207d9717"
},
{
"InstanceId": "i-afefb49b"
}
]
}
**To describe your load balancers**
This example describes all of your load balancers.
Command::
aws elb describe-load-balancers
**To describe one of your load balancers**
This example describes the specified load balancer.
Command::
aws elb describe-load-balancers --load-balancer-name my-load-balancer
The following example response is for an HTTPS load balancer in a VPC.
Output::
{
"LoadBalancerDescriptions": [
{
"Subnets": [
"subnet-15aaab61"
],
"CanonicalHostedZoneNameID": "Z3DZXE0EXAMPLE",
"CanonicalHostedZoneName": "my-load-balancer-1234567890.us-west-2.elb.amazonaws.com",
"ListenerDescriptions": [
{
"Listener": {
"InstancePort": 80,
"LoadBalancerPort": 80,
"Protocol": "HTTP",
"InstanceProtocol": "HTTP"
},
"PolicyNames": []
},
{
"Listener": {
"InstancePort": 443,
"SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-server-cert",
"LoadBalancerPort": 443,
"Protocol": "HTTPS",
"InstanceProtocol": "HTTPS"
},
"PolicyNames": [
"ELBSecurityPolicy-2015-03"
]
}
],
"HealthCheck": {
"HealthyThreshold": 2,
"Interval": 30,
"Target": "HTTP:80/png",
"Timeout": 3,
"UnhealthyThreshold": 2
},
"VPCId": "vpc-a01106c2",
"BackendServerDescriptions": [
{
"InstancePort": 80,
"PolicyNames": [
"my-ProxyProtocol-policy"
]
}
],
"Instances": [
{
"InstanceId": "i-207d9717"
},
{
"InstanceId": "i-afefb49b"
}
],
"DNSName": "my-load-balancer-1234567890.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-a61988c3"
],
"Policies": {
"LBCookieStickinessPolicies": [
{
"PolicyName": "my-duration-cookie-policy",
"CookieExpirationPeriod": 60
}
],
"AppCookieStickinessPolicies": [],
"OtherPolicies": [
"my-PublicKey-policy",
"my-authentication-policy",
"my-SSLNegotiation-policy",
"my-ProxyProtocol-policy",
"ELBSecurityPolicy-2015-03"
]
},
"LoadBalancerName": "my-load-balancer",
"CreatedTime": "2015-03-19T03:24:02.650Z",
"AvailabilityZones": [
"us-west-2a"
],
"Scheme": "internet-facing",
"SourceSecurityGroup": {
"OwnerAlias": "123456789012",
"GroupName": "my-elb-sg"
}
}
]
}
**To describe all policies associated with a load balancer**
This example describes all of the policies associated with the specified load balancer.
Command::
aws elb describe-load-balancer-policies --load-balancer-name my-load-balancer
Output::
{
"PolicyDescriptions": [
{
"PolicyAttributeDescriptions": [
{
"AttributeName": "ProxyProtocol",
"AttributeValue": "true"
}
],
"PolicyName": "my-ProxyProtocol-policy",
"PolicyTypeName": "ProxyProtocolPolicyType"
},
{
"PolicyAttributeDescriptions": [
{
"AttributeName": "CookieName",
"AttributeValue": "my-app-cookie"
}
],
"PolicyName": "my-app-cookie-policy",
"PolicyTypeName": "AppCookieStickinessPolicyType"
},
{
"PolicyAttributeDescriptions": [
{
"AttributeName": "CookieExpirationPeriod",
"AttributeValue": "60"
}
],
"PolicyName": "my-duration-cookie-policy",
"PolicyTypeName": "LBCookieStickinessPolicyType"
},
.
.
.
]
}
**To describe a specific policy associated with a load balancer**
This example describes the specified policy associated with the specified load balancer.
Command::
aws elb describe-load-balancer-policies --load-balancer-name my-load-balancer --policy-name my-authentication-policy
Output::
{
"PolicyDescriptions": [
{
"PolicyAttributeDescriptions": [
{
"AttributeName": "PublicKeyPolicyName",
"AttributeValue": "my-PublicKey-policy"
}
],
"PolicyName": "my-authentication-policy",
"PolicyTypeName": "BackendServerAuthenticationPolicyType"
}
]
}
**To describe the health of the instances for a load balancer**
This example describes the health of the instances for the specified load balancer.
Command::
aws elb describe-instance-health --load-balancer-name my-load-balancer
Output::
{
"InstanceStates": [
{
"InstanceId": "i-207d9717",
"ReasonCode": "N/A",
"State": "InService",
"Description": "N/A"
},
{
"InstanceId": "i-afefb49b",
"ReasonCode": "N/A",
"State": "InService",
"Description": "N/A"
        }
]
}
**To describe the health of an instance for a load balancer**
This example describes the health of the specified instance for the specified load balancer.
Command::
aws elb describe-instance-health --load-balancer-name my-load-balancer --instances i-7299c809
The following is an example response for an instance that is registering.
Output::
{
"InstanceStates": [
{
"InstanceId": "i-7299c809",
"ReasonCode": "ELB",
"State": "OutOfService",
"Description": "Instance registration is still in progress."
}
]
}
The following is an example response for an unhealthy instance.
Output::
{
"InstanceStates": [
{
"InstanceId": "i-7299c809",
"ReasonCode": "Instance",
"State": "OutOfService",
"Description": "Instance has failed at least the UnhealthyThreshold number of health checks consecutively."
}
]
}
**To create an HTTP load balancer**
This example creates an HTTP load balancer in a VPC.
Command::
aws elb create-load-balancer --load-balancer-name my-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --subnets subnet-15aaab61 --security-groups sg-a61988c3
Output::
{
"DNSName": "my-load-balancer-1234567890.us-west-2.elb.amazonaws.com"
}
This example creates an HTTP load balancer in EC2-Classic.
Command::
aws elb create-load-balancer --load-balancer-name my-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --availability-zones us-west-2a us-west-2b
Output::
{
"DNSName": "my-load-balancer-123456789.us-west-2.elb.amazonaws.com"
}
**To create an HTTPS load balancer**
This example creates an HTTPS load balancer in a VPC.
Command::
aws elb create-load-balancer --load-balancer-name my-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-server-cert" --subnets subnet-15aaab61 --security-groups sg-a61988c3
Output::
{
"DNSName": "my-load-balancer-1234567890.us-west-2.elb.amazonaws.com"
}
This example creates an HTTPS load balancer in EC2-Classic.
Command::
aws elb create-load-balancer --load-balancer-name my-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-server-cert" --availability-zones us-west-2a us-west-2b
Output::
{
"DNSName": "my-load-balancer-123456789.us-west-2.elb.amazonaws.com"
}
**To create an internal load balancer**
This example creates an internal HTTP load balancer in a VPC.
Command::
aws elb create-load-balancer --load-balancer-name my-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --scheme internal --subnets subnet-a85db0df --security-groups sg-a61988c3
Output::
{
"DNSName": "internal-my-load-balancer-123456789.us-west-2.elb.amazonaws.com"
}
**To delete a listener from your load balancer**
This example deletes the listener for the specified port from the specified load balancer.
Command::
aws elb delete-load-balancer-listeners --load-balancer-name my-load-balancer --load-balancer-ports 80
**To describe the load balancer policy types defined by Elastic Load Balancing**
This example describes the load balancer policy types that you can use to create policy configurations for your load balancer.
Command::
aws elb describe-load-balancer-policy-types
Output::
{
"PolicyTypeDescriptions": [
{
"PolicyAttributeTypeDescriptions": [
{
"Cardinality": "ONE",
"AttributeName": "ProxyProtocol",
"AttributeType": "Boolean"
}
],
"PolicyTypeName": "ProxyProtocolPolicyType",
"Description": "Policy that controls whether to include the IP address and port of the originating request for TCP messages. This policy operates on TCP/SSL listeners only"
},
{
"PolicyAttributeTypeDescriptions": [
{
"Cardinality": "ONE",
"AttributeName": "PublicKey",
"AttributeType": "String"
}
],
"PolicyTypeName": "PublicKeyPolicyType",
"Description": "Policy containing a list of public keys to accept when authenticating the back-end server(s). This policy cannot be applied directly to back-end servers or listeners but must be part of a BackendServerAuthenticationPolicyType."
},
{
"PolicyAttributeTypeDescriptions": [
{
"Cardinality": "ONE",
"AttributeName": "CookieName",
"AttributeType": "String"
}
],
"PolicyTypeName": "AppCookieStickinessPolicyType",
"Description": "Stickiness policy with session lifetimes controlled by the lifetime of the application-generated cookie. This policy can be associated only with HTTP/HTTPS listeners."
},
{
"PolicyAttributeTypeDescriptions": [
{
"Cardinality": "ZERO_OR_ONE",
"AttributeName": "CookieExpirationPeriod",
"AttributeType": "Long"
}
],
"PolicyTypeName": "LBCookieStickinessPolicyType",
"Description": "Stickiness policy with session lifetimes controlled by the browser (user-agent) or a specified expiration period. This policy can be associated only with HTTP/HTTPS listeners."
},
{
"PolicyAttributeTypeDescriptions": [
.
.
.
],
"PolicyTypeName": "SSLNegotiationPolicyType",
"Description": "Listener policy that defines the ciphers and protocols that will be accepted by the load balancer. This policy can be associated only with HTTPS/SSL listeners."
},
{
"PolicyAttributeTypeDescriptions": [
{
"Cardinality": "ONE_OR_MORE",
"AttributeName": "PublicKeyPolicyName",
"AttributeType": "PolicyName"
}
],
"PolicyTypeName": "BackendServerAuthenticationPolicyType",
"Description": "Policy that controls authentication to back-end server(s) and contains one or more policies, such as an instance of a PublicKeyPolicyType. This policy can be associated only with back-end servers that are using HTTPS/SSL."
}
]
}
**To update the SSL certificate for an HTTPS load balancer**
This example replaces the existing SSL certificate for the specified HTTPS load balancer.
Command::
aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-load-balancer --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/new-server-cert
**To specify the health check settings for your backend EC2 instances**
This example specifies the health check settings used to evaluate the health of your backend EC2 instances.
Command::
aws elb configure-health-check --load-balancer-name my-load-balancer --health-check Target=HTTP:80/png,Interval=30,UnhealthyThreshold=2,HealthyThreshold=2,Timeout=3
Output::
{
"HealthCheck": {
"HealthyThreshold": 2,
"Interval": 30,
"Target": "HTTP:80/png",
"Timeout": 3,
"UnhealthyThreshold": 2
        }
    }
**To remove tags from a load balancer**
This example removes a tag from the specified load balancer.
Command::
aws elb remove-tags --load-balancer-name my-load-balancer --tags project
**To create a policy that enables Proxy Protocol on a load balancer**
This example creates a policy that enables Proxy Protocol on the specified load balancer.
Command::
aws elb create-load-balancer-policy --load-balancer-name my-load-balancer --policy-name my-ProxyProtocol-policy --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
**To create an SSL negotiation policy using the recommended security policy**
This example creates an SSL negotiation policy for the specified HTTPS load balancer using the recommended security policy.
Command::
aws elb create-load-balancer-policy --load-balancer-name my-load-balancer --policy-name my-SSLNegotiation-policy --policy-type-name SSLNegotiationPolicyType --policy-attributes AttributeName=Reference-Security-Policy,AttributeValue=ELBSecurityPolicy-2015-03
**To create an SSL negotiation policy using a custom security policy**
This example creates an SSL negotiation policy for your HTTPS load balancer using a custom security policy that enables specific protocols and ciphers.
Command::
aws elb create-load-balancer-policy --load-balancer-name my-load-balancer --policy-name my-SSLNegotiation-policy --policy-type-name SSLNegotiationPolicyType --policy-attributes AttributeName=Protocol-SSLv3,AttributeValue=true AttributeName=Protocol-TLSv1.1,AttributeValue=true AttributeName=DHE-RSA-AES256-SHA256,AttributeValue=true AttributeName=Server-Defined-Cipher-Order,AttributeValue=true
**To create a public key policy**
This example creates a public key policy.
Command::
aws elb create-load-balancer-policy --load-balancer-name my-load-balancer --policy-name my-PublicKey-policy --policy-type-name PublicKeyPolicyType --policy-attributes AttributeName=PublicKey,AttributeValue=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwAYUjnfyEyXr1pxjhFWBpMlggUcqoi3kl+dS74kj//c6x7ROtusUaeQCTgIUkayttRDWchuqo1pHC1u+n5xxXnBBe2ejbb2WRsKIQ5rXEeixsjFpFsojpSQKkzhVGI6mJVZBJDVKSHmswnwLBdofLhzvllpovBPTHe+o4haAWvDBALJU0pkSI1FecPHcs2hwxf14zHoXy1e2k36A64nXW43wtfx5qcVSIxtCEOjnYRg7RPvybaGfQ+v6Iaxb/+7J5kEvZhTFQId+bSiJImF1FSUT1W1xwzBZPUbcUkkXDj45vC2s3Z8E+Lk7a3uZhvsQHLZnrfuWjBWGWvZ/MhZYgEXAMPLE
**To create a backend server authentication policy**
This example creates a backend server authentication policy that enables authentication on your backend instance using a public key policy.
Command::
aws elb create-load-balancer-policy --load-balancer-name my-load-balancer --policy-name my-authentication-policy --policy-type-name BackendServerAuthenticationPolicyType --policy-attributes AttributeName=PublicKeyPolicyName,AttributeValue=my-PublicKey-policy
**To associate a security group with a load balancer in a VPC**
This example associates a security group with the specified load balancer in a VPC.
Command::
aws elb apply-security-groups-to-load-balancer --load-balancer-name my-load-balancer --security-groups sg-fc448899
Output::
{
"SecurityGroups": [
"sg-fc448899"
]
}
**To retrieve multiple items from a table**
This example reads multiple items from the *MusicCollection* table using a batch of three GetItem requests. Only the *AlbumTitle* attribute is returned.
Command::
aws dynamodb batch-get-item --request-items file://request-items.json
The arguments for ``--request-items`` are stored in a JSON file, ``request-items.json``. Here are the contents of that file::
{
"MusicCollection": {
"Keys": [
{
"Artist": {"S": "No One You Know"},
"SongTitle": {"S": "Call Me Today"}
},
{
"Artist": {"S": "Acme Band"},
"SongTitle": {"S": "Happy Day"}
},
{
"Artist": {"S": "No One You Know"},
"SongTitle": {"S": "Scared of My Shadow"}
}
],
"ProjectionExpression":"AlbumTitle"
}
}
Output::
{
"UnprocessedKeys": {},
"Responses": {
"MusicCollection": [
{
"AlbumTitle": {
"S": "Somewhat Famous"
}
},
{
"AlbumTitle": {
"S": "Blue Sky Blues"
}
},
{
"AlbumTitle": {
"S": "Louder Than Ever"
}
}
]
}
}
**To create a table**
This example creates a table named *MusicCollection*.
Command::
aws dynamodb create-table --table-name MusicCollection --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
Output::
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 5
},
"TableSizeBytes": 0,
"TableName": "MusicCollection",
"TableStatus": "CREATING",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1421866952.062
}
}
**To add an item to a table**
This example adds a new item to the *MusicCollection* table.
Command::
aws dynamodb put-item --table-name MusicCollection --item file://item.json --return-consumed-capacity TOTAL
The arguments for ``--item`` are stored in a JSON file, ``item.json``. Here are the contents of that file::
{
"Artist": {"S": "No One You Know"},
"SongTitle": {"S": "Call Me Today"},
"AlbumTitle": {"S": "Somewhat Famous"}
}
Output::
{
"ConsumedCapacity": {
"CapacityUnits": 1.0,
"TableName": "MusicCollection"
}
}
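One way to create ``item.json`` without an editor is a quoted here-document; the trailing check simply confirms the file parses as JSON before it is passed to ``--item`` (``python3`` is an assumption about the local environment):

```shell
# Write item.json inline; the quoted EOF prevents any shell expansion.
cat > item.json <<'EOF'
{
    "Artist": {"S": "No One You Know"},
    "SongTitle": {"S": "Call Me Today"},
    "AlbumTitle": {"S": "Somewhat Famous"}
}
EOF
# Sanity-check that the file is valid JSON.
python3 -c 'import json; print(sorted(json.load(open("item.json"))))'
# → ['AlbumTitle', 'Artist', 'SongTitle']
```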
**To list tables**
This example lists all of the tables associated with the current AWS account and endpoint.
Command::
aws dynamodb list-tables
Output::
{
"TableNames": [
"Forum",
"ProductCatalog",
"Reply",
        "Thread"
]
}
**To query an item**
This example queries items in the *MusicCollection* table. The table has a hash-and-range primary key (*Artist* and *SongTitle*), but this query only specifies the hash key value. It returns song titles by the artist named "No One You Know".
Command::
aws dynamodb query --table-name MusicCollection --key-conditions file://key-conditions.json --projection-expression "SongTitle"
The arguments for ``--key-conditions`` are stored in a JSON file, ``key-conditions.json``. Here are the contents of that file::
{
"Artist": {
"AttributeValueList": [
{"S": "No One You Know"}
],
"ComparisonOperator": "EQ"
}
}
Output::
{
"Count": 2,
"Items": [
{
"SongTitle": {
"S": "Call Me Today"
}
},
{
"SongTitle": {
"S": "Scared of My Shadow"
}
}
],
"ScannedCount": 2,
"ConsumedCapacity": null
}
**To read an item in a table**
This example retrieves an item from the *MusicCollection* table. The table has a hash-and-range primary key (*Artist* and *SongTitle*), so you must specify both of these attributes.
Command::
aws dynamodb get-item --table-name MusicCollection --key file://key.json
The arguments for ``--key`` are stored in a JSON file, ``key.json``. Here are the contents of that file::
{
"Artist": {"S": "Acme Band"},
"SongTitle": {"S": "Happy Day"}
}
Output::
{
"Item": {
"AlbumTitle": {
"S": "Songs About Life"
},
"SongTitle": {
"S": "Happy Day"
},
"Artist": {
"S": "Acme Band"
}
}
}
**To delete an item**
This example deletes an item from the *MusicCollection* table.
Command::
aws dynamodb delete-item --table-name MusicCollection --key file://key.json
The arguments for ``--key`` are stored in a JSON file, ``key.json``. Here are the contents of that file::
{
"Artist": {"S": "No One You Know"},
"SongTitle": {"S": "Scared of My Shadow"}
}
Output::
{
"ConsumedCapacity": {
"CapacityUnits": 1.0,
"TableName": "MusicCollection"
}
}
**To modify a table's provisioned throughput**
This example increases the provisioned read and write capacity on the *MusicCollection* table.
Command::
aws dynamodb update-table --table-name MusicCollection --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10
Output::
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 1,
"LastIncreaseDateTime": 1421874759.194,
"ReadCapacityUnits": 1
},
"TableSizeBytes": 0,
"TableName": "MusicCollection",
"TableStatus": "UPDATING",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1421866952.062
}
}
**To describe a table**
This example describes the *MusicCollection* table.
Command::
aws dynamodb describe-table --table-name MusicCollection
Output::
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 5
},
"TableSizeBytes": 0,
"TableName": "MusicCollection",
"TableStatus": "ACTIVE",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1421866952.062
}
}
**To delete a table**
This example deletes the *MusicCollection* table.
Command::
aws dynamodb delete-table --table-name MusicCollection
Output::
{
"TableDescription": {
"TableStatus": "DELETING",
"TableSizeBytes": 0,
"ItemCount": 0,
"TableName": "MusicCollection",
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 5
}
}
}
**To scan a table**
This example scans the entire *MusicCollection* table, and then narrows the results to songs by the artist "No One You Know". For each item, only the album title and song title are returned.
Command::
aws dynamodb scan --table-name MusicCollection --filter-expression "Artist = :a" --projection-expression "#ST, #AT" --expression-attribute-names file://expression-attribute-names.json --expression-attribute-values file://expression-attribute-values.json
The arguments for ``--expression-attribute-names`` are stored in a JSON file, ``expression-attribute-names.json``. Here are the contents of that file::
{
"#ST": "SongTitle",
"#AT":"AlbumTitle"
}
The arguments for ``--expression-attribute-values`` are stored in a JSON file, ``expression-attribute-values.json``. Here are the contents of that file::
{
":a": {"S": "No One You Know"}
}
Output::
{
"Count": 2,
"Items": [
{
"SongTitle": {
"S": "Call Me Today"
},
"AlbumTitle": {
"S": "Somewhat Famous"
}
},
{
"SongTitle": {
"S": "Scared of My Shadow"
},
"AlbumTitle": {
"S": "Blue Sky Blues"
}
}
],
"ScannedCount": 3,
"ConsumedCapacity": null
}
**To update an item in a table**
This example updates an item in the *MusicCollection* table. It adds a new attribute (*Year*) and modifies the *AlbumTitle* attribute. All of the attributes in the item, as they appear after the update, are returned in the response.
Command::
aws dynamodb update-item --table-name MusicCollection --key file://key.json --update-expression "SET #Y = :y, #AT = :t" --expression-attribute-names file://expression-attribute-names.json --expression-attribute-values file://expression-attribute-values.json --return-values ALL_NEW
The arguments for ``--key`` are stored in a JSON file, ``key.json``. Here are the contents of that file::
{
"Artist": {"S": "Acme Band"},
"SongTitle": {"S": "Happy Day"}
}
The arguments for ``--expression-attribute-names`` are stored in a JSON file, ``expression-attribute-names.json``. Here are the contents of that file::
{
"#Y":"Year", "#AT":"AlbumTitle"
}
The arguments for ``--expression-attribute-values`` are stored in a JSON file, ``expression-attribute-values.json``. Here are the contents of that file::
{
":y":{"N": "2015"},
":t":{"S": "Louder Than Ever"}
}
Output::
    {
        "Attributes": {
            "AlbumTitle": {
                "S": "Louder Than Ever"
            },
            "Artist": {
                "S": "Acme Band"
            },
            "SongTitle": {
                "S": "Happy Day"
            },
            "Year": {
                "N": "2015"
            }
        }
    }
**To add multiple items to a table**
This example adds three new items to the *MusicCollection* table using a batch of three PutItem requests.
Command::
aws dynamodb batch-write-item --request-items file://request-items.json
The arguments for ``--request-items`` are stored in a JSON file, ``request-items.json``. Here are the contents of that file::
{
"MusicCollection": [
{
"PutRequest": {
"Item": {
"Artist": {"S": "No One You Know"},
"SongTitle": {"S": "Call Me Today"},
"AlbumTitle": {"S": "Somewhat Famous"}
}
}
},
{
"PutRequest": {
"Item": {
"Artist": {"S": "Acme Band"},
"SongTitle": {"S": "Happy Day"},
"AlbumTitle": {"S": "Songs About Life"}
}
}
},
{
"PutRequest": {
"Item": {
"Artist": {"S": "No One You Know"},
"SongTitle": {"S": "Scared of My Shadow"},
"AlbumTitle": {"S": "Blue Sky Blues"}
}
}
}
]
}
Output::
{
"UnprocessedItems": {}
}
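Because every entry in ``request-items.json`` has the same ``PutRequest`` shape, the file is easy to build from plain data. A minimal Python sketch (the tuple list is just the three songs above; BatchWriteItem accepts at most 25 put or delete requests per call):

```python
import json

# Each song as (Artist, SongTitle, AlbumTitle).
songs = [
    ("No One You Know", "Call Me Today", "Somewhat Famous"),
    ("Acme Band", "Happy Day", "Songs About Life"),
    ("No One You Know", "Scared of My Shadow", "Blue Sky Blues"),
]

# Wrap each song in the PutRequest structure batch-write-item expects.
request_items = {
    "MusicCollection": [
        {
            "PutRequest": {
                "Item": {
                    "Artist": {"S": artist},
                    "SongTitle": {"S": song},
                    "AlbumTitle": {"S": album},
                }
            }
        }
        for artist, song, album in songs
    ]
}

with open("request-items.json", "w") as f:
    json.dump(request_items, f, indent=4)
```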
awscli-1.10.1/awscli/examples/codepipeline/ 0000777 4542626 0000144 00000000000 12652514126 021672 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/codepipeline/delete-custom-action-type.rst 0000666 4542626 0000144 00000001141 12652514124 027423 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a custom action**
This example deletes a custom action in AWS CodePipeline by using an already-created JSON file (here named ``DeleteMyCustomAction.json``) that contains the action type, provider name, and version number of the action to be deleted. Use the ``list-action-types`` command to view the correct values for category, version, and provider.
Command::
aws codepipeline delete-custom-action-type --cli-input-json file://DeleteMyCustomAction.json
JSON file sample contents::
{
"category": "Build",
"version": "1",
"provider": "MyJenkinsProviderName"
}
Output::
None. awscli-1.10.1/awscli/examples/codepipeline/get-pipeline.rst 0000666 4542626 0000144 00000004557 12652514124 025017 0 ustar pysdk-ci amazon 0000000 0000000 **To view the structure of a pipeline**
This example returns the structure of a pipeline named MyFirstPipeline.
Command::
aws codepipeline get-pipeline --name MyFirstPipeline
Output::
{
"pipeline": {
"roleArn": "arn:aws:iam::111111111111:role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "S3"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"S3Bucket": "awscodepipeline-demo-bucket",
"S3ObjectKey": "aws-codepipeline-s3-aws-codedeploy_linux.zip"
},
"runOrder": 1
}
]
},
{
"name": "Beta",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodePipelineDemoFleet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-11EXAMPLE11"
},
"name": "MyFirstPipeline",
"version": 1
}
}
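A common workflow is to save this ``get-pipeline`` output to a file, edit it, and pass it back to ``update-pipeline`` with ``--cli-input-json``. The edit step can be scripted; this Python sketch modifies a fragment of the structure above (only the fields needed for the illustration are included):

```python
import json

# A fragment of the get-pipeline output shown above.
pipeline_doc = {
    "pipeline": {
        "name": "MyFirstPipeline",
        "version": 1,
        "stages": [
            {
                "name": "Source",
                "actions": [
                    {
                        "name": "Source",
                        "configuration": {
                            "S3Bucket": "awscodepipeline-demo-bucket",
                            "S3ObjectKey": "aws-codepipeline-s3-aws-codedeploy_linux.zip",
                        },
                    }
                ],
            }
        ],
    }
}

# Point the Source action at a different bucket, then save the file
# for use with: aws codepipeline update-pipeline --cli-input-json file://MyFirstPipeline.json
source_action = pipeline_doc["pipeline"]["stages"][0]["actions"][0]
source_action["configuration"]["S3Bucket"] = "awscodepipeline-demo-bucket2"

with open("MyFirstPipeline.json", "w") as f:
    json.dump(pipeline_doc, f, indent=4)
```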
awscli-1.10.1/awscli/examples/codepipeline/acknowledge-job.rst 0000666 4542626 0000144 00000000647 12652514124 025464 0 ustar pysdk-ci amazon 0000000 0000000 **To retrieve information about a specified job**
This example returns information about a specified job, including the status of that job if it exists. This command is used only by job workers for custom actions. To determine the values for the nonce and the job ID, use the ``poll-for-jobs`` command.
Command::
aws codepipeline acknowledge-job --job-id f4f4ff82-2d11-EXAMPLE --nonce 3
Output::
{
"status": "InProgress"
} awscli-1.10.1/awscli/examples/codepipeline/update-pipeline.rst 0000666 4542626 0000144 00000007000 12652514124 025504 0 ustar pysdk-ci amazon 0000000 0000000 **To update the structure of a pipeline**
This example updates the structure of a pipeline by using a pre-defined JSON file (MyFirstPipeline.json) to supply the new structure.
Command::
aws codepipeline update-pipeline --cli-input-json file://MyFirstPipeline.json
Sample JSON file contents::
{
"pipeline": {
"roleArn": "arn:aws:iam::111111111111:role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "S3"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"S3Bucket": "awscodepipeline-demo-bucket2",
"S3ObjectKey": "aws-codepipeline-s3-aws-codedeploy_linux.zip"
},
"runOrder": 1
}
]
},
{
"name": "Beta",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodePipelineDemoFleet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-11EXAMPLE11"
},
"name": "MyFirstPipeline",
"version": 1
}
}
Output::
{
"pipeline": {
"artifactStore": {
"location": "codepipeline-us-east-1-11EXAMPLE11",
"type": "S3"
},
"name": "MyFirstPipeline",
"roleArn": "arn:aws:iam::111111111111:role/AWS-CodePipeline-Service",
"stages": [
{
"actions": [
{
"actionTypeId": {
"__type": "ActionTypeId",
"category": "Source",
"owner": "AWS",
"provider": "S3",
"version": "1"
},
"configuration": {
"S3Bucket": "awscodepipeline-demo-bucket2",
"S3ObjectKey": "aws-codepipeline-s3-aws-codedeploy_linux.zip"
},
"inputArtifacts": [],
"name": "Source",
"outputArtifacts": [
{
"name": "MyApp"
}
],
"runOrder": 1
}
],
"name": "Source"
},
{
"actions": [
{
"actionTypeId": {
"__type": "ActionTypeId",
"category": "Deploy",
"owner": "AWS",
"provider": "CodeDeploy",
"version": "1"
},
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodePipelineDemoFleet",
"outputArtifacts": [],
"runOrder": 1
}
],
"name": "Beta"
}
],
"version": 3
}
}awscli-1.10.1/awscli/examples/codepipeline/poll-for-jobs.rst 0000666 4542626 0000144 00000006256 12652514124 025120 0 ustar pysdk-ci amazon 0000000 0000000 **To view any available jobs**
This example returns information about any jobs for a job worker to act upon. It uses a pre-defined JSON file (``MyActionTypeInfo.json``) to supply information about the action type for which the job worker processes jobs. This command is used only for custom actions. When it is called, AWS CodePipeline returns temporary credentials for the Amazon S3 bucket used to store artifacts for the pipeline, as well as any secret values defined for the action.
Command::
aws codepipeline poll-for-jobs --cli-input-json file://MyActionTypeInfo.json
JSON file sample contents::
{
"actionTypeId": {
"category": "Test",
"owner": "Custom",
"provider": "MyJenkinsProviderName",
"version": "1"
},
"maxBatchSize": 5,
"queryParam": {
"ProjectName": "MyJenkinsTestProject"
}
}
Output::
{
"jobs": [
{
"accountId": "111111111111",
"data": {
"actionConfiguration": {
"__type": "ActionConfiguration",
"configuration": {
"ProjectName": "MyJenkinsExampleTestProject"
}
},
"actionTypeId": {
"__type": "ActionTypeId",
"category": "Test",
"owner": "Custom",
"provider": "MyJenkinsProviderName",
"version": "1"
},
"artifactCredentials": {
"__type": "AWSSessionCredentials",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"sessionToken": "fICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcNMTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9TrDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpEIbb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0FkbFFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTbNYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE="
},
"inputArtifacts": [
{
"__type": "Artifact",
"location": {
"s3Location": {
"bucketName": "codepipeline-us-east-1-11EXAMPLE11",
"objectKey": "MySecondPipeline/MyAppBuild/EXAMPLE"
},
"type": "S3"
},
"name": "MyAppBuild"
}
],
"outputArtifacts": [],
"pipelineContext": {
"__type": "PipelineContext",
"action": {
"name": "MyJenkinsTest-Action"
},
"pipelineName": "MySecondPipeline",
"stage": {
"name": "Testing"
}
}
},
"id": "ef66c259-64f9-EXAMPLE",
"nonce": "3"
}
]
} awscli-1.10.1/awscli/examples/codepipeline/disable-stage-transition.rst 0000666 4542626 0000144 00000000472 12652514124 027321 0 ustar pysdk-ci amazon 0000000 0000000 **To disable a transition to a stage in a pipeline**
This example disables transitions into the Beta stage of the MyFirstPipeline pipeline in AWS CodePipeline.
Command::
aws codepipeline disable-stage-transition --pipeline-name MyFirstPipeline --stage-name Beta --transition-type Inbound
Output::
None. awscli-1.10.1/awscli/examples/codepipeline/create-pipeline.rst 0000666 4542626 0000144 00000004112 12652514124 025466 0 ustar pysdk-ci amazon 0000000 0000000 **To create a pipeline**
This example creates a pipeline in AWS CodePipeline using an already-created JSON file (here named MySecondPipeline.json) that contains the structure of the pipeline. For more information about the requirements for creating a pipeline, including the structure of the file, see the AWS CodePipeline User Guide.
Command::
aws codepipeline create-pipeline --cli-input-json file://MySecondPipeline.json
JSON file sample contents::
{
"pipeline": {
"roleArn": "arn:aws:iam::111111111111:role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "S3"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"S3Bucket": "awscodepipeline-demo-bucket",
"S3ObjectKey": "aws-codepipeline-s3-aws-codedeploy_linux.zip"
},
"runOrder": 1
}
]
},
{
"name": "Beta",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodePipelineDemoFleet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-11EXAMPLE11"
},
"name": "MySecondPipeline",
"version": 1
}
}
Output::
This command returns the structure of the pipeline. awscli-1.10.1/awscli/examples/codepipeline/create-custom-action-type.rst 0000666 4542626 0000144 00000002700 12652514124 027426 0 ustar pysdk-ci amazon 0000000 0000000 **To create a custom action**
This example creates a custom action for AWS CodePipeline using an already-created JSON file (here named MyCustomAction.json) that contains the structure of the custom action. For more information about the requirements for creating a custom action, including the structure of the file, see the AWS CodePipeline User Guide.
Command::
aws codepipeline create-custom-action-type --cli-input-json file://MyCustomAction.json
JSON file sample contents::
{
"actionType": {
"actionConfigurationProperties": [
{
"description": "The name of the build project must be provided when this action is added to the pipeline.",
"key": true,
"name": "MyJenkinsExampleBuildProject",
"queryable": false,
"required": true,
"secret": false
}
],
"id": {
"__type": "ActionTypeId",
"category": "Build",
"owner": "Custom",
"provider": "MyJenkinsProviderName",
"version": "1"
},
"inputArtifactDetails": {
"maximumCount": 1,
"minimumCount": 0
},
"outputArtifactDetails": {
"maximumCount": 1,
"minimumCount": 0
},
"settings": {
"entityUrlTemplate": "https://192.0.2.4/job/{Config:ProjectName}/",
"executionUrlTemplate": "https://192.0.2.4/job/{Config:ProjectName}/lastSuccessfulBuild/{ExternalExecutionId}/"
}
}
}
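The ``{Config:ProjectName}`` and ``{ExternalExecutionId}`` placeholders in the ``settings`` URL templates are filled in from the action's configuration when CodePipeline renders console links. The substitution rule can be illustrated roughly in Python (this is an assumption based on the template syntax, not CodePipeline's exact implementation):

```python
import re

def render(template, config, external_execution_id=None):
    """Fill {Config:Key} and {ExternalExecutionId} placeholders in a URL template."""
    def repl(match):
        token = match.group(1)
        if token == "ExternalExecutionId":
            return external_execution_id or ""
        if token.startswith("Config:"):
            # Look the key up in the action's configuration values.
            return config[token.split(":", 1)[1]]
        return match.group(0)  # leave unknown placeholders untouched
    return re.sub(r"\{([^}]+)\}", repl, template)

url = render("https://192.0.2.4/job/{Config:ProjectName}/",
             {"ProjectName": "MyJenkinsTestProject"})
```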
Output::
This command returns the structure of the custom action. awscli-1.10.1/awscli/examples/codepipeline/list-action-types.rst 0000666 4542626 0000144 00000004474 12652514124 026023 0 ustar pysdk-ci amazon 0000000 0000000 **To view the action types available**
Used by itself, the ``list-action-types`` command returns the structure of all actions available to your AWS account. This example uses the ``--action-owner-filter`` option to return only custom actions.
Command::
aws codepipeline list-action-types --action-owner-filter Custom
Output::
{
"actionTypes": [
{
"inputArtifactDetails": {
"maximumCount": 5,
"minimumCount": 0
},
"actionConfigurationProperties": [
{
"secret": false,
"required": true,
"name": "MyJenkinsExampleBuildProject",
"key": true,
"queryable": true
}
],
"outputArtifactDetails": {
"maximumCount": 5,
"minimumCount": 0
},
"id": {
"category": "Build",
"owner": "Custom",
"version": "1",
"provider": "MyJenkinsProviderName"
},
"settings": {
"entityUrlTemplate": "http://192.0.2.4/job/{Config:ProjectName}",
"executionUrlTemplate": "http://192.0.2.4/job/{Config:ProjectName}/{ExternalExecutionId}"
}
},
{
"inputArtifactDetails": {
"maximumCount": 5,
"minimumCount": 0
},
"actionConfigurationProperties": [
{
"secret": false,
"required": true,
"name": "MyJenkinsExampleTestProject",
"key": true,
"queryable": true
}
],
"outputArtifactDetails": {
"maximumCount": 5,
"minimumCount": 0
},
"id": {
"category": "Test",
"owner": "Custom",
"version": "1",
"provider": "MyJenkinsProviderName"
},
"settings": {
"entityUrlTemplate": "http://192.0.2.4/job/{Config:ProjectName}",
"executionUrlTemplate": "http://192.0.2.4/job/{Config:ProjectName}/{ExternalExecutionId}"
}
}
]
} awscli-1.10.1/awscli/examples/codepipeline/start-pipeline-execution.rst 0000666 4542626 0000144 00000000506 12652514124 027364 0 ustar pysdk-ci amazon 0000000 0000000 **To run the latest revision through a pipeline**
This example runs the latest revision present in the source stage of a pipeline through the pipeline named "MyFirstPipeline".
Command::
aws codepipeline start-pipeline-execution --name MyFirstPipeline
Output::
{
"pipelineExecutionId": "3137f7cb-7cf7-EXAMPLE"
} awscli-1.10.1/awscli/examples/codepipeline/get-pipeline-state.rst 0000666 4542626 0000144 00000002557 12652514124 026133 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about the state of a pipeline**
This example returns the most recent state of a pipeline named MyFirstPipeline.
Command::
aws codepipeline get-pipeline-state --name MyFirstPipeline
Output::
{
"created": 1446137312.204,
"pipelineName": "MyFirstPipeline",
"pipelineVersion": 1,
"stageStates": [
{
"actionStates": [
{
"actionName": "Source",
"entityUrl": "https://console.aws.amazon.com/s3/home?#",
"latestExecution": {
"lastStatusChange": 1446137358.328,
"status": "Succeeded"
}
}
],
"stageName": "Source"
},
{
"actionStates": [
{
"actionName": "CodePipelineDemoFleet",
"entityUrl": "https://console.aws.amazon.com/codedeploy/home?#/applications/CodePipelineDemoApplication/deployment-groups/CodePipelineDemoFleet",
"latestExecution": {
"externalExecutionId": "d-EXAMPLE",
"externalExecutionUrl": "https://console.aws.amazon.com/codedeploy/home?#/deployments/d-EXAMPLE",
"lastStatusChange": 1446137493.131,
"status": "Succeeded",
"summary": "Deployment Succeeded"
}
}
],
"inboundTransitionState": {
"enabled": true
},
"stageName": "Beta"
}
],
"updated": 1446137312.204
}
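The ``created``, ``updated``, and ``lastStatusChange`` fields in this output are Unix epoch timestamps with millisecond precision. They can be converted to readable UTC times with a few lines of Python:

```python
from datetime import datetime, timezone

created = 1446137312.204  # the "created" value from the output above

# Interpret the epoch seconds as a timezone-aware UTC datetime.
dt = datetime.fromtimestamp(created, tz=timezone.utc)
stamp = dt.strftime("%Y-%m-%d %H:%M:%S UTC")
```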
awscli-1.10.1/awscli/examples/codepipeline/enable-stage-transition.rst 0000666 4542626 0000144 00000000467 12652514124 027150 0 ustar pysdk-ci amazon 0000000 0000000 **To enable a transition to a stage in a pipeline**
This example enables transitions into the Beta stage of the MyFirstPipeline pipeline in AWS CodePipeline.
Command::
aws codepipeline enable-stage-transition --pipeline-name MyFirstPipeline --stage-name Beta --transition-type Inbound
Output::
None. awscli-1.10.1/awscli/examples/codepipeline/list-pipelines.rst 0000666 4542626 0000144 00000001030 12652514124 025355 0 ustar pysdk-ci amazon 0000000 0000000 **To view a list of pipelines**
This example lists all AWS CodePipeline pipelines associated with the user's AWS account.
Command::
aws codepipeline list-pipelines
Output::
{
"pipelines": [
{
"updated": 1439504274.641,
"version": 1,
"name": "MyFirstPipeline",
"created": 1439504274.641
},
{
"updated": 1436461837.992,
"version": 2,
"name": "MySecondPipeline",
"created": 1436460801.381
}
]
} awscli-1.10.1/awscli/examples/codepipeline/get-job-details.rst 0000666 4542626 0000144 00000005252 12652514124 025400 0 ustar pysdk-ci amazon 0000000 0000000 **To get details of a job**
This example returns details about a job whose ID is represented by f4f4ff82-2d11-EXAMPLE. This command is used only for custom actions. When it is called, AWS CodePipeline returns temporary credentials for the Amazon S3 bucket used to store artifacts for the pipeline, if required for the custom action, as well as any secret values defined for the action.
Command::
aws codepipeline get-job-details --job-id f4f4ff82-2d11-EXAMPLE
Output::
{
"jobDetails": {
"accountId": "111111111111",
"data": {
"actionConfiguration": {
"__type": "ActionConfiguration",
"configuration": {
"ProjectName": "MyJenkinsExampleTestProject"
}
},
"actionTypeId": {
"__type": "ActionTypeId",
"category": "Test",
"owner": "Custom",
"provider": "MyJenkinsProviderName",
"version": "1"
},
"artifactCredentials": {
"__type": "AWSSessionCredentials",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"sessionToken": "fICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcNMTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9TrDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpEIbb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0FkbFFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTbNYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE="
},
"inputArtifacts": [
{
"__type": "Artifact",
"location": {
"s3Location": {
"bucketName": "codepipeline-us-east-1-11EXAMPLE11",
"objectKey": "MySecondPipeline/MyAppBuild/EXAMPLE"
},
"type": "S3"
},
"name": "MyAppBuild"
}
],
"outputArtifacts": [],
"pipelineContext": {
"__type": "PipelineContext",
"action": {
"name": "MyJenkinsTest-Action"
},
"pipelineName": "MySecondPipeline",
"stage": {
"name": "Testing"
}
}
},
"id": "f4f4ff82-2d11-EXAMPLE"
}
}
awscli-1.10.1/awscli/examples/codepipeline/delete-pipeline.rst 0000666 4542626 0000144 00000000437 12652514124 025473 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a pipeline**
This example deletes a pipeline named MySecondPipeline from AWS CodePipeline. Use the list-pipelines command to view a list of pipelines associated with your AWS account.
Command::
aws codepipeline delete-pipeline --name MySecondPipeline
Output::
None. awscli-1.10.1/awscli/examples/kms/ 0000777 4542626 0000144 00000000000 12652514126 020024 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/kms/decrypt.rst 0000666 4542626 0000144 00000001411 12652514124 022223 0 ustar pysdk-ci amazon 0000000 0000000 The following command decrypts a KMS encrypted, binary encoded version of a 256 bit key named ``encryptedkey-binary`` in the current folder::
aws kms decrypt --ciphertext-blob fileb://encryptedkey-binary
The ``fileb://`` prefix instructs the AWS CLI to treat the file contents as raw binary and not to attempt any decoding.
The output of this command includes the ID of the key used to decrypt the key, and a base64 encoded version of the decrypted key. Use ``base64 -d`` to reverse the base64 encoding and view the original hexadecimal (or otherwise encoded) version of the key. The following command uses an AWS CLI query to isolate the base64 plaintext and pipe it into ``base64``::
aws kms decrypt --ciphertext-blob fileb://encryptedkey-binary --query Plaintext --output text | base64 -d awscli-1.10.1/awscli/examples/kms/create-alias.rst 0000666 4542626 0000144 00000000416 12652514124 023107 0 ustar pysdk-ci amazon 0000000 0000000 The following command creates an alias for a customer master key::
$ aws kms create-alias --alias-name alias/my-alias --target-key-id arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Note that all alias names must begin with ``alias/``.
awscli-1.10.1/awscli/examples/kms/encrypt.rst 0000666 4542626 0000144 00000001262 12652514124 022241 0 ustar pysdk-ci amazon 0000000 0000000 This example shows how to encrypt the string ``"1\!2@3#4$5%6^7&8*9(0)-_=+"``
and save the binary contents to a file::
aws kms encrypt --key-id my-key-id --plaintext "1\!2@3#4$5%6^7&8*9(0)-_=+" --query CiphertextBlob --output text | base64 --decode > /tmp/encrypted
If you want to decrypt the contents of the file above you can use this
command::
echo "Decrypted is: $(aws kms decrypt --ciphertext-blob fileb:///tmp/encrypted --output text --query Plaintext | base64 --decode)"
Note the use of the ``fileb://`` prefix in the ``decrypt`` command above. This
indicates that the referenced file contains binary contents, and that the file
contents are not decoded before being used.
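The base64 encode/decode steps in the round trip above can be reproduced locally without calling KMS. This Python sketch stands in for the shell pipeline (the sample bytes are a placeholder, not real key material or ciphertext):

```python
import base64

# Stand-in for the base64-encoded Plaintext field that `aws kms decrypt`
# returns (32 bytes here, the length of a 256-bit key).
plaintext_b64 = base64.b64encode(b"example 256-bit key material..00").decode()

# Equivalent of piping the --query Plaintext output through `base64 --decode`.
recovered = base64.b64decode(plaintext_b64)
```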
awscli-1.10.1/awscli/examples/autoscaling/ 0000777 4542626 0000144 00000000000 12652514126 021543 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/autoscaling/describe-tags.rst 0000666 4542626 0000144 00000003520 12652514124 025007 0 ustar pysdk-ci amazon 0000000 0000000 **To describe tags**
The following ``describe-tags`` command returns all tags::
aws autoscaling describe-tags
The output of this command is a JSON block that describes the tags for all Auto Scaling groups, similar to the following::
{
"Tags": [
{
"ResourceType": "auto-scaling-group",
"ResourceId": "tags-auto-scaling-group",
"PropagateAtLaunch": true,
"Value": "Research",
"Key": "Dept"
},
{
"ResourceType": "auto-scaling-group",
"ResourceId": "tags-auto-scaling-group",
"PropagateAtLaunch": true,
"Value": "WebServer",
"Key": "Role"
}
]
}
The following example uses the ``filters`` parameter to return tags for a specific Auto Scaling group::
aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=tags-auto-scaling-group
To return a specific number of tags with this command, use the ``max-items`` parameter::
aws autoscaling describe-tags --max-items 1
In this example, the output of this command is a JSON block that describes the first tag::
{
"NextToken": "None___1",
"Tags": [
{
"ResourceType": "auto-scaling-group",
"ResourceId": "tags-auto-scaling-group",
"PropagateAtLaunch": true,
"Value": "Research",
"Key": "Dept"
}
]
}
This JSON block includes a ``NextToken`` field. You can use the value of this field with the ``starting-token`` parameter to return additional tags::
aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=tags-auto-scaling-group --starting-token None___1
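The ``NextToken``/``--starting-token`` pattern above is the generic CLI pagination loop: request a page, collect its results, and repeat with the returned token until none is returned. The loop's shape can be simulated locally (the in-memory page store below is illustrative, not a real API):

```python
# Simulate NextToken-style pagination: each page holds one tag plus an
# optional token pointing at the next page (illustrative data only).
pages = {
    None: {"Tags": [{"Key": "Dept", "Value": "Research"}], "NextToken": "None___1"},
    "None___1": {"Tags": [{"Key": "Role", "Value": "WebServer"}], "NextToken": None},
}

def describe_tags(starting_token=None):
    """Stand-in for one `aws autoscaling describe-tags` call."""
    return pages[starting_token]

tags, token = [], None
while True:
    page = describe_tags(starting_token=token)
    tags.extend(page["Tags"])
    token = page.get("NextToken")
    if not token:          # no token means this was the last page
        break
```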
For more information, see `Add, Modify, or Remove Auto Scaling Group Tags`_ in the *Auto Scaling Developer Guide*.
.. _`Add, Modify, or Remove Auto Scaling Group Tags`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASTagging.html
awscli-1.10.1/awscli/examples/autoscaling/exit-standby.rst 0000666 4542626 0000144 00000001470 12652514124 024710 0 ustar pysdk-ci amazon 0000000 0000000 **To move instances out of standby mode**
This example moves the specified instance out of standby mode::
aws autoscaling exit-standby --instance-ids i-93633f9b --auto-scaling-group-name my-asg
The following is example output::
{
"Activities": [
{
"Description": "Moving EC2 instance out of Standby: i-93633f9b",
"AutoScalingGroupName": "my-asg",
"ActivityId": "142928e1-a2dc-453a-9b24-b85ad6735928",
"Details": {"Availability Zone": "us-west-2a"},
"StartTime": "2015-04-12T15:14:29.886Z",
"Progress": 30,
"Cause": "At 2015-04-12T15:14:29Z instance i-93633f9b was moved out of standby in response to a user request, increasing the capacity from 1 to 2.",
"StatusCode": "PreInService"
}
]
}
awscli-1.10.1/awscli/examples/autoscaling/execute-policy.rst 0000666 4542626 0000144 00000000720 12652514124 025231 0 ustar pysdk-ci amazon 0000000 0000000 **To execute an Auto Scaling policy**
The following ``execute-policy`` command executes an Auto Scaling policy for an Auto Scaling group::
aws autoscaling execute-policy --auto-scaling-group-name basic-auto-scaling-group --policy-name ScaleIn --honor-cooldown
For more information, see `Dynamic Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Dynamic Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
awscli-1.10.1/awscli/examples/autoscaling/attach-load-balancers.rst 0000666 4542626 0000144 00000000371 12652514124 026405 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a load balancer to an Auto Scaling group**
This example attaches the specified load balancer to the specified Auto Scaling group::
aws autoscaling attach-load-balancers --load-balancer-names my-lb --auto-scaling-group-name my-asg
awscli-1.10.1/awscli/examples/autoscaling/describe-notification-configurations.rst 0000666 4542626 0000144 00000004111 12652514124 031564 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the Auto Scaling notification configurations**
The following ``describe-notification-configurations`` command returns the notification configurations for an Auto Scaling group::
aws autoscaling describe-notification-configurations --auto-scaling-group-names basic-auto-scaling-group
The output of this command is a JSON block that describes the notification configurations, similar to the following::
{
"NotificationConfigurations": [
{
"AutoScalingGroupName": "basic-auto-scaling-group",
"NotificationType": "autoscaling:TEST_NOTIFICATION",
"TopicARN": "arn:aws:sns:us-west-2:123456789012:second-test-topic"
},
{
"AutoScalingGroupName": "basic-auto-scaling-group",
"NotificationType": "autoscaling:TEST_NOTIFICATION",
"TopicARN": "arn:aws:sns:us-west-2:123456789012:test-topic"
}
]
}
To return a specific number of notification configurations with this command, use the ``max-items`` parameter::
aws autoscaling describe-notification-configurations --auto-scaling-group-names basic-auto-scaling-group --max-items 1
In this example, the output of this command is a JSON block that describes the first notification configuration::
{
"NextToken": "None___1",
"NotificationConfigurations": [
{
"AutoScalingGroupName": "basic-auto-scaling-group",
"NotificationType": "autoscaling:TEST_NOTIFICATION",
"TopicARN": "arn:aws:sns:us-west-2:123456789012:second-test-topic"
}
]
}
This JSON block includes a ``NextToken`` field. You can use the value of this field with the ``starting-token`` parameter to return additional notification configurations::
aws autoscaling describe-notification-configurations --auto-scaling-group-names basic-auto-scaling-group --starting-token None___1
For more information, see `Getting Notifications When Your Auto Scaling Group Changes`_ in the *Auto Scaling Developer Guide*.
.. _`Getting Notifications When Your Auto Scaling Group Changes`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html
awscli-1.10.1/awscli/examples/autoscaling/delete-policy.rst 0000666 4542626 0000144 00000000641 12652514124 025033 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an Auto Scaling policy**
The following ``delete-policy`` command deletes an Auto Scaling policy::
aws autoscaling delete-policy --auto-scaling-group-name basic-auto-scaling-group --policy-name ScaleIn
For more information, see `Dynamic Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Dynamic Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
awscli-1.10.1/awscli/examples/autoscaling/set-instance-protection.rst 0000666 4542626 0000144 00000000620 12652514124 027052 0 ustar pysdk-ci amazon 0000000 0000000 **To change the instance protection setting for an instance**
This example enables instance protection for the specified instance::
aws autoscaling set-instance-protection --instance-ids i-93633f9b --auto-scaling-group-name my-asg --protected-from-scale-in
This example disables instance protection for the specified instance::
aws autoscaling set-instance-protection --instance-ids i-93633f9b --auto-scaling-group-name my-asg --no-protected-from-scale-in
awscli-1.10.1/awscli/examples/autoscaling/suspend-processes.rst 0000666 4542626 0000144 00000000770 12652514124 025764 0 ustar pysdk-ci amazon 0000000 0000000 **To suspend Auto Scaling processes**
The following ``suspend-processes`` command suspends a scaling process for an Auto Scaling group::
aws autoscaling suspend-processes --auto-scaling-group-name basic-auto-scaling-group --scaling-processes AlarmNotification
For more information, see `Suspend and Resume Auto Scaling Processes`_ in the *Auto Scaling Developer Guide*.
.. _`Suspend and Resume Auto Scaling Processes`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
awscli-1.10.1/awscli/examples/autoscaling/describe-lifecycle-hook-types.rst 0000666 4542626 0000144 00000000530 12652514124 030106 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the available types of lifecycle hooks**
This example describes the available types of lifecycle hooks::
aws autoscaling describe-lifecycle-hook-types
The following is example output::
{
"LifecycleHookTypes": [
"autoscaling:EC2_INSTANCE_LAUNCHING",
"autoscaling:EC2_INSTANCE_TERMINATING"
]
}
awscli-1.10.1/awscli/examples/autoscaling/detach-instances.rst 0000666 4542626 0000144 00000001547 12652514124 025517 0 ustar pysdk-ci amazon 0000000 0000000 **To detach an instance from an Auto Scaling group**
This example detaches the specified instance from the specified Auto Scaling group::
aws autoscaling detach-instances --instance-ids i-93633f9b --auto-scaling-group-name my-asg --should-decrement-desired-capacity
The following is example output::
{
"Activities": [
{
"Description": "Detaching EC2 instance: i-93633f9b",
"AutoScalingGroupName": "my-asg",
"ActivityId": "5091cb52-547a-47ce-a236-c9ccbc2cb2c9",
"Details": {"Availability Zone": "us-west-2a"},
"StartTime": "2015-04-12T15:02:16.179Z",
"Progress": 50,
"Cause": "At 2015-04-12T15:02:16Z instance i-93633f9b was detached in response to a user request, shrinking the capacity from 2 to 1.",
"StatusCode": "InProgress"
}
]
}
awscli-1.10.1/awscli/examples/autoscaling/put-scheduled-update-group-action.rst 0000666 4542626 0000144 00000002007 12652514124 030725 0 ustar pysdk-ci amazon 0000000 0000000 **To add a scheduled action to an Auto Scaling group**
The following ``put-scheduled-update-group-action`` command adds a scheduled action to an Auto Scaling group::
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name basic-auto-scaling-group --scheduled-action-name sample-scheduled-action --start-time "2014-05-12T08:00:00Z" --end-time "2014-05-13T08:00:00Z" --min-size 2 --max-size 6 --desired-capacity 4
The following example creates a scheduled action to scale on a recurring schedule that is scheduled to execute at 00:30 hours on the first of January, June, and December every year::
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name basic-auto-scaling-group --scheduled-action-name sample-scheduled-action --recurrence "30 0 1 1,6,12 *" --min-size 2 --max-size 6 --desired-capacity 4
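The recurrence value uses standard five-field cron syntax: minute, hour, day-of-month, month, day-of-week. A simplified checker for the example schedule (00:30 on the first of January, June, and December, written as ``30 0 1 1,6,12 *``) can be sketched in Python; it supports only ``*`` and comma lists and applies all five fields with AND, ignoring cron's special day-of-month/day-of-week OR rule, so it is an illustration rather than a full cron parser:

```python
from datetime import datetime

def matches(expr, dt):
    """Check a datetime against a five-field cron expression.

    Supports only '*' and comma-separated value lists -- enough for the
    example above, not a complete cron implementation.
    """
    fields = zip(
        expr.split(),
        [dt.minute, dt.hour, dt.day, dt.month, (dt.weekday() + 1) % 7],  # cron: 0 = Sunday
    )
    for spec, value in fields:
        if spec != "*" and value not in {int(v) for v in spec.split(",")}:
            return False
    return True

runs = matches("30 0 1 1,6,12 *", datetime(2015, 6, 1, 0, 30))
```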
For more information, see `Scheduled Scaling`__ in the *Auto Scaling Developer Guide*.
.. __: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/schedule_time.html
awscli-1.10.1/awscli/examples/autoscaling/describe-auto-scaling-groups.rst 0000666 4542626 0000144 00000003751 12652514124 027762 0 ustar pysdk-ci amazon 0000000 0000000 **To get a description of an Auto Scaling group**
This example describes the specified Auto Scaling group::
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name my-asg
The following is example output::
{
"AutoScalingGroups": [
{
"AutoScalingGroupARN": "arn:aws:autoscaling:us-west-2:123456789012:autoScalingGroup:930d940e-891e-4781-a11a-7b0acd480f03:autoScalingGroupName/my-asg",
"HealthCheckGracePeriod": 0,
"SuspendedProcesses": [],
"DesiredCapacity": 1,
"Tags": [],
"EnabledMetrics": [],
"LoadBalancerNames": [],
                "AutoScalingGroupName": "my-asg",
"DefaultCooldown": 300,
"MinSize": 0,
"Instances": [
{
"InstanceId": "i-4ba0837f",
"AvailabilityZone": "us-west-2c",
"HealthStatus": "Healthy",
"LifecycleState": "InService",
"LaunchConfigurationName": "my-test-lc"
}
],
"MaxSize": 1,
"VPCZoneIdentifier": null,
"TerminationPolicies": [
"Default"
],
"LaunchConfigurationName": "my-test-lc",
"CreatedTime": "2013-08-19T20:53:25.584Z",
"AvailabilityZones": [
"us-west-2c"
],
"HealthCheckType": "EC2"
}
]
}
To return a specific number of Auto Scaling groups with this command, use the ``max-items`` parameter::
aws autoscaling describe-auto-scaling-groups --max-items 1
If the output for this command includes a ``NextToken`` field, it indicates that there are more groups. You can use the value of this field with the ``starting-token`` parameter to return additional groups::
aws autoscaling describe-auto-scaling-groups --starting-token None___1
awscli-1.10.1/awscli/examples/autoscaling/delete-notification-configuration.rst

**To delete an Auto Scaling notification**
The following ``delete-notification-configuration`` command deletes a notification from an Auto Scaling group::
aws autoscaling delete-notification-configuration --auto-scaling-group-name basic-auto-scaling-group --topic-arn arn:aws:sns:us-west-2:896650972448:second-test-topic
For more information, see the `Delete Notification Configuration`_ section in the Getting Notifications When Your Auto Scaling Group Changes topic, in the *Auto Scaling Developer Guide*.
.. _`Delete Notification Configuration`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html#delete-settingupnotifications
awscli-1.10.1/awscli/examples/autoscaling/describe-auto-scaling-instances.rst

**To describe one or more instances**
This example describes the specified instance::
aws autoscaling describe-auto-scaling-instances --instance-ids i-4ba0837f
The following is example output::
{
"AutoScalingInstances": [
{
"InstanceId": "i-4ba0837f",
"HealthStatus": "HEALTHY",
"AvailabilityZone": "us-west-2c",
"AutoScalingGroupName": "my-asg",
"LifecycleState": "InService"
}
]
}
This example uses the ``max-items`` parameter to specify how many instances to return with this call::
aws autoscaling describe-auto-scaling-instances --max-items 1
The following is example output::
{
"NextToken": "None___1",
"AutoScalingInstances": [
{
"InstanceId": "i-4ba0837f",
"HealthStatus": "HEALTHY",
"AvailabilityZone": "us-west-2c",
"AutoScalingGroupName": "my-asg",
"LifecycleState": "InService"
}
]
}
Notice that the output for this command includes a ``NextToken`` field, which indicates that there are more instances. You can use the value of this field with the ``starting-token`` parameter as follows to return additional instances::
aws autoscaling describe-auto-scaling-instances --starting-token None___1
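The token-based pagination shown above follows a simple loop: request a page, and as long as the response carries a ``NextToken``, repeat the request passing that token as ``starting-token``. The sketch below models the pattern with a stubbed page-fetcher standing in for the actual ``aws`` call (the instance IDs are sample data):

```python
def fetch_page(starting_token=None):
    """Stand-in for `aws autoscaling describe-auto-scaling-instances`;
    a real implementation would invoke the CLI or an SDK."""
    pages = {
        None: {"AutoScalingInstances": [{"InstanceId": "i-4ba0837f"}],
               "NextToken": "None___1"},
        "None___1": {"AutoScalingInstances": [{"InstanceId": "i-22c99e2a"}]},
    }
    return pages[starting_token]

def list_all_instances():
    # Keep requesting pages until a response arrives without NextToken.
    instances, token = [], None
    while True:
        page = fetch_page(token)
        instances.extend(page["AutoScalingInstances"])
        token = page.get("NextToken")
        if token is None:
            return instances
```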
awscli-1.10.1/awscli/examples/autoscaling/disable-metrics-collection.rst

**To disable metrics collection for an Auto Scaling group**
The following ``disable-metrics-collection`` command disables collecting data for the ``GroupDesiredCapacity`` metric for an Auto Scaling group::
aws autoscaling disable-metrics-collection --auto-scaling-group-name basic-auto-scaling-group --metrics GroupDesiredCapacity
For more information, see `Monitoring Your Auto Scaling Instances`_ in the *Auto Scaling Developer Guide*.
.. _`Monitoring Your Auto Scaling Instances`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-instance-monitoring.html
awscli-1.10.1/awscli/examples/autoscaling/describe-scaling-activities.rst

**To get a description of the scaling activities for an Auto Scaling group**
This example describes the scaling activities for the specified Auto Scaling group::
aws autoscaling describe-scaling-activities --auto-scaling-group-name my-asg
The following is example output::
{
"Activities": [
{
"Description": "Launching a new EC2 instance: i-4ba0837f",
"AutoScalingGroupName": "my-asg",
"ActivityId": "f9f2d65b-f1f2-43e7-b46d-d86756459699",
                "Details": "{\"Availability Zone\":\"us-west-2c\"}",
"StartTime": "2013-08-19T20:53:29.930Z",
"Progress": 100,
"EndTime": "2013-08-19T20:54:02Z",
                "Cause": "At 2013-08-19T20:53:25Z a user request created an AutoScalingGroup changing the desired capacity from 0 to 1. At 2013-08-19T20:53:29Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.",
"StatusCode": "Successful"
}
]
}
To return information about a specific scaling activity, use the ``activity-ids`` parameter::
aws autoscaling describe-scaling-activities --activity-ids b55c7b67-c8aa-4d10-b240-730ff91d8895
To return a specific number of activities with this command, use the ``max-items`` parameter::
aws autoscaling describe-scaling-activities --max-items 1
If the output for this command includes a ``NextToken`` field, this indicates that there are more activities. You can use the value of this field with the ``starting-token`` parameter as follows to return additional activities::
aws autoscaling describe-scaling-activities --starting-token None___1
awscli-1.10.1/awscli/examples/autoscaling/attach-instances.rst

**To attach an instance to an Auto Scaling group**
This example attaches the specified instance to the specified Auto Scaling group::
aws autoscaling attach-instances --instance-ids i-93633f9b --auto-scaling-group-name basic-auto-scaling-group
awscli-1.10.1/awscli/examples/autoscaling/put-scaling-policy.rst

**To add a scaling policy to an Auto Scaling group**
This example adds a policy to the specified Auto Scaling group::
aws autoscaling put-scaling-policy --auto-scaling-group-name basic-auto-scaling-group --policy-name ScaleIn --scaling-adjustment -1 --adjustment-type ChangeInCapacity
To change the size of the Auto Scaling group by a percentage, set the ``adjustment-type`` parameter to ``PercentChangeInCapacity``. Then, assign a value to
the ``min-adjustment-step`` parameter, where the value represents the minimum number of instances that the policy adds to or removes from the Auto Scaling group::
aws autoscaling put-scaling-policy --auto-scaling-group-name basic-auto-scaling-group --policy-name ScalePercentChange --scaling-adjustment 25 --adjustment-type PercentChangeInCapacity --cooldown 60 --min-adjustment-step 2
The output of this command includes the ARN of the policy. The following is example output::
{
"PolicyARN": "arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:2233f3d7-6290-403b-b632-93c553560106:autoScalingGroupName/basic-auto-scaling-group:policyName/ScaleIn"
}
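To make the interaction between a percentage adjustment and ``min-adjustment-step`` concrete, the sketch below computes the resulting instance change under a plausible reading of the semantics: the percentage is applied to current capacity, truncated to a whole (non-zero) number of instances, and the minimum step enforces a floor on the magnitude of the change. This is an illustrative model, not AWS's exact rounding implementation:

```python
import math

def percent_change_adjustment(current_capacity, percent, min_adjustment_step=1):
    """Estimate the instance delta for a PercentChangeInCapacity policy
    (hypothetical helper for illustration only)."""
    raw = current_capacity * percent / 100.0
    if raw == 0:
        return 0
    # Truncate toward zero, but a non-zero percentage always changes
    # capacity by at least one instance.
    instances = math.trunc(raw) or (1 if raw > 0 else -1)
    # min-adjustment-step enforces a minimum magnitude for the change.
    if abs(instances) < min_adjustment_step:
        instances = min_adjustment_step if instances > 0 else -min_adjustment_step
    return instances
```

With the policy above (25 percent, ``min-adjustment-step`` 2), a group of 4 instances would grow by 2 rather than the raw 1 instance that 25 percent implies.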
For more information, see `Dynamic Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Dynamic Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
awscli-1.10.1/awscli/examples/autoscaling/describe-termination-policy-types.rst

**To describe termination policy types**
The following ``describe-termination-policy-types`` command returns the available termination policy types::
aws autoscaling describe-termination-policy-types
The output of this command is a JSON block that lists the types of termination policies, similar to the following::
{
"TerminationPolicyTypes": [
"ClosestToNextInstanceHour",
"Default",
"NewestInstance",
"OldestInstance",
"OldestLaunchConfiguration"
]
}
For more information, see the `How Termination Policies Work`_ section in the Configure Instance Termination Policy for Your Auto Scaling Group topic, in the *Auto Scaling Developer Guide*.
.. _`How Termination Policies Work`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/us-termination-policy.html#your-termination-policy
awscli-1.10.1/awscli/examples/autoscaling/enable-metrics-collection.rst

**To enable metrics collection for an Auto Scaling group**
The following ``enable-metrics-collection`` command enables collecting data for an Auto Scaling group::
aws autoscaling enable-metrics-collection --auto-scaling-group-name basic-auto-scaling-group --granularity "1Minute"
To collect data on a specific metric, use the ``metrics`` parameter::
aws autoscaling enable-metrics-collection --auto-scaling-group-name basic-auto-scaling-group --metrics GroupDesiredCapacity --granularity "1Minute"
For more information, see `Monitoring Your Auto Scaling Instances`_ in the *Auto Scaling Developer Guide*.
.. _`Monitoring Your Auto Scaling Instances`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-instance-monitoring.html
awscli-1.10.1/awscli/examples/autoscaling/complete-lifecycle-action.rst

**To complete the lifecycle action**
This example lets Auto Scaling know that the specified lifecycle action is complete so that it can finish launching or terminating the instance::
aws autoscaling complete-lifecycle-action --lifecycle-hook-name my-lifecycle-hook --auto-scaling-group-name my-asg --lifecycle-action-result CONTINUE --lifecycle-action-token bcd2f1b8-9a78-44d3-8a7a-4dd07d7cf635
awscli-1.10.1/awscli/examples/autoscaling/describe-adjustment-types.rst

**To describe the Auto Scaling adjustment types**
This example describes the adjustment types available for your Auto Scaling groups::
aws autoscaling describe-adjustment-types
The following is example output::
    {
        "AdjustmentTypes": [
            {
                "AdjustmentType": "ChangeInCapacity"
            },
            {
                "AdjustmentType": "ExactCapacity"
            },
            {
                "AdjustmentType": "PercentChangeInCapacity"
            }
        ]
    }
For more information, see `Dynamic Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Dynamic Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
awscli-1.10.1/awscli/examples/autoscaling/delete-auto-scaling-group.rst

**To delete an Auto Scaling group**
The following ``delete-auto-scaling-group`` command deletes an Auto Scaling group::
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name delete-me-auto-scaling-group
If you want to delete the Auto Scaling group without waiting for the instances in the group to terminate, use the ``--force-delete`` parameter::
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name delete-me-auto-scaling-group --force-delete
For more information, see `Shut Down Your Auto Scaling Process`_ in the *Auto Scaling Developer Guide*.
.. _`Shut Down Your Auto Scaling Process`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-process-shutdown.html
awscli-1.10.1/awscli/examples/autoscaling/set-desired-capacity.rst

**To set the desired capacity for an Auto Scaling group**
The following ``set-desired-capacity`` command sets the desired capacity for an Auto Scaling group::
aws autoscaling set-desired-capacity --auto-scaling-group-name basic-auto-scaling-group --desired-capacity 2 --honor-cooldown
For more information, see `How Auto Scaling Works`_ in the *Auto Scaling Developer Guide*.
.. _`How Auto Scaling Works`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/how-as-works.html
awscli-1.10.1/awscli/examples/autoscaling/describe-account-limits.rst

**To describe your Auto Scaling account limits**
The following ``describe-account-limits`` command describes your Auto Scaling account limits::
aws autoscaling describe-account-limits
The output of this command is a JSON block that describes your Auto Scaling account limits, similar to the following::
{
"MaxNumberOfLaunchConfigurations": 100,
"MaxNumberOfAutoScalingGroups": 20
}
For more information, see `Auto Scaling Limits`_ in the *Auto Scaling Developer Guide*.
.. _`Auto Scaling Limits`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-account-limits.html
awscli-1.10.1/awscli/examples/autoscaling/terminate-instance-in-auto-scaling-group.rst

**To terminate an instance in an Auto Scaling group**
The following ``terminate-instance-in-auto-scaling-group`` command terminates an instance from an Auto Scaling group without resetting the size of the group::
aws autoscaling terminate-instance-in-auto-scaling-group --instance-id i-93633f9b --no-should-decrement-desired-capacity
This results in a new instance starting up after the specified instance terminates.
For more information, see `Configure Instance Termination Policy for Your Auto Scaling Group`_ in the *Auto Scaling Developer Guide*.
.. _`Configure Instance Termination Policy for Your Auto Scaling Group`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/us-termination-policy.html
awscli-1.10.1/awscli/examples/autoscaling/describe-launch-configurations.rst

**To describe Auto Scaling launch configurations**
This example returns information about the specified launch configuration::
aws autoscaling describe-launch-configurations --launch-configuration-names "basic-launch-config"
The following is example output for this command::
{
"LaunchConfigurations": [
{
"UserData": null,
"EbsOptimized": false,
"LaunchConfigurationARN": "arn:aws:autoscaling:us-west-2:123456789012:launchConfiguration:98d3b196-4cf9-4e88-8ca1-8547c24ced8b:launchConfigurationName/basic-launch-config",
"InstanceMonitoring": {
"Enabled": true
},
"ImageId": "ami-043a5034",
"CreatedTime": "2014-05-07T17:39:28.599Z",
"BlockDeviceMappings": [],
"KeyName": null,
"SecurityGroups": [
"sg-67ef0308"
],
"LaunchConfigurationName": "basic-launch-config",
"KernelId": null,
"RamdiskId": null,
"InstanceType": "t1.micro",
"AssociatePublicIpAddress": true
}
]
}
To return a specific number of launch configurations with this command, use the ``max-items`` parameter::
aws autoscaling describe-launch-configurations --max-items 1
The following is example output for this command::
{
"NextToken": "None___1",
"LaunchConfigurations": [
{
"UserData": null,
"EbsOptimized": false,
"LaunchConfigurationARN": "arn:aws:autoscaling:us-west-2:123456789012:launchConfiguration:98d3b196-4cf9-4e88-8ca1-8547c24ced8b:launchConfigurationName/basic-launch-config",
"InstanceMonitoring": {
"Enabled": true
},
"ImageId": "ami-043a5034",
"CreatedTime": "2014-05-07T17:39:28.599Z",
"BlockDeviceMappings": [],
"KeyName": null,
"SecurityGroups": [
"sg-67ef0308"
],
"LaunchConfigurationName": "basic-launch-config",
"KernelId": null,
"RamdiskId": null,
"InstanceType": "t1.micro",
"AssociatePublicIpAddress": true
}
]
}
The output includes a ``NextToken`` field. You can use the value of this field with the ``starting-token`` parameter to return additional launch configurations in a subsequent call::
aws autoscaling describe-launch-configurations --starting-token None___1
awscli-1.10.1/awscli/examples/autoscaling/create-auto-scaling-group.rst

**To launch an Auto Scaling group**
This example launches an Auto Scaling group in a VPC::
    aws autoscaling create-auto-scaling-group --auto-scaling-group-name basic-auto-scaling-group --launch-configuration-name basic-launch-config --min-size 1 --max-size 3 --vpc-zone-identifier subnet-41767929
This example launches an Auto Scaling group and configures it to use Elastic Load Balancing::
    aws autoscaling create-auto-scaling-group --auto-scaling-group-name extended-auto-scaling-group-2 --launch-configuration-name basic-launch-config-3 --min-size 1 --max-size 3 --load-balancer-names "sample-lb" --health-check-type ELB --health-check-grace-period 120
This example launches an Auto Scaling group. It specifies Availability Zones (using the ``availability-zones`` parameter) instead of subnets. It also launches any instances into a placement group and sets the termination policy to terminate the oldest instances first::
aws autoscaling create-auto-scaling-group --auto-scaling-group-name extended-auto-scaling-group-2 --launch-configuration-name basic-launch-config-3 --min-size 1 --max-size 3 --desired-capacity 2 --default-cooldown 600 --placement-group sample-placement-group --termination-policies "OldestInstance" --availability-zones us-west-2c
This example launches an Auto Scaling group and assigns a tag to instances in the group::
aws autoscaling create-auto-scaling-group --auto-scaling-group-name tags-auto-scaling-group --instance-id i-22c99e2a --min-size 1 --max-size 3 --vpc-zone-identifier subnet-41767929 --tags ResourceId=tags-auto-scaling-group,ResourceType=auto-scaling-group,Key=Role,Value=WebServer
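Each ``--tags`` specification above is a comma-separated list of ``Name=value`` pairs in the CLI's shorthand syntax. When generating these specifications programmatically, it can help to render them from structured values, as in this sketch (the helper is hypothetical, shown only to illustrate the format):

```python
def tag_shorthand(resource_id, key, value, propagate=True,
                  resource_type="auto-scaling-group"):
    """Render one tag specification in the CLI's shorthand syntax."""
    parts = [
        "ResourceId=%s" % resource_id,
        "ResourceType=%s" % resource_type,
        "Key=%s" % key,
        "Value=%s" % value,
        "PropagateAtLaunch=%s" % ("true" if propagate else "false"),
    ]
    return ",".join(parts)

# Reproduces the tag specification from the example above.
spec = tag_shorthand("tags-auto-scaling-group", "Role", "WebServer")
```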
awscli-1.10.1/awscli/examples/autoscaling/delete-lifecycle-hook.rst

**To delete a lifecycle hook**
This example deletes the specified lifecycle hook::
aws autoscaling delete-lifecycle-hook --lifecycle-hook-name my-lifecycle-hook --auto-scaling-group-name my-asg
awscli-1.10.1/awscli/examples/autoscaling/delete-launch-configuration.rst

**To delete a launch configuration**
This example deletes the specified launch configuration::
aws autoscaling delete-launch-configuration --launch-configuration-name my-lc
For more information, see `Shut Down Your Auto Scaling Process`_ in the *Auto Scaling Developer Guide*.
.. _`Shut Down Your Auto Scaling Process`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-process-shutdown.html
awscli-1.10.1/awscli/examples/autoscaling/put-notification-configuration.rst

**To add an Auto Scaling notification**
The following ``put-notification-configuration`` command adds a notification to an Auto Scaling group::
    aws autoscaling put-notification-configuration --auto-scaling-group-name my-auto-scaling-group --topic-arn arn:aws:sns:us-west-2:123456789012:test-topic --notification-types autoscaling:TEST_NOTIFICATION
For more information, see the `Configure your Auto Scaling Group to Send Notifications`_ section in the Getting Notifications When Your Auto Scaling Group Changes topic, in the *Auto Scaling Developer Guide*.
.. _`Configure your Auto Scaling Group to Send Notifications`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html#as-configure-asg-for-sns
awscli-1.10.1/awscli/examples/autoscaling/describe-load-balancers.rst

**To describe the load balancers for an Auto Scaling group**
This example returns information about the load balancers for the specified Auto Scaling group::
aws autoscaling describe-load-balancers --auto-scaling-group-name my-asg
The following is example output for this command::
{
"LoadBalancers": [
{
"State": "Added",
"LoadBalancerName": "my-lb"
}
]
}
awscli-1.10.1/awscli/examples/autoscaling/describe-lifecycle-hooks.rst

**To describe your lifecycle hooks**
This example describes the lifecycle hooks for the specified Auto Scaling group::
aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name my-asg
The following is example output::
{
"LifecycleHooks": [
{
"GlobalTimeout": 172800,
"HeartbeatTimeout": 3600,
"RoleARN": "arn:aws:iam::123456789012:role/my-auto-scaling-role",
"AutoScalingGroupName": "my-asg",
"LifecycleHookName": "my-lifecycle-hook",
"DefaultResult": "ABANDON",
"NotificationTargetARN": "arn:aws:sns:us-west-2:123456789012:my-sns-topic",
"LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING"
}
]
}
awscli-1.10.1/awscli/examples/autoscaling/delete-tags.rst

**To delete a tag from an Auto Scaling group**
This example deletes a tag from the specified Auto Scaling group::
aws autoscaling delete-tags --tags ResourceId=tags-auto-scaling-group,ResourceType=auto-scaling-group,Key=Dept,Value=Research
For more information, see `Tagging Auto Scaling Groups and Instances`_ in the *Auto Scaling Developer Guide*.
.. _`Tagging Auto Scaling Groups and Instances`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASTagging.html
awscli-1.10.1/awscli/examples/autoscaling/describe-metric-collection-types.rst

**To describe the Auto Scaling metric collection types**
The following ``describe-metric-collection-types`` command describes the metric collection types available for Auto Scaling groups::
aws autoscaling describe-metric-collection-types
The output of this command is a JSON block that describes the metric collection types, similar to the following::
{
"Metrics": [
{
"Metric": "GroupMinSize"
},
{
"Metric": "GroupMaxSize"
},
{
"Metric": "GroupDesiredCapacity"
},
{
"Metric": "GroupInServiceInstances"
},
{
"Metric": "GroupPendingInstances"
},
{
"Metric": "GroupTerminatingInstances"
},
{
"Metric": "GroupTotalInstances"
}
],
"Granularities": [
{
"Granularity": "1Minute"
}
]
}
For more information, see the `Auto Scaling Group Metrics`_ section in the Monitoring Your Auto Scaling Instances topic, in the *Auto Scaling Developer Guide*.
.. _`Auto Scaling Group Metrics`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-instance-monitoring.html#as-group-metrics
awscli-1.10.1/awscli/examples/autoscaling/update-auto-scaling-group.rst

**To update an Auto Scaling group**
This example updates the specified Auto Scaling group to use Elastic Load Balancing health checks::
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --health-check-type ELB --health-check-grace-period 60
This example updates the launch configuration, minimum and maximum size of the group, and which subnet to use::
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name new-launch-config --min-size 1 --max-size 3 --vpc-zone-identifier subnet-41767929
This example updates the desired capacity, default cooldown, placement group, termination policy, and which Availability Zone to use::
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --default-cooldown 600 --placement-group sample-placement-group --termination-policies "OldestInstance" --availability-zones us-west-2c
This example enables the instance protection setting for the specified Auto Scaling group::
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --new-instances-protected-from-scale-in
This example disables the instance protection setting for the specified Auto Scaling group::
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --no-new-instances-protected-from-scale-in
awscli-1.10.1/awscli/examples/autoscaling/put-lifecycle-hook.rst

**To create a lifecycle hook**
This example creates a lifecycle hook::
aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-lifecycle-hook --auto-scaling-group-name my-asg --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING --notification-target-arn arn:aws:sns:us-west-2:123456789012:my-sns-topic --role-arn arn:aws:iam::123456789012:role/my-auto-scaling-role
For more information, see `Adding Lifecycle Hooks`_ in the *Auto Scaling Developer Guide*.
.. _`Adding Lifecycle Hooks`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html
awscli-1.10.1/awscli/examples/autoscaling/resume-processes.rst

**To resume Auto Scaling processes**
The following ``resume-processes`` command resumes a suspended scaling process for an Auto Scaling group::
aws autoscaling resume-processes --auto-scaling-group-name basic-auto-scaling-group --scaling-processes AlarmNotification
For more information, see `Suspend and Resume Auto Scaling Process`_ in the *Auto Scaling Developer Guide*.
.. _`Suspend and Resume Auto Scaling Process`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
awscli-1.10.1/awscli/examples/autoscaling/record-lifecycle-action-heartbeat.rst

**To record a lifecycle action heartbeat**
This example records a lifecycle action heartbeat to keep the instance in a pending state::
aws autoscaling record-lifecycle-action-heartbeat --lifecycle-hook-name my-lifecycle-hook --auto-scaling-group-name my-asg --lifecycle-action-token bcd2f1b8-9a78-44d3-8a7a-4dd07d7cf635
awscli-1.10.1/awscli/examples/autoscaling/delete-scheduled-action.rst

**To delete a scheduled action from an Auto Scaling group**
The following ``delete-scheduled-action`` command deletes a scheduled action from an Auto Scaling group::
aws autoscaling delete-scheduled-action --auto-scaling-group-name basic-auto-scaling-group --scheduled-action-name sample-scheduled-action
For more information, see `Scheduled Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Scheduled Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/schedule_time.html
awscli-1.10.1/awscli/examples/autoscaling/create-or-update-tags.rst

**To create or update tags for an Auto Scaling group**
This example attaches two tags to the specified Auto Scaling group::
aws autoscaling create-or-update-tags --tags ResourceId=tags-auto-scaling-group,ResourceType=auto-scaling-group,Key=Role,Value=WebServer,PropagateAtLaunch=true ResourceId=tags-auto-scaling-group,ResourceType=auto-scaling-group,Key=Dept,Value=Research,PropagateAtLaunch=true
awscli-1.10.1/awscli/examples/autoscaling/create-launch-configuration.rst

**To create a launch configuration**
This example creates a launch configuration::
aws autoscaling create-launch-configuration --launch-configuration-name my-test-lc --image-id ami-c6169af6 --instance-type m1.medium
This example creates a launch configuration that uses Spot Instances::
aws autoscaling create-launch-configuration --launch-configuration-name my-test-lc --image-id ami-c6169af6 --instance-type m1.medium --spot-price "0.50"
This example creates a launch configuration with a key pair and bootstrapping script::
aws autoscaling create-launch-configuration --launch-configuration-name detailed-launch-config --key-name qwikLABS-L238-20080 --image-id ami-c6169af6 --instance-type m1.small --user-data file://labuserdata.txt
This example creates a launch configuration based on an existing instance. In addition, it also specifies launch configuration attributes such as a security group, tenancy, Amazon EBS optimization, and bootstrapping script::
aws autoscaling create-launch-configuration --launch-configuration-name detailed-launch-config --key-name qwikLABS-L238-20080 --instance-id i-7e13c876 --security-groups sg-eb2af88e --instance-type m1.small --user-data file://labuserdata.txt --instance-monitoring Enabled=true --no-ebs-optimized --no-associate-public-ip-address --placement-tenancy dedicated --iam-instance-profile "autoscalingrole"
Add the following parameter to your ``create-launch-configuration`` command to add an Amazon EBS volume with the device name ``/dev/sdh`` and a volume size of 100 GiB.
Parameter::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdh\",\"Ebs\":{\"VolumeSize\":100}}]"
Add the following parameter to your ``create-launch-configuration`` command to add ``ephemeral1`` as an instance store volume with the device name ``/dev/sdc``.
Parameter::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdc\",\"VirtualName\":\"ephemeral1\"}]"
Add the following parameter to your ``create-launch-configuration`` command to omit a device included on the instance (for example, ``/dev/sdf``).
Parameter::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdf\",\"NoDevice\":\"\"}]"
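The escaped JSON strings in the parameters above are easy to get wrong by hand. One way to avoid quoting mistakes is to build each mapping as a data structure and serialize it, as in this sketch (illustrative only; it reproduces the three mappings shown above):

```python
import json

# Build the same three block device mappings shown above, then
# serialize once; json.dumps handles all of the quote escaping.
mappings = [
    {"DeviceName": "/dev/sdh", "Ebs": {"VolumeSize": 100}},   # EBS volume
    {"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"},  # instance store
    {"DeviceName": "/dev/sdf", "NoDevice": ""},               # omit a device
]
block_device_arg = json.dumps(mappings)
```

The resulting string can be passed directly as the value of ``--block-device-mappings``.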
awscli-1.10.1/awscli/examples/autoscaling/describe-scheduled-actions.rst

**To describe scheduled actions**
The following ``describe-scheduled-actions`` command returns all scheduled actions::
aws autoscaling describe-scheduled-actions
The output of this command is a JSON block that describes the scheduled actions for all Auto Scaling groups, similar to the following::
{
"ScheduledUpdateGroupActions": [
{
"MinSize": 2,
"DesiredCapacity": 4,
"AutoScalingGroupName": "my-auto-scaling-group",
"MaxSize": 6,
"Recurrence": "30 0 1 12 0",
"ScheduledActionARN": "arn:aws:autoscaling:us-west-2:123456789012:scheduledUpdateGroupAction:8e86b655-b2e6-4410-8f29-b4f094d6871c:autoScalingGroupName/my-auto-scaling-group:scheduledActionName/sample-scheduled-action",
"ScheduledActionName": "sample-scheduled-action",
"StartTime": "2019-12-01T00:30:00Z",
"Time": "2019-12-01T00:30:00Z"
}
]
}
To return the scheduled actions for a specific Auto Scaling group, use the ``auto-scaling-group-name`` parameter::
aws autoscaling describe-scheduled-actions --auto-scaling-group-name my-auto-scaling-group
To return a specific scheduled action, use the ``scheduled-action-names`` parameter::
aws autoscaling describe-scheduled-actions --scheduled-action-names sample-scheduled-action
To return the scheduled actions that start at a specific time, use the ``start-time`` parameter::
aws autoscaling describe-scheduled-actions --start-time "2019-12-01T00:30:00Z"
To return the scheduled actions that end at a specific time, use the ``end-time`` parameter::
aws autoscaling describe-scheduled-actions --end-time "2022-12-01T00:30:00Z"
To return a specific number of scheduled actions with this command, use the ``max-items`` parameter::
aws autoscaling describe-scheduled-actions --auto-scaling-group-name my-auto-scaling-group --max-items 1
In this example, the output of this command is a JSON block that describes the first scheduled action::
    {
        "NextToken": "None___1",
        "ScheduledUpdateGroupActions": [
            {
                "MinSize": 2,
                "DesiredCapacity": 4,
                "AutoScalingGroupName": "my-auto-scaling-group",
                "MaxSize": 6,
                "Recurrence": "30 0 1 12 0",
                "ScheduledActionARN": "arn:aws:autoscaling:us-west-2:123456789012:scheduledUpdateGroupAction:8e86b655-b2e6-4410-8f29-b4f094d6871c:autoScalingGroupName/my-auto-scaling-group:scheduledActionName/sample-scheduled-action",
                "ScheduledActionName": "sample-scheduled-action",
                "StartTime": "2019-12-01T00:30:00Z",
                "Time": "2019-12-01T00:30:00Z"
            }
        ]
    }
This JSON block includes a ``NextToken`` field. You can use the value of this field with the ``starting-token`` parameter to return additional scheduled actions::
aws autoscaling describe-scheduled-actions --auto-scaling-group-name my-auto-scaling-group --starting-token None___1
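The ``NextToken``/``starting-token`` handshake can be scripted to collect every page. The following is an illustrative sketch in plain Python (the ``fetch_page`` stub stands in for invoking the CLI and is not part of the AWS CLI itself):

```python
# Sketch of the NextToken pagination loop that --starting-token drives.
# fetch_page is a stand-in for one CLI invocation; each page carries at
# most max_items actions plus an optional NextToken for the next call.

ACTIONS = ["action-a", "action-b", "action-c"]

def fetch_page(starting_token=None, max_items=1):
    start = int(starting_token) if starting_token else 0
    page = {"ScheduledUpdateGroupActions": ACTIONS[start:start + max_items]}
    if start + max_items < len(ACTIONS):
        page["NextToken"] = str(start + max_items)
    return page

def all_actions():
    token, results = None, []
    while True:
        page = fetch_page(starting_token=token)
        results.extend(page["ScheduledUpdateGroupActions"])
        token = page.get("NextToken")
        if token is None:  # no more pages
            return results

print(all_actions())
```

The loop terminates when a page comes back without a ``NextToken`` field, which is how the CLI itself signals the final page.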
For more information, see `Scheduled Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Scheduled Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/schedule_time.html
awscli-1.10.1/awscli/examples/autoscaling/set-instance-health.rst

**To set the health status of an instance**
This example sets the health status of the specified instance to Unhealthy::
aws autoscaling set-instance-health --instance-id i-93633f9b --health-status Unhealthy
awscli-1.10.1/awscli/examples/autoscaling/detach-load-balancers.rst

**To detach a load balancer from an Auto Scaling group**
This example detaches the specified load balancer from the specified Auto Scaling group::
aws autoscaling detach-load-balancers --load-balancer-names my-lb --auto-scaling-group-name my-asg
awscli-1.10.1/awscli/examples/autoscaling/describe-scaling-process-types.rst

**To describe Auto Scaling process types**
The following ``describe-scaling-process-types`` command returns Auto Scaling process types::
aws autoscaling describe-scaling-process-types
The output of this command lists the processes, similar to the following::
{
"Processes": [
{
"ProcessName": "AZRebalance"
},
{
"ProcessName": "AddToLoadBalancer"
},
{
"ProcessName": "AlarmNotification"
},
{
"ProcessName": "HealthCheck"
},
{
"ProcessName": "Launch"
},
{
"ProcessName": "ReplaceUnhealthy"
},
{
"ProcessName": "ScheduledActions"
},
{
"ProcessName": "Terminate"
}
]
}
For more information, see `Suspend and Resume Auto Scaling Processes`_ in the *Auto Scaling Developer Guide*.
.. _`Suspend and Resume Auto Scaling Processes`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SuspendResume.html
awscli-1.10.1/awscli/examples/autoscaling/describe-auto-scaling-notification-types.rst

**To describe the Auto Scaling notification types**
The following ``describe-auto-scaling-notification-types`` command describes the notification types available for Auto Scaling groups::
aws autoscaling describe-auto-scaling-notification-types
The output of this command is a JSON block that describes the notification types, similar to the following::
{
"AutoScalingNotificationTypes": [
"autoscaling:EC2_INSTANCE_LAUNCH",
"autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
"autoscaling:EC2_INSTANCE_TERMINATE",
"autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
"autoscaling:TEST_NOTIFICATION"
]
}
For more information, see the `Configure your Auto Scaling Group to Send Notifications`_ section in the Getting Notifications When Your Auto Scaling Group Changes topic, in the *Auto Scaling Developer Guide*.
.. _`Configure your Auto Scaling Group to Send Notifications`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html#as-configure-asg-for-sns
awscli-1.10.1/awscli/examples/autoscaling/enter-standby.rst

**To move instances into standby mode**
This example puts the specified instance into standby mode::
aws autoscaling enter-standby --instance-ids i-93633f9b --auto-scaling-group-name my-asg --should-decrement-desired-capacity
The following is example output::
{
"Activities": [
{
"Description": "Moving EC2 instance to Standby: i-93633f9b",
"AutoScalingGroupName": "my-asg",
"ActivityId": "ffa056b4-6ed3-41ba-ae7c-249dfae6eba1",
"Details": {"Availability Zone": "us-west-2a"},
"StartTime": "2015-04-12T15:10:23.640Z",
"Progress": 50,
"Cause": "At 2015-04-12T15:10:23Z instance i-93633f9b was moved to standby in response to a user request, shrinking the capacity from 2 to 1.",
"StatusCode": "InProgress"
}
]
}
awscli-1.10.1/awscli/examples/autoscaling/describe-policies.rst

**To describe Auto Scaling policies**
The following ``describe-policies`` command returns the policies for an Auto Scaling group::
aws autoscaling describe-policies --auto-scaling-group-name basic-auto-scaling-group
The output of this command is a JSON block that describes the scaling policies, similar to the following::
{
"ScalingPolicies": [
{
"PolicyName": "ScaleIn",
"AutoScalingGroupName": "basic-auto-scaling-group",
"PolicyARN": "arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:2233f3d7-6290-403b-b632-93c553560106:autoScalingGroupName/basic-auto-scaling-group:policyName/ScaleIn",
"AdjustmentType": "ChangeInCapacity",
"Alarms": [],
"ScalingAdjustment": -1
},
{
"PolicyName": "ScalePercentChange",
"MinAdjustmentStep": 2,
"AutoScalingGroupName": "basic-auto-scaling-group",
"PolicyARN": "arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:2b435159-cf77-4e89-8c0e-d63b497baad7:autoScalingGroupName/basic-auto-scaling-group:policyName/ScalePercentChange",
"Cooldown": 60,
"AdjustmentType": "PercentChangeInCapacity",
"Alarms": [],
"ScalingAdjustment": 25
}
]
}
To return specific scaling policies with this command, use the ``policy-names`` parameter::
aws autoscaling describe-policies --auto-scaling-group-name basic-auto-scaling-group --policy-names ScaleIn
To return a specific number of policies with this command, use the ``max-items`` parameter::
aws autoscaling describe-policies --auto-scaling-group-name basic-auto-scaling-group --max-items 1
In this example, the output of this command is a JSON block that describes the first policy::
{
"ScalingPolicies": [
{
"PolicyName": "ScaleIn",
"AutoScalingGroupName": "basic-auto-scaling-group",
"PolicyARN": "arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:2233f3d7-6290-403b-b632-93c553560106:autoScalingGroupName/basic-auto-scaling-group:policyName/ScaleIn",
"AdjustmentType": "ChangeInCapacity",
"Alarms": [],
"ScalingAdjustment": -1
}
],
"NextToken": "None___1"
}
This JSON block includes a ``NextToken`` field. You can use the value of this field with the ``starting-token`` parameter to return additional policies::
aws autoscaling describe-policies --auto-scaling-group-name basic-auto-scaling-group --starting-token None___1
For more information, see `Dynamic Scaling`_ in the *Auto Scaling Developer Guide*.
.. _`Dynamic Scaling`: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/validate-configuration-settings.rst

**To validate configuration settings**
The following command validates a CloudWatch custom metrics config document::
aws elasticbeanstalk validate-configuration-settings --application-name my-app --environment-name my-env --option-settings file://options.json
``options.json`` is a JSON document that includes one or more configuration settings to validate::
[
{
"Namespace": "aws:elasticbeanstalk:healthreporting:system",
"OptionName": "ConfigDocument",
"Value": "{\"CloudWatchMetrics\": {\"Environment\": {\"ApplicationLatencyP99.9\": null,\"InstancesSevere\": 60,\"ApplicationLatencyP90\": 60,\"ApplicationLatencyP99\": null,\"ApplicationLatencyP95\": 60,\"InstancesUnknown\": 60,\"ApplicationLatencyP85\": 60,\"InstancesInfo\": null,\"ApplicationRequests2xx\": null,\"InstancesDegraded\": null,\"InstancesWarning\": 60,\"ApplicationLatencyP50\": 60,\"ApplicationRequestsTotal\": null,\"InstancesNoData\": null,\"InstancesPending\": 60,\"ApplicationLatencyP10\": null,\"ApplicationRequests5xx\": null,\"ApplicationLatencyP75\": null,\"InstancesOk\": 60,\"ApplicationRequests3xx\": null,\"ApplicationRequests4xx\": null},\"Instance\": {\"ApplicationLatencyP99.9\": null,\"ApplicationLatencyP90\": 60,\"ApplicationLatencyP99\": null,\"ApplicationLatencyP95\": null,\"ApplicationLatencyP85\": null,\"CPUUser\": 60,\"ApplicationRequests2xx\": null,\"CPUIdle\": null,\"ApplicationLatencyP50\": null,\"ApplicationRequestsTotal\": 60,\"RootFilesystemUtil\": null,\"LoadAverage1min\": null,\"CPUIrq\": null,\"CPUNice\": 60,\"CPUIowait\": 60,\"ApplicationLatencyP10\": null,\"LoadAverage5min\": null,\"ApplicationRequests5xx\": null,\"ApplicationLatencyP75\": 60,\"CPUSystem\": 60,\"ApplicationRequests3xx\": 60,\"ApplicationRequests4xx\": null,\"InstanceHealth\": null,\"CPUSoftirq\": 60}},\"Version\": 1}"
}
]
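The ``Value`` string above is a JSON document escaped inside another JSON document, which is error-prone to edit by hand. One way to generate it, sketched here with standard-library Python (the metric names are a small illustrative subset of the full document above):

```python
import json

# Build the ConfigDocument as plain data, then serialize twice: once for
# the document itself (json.dumps handles the quoting/escaping), and once
# more when the whole option-settings list is written out for
# --option-settings file://options.json.
config_document = {
    "CloudWatchMetrics": {
        "Environment": {"ApplicationLatencyP90": 60, "InstancesSevere": 60},
        "Instance": {"CPUUser": 60, "ApplicationRequestsTotal": 60},
    },
    "Version": 1,
}

option_settings = [
    {
        "Namespace": "aws:elasticbeanstalk:healthreporting:system",
        "OptionName": "ConfigDocument",
        "Value": json.dumps(config_document),
    }
]

with open("options.json", "w") as f:
    json.dump(option_settings, f, indent=2)
```

Letting ``json.dumps`` produce the embedded ``Value`` string avoids hand-maintaining the backslash escaping shown above.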
If the options that you specify are valid for the specified environment, Elastic Beanstalk returns an empty Messages array::
{
"Messages": []
}
If validation fails, the response will include information about the error::
{
"Messages": [
{
"OptionName": "ConfigDocumet",
"Message": "Invalid option specification (Namespace: 'aws:elasticbeanstalk:healthreporting:system', OptionName: 'ConfigDocumet'): Unknown configuration setting.",
"Namespace": "aws:elasticbeanstalk:healthreporting:system",
"Severity": "error"
}
]
}
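In a script, the returned ``Messages`` array can be checked before applying an update. A minimal sketch in plain Python over a saved copy of the error response above (the embedded message text is abbreviated, and ``ConfigDocumet`` mirrors the misspelled option name that triggered the error):

```python
import json

# A saved validate-configuration-settings response; mirrors the failing
# example above, with the message text abbreviated.
response = json.loads("""
{
  "Messages": [
    {
      "OptionName": "ConfigDocumet",
      "Message": "Invalid option specification: Unknown configuration setting.",
      "Namespace": "aws:elasticbeanstalk:healthreporting:system",
      "Severity": "error"
    }
  ]
}
""")

# Treat any message with Severity "error" as a blocker.
errors = [m for m in response["Messages"] if m["Severity"] == "error"]
if errors:
    for m in errors:
        print(f"{m['Namespace']}/{m['OptionName']}: {m['Message']}")
else:
    print("settings valid")
```

An empty ``Messages`` array, as in the success case above, would leave ``errors`` empty and let the update proceed.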
For more information about namespaces and supported options, see `Option Values`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Option Values`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-configuration-settings.rst

**To view configuration settings for an environment**
The following command retrieves configuration settings for an environment named ``my-env``::
aws elasticbeanstalk describe-configuration-settings --environment-name my-env --application-name my-app
Output (abbreviated)::
{
"ConfigurationSettings": [
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"Description": "Environment created from the EB CLI using \"eb create\"",
"DeploymentStatus": "deployed",
"DateCreated": "2015-08-13T19:16:25Z",
"OptionSettings": [
{
"OptionName": "Availability Zones",
"ResourceName": "AWSEBAutoScalingGroup",
"Namespace": "aws:autoscaling:asg",
"Value": "Any"
},
{
"OptionName": "Cooldown",
"ResourceName": "AWSEBAutoScalingGroup",
"Namespace": "aws:autoscaling:asg",
"Value": "360"
},
...
{
"OptionName": "ConnectionDrainingTimeout",
"ResourceName": "AWSEBLoadBalancer",
"Namespace": "aws:elb:policies",
"Value": "20"
},
{
"OptionName": "ConnectionSettingIdleTimeout",
"ResourceName": "AWSEBLoadBalancer",
"Namespace": "aws:elb:policies",
"Value": "60"
}
],
"DateUpdated": "2015-08-13T23:30:07Z",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8"
}
]
}
For more information about namespaces and supported options, see `Option Values`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Option Values`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/abort-environment-update.rst

**To abort a deployment**
The following command aborts a running application version deployment for an environment named ``my-env``::
aws elasticbeanstalk abort-environment-update --environment-name my-env
awscli-1.10.1/awscli/examples/elasticbeanstalk/delete-application.rst

**To delete an application**
The following command deletes an application named ``my-app``::
aws elasticbeanstalk delete-application --application-name my-app
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-environments.rst

**To view information about an environment**
The following command retrieves information about an environment named ``my-env``::
aws elasticbeanstalk describe-environments --environment-names my-env
Output::
{
"Environments": [
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"VersionLabel": "7f58-stage-150812_025409",
"Status": "Ready",
"EnvironmentId": "e-rpqsewtp2j",
"EndpointURL": "awseb-e-w-AWSEBLoa-1483140XB0Q4L-109QXY8121.us-west-2.elb.amazonaws.com",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
"CNAME": "my-env.elasticbeanstalk.com",
"Health": "Green",
"AbortableOperationInProgress": false,
"Tier": {
"Version": " ",
"Type": "Standard",
"Name": "WebServer"
},
"DateUpdated": "2015-08-12T18:16:55.019Z",
"DateCreated": "2015-08-07T20:48:49.599Z"
}
]
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-events.rst

**To view events for an environment**
The following command retrieves events for an environment named ``my-env``::
aws elasticbeanstalk describe-events --environment-name my-env
Output (abbreviated)::
{
"Events": [
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"Message": "Environment health has transitioned from Info to Ok.",
"EventDate": "2015-08-20T07:06:53.535Z",
"Severity": "INFO"
},
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"Severity": "INFO",
"RequestId": "b7f3960b-4709-11e5-ba1e-07e16200da41",
"Message": "Environment update completed successfully.",
"EventDate": "2015-08-20T07:06:02.049Z"
},
...
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"Severity": "INFO",
"RequestId": "ca8dfbf6-41ef-11e5-988b-651aa638f46b",
"Message": "Using elasticbeanstalk-us-west-2-012445113685 as Amazon S3 storage bucket for environment data.",
"EventDate": "2015-08-13T19:16:27.561Z"
},
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"Severity": "INFO",
"RequestId": "cdfba8f6-41ef-11e5-988b-65638f41aa6b",
"Message": "createEnvironment is starting.",
"EventDate": "2015-08-13T19:16:26.581Z"
}
]
}
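The ``Events`` array can also be filtered locally, for example by date. An illustrative Python sketch using two events from the sample output above:

```python
from datetime import datetime, timezone

# Two events from the sample output above, oldest last.
events = [
    {"Message": "Environment health has transitioned from Info to Ok.",
     "EventDate": "2015-08-20T07:06:53.535Z"},
    {"Message": "createEnvironment is starting.",
     "EventDate": "2015-08-13T19:16:26.581Z"},
]

def parse(ts):
    # EventDate uses millisecond-precision UTC timestamps.
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

# Keep only events on or after an arbitrary cutoff date.
cutoff = datetime(2015, 8, 20, tzinfo=timezone.utc)
recent = [e for e in events if parse(e["EventDate"]) >= cutoff]
for e in recent:
    print(e["EventDate"], e["Message"])
```

The same filtering can be done server-side with the command's ``--start-time`` and ``--end-time`` parameters; the local version is convenient when working from a saved response.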
awscli-1.10.1/awscli/examples/elasticbeanstalk/request-environment-info.rst

**To request tailed logs**
The following command requests logs from an environment named ``my-env``::
aws elasticbeanstalk request-environment-info --environment-name my-env --info-type tail
After requesting logs, retrieve their location with `retrieve-environment-info`_.
.. _`retrieve-environment-info`: http://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/retrieve-environment-info.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-configuration-options.rst

**To view configuration options for an environment**
The following command retrieves descriptions of all available configuration options for an environment named ``my-env``::
aws elasticbeanstalk describe-configuration-options --environment-name my-env --application-name my-app
Output (abbreviated)::
{
"Options": [
{
"Name": "JVMOptions",
"UserDefined": false,
"DefaultValue": "Xms=256m,Xmx=256m,XX:MaxPermSize=64m,JVM Options=",
"ChangeSeverity": "RestartApplicationServer",
"Namespace": "aws:cloudformation:template:parameter",
"ValueType": "KeyValueList"
},
{
"Name": "Interval",
"UserDefined": false,
"DefaultValue": "30",
"ChangeSeverity": "NoInterruption",
"Namespace": "aws:elb:healthcheck",
"MaxValue": 300,
"MinValue": 5,
"ValueType": "Scalar"
},
...
{
"Name": "LowerThreshold",
"UserDefined": false,
"DefaultValue": "2000000",
"ChangeSeverity": "NoInterruption",
"Namespace": "aws:autoscaling:trigger",
"MinValue": 0,
"ValueType": "Scalar"
},
{
"Name": "ListenerEnabled",
"UserDefined": false,
"DefaultValue": "true",
"ChangeSeverity": "Unknown",
"Namespace": "aws:elb:listener",
"ValueType": "Boolean"
}
]
}
Available configuration options vary per platform and configuration version. For more information about namespaces and supported options, see `Option Values`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Option Values`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/rebuild-environment.rst

**To rebuild an environment**
The following command terminates and recreates the resources in an environment named ``my-env``::
aws elasticbeanstalk rebuild-environment --environment-name my-env
awscli-1.10.1/awscli/examples/elasticbeanstalk/delete-environment-configuration.rst

**To delete a draft configuration**
The following command deletes a draft configuration for an environment named ``my-env``::
aws elasticbeanstalk delete-environment-configuration --environment-name my-env --application-name my-app
awscli-1.10.1/awscli/examples/elasticbeanstalk/restart-app-server.rst

**To restart application servers**
The following command restarts application servers on all instances in an environment named ``my-env``::
aws elasticbeanstalk restart-app-server --environment-name my-env
awscli-1.10.1/awscli/examples/elasticbeanstalk/create-storage-location.rst

**To create a storage location**
The following command creates a storage location in Amazon S3::
aws elasticbeanstalk create-storage-location
Output::
{
"S3Bucket": "elasticbeanstalk-us-west-2-0123456789012"
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/create-configuration-template.rst

**To create a configuration template**
The following command creates a configuration template named ``my-app-v1`` from the settings applied to an environment with the id ``e-rpqsewtp2j``::
aws elasticbeanstalk create-configuration-template --application-name my-app --template-name my-app-v1 --environment-id e-rpqsewtp2j
Output::
{
"ApplicationName": "my-app",
"TemplateName": "my-app-v1",
"DateCreated": "2015-08-12T18:40:39Z",
"DateUpdated": "2015-08-12T18:40:39Z",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8"
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-environment-health.rst

**To view environment health**
The following command retrieves overall health information for an environment named ``my-env``::
aws elasticbeanstalk describe-environment-health --environment-name my-env --attribute-names All
Output::
{
"Status": "Ready",
"EnvironmentName": "my-env",
"Color": "Green",
"ApplicationMetrics": {
"Duration": 10,
"Latency": {
"P99": 0.004,
"P75": 0.002,
"P90": 0.003,
"P95": 0.004,
"P85": 0.003,
"P10": 0.001,
"P999": 0.004,
"P50": 0.001
},
"RequestCount": 45,
"StatusCodes": {
"Status3xx": 0,
"Status2xx": 45,
"Status5xx": 0,
"Status4xx": 0
}
},
"RefreshedAt": "2015-08-20T21:09:18Z",
"HealthStatus": "Ok",
"InstancesHealth": {
"Info": 0,
"Ok": 1,
"Unknown": 0,
"Severe": 0,
"Warning": 0,
"Degraded": 0,
"NoData": 0,
"Pending": 0
},
"Causes": []
}
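The metrics in this output lend themselves to simple local checks. The following is an illustrative Python sketch (the thresholds are arbitrary examples, not Elastic Beanstalk defaults) that derives a pass/fail signal from the sample values above:

```python
# Derive a simple pass/fail signal from a describe-environment-health
# response (values taken from the sample output above).
health = {
    "Color": "Green",
    "ApplicationMetrics": {
        "RequestCount": 45,
        "StatusCodes": {"Status2xx": 45, "Status3xx": 0,
                        "Status4xx": 0, "Status5xx": 0},
    },
    "InstancesHealth": {"Ok": 1, "Severe": 0, "Degraded": 0, "Warning": 0},
}

codes = health["ApplicationMetrics"]["StatusCodes"]
total = health["ApplicationMetrics"]["RequestCount"]

# Fraction of requests that ended in a client or server error.
error_rate = (codes["Status4xx"] + codes["Status5xx"]) / total if total else 0.0
unhealthy = health["InstancesHealth"]["Severe"] + health["InstancesHealth"]["Degraded"]

# Example policy: green color, under 5% errors, no severe/degraded instances.
healthy = health["Color"] == "Green" and error_rate < 0.05 and unhealthy == 0
print("healthy" if healthy else "needs attention")
```

A check like this can gate a deployment script on the environment's reported health rather than on the color string alone.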
Health information is only available for environments with enhanced health reporting enabled. For more information, see `Enhanced Health Reporting and Monitoring`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Enhanced Health Reporting and Monitoring`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/retrieve-environment-info.rst

**To retrieve tailed logs**
The following command retrieves a link to logs from an environment named ``my-env``::
aws elasticbeanstalk retrieve-environment-info --environment-name my-env --info-type tail
Output::
{
"EnvironmentInfo": [
{
"SampleTimestamp": "2015-08-20T22:23:17.703Z",
"Message": "https://elasticbeanstalk-us-west-2-0123456789012.s3.amazonaws.com/resources/environments/logs/tail/e-fyqyju3yjs/i-09c1c867/TailLogs-1440109397703.out?AWSAccessKeyId=AKGPT4J56IAJ2EUBL5CQ&Expires=1440195891&Signature=n%2BEalOV6A2HIOx4Rcfb7LT16bBM%3D",
"InfoType": "tail",
"Ec2InstanceId": "i-09c1c867"
}
]
}
View the link in a browser. Prior to retrieval, logs must be requested with `request-environment-info`_.
.. _`request-environment-info`: http://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/request-environment-info.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/terminate-environment.rst

**To terminate an environment**
The following command terminates an Elastic Beanstalk environment named ``my-env``::
aws elasticbeanstalk terminate-environment --environment-name my-env
Output::
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"Status": "Terminating",
"EnvironmentId": "e-fh2eravpns",
"EndpointURL": "awseb-e-f-AWSEBLoa-1I9XUMP4-8492WNUP202574.us-west-2.elb.amazonaws.com",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
"CNAME": "my-env.elasticbeanstalk.com",
"Health": "Grey",
"AbortableOperationInProgress": false,
"Tier": {
"Version": " ",
"Type": "Standard",
"Name": "WebServer"
},
"DateUpdated": "2015-08-12T19:05:54.744Z",
"DateCreated": "2015-08-12T18:52:53.622Z"
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/delete-configuration-template.rst

**To delete a configuration template**
The following command deletes a configuration template named ``my-template`` for an application named ``my-app``::
aws elasticbeanstalk delete-configuration-template --template-name my-template --application-name my-app
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-applications.rst

**To view a list of applications**
The following command retrieves information about applications in the current region::
aws elasticbeanstalk describe-applications
Output::
{
"Applications": [
{
"ApplicationName": "ruby",
"ConfigurationTemplates": [],
"DateUpdated": "2015-08-13T21:05:44.376Z",
"Versions": [
"Sample Application"
],
"DateCreated": "2015-08-13T21:05:44.376Z"
},
{
"ApplicationName": "pythonsample",
"Description": "Application created from the EB CLI using \"eb init\"",
"Versions": [
"Sample Application"
],
"DateCreated": "2015-08-13T19:05:43.637Z",
"ConfigurationTemplates": [],
"DateUpdated": "2015-08-13T19:05:43.637Z"
},
{
"ApplicationName": "nodejs-example",
"ConfigurationTemplates": [],
"DateUpdated": "2015-08-06T17:50:02.486Z",
"Versions": [
"add elasticache",
"First Release"
],
"DateCreated": "2015-08-06T17:50:02.486Z"
}
]
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/delete-application-version.rst

**To delete an application version**
The following command deletes an application version named ``22a0-stage-150819_182129`` for an application named ``my-app``::
aws elasticbeanstalk delete-application-version --version-label 22a0-stage-150819_182129 --application-name my-app
awscli-1.10.1/awscli/examples/elasticbeanstalk/update-configuration-template.rst

**To update a configuration template**
The following command removes the configured CloudWatch custom health metrics configuration ``ConfigDocument`` from a saved configuration template named ``my-template``::
aws elasticbeanstalk update-configuration-template --template-name my-template --application-name my-app --options-to-remove Namespace=aws:elasticbeanstalk:healthreporting:system,OptionName=ConfigDocument
Output::
{
"ApplicationName": "my-app",
"TemplateName": "my-template",
"DateCreated": "2015-08-20T22:39:31Z",
"DateUpdated": "2015-08-20T22:43:11Z",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8"
}
For more information about namespaces and supported options, see `Option Values`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Option Values`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
awscli-1.10.1/awscli/examples/elasticbeanstalk/describe-application-versions.rst

**To view information about an application version**
The following command retrieves information about an application version labeled ``v2``::
aws elasticbeanstalk describe-application-versions --application-name my-app --version-label "v2"
Output::
{
"ApplicationVersions": [
{
"ApplicationName": "my-app",
"VersionLabel": "v2",
"Description": "update cover page",
"DateCreated": "2015-07-23T01:32:26.079Z",
"DateUpdated": "2015-07-23T01:32:26.079Z",
"SourceBundle": {
"S3Bucket": "elasticbeanstalk-us-west-2-015321684451",
"S3Key": "my-app/5026-stage-150723_224258.war"
}
},
{
"ApplicationName": "my-app",
"VersionLabel": "v1",
"Description": "initial version",
"DateCreated": "2015-07-23T22:26:10.816Z",
"DateUpdated": "2015-07-23T22:26:10.816Z",
"SourceBundle": {
"S3Bucket": "elasticbeanstalk-us-west-2-015321684451",
"S3Key": "my-app/5026-stage-150723_222618.war"
}
}
]
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/update-application-version.rst

**To change an application version's description**
The following command updates the description of an application version named ``22a0-stage-150819_185942``::
aws elasticbeanstalk update-application-version --version-label 22a0-stage-150819_185942 --application-name my-app --description "new description"
Output::
{
"ApplicationVersion": {
"ApplicationName": "my-app",
"VersionLabel": "22a0-stage-150819_185942",
"Description": "new description",
"DateCreated": "2015-08-19T18:59:17.646Z",
"DateUpdated": "2015-08-20T22:53:28.871Z",
"SourceBundle": {
"S3Bucket": "elasticbeanstalk-us-west-2-0123456789012",
"S3Key": "my-app/22a0-stage-150819_185942.war"
}
}
}

awscli-1.10.1/awscli/examples/elasticbeanstalk/create-environment.rst

**To create a new environment for an application**
The following command creates a new environment for version ``v1`` of a Java application named ``my-app``::
aws elasticbeanstalk create-environment --application-name my-app --environment-name my-env --cname-prefix my-app --version-label v1 --solution-stack-name "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8"
Output::
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"VersionLabel": "v1",
"Status": "Launching",
"EnvironmentId": "e-izqpassy4h",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
"CNAME": "my-app.elasticbeanstalk.com",
"Health": "Grey",
"Tier": {
"Type": "Standard",
"Name": "WebServer",
"Version": " "
},
"DateUpdated": "2015-02-03T23:04:54.479Z",
"DateCreated": "2015-02-03T23:04:54.479Z"
}
``v1`` is the label of an application version previously uploaded with `create-application-version`_.
.. _`create-application-version`: http://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/create-application-version.html
**To specify a JSON file to define environment configuration options**
The following ``create-environment`` command specifies that a JSON file with the name ``myoptions.json`` should be used to override values obtained from the solution stack or the configuration template::
aws elasticbeanstalk create-environment --environment-name sample-env --application-name sampleapp --option-settings file://myoptions.json
For more information, see `Option Values`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Option Values`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html

awscli-1.10.1/awscli/examples/elasticbeanstalk/create-application-version.rst

**To create a new application version**
The following command creates a new version, ``v1``, of an application named ``MyApp``::
aws elasticbeanstalk create-application-version --application-name MyApp --version-label v1 --description MyAppv1 --source-bundle S3Bucket="my-bucket",S3Key="sample.war" --auto-create-application
Because the ``--auto-create-application`` option is specified, the application is created automatically if it does not already exist. The source bundle is a ``.war`` file stored in an Amazon S3 bucket named ``my-bucket`` that contains the Apache Tomcat sample application.
Output::
{
"ApplicationVersion": {
"ApplicationName": "MyApp",
"VersionLabel": "v1",
"Description": "MyAppv1",
"DateCreated": "2015-02-03T23:01:25.412Z",
"DateUpdated": "2015-02-03T23:01:25.412Z",
"SourceBundle": {
"S3Bucket": "my-bucket",
"S3Key": "sample.war"
}
}
}
awscli-1.10.1/awscli/examples/elasticbeanstalk/update-environment.rst

**To update an environment to a new version**
The following command updates an environment named "my-env" to version "v2" of the application to which it belongs::
aws elasticbeanstalk update-environment --environment-name my-env --version-label v2
This command requires that the "my-env" environment already exists and belongs to an application that has a valid application version with the label "v2".
Output::
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"VersionLabel": "v2",
"Status": "Updating",
"EnvironmentId": "e-szqipays4h",
"EndpointURL": "awseb-e-i-AWSEBLoa-1RDLX6TC9VUAO-0123456789.us-west-2.elb.amazonaws.com",
"SolutionStackName": "64bit Amazon Linux running Tomcat 7",
"CNAME": "my-env.elasticbeanstalk.com",
"Health": "Grey",
"Tier": {
"Version": " ",
"Type": "Standard",
"Name": "WebServer"
},
"DateUpdated": "2015-02-03T23:12:29.119Z",
"DateCreated": "2015-02-03T23:04:54.453Z"
}
**To set an environment variable**
The following command sets the value of the "PARAM1" variable in the "my-env" environment to "ParamValue"::
aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=PARAM1,Value=ParamValue
The ``option-settings`` parameter takes a namespace in addition to the name and value of the variable. Elastic Beanstalk supports several namespaces for options in addition to environment variables.
**To configure option settings from a file**
The following command configures several options in the ``aws:elb:loadbalancer`` namespace from a file::
aws elasticbeanstalk update-environment --environment-name my-env --option-settings file://options.json
``options.json`` is a JSON object defining several settings::
[
{
"Namespace": "aws:elb:healthcheck",
"OptionName": "Interval",
"Value": "15"
},
{
"Namespace": "aws:elb:healthcheck",
"OptionName": "Timeout",
"Value": "8"
},
{
"Namespace": "aws:elb:healthcheck",
"OptionName": "HealthyThreshold",
"Value": "2"
},
{
"Namespace": "aws:elb:healthcheck",
"OptionName": "UnhealthyThreshold",
"Value": "3"
}
]
Output::
{
"ApplicationName": "my-app",
"EnvironmentName": "my-env",
"VersionLabel": "7f58-stage-150812_025409",
"Status": "Updating",
"EnvironmentId": "e-wtp2rpqsej",
"EndpointURL": "awseb-e-w-AWSEBLoa-14XB83101Q4L-104QXY80921.sa-east-1.elb.amazonaws.com",
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
"CNAME": "my-env.elasticbeanstalk.com",
"Health": "Grey",
"AbortableOperationInProgress": true,
"Tier": {
"Version": " ",
"Type": "Standard",
"Name": "WebServer"
},
"DateUpdated": "2015-08-12T18:15:23.804Z",
"DateCreated": "2015-08-07T20:48:49.599Z"
}
For more information about namespaces and supported options, see `Option Values`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Option Values`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
**To swap environment CNAMEs**
The following command swaps the assigned subdomains of two environments::
aws elasticbeanstalk swap-environment-cnames --source-environment-name my-env-blue --destination-environment-name my-env-green
**To view solution stacks**
The following command lists solution stacks for all currently available platform configurations and any that you have used in the past::
aws elasticbeanstalk list-available-solution-stacks
Output (abbreviated)::
{
"SolutionStacks": [
"64bit Amazon Linux 2015.03 v2.0.0 running Node.js",
"64bit Amazon Linux 2015.03 v2.0.0 running PHP 5.6",
"64bit Amazon Linux 2015.03 v2.0.0 running PHP 5.5",
"64bit Amazon Linux 2015.03 v2.0.0 running PHP 5.4",
"64bit Amazon Linux 2015.03 v2.0.0 running Python 3.4",
"64bit Amazon Linux 2015.03 v2.0.0 running Python 2.7",
"64bit Amazon Linux 2015.03 v2.0.0 running Python",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 2.2 (Puma)",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 2.2 (Passenger Standalone)",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 2.1 (Puma)",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 2.1 (Passenger Standalone)",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 2.0 (Puma)",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 2.0 (Passenger Standalone)",
"64bit Amazon Linux 2015.03 v2.0.0 running Ruby 1.9.3",
"64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
"64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 7 Java 7",
"64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 7 Java 6",
"64bit Windows Server Core 2012 R2 running IIS 8.5",
"64bit Windows Server 2012 R2 running IIS 8.5",
"64bit Windows Server 2012 running IIS 8",
"64bit Windows Server 2008 R2 running IIS 7.5",
"64bit Amazon Linux 2015.03 v2.0.0 running Docker 1.6.2",
"64bit Amazon Linux 2015.03 v2.0.0 running Multi-container Docker 1.6.2 (Generic)",
"64bit Debian jessie v2.0.0 running GlassFish 4.1 Java 8 (Preconfigured - Docker)",
"64bit Debian jessie v2.0.0 running GlassFish 4.0 Java 7 (Preconfigured - Docker)",
"64bit Debian jessie v2.0.0 running Go 1.4 (Preconfigured - Docker)",
"64bit Debian jessie v2.0.0 running Go 1.3 (Preconfigured - Docker)",
"64bit Debian jessie v2.0.0 running Python 3.4 (Preconfigured - Docker)"
],
"SolutionStackDetails": [
{
"PermittedFileTypes": [
"zip"
],
"SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Node.js"
},
...
]
}
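The full list can be long; one way to narrow it is the CLI's built-in JMESPath filtering. A sketch that keeps only the Python stacks (the filter expression is illustrative):

```shell
aws elasticbeanstalk list-available-solution-stacks \
  --query 'SolutionStacks[?contains(@, `Python`)]' --output text
```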
**To check the availability of a CNAME**
The following command checks the availability of the subdomain ``my-cname.elasticbeanstalk.com``::
aws elasticbeanstalk check-dns-availability --cname-prefix my-cname
Output::
{
"Available": true,
"FullyQualifiedCNAME": "my-cname.elasticbeanstalk.com"
}
**To view information about the AWS resources in your environment**
The following command retrieves information about resources in an environment named ``my-env``::
aws elasticbeanstalk describe-environment-resources --environment-name my-env
Output::
{
"EnvironmentResources": {
"EnvironmentName": "my-env",
"AutoScalingGroups": [
{
"Name": "awseb-e-qu3fyyjyjs-stack-AWSEBAutoScalingGroup-QSB2ZO88SXZT"
}
],
"Triggers": [],
"LoadBalancers": [
{
"Name": "awseb-e-q-AWSEBLoa-1EEPZ0K98BIF0"
}
],
"Queues": [],
"Instances": [
{
"Id": "i-0c91c786"
}
],
"LaunchConfigurations": [
{
"Name": "awseb-e-qu3fyyjyjs-stack-AWSEBAutoScalingLaunchConfiguration-1UUVQIBC96TQ2"
}
]
}
}
**To view environment health**
The following command retrieves health information for instances in an environment named ``my-env``::
aws elasticbeanstalk describe-instances-health --environment-name my-env --attribute-names All
Output::
{
"InstanceHealthList": [
{
"InstanceId": "i-08691cc7",
"ApplicationMetrics": {
"Duration": 10,
"Latency": {
"P99": 0.006,
"P75": 0.002,
"P90": 0.004,
"P95": 0.005,
"P85": 0.003,
"P10": 0.0,
"P999": 0.006,
"P50": 0.001
},
"RequestCount": 48,
"StatusCodes": {
"Status3xx": 0,
"Status2xx": 47,
"Status5xx": 0,
"Status4xx": 1
}
},
"System": {
"LoadAverage": [
0.0,
0.02,
0.05
],
"CPUUtilization": {
"SoftIRQ": 0.1,
"IOWait": 0.2,
"System": 0.3,
"Idle": 97.8,
"User": 1.5,
"IRQ": 0.0,
"Nice": 0.1
}
},
"Color": "Green",
"HealthStatus": "Ok",
"LaunchedAt": "2015-08-13T19:17:09Z",
"Causes": []
}
],
"RefreshedAt": "2015-08-20T21:09:08Z"
}
Health information is only available for environments with enhanced health reporting enabled. For more information, see `Enhanced Health Reporting and Monitoring`_ in the *AWS Elastic Beanstalk Developer Guide*.
.. _`Enhanced Health Reporting and Monitoring`: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced.html
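Enhanced health can also be enabled on an existing environment through ``update-environment``. A sketch, assuming your platform version supports the ``aws:elasticbeanstalk:healthreporting:system`` namespace:

```shell
aws elasticbeanstalk update-environment --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:healthreporting:system,OptionName=SystemType,Value=enhanced
```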
**To change an application's description**
The following command updates the description of an application named ``my-app``::
aws elasticbeanstalk update-application --application-name my-app --description "my Elastic Beanstalk application"
Output::
{
"Application": {
"ApplicationName": "my-app",
"Description": "my Elastic Beanstalk application",
"Versions": [
"2fba-stage-150819_234450",
"bf07-stage-150820_214945",
"93f8",
"fd7c-stage-150820_000431",
"22a0-stage-150819_185942"
],
"DateCreated": "2015-08-13T19:15:50.449Z",
"ConfigurationTemplates": [],
"DateUpdated": "2015-08-20T22:34:56.195Z"
}
}
**To create a new application**
The following command creates a new application named "MyApp"::
aws elasticbeanstalk create-application --application-name MyApp --description "my application"
The ``create-application`` command only configures the application's name and description. To upload source code for the application, create an initial version of the application using ``create-application-version``. ``create-application-version`` also has an ``auto-create-application`` option that lets you create the application and the application version in one step.
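For example, the following sketch creates the application and an initial version in one step (the S3 bucket and key names are placeholders):

```shell
aws elasticbeanstalk create-application-version --application-name MyApp \
  --version-label v1 --auto-create-application \
  --source-bundle S3Bucket=my-bucket,S3Key=my-app-v1.zip
```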
Output::
{
"Application": {
"ApplicationName": "MyApp",
"ConfigurationTemplates": [],
"DateUpdated": "2015-02-12T18:32:21.181Z",
"Description": "my application",
"DateCreated": "2015-02-12T18:32:21.181Z"
}
}
Describe Resize
---------------
This example describes the latest resize of a cluster. The request was for 3 nodes of type ``dw.hs1.8xlarge``.
Command::
aws redshift describe-resize --cluster-identifier mycluster
Result::
{
"Status": "NONE",
"TargetClusterType": "multi-node",
"TargetNodeType": "dw.hs1.8xlarge",
"ResponseMetadata": {
"RequestId": "9f52b0b4-7733-11e2-aa9b-318b2909bd27"
},
"TargetNumberOfNodes": "3"
}
Associate a Security Group with a Cluster
-----------------------------------------
This example shows how to associate a cluster security group with the specified cluster.
Command::
aws redshift modify-cluster --cluster-identifier mycluster --cluster-security-groups mysecuritygroup
Modify the Maintenance Window for a Cluster
-------------------------------------------
This shows how to change the weekly preferred maintenance window for a cluster to be the minimum four hour window
starting Sundays at 11:15 PM, and ending Mondays at 3:15 AM.
Command::
aws redshift modify-cluster --cluster-identifier mycluster --preferred-maintenance-window Sun:23:15-Mon:03:15
Change the Master Password for the Cluster
------------------------------------------
This example shows how to change the master password for a cluster.
Command::
aws redshift modify-cluster --cluster-identifier mycluster --master-user-password A1b2c3d4
Revoke Access from an EC2 Security Group
----------------------------------------
This example revokes access to a named Amazon EC2 security group.
Command::
aws redshift revoke-cluster-security-group-ingress --cluster-security-group-name mysecuritygroup --ec2-security-group-name myec2securitygroup --ec2-security-group-owner-id 123445677890
Revoking Access to a CIDR range
-------------------------------
This example revokes access to a CIDR range.
Command::
aws redshift revoke-cluster-security-group-ingress --cluster-security-group-name mysecuritygroup --cidrip 192.168.100.100/32
Describe Reserved Nodes
-----------------------
This example shows a reserved node offering that has been purchased.
Command::
aws redshift describe-reserved-nodes
Result::
{
"ResponseMetadata": {
"RequestId": "bc29ce2e-7600-11e2-9949-4b361e7420b7"
},
"ReservedNodes": [
{
"OfferingType": "Heavy Utilization",
"FixedPrice": "",
"NodeType": "dw.hs1.xlarge",
"ReservedNodeId": "1ba8e2e3-bc01-4d65-b35d-a4a3e931547e",
"UsagePrice": "",
"RecurringCharges": [
{
"RecurringChargeAmount": "",
"RecurringChargeFrequency": "Hourly"
} ],
"NodeCount": 1,
"State": "payment-pending",
"StartTime": "2013-02-13T17:08:39.051Z",
"Duration": 31536000,
"ReservedNodeOfferingId": "ceb6a579-cf4c-4343-be8b-d832c45ab51c"
}
]
}
Describe All Events
-------------------
This example returns all events. By default, the output is in JSON format.
Command::
aws redshift describe-events
Result::
{
"Events": [
{
"Date": "2013-01-22T19:17:03.640Z",
"SourceIdentifier": "myclusterparametergroup",
"Message": "Cluster parameter group myclusterparametergroup has been created.",
"SourceType": "cluster-parameter-group"
} ],
"ResponseMetadata": {
"RequestId": "9f056111-64c9-11e2-9390-ff04f2c1e638"
}
}
You can also obtain the same information in text format using the ``--output text`` option.
Command::
aws redshift describe-events --output text
Result::
2013-01-22T19:17:03.640Z myclusterparametergroup Cluster parameter group myclusterparametergroup has been created. cluster-parameter-group
RESPONSEMETADATA 8e5fe765-64c9-11e2-bce3-e56f52c50e17
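To limit the results, the call can be scoped to a particular source and time window. A sketch (the identifier is a placeholder, and ``--duration`` is in minutes):

```shell
aws redshift describe-events --source-identifier myclusterparametergroup \
  --source-type cluster-parameter-group --duration 120
```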
Create a Cluster with Minimal Parameters
----------------------------------------
This example creates a cluster with the minimal set of parameters. By default, the output is in JSON format.
Command::
aws redshift create-cluster --node-type dw.hs1.xlarge --number-of-nodes 2 --master-username adminuser --master-user-password TopSecret1 --cluster-identifier mycluster
Result::
{
"Cluster": {
"NodeType": "dw.hs1.xlarge",
"ClusterVersion": "1.0",
"PubliclyAccessible": "true",
"MasterUsername": "adminuser",
"ClusterParameterGroups": [
{
"ParameterApplyStatus": "in-sync",
"ParameterGroupName": "default.redshift-1.0"
} ],
"ClusterSecurityGroups": [
{
"Status": "active",
"ClusterSecurityGroupName": "default"
} ],
"AllowVersionUpgrade": true,
"VpcSecurityGroups": [],
"PreferredMaintenanceWindow": "sat:03:30-sat:04:00",
"AutomatedSnapshotRetentionPeriod": 1,
"ClusterStatus": "creating",
"ClusterIdentifier": "mycluster",
"DBName": "dev",
"NumberOfNodes": 2,
"PendingModifiedValues": {
"MasterUserPassword": "****"
}
},
"ResponseMetadata": {
"RequestId": "7cf4bcfc-64dd-11e2-bea9-49e0ce183f07"
}
}
Create a Cluster Subnet Group
-----------------------------
This example creates a new cluster subnet group.
Command::
aws redshift create-cluster-subnet-group --cluster-subnet-group-name mysubnetgroup --description "My subnet group" --subnet-ids subnet-763fdd1c
Result::
{
"ClusterSubnetGroup": {
"Subnets": [
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-763fdd1c",
"SubnetAvailabilityZone": {
"Name": "us-east-1a"
}
} ],
"VpcId": "vpc-7e3fdd14",
"SubnetGroupStatus": "Complete",
"Description": "My subnet group",
"ClusterSubnetGroupName": "mysubnetgroup"
},
"ResponseMetadata": {
"RequestId": "500b8ce2-698f-11e2-9790-fd67517fb6fd"
}
}
Delete a Cluster Snapshot
-------------------------
This example deletes a cluster snapshot.
Command::
aws redshift delete-cluster-snapshot --snapshot-identifier my-snapshot-id
Creating a Cluster Security Group
---------------------------------
This example creates a new cluster security group. By default, the output is in JSON format.
Command::
aws redshift create-cluster-security-group --cluster-security-group-name mysecuritygroup --description "This is my cluster security group"
Result::
{
"create_cluster_security_group_response": {
"create_cluster_security_group_result": {
"cluster_security_group": {
"description": "This is my cluster security group",
"owner_id": "300454760768",
"cluster_security_group_name": "mysecuritygroup",
"ec2_security_groups": [],
"ip_ranges": []
}
},
"response_metadata": {
"request_id": "5df486a0-343a-11e2-b0d8-d15d0ef48549"
}
}
}
You can also obtain the same information in text format using the ``--output text`` option.
Command::
aws redshift create-cluster-security-group --cluster-security-group-name mysecuritygroup --description "This is my cluster security group" --output text
Result::
This is my cluster security group 300454760768 mysecuritygroup
a0c0bfab-343a-11e2-95d2-c3dc9fe8ab57
Create a Cluster Parameter Group
--------------------------------
This example creates a new cluster parameter group.
Command::
aws redshift create-cluster-parameter-group --parameter-group-name myclusterparametergroup --parameter-group-family redshift-1.0 --description "My first cluster parameter group"
Result::
{
"ClusterParameterGroup": {
"ParameterGroupFamily": "redshift-1.0",
"Description": "My first cluster parameter group",
"ParameterGroupName": "myclusterparametergroup"
},
"ResponseMetadata": {
"RequestId": "739448f0-64cc-11e2-8f7d-3b939af52818"
}
}
Delete a Cluster Subnet Group
-----------------------------
This example deletes a cluster subnet group.
Command::
aws redshift delete-cluster-subnet-group --cluster-subnet-group-name mysubnetgroup
Result::
{
"ResponseMetadata": {
"RequestId": "253fbffd-6993-11e2-bc3a-47431073908a"
}
}
Retrieve the Parameters for a Specified Cluster Parameter Group
---------------------------------------------------------------
This example retrieves the parameters for the named parameter group. By default, the output is in JSON format.
Command::
aws redshift describe-cluster-parameters --parameter-group-name myclusterparametergroup
Result::
{
"Parameters": [
{
"Description": "Sets the display format for date and time values.",
"DataType": "string",
"IsModifiable": true,
"Source": "engine-default",
"ParameterValue": "ISO, MDY",
"ParameterName": "datestyle"
},
{
"Description": "Sets the number of digits displayed for floating-point values",
"DataType": "integer",
"IsModifiable": true,
"AllowedValues": "-15-2",
"Source": "engine-default",
"ParameterValue": "0",
"ParameterName": "extra_float_digits"
},
(...remaining output omitted...)
]
}
You can also obtain the same information in text format using the ``--output text`` option.
Command::
aws redshift describe-cluster-parameters --parameter-group-name myclusterparametergroup --output text
Result::
RESPONSEMETADATA cdac40aa-64cc-11e2-9e70-918437dd236d
Sets the display format for date and time values. string True engine-default ISO, MDY datestyle
Sets the number of digits displayed for floating-point values integer True -15-2 engine-default 0 extra_float_digits
This parameter applies a user-defined label to a group of queries that are run during the same session. string True engine-default default query_group
require ssl for all database connections boolean True true,false engine-default false require_ssl
Sets the schema search order for names that are not schema-qualified. string True engine-default $user, public search_path
Aborts any statement that takes over the specified number of milliseconds. integer True engine-default 0 statement_timeout
wlm json configuration string True engine-default [{"query_concurrency":5}] wlm_json_configuration
Delete a Cluster with No Final Cluster Snapshot
-----------------------------------------------
This example deletes a cluster, forcing data deletion so no final cluster snapshot
is created.
Command::
aws redshift delete-cluster --cluster-identifier mycluster --skip-final-cluster-snapshot
Delete a Cluster, Allowing a Final Cluster Snapshot
---------------------------------------------------
This example deletes a cluster, but specifies a final cluster snapshot.
Command::
aws redshift delete-cluster --cluster-identifier mycluster --final-cluster-snapshot-identifier myfinalsnapshot
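After issuing the delete, a script can block until the cluster is actually gone using the ``wait`` subcommand (a sketch; waiter support requires a reasonably recent CLI version):

```shell
aws redshift wait cluster-deleted --cluster-identifier mycluster
```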
Reboot a Cluster
----------------
This example reboots a cluster. By default, the output is in JSON format.
Command::
aws redshift reboot-cluster --cluster-identifier mycluster
Result::
{
"Cluster": {
"NodeType": "dw.hs1.xlarge",
"Endpoint": {
"Port": 5439,
"Address": "mycluster.coqoarplqhsn.us-east-1.redshift.amazonaws.com"
},
"ClusterVersion": "1.0",
"PubliclyAccessible": "true",
"MasterUsername": "adminuser",
"ClusterParameterGroups": [
{
"ParameterApplyStatus": "in-sync",
"ParameterGroupName": "default.redshift-1.0"
}
],
"ClusterSecurityGroups": [
{
"Status": "active",
"ClusterSecurityGroupName": "default"
}
],
"AllowVersionUpgrade": true,
"VpcSecurityGroups": [],
"AvailabilityZone": "us-east-1a",
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"PreferredMaintenanceWindow": "sun:23:15-mon:03:15",
"AutomatedSnapshotRetentionPeriod": 1,
"ClusterStatus": "rebooting",
"ClusterIdentifier": "mycluster",
"DBName": "dev",
"NumberOfNodes": 2,
"PendingModifiedValues": {}
},
"ResponseMetadata": {
"RequestId": "61c8b564-64e8-11e2-8f7d-3b939af52818"
}
}
Describe Reserved Node Offerings
--------------------------------
This example shows all of the reserved node offerings that are available for
purchase.
Command::
aws redshift describe-reserved-node-offerings
Result::
{
"ReservedNodeOfferings": [
{
"OfferingType": "Heavy Utilization",
"FixedPrice": "",
"NodeType": "dw.hs1.xlarge",
"UsagePrice": "",
"RecurringCharges": [
{
"RecurringChargeAmount": "",
"RecurringChargeFrequency": "Hourly"
} ],
"Duration": 31536000,
"ReservedNodeOfferingId": "ceb6a579-cf4c-4343-be8b-d832c45ab51c"
},
{
"OfferingType": "Heavy Utilization",
"FixedPrice": "",
"NodeType": "dw.hs1.8xlarge",
"UsagePrice": "",
"RecurringCharges": [
{
"RecurringChargeAmount": "",
"RecurringChargeFrequency": "Hourly"
} ],
"Duration": 31536000,
"ReservedNodeOfferingId": "e5a2ff3b-352d-4a9c-ad7d-373c4cab5dd2"
},
...remaining output omitted...
],
"ResponseMetadata": {
"RequestId": "8b1a1a43-75ff-11e2-9666-e142fe91ddd1"
}
}
If you want to purchase a reserved node offering, you can call ``purchase-reserved-node-offering`` using a valid
*ReservedNodeOfferingId*.
Get a Description of Default Cluster Parameters
-----------------------------------------------
This example returns a description of the default cluster parameters for the
``redshift-1.0`` family. By default, the output is in JSON format.
Command::
aws redshift describe-default-cluster-parameters --parameter-group-family redshift-1.0
Result::
{
"DefaultClusterParameters": {
"ParameterGroupFamily": "redshift-1.0",
"Parameters": [
{
"Description": "Sets the display format for date and time values.",
"DataType": "string",
"IsModifiable": true,
"Source": "engine-default",
"ParameterValue": "ISO, MDY",
"ParameterName": "datestyle"
},
{
"Description": "Sets the number of digits displayed for floating-point values",
"DataType": "integer",
"IsModifiable": true,
"AllowedValues": "-15-2",
"Source": "engine-default",
"ParameterValue": "0",
"ParameterName": "extra_float_digits"
},
(...remaining output omitted...)
]
}
}
.. tip:: To see a list of valid parameter group families, use the ``describe-cluster-parameter-groups`` command.
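A sketch of that lookup, using ``--query`` to print just the family names (the JMESPath expression is illustrative):

```shell
aws redshift describe-cluster-parameter-groups \
  --query 'ParameterGroups[].ParameterGroupFamily' --output text
```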
Get a Description of All Cluster Subnet Groups
----------------------------------------------
This example returns a description of all cluster subnet groups. By default, the output is in JSON format.
Command::
aws redshift describe-cluster-subnet-groups
Result::
{
"ClusterSubnetGroups": [
{
"Subnets": [
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-763fdd1c",
"SubnetAvailabilityZone": {
"Name": "us-east-1a"
}
}
],
"VpcId": "vpc-7e3fdd14",
"SubnetGroupStatus": "Complete",
"Description": "My subnet group",
"ClusterSubnetGroupName": "mysubnetgroup"
}
],
"ResponseMetadata": {
"RequestId": "37fa8c89-6990-11e2-8f75-ab4018764c77"
}
}
Revoke the Authorization of an AWS Account to Restore a Snapshot
----------------------------------------------------------------
This example revokes the authorization of the AWS account ``444455556666`` to
restore the snapshot ``my-snapshot-id``. By default, the output is in JSON
format.
Command::
aws redshift revoke-snapshot-access --snapshot-id my-snapshot-id --account-with-restore-access 444455556666
Result::
{
"Snapshot": {
"Status": "available",
"SnapshotCreateTime": "2013-07-17T22:04:18.947Z",
"EstimatedSecondsToCompletion": 0,
"AvailabilityZone": "us-east-1a",
"ClusterVersion": "1.0",
"MasterUsername": "adminuser",
"Encrypted": false,
"OwnerAccount": "111122223333",
"BackupProgressInMegabytes": 11.0,
"ElapsedTimeInSeconds": 0,
"DBName": "dev",
"CurrentBackupRateInMegabytesPerSecond": 0.1534,
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"ActualIncrementalBackupSizeInMegabytes": 11.0,
"SnapshotType": "manual",
"NodeType": "dw.hs1.xlarge",
"ClusterIdentifier": "mycluster",
"TotalBackupSizeInMegabytes": 20.0,
"Port": 5439,
"NumberOfNodes": 2,
"SnapshotIdentifier": "my-snapshot-id"
}
}
Purchase a Reserved Node
------------------------
This example shows how to purchase a reserved node offering. The ``reserved-node-offering-id`` is obtained by
calling ``describe-reserved-node-offerings``.
Command::
aws redshift purchase-reserved-node-offering --reserved-node-offering-id ceb6a579-cf4c-4343-be8b-d832c45ab51c
Result::
{
"ReservedNode": {
"OfferingType": "Heavy Utilization",
"FixedPrice": "",
"NodeType": "dw.hs1.xlarge",
"ReservedNodeId": "1ba8e2e3-bc01-4d65-b35d-a4a3e931547e",
"UsagePrice": "",
"RecurringCharges": [
{
"RecurringChargeAmount": "",
"RecurringChargeFrequency": "Hourly"
}
],
"NodeCount": 1,
"State": "payment-pending",
"StartTime": "2013-02-13T17:08:39.051Z",
"Duration": 31536000,
"ReservedNodeOfferingId": "ceb6a579-cf4c-4343-be8b-d832c45ab51c"
},
"ResponseMetadata": {
"RequestId": "01bda7bf-7600-11e2-b605-2568d7396e7f"
}
}
Get a Description of All Cluster Versions
-----------------------------------------
This example returns a description of all cluster versions. By default, the output is in JSON format.
Command::
aws redshift describe-cluster-versions
Result::
{
"ClusterVersions": [
{
"ClusterVersion": "1.0",
"Description": "Initial release",
"ClusterParameterGroupFamily": "redshift-1.0"
} ],
"ResponseMetadata": {
"RequestId": "16a53de3-64cc-11e2-bec0-17624ad140dd"
}
}
Describing All Orderable Cluster Options
----------------------------------------
This example returns descriptions of all orderable cluster options. By default, the output is in JSON format.
Command::
aws redshift describe-orderable-cluster-options
Result::
{
"OrderableClusterOptions": [
{
"NodeType": "dw.hs1.8xlarge",
"AvailabilityZones": [
{ "Name": "us-east-1a" },
{ "Name": "us-east-1b" },
{ "Name": "us-east-1c" } ],
"ClusterVersion": "1.0",
"ClusterType": "multi-node"
},
{
"NodeType": "dw.hs1.xlarge",
"AvailabilityZones": [
{ "Name": "us-east-1a" },
{ "Name": "us-east-1b" },
{ "Name": "us-east-1c" } ],
"ClusterVersion": "1.0",
"ClusterType": "multi-node"
},
{
"NodeType": "dw.hs1.xlarge",
"AvailabilityZones": [
{ "Name": "us-east-1a" },
{ "Name": "us-east-1b" },
{ "Name": "us-east-1c" } ],
"ClusterVersion": "1.0",
"ClusterType": "single-node"
} ],
"ResponseMetadata": {
"RequestId": "f6000035-64cb-11e2-9135-ff82df53a51a"
}
}
You can also obtain the same information in text format using the ``--output text`` option.
Command::
aws redshift describe-orderable-cluster-options --output text
Result::
dw.hs1.8xlarge 1.0 multi-node
us-east-1a
us-east-1b
us-east-1c
dw.hs1.xlarge 1.0 multi-node
us-east-1a
us-east-1b
us-east-1c
dw.hs1.xlarge 1.0 single-node
us-east-1a
us-east-1b
us-east-1c
RESPONSEMETADATA e648696b-64cb-11e2-bec0-17624ad140dd
Delete a Cluster Parameter Group
--------------------------------
This example deletes a cluster parameter group.
Command::
aws redshift delete-cluster-parameter-group --parameter-group-name myclusterparametergroup
Reset Parameters in a Parameter Group
-------------------------------------
This example shows how to reset all of the parameters in a parameter group.
Command::
aws redshift reset-cluster-parameter-group --parameter-group-name myclusterparametergroup --reset-all-parameters
Create a Cluster Snapshot
-------------------------
This example creates a new cluster snapshot. By default, the output is in JSON format.
Command::
aws redshift create-cluster-snapshot --cluster-identifier mycluster --snapshot-identifier my-snapshot-id
Result::
{
"Snapshot": {
"Status": "creating",
"SnapshotCreateTime": "2013-01-22T22:20:33.548Z",
"AvailabilityZone": "us-east-1a",
"ClusterVersion": "1.0",
"MasterUsername": "adminuser",
"DBName": "dev",
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"SnapshotType": "manual",
"NodeType": "dw.hs1.xlarge",
"ClusterIdentifier": "mycluster",
"Port": 5439,
"NumberOfNodes": "2",
"SnapshotIdentifier": "my-snapshot-id"
},
"ResponseMetadata": {
"RequestId": "f024d1a5-64e1-11e2-88c5-53eb05787dfb"
}
}
Copy a Cluster Snapshot
-----------------------
This example copies the snapshot ``cm:examplecluster-2013-01-22-19-27-58`` to a new snapshot named ``my-saved-snapshot-copy``. By default, the output is in JSON format.
Command::
aws redshift copy-cluster-snapshot --source-snapshot-identifier cm:examplecluster-2013-01-22-19-27-58 --target-snapshot-identifier my-saved-snapshot-copy
Result::
{
"Snapshot": {
"Status": "available",
"SnapshotCreateTime": "2013-01-22T19:27:58.931Z",
"AvailabilityZone": "us-east-1c",
"ClusterVersion": "1.0",
"MasterUsername": "adminuser",
"DBName": "dev",
"ClusterCreateTime": "2013-01-22T19:23:59.368Z",
"SnapshotType": "manual",
"NodeType": "dw.hs1.xlarge",
"ClusterIdentifier": "examplecluster",
"Port": 5439,
"NumberOfNodes": "2",
"SnapshotIdentifier": "my-saved-snapshot-copy"
},
"ResponseMetadata": {
"RequestId": "3b279691-64e3-11e2-bec0-17624ad140dd"
}
}
Authorize an AWS Account to Restore a Snapshot
----------------------------------------------
This example authorizes the AWS account ``444455556666`` to restore the snapshot ``my-snapshot-id``.
By default, the output is in JSON format.
Command::
aws redshift authorize-snapshot-access --snapshot-id my-snapshot-id --account-with-restore-access 444455556666
Result::
{
"Snapshot": {
"Status": "available",
"SnapshotCreateTime": "2013-07-17T22:04:18.947Z",
"EstimatedSecondsToCompletion": 0,
"AvailabilityZone": "us-east-1a",
"ClusterVersion": "1.0",
"MasterUsername": "adminuser",
"Encrypted": false,
"OwnerAccount": "111122223333",
"BackupProgressInMegabytes": 11.0,
"ElapsedTimeInSeconds": 0,
"DBName": "dev",
"CurrentBackupRateInMegabytesPerSecond: 0.1534,
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"ActualIncrementalBackupSizeInMegabytes"; 11.0,
"SnapshotType": "manual",
"NodeType": "dw.hs1.xlarge",
"ClusterIdentifier": "mycluster",
"TotalBackupSizeInMegabytes": 20.0,
"Port": 5439,
"NumberOfNodes": 2,
"SnapshotIdentifier": "my-snapshot-id"
}
}
awscli-1.10.1/awscli/examples/redshift/authorize-cluster-security-group-ingress.rst 0000666 4542626 0000144 00000001162 12652514124 031732 0 ustar pysdk-ci amazon 0000000 0000000 Authorizing Access to an EC2 Security Group
-------------------------------------------
This example authorizes access to a named Amazon EC2 security group.
Command::
aws redshift authorize-cluster-security-group-ingress --cluster-security-group-name mysecuritygroup --ec2-security-group-name myec2securitygroup --ec2-security-group-owner-id 123445677890
Authorizing Access to a CIDR range
----------------------------------
This example authorizes access to a CIDR range.
Command::
aws redshift authorize-cluster-security-group-ingress --cluster-security-group-name mysecuritygroup --cidrip 192.168.100.100/32
awscli-1.10.1/awscli/examples/redshift/modify-cluster-subnet-group.rst 0000666 4542626 0000144 00000002200 12652514124 027162 0 ustar pysdk-ci amazon 0000000 0000000 Modify the Subnets in a Cluster Subnet Group
--------------------------------------------
This example shows how to modify the list of subnets in a cluster subnet group. By default, the output is in JSON format.
Command::
aws redshift modify-cluster-subnet-group --cluster-subnet-group-name mysubnetgroup --subnet-ids subnet-763fdd1c subnet-ac830e9
Result::
{
"ClusterSubnetGroup":
{
"Subnets": [
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-763fdd1c",
"SubnetAvailabilityZone":
{ "Name": "us-east-1a" }
},
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-ac830e9",
"SubnetAvailabilityZone":
{ "Name": "us-east-1b" }
} ],
"VpcId": "vpc-7e3fdd14",
"SubnetGroupStatus": "Complete",
"Description": "My subnet group",
"ClusterSubnetGroupName": "mysubnetgroup"
},
"ResponseMetadata": {
"RequestId": "8da93e89-8372-f936-93a8-873918938197a"
}
}
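After modifying a subnet group, you may want to confirm that every subnet in the response reports an ``Active`` status before relying on the group. The helper below is a hypothetical sketch (not part of the CLI) that checks a response shaped like the sample output above:

```python
import json

# Sample response modeled on the ModifyClusterSubnetGroup output above.
response = json.loads("""
{
  "ClusterSubnetGroup": {
    "Subnets": [
      {"SubnetStatus": "Active", "SubnetIdentifier": "subnet-763fdd1c"},
      {"SubnetStatus": "Active", "SubnetIdentifier": "subnet-ac830e9"}
    ],
    "SubnetGroupStatus": "Complete"
  }
}
""")

def all_subnets_active(resp):
    """Return True when every subnet in the group reports Active status."""
    subnets = resp["ClusterSubnetGroup"]["Subnets"]
    return all(s["SubnetStatus"] == "Active" for s in subnets)

print(all_subnets_active(response))  # True for the sample above
```

In practice you would feed it the parsed JSON from ``aws redshift modify-cluster-subnet-group`` (or ``describe-cluster-subnet-groups``) rather than a literal string.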
awscli-1.10.1/awscli/examples/redshift/describe-cluster-parameter-groups.rst 0000666 4542626 0000144 00000001725 12652514124 030331 0 ustar pysdk-ci amazon 0000000 0000000 Get a Description of All Cluster Parameter Groups
-------------------------------------------------
This example returns a description of all cluster parameter groups for the
account, with column headers. By default, the output is in JSON format.
Command::
aws redshift describe-cluster-parameter-groups
Result::
{
"ParameterGroups": [
{
"ParameterGroupFamily": "redshift-1.0",
"Description": "My first cluster parameter group",
"ParameterGroupName": "myclusterparametergroup"
} ],
"ResponseMetadata": {
"RequestId": "8ceb8f6f-64cc-11e2-bea9-49e0ce183f07"
}
}
You can also obtain the same information in text format using the ``--output text`` option.
Command::
aws redshift describe-cluster-parameter-groups --output text
Result::
redshift-1.0 My first cluster parameter group myclusterparametergroup
RESPONSEMETADATA 9e665a36-64cc-11e2-8f7d-3b939af52818
awscli-1.10.1/awscli/examples/redshift/describe-cluster-security-groups.rst 0000666 4542626 0000144 00000001762 12652514124 030221 0 ustar pysdk-ci amazon 0000000 0000000 Get a Description of All Cluster Security Groups
------------------------------------------------
This example returns a description of all cluster security groups for the account.
By default, the output is in JSON format.
Command::
aws redshift describe-cluster-security-groups
Result::
{
"ClusterSecurityGroups": [
{
"OwnerId": "100447751468",
"Description": "default",
"ClusterSecurityGroupName": "default",
"EC2SecurityGroups": \[],
"IPRanges": [
{
"Status": "authorized",
"CIDRIP": "0.0.0.0/0"
}
]
},
{
"OwnerId": "100447751468",
"Description": "This is my cluster security group",
"ClusterSecurityGroupName": "mysecuritygroup",
"EC2SecurityGroups": \[],
"IPRanges": \[]
},
(...remaining output omitted...)
]
}
awscli-1.10.1/awscli/examples/redshift/modify-cluster-parameter-group.rst 0000666 4542626 0000144 00000001510 12652514124 027645 0 ustar pysdk-ci amazon 0000000 0000000 Modify a Parameter in a Parameter Group
---------------------------------------
This example shows how to modify the *wlm_json_configuration* parameter for workload management.
Command::
aws redshift modify-cluster-parameter-group --parameter-group-name myclusterparametergroup --parameters '{"parameter_name":"wlm_json_configuration","parameter_value":"[{\"user_group\":[\"example_user_group1\"],\"query_group\":[\"example_query_group1\"],\"query_concurrency\":7},{\"query_concurrency\":5}]"}'
Result::
{
"ParameterGroupStatus": "Your parameter group has been updated but changes won't get applied until you reboot the associated Clusters.",
"ParameterGroupName": "myclusterparametergroup",
"ResponseMetadata": {
"RequestId": "09974cc0-64cd-11e2-bea9-49e0ce183f07"
}
}
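The ``wlm_json_configuration`` value is a JSON string embedded inside a JSON argument, which makes the quoting on the command line easy to get wrong. One way to avoid hand-escaping is to build the argument programmatically; this is a sketch, not part of the CLI itself:

```python
import json

# Build the WLM configuration as plain Python data...
wlm_config = [
    {"user_group": ["example_user_group1"],
     "query_group": ["example_query_group1"],
     "query_concurrency": 7},
    {"query_concurrency": 5},
]

parameter = {
    "parameter_name": "wlm_json_configuration",
    # ...then serialize it: the API expects the value as a JSON *string*.
    "parameter_value": json.dumps(wlm_config),
}

# cli_argument is what you would pass to --parameters.
cli_argument = json.dumps(parameter)
print(cli_argument)
```

``json.dumps`` handles the inner-quote escaping automatically, so the double layer of backslashes in the shell example above is produced for you.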
awscli-1.10.1/awscli/examples/redshift/describe-cluster-snapshots.rst 0000666 4542626 0000144 00000004615 12652514124 027057 0 ustar pysdk-ci amazon 0000000 0000000 Get a Description of All Cluster Snapshots
------------------------------------------
This example returns a description of all cluster snapshots for the
account. By default, the output is in JSON format.
Command::
aws redshift describe-cluster-snapshots
Result::
{
"Snapshots": [
{
"Status": "available",
"SnapshotCreateTime": "2013-07-17T22:02:22.852Z",
"EstimatedSecondsToCompletion": -1,
"AvailabilityZone": "us-east-1a",
"ClusterVersion": "1.0",
"MasterUsername": "adminuser",
"Encrypted": false,
"OwnerAccount": "111122223333",
"BackupProgressInMegabytes": 20.0,
"ElapsedTimeInSeconds": 0,
"DBName": "dev",
"CurrentBackupRateInMegabytesPerSecond: 0.0,
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"ActualIncrementalBackupSizeInMegabytes"; 20.0
"SnapshotType": "automated",
"NodeType": "dw.hs1.xlarge",
"ClusterIdentifier": "mycluster",
"Port": 5439,
"TotalBackupSizeInMegabytes": 20.0,
"NumberOfNodes": "2",
"SnapshotIdentifier": "cm:mycluster-2013-01-22-22-04-18"
},
{
"EstimatedSecondsToCompletion": 0,
"OwnerAccount": "111122223333",
"CurrentBackupRateInMegabytesPerSecond: 0.1534,
"ActualIncrementalBackupSizeInMegabytes"; 11.0,
"NumberOfNodes": "2",
"Status": "available",
"ClusterVersion": "1.0",
"MasterUsername": "adminuser",
"AccountsWithRestoreAccess": [
{
"AccountID": "444455556666"
} ],
"TotalBackupSizeInMegabytes": 20.0,
"DBName": "dev",
"BackupProgressInMegabytes": 11.0,
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"ElapsedTimeInSeconds": 0,
"ClusterIdentifier": "mycluster",
"SnapshotCreateTime": "2013-07-17T22:04:18.947Z",
"AvailabilityZone": "us-east-1a",
"NodeType": "dw.hs1.xlarge",
"Encrypted": false,
"SnapshotType": "manual",
"Port": 5439,
"SnapshotIdentifier": "my-snapshot-id"
} ]
}
(...remaining output omitted...)
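The ``BackupProgressInMegabytes`` and ``TotalBackupSizeInMegabytes`` fields in each snapshot can be combined into a completion percentage. A hypothetical helper (not part of the CLI) over snapshot entries shaped like the sample output:

```python
def backup_percent(snapshot):
    """Percentage of the backup completed, from the size fields above."""
    total = snapshot.get("TotalBackupSizeInMegabytes", 0.0)
    done = snapshot.get("BackupProgressInMegabytes", 0.0)
    # Guard against a zero or missing total to avoid dividing by zero.
    return 100.0 * done / total if total else 0.0

# Values taken from the second snapshot in the sample output.
snap = {"TotalBackupSizeInMegabytes": 20.0, "BackupProgressInMegabytes": 11.0}
print(round(backup_percent(snap), 1))  # 55.0
```

Note that an in-progress automated snapshot may report ``EstimatedSecondsToCompletion`` of ``-1`` while sizes are still being estimated, so the percentage is only meaningful once a total size is available.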
awscli-1.10.1/awscli/examples/redshift/delete-cluster-security-group.rst 0000666 4542626 0000144 00000000331 12652514124 027507 0 ustar pysdk-ci amazon 0000000 0000000 Delete a Cluster Security Group
-------------------------------
This example deletes a cluster security group.
Command::
aws redshift delete-cluster-security-group --cluster-security-group-name mysecuritygroup
awscli-1.10.1/awscli/examples/redshift/describe-clusters.rst 0000666 4542626 0000144 00000003725 12652514124 025223 0 ustar pysdk-ci amazon 0000000 0000000 Get a Description of All Clusters
---------------------------------
This example returns a description of all clusters for the account. By default, the output is in JSON format.
Command::
aws redshift describe-clusters
Result::
{
"Clusters": [
{
"NodeType": "dw.hs1.xlarge",
"Endpoint": {
"Port": 5439,
"Address": "mycluster.coqoarplqhsn.us-east-1.redshift.amazonaws.com"
},
"ClusterVersion": "1.0",
"PubliclyAccessible": "true",
"MasterUsername": "adminuser",
"ClusterParameterGroups": [
{
"ParameterApplyStatus": "in-sync",
"ParameterGroupName": "default.redshift-1.0"
} ],
"ClusterSecurityGroups": [
{
"Status": "active",
"ClusterSecurityGroupName": "default"
} ],
"AllowVersionUpgrade": true,
"VpcSecurityGroups": \[],
"AvailabilityZone": "us-east-1a",
"ClusterCreateTime": "2013-01-22T21:59:29.559Z",
"PreferredMaintenanceWindow": "sat:03:30-sat:04:00",
"AutomatedSnapshotRetentionPeriod": 1,
"ClusterStatus": "available",
"ClusterIdentifier": "mycluster",
"DBName": "dev",
"NumberOfNodes": 2,
"PendingModifiedValues": {}
} ],
"ResponseMetadata": {
"RequestId": "65b71cac-64df-11e2-8f5b-e90bd6c77476"
}
}
You can also obtain the same information in text format using the ``--output text`` option.
Command::
aws redshift describe-clusters --output text
Result::
dw.hs1.xlarge 1.0 true adminuser True us-east-1a 2013-01-22T21:59:29.559Z sat:03:30-sat:04:00 1 available mycluster dev 2
ENDPOINT 5439 mycluster.coqoarplqhsn.us-east-1.redshift.amazonaws.com
in-sync default.redshift-1.0
active default
PENDINGMODIFIEDVALUES
RESPONSEMETADATA 934281a8-64df-11e2-b07c-f7fbdd006c67
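A common follow-up to ``describe-clusters`` is extracting each cluster's connection endpoint. The sketch below (a hypothetical helper, not part of the CLI) pulls the address and port out of a response shaped like the JSON sample above:

```python
def endpoints(response):
    """Map cluster identifier -> (address, port) for each cluster."""
    return {
        c["ClusterIdentifier"]: (c["Endpoint"]["Address"], c["Endpoint"]["Port"])
        for c in response["Clusters"]
        # A cluster that is still creating may not have an Endpoint yet.
        if "Endpoint" in c
    }

sample = {"Clusters": [{
    "ClusterIdentifier": "mycluster",
    "Endpoint": {
        "Address": "mycluster.coqoarplqhsn.us-east-1.redshift.amazonaws.com",
        "Port": 5439,
    },
}]}
print(endpoints(sample))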
awscli-1.10.1/awscli/examples/redshift/restore-from-cluster-snapshot.rst 0000666 4542626 0000144 00000002401 12652514124 027527 0 ustar pysdk-ci amazon 0000000 0000000 Restore a Cluster From a Snapshot
---------------------------------
This example restores a cluster from a snapshot.
Command::
aws redshift restore-from-cluster-snapshot --cluster-identifier mycluster-clone --snapshot-identifier my-snapshot-id
Result::
{
"Cluster": {
"NodeType": "dw.hs1.xlarge",
"ClusterVersion": "1.0",
"PubliclyAccessible": "true",
"MasterUsername": "adminuser",
"ClusterParameterGroups": [
{
"ParameterApplyStatus": "in-sync",
"ParameterGroupName": "default.redshift-1.0"
}
],
"ClusterSecurityGroups": [
{
"Status": "active",
"ClusterSecurityGroupName": "default"
}
],
"AllowVersionUpgrade": true,
"VpcSecurityGroups": \[],
"PreferredMaintenanceWindow": "sun:23:15-mon:03:15",
"AutomatedSnapshotRetentionPeriod": 1,
"ClusterStatus": "creating",
"ClusterIdentifier": "mycluster-clone",
"DBName": "dev",
"NumberOfNodes": 2,
"PendingModifiedValues": {}
},
"ResponseMetadata": {
"RequestId": "77fd512b-64e3-11e2-8f5b-e90bd6c77476"
}
}
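The restore returns immediately with ``ClusterStatus`` set to ``creating``; the cluster is only usable once the status becomes ``available``. A simple polling loop can wait for that transition. This sketch stubs out the status lookup so the loop logic is visible without calling AWS (in real use, ``get_status`` would run ``aws redshift describe-clusters`` and sleep between attempts):

```python
def wait_for_available(get_status, max_attempts=60):
    """Poll get_status() until it returns "available" or attempts run out."""
    for _ in range(max_attempts):
        status = get_status()
        if status == "available":
            return True
        # In real use: time.sleep(30) between describe-clusters calls here.
    return False

# Simulated status sequence for a cluster being restored.
statuses = iter(["creating", "creating", "available"])
print(wait_for_available(lambda: next(statuses)))  # True
```

Newer CLI versions also offer ``aws redshift wait cluster-available``, which wraps the same idea; the sketch is just the underlying pattern.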
awscli-1.10.1/awscli/examples/deploy/ 0000777 4542626 0000144 00000000000 12652514126 020526 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/deploy/get-on-premises-instance.rst 0000666 4542626 0000144 00000001213 12652514124 026073 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about an on-premises instance**
This example gets information about an on-premises instance.
Command::
aws deploy get-on-premises-instance --instance-name AssetTag12010298EX
Output::
{
"instanceInfo": {
"iamUserArn": "arn:aws:iam::80398EXAMPLE:user/AWS/CodeDeploy/AssetTag12010298EX",
"tags": [
{
"Value": "CodeDeployDemo-OnPrem",
"Key": "Name"
}
],
"instanceName": "AssetTag12010298EX",
"registerTime": 1425579465.228,
"instanceArn": "arn:aws:codedeploy:us-east-1:80398EXAMPLE:instance/AssetTag12010298EX_4IwLNI2Alh"
}
} awscli-1.10.1/awscli/examples/deploy/deregister.rst 0000666 4542626 0000144 00000001610 12652514124 023411 0 ustar pysdk-ci amazon 0000000 0000000 **To deregister an on-premises instance**
This example deregisters an on-premises instance with AWS CodeDeploy. It does not delete the IAM user that is associated with the instance. It disassociates in AWS CodeDeploy the on-premises tags from the instance. It does not uninstall the AWS CodeDeploy Agent from the instance nor remove the on-premises configuration file from the instance.
Command::
aws deploy deregister --instance-name AssetTag12010298EX --no-delete-iam-user --region us-west-2
Output::
Retrieving on-premises instance information... DONE
IamUserArn: arn:aws:iam::80398EXAMPLE:user/AWS/CodeDeploy/AssetTag12010298EX
Tags: Key=Name,Value=CodeDeployDemo-OnPrem
Removing tags from the on-premises instance... DONE
Deregistering the on-premises instance... DONE
Run the following command on the on-premises instance to uninstall the codedeploy-agent:
aws deploy uninstall awscli-1.10.1/awscli/examples/deploy/delete-deployment-group.rst 0000666 4542626 0000144 00000000444 12652514124 026032 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a deployment group**
This example deletes a deployment group that is associated with the specified application.
Command::
aws deploy delete-deployment-group --application-name WordPress_App --deployment-group-name WordPress_DG
Output::
{
"hooksNotCleanedUp": []
} awscli-1.10.1/awscli/examples/deploy/list-on-premises-instances.rst 0000666 4542626 0000144 00000000777 12652514124 026470 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about one or more on-premises instances**
This example gets a list of available on-premises instance names for instances that are registered in AWS CodeDeploy and also have the specified on-premises instance tag associated in AWS CodeDeploy with the instance.
Command::
aws deploy list-on-premises-instances --registration-status Registered --tag-filters Key=Name,Value=CodeDeployDemo-OnPrem,Type=KEY_AND_VALUE
Output::
{
"instanceNames": [
"AssetTag12010298EX"
]
} awscli-1.10.1/awscli/examples/deploy/list-deployment-groups.rst 0000666 4542626 0000144 00000000605 12652514124 025725 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about deployment groups**
This example displays information about all deployment groups that are associated with the specified application.
Command::
aws deploy list-deployment-groups --application-name WordPress_App
Output::
{
"applicationName": "WordPress_App",
"deploymentGroups": [
"WordPress_DG",
"WordPress_Beta_DG"
]
} awscli-1.10.1/awscli/examples/deploy/delete-application.rst 0000666 4542626 0000144 00000000321 12652514124 025015 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an application**
This example deletes an application that is associated with the user's AWS account.
Command::
aws deploy delete-application --application-name WordPress_App
Output::
None. awscli-1.10.1/awscli/examples/deploy/get-deployment-config.rst 0000666 4542626 0000144 00000001135 12652514124 025456 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about a deployment configuration**
This example displays information about a deployment configuration that is associated with the user's AWS account.
Command::
aws deploy get-deployment-config --deployment-config-name ThreeQuartersHealthy
Output::
{
"deploymentConfigInfo": {
"deploymentConfigId": "bf6b390b-61d3-4f24-8911-a1664EXAMPLE",
"minimumHealthyHosts": {
"type": "FLEET_PERCENT",
"value": 75
},
"createTime": 1411081164.379,
"deploymentConfigName": "ThreeQuartersHealthy"
}
} awscli-1.10.1/awscli/examples/deploy/get-deployment.rst 0000666 4542626 0000144 00000002227 12652514124 024216 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about a deployment**
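With a ``FLEET_PERCENT`` configuration, CodeDeploy converts the percentage to an absolute instance count at deployment time, rounding fractional instances up. The helper below is a hypothetical illustration of that conversion, not a CLI feature:

```python
import math

def minimum_healthy_hosts(config_info, fleet_size):
    """Absolute number of instances that must stay healthy during a deployment."""
    mhh = config_info["minimumHealthyHosts"]
    if mhh["type"] == "FLEET_PERCENT":
        # Fractional instances round up, per the CodeDeploy docs.
        return math.ceil(fleet_size * mhh["value"] / 100.0)
    return mhh["value"]  # type == "HOST_COUNT"

config = {"minimumHealthyHosts": {"type": "FLEET_PERCENT", "value": 75}}
print(minimum_healthy_hosts(config, 10))  # 8
```

So for the ``ThreeQuartersHealthy`` configuration above, a 10-instance deployment group must keep at least 8 instances healthy at all times.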
This example displays information about a deployment that is associated with the user's AWS account.
Command::
aws deploy get-deployment --deployment-id d-USUAELQEX
Output::
{
"deploymentInfo": {
"applicationName": "WordPress_App",
"status": "Succeeded",
"deploymentOverview": {
"Failed": 0,
"InProgress": 0,
"Skipped": 0,
"Succeeded": 1,
"Pending": 0
},
"deploymentConfigName": "CodeDeployDefault.OneAtATime",
"creator": "user",
"description": "My WordPress app deployment",
"revision": {
"revisionType": "S3",
"s3Location": {
"bundleType": "zip",
"eTag": "\"dd56cfd59d434b8e768f9d77fEXAMPLE\"",
"bucket": "CodeDeployDemoBucket",
"key": "WordPressApp.zip"
}
},
"deploymentId": "d-USUAELQEX",
"deploymentGroupName": "WordPress_DG",
"createTime": 1409764576.589,
"completeTime": 1409764596.101,
"ignoreApplicationStopFailures": false
}
} awscli-1.10.1/awscli/examples/deploy/list-deployment-instances.rst 0000666 4542626 0000144 00000000564 12652514124 026401 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about deployment instances**
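The ``deploymentOverview`` counts make it easy to check programmatically whether a deployment finished cleanly on every instance. A hypothetical helper (not part of the CLI) over a ``deploymentInfo`` object shaped like the sample above:

```python
def deployment_succeeded(info):
    """True only if the deployment succeeded with no failed, skipped,
    pending, or in-progress instances."""
    overview = info["deploymentOverview"]
    not_succeeded = (overview["Failed"] + overview["Skipped"]
                     + overview["InProgress"] + overview["Pending"])
    return info["status"] == "Succeeded" and not_succeeded == 0

info = {
    "status": "Succeeded",
    "deploymentOverview": {"Failed": 0, "InProgress": 0, "Skipped": 0,
                           "Succeeded": 1, "Pending": 0},
}
print(deployment_succeeded(info))  # True
```

Note that a deployment can report overall ``status`` of ``Succeeded`` even when some instances were skipped, which is why the helper checks the per-instance counts as well.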
This example displays information about all deployment instances that are associated with the specified deployment.
Command::
aws deploy list-deployment-instances --deployment-id d-9DI6I4EX --instance-status-filter Succeeded
Output::
{
"instancesList": [
"i-8c4490EX",
"i-7d5389EX"
]
} awscli-1.10.1/awscli/examples/deploy/push.rst 0000666 4542626 0000144 00000001640 12652514124 022236 0 ustar pysdk-ci amazon 0000000 0000000 **To bundle and deploy an AWS CodeDeploy compatible application revision to Amazon S3**
This example bundles and deploys an application revision to Amazon S3 and then associates the application revision with the specified application.
Use the output of the push command to create a deployment that uses the uploaded application revision.
Command::
aws deploy push --application-name WordPress_App --description "This is my deployment" --ignore-hidden-files --s3-location s3://CodeDeployDemoBucket/WordPressApp.zip --source /tmp/MyLocalDeploymentFolder/
Output::
To deploy with this revision, run:
aws deploy create-deployment --application-name WordPress_App --deployment-config-name <deployment-config-name> --deployment-group-name <deployment-group-name> --s3-location bucket=CodeDeployDemoBucket,key=WordPressApp.zip,bundleType=zip,eTag="cecc9b8a08eac650a6e71fdb88EXAMPLE",version=LFsJAUd_2J4VWXfvKtvi79L8EXAMPLE
This example creates a custom deployment configuration and associates it with the user's AWS account.
Command::
aws deploy create-deployment-config --deployment-config-name ThreeQuartersHealthy --minimum-healthy-hosts type=FLEET_PERCENT,value=75
Output::
{
"deploymentConfigId": "bf6b390b-61d3-4f24-8911-a1664EXAMPLE"
} awscli-1.10.1/awscli/examples/deploy/get-application.rst 0000666 4542626 0000144 00000000704 12652514124 024337 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about an application**
This example displays information about an application that is associated with the user's AWS account.
Command::
aws deploy get-application --application-name WordPress_App
Output::
{
"application": {
"applicationName": "WordPress_App",
"applicationId": "d9dd6993-f171-44fa-a811-211e4EXAMPLE",
"createTime": 1407878168.078,
"linkedToGitHub": false
}
} awscli-1.10.1/awscli/examples/deploy/batch-get-on-premises-instances.rst 0000666 4542626 0000144 00000002155 12652514124 027343 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about one or more on-premises instances**
This example gets information about two on-premises instances.
Command::
aws deploy batch-get-on-premises-instances --instance-names AssetTag12010298EX AssetTag23121309EX
Output::
{
"instanceInfos": [
{
"iamUserArn": "arn:aws:iam::80398EXAMPLE:user/AWS/CodeDeploy/AssetTag12010298EX",
"tags": [
{
"Value": "CodeDeployDemo-OnPrem",
"Key": "Name"
}
],
"instanceName": "AssetTag12010298EX",
"registerTime": 1425579465.228,
"instanceArn": "arn:aws:codedeploy:us-west-2:80398EXAMPLE:instance/AssetTag12010298EX_4IwLNI2Alh"
},
{
"iamUserArn": "arn:aws:iam::80398EXAMPLE:user/AWS/CodeDeploy/AssetTag23121309EX",
"tags": [
{
"Value": "CodeDeployDemo-OnPrem",
"Key": "Name"
}
],
"instanceName": "AssetTag23121309EX",
"registerTime": 1425595585.988,
"instanceArn": "arn:aws:codedeploy:us-west-2:80398EXAMPLE:instance/AssetTag23121309EX_PomUy64Was"
}
]
} awscli-1.10.1/awscli/examples/deploy/register-on-premises-instance.rst 0000666 4542626 0000144 00000000714 12652514124 027145 0 ustar pysdk-ci amazon 0000000 0000000 **To register an on-premises instance**
This example registers an on-premises instance with AWS CodeDeploy. It does not create the specified IAM user, nor does it associate in AWS CodeDeploy any on-premises instances tags with the registered instance.
Command::
aws deploy register-on-premises-instance --instance-name AssetTag12010298EX --iam-user-arn arn:aws:iam::80398EXAMPLE:user/CodeDeployDemoUser-OnPrem
Output::
This command produces no output. awscli-1.10.1/awscli/examples/deploy/list-deployment-configs.rst 0000666 4542626 0000144 00000000672 12652514124 026042 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about deployment configurations**
This example displays information about all deployment configurations that are associated with the user's AWS account.
Command::
aws deploy list-deployment-configs
Output::
{
"deploymentConfigsList": [
"ThreeQuartersHealthy",
"CodeDeployDefault.AllAtOnce",
"CodeDeployDefault.HalfAtATime",
"CodeDeployDefault.OneAtATime"
]
} awscli-1.10.1/awscli/examples/deploy/uninstall.rst 0000666 4542626 0000144 00000000731 12652514124 023270 0 ustar pysdk-ci amazon 0000000 0000000 **To uninstall an on-premises instance**
This example uninstalls the AWS CodeDeploy Agent from the on-premises instance, and it removes the on-premises configuration file from the instance. It does not deregister the instance in AWS CodeDeploy, nor disassociate any on-premises instance tags in AWS CodeDeploy from the instance, nor delete the IAM user that is associated with the instance.
Command::
aws deploy uninstall
Output::
This command produces no output. awscli-1.10.1/awscli/examples/deploy/list-deployments.rst 0000666 4542626 0000144 00000000773 12652514124 024601 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about deployments**
This example displays information about all deployments that are associated with the specified application and deployment group.
Command::
aws deploy list-deployments --application-name WordPress_App --create-time-range start=2014-08-19T00:00:00,end=2014-08-20T00:00:00 --deployment-group-name WordPress_DG --include-only-statuses Failed
Output::
{
"deployments": [
"d-QA4G4F9EX",
"d-1MVNYOEEX",
"d-WEWRE8BEX"
]
} awscli-1.10.1/awscli/examples/deploy/list-application-revisions.rst 0000666 4542626 0000144 00000001727 12652514124 026560 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about application revisions**
This example displays information about all application revisions that are associated with the specified application.
Command::
aws deploy list-application-revisions --application-name WordPress_App --s-3-bucket CodeDeployDemoBucket --deployed exclude --s-3-key-prefix WordPress_ --sort-by lastUsedTime --sort-order descending
Output::
{
"revisions": [
{
"revisionType": "S3",
"s3Location": {
"version": "uTecLusvCB_JqHFXtfUcyfV8bEXAMPLE",
"bucket": "CodeDeployDemoBucket",
"key": "WordPress_App.zip",
"bundleType": "zip"
}
},
{
"revisionType": "S3",
"s3Location": {
"version": "tMk.UxgDpMEVb7V187ZM6wVAWEXAMPLE",
"bucket": "CodeDeployDemoBucket",
"key": "WordPress_App_2-0.zip",
"bundleType": "zip"
}
}
]
}
awscli-1.10.1/awscli/examples/deploy/register.rst 0000666 4542626 0000144 00000001704 12652514124 023104 0 ustar pysdk-ci amazon 0000000 0000000 **To register an on-premises instance**
This example registers an on-premises instance with AWS CodeDeploy, associates in AWS CodeDeploy the specified on-premises instance tag with the registered instance, and creates an on-premises configuration file that can be copied to the instance. It does not create the IAM user, nor does it install the AWS CodeDeploy Agent on the instance.
Command::
aws deploy register --instance-name AssetTag12010298EX --iam-user-arn arn:aws:iam::80398EXAMPLE:user/CodeDeployUser-OnPrem --tags Key=Name,Value=CodeDeployDemo-OnPrem --region us-west-2
Output::
Registering the on-premises instance... DONE
Adding tags to the on-premises instance... DONE
Copy the on-premises configuration file named codedeploy.onpremises.yml to the on-premises instance, and run the following command on the on-premises instance to install and configure the AWS CodeDeploy Agent:
aws deploy install --config-file codedeploy.onpremises.yml awscli-1.10.1/awscli/examples/deploy/remove-tags-from-on-premises-instances.rst 0000666 4542626 0000144 00000001201 12652514124 030666 0 ustar pysdk-ci amazon 0000000 0000000 **To remove tags from one or more on-premises instances**
This example disassociates the same on-premises tag in AWS CodeDeploy from the two specified on-premises instances. It does not deregister the on-premises instances in AWS CodeDeploy, nor uninstall the AWS CodeDeploy Agent from the instance, nor remove the on-premises configuration file from the instances, nor delete the IAM users that are associated with the instances.
Command::
aws deploy remove-tags-from-on-premises-instances --instance-names AssetTag12010298EX AssetTag23121309EX --tags Key=Name,Value=CodeDeployDemo-OnPrem
Output::
This command produces no output. awscli-1.10.1/awscli/examples/deploy/list-applications.rst 0000666 4542626 0000144 00000000445 12652514124 024720 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about applications**
This example displays information about all applications that are associated with the user's AWS account.
Command::
aws deploy list-applications
Output::
{
"applications": [
"WordPress_App",
"MyOther_App"
]
} awscli-1.10.1/awscli/examples/deploy/delete-deployment-config.rst 0000666 4542626 0000144 00000000403 12652514124 026136 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a deployment configuration**
This example deletes a custom deployment configuration that is associated with the user's AWS account.
Command::
aws deploy delete-deployment-config --deployment-config-name ThreeQuartersHealthy
Output::
None. awscli-1.10.1/awscli/examples/deploy/stop-deployment.rst 0000666 4542626 0000144 00000000550 12652514124 024421 0 ustar pysdk-ci amazon 0000000 0000000 **To attempt to stop a deployment**
This example attempts to stop an in-progress deployment that is associated with the user's AWS account.
Command::
aws deploy stop-deployment --deployment-id d-8365D4OEX
Output::
{
"status": "Succeeded",
"statusMessage": "No more commands will be scheduled for execution in the deployment instances"
} awscli-1.10.1/awscli/examples/deploy/update-deployment-group.rst 0000666 4542626 0000144 00000001114 12652514124 026045 0 ustar pysdk-ci amazon 0000000 0000000 **To change information about a deployment group**
This example changes the settings of a deployment group that is associated with the specified application.
Command::
aws deploy update-deployment-group --application-name WordPress_App --auto-scaling-groups My_CodeDeployDemo_ASG --current-deployment-group-name WordPress_DG --deployment-config-name CodeDeployDefault.AllAtOnce --ec2-tag-filters Key=Name,Type=KEY_AND_VALUE,Value=My_CodeDeployDemo --new-deployment-group-name My_WordPress_DepGroup --service-role-arn arn:aws:iam::80398EXAMPLE:role/CodeDeployDemo-2
Output::
None. awscli-1.10.1/awscli/examples/deploy/get-deployment-instance.rst 0000666 4542626 0000144 00000003731 12652514124 026021 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about a deployment instance**
This example displays information about a deployment instance that is associated with the specified deployment.
Command::
aws deploy get-deployment-instance --deployment-id d-QA4G4F9EX --instance-id i-902e9fEX
Output::
{
"instanceSummary": {
"instanceId": "arn:aws:ec2:us-east-1:80398EXAMPLE:instance/i-902e9fEX",
"lifecycleEvents": [
{
"status": "Succeeded",
"endTime": 1408480726.569,
"startTime": 1408480726.437,
"lifecycleEventName": "ApplicationStop"
},
{
"status": "Succeeded",
"endTime": 1408480728.016,
"startTime": 1408480727.665,
"lifecycleEventName": "DownloadBundle"
},
{
"status": "Succeeded",
"endTime": 1408480729.744,
"startTime": 1408480729.125,
"lifecycleEventName": "BeforeInstall"
},
{
"status": "Succeeded",
"endTime": 1408480730.979,
"startTime": 1408480730.844,
"lifecycleEventName": "Install"
},
{
"status": "Failed",
"endTime": 1408480732.603,
"startTime": 1408480732.1,
"lifecycleEventName": "AfterInstall"
},
{
"status": "Skipped",
"endTime": 1408480732.606,
"lifecycleEventName": "ApplicationStart"
},
{
"status": "Skipped",
"endTime": 1408480732.606,
"lifecycleEventName": "ValidateService"
}
],
"deploymentId": "d-QA4G4F9EX",
"lastUpdatedAt": 1408480733.152,
"status": "Failed"
}
} awscli-1.10.1/awscli/examples/deploy/deregister-on-premises-instance.rst 0000666 4542626 0000144 00000001037 12652514124 027455 0 ustar pysdk-ci amazon 0000000 0000000 **To deregister an on-premises instance**
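When a deployment fails on an instance, the ``lifecycleEvents`` list shows which lifecycle hook failed and which later hooks were skipped. The sketch below (a hypothetical triage helper, not part of the CLI) finds the first failed event in a list shaped like the sample output:

```python
def first_failure(events):
    """Name of the first lifecycle event with status Failed, or None."""
    for event in events:
        if event["status"] == "Failed":
            return event["lifecycleEventName"]
    return None

# Abbreviated from the sample output above: AfterInstall failed, so the
# remaining hooks were skipped.
events = [
    {"lifecycleEventName": "Install", "status": "Succeeded"},
    {"lifecycleEventName": "AfterInstall", "status": "Failed"},
    {"lifecycleEventName": "ApplicationStart", "status": "Skipped"},
]
print(first_failure(events))  # AfterInstall
```

In the sample above the failure is in ``AfterInstall``, which points you at that hook's script in the revision's AppSpec file.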
This example deregisters an on-premises instance with AWS CodeDeploy, but it does not delete the IAM user associated with the instance, nor does it disassociate in AWS CodeDeploy the on-premises instance tags from the instance. It also does not uninstall the AWS CodeDeploy Agent from the instance nor remove the on-premises configuration file from the instance.
Command::
aws deploy deregister-on-premises-instance --instance-name AssetTag12010298EX
Output::
This command produces no output. awscli-1.10.1/awscli/examples/deploy/install.rst 0000666 4542626 0000144 00000001376 12652514124 022733 0 ustar pysdk-ci amazon 0000000 0000000 **To install an on-premises instance**
This example copies the on-premises configuration file from the specified location on the instance to the location on the instance that the AWS CodeDeploy Agent expects to find it. It also installs the AWS CodeDeploy Agent on the instance. It does not create any IAM user, nor register the on-premises instance with AWS CodeDeploy, nor associate any on-premises instance tags in AWS CodeDeploy for the instance.
Command::
aws deploy install --override-config --config-file C:\temp\codedeploy.onpremises.yml --region us-west-2 --agent-installer s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.msi
Output::
Creating the on-premises instance configuration file... DONE
Installing the AWS CodeDeploy Agent... DONE awscli-1.10.1/awscli/examples/deploy/get-deployment-group.rst 0000666 4542626 0000144 00000001617 12652514124 025352 0 ustar pysdk-ci amazon 0000000 0000000 **To view information about a deployment group**
This example displays information about a deployment group that is associated with the specified application.
Command::
aws deploy get-deployment-group --application-name WordPress_App --deployment-group-name WordPress_DG
Output::
{
"deploymentGroupInfo": {
"applicationName": "WordPress_App",
"autoScalingGroups": [
"CodeDeployDemo-ASG"
],
"deploymentConfigName": "CodeDeployDefault.OneAtATime",
"ec2TagFilters": [
{
"Type": "KEY_AND_VALUE",
"Value": "CodeDeployDemo",
"Key": "Name"
}
],
"deploymentGroupId": "cdac3220-0e64-4d63-bb50-e68faEXAMPLE",
"serviceRoleArn": "arn:aws:iam::80398EXAMPLE:role/CodeDeployDemoRole",
"deploymentGroupName": "WordPress_DG"
}
} awscli-1.10.1/awscli/examples/deploy/register-application-revision.rst 0000666 4542626 0000144 00000000724 12652514124 027242 0 ustar pysdk-ci amazon 0000000 0000000 **To register information about an already-uploaded application revision**
This example registers information about an already-uploaded application revision in Amazon S3 with AWS CodeDeploy.
Command::
aws deploy register-application-revision --application-name WordPress_App --description "Revised WordPress application" --s3-location bucket=CodeDeployDemoBucket,key=RevisedWordPressApp.zip,bundleType=zip,eTag=cecc9b8a08eac650a6e71fdb88EXAMPLE
Output::
None. awscli-1.10.1/awscli/examples/deploy/batch-get-applications.rst 0000666 4542626 0000144 00000001351 12652514124 025600 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about multiple applications**
This example displays information about multiple applications that are associated with the user's AWS account.
Command::
aws deploy batch-get-applications --application-names WordPress_App MyOther_App
Output::
{
"applicationsInfo": [
{
"applicationName": "WordPress_App",
"applicationId": "d9dd6993-f171-44fa-a811-211e4EXAMPLE",
"createTime": 1407878168.078,
"linkedToGitHub": false
},
{
"applicationName": "MyOther_App",
"applicationId": "8ca57519-31da-42b2-9194-8bb16EXAMPLE",
"createTime": 1407453571.63,
"linkedToGitHub": false
}
]
}

awscli-1.10.1/awscli/examples/deploy/create-deployment-group.rst

**To create a deployment group**
This example creates a deployment group and associates it with the specified application and the user's AWS account.
Command::
aws deploy create-deployment-group --application-name WordPress_App --auto-scaling-groups CodeDeployDemo-ASG --deployment-config-name CodeDeployDefault.OneAtATime --deployment-group-name WordPress_DG --ec2-tag-filters Key=Name,Value=CodeDeployDemo,Type=KEY_AND_VALUE --service-role-arn arn:aws:iam::80398EXAMPLE:role/CodeDeployDemoRole
Output::
{
"deploymentGroupId": "cdac3220-0e64-4d63-bb50-e68faEXAMPLE"
}

awscli-1.10.1/awscli/examples/deploy/update-application.rst

**To change information about an application**
This example changes the name of an application that is associated with the user's AWS account.
Command::
aws deploy update-application --application-name WordPress_App --new-application-name My_WordPress_App
Output::
None.

awscli-1.10.1/awscli/examples/deploy/add-tags-to-on-premises-instances.rst

**To add tags to on-premises instances**
This example associates in AWS CodeDeploy the same on-premises instance tag to two on-premises instances. It does not register the on-premises instances with AWS CodeDeploy.
Command::
aws deploy add-tags-to-on-premises-instances --instance-names AssetTag12010298EX AssetTag23121309EX --tags Key=Name,Value=CodeDeployDemo-OnPrem
Output::
This command produces no output.

awscli-1.10.1/awscli/examples/deploy/batch-get-deployments.rst

**To get information about multiple deployments**
This example displays information about multiple deployments that are associated with the user's AWS account.
Command::
aws deploy batch-get-deployments --deployment-ids d-USUAELQEX d-QA4G4F9EX
Output::
{
"deploymentsInfo": [
{
"applicationName": "WordPress_App",
"status": "Failed",
"deploymentOverview": {
"Failed": 0,
"InProgress": 0,
"Skipped": 0,
"Succeeded": 1,
"Pending": 0
},
"deploymentConfigName": "CodeDeployDefault.OneAtATime",
"creator": "user",
"deploymentGroupName": "WordPress_DG",
"revision": {
"revisionType": "S3",
"s3Location": {
"bundleType": "zip",
"version": "uTecLusvCB_JqHFXtfUcyfV8bEXAMPLE",
"bucket": "CodeDeployDemoBucket",
"key": "WordPressApp.zip"
}
},
"deploymentId": "d-QA4G4F9EX",
"createTime": 1408480721.9,
"completeTime": 1408480741.822
},
{
"applicationName": "MyOther_App",
"status": "Failed",
"deploymentOverview": {
"Failed": 1,
"InProgress": 0,
"Skipped": 0,
"Succeeded": 0,
"Pending": 0
},
"deploymentConfigName": "CodeDeployDefault.OneAtATime",
"creator": "user",
"errorInformation": {
"message": "Deployment failed: Constraint default violated: No hosts succeeded.",
"code": "HEALTH_CONSTRAINTS"
},
"deploymentGroupName": "MyOther_DG",
"revision": {
"revisionType": "S3",
"s3Location": {
"bundleType": "zip",
"eTag": "\"dd56cfd59d434b8e768f9d77fEXAMPLE\"",
"bucket": "CodeDeployDemoBucket",
"key": "MyOtherApp.zip"
}
},
"deploymentId": "d-USUAELQEX",
"createTime": 1409764576.589,
"completeTime": 1409764596.101
}
]
}
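When scripting against ``batch-get-deployments``, the ``status`` field of each entry is usually what you branch on. The following sketch is illustrative only (it assumes a POSIX shell with ``python3`` on the PATH, and works on a trimmed local copy of the output above instead of calling AWS); it collects the IDs of the failed deployments:

```shell
# Trimmed copy of the sample batch-get-deployments output shown above
# (the file name "deployments.json" is made up for this illustration).
cat > deployments.json <<'EOF'
{"deploymentsInfo": [
  {"deploymentId": "d-QA4G4F9EX", "status": "Failed"},
  {"deploymentId": "d-USUAELQEX", "status": "Failed"}
]}
EOF

# Collect the IDs of deployments whose status is Failed.
failed_ids=$(python3 -c '
import json
data = json.load(open("deployments.json"))
print(" ".join(d["deploymentId"] for d in data["deploymentsInfo"]
               if d["status"] == "Failed"))
')
echo "$failed_ids"
```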
awscli-1.10.1/awscli/examples/deploy/get-application-revision.rst

**To get information about an application revision**
This example displays information about an application revision that is associated with the specified application.
Command::
aws deploy get-application-revision --application-name WordPress_App --s3-location bucket=CodeDeployDemoBucket,bundleType=zip,eTag=dd56cfd59d434b8e768f9d77fEXAMPLE,key=WordPressApp.zip
Output::
{
"applicationName": "WordPress_App",
"revisionInfo": {
"description": "Application revision registered by Deployment ID: d-N65I7GEX",
"registerTime": 1411076520.009,
"deploymentGroups": "WordPress_DG",
"lastUsedTime": 1411076520.009,
"firstUsedTime": 1411076520.009
},
"revision": {
"revisionType": "S3",
"s3Location": {
"bundleType": "zip",
"eTag": "dd56cfd59d434b8e768f9d77fEXAMPLE",
"bucket": "CodeDeployDemoBucket",
"key": "WordPressApp.zip"
}
}
}

awscli-1.10.1/awscli/examples/deploy/create-application.rst

**To create an application**
This example creates an application and associates it with the user's AWS account.
Command::
aws deploy create-application --application-name MyOther_App
Output::
{
"applicationId": "cfd3e1f1-5744-4aee-9251-eaa25EXAMPLE"
}

awscli-1.10.1/awscli/examples/deploy/create-deployment.rst

**To create a deployment**
This example creates a deployment and associates it with the user's AWS account.
Command::
aws deploy create-deployment --application-name WordPress_App --deployment-config-name CodeDeployDefault.OneAtATime --deployment-group-name WordPress_DG --description "My demo deployment" --s3-location bucket=CodeDeployDemoBucket,bundleType=zip,eTag=dd56cfd59d434b8e768f9d77fEXAMPLE,key=WordPressApp.zip
Output::
{
"deploymentId": "d-N65YI7Gex"
}

awscli-1.10.1/awscli/examples/ssm/create-document.rst

**To create a configuration document**
This example creates a configuration document called ``My_Config_Document`` in your account. The document must be in JSON format. For more information about writing a configuration document, see `Configuration Document`_ in the *SSM API Reference*.
.. _`Configuration Document`: http://docs.aws.amazon.com/ssm/latest/APIReference/aws-ssm-document.html
Command::
aws ssm create-document --content file://myconfigfile.json --name "My_Config_Document"
Output::
{
"DocumentDescription": {
"Status": "Creating",
"Sha1": "715919de1715exampled803025817856844a5f3",
"Name": "My_Config_Document",
"CreatedDate": 1424351175.521
}
}
awscli-1.10.1/awscli/examples/ssm/list-associations.rst

**To list your associations for a specific instance**
This example lists all the associations for instance ``i-1a2b3c4d``.
Command::
aws ssm list-associations --association-filter-list key=InstanceId,value=i-1a2b3c4d
Output::
{
"Associations": [
{
"InstanceId": "i-1a2b3c4d",
"Name": "My_Config_File"
}
]
}
**To list your associations for a specific configuration document**
This example lists all associations for the configuration document ``My_Config_File``.
Command::
aws ssm list-associations --association-filter-list key=Name,value=My_Config_File
Output::
{
"Associations": [
{
"InstanceId": "i-1a2b3c4d",
"Name": "My_Config_File"
},
{
"InstanceId": "i-rraa3344",
"Name": "My_Config_File"
}
]
}
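Scripts often need just the instance IDs that a document is associated with. The following sketch is illustrative only (it assumes a POSIX shell and ``python3``, and operates on a saved copy of the output above rather than a live call); it pulls the ``InstanceId`` values for ``My_Config_File``:

```shell
# Saved copy of the sample list-associations output shown above
# (the file name "associations.json" is made up for this illustration).
cat > associations.json <<'EOF'
{"Associations": [
  {"InstanceId": "i-1a2b3c4d", "Name": "My_Config_File"},
  {"InstanceId": "i-rraa3344", "Name": "My_Config_File"}
]}
EOF

# Extract the instance IDs bound to the My_Config_File document.
instance_ids=$(python3 -c '
import json
data = json.load(open("associations.json"))
print(" ".join(a["InstanceId"] for a in data["Associations"]
               if a["Name"] == "My_Config_File"))
')
echo "$instance_ids"
```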
awscli-1.10.1/awscli/examples/ssm/get-document.rst

**To get the contents of a configuration document**
This example gets the contents of the document called ``My_Config_Document``.
Command::
aws ssm get-document --name "My_Config_Document"
Output::
{
"Content": "{\n
\"schemaVersion\": \"1.0\",\n
\"description\": \"Sample configuration to join an instance to a domain\",\n
\"runtimeConfig\": {\n
\"aws:domainJoin\": {\n
\"properties\": [\n
{\n
\"directoryId\": \"d-1234567890\",\n
\"directoryName\": \"test.example.com\",\n
\"dnsIpAddresses\": [\"198.51.100.1\",\"198.51.100.2\"]\n
}\n
]\n
}\n
}\n
}",
"Name": "My_Config_Document"
}

awscli-1.10.1/awscli/examples/ssm/create-association.rst

**To associate a configuration document**
This example associates configuration document ``My_Config_File`` with instance ``i-1a2b3c4d``.
Command::
aws ssm create-association --instance-id i-1a2b3c4d --name "My_Config_File"
Output::
{
"AssociationDescription": {
"InstanceId": "i-1a2b3c4d",
"Date": 1424354424.842,
"Name": "My_Config_File",
"Status": {
"Date": 1424354424.842,
"Message": "Associated with My_Config_File",
"Name": "Associated"
}
}
}
awscli-1.10.1/awscli/examples/ssm/update-association-status.rst

**To update the association status**
This example updates the association status of the association between instance ``i-1a2b3c4d`` and configuration document ``My_Config_1``.
Command::
aws ssm update-association-status --name My_Config_1 --instance-id i-1a2b3c4d --association-status Date=1424421071.939,Name=Pending,Message=temp_status_change,AdditionalInfo=Additional-Config-Needed
Output::
{
"AssociationDescription": {
"InstanceId": "i-1a2b3c4d",
"Date": 1424421071.939,
"Name": "My_Config_1",
"Status": {
"Date": 1424421071.0,
"AdditionalInfo": "Additional-Config-Needed",
"Message": "temp_status_change",
"Name": "Pending"
}
}
}

awscli-1.10.1/awscli/examples/ssm/describe-document.rst

**To describe a configuration document**
This example returns information about a document called ``My_Config_Doc``.
Command::
aws ssm describe-document --name "My_Config_Doc"
Output::
{
"Document": {
"Status": "Active",
"Sha1": "715919de171exampleb3d803025817856844a5f3",
"Name": "My_Config_Doc",
"CreatedDate": 1424351175.521
}
}
awscli-1.10.1/awscli/examples/ssm/list-documents.rst

**To list all the configuration documents in your account**
This example lists all the configuration documents in your account.
Command::
aws ssm list-documents
Output::
{
"DocumentIdentifiers": [
{
"Name": "Config_2"
},
{
"Name": "My_Config_Document"
}
]
}
awscli-1.10.1/awscli/examples/ssm/delete-document.rst

**To delete a configuration document**
This example deletes the configuration document called ``Config_2``. If the command succeeds, no output is returned.
Command::
aws ssm delete-document --name "Config_2"
awscli-1.10.1/awscli/examples/ssm/describe-association.rst

**To describe an association**
This example describes the association between instance ``i-1a2b3c4d`` and ``My_Config_File``.
Command::
aws ssm describe-association --instance-id i-1a2b3c4d --name "My_Config_File"
Output::
{
"AssociationDescription": {
"InstanceId": "i-1a2b3c4d",
"Date": 1424419009.036,
"Name": "My_Config_File",
"Status": {
"Date": 1424419196.804,
"AdditionalInfo": "{agent=EC2Config,ver=3.0.54,osver=6.3.9600,os=Windows Server 2012 R2 Standard,lang=en-US}",
"Message": "RunId=0198dadc-aaaa-4150-875f-exampleba3d, status:InProgress, code:0, message:RuntimeStatusCounts=[PassedAndReboot=1], RuntimeStatus=[aws:domainJoin={PassedAndReboot,Domain join Succeeded to domain: test.ssm.com}]",
"Name": "Pending"
}
}
} awscli-1.10.1/awscli/examples/ssm/delete-association.rst 0000666 4542626 0000144 00000000431 12652514124 024336 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an association**
This example deletes the association between instance ``i-bbcc3344`` and the configuration document ``Test_config``. If the command succeeds, no output is returned.
Command::
aws ssm delete-association --instance-id i-bbcc3344 --name Test_config
awscli-1.10.1/awscli/examples/ssm/create-association-batch.rst

**To create multiple associations**
This example associates the configuration document ``My_Config_1`` with instance ``i-aabb2233``, and associates the configuration document ``My_Config_2`` with instance ``i-cdcd2233``. The output returns a list of successful and unsuccessful operations, if applicable.
Command::
aws ssm create-association-batch --entries Name=My_Config_1,InstanceId=i-aabb2233 Name=My_Config_2,InstanceId=i-cdcd2233
Output::
{
"Successful": [
{
"InstanceId": "i-aabb2233",
"Date": 1424421071.939,
"Name": "My_Config_1",
"Status": {
"Date": 1424421071.939,
"Message": "Associated with My_Config_1",
"Name": "Associated"
}
}
],
"Failed": [
{
"Entry": {
"InstanceId": "i-cdcd2233",
"Name": "My_Config_2"
},
"Message": "Association Already Exists",
"Fault": "Client"
}
]
}

awscli-1.10.1/awscli/examples/ecs/create-service.rst

**To create a new service**
This example command creates a service in your default region called ``ecs-simple-service``. The service uses the ``ecs-demo`` task definition and it maintains 10 instantiations of that task.
Command::
aws ecs create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10
Output::
{
"service": {
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:::task-definition/ecs-demo:1",
"pendingCount": 0,
"loadBalancers": [],
"desiredCount": 10,
"serviceName": "ecs-simple-service",
"clusterArn": "arn:aws:ecs:::cluster/default",
"serviceArn": "arn:aws:ecs:::service/ecs-simple-service",
"deployments": [
{
"status": "PRIMARY",
"pendingCount": 0,
"createdAt": 1428096748.604,
"desiredCount": 10,
"taskDefinition": "arn:aws:ecs:::task-definition/ecs-demo:1",
"updatedAt": 1428096748.604,
"id": "ecs-svc/",
"runningCount": 0
}
],
"events": [],
"runningCount": 0
}
}
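A newly created service starts with a ``runningCount`` of 0 and converges toward ``desiredCount``. The following sketch is illustrative only (it assumes a POSIX shell and ``python3``, and reads a saved copy of output like the above instead of calling AWS); it compares the two counts to decide whether the service has reached its desired size:

```shell
# Trimmed copy of the sample create-service output shown above
# (the file name "service.json" is made up for this illustration).
cat > service.json <<'EOF'
{"service": {"desiredCount": 10, "runningCount": 0, "pendingCount": 0}}
EOF

# "yes" once runningCount has caught up with desiredCount, "no" otherwise.
steady=$(python3 -c '
import json
svc = json.load(open("service.json"))["service"]
print("yes" if svc["runningCount"] == svc["desiredCount"] else "no")
')
echo "$steady"
```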
**To create a new service behind a load balancer**
This example command creates a service in your default region called ``ecs-simple-service-elb``. The service uses the ``ecs-demo`` task definition and it maintains 10 instantiations of that task. You must have a load balancer configured in the same region as your container instances.
This example uses the ``--cli-input-json`` option and a JSON input file called ``ecs-simple-service-elb.json`` in the following format.
Input file::
{
"serviceName": "ecs-simple-service-elb",
"taskDefinition": "ecs-demo",
"loadBalancers": [
{
"loadBalancerName": "EC2Contai-EcsElast-S06278JGSJCM",
"containerName": "simple-demo",
"containerPort": 80
}
],
"desiredCount": 10,
"role": "ecsServiceRole"
}
Command::
aws ecs create-service --service-name ecs-simple-service-elb --cli-input-json file://ecs-simple-service-elb.json
Output::
{
"service": {
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:::task-definition/ecs-demo:1",
"pendingCount": 0,
"loadBalancers": [
{
"containerName": "ecs-demo",
"containerPort": 80,
"loadBalancerName": "EC2Contai-EcsElast-S06278JGSJCM"
}
],
"roleArn": "arn:aws:iam:::role/ecsServiceRole",
"desiredCount": 10,
"serviceName": "ecs-simple-service-elb",
"clusterArn": "arn:aws:ecs:::cluster/default",
"serviceArn": "arn:aws:ecs:::service/ecs-simple-service-elb",
"deployments": [
{
"status": "PRIMARY",
"pendingCount": 0,
"createdAt": 1428100239.123,
"desiredCount": 10,
"taskDefinition": "arn:aws:ecs:::task-definition/ecs-demo:1",
"updatedAt": 1428100239.123,
"id": "ecs-svc/",
"runningCount": 0
}
],
"events": [],
"runningCount": 0
}
}

awscli-1.10.1/awscli/examples/ecs/describe-task-definition.rst

**To describe a task definition**
This example command provides a description of the specified task definition.
Command::
aws ecs describe-task-definition --task-definition hello_world:8
Output::
{
"taskDefinition": {
"volumes": [],
"taskDefinitionArn": "arn:aws:ecs:us-east-1::task-definition/hello_world:8",
"containerDefinitions": [
{
"environment": [],
"name": "wordpress",
"links": [
"mysql"
],
"mountPoints": [],
"image": "wordpress",
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 500,
"cpu": 10,
"volumesFrom": []
},
{
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "password"
}
],
"name": "mysql",
"mountPoints": [],
"image": "mysql",
"cpu": 10,
"portMappings": [],
"memory": 500,
"essential": true,
"volumesFrom": []
}
],
"family": "hello_world",
"revision": 8
}
}
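Each container definition carries its own ``cpu`` and ``memory`` reservation, so the task's total footprint is the sum across containers. The sketch below is illustrative only (it assumes a POSIX shell and ``python3``, and totals a trimmed local copy of the output above rather than live data):

```shell
# Trimmed copy of the sample describe-task-definition output shown above
# (the file name "taskdef.json" is made up for this illustration).
cat > taskdef.json <<'EOF'
{"taskDefinition": {"containerDefinitions": [
  {"name": "wordpress", "cpu": 10, "memory": 500},
  {"name": "mysql", "cpu": 10, "memory": 500}
]}}
EOF

# Sum the cpu and memory reservations across all containers in the task.
totals=$(python3 -c '
import json
defs = json.load(open("taskdef.json"))["taskDefinition"]["containerDefinitions"]
print(sum(c["cpu"] for c in defs), sum(c["memory"] for c in defs))
')
echo "$totals"
```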
awscli-1.10.1/awscli/examples/ecs/update-service.rst

**To change the task definition used in a service**
This example command updates the ``my-http-service`` service to use the ``amazon-ecs-sample`` task definition.
Command::
aws ecs update-service --service my-http-service --task-definition amazon-ecs-sample
**To change the number of tasks in a service**
This example command updates the desired count of the ``my-http-service`` service to 10.
Command::
aws ecs update-service --service my-http-service --desired-count 10

awscli-1.10.1/awscli/examples/ecs/create-cluster.rst

**To create a new cluster**
This example command creates a cluster in your default region.
Command::
aws ecs create-cluster --cluster-name "my_cluster"
Output::
{
"cluster": {
"status": "ACTIVE",
"clusterName": "my_cluster",
"registeredContainerInstancesCount": 0,
"pendingTasksCount": 0,
"runningTasksCount": 0,
"activeServicesCount": 0,
"clusterArn": "arn:aws:ecs:::cluster/my_cluster"
}
}
awscli-1.10.1/awscli/examples/ecs/update-container-agent.rst

**To update the container agent on an Amazon ECS container instance**
This example command updates the container agent on the container instance ``a3e98c65-2a40-4452-a63c-62beb4d9be9b`` in the default cluster.
Command::
aws ecs update-container-agent --cluster default --container-instance a3e98c65-2a40-4452-a63c-62beb4d9be9b
Output::
{
"containerInstance": {
"status": "ACTIVE",
...
"agentUpdateStatus": "PENDING",
"versionInfo": {
"agentVersion": "1.0.0",
"agentHash": "4023248",
"dockerVersion": "DockerVersion: 1.5.0"
}
}
}

awscli-1.10.1/awscli/examples/ecs/list-container-instances.rst

**To list your available container instances in a cluster**
This example command lists all of your available container instances in the specified cluster in your default region.
Command::
aws ecs list-container-instances --cluster default
Output::
{
"containerInstanceArns": [
"arn:aws:ecs:us-east-1::container-instance/f6bbb147-5370-4ace-8c73-c7181ded911f",
"arn:aws:ecs:us-east-1::container-instance/ffe3d344-77e2-476c-a4d0-bf560ad50acb"
]
}
awscli-1.10.1/awscli/examples/ecs/list-task-definition-families.rst

**To list your registered task definition families**
This example command lists all of your registered task definition families.
Command::
aws ecs list-task-definition-families
Output::
{
"families": [
"node-js-app",
"web-timer",
"hpcc",
"hpcc-c4-8xlarge"
]
}
**To filter your registered task definition families**
This example command lists the task definition families that start with ``hpcc``.
Command::
aws ecs list-task-definition-families --family-prefix hpcc
Output::
{
"families": [
"hpcc",
"hpcc-c4-8xlarge"
]
}
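The ``--family-prefix`` option matches families whose names begin with the given string. The following plain-shell sketch (illustrative only; it uses the family names from the first example locally rather than calling AWS) reproduces that prefix match:

```shell
# Family names from the unfiltered example above.
families="node-js-app web-timer hpcc hpcc-c4-8xlarge"

# Keep only the families that start with "hpcc", mimicking --family-prefix hpcc.
matches=$(for f in $families; do
    case "$f" in
        hpcc*) echo "$f" ;;
    esac
done)
echo "$matches"
```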
awscli-1.10.1/awscli/examples/ecs/describe-tasks.rst

**To describe a task**
This example command provides a description of the specified task, using the task UUID as an identifier.
Command::
aws ecs describe-tasks --tasks c5cba4eb-5dad-405e-96db-71ef8eefe6a8
Output::
{
"failures": [],
"tasks": [
{
"taskArn": "arn:aws:ecs:::task/c5cba4eb-5dad-405e-96db-71ef8eefe6a8",
"overrides": {
"containerOverrides": [
{
"name": "ecs-demo"
}
]
},
"lastStatus": "RUNNING",
"containerInstanceArn": "arn:aws:ecs:::container-instance/18f9eda5-27d7-4c19-b133-45adc516e8fb",
"clusterArn": "arn:aws:ecs:::cluster/default",
"desiredStatus": "RUNNING",
"taskDefinitionArn": "arn:aws:ecs:::task-definition/amazon-ecs-sample:1",
"startedBy": "ecs-svc/9223370608528463088",
"containers": [
{
"containerArn": "arn:aws:ecs:::container/7c01765b-c588-45b3-8290-4ba38bd6c5a6",
"taskArn": "arn:aws:ecs:::task/c5cba4eb-5dad-405e-96db-71ef8eefe6a8",
"lastStatus": "RUNNING",
"name": "ecs-demo",
"networkBindings": [
{
"bindIP": "0.0.0.0",
"containerPort": 80,
"hostPort": 80
}
]
}
]
}
]
}
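The ``networkBindings`` array maps container ports to host ports. This sketch is illustrative only (it assumes a POSIX shell and ``python3``, and extracts the mappings from a trimmed local copy of the output above, not a live call):

```shell
# Trimmed copy of the sample describe-tasks output shown above
# (the file name "task.json" is made up for this illustration).
cat > task.json <<'EOF'
{"tasks": [{"containers": [{"name": "ecs-demo",
  "networkBindings": [{"bindIP": "0.0.0.0", "containerPort": 80, "hostPort": 80}]}]}]}
EOF

# Print one "container:hostPort->containerPort" line per binding.
bindings=$(python3 -c '
import json
tasks = json.load(open("task.json"))["tasks"]
for t in tasks:
    for c in t["containers"]:
        for b in c.get("networkBindings", []):
            print("%s:%d->%d" % (c["name"], b["hostPort"], b["containerPort"]))
')
echo "$bindings"
```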
awscli-1.10.1/awscli/examples/ecs/delete-cluster.rst

**To delete an empty cluster**
This example command deletes an empty cluster in your default region.
Command::
aws ecs delete-cluster --cluster my_cluster
Output::
{
"cluster": {
"status": "INACTIVE",
"clusterName": "my_cluster",
"registeredContainerInstancesCount": 0,
"pendingTasksCount": 0,
"runningTasksCount": 0,
"activeServicesCount": 0,
"clusterArn": "arn:aws:ecs:::cluster/my_cluster"
}
}
awscli-1.10.1/awscli/examples/ecs/describe-services.rst

**To describe a service**
This example command provides descriptive information about the ``my-http-service`` service.
Command::
aws ecs describe-services --service my-http-service
Output::
{
"services": [
{
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:::task-definition/amazon-ecs-sample:1",
"pendingCount": 0,
"loadBalancers": [],
"desiredCount": 10,
"serviceName": "my-http-service",
"clusterArn": "arn:aws:ecs:::cluster/default",
"serviceArn": "arn:aws:ecs:::service/my-http-service",
"deployments": [
{
"status": "PRIMARY",
"pendingCount": 0,
"createdAt": 1428326312.703,
"desiredCount": 10,
"taskDefinition": "arn:aws:ecs:::task-definition/amazon-ecs-sample:1",
"updatedAt": 1428326312.703,
"id": "ecs-svc/9223370608528463088",
"runningCount": 10
}
],
"events": [
{
"message": "(service my-http-service) has reached a steady state.",
"id": "97c8a8e0-16a5-4d30-80bd-9e5413f8951b",
"createdAt": 1428326587.208
}
],
"runningCount": 10
}
],
"failures": []
}
awscli-1.10.1/awscli/examples/ecs/delete-service.rst

**To delete a service**
This example command deletes the ``my-http-service`` service. The service must have a desired count and running count of 0 before you can delete it.
Command::
aws ecs delete-service --service my-http-service
awscli-1.10.1/awscli/examples/ecs/run-task.rst

**To run a task on your default cluster**
This example command runs the specified task definition on your default cluster.
Command::
aws ecs run-task --cluster default --task-definition sleep360:1
Output::
{
"tasks": [
{
"taskArn": "arn:aws:ecs:us-east-1::task/a9f21ea7-c9f5-44b1-b8e6-b31f50ed33c0",
"overrides": {
"containerOverrides": [
{
"name": "sleep"
}
]
},
"lastStatus": "PENDING",
"containerInstanceArn": "arn:aws:ecs:us-east-1::container-instance/ffe3d344-77e2-476c-a4d0-bf560ad50acb",
"desiredStatus": "RUNNING",
"taskDefinitionArn": "arn:aws:ecs:us-east-1::task-definition/sleep360:1",
"containers": [
{
"containerArn": "arn:aws:ecs:us-east-1::container/58591c8e-be29-4ddf-95aa-ee459d4c59fd",
"taskArn": "arn:aws:ecs:us-east-1::task/a9f21ea7-c9f5-44b1-b8e6-b31f50ed33c0",
"lastStatus": "PENDING",
"name": "sleep"
}
]
}
]
}
awscli-1.10.1/awscli/examples/ecs/describe-container-instances.rst

**To describe a container instance**
This example command provides a description of the specified container instance in the ``update`` cluster, using the container instance UUID as an identifier.
Command::
aws ecs describe-container-instances --cluster update --container-instances 53ac7152-dcd1-4102-81f5-208962864132
Output::
{
"failures": [],
"containerInstances": [
{
"status": "ACTIVE",
"registeredResources": [
{
"integerValue": 2048,
"longValue": 0,
"type": "INTEGER",
"name": "CPU",
"doubleValue": 0.0
},
{
"integerValue": 3955,
"longValue": 0,
"type": "INTEGER",
"name": "MEMORY",
"doubleValue": 0.0
},
{
"name": "PORTS",
"longValue": 0,
"doubleValue": 0.0,
"stringSetValue": [
"22",
"2376",
"2375",
"51678"
],
"type": "STRINGSET",
"integerValue": 0
}
],
"ec2InstanceId": "i-f3c1de3a",
"agentConnected": true,
"containerInstanceArn": "arn:aws:ecs:us-west-2::container-instance/53ac7152-dcd1-4102-81f5-208962864132",
"pendingTasksCount": 0,
"remainingResources": [
{
"integerValue": 2048,
"longValue": 0,
"type": "INTEGER",
"name": "CPU",
"doubleValue": 0.0
},
{
"integerValue": 3955,
"longValue": 0,
"type": "INTEGER",
"name": "MEMORY",
"doubleValue": 0.0
},
{
"name": "PORTS",
"longValue": 0,
"doubleValue": 0.0,
"stringSetValue": [
"22",
"2376",
"2375",
"51678"
],
"type": "STRINGSET",
"integerValue": 0
}
],
"runningTasksCount": 0,
"versionInfo": {
"agentVersion": "1.0.0",
"agentHash": "4023248",
"dockerVersion": "DockerVersion: 1.5.0"
}
}
]
}

awscli-1.10.1/awscli/examples/ecs/list-services.rst

**To list the services in a cluster**
This example command lists the services running in a cluster.
Command::
aws ecs list-services
Output::
{
"serviceArns": [
"arn:aws:ecs:::service/my-http-service"
]
}
awscli-1.10.1/awscli/examples/ecs/deregister-task-definition.rst

**To deregister a task definition**
This example deregisters the first revision of the ``curler`` task definition in your default region. Note that in the resulting output, the task definition status becomes ``INACTIVE``.
Command::
aws ecs deregister-task-definition --task-definition curler:1
Output::
{
"taskDefinition": {
"status": "INACTIVE",
"family": "curler",
"volumes": [],
"taskDefinitionArn": "arn:aws:ecs:us-west-2::task-definition/curler:1",
"containerDefinitions": [
{
"environment": [],
"name": "curler",
"mountPoints": [],
"image": "curl:latest",
"cpu": 100,
"portMappings": [],
"entryPoint": [],
"memory": 256,
"command": [
"curl -v http://example.com/"
],
"essential": true,
"volumesFrom": []
}
],
"revision": 1
}
}

awscli-1.10.1/awscli/examples/ecs/deregister-container-instance.rst

**To deregister a container instance from a cluster**
This example deregisters a container instance from the specified cluster in your default region. If there are still tasks running on the container instance, you must either stop those tasks before deregistering, or use the force option.
Command::
aws ecs deregister-container-instance --cluster default --container-instance --force

awscli-1.10.1/awscli/examples/ecs/register-task-definition.rst

**To register a task definition with a JSON file**
This example registers a task definition to the specified family with container definitions that are saved in JSON format at the specified file location.
Command::
aws ecs register-task-definition --cli-input-json file:///sleep360.json
JSON file format::
{
"containerDefinitions": [
{
"name": "sleep",
"image": "busybox",
"cpu": 10,
"command": [
"sleep",
"360"
],
"memory": 10,
"essential": true
}
],
"family": "sleep360"
}
Output::
{
"taskDefinition": {
"volumes": [],
"taskDefinitionArn": "arn:aws:ecs:us-east-1::task-definition/sleep360:19",
"containerDefinitions": [
{
"environment": [],
"name": "sleep",
"mountPoints": [],
"image": "busybox",
"cpu": 10,
"portMappings": [],
"command": [
"sleep",
"360"
],
"memory": 10,
"essential": true,
"volumesFrom": []
}
],
"family": "sleep360",
"revision": 1
}
}
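Because ``register-task-definition`` rejects malformed JSON, it can save a round trip to validate the file locally before passing it to ``--cli-input-json``. The following sketch is illustrative only (it assumes a POSIX shell and ``python3``, and recreates a compact copy of the sample file shown above):

```shell
# Compact copy of the sleep360 task definition shown above.
cat > sleep360.json <<'EOF'
{
  "containerDefinitions": [
    {"name": "sleep", "image": "busybox", "cpu": 10,
     "command": ["sleep", "360"], "memory": 10, "essential": true}
  ],
  "family": "sleep360"
}
EOF

# json.tool exits nonzero on a parse error, so this flags bad JSON
# before you ever call register-task-definition.
if python3 -m json.tool sleep360.json > /dev/null 2>&1; then
    result=valid
else
    result=invalid
fi
echo "$result"
```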
**To register a task definition with a JSON string**
This example registers the same task definition from the previous example, but the container definitions are provided in a string format with the double quotes escaped.
Command::
aws ecs register-task-definition --family sleep360 --container-definitions "[{\"name\":\"sleep\",\"image\":\"busybox\",\"cpu\":10,\"command\":[\"sleep\",\"360\"],\"memory\":10,\"essential\":true}]"
**To use data volumes in a task definition**
This example task definition creates a data volume called ``webdata`` that exists at ``/ecs/webdata`` on the container instance. The volume is mounted read-only as ``/usr/share/nginx/html`` on the ``web`` container, and read-write as ``/nginx/`` on the ``timer`` container.
Task Definition::
{
"family": "web-timer",
"containerDefinitions": [
{
"name": "web",
"image": "nginx",
"cpu": 99,
"memory": 100,
"portMappings": [{
"containerPort": 80,
"hostPort": 80
}],
"essential": true,
"mountPoints": [{
"sourceVolume": "webdata",
"containerPath": "/usr/share/nginx/html",
"readOnly": true
}]
}, {
"name": "timer",
"image": "busybox",
"cpu": 10,
"memory": 20,
"entryPoint": ["sh", "-c"],
"command": ["while true; do date > /nginx/index.html; sleep 1; done"],
"mountPoints": [{
"sourceVolume": "webdata",
"containerPath": "/nginx/"
}]
}],
"volumes": [{
"name": "webdata",
"host": {
"sourcePath": "/ecs/webdata"
}}
]
}
awscli-1.10.1/awscli/examples/ecs/list-clusters.rst

**To list your available clusters**
This example command lists all of your available clusters in your default region.
Command::
aws ecs list-clusters
Output::
{
"clusterArns": [
"arn:aws:ecs:us-east-1::cluster/test",
"arn:aws:ecs:us-east-1::cluster/default"
]
}
awscli-1.10.1/awscli/examples/ecs/describe-clusters.rst

**To describe a cluster**
This example command provides a description of the specified cluster in your default region.
Command::
aws ecs describe-clusters --cluster default
Output::
{
"clusters": [
{
"status": "ACTIVE",
"clusterName": "default",
"registeredContainerInstancesCount": 0,
"pendingTasksCount": 0,
"runningTasksCount": 0,
"activeServicesCount": 1,
"clusterArn": "arn:aws:ecs:us-west-2::cluster/default"
}
],
"failures": []
}
awscli-1.10.1/awscli/examples/ecs/list-tasks.rst

**To list the tasks in a cluster**
This example command lists all of the tasks in a cluster.
Command::
aws ecs list-tasks --cluster default
Output::
{
"taskArns": [
"arn:aws:ecs:us-east-1::task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84",
"arn:aws:ecs:us-east-1::task/6b809ef6-c67e-4467-921f-ee261c15a0a1"
]
}
**To list the tasks on a particular container instance**
This example command lists the tasks of a specified container instance, using the container instance UUID as a filter.
Command::
aws ecs list-tasks --cluster default --container-instance f6bbb147-5370-4ace-8c73-c7181ded911f
Output::
{
"taskArns": [
"arn:aws:ecs:us-east-1::task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84"
]
}

awscli-1.10.1/awscli/examples/ecs/list-task-definitions.rst

**To list your registered task definitions**
This example command lists all of your registered task definitions.
Command::
aws ecs list-task-definitions
Output::
{
"taskDefinitionArns": [
"arn:aws:ecs:us-east-1::task-definition/sleep300:2",
"arn:aws:ecs:us-east-1::task-definition/sleep360:1",
"arn:aws:ecs:us-east-1::task-definition/wordpress:3",
"arn:aws:ecs:us-east-1::task-definition/wordpress:4",
"arn:aws:ecs:us-east-1::task-definition/wordpress:5",
"arn:aws:ecs:us-east-1::task-definition/wordpress:6"
]
}
**To list the registered task definitions in a family**
This example command lists the task definition revisions of a specified family.
Command::
aws ecs list-task-definitions --family-prefix wordpress
Output::
{
"taskDefinitionArns": [
"arn:aws:ecs:us-east-1::task-definition/wordpress:3",
"arn:aws:ecs:us-east-1::task-definition/wordpress:4",
"arn:aws:ecs:us-east-1::task-definition/wordpress:5",
"arn:aws:ecs:us-east-1::task-definition/wordpress:6"
]
}

awscli-1.10.1/awscli/examples/acm/list-certificates.rst

**To list the ACM certificates for an AWS account**
The following ``list-certificates`` command lists the ARNs of the certificates in your account::
aws acm list-certificates
The preceding command produces the following output::
{
"CertificateSummaryList": [
{
"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012",
"DomainName": "www.example.com"
},
{
"CertificateArn": "arn:aws:acm:us-east-1:493619779192:certificate/87654321-4321-4321-4321-210987654321",
"DomainName": "www.example.net"
}
]
}
You can also filter your output by using the ``--certificate-statuses`` argument. The following command displays only certificates that have a PENDING_VALIDATION status::
aws acm list-certificates --certificate-statuses PENDING_VALIDATION
Finally, you can decide how many certificates you want to display each time you call ``list-certificates``. For example, to display no more than two certificates at a time, set the ``--max-items`` argument to 2 as in the following example::
aws acm list-certificates --max-items 2
Two certificate ARNs and a ``NextToken`` value will be displayed::
{
"CertificateSummaryList": [
{
"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012",
"DomainName": "www.example.com"
},
{
"CertificateArn": "arn:aws:acm:us-east-1:493619779192:certificate/87654321-4321-4321-4321-210987654321",
"DomainName": "www.example.net"
}
],
"NextToken": "9f4d9f69-275a-41fe-b58e-2b837bd9ba48"
}
To display the next two certificates in your account, pass this ``NextToken`` value to the ``--next-token`` argument in your next call::
aws acm list-certificates --max-items 2 --next-token 9f4d9f69-275a-41fe-b58e-2b837bd9ba48
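The two-step pattern above can be wrapped in a small loop that keeps requesting pages until no ``NextToken`` is returned. This is a sketch rather than part of the original examples; it assumes a POSIX shell and the ``jq`` tool for extracting fields from the JSON output:

```shell
token=""
while : ; do
    if [ -z "$token" ]; then
        page=$(aws acm list-certificates --max-items 2)
    else
        page=$(aws acm list-certificates --max-items 2 --next-token "$token")
    fi
    # print this page's certificate ARNs
    printf '%s\n' "$page" | jq -r '.CertificateSummaryList[].CertificateArn'
    # NextToken is absent on the last page, which ends the loop
    token=$(printf '%s\n' "$page" | jq -r '.NextToken // empty')
    [ -z "$token" ] && break
done
```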
awscli-1.10.1/awscli/examples/acm/get-certificate.rst

**To retrieve an ACM certificate**
The following ``get-certificate`` command retrieves the certificate for the specified ARN and the certificate chain::
aws acm get-certificate --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
Output similar to the following is displayed::
{
"Certificate": "-----BEGIN CERTIFICATE-----
MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE=
-----END CERTIFICATE-----",
"CertificateChain": "-----BEGIN CERTIFICATE-----
MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE=
-----END CERTIFICATE-----",
"-----BEGIN CERTIFICATE-----
MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE=
-----END CERTIFICATE-----",
"-----BEGIN CERTIFICATE-----
MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE=
-----END CERTIFICATE-----"
}
awscli-1.10.1/awscli/examples/acm/request-certificate.rst

**To request a new ACM certificate**
The following ``request-certificate`` command requests a new certificate for the www.example.com domain::
aws acm request-certificate --domain-name www.example.com
You can enter an idempotency token to distinguish between calls to ``request-certificate``::
aws acm request-certificate --domain-name www.example.com --idempotency-token 91adc45q
You can enter an alternative name that can be used to reach your website::
aws acm request-certificate --domain-name www.example.com --idempotency-token 91adc45q --subject-alternative-names www.example.net
You can also enter multiple alternative names::
aws acm request-certificate --domain-name a.example.com --subject-alternative-names b.example.com c.example.com d.example.com *.e.example.com *.f.example.com
You can also enter domain validation options to specify the domain to which validation email will be sent::
aws acm request-certificate --domain-name example.com --subject-alternative-names www.example.com --domain-validation-options DomainName=www.example.com,ValidationDomain=example.com
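If you want to script against the new certificate, note that ``request-certificate`` returns the certificate ARN, which you can capture with ``--query``. A sketch; the shell variable name is just an example:

```shell
# capture the ARN of the newly requested certificate
CERT_ARN=$(aws acm request-certificate --domain-name www.example.com \
    --query CertificateArn --output text)

# then inspect the request, for example with describe-certificate
aws acm describe-certificate --certificate-arn "$CERT_ARN"
```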
awscli-1.10.1/awscli/examples/acm/delete-certificate.rst

**To delete an ACM certificate from your account**
The following ``delete-certificate`` command deletes the certificate with the specified ARN::
aws acm delete-certificate --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012

awscli-1.10.1/awscli/examples/acm/describe-certificate.rst

**To retrieve the fields contained in an ACM certificate**
The following ``describe-certificate`` command retrieves all of the fields for the certificate with the specified ARN::
aws acm describe-certificate --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
Output similar to the following is displayed::
{
"Certificate": {
"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012",
"CreatedAt": 1446835267.0,
"DomainName": "www.example.com",
"DomainValidationOptions": [
{
"DomainName": "www.example.com",
"ValidationDomain": "www.example.com",
"ValidationEmails": [
"hostmaster@example.com",
"admin@example.com",
"owner@example.com.whoisprivacyservice.org",
"tech@example.com.whoisprivacyservice.org",
"admin@example.com.whoisprivacyservice.org",
"postmaster@example.com",
"webmaster@example.com",
"administrator@example.com"
]
},
{
"DomainName": "www.example.net",
"ValidationDomain": "www.example.net",
"ValidationEmails": [
"postmaster@example.net",
"admin@example.net",
"owner@example.net.whoisprivacyservice.org",
"tech@example.net.whoisprivacyservice.org",
"admin@example.net.whoisprivacyservice.org",
"hostmaster@example.net",
"administrator@example.net",
"webmaster@example.net"
]
}
],
"InUseBy": [],
"IssuedAt": 1446835815.0,
"Issuer": "Amazon",
"KeyAlgorithm": "RSA-2048",
"NotAfter": 1478433600.0,
"NotBefore": 1446768000.0,
"Serial": "0f:ac:b0:a3:8d:ea:65:52:2d:7d:01:3a:39:36:db:d6",
"SignatureAlgorithm": "SHA256WITHRSA",
"Status": "ISSUED",
"Subject": "CN=www.example.com",
"SubjectAlternativeNames": [
"www.example.com",
"www.example.net"
]
}
}
awscli-1.10.1/awscli/examples/acm/resend-validation-email.rst

**To resend validation email for your ACM certificate request**
The following ``resend-validation-email`` command tells the Amazon certificate authority to send validation email to the appropriate addresses::
aws acm resend-validation-email --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012 --domain www.example.com --validation-domain example.com
awscli-1.10.1/awscli/examples/ecr/batch-delete-image.rst

**To delete an image**
This example deletes an image with the tag ``precise`` in a repository called
``ubuntu`` in the default registry for an account.
Command::
aws ecr batch-delete-image --repository-name ubuntu --image-ids imageTag=precise
Output::
{
"failures": [],
"imageIds": [
{
"imageTag": "precise",
"imageDigest": "sha256:19665f1e6d1e504117a1743c0a3d3753086354a38375961f2e665416ef4b1b2f"
}
]
}
awscli-1.10.1/awscli/examples/ecr/get-login_description.rst

Log in to an Amazon ECR registry.
This command retrieves a token that is valid for a specified registry for 12
hours, and then it prints a ``docker login`` command with that authorization
token. You can execute the printed command to log in to your registry with
Docker. After you have logged in to an Amazon ECR registry with this command,
you can use the Docker CLI to push and pull images from that registry until the
token expires.
.. note::
This command displays ``docker login`` commands on stdout, including
authentication credentials. Those credentials could be visible to other
users on your system in a process list or in your command history. If you
are not on a secure system, consider this risk and log in
interactively instead. For more information, see ``get-authorization-token``.
awscli-1.10.1/awscli/examples/ecr/describe-repositories.rst

**To describe the repositories in a registry**
This example describes the repositories in the default registry for an account.
Command::
aws ecr describe-repositories
Output::
{
"repositories": [
{
"registryId": "012345678910",
"repositoryName": "ubuntu",
"repositoryArn": "arn:aws:ecr:us-west-2:012345678910:repository/ubuntu"
},
{
"registryId": "012345678910",
"repositoryName": "test",
"repositoryArn": "arn:aws:ecr:us-west-2:012345678910:repository/test"
}
]
}
awscli-1.10.1/awscli/examples/ecr/batch-get-image.rst

**To describe an image**
This example describes an image with the tag ``precise`` in a repository called
``ubuntu`` in the default registry for an account.
Command::
aws ecr batch-get-image --repository-name ubuntu --image-ids imageTag=precise
awscli-1.10.1/awscli/examples/ecr/create-repository.rst

**To create a repository**
This example creates a repository called ``nginx-web-app`` inside the
``project-a`` namespace in the default registry for an account.
Command::
aws ecr create-repository --repository-name project-a/nginx-web-app
Output::
{
"repository": {
"registryId": "",
"repositoryName": "project-a/nginx-web-app",
"repositoryArn": "arn:aws:ecr:us-west-2::repository/project-a/nginx-web-app"
}
}
awscli-1.10.1/awscli/examples/ecr/get-login.rst

**To retrieve a Docker login command to your default registry**
This example prints a command that you can use to log in to your default Amazon
ECR registry.
Command::
aws ecr get-login
Output::
docker login -u AWS -p <password> -e none https://<aws_account_id>.dkr.ecr.<region>.amazonaws.com
**To log in to another account's registry**
This example prints one or more commands that you can use to log in to
Amazon ECR registries associated with other accounts.
Command::
aws ecr get-login --registry-ids 012345678910 023456789012
Output::
docker login -u <username> -p <password> -e none <registry-1-endpoint>
docker login -u <username> -p <password> -e none <registry-2-endpoint>
awscli-1.10.1/awscli/examples/ecr/get-authorization-token.rst

**To get an authorization token for your default registry**
This example command gets an authorization token for your default registry.
Command::
aws ecr get-authorization-token
Output::
{
"authorizationData": [
{
"authorizationToken": "QVdTOkN...",
"expiresAt": 1448875853.241,
"proxyEndpoint": "https://<aws_account_id>.dkr.ecr.us-west-2.amazonaws.com"
}
]
}
**To get the decoded password for your default registry**
This example command gets an authorization token for your default registry and
returns the decoded password for you to use in a ``docker login`` command.
.. note::
Mac OSX users should use the ``-D`` option to ``base64`` to decode the
token data.
Command::
aws ecr get-authorization-token --output text \
--query authorizationData[].authorizationToken \
| base64 -d | cut -d: -f2
**To log in to Docker with your decoded password**
This example command uses your decoded password to add authentication
information to your Docker installation by using the ``docker login`` command.
The user name is ``AWS``, and you can use any email you want (Amazon ECR does
nothing with this information, but ``docker login`` requires the email field).
.. note::
The final argument is the ``proxyEndpoint`` returned from
``get-authorization-token`` without the ``https://`` prefix.
Command::
docker login -u AWS -p <password> -e <email> <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com
Output::
WARNING: login credentials saved in $HOME/.docker/config.json
Login Succeeded
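The three steps above (get the token, decode it, run ``docker login``) can be combined in one short script. A sketch, assuming GNU ``base64`` (use ``-D`` on OS X) and a single registry in the response; the variable names are just examples:

```shell
# decode the authorization token and keep the part after the "AWS:" prefix
TOKEN=$(aws ecr get-authorization-token --output text \
    --query 'authorizationData[0].authorizationToken')
PASSWORD=$(printf '%s' "$TOKEN" | base64 -d | cut -d: -f2)

# the registry endpoint, with the https:// prefix stripped for docker login
ENDPOINT=$(aws ecr get-authorization-token --output text \
    --query 'authorizationData[0].proxyEndpoint')
docker login -u AWS -p "$PASSWORD" -e none "${ENDPOINT#https://}"
```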
awscli-1.10.1/awscli/examples/ecr/delete-repository.rst

**To delete a repository**
This example command force deletes a repository named ``ubuntu`` in the default
registry for an account. The ``--force`` flag is required if the repository
contains images.
Command::
aws ecr delete-repository --force --repository-name ubuntu
Output::
{
"repository": {
"registryId": "",
"repositoryName": "ubuntu",
"repositoryArn": "arn:aws:ecr:us-west-2::repository/ubuntu"
}
}
awscli-1.10.1/awscli/examples/rds/create-option-group.rst

**To create an Amazon RDS option group**
The following ``create-option-group`` command creates a new Amazon RDS option group::
aws rds create-option-group --option-group-name MyOptionGroup --engine-name oracle-ee --major-engine-version 11.2 --option-group-description "Oracle Database Manager Database Control"
In the example, the option group is created for Oracle Enterprise Edition version *11.2*, is named *MyOptionGroup* and
includes a description.
This command outputs a JSON block that contains information about the option group.
For more information, see `Create an Amazon RDS Option Group`_ in the *AWS Command Line Interface User Guide*.
.. _`Create an Amazon RDS Option Group`: http://docs.aws.amazon.com/cli/latest/userguide/cli-rds-create-option-group.html
awscli-1.10.1/awscli/examples/rds/download-db-log-file-portion.rst

**How to download your log file**
By default, this command downloads only the latest part of your log file::
aws rds download-db-log-file-portion --db-instance-identifier myinstance \
--log-file-name log.txt --output text > tail.txt
To download the entire file, you need the ``--starting-token 0`` parameter::
aws rds download-db-log-file-portion --db-instance-identifier myinstance \
--log-file-name log.txt --starting-token 0 --output text > full.txt
Note that the downloaded file may contain several extra blank lines.
They appear at the end of each downloaded part of the log file, and
generally do not cause any trouble in your log file analysis.
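If those padding lines get in the way, a small helper can strip them after the download. This is a sketch; the function name is hypothetical, and it simply removes empty lines:

```shell
# Hypothetical helper (not part of the AWS CLI): remove the empty lines
# that pad the end of each downloaded part of the log file.
strip_blank_lines() {
    sed '/^$/d' "$1"
}
```

For example: ``strip_blank_lines full.txt > clean.txt``.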
awscli-1.10.1/awscli/examples/rds/add-tag-to-resource.rst

**To add a tag to an Amazon RDS resource**
The following ``add-tags-to-resource`` command adds a tag to an Amazon RDS resource. In the example, a DB instance is
identified by the instance's ARN, arn:aws:rds:us-west-2:001234567890:db:mysql-db1. The tag that is added to the DB
instance has a key of ``project`` and a value of ``salix``; the command also adds a second tag with a key of ``account`` and a value of ``sg01``::
aws rds add-tags-to-resource --resource-name arn:aws:rds:us-west-2:001234567890:db:mysql-db1 --tags account=sg01,project=salix
This command outputs a JSON block that acknowledges the change to the RDS resource.
For more information, see `Tagging an Amazon RDS DB Instance`_ in the *AWS Command Line Interface User Guide*.
.. _`Tagging an Amazon RDS DB Instance`: http://docs.aws.amazon.com/cli/latest/userguide/cli-rds-add-tags.html
awscli-1.10.1/awscli/examples/rds/describe-db-instances.rst

**To describe an Amazon RDS DB instance**
The following ``describe-db-instances`` command describes all DB instances that are owned by the AWS account::
aws rds describe-db-instances
This command outputs a JSON block that contains descriptive information about all the DB instances for this AWS account.
For more information, see `Describe an Amazon RDS DB Instance`_ in the *AWS Command Line Interface User Guide*.
.. _`Describe an Amazon RDS DB Instance`: http://docs.aws.amazon.com/cli/latest/userguide/cli-rds-describe-instance.html
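For a quicker overview than the full JSON, a ``--query`` sketch that tabulates a few of the fields shown in the examples:

```shell
aws rds describe-db-instances \
    --query 'DBInstances[].[DBInstanceIdentifier,DBInstanceStatus,Engine]' \
    --output table
```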
awscli-1.10.1/awscli/examples/rds/create-db-instance.rst

**To create an Amazon RDS DB instance**
The following ``create-db-instance`` command launches a new Amazon RDS DB instance::
aws rds create-db-instance --db-instance-identifier sg-cli-test \
--allocated-storage 20 --db-instance-class db.m1.small --engine mysql \
--master-username myawsuser --master-user-password myawsuser
In the preceding example, the DB instance is created with 20 GB of standard storage and has a DB engine class of
db.m1.small. The master username and master password are provided.
This command outputs a JSON block that indicates that the DB instance was created::
{
"DBInstance": {
"Engine": "mysql",
"MultiAZ": false,
"DBSecurityGroups": [
{
"Status": "active",
"DBSecurityGroupName": "default"
}
],
"DBInstanceStatus": "creating",
"DBParameterGroups": [
{
"DBParameterGroupName": "default.mysql5.6",
"ParameterApplyStatus": "in-sync"
}
],
"MasterUsername": "myawsuser",
"LicenseModel": "general-public-license",
"OptionGroupMemberships": [
{
"Status": "in-sync",
"OptionGroupName": "default:mysql-5-6"
}
],
"AutoMinorVersionUpgrade": true,
"PreferredBackupWindow": "11:58-12:28",
"VpcSecurityGroups": [],
"PubliclyAccessible": true,
"PreferredMaintenanceWindow": "sat:13:10-sat:13:40",
"AllocatedStorage": 20,
"EngineVersion": "5.6.13",
"DBInstanceClass": "db.m1.small",
"ReadReplicaDBInstanceIdentifiers": [],
"BackupRetentionPeriod": 1,
"DBInstanceIdentifier": "sg-cli-test",
"PendingModifiedValues": {
"MasterUserPassword": "****"
}
}
}
awscli-1.10.1/awscli/examples/rds/create-db-security-group.rst

**To create an Amazon RDS DB security group**
The following ``create-db-security-group`` command creates a new Amazon RDS DB security group::
aws rds create-db-security-group --db-security-group-name mysecgroup --db-security-group-description "My Test Security Group"
In the example, the new DB security group is named ``mysecgroup`` and has a description.
This command outputs a JSON block that contains information about the DB security group.
For more information, see `Create an Amazon RDS DB Security Group`_ in the *AWS Command Line Interface User Guide*.
.. _`Create an Amazon RDS DB Security Group`: http://docs.aws.amazon.com/cli/latest/userguide/cli-rds-create-secgroup.html
awscli-1.10.1/awscli/examples/cloudwatch/describe-alarm-history.rst

**To retrieve history for an alarm**
The following example uses the ``describe-alarm-history`` command to retrieve history for the Amazon
CloudWatch alarm named "myalarm"::
aws cloudwatch describe-alarm-history --alarm-name "myalarm" --history-item-type StateUpdate
Output::
{
"AlarmHistoryItems": [
{
"Timestamp": "2014-04-09T18:59:06.442Z",
"HistoryItemType": "StateUpdate",
"AlarmName": "myalarm",
"HistoryData": "{\"version\":\"1.0\",\"oldState\":{\"stateValue\":\"ALARM\",\"stateReason\":\"testing purposes\"},\"newState\":{\"stateValue\":\"OK\",\"stateReason\":\"Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.958, 40.292].\",\"stateReasonData\":{\"version\":\"1.0\",\"queryDate\":\"2014-04-09T18:59:06.419+0000\",\"startDate\":\"2014-04-09T18:44:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.958,40.292],\"threshold\":70.0}}}",
"HistorySummary": "Alarm updated from ALARM to OK"
},
{
"Timestamp": "2014-04-09T18:59:05.805Z",
"HistoryItemType": "StateUpdate",
"AlarmName": "myalarm",
"HistoryData": "{\"version\":\"1.0\",\"oldState\":{\"stateValue\":\"OK\",\"stateReason\":\"Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.839999999999996, 39.714].\",\"stateReasonData\":{\"version\":\"1.0\",\"queryDate\":\"2014-03-11T22:45:41.569+0000\",\"startDate\":\"2014-03-11T22:30:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.839999999999996,39.714],\"threshold\":70.0}},\"newState\":{\"stateValue\":\"ALARM\",\"stateReason\":\"testing purposes\"}}",
"HistorySummary": "Alarm updated from OK to ALARM"
}
]
}
awscli-1.10.1/awscli/examples/cloudwatch/get-metric-statistics.rst

**To get the CPU utilization per EC2 instance**
The following example uses the ``get-metric-statistics`` command to get the CPU utilization for an EC2
instance with the ID i-abcdef. For more examples using the ``get-metric-statistics`` command, see `Get Statistics for a Metric`__ in the *Amazon CloudWatch Developer Guide*.
.. __: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/US_GetStatistics.html
::
aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time 2014-04-08T23:18:00 --end-time 2014-04-09T23:18:00 --period 3600 --namespace AWS/EC2 --statistics Maximum --dimensions Name=InstanceId,Value=i-abcdef
Output::
{
"Datapoints": [
{
"Timestamp": "2014-04-09T11:18:00Z",
"Maximum": 44.79,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T20:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T19:18:00Z",
"Maximum": 50.85,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T09:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T03:18:00Z",
"Maximum": 76.84,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T21:18:00Z",
"Maximum": 48.96,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T14:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T08:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T16:18:00Z",
"Maximum": 45.55,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T06:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T13:18:00Z",
"Maximum": 45.08,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T05:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T18:18:00Z",
"Maximum": 46.88,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T17:18:00Z",
"Maximum": 52.08,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T07:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T02:18:00Z",
"Maximum": 51.23,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T12:18:00Z",
"Maximum": 47.67,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-08T23:18:00Z",
"Maximum": 46.88,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T10:18:00Z",
"Maximum": 51.91,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T04:18:00Z",
"Maximum": 47.13,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T15:18:00Z",
"Maximum": 48.96,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T00:18:00Z",
"Maximum": 48.16,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T01:18:00Z",
"Maximum": 49.18,
"Unit": "Percent"
}
],
"Label": "CPUUtilization"
}
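To reduce the datapoints to a single number, JMESPath functions can be applied with ``--query``. A sketch that returns the highest maximum over the whole period; against the output above, this would print ``76.84``:

```shell
aws cloudwatch get-metric-statistics --metric-name CPUUtilization \
    --start-time 2014-04-08T23:18:00 --end-time 2014-04-09T23:18:00 \
    --period 3600 --namespace AWS/EC2 --statistics Maximum \
    --dimensions Name=InstanceId,Value=i-abcdef \
    --query 'max(Datapoints[].Maximum)' --output text
```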
awscli-1.10.1/awscli/examples/cloudwatch/describe-alarms.rst

**To list information about an alarm**
The following example uses the ``describe-alarms`` command to provide information about the alarm named "myalarm"::
aws cloudwatch describe-alarms --alarm-names "myalarm"
Output::
{
"MetricAlarms": [
{
"EvaluationPeriods": 2,
"AlarmArn": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:myalarm",
"StateUpdatedTimestamp": "2014-04-09T18:59:06.442Z",
"AlarmConfigurationUpdatedTimestamp": "2012-12-27T00:49:54.032Z",
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
"arn:aws:sns:us-east-1:123456789012:myHighCpuAlarm"
],
"Namespace": "AWS/EC2",
"AlarmDescription": "CPU usage exceeds 70 percent",
"StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2014-04-09T18:59:06.419+0000\",\"startDate\":\"2014-04-09T18:44:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.958,40.292],\"threshold\":70.0}",
"Period": 300,
"StateValue": "OK",
"Threshold": 70.0,
"AlarmName": "myalarm",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-0c986c72"
}
],
"Statistic": "Average",
"StateReason": "Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.958, 40.292].",
"InsufficientDataActions": [],
"OKActions": [],
"ActionsEnabled": true,
"MetricName": "CPUUtilization"
}
]
}
awscli-1.10.1/awscli/examples/cloudwatch/disable-alarm-actions.rst

**To disable actions for an alarm**
The following example uses the ``disable-alarm-actions`` command to disable all actions for the alarm named ``myalarm``::
aws cloudwatch disable-alarm-actions --alarm-names myalarm
This command returns to the prompt if successful.
awscli-1.10.1/awscli/examples/cloudwatch/set-alarm-state.rst

**To temporarily change the state of an alarm**
The following example uses the ``set-alarm-state`` command to temporarily change the state of an
Amazon CloudWatch alarm named "myalarm" and set it to the ALARM state for testing purposes::
aws cloudwatch set-alarm-state --alarm-name "myalarm" --state-value ALARM --state-reason "testing purposes"
This command returns to the prompt if successful.
awscli-1.10.1/awscli/examples/cloudwatch/describe-alarms-for-metric.rst

**To display information about alarms associated with a metric**
The following example uses the ``describe-alarms-for-metric`` command to display information about
any alarms associated with the Amazon EC2 CPUUtilization metric and the instance with the ID ``i-0c986c72``::
aws cloudwatch describe-alarms-for-metric --metric-name CPUUtilization --namespace AWS/EC2 --dimensions Name=InstanceId,Value=i-0c986c72
Output::
{
"MetricAlarms": [
{
"EvaluationPeriods": 10,
"AlarmArn": "arn:aws:cloudwatch:us-east-1:111122223333:alarm:myHighCpuAlarm2",
"StateUpdatedTimestamp": "2013-10-30T03:03:51.479Z",
"AlarmConfigurationUpdatedTimestamp": "2013-10-30T03:03:50.865Z",
"ComparisonOperator": "GreaterThanOrEqualToThreshold",
"AlarmActions": [
"arn:aws:sns:us-east-1:111122223333:NotifyMe"
],
"Namespace": "AWS/EC2",
"AlarmDescription": "CPU usage exceeds 70 percent",
"StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2013-10-30T03:03:51.479+0000\",\"startDate\":\"2013-10-30T02:08:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[40.698,39.612,42.432,39.796,38.816,42.28,42.854,40.088,40.760000000000005,41.316],\"threshold\":70.0}",
"Period": 300,
"StateValue": "OK",
"Threshold": 70.0,
"AlarmName": "myHighCpuAlarm2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-0c986c72"
}
],
"Statistic": "Average",
"StateReason": "Threshold Crossed: 10 datapoints were not greater than or equal to the threshold (70.0). The most recent datapoints: [40.760000000000005, 41.316].",
"InsufficientDataActions": [],
"OKActions": [],
"ActionsEnabled": true,
"MetricName": "CPUUtilization"
},
{
"EvaluationPeriods": 2,
"AlarmArn": "arn:aws:cloudwatch:us-east-1:111122223333:alarm:myHighCpuAlarm",
"StateUpdatedTimestamp": "2014-04-09T18:59:06.442Z",
"AlarmConfigurationUpdatedTimestamp": "2014-04-09T22:26:05.958Z",
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
"arn:aws:sns:us-east-1:111122223333:HighCPUAlarm"
],
"Namespace": "AWS/EC2",
"AlarmDescription": "CPU usage exceeds 70 percent",
"StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2014-04-09T18:59:06.419+0000\",\"startDate\":\"2014-04-09T18:44:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.958,40.292],\"threshold\":70.0}",
"Period": 300,
"StateValue": "OK",
"Threshold": 70.0,
"AlarmName": "myHighCpuAlarm",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-0c986c72"
}
],
"Statistic": "Average",
"StateReason": "Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.958, 40.292].",
"InsufficientDataActions": [],
"OKActions": [],
"ActionsEnabled": false,
"MetricName": "CPUUtilization"
}
]
}
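Note that the ``StateReasonData`` field in the output above is itself a JSON document encoded as a string, so it must be decoded a second time after the CLI's JSON output is parsed. A minimal sketch (the sample value below is abbreviated from the output above):

```python
import json

# Abbreviated sample value in the same shape as the StateReasonData field.
state_reason_data = (
    '{"version":"1.0","statistic":"Average","period":300,'
    '"recentDatapoints":[38.958,40.292],"threshold":70.0}'
)

# Decode the embedded JSON string into a dictionary.
details = json.loads(state_reason_data)
print(details["threshold"])              # 70.0
print(max(details["recentDatapoints"]))  # 40.292
```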
awscli-1.10.1/awscli/examples/cloudwatch/delete-alarms.rst 0000666 4542626 0000144 00000000375 12652514124 024643 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an alarm**
The following example uses the ``delete-alarms`` command to delete the Amazon CloudWatch alarm
named "myalarm"::
aws cloudwatch delete-alarms --alarm-name myalarm
This command returns to the prompt if successful.
awscli-1.10.1/awscli/examples/cloudwatch/put-metric-alarm.rst 0000666 4542626 0000144 00000001443 12652514124 025304 0 ustar pysdk-ci amazon 0000000 0000000 **To send an Amazon Simple Notification Service email message when CPU utilization exceeds 70 percent**
The following example uses the ``put-metric-alarm`` command to send an Amazon Simple Notification Service email message when CPU utilization exceeds 70 percent::
aws cloudwatch put-metric-alarm --alarm-name cpu-mon --alarm-description "Alarm when CPU exceeds 70 percent" --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 300 --threshold 70 --comparison-operator GreaterThanThreshold --dimensions Name=InstanceId,Value=i-12345678 --evaluation-periods 2 --alarm-actions arn:aws:sns:us-east-1:111122223333:MyTopic --unit Percent
This command returns to the prompt if successful. If an alarm with the same name already exists, it will be overwritten by the new alarm.
awscli-1.10.1/awscli/examples/cloudwatch/enable-alarm-actions.rst 0000666 4542626 0000144 00000000416 12652514124 026076 0 ustar pysdk-ci amazon 0000000 0000000 **To enable all actions for an alarm**
The following example uses the ``enable-alarm-actions`` command to enable all actions for the alarm named ``myalarm``::
aws cloudwatch enable-alarm-actions --alarm-names myalarm
This command returns to the prompt if successful.
awscli-1.10.1/awscli/examples/cloudwatch/put-metric-data.rst 0000666 4542626 0000144 00000001363 12652514124 025122 0 ustar pysdk-ci amazon 0000000 0000000 **To publish a custom metric to Amazon CloudWatch**
The following example uses the ``put-metric-data`` command to publish a custom metric to Amazon CloudWatch::
aws cloudwatch put-metric-data --namespace "Usage Metrics" --metric-data file://metric.json
The values for the metric are stored in the JSON file ``metric.json``.
Here are the contents of that file::
[
{
"MetricName": "New Posts",
"Timestamp": "Wednesday, June 12, 2013 8:28:20 PM",
"Value": 0.50,
"Unit": "Count"
}
]
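If the payload is generated programmatically, a sketch like the following can build the ``metric.json`` contents. This is illustrative only; it uses an ISO 8601 timestamp, which CloudWatch also accepts, rather than the long date format above:

```python
import json
from datetime import datetime, timezone

# Illustrative sketch: build the metric.json payload shown above.
metric_data = [
    {
        "MetricName": "New Posts",
        "Timestamp": datetime(
            2013, 6, 12, 20, 28, 20, tzinfo=timezone.utc
        ).isoformat(),
        "Value": 0.50,
        "Unit": "Count",
    }
]

# Serialize to the JSON text that would be written to metric.json.
payload = json.dumps(metric_data, indent=4)
print(payload)
```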
For more information, see `Publishing Custom Metrics`_ in the *Amazon CloudWatch Developer Guide*.
.. _`Publishing Custom Metrics`: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html
awscli-1.10.1/awscli/examples/cloudwatch/list-metrics.rst 0000666 4542626 0000144 00000004737 12652514124 024545 0 ustar pysdk-ci amazon 0000000 0000000 **To list the metrics for Amazon SNS**
The following example uses the ``list-metrics`` command to list the metrics for Amazon SNS::
aws cloudwatch list-metrics --namespace "AWS/SNS"
Output::
{
"Metrics": [
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "PublishSize"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "PublishSize"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "NumberOfNotificationsFailed"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "NumberOfNotificationsDelivered"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "NumberOfMessagesPublished"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "NumberOfMessagesPublished"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "NumberOfNotificationsDelivered"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "NumberOfNotificationsFailed"
}
]
}
awscli-1.10.1/awscli/examples/cloudformation/ 0000777 4542626 0000144 00000000000 12652514126 022257 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/cloudformation/list-stacks.rst 0000666 4542626 0000144 00000001513 12652514124 025250 0 ustar pysdk-ci amazon 0000000 0000000 **To list AWS CloudFormation stacks**
The following ``list-stacks`` command shows a summary of all stacks that have a status of ``CREATE_COMPLETE``::
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
Output::
{
    "StackSummaries": [
        {
            "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/myteststack/466df9e0-0dff-08e3-8e2f-5088487c4896",
            "TemplateDescription": "AWS CloudFormation Sample Template S3_Bucket: Sample template showing how to create a publicly accessible S3 bucket. **WARNING** This template creates an S3 bucket. You will be billed for the AWS resources used if you create a stack from this template.",
            "StackStatusReason": null,
            "CreationTime": "2013-08-26T03:27:10.190Z",
            "StackName": "myteststack",
            "StackStatus": "CREATE_COMPLETE"
        }
    ]
}
awscli-1.10.1/awscli/examples/cloudformation/create-stack.rst 0000666 4542626 0000144 00000002630 12652514124 025356 0 ustar pysdk-ci amazon 0000000 0000000 **To create an AWS CloudFormation stack**
The following ``create-stack`` command creates a stack with the name ``myteststack`` using the ``sampletemplate.json`` template::
aws cloudformation create-stack --stack-name myteststack --template-body file:///home/local/test/sampletemplate.json
Output::
{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/myteststack/466df9e0-0dff-08e3-8e2f-5088487c4896"
}
For more information, see `Stacks`_ in the *AWS CloudFormation User Guide*.
.. _`Stacks`: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/concept-stack.html
awscli-1.10.1/awscli/examples/cloudformation/describe-stacks.rst 0000666 4542626 0000144 00000002633 12652514124 026061 0 ustar pysdk-ci amazon 0000000 0000000 **To describe AWS CloudFormation stacks**
The following ``describe-stacks`` command shows summary information for the ``myteststack`` stack::
aws cloudformation describe-stacks --stack-name myteststack
Output::
{
"Stacks": [
{
"StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/myteststack/466df9e0-0dff-08e3-8e2f-5088487c4896",
"Description": "AWS CloudFormation Sample Template S3_Bucket: Sample template showing how to create a publicly accessible S3 bucket. **WARNING** This template creates an S3 bucket. You will be billed for the AWS resources used if you create a stack from this template.",
"Tags": [],
"Outputs": [
{
"Description": "Name of S3 bucket to hold website content",
"OutputKey": "BucketName",
"OutputValue": "myteststack-s3bucket-jssofi1zie2w"
}
],
"StackStatusReason": null,
"CreationTime": "2013-08-23T01:02:15.422Z",
"Capabilities": [],
"StackName": "myteststack",
"StackStatus": "CREATE_COMPLETE",
"DisableRollback": false
}
    ]
}
For more information, see `Stacks`_ in the *AWS CloudFormation User Guide*.
.. _`Stacks`: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/concept-stack.html
awscli-1.10.1/awscli/examples/cloudformation/update-stack.rst 0000666 4542626 0000144 00000002362 12652514124 025377 0 ustar pysdk-ci amazon 0000000 0000000 **To update AWS CloudFormation stacks**
The following ``update-stack`` command updates the template and input parameters for the ``mystack`` stack::
aws cloudformation update-stack --stack-name mystack --template-url https://s3.amazonaws.com/sample/updated.template --parameters ParameterKey=KeyPairName,ParameterValue=SampleKeyPair ParameterKey=SubnetIDs,ParameterValue=SampleSubnetID1\,SampleSubnetID2
The following ``update-stack`` command updates just the ``SubnetIDs`` parameter value for the ``mystack`` stack. If you
don't specify a parameter value, the default value that is specified in the template is used::
aws cloudformation update-stack --stack-name mystack --template-url https://s3.amazonaws.com/sample/updated.template --parameters ParameterKey=KeyPairName,UsePreviousValue=true ParameterKey=SubnetIDs,ParameterValue=SampleSubnetID1\,UpdatedSampleSubnetID2
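The escaped commas above are needed because a list-valued parameter travels as one comma-joined string. As an illustrative alternative (not taken from the original example), the ``--parameters`` value can be built as a JSON document, which avoids shell escaping:

```python
import json

# Illustrative JSON equivalent of the shorthand form
#   ParameterKey=SubnetIDs,ParameterValue=SampleSubnetID1\,UpdatedSampleSubnetID2
# The list value is a single comma-joined string, which is why the
# shorthand syntax requires escaped commas.
parameters = [
    {"ParameterKey": "KeyPairName", "UsePreviousValue": True},
    {
        "ParameterKey": "SubnetIDs",
        "ParameterValue": ",".join(
            ["SampleSubnetID1", "UpdatedSampleSubnetID2"]
        ),
    },
]

# This JSON text can be passed directly as the --parameters argument.
print(json.dumps(parameters))
```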
The following ``update-stack`` command adds two stack notification topics to the ``mystack`` stack::
aws cloudformation update-stack --stack-name mystack --use-previous-template --notification-arns "arn:aws:sns:us-east-1:123456789012:mytopic1" "arn:aws:sns:us-east-1:123456789012:mytopic2"
For more information, see `Updating a Stack`_ in the *AWS CloudFormation User Guide*.
.. _`Updating a Stack`: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html
awscli-1.10.1/awscli/examples/cloudformation/get-template.rst 0000666 4542626 0000144 00000002065 12652514124 025402 0 ustar pysdk-ci amazon 0000000 0000000 **To view the template body for an AWS CloudFormation stack**
The following ``get-template`` command shows the template for the ``myteststack`` stack::
aws cloudformation get-template --stack-name myteststack
Output::
{
"TemplateBody": {
"AWSTemplateFormatVersion": "2010-09-09",
"Outputs": {
"BucketName": {
"Description": "Name of S3 bucket to hold website content",
"Value": {
"Ref": "S3Bucket"
}
}
},
"Description": "AWS CloudFormation Sample Template S3_Bucket: Sample template showing how to create a publicly accessible S3 bucket. **WARNING** This template creates an S3 bucket. You will be billed for the AWS resources used if you create a stack from this template.",
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"AccessControl": "PublicRead"
}
}
}
}
} awscli-1.10.1/awscli/examples/cloudformation/cancel-update-stack.rst 0000666 4542626 0000144 00000000331 12652514124 026614 0 ustar pysdk-ci amazon 0000000 0000000 **To cancel a stack update that is in progress**
The following ``cancel-update-stack`` command cancels a stack update on the ``myteststack`` stack::
aws cloudformation cancel-update-stack --stack-name myteststack
awscli-1.10.1/awscli/examples/cloudformation/validate-template.rst 0000666 4542626 0000144 00000001502 12652514124 026407 0 ustar pysdk-ci amazon 0000000 0000000 **To validate an AWS CloudFormation template**
The following ``validate-template`` command validates the ``sampletemplate.json`` template::
aws cloudformation validate-template --template-body file:///home/local/test/sampletemplate.json
Output::
{
"Description": "AWS CloudFormation Sample Template S3_Bucket: Sample template showing how to create a publicly accessible S3 bucket. **WARNING** This template creates an S3 bucket. You will be billed for the AWS resources used if you create a stack from this template.",
"Parameters": [],
"Capabilities": []
}
For more information, see `Working with AWS CloudFormation Templates`_ in the *AWS CloudFormation User Guide*.
.. _`Working with AWS CloudFormation Templates`: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html
awscli-1.10.1/awscli/examples/iam/ 0000777 4542626 0000144 00000000000 12652514126 020000 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/iam/list-user-policies.rst 0000666 4542626 0000144 00000000754 12652514124 024272 0 ustar pysdk-ci amazon 0000000 0000000 **To list policies for an IAM user**
The following ``list-user-policies`` command lists the policies that are attached to the IAM user named ``Bob``::
aws iam list-user-policies --user-name Bob
Output::
{
    "PolicyNames": [
        "ExamplePolicy",
        "TestPolicy"
    ]
}
For more information, see `Adding a New User to Your AWS Account`_ in the *Using IAM* guide.
.. _`Adding a New User to Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html
awscli-1.10.1/awscli/examples/iam/delete-login-profile.rst 0000666 4542626 0000144 00000000561 12652514124 024540 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a password for an IAM user**
The following ``delete-login-profile`` command deletes the password for the IAM user named ``Bob``::
aws iam delete-login-profile --user-name Bob
For more information, see `Managing Passwords`_ in the *Using IAM* guide.
.. _`Managing Passwords`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html
awscli-1.10.1/awscli/examples/iam/get-policy.rst 0000666 4542626 0000144 00000001524 12652514124 022606 0 ustar pysdk-ci amazon 0000000 0000000 **To retrieve information about the specified managed policy**
This example returns details about the managed policy whose ARN is ``arn:aws:iam::123456789012:policy/MySamplePolicy``::
aws iam get-policy --policy-arn arn:aws:iam::123456789012:policy/MySamplePolicy
Output::
{
"Policy": {
"PolicyName": "MySamplePolicy",
        "CreateDate": "2015-06-17T19:23:32Z",
        "AttachmentCount": 0,
        "IsAttachable": true,
"PolicyId": "Z27SI6FQMGNQ2EXAMPLE1",
"DefaultVersionId": "v1",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:policy/MySamplePolicy",
"UpdateDate": "2015-06-17T19:23:32Z"
}
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/get-role.rst 0000666 4542626 0000144 00000001425 12652514124 022250 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about an IAM role**
The following ``get-role`` command gets information about the role named ``Test-Role``::
aws iam get-role --role-name Test-Role
Output::
{
"Role": {
"AssumeRolePolicyDocument": "",
"RoleId": "AIDIODR4TAW7CSEXAMPLE",
"CreateDate": "2013-04-18T05:01:58Z",
"RoleName": "Test-Role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/Test-Role"
}
}
The command displays the trust policy attached to the role. To list the permissions policies attached to a role, use the ``list-role-policies`` command.
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
awscli-1.10.1/awscli/examples/iam/create-policy-version.rst 0000666 4542626 0000144 00000001335 12652514124 024755 0 ustar pysdk-ci amazon 0000000 0000000 **To create a new version of a managed policy**
This example creates a new ``v2`` version of the IAM policy whose ARN is ``arn:aws:iam::123456789012:policy/MyPolicy`` and makes it the default version::
aws iam create-policy-version --policy-arn arn:aws:iam::123456789012:policy/MyPolicy --policy-document file://NewPolicyVersion.json --set-as-default
Output::
{
"PolicyVersion": {
"CreateDate": "2015-06-16T18:56:03.721Z",
"VersionId": "v2",
"IsDefaultVersion": true
}
}
For more information, see `Versioning for Managed Policies`_ in the *Using IAM* guide.
.. _`Versioning for Managed Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_managed-versioning.html awscli-1.10.1/awscli/examples/iam/generate-credential-report.rst 0000666 4542626 0000144 00000001001 12652514124 025733 0 ustar pysdk-ci amazon 0000000 0000000 **To generate a credential report**
The following example attempts to generate a credential report for the AWS account::
aws iam generate-credential-report
Output::
{
"State": "STARTED",
"Description": "No report exists. Starting a new report generation task"
}
For more information, see `Getting Credential Reports for Your AWS Account`_ in the *Using IAM* guide.
.. _`Getting Credential Reports for Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html awscli-1.10.1/awscli/examples/iam/list-virtual-mfa-devices.rst 0000666 4542626 0000144 00000001164 12652514124 025352 0 ustar pysdk-ci amazon 0000000 0000000 **To list virtual MFA devices**
The following ``list-virtual-mfa-devices`` command lists the virtual MFA devices that have been configured for the current account::
aws iam list-virtual-mfa-devices
Output::
{
"VirtualMFADevices": [
{
"SerialNumber": "arn:aws:iam::123456789012:mfa/ExampleMFADevice"
},
{
"SerialNumber": "arn:aws:iam::123456789012:mfa/Fred"
}
]
}
For more information, see `Using a Virtual MFA Device with AWS`_ in the *Using IAM* guide.
.. _`Using a Virtual MFA Device with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html
awscli-1.10.1/awscli/examples/iam/delete-saml-provider.rst 0000666 4542626 0000144 00000000676 12652514124 024565 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a SAML provider**
This example deletes the IAM SAML 2.0 provider whose ARN is ``arn:aws:iam::123456789012:saml-provider/SAMLADFSProvider``::
aws iam delete-saml-provider --saml-provider-arn arn:aws:iam::123456789012:saml-provider/SAMLADFSProvider
For more information, see `Using SAML Providers`_ in the *Using IAM* guide.
.. _`Using SAML Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-saml.html awscli-1.10.1/awscli/examples/iam/upload-server-certificate.rst 0000666 4542626 0000144 00000002216 12652514124 025601 0 ustar pysdk-ci amazon 0000000 0000000 **To upload a server certificate to your AWS account**
The following **upload-server-certificate** command uploads a server certificate to your AWS account::
aws iam upload-server-certificate --server-certificate-name myServerCertificate --certificate-body file://public_key_cert_file.pem --private-key file://my_private_key.pem --certificate-chain file://my_certificate_chain_file.pem
The certificate is in the file ``public_key_cert_file.pem``, and your private key is in the file ``my_private_key.pem``.
When the file has finished uploading, it is available under the name *myServerCertificate*. The certificate chain
provided by the certificate authority (CA) is included as the ``my_certificate_chain_file.pem`` file.
Note that the parameters that contain file names are preceded with ``file://``. This tells the command that the
parameter value is a file name. You can include a complete path following ``file://``.
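The behavior described above can be sketched as follows; this is an illustrative stand-in, not the CLI's actual implementation, and the ``.pem`` content is a placeholder:

```python
import os
import tempfile

def resolve_file_param(value):
    """Return file contents for file:// values; pass other values through."""
    prefix = "file://"
    if value.startswith(prefix):
        with open(value[len(prefix):]) as f:
            return f.read()
    return value

# Demo with a throwaway file standing in for public_key_cert_file.pem.
with tempfile.NamedTemporaryFile("w", suffix=".pem", delete=False) as tmp:
    tmp.write("-----BEGIN CERTIFICATE-----")
    path = tmp.name

assert resolve_file_param("file://" + path) == "-----BEGIN CERTIFICATE-----"
assert resolve_file_param("myServerCertificate") == "myServerCertificate"
os.remove(path)
```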
For more information, see `Creating, Uploading, and Deleting Server Certificates`_ in the *Using IAM* guide.
.. _`Creating, Uploading, and Deleting Server Certificates`: http://docs.aws.amazon.com/IAM/latest/UserGuide/InstallCert.html
awscli-1.10.1/awscli/examples/iam/delete-user-policy.rst 0000666 4542626 0000144 00000001015 12652514124 024240 0 ustar pysdk-ci amazon 0000000 0000000 **To remove a policy from an IAM user**
The following ``delete-user-policy`` command removes the specified policy from the IAM user named ``Bob``::
aws iam delete-user-policy --user-name Bob --policy-name ExamplePolicy
To get a list of policies for an IAM user, use the ``list-user-policies`` command.
For more information, see `Adding a New User to Your AWS Account`_ in the *Using IAM* guide.
.. _`Adding a New User to Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html
awscli-1.10.1/awscli/examples/iam/delete-group-policy.rst 0000666 4542626 0000144 00000000772 12652514124 024427 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a policy from an IAM group**
The following ``delete-group-policy`` command deletes the policy named ``ExamplePolicy`` from the group named ``Admins``::
aws iam delete-group-policy --group-name Admins --policy-name ExamplePolicy
To see the policies attached to a group, use the ``list-group-policies`` command.
For more information, see `Managing IAM Policies`_ in the *Using IAM* guide.
.. _`Managing IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingPolicies.html
awscli-1.10.1/awscli/examples/iam/delete-policy.rst 0000666 4542626 0000144 00000000617 12652514124 023273 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an IAM policy**
This example deletes the policy whose ARN is ``arn:aws:iam::123456789012:policy/MySamplePolicy``::
aws iam delete-policy --policy-arn arn:aws:iam::123456789012:policy/MySamplePolicy
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/create-virtual-mfa-device.rst 0000666 4542626 0000144 00000001420 12652514124 025452 0 ustar pysdk-ci amazon 0000000 0000000 **To create a virtual MFA device**
This example creates a new virtual MFA device called ``BobsMFADevice``. It creates a file that contains bootstrap information called ``QRCode.png``
and places it in the ``C:/`` directory. The bootstrap method used in this example is ``QRCodePNG``::
aws iam create-virtual-mfa-device --virtual-mfa-device-name BobsMFADevice --outfile C:/QRCode.png --bootstrap-method QRCodePNG
Output::
{
"VirtualMFADevice": {
"SerialNumber": "arn:aws:iam::210987654321:mfa/BobsMFADevice"
    }
}
For more information, see `Using Multi-Factor Authentication (MFA) Devices with AWS`_ in the *Using IAM* guide.
.. _`Using Multi-Factor Authentication (MFA) Devices with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingMFA.html awscli-1.10.1/awscli/examples/iam/get-group.rst 0000666 4542626 0000144 00000001112 12652514124 022434 0 ustar pysdk-ci amazon 0000000 0000000 **To get an IAM group**
This example returns details about the IAM group ``Admins``::
aws iam get-group --group-name Admins
Output::
{
"Group": {
"Path": "/",
"CreateDate": "2015-06-16T19:41:48Z",
"GroupId": "AIDGPMS9RO4H3FEXAMPLE",
"Arn": "arn:aws:iam::123456789012:group/Admins",
"GroupName": "Admins"
},
"Users": []
}
For more information, see `IAM Users and Groups`_ in the *Using IAM* guide.
.. _`IAM Users and Groups`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html awscli-1.10.1/awscli/examples/iam/update-signing-certificate.rst 0000666 4542626 0000144 00000001212 12652514124 025722 0 ustar pysdk-ci amazon 0000000 0000000 **To activate or deactivate a signing certificate for an IAM user**
The following ``update-signing-certificate`` command deactivates the specified signing certificate for the IAM user named ``Bob``::
aws iam update-signing-certificate --certificate-id TA7SMP42TDN5Z26OBPJE7EXAMPLE --status Inactive --user-name Bob
To get the ID for a signing certificate, use the ``list-signing-certificates`` command.
For more information, see `Creating and Uploading a User Signing Certificate`_ in the *Using IAM* guide.
.. _`Creating and Uploading a User Signing Certificate`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_UploadCertificate.html
awscli-1.10.1/awscli/examples/iam/delete-policy-version.rst 0000666 4542626 0000144 00000000727 12652514124 024760 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a version of a managed policy**
This example deletes the version identified as ``v2`` from the policy whose ARN is ``arn:aws:iam::123456789012:policy/MyPolicy``::
aws iam delete-policy-version --policy-arn arn:aws:iam::123456789012:policy/MyPolicy --version-id v2
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/list-groups.rst 0000666 4542626 0000144 00000001470 12652514124 023022 0 ustar pysdk-ci amazon 0000000 0000000 **To list the IAM groups for the current account**
The following ``list-groups`` command lists the IAM groups in the current account::
aws iam list-groups
Output::
{
    "Groups": [
        {
            "Path": "/",
            "CreateDate": "2013-06-04T20:27:27.972Z",
            "GroupId": "AIDACKCEVSQ6C2EXAMPLE",
            "Arn": "arn:aws:iam::123456789012:group/Admins",
            "GroupName": "Admins"
        },
        {
            "Path": "/",
            "CreateDate": "2013-04-16T20:30:42Z",
            "GroupId": "AIDGPMS9RO4H3FEXAMPLE",
            "Arn": "arn:aws:iam::123456789012:group/S3-Admins",
            "GroupName": "S3-Admins"
        }
    ]
}
For more information, see `Creating and Listing Groups`_ in the *Using IAM* guide.
.. _`Creating and Listing Groups`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreatingAndListingGroups.html
awscli-1.10.1/awscli/examples/iam/put-role-policy.rst 0000666 4542626 0000144 00000001212 12652514124 023570 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a permissions policy to an IAM role**
The following ``put-role-policy`` command adds a permissions policy to the role named ``Test-Role``::
aws iam put-role-policy --role-name Test-Role --policy-name ExamplePolicy --policy-document file://AdminPolicy.json
The policy is defined as a JSON document in the *AdminPolicy.json* file. (The file name and extension are not significant.)
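The contents of *AdminPolicy.json* are not shown in the example; as an illustration only, a minimal policy document in the standard IAM policy format might be composed like this:

```python
import json

# Hypothetical minimal policy document; the actual AdminPolicy.json
# contents are not given in the original example. The statement mirrors
# the policy shown in the get-policy-version example elsewhere.
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "iam:*", "Resource": "*"}
    ],
}

# This JSON text would be written to AdminPolicy.json.
policy_json = json.dumps(admin_policy, indent=4)
print(policy_json)
```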
To attach a trust policy to a role, use the ``update-assume-role-policy`` command.
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
awscli-1.10.1/awscli/examples/iam/delete-group.rst 0000666 4542626 0000144 00000000524 12652514124 023125 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an IAM group**
The following ``delete-group`` command deletes an IAM group named ``MyTestGroup``::
aws iam delete-group --group-name MyTestGroup
For more information, see `Deleting an IAM Group`_ in the *Using IAM* guide.
.. _`Deleting an IAM Group`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_DeleteGroup.html awscli-1.10.1/awscli/examples/iam/resync-mfa-device.rst 0000666 4542626 0000144 00000001301 12652514124 024024 0 ustar pysdk-ci amazon 0000000 0000000 **To synchronize the specified MFA device with AWS servers**
This example synchronizes the MFA device that is associated with the IAM user ``Bob`` and whose ARN is ``arn:aws:iam::210987654321:mfa/BobsMFADevice``
with an authenticator program that provided the two authentication codes::
aws iam resync-mfa-device --user-name Bob --serial-number arn:aws:iam::210987654321:mfa/BobsMFADevice --authentication-code-1 123456 --authentication-code-2 987654
For more information, see `Using Multi-Factor Authentication (MFA) Devices with AWS`_ in the *IAM User Guide*.
.. _`Using Multi-Factor Authentication (MFA) Devices with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html awscli-1.10.1/awscli/examples/iam/delete-account-alias.rst 0000666 4542626 0000144 00000000623 12652514124 024514 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an account alias**
The following ``delete-account-alias`` command removes the alias ``mycompany`` for the current account::
aws iam delete-account-alias --account-alias mycompany
For more information, see `Using an Alias for Your AWS Account ID`_ in the *Using IAM* guide.
.. _`Using an Alias for Your AWS Account ID`: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html
awscli-1.10.1/awscli/examples/iam/update-access-key.rst 0000666 4542626 0000144 00000001313 12652514124 024035 0 ustar pysdk-ci amazon 0000000 0000000 **To activate or deactivate an access key for an IAM user**
The following ``update-access-key`` command deactivates the specified access key (access key ID and secret access key)
for the IAM user named ``Bob``::
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive --user-name Bob
Deactivating the key means that it cannot be used for programmatic access to AWS. However, the key is still available and can be reactivated.
For more information, see `Creating, Modifying, and Viewing User Security Credentials`_ in the *Using IAM* guide.
.. _`Creating, Modifying, and Viewing User Security Credentials`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreateAccessKey.html
awscli-1.10.1/awscli/examples/iam/get-policy-version.rst 0000666 4542626 0000144 00000001516 12652514124 024272 0 ustar pysdk-ci amazon 0000000 0000000 **To retrieve information about the specified version of the specified managed policy**
This example returns the policy document for the v2 version of the policy whose ARN is ``arn:aws:iam::123456789012:policy/MyPolicy``::
aws iam get-policy-version --policy-arn arn:aws:iam::123456789012:policy/MyPolicy --version-id v2
Output::
{
"PolicyVersion": {
        "CreateDate": "2015-06-17T19:23:32Z",
"VersionId": "v2",
"Document": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "iam:*",
"Resource": "*",
"Effect": "Allow"
}
]
        },
        "IsDefaultVersion": false
}
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/add-client-id-to-open-id-connect-provider.rst 0000666 4542626 0000144 00000001305 12652514124 030355 0 ustar pysdk-ci amazon 0000000 0000000 **To add a client ID (audience) to an Open-ID Connect (OIDC) provider**
The following ``add-client-id-to-open-id-connect-provider`` command adds the client ID ``my-application-ID`` to the OIDC provider named ``server.example.com``::
aws iam add-client-id-to-open-id-connect-provider --client-id my-application-ID --open-id-connect-provider-arn arn:aws:iam::123456789012:oidc-provider/server.example.com
To create an OIDC provider, use the ``create-open-id-connect-provider`` command.
For more information, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html awscli-1.10.1/awscli/examples/iam/get-saml-provider.rst 0000666 4542626 0000144 00000001170 12652514124 024070 0 ustar pysdk-ci amazon 0000000 0000000 **To retrieve the SAML provider metadocument**
This example retrieves the details about the SAML 2.0 provider whose ARN is ``arn:aws:iam::123456789012:saml-provider/SAMLADFS``.
The response includes the metadata document that you got from the identity provider to create the AWS SAML provider entity as well
as the creation and expiration dates::
aws iam get-saml-provider --saml-provider-arn arn:aws:iam::123456789012:saml-provider/SAMLADFS
For more information, see `Using SAML Providers`_ in the *Using IAM* guide.
.. _`Using SAML Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-saml.html awscli-1.10.1/awscli/examples/iam/create-saml-provider.rst 0000666 4542626 0000144 00000001103 12652514124 024550 0 ustar pysdk-ci amazon 0000000 0000000 **To create a SAML provider**
This example creates a new SAML provider in IAM named ``MySAMLProvider``. It is described by the SAML metadata document found in the file ``SAMLMetaData.xml``::
aws iam create-saml-provider --saml-metadata-document file://SAMLMetaData.xml --name MySAMLProvider
Output::
{
"SAMLProviderArn": "arn:aws:iam::123456789012:saml-provider/MySAMLProvider"
}
For more information, see `Using SAML Providers`_ in the *Using IAM* guide.
.. _`Using SAML Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-saml.html awscli-1.10.1/awscli/examples/iam/list-attached-role-policies.rst 0000666 4542626 0000144 00000001223 12652514124 026020 0 ustar pysdk-ci amazon 0000000 0000000 **To list all managed policies that are attached to the specified role**
This command returns the names and ARNs of the managed policies attached to the IAM role named ``SecurityAuditRole`` in the AWS account::
aws iam list-attached-role-policies --role-name SecurityAuditRole
Output::
{
"AttachedPolicies": [
{
"PolicyName": "SecurityAudit",
"PolicyArn": "arn:aws:iam::aws:policy/SecurityAudit"
}
],
"IsTruncated": false
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/detach-group-policy.rst 0000666 4542626 0000144 00000000744 12652514124 024414 0 ustar pysdk-ci amazon 0000000 0000000 **To detach a policy from a group**
This example removes the managed policy with the ARN ``arn:aws:iam::123456789012:policy/TesterAccessPolicy`` from the group called ``Testers``::
aws iam detach-group-policy --group-name Testers --policy-arn arn:aws:iam::123456789012:policy/TesterAccessPolicy
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/create-login-profile.rst 0000666 4542626 0000144 00000004133 12652514124 024540 0 ustar pysdk-ci amazon 0000000 0000000 **To create a password for an IAM user**
To create a password for an IAM user, we recommend using the ``--cli-input-json`` parameter to pass a JSON file that contains the password. Using this method, you can create a strong password with non-alphanumeric characters. It can be difficult to create a password with non-alphanumeric characters when you pass it as a command line parameter.
To use the ``--cli-input-json`` parameter, start by using the ``create-login-profile`` command with the ``--generate-cli-skeleton`` parameter, as in the following example::
aws iam create-login-profile --generate-cli-skeleton > create-login-profile.json
The previous command creates a JSON file called create-login-profile.json that you can use to fill in the information for a subsequent ``create-login-profile`` command. For example::
{
"UserName": "Bob",
"Password": "&1-3a6u:RA0djs",
"PasswordResetRequired": true
}
Next, to create a password for an IAM user, use the ``create-login-profile`` command again, this time passing the ``--cli-input-json`` parameter to specify your JSON file. The following ``create-login-profile`` command uses the ``--cli-input-json`` parameter with a JSON file called create-login-profile.json::
aws iam create-login-profile --cli-input-json file://create-login-profile.json
Output::
{
"LoginProfile": {
"UserName": "Bob",
"CreateDate": "2015-03-10T20:55:40.274Z",
"PasswordResetRequired": true
}
}
If the new password violates the account password policy, the command returns a ``PasswordPolicyViolation`` error.
To change the password for a user that already has one, use ``update-login-profile``. To set a password policy for the account, use the ``update-account-password-policy`` command.
If the account password policy allows them to, IAM users can change their own passwords using the ``change-password`` command.
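The skeleton JSON file can also be filled in programmatically rather than by hand. The following sketch (Python; ``make_password`` is a hypothetical helper, not part of the CLI) builds an input document shaped like the example above, using the standard ``secrets`` module so the password can safely include non-alphanumeric characters:

```python
import json
import secrets
import string

def make_password(length=14):
    """Generate a password mixing upper, lower, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+[]{}"
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(not c.isalnum() for c in candidate)):
            return candidate

profile = {
    "UserName": "Bob",
    "Password": make_password(),
    "PasswordResetRequired": True,
}
print(json.dumps(profile, indent=4))
```

Redirecting the printed JSON to a file produces input suitable for the ``--cli-input-json`` parameter.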
For more information, see `Managing Passwords for IAM Users`_ in the *Using IAM* guide.
.. _`Managing Passwords for IAM Users`: http://docs.aws.amazon.com/IAM/latest/UserGuide/credentials-add-pwd-for-user.html awscli-1.10.1/awscli/examples/iam/update-user.rst 0000666 4542626 0000144 00000000611 12652514124 022764 0 ustar pysdk-ci amazon 0000000 0000000 **To change an IAM user's name**
The following ``update-user`` command changes the name of the IAM user ``Bob`` to ``Robert``::
aws iam update-user --user-name Bob --new-user-name Robert
For more information, see `Changing a Group's Name or Path`_ in the *Using IAM* guide.
.. _`Changing a Group's Name or Path`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_RenamingGroup.html
awscli-1.10.1/awscli/examples/iam/list-groups-for-user.rst 0000666 4542626 0000144 00000001600 12652514124 024555 0 ustar pysdk-ci amazon 0000000 0000000 **To list the groups that an IAM user belongs to**
The following ``list-groups-for-user`` command displays the groups that the IAM user named ``Bob`` belongs to::
aws iam list-groups-for-user --user-name Bob
Output::
"Groups": [
{
"Path": "/",
"CreateDate": "2013-05-06T01:18:08Z",
"GroupId": "AKIAIOSFODNN7EXAMPLE",
"Arn": "arn:aws:iam::123456789012:group/Admin",
"GroupName": "Admin"
},
{
"Path": "/",
"CreateDate": "2013-05-06T01:37:28Z",
"GroupId": "AKIAI44QH8DHBEXAMPLE",
"Arn": "arn:aws:iam::123456789012:group/s3-Users",
"GroupName": "s3-Users"
}
]
For more information, see `Creating and Listing Groups`_ in the *Using IAM* guide.
.. _`Creating and Listing Groups`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreatingAndListingGroups.html
awscli-1.10.1/awscli/examples/iam/update-open-id-connect-provider-thumbprint.rst 0000666 4542626 0000144 00000001272 12652514124 031016 0 ustar pysdk-ci amazon 0000000 0000000 **To replace the existing list of server certificate thumbprints with a new list**
This example updates the certificate thumbprint list for the OIDC provider whose ARN is
``arn:aws:iam::123456789012:oidc-provider/example.oidcprovider.com`` to use a new thumbprint::
aws iam update-open-id-connect-provider-thumbprint --open-id-connect-provider-arn arn:aws:iam::123456789012:oidc-provider/example.oidcprovider.com --thumbprint-list 7359755EXAMPLEabc3060bce3EXAMPLEec4542a3
For more information, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html awscli-1.10.1/awscli/examples/iam/list-instance-profiles-for-role.rst 0000666 4542626 0000144 00000002157 12652514124 026656 0 ustar pysdk-ci amazon 0000000 0000000 **To list the instance profiles for an IAM role**
The following ``list-instance-profiles-for-role`` command lists the instance profiles that are associated with the role ``Test-Role``::
aws iam list-instance-profiles-for-role --role-name Test-Role
Output::
"InstanceProfiles": [
{
"InstanceProfileId": "AIDGPMS9RO4H3FEXAMPLE",
"Roles": [
{
"AssumeRolePolicyDocument": "",
"RoleId": "AIDACKCEVSQ6C2EXAMPLE",
"CreateDate": "2013-06-07T20:42:15Z",
"RoleName": "Test-Role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/Test-Role"
}
],
"CreateDate": "2013-06-07T21:05:24Z",
"InstanceProfileName": "ExampleInstanceProfile",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile"
}
]
For more information, see `Instance Profiles`_ in the *Using IAM* guide.
.. _`Instance Profiles`: http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html
awscli-1.10.1/awscli/examples/iam/get-account-password-policy.rst 0000666 4542626 0000144 00000001464 12652514124 026103 0 ustar pysdk-ci amazon 0000000 0000000 **To see the current account password policy**
The following ``get-account-password-policy`` command displays details about the password policy for the current account::
aws iam get-account-password-policy
Output::
{
"PasswordPolicy": {
"AllowUsersToChangePassword": false,
"RequireLowercaseCharacters": false,
"RequireUppercaseCharacters": false,
"MinimumPasswordLength": 8,
"RequireNumbers": true,
"RequireSymbols": true
}
}
If no password policy is defined for the account, the command returns a ``NoSuchEntity`` error.
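A policy returned by this command can be checked against a candidate password locally, before calling ``create-login-profile`` or ``change-password``. This sketch (Python; ``satisfies_policy`` is a hypothetical helper covering only the flags shown in the example output) illustrates the idea:

```python
def satisfies_policy(password, policy):
    """Check a candidate password against a get-account-password-policy result."""
    checks = [len(password) >= policy.get("MinimumPasswordLength", 6)]
    if policy.get("RequireNumbers"):
        checks.append(any(c.isdigit() for c in password))
    if policy.get("RequireSymbols"):
        checks.append(any(not c.isalnum() for c in password))
    if policy.get("RequireLowercaseCharacters"):
        checks.append(any(c.islower() for c in password))
    if policy.get("RequireUppercaseCharacters"):
        checks.append(any(c.isupper() for c in password))
    return all(checks)

# The PasswordPolicy object from the example output above.
policy = {
    "AllowUsersToChangePassword": False,
    "RequireLowercaseCharacters": False,
    "RequireUppercaseCharacters": False,
    "MinimumPasswordLength": 8,
    "RequireNumbers": True,
    "RequireSymbols": True,
}
print(satisfies_policy("weak", policy))         # False: too short, no digit or symbol
print(satisfies_policy("s3cr3t!pass", policy))  # True
```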
For more information, see `Managing an IAM Password Policy`_ in the *Using IAM* guide.
.. _`Managing an IAM Password Policy`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
awscli-1.10.1/awscli/examples/iam/put-user-policy.rst 0000666 4542626 0000144 00000001120 12652514124 023603 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a policy to an IAM user**
The following ``put-user-policy`` command attaches a policy to the IAM user named ``Bob``::
aws iam put-user-policy --user-name Bob --policy-name ExamplePolicy --policy-document file://AdminPolicy.json
The policy is defined as a JSON document in the *AdminPolicy.json* file. (The file name and extension do not have significance.)
For more information, see `Adding a New User to Your AWS Account`_ in the *Using IAM* guide.
.. _`Adding a New User to Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html
awscli-1.10.1/awscli/examples/iam/list-access-keys.rst 0000666 4542626 0000144 00000001753 12652514124 023721 0 ustar pysdk-ci amazon 0000000 0000000 **To list the access key IDs for an IAM user**
The following ``list-access-keys`` command lists the access key IDs for the IAM user named ``Bob``::
aws iam list-access-keys --user-name Bob
Output::
"AccessKeyMetadata": [
{
"UserName": "Bob",
"Status": "Active",
"CreateDate": "2013-06-04T18:17:34Z",
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
},
{
"UserName": "Bob",
"Status": "Inactive",
"CreateDate": "2013-06-06T20:42:26Z",
"AccessKeyId": "AKIAI44QH8DHBEXAMPLE"
}
]
You cannot list the secret access keys for IAM users. If the secret access keys are lost, you must create new access keys using the ``create-access-key`` command.
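When rotating keys, a common first step is to pick out the inactive key IDs from the response. A minimal sketch in Python, using the example output above:

```python
import json

# The list-access-keys response from the example above.
response = json.loads("""
{
  "AccessKeyMetadata": [
    {"UserName": "Bob", "Status": "Active",
     "CreateDate": "2013-06-04T18:17:34Z", "AccessKeyId": "AKIAIOSFODNN7EXAMPLE"},
    {"UserName": "Bob", "Status": "Inactive",
     "CreateDate": "2013-06-06T20:42:26Z", "AccessKeyId": "AKIAI44QH8DHBEXAMPLE"}
  ]
}
""")

# Collect the IDs of keys that are no longer active.
inactive = [k["AccessKeyId"] for k in response["AccessKeyMetadata"]
            if k["Status"] == "Inactive"]
print(inactive)  # ['AKIAI44QH8DHBEXAMPLE']
```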
For more information, see `Creating, Modifying, and Viewing User Security Credentials`_ in the *Using IAM* guide.
.. _`Creating, Modifying, and Viewing User Security Credentials`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreateAccessKey.html
awscli-1.10.1/awscli/examples/iam/list-open-id-connect-providers.rst 0000666 4542626 0000144 00000001152 12652514124 026475 0 ustar pysdk-ci amazon 0000000 0000000 **To list information about the OpenID Connect providers in the AWS account**
This example returns a list of ARNS of all the OpenID Connect providers that are defined in the current AWS account::
aws iam list-open-id-connect-providers
Output::
{
"OpenIDConnectProviderList": [
{
"Arn": "arn:aws:iam::123456789012:oidc-provider/example.oidcprovider.com"
}
]
}
For more information, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html awscli-1.10.1/awscli/examples/iam/change-password.rst 0000666 4542626 0000144 00000003151 12652514124 023615 0 ustar pysdk-ci amazon 0000000 0000000 **To change the password for your IAM user**
To change the password for your IAM user, we recommend using the ``--cli-input-json`` parameter to pass a JSON file that contains your old and new passwords. Using this method, you can use strong passwords with non-alphanumeric characters. It can be difficult to use passwords with non-alphanumeric characters when you pass them as command line parameters. To use the ``--cli-input-json`` parameter, start by using the ``change-password`` command with the ``--generate-cli-skeleton`` parameter, as in the following example::
aws iam change-password --generate-cli-skeleton > change-password.json
The previous command creates a JSON file called change-password.json that you can use to fill in your old and new passwords. For example, the file might look like this::
{
"OldPassword": "3s0K_;xh4~8XXI",
"NewPassword": "]35d/{pB9Fo9wJ"
}
Next, to change your password, use the ``change-password`` command again, this time passing the ``--cli-input-json`` parameter to specify your JSON file. The following ``change-password`` command uses the ``--cli-input-json`` parameter with a JSON file called change-password.json::
aws iam change-password --cli-input-json file://change-password.json
This command can be called by IAM users only. If this command is called using AWS account (root) credentials, the command returns an ``InvalidUserType`` error.
For more information, see `How IAM Users Change Their Own Password`_ in the *Using IAM* guide.
.. _`How IAM Users Change Their Own Password`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingUserPwdSelf.html awscli-1.10.1/awscli/examples/iam/delete-role-policy.rst 0000666 4542626 0000144 00000000631 12652514124 024226 0 ustar pysdk-ci amazon 0000000 0000000 **To remove a policy from an IAM role**
The following ``delete-role-policy`` command removes the policy named ``ExamplePolicy`` from the role named ``Test-Role``::
aws iam delete-role-policy --role-name Test-Role --policy-name ExamplePolicy
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
awscli-1.10.1/awscli/examples/iam/create-account-alias.rst 0000666 4542626 0000144 00000000612 12652514124 024513 0 ustar pysdk-ci amazon 0000000 0000000 **To create an account alias**
The following ``create-account-alias`` command creates the alias ``examplecorp`` for your AWS account::
aws iam create-account-alias --account-alias examplecorp
For more information, see `Your AWS Account ID and Its Alias`_ in the *Using IAM* guide.
.. _`Your AWS Account ID and Its Alias`: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html
awscli-1.10.1/awscli/examples/iam/get-group-policy.rst 0000666 4542626 0000144 00000001555 12652514124 023744 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about a policy attached to an IAM group**
The following ``get-group-policy`` command gets information about the specified policy attached to the group named ``Test-Group``::
aws iam get-group-policy --group-name Test-Group --policy-name S3-ReadOnly-Policy
Output::
{
"GroupName": "Test-Group",
"PolicyDocument": {
"Statement": [
{
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "*",
"Effect": "Allow"
}
]
},
"PolicyName": "S3-ReadOnly-Policy"
}
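IAM actions in a policy may contain wildcards such as ``s3:Get*``. As a rough illustration of how such patterns match action names, the following Python sketch applies shell-style matching to the policy document shown above (real IAM evaluation also weighs Deny statements, resources, and conditions):

```python
from fnmatch import fnmatch

# The PolicyDocument from the get-group-policy output above.
policy_document = {
    "Statement": [
        {
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": "*",
            "Effect": "Allow",
        }
    ]
}

def allows(document, action):
    """Return True if any Allow statement's Action patterns match `action`."""
    for statement in document["Statement"]:
        if statement["Effect"] != "Allow":
            continue
        actions = statement["Action"]
        if isinstance(actions, str):  # Action may be a string or a list
            actions = [actions]
        if any(fnmatch(action, pattern) for pattern in actions):
            return True
    return False

print(allows(policy_document, "s3:GetObject"))  # True
print(allows(policy_document, "s3:PutObject"))  # False
```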
For more information, see `Managing IAM Policies`_ in the *Using IAM* guide.
.. _`Managing IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingPolicies.html
awscli-1.10.1/awscli/examples/iam/create-access-key.rst 0000666 4542626 0000144 00000001512 12652514124 024017 0 ustar pysdk-ci amazon 0000000 0000000 **To create an access key for an IAM user**
The following ``create-access-key`` command creates an access key (access key ID and secret access key) for the IAM user named ``Bob``::
aws iam create-access-key --user-name Bob
Output::
{
"AccessKey": {
"UserName": "Bob",
"Status": "Active",
"CreateDate": "2015-03-09T18:39:23.411Z",
"SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY",
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
}
}
Store the secret access key in a secure location. If it is lost, it cannot be recovered, and you must create a new access key.
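One conventional place to keep the new key pair is a named profile in the shared credentials file (``~/.aws/credentials``). This Python sketch only formats the example values into that INI layout in memory; actually writing the file, with restrictive permissions, is left to the reader:

```python
import configparser
import io

# Values from the create-access-key output above.
access_key_id = "AKIAIOSFODNN7EXAMPLE"
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY"

# Build a profile section in the INI format the CLI reads
# from the shared credentials file.
config = configparser.ConfigParser()
config["bob"] = {
    "aws_access_key_id": access_key_id,
    "aws_secret_access_key": secret_access_key,
}
buffer = io.StringIO()
config.write(buffer)
print(buffer.getvalue())
```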
For more information, see `Managing Access Keys for IAM Users`_ in the *Using IAM* guide.
.. _`Managing Access Keys for IAM Users`: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html awscli-1.10.1/awscli/examples/iam/get-user-policy.rst 0000666 4542626 0000144 00000001551 12652514124 023562 0 ustar pysdk-ci amazon 0000000 0000000 **To list policy details for an IAM user**
The following ``get-user-policy`` command lists the details of the specified policy that is attached to the IAM user named ``Bob``::
aws iam get-user-policy --user-name Bob --policy-name ExamplePolicy
Output::
{
"UserName": "Bob",
"PolicyName": "ExamplePolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "*",
"Resource": "*",
"Effect": "Allow"
}
]
}
}
To get a list of policies for an IAM user, use the ``list-user-policies`` command.
For more information, see `Adding a New User to Your AWS Account`_ in the *Using IAM* guide.
.. _`Adding a New User to Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html
awscli-1.10.1/awscli/examples/iam/attach-user-policy.rst 0000666 4542626 0000144 00000000776 12652514124 024257 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a managed policy to an IAM user**
The following ``attach-user-policy`` command attaches the AWS managed policy named ``AdministratorAccess`` to the IAM user named ``Alice``::
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name Alice
For more information, see `Managed Policies and Inline Policies`_ in the *Using IAM* guide.
.. _`Managed Policies and Inline Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html awscli-1.10.1/awscli/examples/iam/remove-client-id-from-open-id-connect-provider.rst 0000666 4542626 0000144 00000001327 12652514124 031447 0 ustar pysdk-ci amazon 0000000 0000000 **To remove the specified client ID from the list of client IDs registered for the specified IAM OpenID Connect provider**
This example removes the client ID ``My-TestApp-3`` from the list of client IDs associated with the IAM OIDC provider whose
ARN is ``arn:aws:iam::123456789012:oidc-provider/example.oidcprovider.com``::
aws iam remove-client-id-from-open-id-connect-provider --client-id My-TestApp-3 --open-id-connect-provider-arn arn:aws:iam::123456789012:oidc-provider/example.oidcprovider.com
For more information, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html awscli-1.10.1/awscli/examples/iam/set-default-policy-version.rst 0000666 4542626 0000144 00000001017 12652514124 025724 0 ustar pysdk-ci amazon 0000000 0000000 **To set the specified version of the specified policy as the policy's default version.**
This example sets the ``v2`` version of the policy whose ARN is ``arn:aws:iam::123456789012:policy/MyPolicy`` as the default active version::
aws iam set-default-policy-version --policy-arn arn:aws:iam::123456789012:policy/MyPolicy --version-id v2
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/delete-instance-profile.rst 0000666 4542626 0000144 00000000623 12652514124 025233 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an instance profile**
The following ``delete-instance-profile`` command deletes the instance profile named ``ExampleInstanceProfile``::
aws iam delete-instance-profile --instance-profile-name ExampleInstanceProfile
For more information, see `Instance Profiles`_ in the *Using IAM* guide.
.. _`Instance Profiles`: http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html
awscli-1.10.1/awscli/examples/iam/list-group-policies.rst 0000666 4542626 0000144 00000001062 12652514124 024441 0 ustar pysdk-ci amazon 0000000 0000000 **To list all inline policies that are attached to the specified group**
The following ``list-group-policies`` command lists the names of inline policies that are attached to the IAM group named
``Admins`` in the current account::
aws iam list-group-policies --group-name Admins
Output::
{
"PolicyNames": [
"AdminRoot",
"ExamplepPolicy"
]
}
For more information, see `Managing IAM Policies`_ in the *Using IAM* guide.
.. _`Managing IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingPolicies.html
awscli-1.10.1/awscli/examples/iam/delete-open-id-connect-provider.rst 0000666 4542626 0000144 00000001016 12652514124 026600 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an IAM OpenID Connect identity provider**
This example deletes the IAM OIDC provider that connects to the provider ``example.oidcprovider.com``::
aws iam delete-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam::123456789012:oidc-provider/example.oidcprovider.com
For more information, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html awscli-1.10.1/awscli/examples/iam/update-assume-role-policy.rst 0000666 4542626 0000144 00000001306 12652514124 025541 0 ustar pysdk-ci amazon 0000000 0000000 **To update the trust policy for an IAM role**
The following ``update-assume-role-policy`` command updates the trust policy for the role named ``Test-Role``::
aws iam update-assume-role-policy --role-name Test-Role --policy-document file://Test-Role-Trust-Policy.json
The trust policy is defined as a JSON document in the *Test-Role-Trust-Policy.json* file. (The file name and extension
do not have significance.) The trust policy must specify a principal.
To update the permissions policy for a role, use the ``put-role-policy`` command.
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
awscli-1.10.1/awscli/examples/iam/put-group-policy.rst 0000666 4542626 0000144 00000001044 12652514124 023766 0 ustar pysdk-ci amazon 0000000 0000000 **To add a policy to a group**
The following ``put-group-policy`` command adds a policy to the IAM group named ``Admins``::
aws iam put-group-policy --group-name Admins --policy-document file://AdminPolicy.json --policy-name AdminRoot
The policy is defined as a JSON document in the *AdminPolicy.json* file. (The file name and extension do not have
significance.)
For more information, see `Managing IAM Policies`_ in the *Using IAM* guide.
.. _`Managing IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingPolicies.html
awscli-1.10.1/awscli/examples/iam/attach-group-policy.rst 0000666 4542626 0000144 00000000775 12652514124 024434 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a managed policy to an IAM group**
The following ``attach-group-policy`` command attaches the AWS managed policy named ``ReadOnlyAccess`` to the IAM group named ``Finance``::
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess --group-name Finance
For more information, see `Managed Policies and Inline Policies`_ in the *Using IAM* guide.
.. _`Managed Policies and Inline Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html awscli-1.10.1/awscli/examples/iam/add-role-to-instance-profile.rst 0000666 4542626 0000144 00000001212 12652514124 026073 0 ustar pysdk-ci amazon 0000000 0000000 **To add a role to an instance profile**
The following ``add-role-to-instance-profile`` command adds the role named ``S3Access`` to the instance profile named ``Webserver``::
aws iam add-role-to-instance-profile --role-name S3Access --instance-profile-name Webserver
To create an instance profile, use the ``create-instance-profile`` command.
For more information, see `Using IAM Roles to Delegate Permissions to Applications that Run on Amazon EC2`_ in the *Using IAM* guide.
.. _`Using IAM Roles to Delegate Permissions to Applications that Run on Amazon EC2`: http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingrole-ec2instance.html awscli-1.10.1/awscli/examples/iam/list-signing-certificates.rst 0000666 4542626 0000144 00000001424 12652514124 025603 0 ustar pysdk-ci amazon 0000000 0000000 **To list the signing certificates for an IAM user**
The following ``list-signing-certificates`` command lists the signing certificates for the IAM user named ``Bob``::
aws iam list-signing-certificates --user-name Bob
Output::
{
"Certificates: "[
{
"UserName": "Bob",
"Status": "Inactive",
"CertificateBody": "-----BEGIN CERTIFICATE----------END CERTIFICATE-----",
"CertificateId": "TA7SMP42TDN5Z26OBPJE7EXAMPLE",
"UploadDate": "2013-06-06T21:40:08Z"
}
]
}
For more information, see `Creating and Uploading a User Signing Certificate`_ in the *Using IAM* guide.
.. _`Creating and Uploading a User Signing Certificate`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_UploadCertificate.html
awscli-1.10.1/awscli/examples/iam/delete-role.rst 0000666 4542626 0000144 00000001172 12652514124 022732 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an IAM role**
The following ``delete-role`` command removes the role named ``Test-Role``::
aws iam delete-role --role-name Test-Role
Before you can delete a role, you must remove the role from any instance profile (``remove-role-from-instance-profile``) and delete any policies that are attached to the role (``delete-role-policy``).
For more information, see `Creating a Role`_ and `Instance Profiles`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
.. _Instance Profiles: http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html
awscli-1.10.1/awscli/examples/iam/create-role.rst 0000666 4542626 0000144 00000001725 12652514124 022737 0 ustar pysdk-ci amazon 0000000 0000000 **To create an IAM role**
The following ``create-role`` command creates a role named ``Test-Role`` and attaches a trust policy to it::
aws iam create-role --role-name Test-Role --assume-role-policy-document file://Test-Role-Trust-Policy.json
Output::
{
"Role": {
"AssumeRolePolicyDocument": "",
"RoleId": "AKIAIOSFODNN7EXAMPLE",
"CreateDate": "2013-06-07T20:43:32.821Z",
"RoleName": "Test-Role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/Test-Role"
}
}
The trust policy is defined as a JSON document in the *Test-Role-Trust-Policy.json* file. (The file name and extension do not have significance.) The trust policy must specify a principal.
To attach a permissions policy to a role, use the ``put-role-policy`` command.
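For reference, a minimal trust policy that lets Amazon EC2 assume the role can be generated as follows (a Python sketch; the document shape matches the ``AssumeRolePolicyDocument`` that IAM returns for instance roles):

```python
import json

# A minimal trust policy allowing the EC2 service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Serialize to the JSON text you would save as Test-Role-Trust-Policy.json.
print(json.dumps(trust_policy, indent=4))
```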
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
awscli-1.10.1/awscli/examples/iam/update-login-profile.rst 0000666 4542626 0000144 00000001577 12652514124 024570 0 ustar pysdk-ci amazon 0000000 0000000 **To update the password for an IAM user**
The following ``update-login-profile`` command creates a new password for the IAM user named ``Bob``::
aws iam update-login-profile --user-name Bob --password <password>
To set a password policy for the account, use the ``update-account-password-policy`` command. If the new password
violates the account password policy, the command returns a ``PasswordPolicyViolation`` error.
If the account password policy allows them to, IAM users can change their own passwords using the ``change-password`` command.
Store the password in a secure place. If the password is lost, it cannot be recovered, and you must create a new one using the ``create-login-profile`` command.
For more information, see `Managing Passwords`_ in the *Using IAM* guide.
.. _`Managing Passwords`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html
awscli-1.10.1/awscli/examples/iam/detach-user-policy.rst 0000666 4542626 0000144 00000000706 12652514124 024234 0 ustar pysdk-ci amazon 0000000 0000000 **To detach a policy from a user**
This example removes the managed policy with the ARN ``arn:aws:iam::123456789012:policy/TesterPolicy`` from the user ``Bob``::
aws iam detach-user-policy --user-name Bob --policy-arn arn:aws:iam::123456789012:policy/TesterPolicy
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html awscli-1.10.1/awscli/examples/iam/attach-role-policy.rst 0000666 4542626 0000144 00000001002 12652514124 024221 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a managed policy to an IAM role**
The following ``attach-role-policy`` command attaches the AWS managed policy named ``ReadOnlyAccess`` to the IAM role named ``ReadOnlyRole``::
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess --role-name ReadOnlyRole
For more information, see `Managed Policies and Inline Policies`_ in the *Using IAM* guide.
.. _`Managed Policies and Inline Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html awscli-1.10.1/awscli/examples/iam/list-mfa-devices.rst 0000666 4542626 0000144 00000001167 12652514124 023671 0 ustar pysdk-ci amazon 0000000 0000000 **To list all MFA devices for a specified user**
This example returns details about the MFA device assigned to the IAM user ``Bob``::
aws iam list-mfa-devices --user-name Bob
Output::
{
"MFADevices": [
{
"UserName": "Bob",
"SerialNumber": "arn:aws:iam::123456789012:mfa/BobsMFADevice",
"EnablDate": "2015-06-16T22:36:37Z"
}
]
}
For more information, see `Using Multi-Factor Authentication (MFA) Devices with AWS`_ in the *Using IAM* guide.
.. _`Using Multi-Factor Authentication (MFA) Devices with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingMFA.html awscli-1.10.1/awscli/examples/iam/get-login-profile.rst 0000666 4542626 0000144 00000001573 12652514124 024061 0 ustar pysdk-ci amazon 0000000 0000000 **To get password information for an IAM user**
The following ``get-login-profile`` command gets information about the password for the IAM user named ``Bob``::
aws iam get-login-profile --user-name Bob
Output::
{
"LoginProfile": {
"UserName": "Bob",
"CreateDate": "2012-09-21T23:03:39Z"
}
}
The ``get-login-profile`` command can be used to verify that an IAM user has a password. The command returns a ``NoSuchEntity``
error if no password is defined for the user.
You cannot recover a password using this command. If the password is lost, you must delete the login profile (``delete-login-profile``) for the user and then create a new one (``create-login-profile``).
For more information, see `Managing Passwords`_ in the *Using IAM* guide.
.. _`Managing Passwords`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html
awscli-1.10.1/awscli/examples/iam/list-instance-profiles.rst 0000666 4542626 0000144 00000004442 12652514124 025132 0 ustar pysdk-ci amazon 0000000 0000000 **To list the instance profiles for the account**
The following ``list-instance-profiles`` command lists the instance profiles that are associated with the current account::
aws iam list-instance-profiles
Output::
{
"InstanceProfiles": [
{
"InstanceProfileId": "AIPAIXEU4NUHUPEXAMPLE",
"Roles": [
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
},
"RoleId": "AROAJ52OTH4H7LEXAMPLE",
"CreateDate": "2013-05-11T00:02:27Z",
"RoleName": "example-role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/example-role"
}
],
"CreateDate": "2013-05-11T00:02:27Z",
"InstanceProfileName": "ExampleInstanceProfile",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile"
},
{
"InstanceProfileId": "AIPAJVJVNRIQFREXAMPLE",
"Roles": [
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
},
"RoleId": "AROAINUBC5O7XLEXAMPLE",
"CreateDate": "2013-01-09T06:33:26Z",
"RoleName": "s3-test-role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/s3-test-role"
}
],
"CreateDate": "2013-06-12T23:52:02Z",
"InstanceProfileName": "ExampleInstanceProfile2",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile2"
            }
]
}
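Because each instance profile embeds its roles, mapping profile names to role names takes one pass over the response. A Python sketch, using a version of the output above trimmed to the fields the mapping needs:

```python
# The list-instance-profiles response, trimmed to the relevant fields.
response = {
    "InstanceProfiles": [
        {"InstanceProfileName": "ExampleInstanceProfile",
         "Roles": [{"RoleName": "example-role"}]},
        {"InstanceProfileName": "ExampleInstanceProfile2",
         "Roles": [{"RoleName": "s3-test-role"}]},
    ]
}

# Map each instance profile to the names of the roles it contains.
roles_by_profile = {
    profile["InstanceProfileName"]: [r["RoleName"] for r in profile["Roles"]]
    for profile in response["InstanceProfiles"]
}
print(roles_by_profile)
```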
For more information, see `Instance Profiles`_ in the *Using IAM* guide.
.. _`Instance Profiles`: http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html
awscli-1.10.1/awscli/examples/iam/get-credential-report.rst 0000666 4542626 0000144 00000000753 12652514124 024735 0 ustar pysdk-ci amazon 0000000 0000000 **To get a credential report**
This example retrieves the credential report for the AWS account::
aws iam get-credential-report
Output::
{
"GeneratedTime": "2015-06-17T19:11:50Z",
"ReportFormat": "text/csv"
}
For more information, see `Getting Credential Reports for Your AWS Account`_ in the *Using IAM* guide.
.. _`Getting Credential Reports for Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/credential-reports.html
**To list account aliases**
The following ``list-account-aliases`` command lists the aliases for the current account::
aws iam list-account-aliases
Output::
"AccountAliases": [
"mycompany"
]
For more information, see `Using an Alias for Your AWS Account ID`_ in the *Using IAM* guide.
.. _`Using an Alias for Your AWS Account ID`: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html
**To create an IAM group**
The following ``create-group`` command creates an IAM group named ``Admins``::
aws iam create-group --group-name Admins
Output::
{
"Group": {
"Path": "/",
"CreateDate": "2015-03-09T20:30:24.940Z",
"GroupId": "AIDGPMS9RO4H3FEXAMPLE",
"Arn": "arn:aws:iam::123456789012:group/Admins",
"GroupName": "Admins"
}
}
For more information, see `Creating IAM Groups`_ in the *Using IAM* guide.
.. _`Creating IAM Groups`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreatingAndListingGroups.html
**To get information about a policy attached to an IAM role**
The following ``get-role-policy`` command gets information about the specified policy attached to the role named ``Test-Role``::
aws iam get-role-policy --role-name Test-Role --policy-name ExamplePolicy
Output::
{
"RoleName": "Test-Role",
"PolicyDocument": {
"Statement": [
{
"Action": [
"s3:ListBucket",
"s3:Put*",
"s3:Get*",
"s3:*MultipartUpload*"
],
"Resource": "*",
"Effect": "Allow",
"Sid": "1"
}
]
},
"PolicyName": "ExamplePolicy"
}
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
**To list all managed policies that are attached to the specified user**
This command returns the names and ARNs of the managed policies for the IAM user named ``Bob`` in the AWS account::
aws iam list-attached-user-policies --user-name Bob
Output::
{
"AttachedPolicies": [
{
"PolicyName": "AdministratorAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AdministratorAccess"
},
{
"PolicyName": "SecurityAudit",
"PolicyArn": "arn:aws:iam::aws:policy/SecurityAudit"
}
],
"IsTruncated": false
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html
**To list information about the versions of the specified managed policy**
This example returns the list of available versions of the policy whose ARN is ``arn:aws:iam::123456789012:policy/MySamplePolicy``::
aws iam list-policy-versions --policy-arn arn:aws:iam::123456789012:policy/MySamplePolicy
Output::
{
"IsTruncated": false,
"Versions": [
{
"CreateDate": "2015-06-02T23:19:44Z",
"VersionId": "v2",
"IsDefaultVersion": true
},
{
"CreateDate": "2015-06-02T22:30:47Z",
"VersionId": "v1",
"IsDefaultVersion": false
}
]
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html
**To create an IAM user**
The following ``create-user`` command creates an IAM user named ``Bob`` in the current account::
aws iam create-user --user-name Bob
Output::
{
"User": {
"UserName": "Bob",
"Path": "/",
"CreateDate": "2013-06-08T03:20:41.270Z",
"UserId": "AKIAIOSFODNN7EXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Bob"
}
}
For more information, see `Adding a New User to Your AWS Account`_ in the *Using IAM* guide.
.. _`Adding a New User to Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html
**To list IAM users**
The following ``list-users`` command lists the IAM users in the current account::
aws iam list-users
Output::
"Users": [
{
"UserName": "Adele",
"Path": "/",
"CreateDate": "2013-03-07T05:14:48Z",
"UserId": "AKIAI44QH8DHBEXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Adele"
},
{
"UserName": "Bob",
"Path": "/",
"CreateDate": "2012-09-21T23:03:13Z",
"UserId": "AKIAIOSFODNN7EXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Bob"
}
]
For more information, see `Listing Users`_ in the *Using IAM* guide.
.. _`Listing Users`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_GetListOfUsers.html
**To get information about IAM entity usage and IAM quotas in the current account**
The following ``get-account-summary`` command returns information about the current IAM entity usage and current IAM entity quotas in the account::
aws iam get-account-summary
Output::
{
"SummaryMap": {
"UsersQuota": 5000,
"GroupsQuota": 100,
"InstanceProfiles": 6,
"SigningCertificatesPerUserQuota": 2,
"AccountAccessKeysPresent": 0,
"RolesQuota": 250,
"RolePolicySizeQuota": 10240,
"AccountSigningCertificatesPresent": 0,
"Users": 27,
"ServerCertificatesQuota": 20,
"ServerCertificates": 0,
"AssumeRolePolicySizeQuota": 2048,
"Groups": 7,
"MFADevicesInUse": 1,
"Roles": 3,
"AccountMFAEnabled": 1,
"MFADevices": 3,
"GroupsPerUserQuota": 10,
"GroupPolicySizeQuota": 5120,
"InstanceProfilesQuota": 100,
"AccessKeysPerUserQuota": 2,
"Providers": 0,
"UserPolicySizeQuota": 2048
}
}
For more information about entity limitations, see `Limitations on IAM Entities`_ in the *Using IAM* guide.
.. _`Limitations on IAM Entities`: http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html
**To rename an IAM group**
The following ``update-group`` command changes the name of the IAM group ``Test`` to ``Test-1``::
aws iam update-group --group-name Test --new-group-name Test-1
For more information, see `Changing a Group's Name or Path`_ in the *Using IAM* guide.
.. _`Changing a Group's Name or Path`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_RenamingGroup.html
**To retrieve information about when the specified access key was last used**
The following example retrieves information about when the access key ``ABCDEXAMPLE`` was last used::
aws iam get-access-key-last-used --access-key-id ABCDEXAMPLE
Output::
{
"UserName": "Bob", {
"AccessKeyLastUsed":
"Region": "us-east-1",
"ServiceName": "iam",
"LastUsedDate": "2015-06-16T22:45:00Z"
}
}
For more information, see `Managing Access Keys for IAM Users`_ in the *Using IAM* guide.
.. _`Managing Access Keys for IAM Users`: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html
**To list the SAML providers in the AWS account**
This example retrieves the list of SAML 2.0 providers created in the current AWS account::
aws iam list-saml-providers
Output::
{
"SAMLProviderList": [
{
"CreateDate": "2015-06-05T22:45:14Z",
"ValidUntil": "2015-06-05T22:45:14Z",
"Arn": "arn:aws:iam::123456789012:saml-provider/SAMLADFS"
}
]
}
For more information, see `Using SAML Providers`_ in the *Using IAM* guide.
.. _`Using SAML Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-saml.html
**To list all users, groups, and roles that the specified managed policy is attached to**
This example returns the IAM groups, roles, and users that the policy ``arn:aws:iam::123456789012:policy/TestPolicy`` is attached to::
aws iam list-entities-for-policy --policy-arn arn:aws:iam::123456789012:policy/TestPolicy
Output::
{
"PolicyGroups": [
{
"GroupName": "Admins"
}
],
"PolicyUsers": [
{
"UserName": "Bob"
}
],
"PolicyRoles": [
{
"RoleName": "testRole"
}
],
"IsTruncated": false
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html
**To upload a signing certificate for an IAM user**
The following ``upload-signing-certificate`` command uploads a signing certificate for the IAM user named ``Bob``::
aws iam upload-signing-certificate --user-name Bob --certificate-body file://certificate.pem
Output::
{
"Certificate": {
"UserName": "Bob",
"Status": "Active",
"CertificateBody": "-----BEGIN CERTIFICATE----------END CERTIFICATE-----",
"CertificateId": "TA7SMP42TDN5Z26OBPJE7EXAMPLE",
"UploadDate": "2013-06-06T21:40:08.121Z"
}
}
The certificate is in a file named *certificate.pem* in PEM format.
For more information, see `Creating and Uploading a User Signing Certificate`_ in the *Using IAM* guide.
.. _`Creating and Uploading a User Signing Certificate`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_UploadCertificate.html
**To get information about an IAM user**
The following ``get-user`` command gets information about the IAM user named ``Bob``::
aws iam get-user --user-name Bob
Output::
{
"User": {
"UserName": "Bob",
"Path": "/",
"CreateDate": "2012-09-21T23:03:13Z",
"UserId": "AKIAIOSFODNN7EXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Bob"
}
}
For more information, see `Listing Users`_ in the *Using IAM* guide.
.. _`Listing Users`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_GetListOfUsers.html
**To return information about the specified OpenID Connect provider**
This example returns details about the OpenID Connect provider whose ARN is ``arn:aws:iam::123456789012:oidc-provider/server.example.com``::
aws iam get-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam::123456789012:oidc-provider/server.example.com
Output::
{
"Url": "server.example.com"
"CreateDate": "2015-06-16T19:41:48Z",
"ThumbprintList": [
"12345abcdefghijk67890lmnopqrst987example"
],
"ClientIDList": [
"example-application-ID"
]
}
For more information, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html
**To list managed policies that are available to your AWS account**
This example returns a collection of the first two managed policies available in the current AWS account::
aws iam list-policies --max-items 2
Output::
{
"Marker": "AAIWFnoA2MQ9zN9nnTorukxr1uesDIDa4u+q1mEfaurCDZ1AuCYagYfayKYGvu75BEGk8PooPsw5uvumkuizFACZ8f4rKtN1RuBWiVDBWet2OA==",
"IsTruncated": true,
"Policies": [
{
"PolicyName": "AdministratorAccess",
"CreateDate": "2015-02-06T18:39:46Z",
"AttachmentCount": 5,
"IsAttachable": true,
"PolicyId": "ANPAIWMBCKSKIEE64ZLYK",
"DefaultVersionId": "v1",
"Path": "/",
"Arn": "arn:aws:iam::aws:policy/AdministratorAccess",
"UpdateDate": "2015-02-06T18:39:46Z"
},
{
"PolicyName": "ASamplePolicy",
"CreateDate": "2015-06-17T19:23;32Z",
"AttachmentCount": "0",
"IsAttachable": "true",
"PolicyId": "Z27SI6FQMGNQ2EXAMPLE1",
"DefaultVersionId": "v1",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:policy/ASamplePolicy",
"UpdateDate": "2015-06-17T19:23:32Z"
}
]
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html
**To create a customer managed policy**
The following command creates a customer managed policy named ``my-policy``::
aws iam create-policy --policy-name my-policy --policy-document file://policy
Output::
{
"Policy": {
"PolicyName": "my-policy",
"CreateDate": "2015-06-01T19:31:18.620Z",
"AttachmentCount": 0,
"IsAttachable": true,
"PolicyId": "ZXR6A36LTYANPAI7NJ5UV",
"DefaultVersionId": "v1",
"Path": "/",
"Arn": "arn:aws:iam::0123456789012:policy/my-policy",
"UpdateDate": "2015-06-01T19:31:18.620Z"
}
}
The file ``policy`` is a JSON document in the current folder that grants read-only access to the ``shared`` folder in an Amazon S3 bucket named ``my-bucket``::
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::my-bucket/shared/*"
]
}
]
}
For more information on using files as input for string parameters, see `Specifying Parameter Values`_ in the *AWS CLI User Guide*.
.. _`Specifying Parameter Values`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html
**To list the policies attached to an IAM role**
The following ``list-role-policies`` command lists the names of the permissions policies for the specified IAM role::
aws iam list-role-policies --role-name Test-Role
Output::
"PolicyNames": [
"ExamplePolicy"
]
To see the trust policy attached to a role, use the ``get-role`` command. To see the details of a permissions policy, use the ``get-role-policy`` command.
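For example, the following command retrieves the trust policy and other details for the ``Test-Role`` role used above::

    aws iam get-role --role-name Test-Role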
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
awscli-1.10.1/awscli/examples/iam/create-open-id-connect-provider.rst 0000666 4542626 0000144 00000004545 12652514124 026613 0 ustar pysdk-ci amazon 0000000 0000000 **To create an OpenID Connect (OIDC) provider**
To create an OpenID Connect (OIDC) provider, we recommend using the ``--cli-input-json`` parameter to pass a JSON file that contains the required parameters. When you create an OIDC provider, you must pass the URL of the provider, and the URL must begin with ``https://``. It can be difficult to pass the URL as a command line parameter, because the colon (:) and forward slash (/) characters have special meaning in some command line environments. Using the ``--cli-input-json`` parameter gets around this limitation.
To use the ``--cli-input-json`` parameter, start by using the ``create-open-id-connect-provider`` command with the ``--generate-cli-skeleton`` parameter, as in the following example::
aws iam create-open-id-connect-provider --generate-cli-skeleton > create-open-id-connect-provider.json
The previous command creates a JSON file called ``create-open-id-connect-provider.json`` that you can use to fill in the information for a subsequent ``create-open-id-connect-provider`` command. For example::
{
"Url": "https://server.example.com",
"ClientIDList": [
"example-application-ID"
],
"ThumbprintList": [
"c3768084dfb3d2b68b7897bf5f565da8eEXAMPLE"
]
}
Next, to create the OpenID Connect (OIDC) provider, use the ``create-open-id-connect-provider`` command again, this time passing the ``--cli-input-json`` parameter to specify your JSON file. The following ``create-open-id-connect-provider`` command uses the ``--cli-input-json`` parameter with a JSON file called ``create-open-id-connect-provider.json``::
aws iam create-open-id-connect-provider --cli-input-json file://create-open-id-connect-provider.json
Output::
{
"OpenIDConnectProviderArn": "arn:aws:iam::123456789012:oidc-provider/server.example.com"
}
For more information about OIDC providers, see `Using OpenID Connect Identity Providers`_ in the *Using IAM* guide.
For more information about obtaining thumbprints for an OIDC provider, see `Obtaining the Thumbprint for an OpenID Connect Provider`_ in the *Using IAM* guide.
.. _`Using OpenID Connect Identity Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc.html
.. _`Obtaining the Thumbprint for an OpenID Connect Provider`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc-obtain-thumbprint.html
**To remove a virtual MFA device**
The following ``delete-virtual-mfa-device`` command removes the specified MFA device from the current account::
aws iam delete-virtual-mfa-device --serial-number arn:aws:iam::123456789012:mfa/MFATest
For more information, see `Using a Virtual MFA Device with AWS`_ in the *Using IAM* guide.
.. _`Using a Virtual MFA Device with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html
**To delete the current account password policy**
The following ``delete-account-password-policy`` command removes the password policy for the current account::
aws iam delete-account-password-policy
For more information, see `Managing an IAM Password Policy`_ in the *Using IAM* guide.
.. _`Managing an IAM Password Policy`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
**To deactivate an MFA device**
This command deactivates the virtual MFA device with the ARN ``arn:aws:iam::210987654321:mfa/BobsMFADevice`` that is associated with the user ``Bob``::
aws iam deactivate-mfa-device --user-name Bob --serial-number arn:aws:iam::210987654321:mfa/BobsMFADevice
For more information, see `Using Multi-Factor Authentication (MFA) Devices with AWS`_ in the *Using IAM* guide.
.. _`Using Multi-Factor Authentication (MFA) Devices with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingMFA.html
**To enable an MFA device**
After you run the ``create-virtual-mfa-device`` command to create a new virtual MFA device, you can assign the MFA device to a user.
The following example assigns the MFA device with the serial number ``arn:aws:iam::210987654321:mfa/BobsMFADevice`` to the user ``Bob``.
The command also synchronizes the device with AWS by including the first two codes in sequence from the virtual MFA device::
aws iam enable-mfa-device --user-name Bob --serial-number arn:aws:iam::210987654321:mfa/BobsMFADevice --authentication-code-1 123456 --authentication-code-2 789012
For more information, see `Using a Virtual MFA Device with AWS`_ in the *Using IAM* guide.
.. _`Using a Virtual MFA Device with AWS`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html
**To set or change the current account password policy**
The following ``update-account-password-policy`` command sets the password policy to require a minimum length of eight
characters and to require one or more numbers in the password::
aws iam update-account-password-policy --minimum-password-length 8 --require-numbers
Changes to an account's password policy affect any new passwords that are created for IAM users in the account. Password
policy changes do not affect existing passwords.
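To verify the new settings, you can retrieve the current policy with the ``get-account-password-policy`` command::

    aws iam get-account-password-policy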
For more information, see `Setting an Account Password Policy for IAM Users`_ in the *Using IAM* guide.
.. _`Setting an Account Password Policy for IAM Users`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
**To create an instance profile**
The following ``create-instance-profile`` command creates an instance profile named ``Webserver``::
aws iam create-instance-profile --instance-profile-name Webserver
Output::
{
"InstanceProfile": {
"InstanceProfileId": "AIPAJMBYC7DLSPEXAMPLE",
"Roles": [],
"CreateDate": "2015-03-09T20:33:19.626Z",
"InstanceProfileName": "Webserver",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/Webserver"
}
}
To add a role to an instance profile, use the ``add-role-to-instance-profile`` command.
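For example, the following command adds a role to the new instance profile (``Test-Role`` is used here only as an illustrative role name)::

    aws iam add-role-to-instance-profile --instance-profile-name Webserver --role-name Test-Role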
For more information, see `Using IAM Roles to Delegate Permissions to Applications that Run on Amazon EC2`_ in the *Using IAM* guide.
.. _`Using IAM Roles to Delegate Permissions to Applications that Run on Amazon EC2`: http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingrole-ec2instance.html
**To add a user to an IAM group**
The following ``add-user-to-group`` command adds an IAM user named ``Bob`` to the IAM group named ``Admins``::
aws iam add-user-to-group --user-name Bob --group-name Admins
For more information, see `Adding and Removing Users in an IAM Group`_ in the *Using IAM* guide.
.. _`Adding and Removing Users in an IAM Group`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_AddOrRemoveUsersFromGroup.html
**To list IAM roles for the current account**
The following ``list-roles`` command lists IAM roles for the current account::
aws iam list-roles
Output::
{
"Roles": [
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
},
"RoleId": "AROAJ52OTH4H7LEXAMPLE",
"CreateDate": "2013-05-11T00:02:27Z",
"RoleName": "ExampleRole1",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/ExampleRole1"
},
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "elastictranscoder.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
},
"RoleId": "AROAI4QRP7UFT7EXAMPLE",
"CreateDate": "2013-04-18T05:01:58Z",
"RoleName": "emr-access",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/emr-access"
}
]
}
For more information, see `Creating a Role`_ in the *Using IAM* guide.
.. _`Creating a Role`: http://docs.aws.amazon.com/IAM/latest/UserGuide/creating-role.html
**To get information about an instance profile**
The following ``get-instance-profile`` command gets information about the instance profile named ``ExampleInstanceProfile``::
aws iam get-instance-profile --instance-profile-name ExampleInstanceProfile
Output::
{
"InstanceProfile": {
"InstanceProfileId": "AID2MAB8DPLSRHEXAMPLE",
"Roles": [
{
"AssumeRolePolicyDocument": "",
"RoleId": "AIDGPMS9RO4H3FEXAMPLE",
"CreateDate": "2013-01-09T06:33:26Z",
"RoleName": "Test-Role",
"Path": "/",
"Arn": "arn:aws:iam::336924118301:role/Test-Role"
}
],
"CreateDate": "2013-06-12T23:52:02Z",
"InstanceProfileName": "ExampleInstanceProfile",
"Path": "/",
"Arn": "arn:aws:iam::336924118301:instance-profile/ExampleInstanceProfile"
}
}
For more information, see `Instance Profiles`_ in the *Using IAM* guide.
.. _`Instance Profiles`: http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html
**To delete an access key for an IAM user**
The following ``delete-access-key`` command deletes the specified access key (access key ID and secret access key) for the IAM user named ``Bob``::
aws iam delete-access-key --access-key AKIDPMS9RO4H3FEXAMPLE --user-name Bob
To list the access keys defined for an IAM user, use the ``list-access-keys`` command.
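For example::

    aws iam list-access-keys --user-name Bob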
For more information, see `Creating, Modifying, and Viewing User Security Credentials`_ in the *Using IAM* guide.
.. _`Creating, Modifying, and Viewing User Security Credentials`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreateAccessKey.html
**To delete a signing certificate for an IAM user**
The following ``delete-signing-certificate`` command deletes the specified signing certificate for the IAM user named ``Bob``::
aws iam delete-signing-certificate --user-name Bob --certificate-id TA7SMP42TDN5Z26OBPJE7EXAMPLE
To get the ID for a signing certificate, use the ``list-signing-certificates`` command.
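For example::

    aws iam list-signing-certificates --user-name Bob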
For more information, see `Creating and Uploading a User Signing Certificate`_ in the *Using IAM* guide.
.. _`Creating and Uploading a User Signing Certificate`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_UploadCertificate.html
**To delete an IAM user**
The following ``delete-user`` command removes the IAM user named ``Bob`` from the current account::
aws iam delete-user --user-name Bob
For more information, see `Deleting a User from Your AWS Account`_ in the *Using IAM* guide.
.. _`Deleting a User from Your AWS Account`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_DeletingUserFromAccount.html
**To list all managed policies that are attached to the specified group**
This example returns the names and ARNs of the managed policies that are attached to the IAM group named ``Admins`` in the AWS account::
aws iam list-attached-group-policies --group-name Admins
Output::
{
"AttachedPolicies": [
{
"PolicyName": "AdministratorAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AdministratorAccess"
},
{
"PolicyName": "SecurityAudit",
"PolicyArn": "arn:aws:iam::aws:policy/SecurityAudit"
}
],
"IsTruncated": false
}
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html
**To update the metadata document for an existing SAML provider**
This example updates the SAML provider in IAM whose ARN is ``arn:aws:iam::123456789012:saml-provider/SAMLADFS`` with a new SAML metadata document from the file ``SAMLMetaData.xml``::
aws iam update-saml-provider --saml-metadata-document file://SAMLMetaData.xml --saml-provider-arn arn:aws:iam::123456789012:saml-provider/SAMLADFS
Output::
{
"SAMLProviderArn": "arn:aws:iam::123456789012:saml-provider/SAMLADFS"
}
For more information, see `Using SAML Providers`_ in the *Using IAM* guide.
.. _`Using SAML Providers`: http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-saml.html
**To remove a user from an IAM group**
The following ``remove-user-from-group`` command removes the user named ``Bob`` from the IAM group named ``Admins``::
aws iam remove-user-from-group --user-name Bob --group-name Admins
For more information, see `Adding Users to and Removing Users from a Group`_ in the *Using IAM* guide.
.. _`Adding Users to and Removing Users from a Group`: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_AddOrRemoveUsersFromGroup.html
awscli-1.10.1/awscli/examples/iam/get-account-authorization-details.rst 0000666 4542626 0000144 00000021300 12652514124 027256 0 ustar pysdk-ci amazon 0000000 0000000 The following ``get-account-authorization-details`` command returns information about all IAM users, groups, roles, and policies in the AWS account::
aws iam get-account-authorization-details
Output::
{
"RoleDetailList": [
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"RoleId": "AROAFP4BKI7Y7TEXAMPLE",
"CreateDate": "2014-07-30T17:09:20Z",
"InstanceProfileList": [
{
"InstanceProfileId": "AIPAFFYRBHWXW2EXAMPLE",
"Roles": [
{
"AssumeRolePolicyDocument": {
"Version":"2012-10-17",
"Statement": [
{
"Sid":"",
"Effect":"Allow",
"Principal": {
"Service":"ec2.amazonaws.com"
},
"Action":"sts:AssumeRole"
}
]
},
"RoleId": "AROAFP4BKI7Y7TEXAMPLE",
"CreateDate": "2014-07-30T17:09:20Z",
"RoleName": "EC2role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/EC2role"
}
],
"CreateDate": "2014-07-30T17:09:20Z",
"InstanceProfileName": "EC2role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:instance-profile/EC2role"
}
],
"RoleName": "EC2role",
"Path": "/",
"AttachedManagedPolicies": [
{
"PolicyName": "AmazonS3FullAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AmazonS3FullAccess"
},
{
"PolicyName": "AmazonDynamoDBFullAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}
],
"RolePolicyList": [],
"Arn": "arn:aws:iam::123456789012:role/EC2role"
}],
"GroupDetailList": [
{
"GroupId": "AIDACKCEVSQ6C7EXAMPLE",
"AttachedManagedPolicies": {
"PolicyName": "AdministratorAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AdministratorAccess"
},
"GroupName": "Admins",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:group/Admins",
"CreateDate": "2013-10-14T18:32:24Z",
"GroupPolicyList": []
},
{
"GroupId": "AIDACKCEVSQ6C8EXAMPLE",
"AttachedManagedPolicies": {
"PolicyName": "PowerUserAccess",
"PolicyArn": "arn:aws:iam::aws:policy/PowerUserAccess"
},
"GroupName": "Dev",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:group/Dev",
"CreateDate": "2013-10-14T18:33:55Z",
"GroupPolicyList": []
},
{
"GroupId": "AIDACKCEVSQ6C9EXAMPLE",
"AttachedManagedPolicies": [],
"GroupName": "Finance",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:group/Finance",
"CreateDate": "2013-10-14T18:57:48Z",
"GroupPolicyList": [
{
"PolicyName": "policygen-201310141157",
"PolicyDocument": {
"Version":"2012-10-17",
"Statement": [
{
"Action": "aws-portal:*",
"Sid":"Stmt1381777017000",
"Resource": "*",
"Effect":"Allow"
}
]
}
}
]
}],
"UserDetailList": [
{
"UserName": "Alice",
"GroupList": [
"Admins"
],
"CreateDate": "2013-10-14T18:32:24Z",
"UserId": "AIDACKCEVSQ6C2EXAMPLE",
"UserPolicyList": [],
"Path": "/",
"AttachedManagedPolicies": [],
"Arn": "arn:aws:iam::123456789012:user/Alice"
},
{
"UserName": "Bob",
"GroupList": [
"Admins"
],
"CreateDate": "2013-10-14T18:32:25Z",
"UserId": "AIDACKCEVSQ6C3EXAMPLE",
"UserPolicyList": [
{
"PolicyName": "DenyBillingAndIAMPolicy",
"PolicyDocument": {
"Version":"2012-10-17",
"Statement": {
"Effect":"Deny",
"Action": [
"aws-portal:*",
"iam:*"
],
"Resource":"*"
}
}
}
],
"Path": "/",
"AttachedManagedPolicies": [],
"Arn": "arn:aws:iam::123456789012:user/Bob"
},
{
"UserName": "Charlie",
"GroupList": [
"Dev"
],
"CreateDate": "2013-10-14T18:33:56Z",
"UserId": "AIDACKCEVSQ6C4EXAMPLE",
"UserPolicyList": [],
"Path": "/",
"AttachedManagedPolicies": [],
"Arn": "arn:aws:iam::123456789012:user/Charlie"
}],
"Policies": [
{
"PolicyName": "create-update-delete-set-managed-policies",
"CreateDate": "2015-02-06T19:58:34Z",
"AttachmentCount": 1,
"IsAttachable": true,
"PolicyId": "ANPAJ2UCCR6DPCEXAMPLE",
"DefaultVersionId": "v1",
"PolicyVersionList": [
{
"CreateDate": "2015-02-06T19:58:34Z",
"VersionId": "v1",
"Document": {
"Version":"2012-10-17",
"Statement": {
"Effect":"Allow",
"Action": [
"iam:CreatePolicy",
"iam:CreatePolicyVersion",
"iam:DeletePolicy",
"iam:DeletePolicyVersion",
"iam:GetPolicy",
"iam:GetPolicyVersion",
"iam:ListPolicies",
"iam:ListPolicyVersions",
"iam:SetDefaultPolicyVersion"
],
"Resource": "*"
}
},
"IsDefaultVersion": true
}
],
"Path": "/",
"Arn": "arn:aws:iam::123456789012:policy/create-update-delete-set-managed-policies",
"UpdateDate": "2015-02-06T19:58:34Z"
},
{
"PolicyName": "S3-read-only-specific-bucket",
"CreateDate": "2015-01-21T21:39:41Z",
"AttachmentCount": 1,
"IsAttachable": true,
"PolicyId": "ANPAJ4AE5446DAEXAMPLE",
"DefaultVersionId": "v1",
"PolicyVersionList": [
{
"CreateDate": "2015-01-21T21:39:41Z",
"VersionId": "v1",
"Document": {
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::example-bucket",
"arn:aws:s3:::example-bucket/*"
]
}
]
},
"IsDefaultVersion": true
}
],
"Path": "/",
"Arn": "arn:aws:iam::123456789012:policy/S3-read-only-specific-bucket",
"UpdateDate": "2015-01-21T23:39:41Z"
},
{
"PolicyName": "AmazonEC2FullAccess",
"CreateDate": "2015-02-06T18:40:15Z",
"AttachmentCount": 1,
"IsAttachable": true,
"PolicyId": "ANPAE3QWE5YT46TQ34WLG",
"DefaultVersionId": "v1",
"PolicyVersionList": [
{
"CreateDate": "2014-10-30T20:59:46Z",
"VersionId": "v1",
"Document": {
"Version":"2012-10-17",
"Statement": [
{
"Action":"ec2:*",
"Effect":"Allow",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"elasticloadbalancing:*",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"cloudwatch:*",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"autoscaling:*",
"Resource":"*"
}
]
},
"IsDefaultVersion": true
}
],
"Path": "/",
"Arn": "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
"UpdateDate": "2015-02-06T18:40:15Z"
}],
"Marker": "EXAMPLEkakv9BCuUNFDtxWSyfzetYwEx2ADc8dnzfvERF5S6YMvXKx41t6gCl/eeaCX3Jo94/bKqezEAg8TEVS99EKFLxm3jtbpl25FDWEXAMPLE",
"IsTruncated": true
}

awscli-1.10.1/awscli/examples/iam/remove-role-from-instance-profile.rst

**To remove a role from an instance profile**
The following ``remove-role-from-instance-profile`` command removes the role named ``Test-Role`` from the instance
profile named ``ExampleInstanceProfile``::
aws iam remove-role-from-instance-profile --instance-profile-name ExampleInstanceProfile --role-name Test-Role
For more information, see `Instance Profiles`_ in the *Using IAM* guide.
.. _`Instance Profiles`: http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html
awscli-1.10.1/awscli/examples/iam/detach-role-policy.rst

**To detach a policy from a role**
This example removes the managed policy with the ARN ``arn:aws:iam::123456789012:policy/FederatedTesterAccessPolicy`` from the role called ``FedTesterRole``::
aws iam detach-role-policy --role-name FedTesterRole --policy-arn arn:aws:iam::123456789012:policy/FederatedTesterAccessPolicy
For more information, see `Overview of IAM Policies`_ in the *Using IAM* guide.
.. _`Overview of IAM Policies`: http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html

awscli-1.10.1/awscli/examples/iot/create-certificate-from-csr.rst

Create Batches of Certificates from Batches of CSRs
---------------------------------------------------
The following example shows how to create a batch of certificates given a
batch of CSRs. Assuming a set of CSRs are located inside of the
directory ``my-csr-directory``::
$ ls my-csr-directory/
csr.pem csr2.pem
a certificate can be created for each CSR in that directory
using a single command. On Linux and OS X, this command is::
$ ls my-csr-directory/ | xargs -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
This command lists all of the CSRs in ``my-csr-directory`` and
pipes each CSR filename to the ``aws iot create-certificate-from-csr`` AWS CLI
command to create a certificate for the corresponding CSR.
The ``aws iot create-certificate-from-csr`` part of the command can also be
run in parallel to speed up the certificate creation process::
$ ls my-csr-directory/ | xargs -P 10 -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
On Windows PowerShell, the command to create certificates for all CSRs
in ``my-csr-directory`` is::
> ls -Name my-csr-directory | %{aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/$_}
On Windows Command Prompt, the command to create certificates for all CSRs
in ``my-csr-directory`` is::
> forfiles /p my-csr-directory /c "cmd /c aws iot create-certificate-from-csr --certificate-signing-request file://@path"
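To rehearse the fan-out locally without AWS credentials, the ``aws iot create-certificate-from-csr`` call can be replaced with a ``printf`` stub; the directory name is the example's, and the stub files are not real CSRs:

```shell
# Dry run of the fan-out pipeline: printf stands in for the real
# `aws iot create-certificate-from-csr` call, so no AWS access is needed.
mkdir -p my-csr-directory
printf 'stub csr' > my-csr-directory/csr.pem
printf 'stub csr' > my-csr-directory/csr2.pem

# Same pipeline shape as the documented command, one invocation per CSR.
ls my-csr-directory/ | xargs -I {} printf 'would sign file://my-csr-directory/%s\n' {}
```

Swapping the ``printf`` back for the real ``aws iot`` command (and adding ``-P 10`` for parallelism) recovers the documented pipeline.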
awscli-1.10.1/awscli/examples/s3/sync.rst

The following ``sync`` command syncs files in a local directory to objects under a specified prefix and bucket by
uploading the local files to s3. A local file will require uploading if the size of the local file is different from
the size of the s3 object, the last modified time of the local file is newer than the last modified time of the s3
object, or the local file does not exist under the specified bucket and prefix. In this example, the user syncs the
local current directory to the bucket ``mybucket``. The local current directory contains the files ``test.txt`` and
``test2.txt``. The bucket ``mybucket`` contains no objects::
aws s3 sync . s3://mybucket
Output::
upload: test.txt to s3://mybucket/test.txt
upload: test2.txt to s3://mybucket/test2.txt
The following ``sync`` command syncs objects under a specified prefix and bucket to objects under another specified
prefix and bucket by copying s3 objects. An s3 object will require copying if the sizes of the two s3 objects differ,
the last modified time of the source is newer than the last modified time of the destination, or the s3 object does not
exist under the specified bucket and prefix destination. In this example, the user syncs the bucket ``mybucket`` to
the bucket ``mybucket2``. The bucket ``mybucket`` contains the objects ``test.txt`` and ``test2.txt``. The bucket
``mybucket2`` contains no objects::
aws s3 sync s3://mybucket s3://mybucket2
Output::
copy: s3://mybucket/test.txt to s3://mybucket2/test.txt
copy: s3://mybucket/test2.txt to s3://mybucket2/test2.txt
The following ``sync`` command syncs objects under a specified prefix and bucket to files in a local directory by
downloading s3 objects. An s3 object will require downloading if the size of the s3 object differs from the size of the
local file, the last modified time of the s3 object is older than the last modified time of the local file, or the s3
object does not exist in the local directory. Take note that when objects are downloaded from s3, the last modified
time of the local file is changed to the last modified time of the s3 object. In this example, the user syncs the
current local directory to the bucket ``mybucket``. The bucket ``mybucket`` contains the objects ``test.txt`` and
``test2.txt``. The current local directory has no files::
aws s3 sync s3://mybucket .
Output::
download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt
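The transfer rule described above (destination missing, sizes differ, or source newer) can be restated as a small local sketch; ``needs_sync`` is an illustrative helper, not the CLI's actual implementation:

```shell
# Illustrative restatement of the sync rule: transfer when the destination
# is missing, the sizes differ, or the source is newer than the destination.
needs_sync() {
    src=$1 dst=$2
    [ ! -e "$dst" ] && return 0                                  # destination missing
    [ "$(wc -c < "$src")" -ne "$(wc -c < "$dst")" ] && return 0  # sizes differ
    [ "$src" -nt "$dst" ] && return 0                            # source is newer
    return 1                                                     # up to date
}

printf 'hello' > src.txt
needs_sync src.txt dst.txt && cp -p src.txt dst.txt      # dst missing, so it copies
needs_sync src.txt dst.txt || echo 'dst.txt is up to date'
```

The ``cp -p`` mirrors the behavior noted above: preserving the modification time on transfer is what lets the next comparison conclude nothing has changed.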
The following ``sync`` command syncs files in a local directory to objects under a specified prefix and bucket by
uploading the local files to s3. Because the ``--delete`` flag is set, any files existing under the
specified prefix and bucket but not existing in the local directory will be deleted. In this example, the user syncs
the local current directory to the bucket ``mybucket``. The local current directory contains the files ``test.txt`` and
``test2.txt``. The bucket ``mybucket`` contains the object ``test3.txt``::
aws s3 sync . s3://mybucket --delete
Output::
upload: test.txt to s3://mybucket/test.txt
upload: test2.txt to s3://mybucket/test2.txt
delete: s3://mybucket/test3.txt
The following ``sync`` command syncs files in a local directory to objects under a specified prefix and bucket by
uploading the local files to s3. Because the ``--exclude`` parameter is used, all files matching the pattern
existing both in s3 and locally will be excluded from the sync. In this example, the user syncs the local current
directory to the bucket ``mybucket``. The local current directory contains the files ``test.jpg`` and ``test2.txt``. The
bucket ``mybucket`` contains the object ``test.jpg`` of a different size than the local ``test.jpg``::
aws s3 sync . s3://mybucket --exclude "*.jpg"
Output::
upload: test2.txt to s3://mybucket/test2.txt
The following ``sync`` command syncs objects under a specified prefix and bucket to files under a local directory by
downloading s3 objects. This example uses the ``--exclude`` parameter to exclude a specified directory
and s3 prefix from the ``sync`` command. In this example, the user syncs the bucket ``mybucket`` to the local current
directory. The local current directory contains the files ``test.txt`` and ``another/test2.txt``. The bucket
``mybucket`` contains the objects ``another/test5.txt`` and ``test1.txt``::
aws s3 sync s3://mybucket/ . --exclude "*another/*"
Output::
download: s3://mybucket/test1.txt to test1.txt
The following ``sync`` command syncs files between two buckets in different regions::
aws s3 sync s3://my-us-west-2-bucket s3://my-us-east-1-bucket --source-region us-west-2 --region us-east-1

awscli-1.10.1/awscli/examples/s3/mb.rst

The following ``mb`` command creates a bucket. In this example, the user makes the bucket ``mybucket``. The bucket is
created in the region specified in the user's configuration file::
aws s3 mb s3://mybucket
Output::
make_bucket: mybucket
The following ``mb`` command creates a bucket in a region specified by the ``--region`` parameter. In this example, the
user makes the bucket ``mybucket`` in the region ``us-west-1``::
aws s3 mb s3://mybucket --region us-west-1
Output::
make_bucket: mybucket
awscli-1.10.1/awscli/examples/s3/rb.rst

The following ``rb`` command removes a bucket. In this example, the user's bucket is ``mybucket``. Note that the bucket must be empty in order to be removed::
aws s3 rb s3://mybucket
Output::
remove_bucket: mybucket
The following ``rb`` command uses the ``--force`` parameter to first remove all of the objects in the bucket and then
remove the bucket itself. In this example, the user's bucket is ``mybucket`` and the objects in ``mybucket`` are
``test1.txt`` and ``test2.txt``::
aws s3 rb s3://mybucket --force
Output::
delete: s3://mybucket/test1.txt
delete: s3://mybucket/test2.txt
remove_bucket: mybucket
awscli-1.10.1/awscli/examples/s3/rm.rst

The following ``rm`` command deletes a single s3 object::
aws s3 rm s3://mybucket/test2.txt
Output::
delete: s3://mybucket/test2.txt
The following ``rm`` command recursively deletes all objects under a specified bucket and prefix when passed with the
parameter ``--recursive``. In this example, the bucket ``mybucket`` contains the objects ``test1.txt`` and
``test2.txt``::
aws s3 rm s3://mybucket --recursive
Output::
delete: s3://mybucket/test1.txt
delete: s3://mybucket/test2.txt
The following ``rm`` command recursively deletes all objects under a specified bucket and prefix when passed with the
parameter ``--recursive`` while excluding some objects by using an ``--exclude`` parameter. In this example, the bucket
``mybucket`` has the objects ``test1.txt`` and ``test2.jpg``::
aws s3 rm s3://mybucket/ --recursive --exclude "*.jpg"
Output::
delete: s3://mybucket/test1.txt
The following ``rm`` command recursively deletes all objects under a specified bucket and prefix when passed with the
parameter ``--recursive`` while excluding all objects under a particular prefix by using an ``--exclude`` parameter. In
this example, the bucket ``mybucket`` has the objects ``test1.txt`` and ``another/test.txt``::
aws s3 rm s3://mybucket/ --recursive --exclude "mybucket/another/*"
Output::
delete: s3://mybucket/test1.txt
awscli-1.10.1/awscli/examples/s3/_concepts.rst

This section explains prominent concepts and notations in the set of high-level S3 commands provided.
Path Argument Type
++++++++++++++++++
Whenever using a command, at least one path argument must be specified. There
are two types of path arguments: ``LocalPath`` and ``S3Uri``.
``LocalPath``: represents the path of a local file or directory. It can be
written as an absolute path or relative path.
``S3Uri``: represents the location of an S3 object, prefix, or bucket. This
must be written in the form ``s3://mybucket/mykey`` where ``mybucket`` is
the specified S3 bucket, ``mykey`` is the specified S3 key. The path argument
must begin with ``s3://`` in order to denote that the path argument refers to
an S3 object. Note that prefixes are separated by forward slashes. For
example, if the S3 object ``myobject`` had the prefix ``myprefix``, the
S3 key would be ``myprefix/myobject``, and if the object was in the bucket
``mybucket``, the ``S3Uri`` would be ``s3://mybucket/myprefix/myobject``.
Order of Path Arguments
+++++++++++++++++++++++
Every command takes one or two positional path arguments. The first path
argument represents the source, which is the local file/directory or S3
object/prefix/bucket that is being referenced. If there is a second path
argument, it represents the destination, which is the local file/directory
or S3 object/prefix/bucket that is being operated on. Commands with only
one path argument do not have a destination because the operation is being
performed only on the source.
Single Local File and S3 Object Operations
++++++++++++++++++++++++++++++++++++++++++
Some commands perform operations only on single files and S3 objects. The
following commands are single file/object operations if no ``--recursive``
flag is provided.
* ``cp``
* ``mv``
* ``rm``
For this type of operation, the first path argument, the source, must exist
and be a local file or S3 object. The second path argument, the destination,
can be the name of a local file, local directory, S3 object, S3 prefix,
or S3 bucket.
The destination is indicated as a local directory, S3 prefix, or S3 bucket
if it ends with a forward slash or back slash. The use of slash depends
on the path argument type. If the path argument is a ``LocalPath``,
the type of slash is the separator used by the operating system. If the
path is an ``S3Uri``, the forward slash must always be used. If a slash
is at the end of the destination, the destination file or object will
adopt the name of the source file or object. Otherwise, if there is no
slash at the end, the file or object will be saved under the name provided.
See examples in ``cp`` and ``mv`` to illustrate this description.
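The trailing-slash naming rule can also be sketched directly; ``resolve_dest`` below is an illustrative helper name, not a CLI command:

```shell
# Illustrative sketch of the trailing-slash rule: a destination ending in
# "/" adopts the source's base name; otherwise the destination is used as-is.
resolve_dest() {
    src=$1 dst=$2
    case $dst in
        */) printf '%s%s\n' "$dst" "$(basename "$src")" ;;
        *)  printf '%s\n' "$dst" ;;
    esac
}

resolve_dest test.txt s3://mybucket/            # -> s3://mybucket/test.txt
resolve_dest test.txt s3://mybucket/other.txt   # -> s3://mybucket/other.txt
```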
Directory and S3 Prefix Operations
++++++++++++++++++++++++++++++++++
Some commands only perform operations on the contents of a local directory
or S3 prefix/bucket. Adding or omitting a forward slash or back slash to
the end of any path argument, depending on its type, does not affect the
results of the operation. The following commands will always result in
a directory or S3 prefix/bucket operation:
* ``sync``
* ``mb``
* ``rb``
* ``ls``
Use of Exclude and Include Filters
++++++++++++++++++++++++++++++++++
Currently, there is no support for the use of UNIX style wildcards in
a command's path arguments. However, most commands have ``--exclude "<value>"``
and ``--include "<value>"`` parameters that can achieve the desired result.
These parameters perform pattern matching to either exclude or include
a particular file or object. The following pattern symbols are supported.
* ``*``: Matches everything
* ``?``: Matches any single character
* ``[sequence]``: Matches any character in ``sequence``
* ``[!sequence]``: Matches any character not in ``sequence``
Any number of these parameters can be passed to a command. You can do this by
providing an ``--exclude`` or ``--include`` argument multiple times, e.g.
``--include "*.txt" --include "*.png"``.
When there are multiple filters, the rule is the filters that appear later in
the command take precedence over filters that appear earlier in the command.
For example, if the filter parameters passed to the command were
::
--exclude "*" --include "*.txt"
All files will be excluded from the command except for files ending with
``.txt``. However, if the order of the filter parameters was changed to
::
--include "*.txt" --exclude "*"
All files will be excluded from the command.
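The last-match-wins behavior described above can be sketched with ordinary shell glob matching; ``decide`` below is an illustrative helper, not part of the AWS CLI:

```shell
# Illustrative last-match-wins evaluation: filters are scanned in order and
# the last filter whose pattern matches decides the file's fate.
decide() {
    file=$1; shift
    verdict=include                     # by default, all files are included
    while [ $# -ge 2 ]; do
        kind=$1 pat=$2; shift 2
        case $file in
            $pat) verdict=$kind ;;      # a later match overrides earlier ones
        esac
    done
    echo "$verdict"
}

decide notes.txt exclude '*' include '*.txt'   # -> include
decide notes.txt include '*.txt' exclude '*'   # -> exclude
```

The two calls reproduce the two orderings discussed above: only the second, where ``--exclude "*"`` comes last, excludes the ``.txt`` file.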
Each filter is evaluated against the **source directory**. If the source
location is a file instead of a directory, the directory containing the file is
used as the source directory. For example, suppose you had the following
directory structure::
/tmp/foo/
.git/
|---config
|---description
foo.txt
bar.txt
baz.jpg
In the command ``aws s3 sync /tmp/foo s3://bucket/`` the source directory is
``/tmp/foo``. Any include/exclude filters will be evaluated with the source
directory prepended. Below are several examples to demonstrate this.
Given the directory structure above and the command
``aws s3 cp /tmp/foo s3://bucket/ --recursive --exclude ".git/*"``, the
files ``.git/config`` and ``.git/description`` will be excluded from the
files to upload because the exclude filter ``.git/*`` will have the source
prepended to the filter. This means that::
/tmp/foo/.git/* -> /tmp/foo/.git/config (matches, should exclude)
/tmp/foo/.git/* -> /tmp/foo/.git/description (matches, should exclude)
/tmp/foo/.git/* -> /tmp/foo/foo.txt (does not match, should include)
/tmp/foo/.git/* -> /tmp/foo/bar.txt (does not match, should include)
/tmp/foo/.git/* -> /tmp/foo/baz.jpg (does not match, should include)
The command ``aws s3 cp /tmp/foo/ s3://bucket/ --recursive --exclude "ba*"``
will exclude ``/tmp/foo/bar.txt`` and ``/tmp/foo/baz.jpg``::
/tmp/foo/ba* -> /tmp/foo/.git/config (does not match, should include)
/tmp/foo/ba* -> /tmp/foo/.git/description (does not match, should include)
/tmp/foo/ba* -> /tmp/foo/foo.txt (does not match, should include)
/tmp/foo/ba* -> /tmp/foo/bar.txt (matches, should exclude)
/tmp/foo/ba* -> /tmp/foo/baz.jpg (matches, should exclude)
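A minimal way to check these prepended patterns locally is plain shell ``case`` globbing; the paths below come from the tables above and need not exist on disk:

```shell
# The exclude filter ".git/*" only takes effect once the source directory
# is prepended; plain shell globbing reproduces the matches shown above.
pat='/tmp/foo/.git/*'
for path in /tmp/foo/.git/config /tmp/foo/.git/description /tmp/foo/foo.txt; do
    case $path in
        $pat) echo "$path: exclude" ;;
        *)    echo "$path: include" ;;
    esac
done
```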
Note that, by default, *all files are included*. This means that
providing **only** an ``--include`` filter will not change what
files are transferred. ``--include`` will only re-include files that
have been excluded by an ``--exclude`` filter. If you only want
to upload files with a particular extension, you need to first exclude
all files, then re-include the files with the particular extension.
This command will upload **only** files ending with ``.jpg``::
aws s3 cp /tmp/foo/ s3://bucket/ --recursive --exclude "*" --include "*.jpg"
If you wanted to include both ``.jpg`` files as well as ``.txt`` files you
can run::
aws s3 cp /tmp/foo/ s3://bucket/ --recursive \
--exclude "*" --include "*.jpg" --include "*.txt"
awscli-1.10.1/awscli/examples/s3/cp.rst

**Copying a local file to S3**
The following ``cp`` command copies a single file to a specified
bucket and key::
aws s3 cp test.txt s3://mybucket/test2.txt
Output::
upload: test.txt to s3://mybucket/test2.txt
**Copying a file from S3 to S3**
The following ``cp`` command copies a single s3 object to a specified bucket and key::
aws s3 cp s3://mybucket/test.txt s3://mybucket/test2.txt
Output::
copy: s3://mybucket/test.txt to s3://mybucket/test2.txt
**Copying an S3 object to a local file**
The following ``cp`` command copies a single object to a specified file locally::
aws s3 cp s3://mybucket/test.txt test2.txt
Output::
download: s3://mybucket/test.txt to test2.txt
**Copying an S3 object from one bucket to another**
The following ``cp`` command copies a single object to a specified bucket while retaining its original name::
aws s3 cp s3://mybucket/test.txt s3://mybucket2/
Output::
copy: s3://mybucket/test.txt to s3://mybucket2/test.txt
**Recursively copying S3 objects to a local directory**
When passed with the parameter ``--recursive``, the following ``cp`` command recursively copies all objects under a
specified prefix and bucket to a specified directory. In this example, the bucket ``mybucket`` has the objects
``test1.txt`` and ``test2.txt``::
aws s3 cp s3://mybucket . --recursive
Output::
download: s3://mybucket/test1.txt to test1.txt
download: s3://mybucket/test2.txt to test2.txt
**Recursively copying local files to S3**
When passed with the parameter ``--recursive``, the following ``cp`` command recursively copies all files under a
specified directory to a specified bucket and prefix while excluding some files by using an ``--exclude`` parameter. In
this example, the directory ``myDir`` has the files ``test1.txt`` and ``test2.jpg``::
aws s3 cp myDir s3://mybucket/ --recursive --exclude "*.jpg"
Output::
upload: myDir/test1.txt to s3://mybucket/test1.txt
**Recursively copying S3 objects to another bucket**
When passed with the parameter ``--recursive``, the following ``cp`` command recursively copies all objects under a
specified bucket to another bucket while excluding some objects by using an ``--exclude`` parameter. In this example,
the bucket ``mybucket`` has the objects ``test1.txt`` and ``another/test1.txt``::
aws s3 cp s3://mybucket/ s3://mybucket2/ --recursive --exclude "mybucket/another/*"
Output::
copy: s3://mybucket/test1.txt to s3://mybucket2/test1.txt
You can combine ``--exclude`` and ``--include`` options to copy only objects that match a pattern, excluding all others::
aws s3 cp s3://mybucket/logs/ s3://mybucket2/logs/ --recursive --exclude "*" --include "*.log"
Output::
copy: s3://mybucket/test/test.log to s3://mybucket2/test/test.log
copy: s3://mybucket/test3.log to s3://mybucket2/test3.log
**Setting the Access Control List (ACL) while copying an S3 object**
The following ``cp`` command copies a single object to a specified bucket and key while setting the ACL to
``public-read-write``::
aws s3 cp s3://mybucket/test.txt s3://mybucket/test2.txt --acl public-read-write
Output::
copy: s3://mybucket/test.txt to s3://mybucket/test2.txt
Note that if you're using the ``--acl`` option, ensure that any associated IAM
policies include the ``"s3:PutObjectAcl"`` action::
aws iam get-user-policy --user-name myuser --policy-name mypolicy
Output::
{
"UserName": "myuser",
"PolicyName": "mypolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
],
"Effect": "Allow",
"Sid": "Stmt1234567891234"
}
]
}
}
**Granting permissions for an S3 object**
The following ``cp`` command illustrates the use of the ``--grants`` option to grant read access to all users and full
control to a specific user identified by their email address::
aws s3 cp file.txt s3://mybucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=user@example.com
Output::
upload: file.txt to s3://mybucket/file.txt
**Uploading a local file stream to S3**
The following ``cp`` command uploads a local file stream from standard input to a specified bucket and key::
aws s3 cp - s3://mybucket/stream.txt
**Downloading a S3 object as a local file stream**
The following ``cp`` command downloads a S3 object locally as a stream to standard output::
aws s3 cp s3://mybucket/stream.txt -
awscli-1.10.1/awscli/examples/s3/mv.rst

The following ``mv`` command moves a single file to a specified bucket and key::
aws s3 mv test.txt s3://mybucket/test2.txt
Output::
move: test.txt to s3://mybucket/test2.txt
The following ``mv`` command moves a single s3 object to a specified bucket and key::
aws s3 mv s3://mybucket/test.txt s3://mybucket/test2.txt
Output::
move: s3://mybucket/test.txt to s3://mybucket/test2.txt
The following ``mv`` command moves a single object to a specified file locally::
aws s3 mv s3://mybucket/test.txt test2.txt
Output::
move: s3://mybucket/test.txt to test2.txt
The following ``mv`` command moves a single object to a specified bucket while retaining its original name::
aws s3 mv s3://mybucket/test.txt s3://mybucket2/
Output::
move: s3://mybucket/test.txt to s3://mybucket2/test.txt
When passed with the parameter ``--recursive``, the following ``mv`` command recursively moves all objects under a
specified prefix and bucket to a specified directory. In this example, the bucket ``mybucket`` has the objects
``test1.txt`` and ``test2.txt``::
aws s3 mv s3://mybucket . --recursive
Output::
move: s3://mybucket/test1.txt to test1.txt
move: s3://mybucket/test2.txt to test2.txt
When passed with the parameter ``--recursive``, the following ``mv`` command recursively moves all files under a
specifed directory to a specified bucket and prefix while excluding some files by using an ``--exclude`` parameter. In
this example, the directory ``myDir`` has the files ``test1.txt`` and ``test2.jpg``::
aws s3 mv myDir s3://mybucket/ --recursive --exclude "*.jpg"
Output::
move: myDir/test1.txt to s3://mybucket/test1.txt
When passed with the parameter ``--recursive``, the following ``mv`` command recursively moves all objects under a
specified bucket to another bucket while excluding some objects by using an ``--exclude`` parameter. In this example,
the bucket ``mybucket`` has the objects ``test1.txt`` and ``another/test1.txt``::
aws s3 mv s3://mybucket/ s3://mybucket2/ --recursive --exclude "mybucket/another/*"
Output::
move: s3://mybucket/test1.txt to s3://mybucket2/test1.txt
The following ``mv`` command moves a single object to a specified bucket and key while setting the ACL to
``public-read-write``::
aws s3 mv s3://mybucket/test.txt s3://mybucket/test2.txt --acl public-read-write
Output::
move: s3://mybucket/test.txt to s3://mybucket/test2.txt
The following ``mv`` command illustrates the use of the ``--grants`` option to grant read access to all users and full
control to a specific user identified by their email address::
aws s3 mv file.txt s3://mybucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=user@example.com
Output::
move: file.txt to s3://mybucket/file.txt
awscli-1.10.1/awscli/examples/s3/website.rst

The following command configures a bucket named ``my-bucket`` as a static website::
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
The index document option specifies the file in ``my-bucket`` that visitors will be directed to when they navigate to the website URL. In this case, the bucket is in the us-west-2 region, so the site would appear at ``http://my-bucket.s3-website-us-west-2.amazonaws.com``.
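The website endpoint mentioned above follows a predictable pattern; the helper below is illustrative, and actual endpoint formats can vary by region and partition:

```shell
# Composes the static-website endpoint from a bucket name and region,
# following the pattern shown above (formats vary by region/partition).
website_url() {
    printf 'http://%s.s3-website-%s.amazonaws.com\n' "$1" "$2"
}

website_url my-bucket us-west-2   # -> http://my-bucket.s3-website-us-west-2.amazonaws.com
```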
All files in the bucket that appear on the static site must be configured to allow visitors to open them. File permissions are configured separately from the bucket website configuration. For information on hosting a static website in Amazon S3, see `Hosting a Static Website`_ in the *Amazon Simple Storage Service Developer Guide*.
.. _`Hosting a Static Website`: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

awscli-1.10.1/awscli/examples/s3/ls.rst

The following ``ls`` command lists all of the buckets owned by the user. In this example, the user owns the buckets
``mybucket`` and ``mybucket2``. The ``CreationTime`` is the date the bucket was created. Note if ``s3://`` is used for
the path argument ``<S3Uri>``, it will list all of the buckets as well::
aws s3 ls
Output::
2013-07-11 17:08:50 mybucket
2013-07-24 14:55:44 mybucket2
The following ``ls`` command lists objects and common prefixes under a specified bucket and prefix. In this example, the
user owns the bucket ``mybucket`` with the objects ``test.txt`` and ``somePrefix/test.txt``. The ``LastWriteTime`` and
``Length`` are arbitrary. Note that since the ``ls`` command has no interaction with the local filesystem, the ``s3://``
URI scheme is not required to resolve ambiguity and may be omitted::
aws s3 ls s3://mybucket
Output::
PRE somePrefix/
2013-07-25 17:06:27 88 test.txt
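The ``PRE`` rows are rolled-up common prefixes. As an illustration (not the CLI's implementation), the same roll-up can be derived from a flat key listing:

```shell
# Rolls a flat key listing up into top-level common prefixes, mirroring the
# "PRE" rows: any key containing "/" is grouped by its first path segment.
printf '%s\n' somePrefix/test.txt somePrefix/other.txt test.txt |
    awk -F/ 'NF > 1 { print "PRE " $1 "/" }' | sort -u
```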
The following ``ls`` command lists objects and common prefixes under a specified bucket and prefix. However, there are
no objects nor common prefixes under the specified bucket and prefix::
aws s3 ls s3://mybucket/noExistPrefix
Output::
None
The following ``ls`` command will recursively list objects in a bucket. Rather than showing ``PRE dirname/`` in the
output, all the content in a bucket will be listed in order::
aws s3 ls s3://mybucket --recursive
Output::
2013-09-02 21:37:53 10 a.txt
2013-09-02 21:37:53 2863288 foo.zip
2013-09-02 21:32:57 23 foo/bar/.baz/a
2013-09-02 21:32:58 41 foo/bar/.baz/b
2013-09-02 21:32:57 281 foo/bar/.baz/c
2013-09-02 21:32:57 73 foo/bar/.baz/d
2013-09-02 21:32:57 452 foo/bar/.baz/e
2013-09-02 21:32:57 896 foo/bar/.baz/hooks/bar
2013-09-02 21:32:57 189 foo/bar/.baz/hooks/foo
2013-09-02 21:32:57 398 z.txt
The following ``ls`` command demonstrates the same command using the ``--human-readable``
and ``--summarize`` options. ``--human-readable`` displays file size in
Bytes/KiB/MiB/GiB/TiB/PiB/EiB. ``--summarize`` displays the total number of objects
and total size at the end of the result listing::
aws s3 ls s3://mybucket --recursive --human-readable --summarize
Output::
2013-09-02 21:37:53 10 Bytes a.txt
2013-09-02 21:37:53 2.9 MiB foo.zip
2013-09-02 21:32:57 23 Bytes foo/bar/.baz/a
2013-09-02 21:32:58 41 Bytes foo/bar/.baz/b
2013-09-02 21:32:57 281 Bytes foo/bar/.baz/c
2013-09-02 21:32:57 73 Bytes foo/bar/.baz/d
2013-09-02 21:32:57 452 Bytes foo/bar/.baz/e
2013-09-02 21:32:57 896 Bytes foo/bar/.baz/hooks/bar
2013-09-02 21:32:57 189 Bytes foo/bar/.baz/hooks/foo
2013-09-02 21:32:57 398 Bytes z.txt
Total Objects: 10
Total Size: 2.9 MiB
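The ``--human-readable`` conversion shown above can be approximated in a few lines of Python. This is an illustrative sketch of the unit scaling only, not the CLI's exact rounding behavior:

```python
def format_size(num_bytes):
    """Scale a byte count through Bytes/KiB/MiB/GiB/TiB/PiB/EiB (illustrative only)."""
    units = ["Bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
    size = float(num_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            # Whole byte counts print without a decimal; scaled values keep one place
            return "%d %s" % (size, unit) if unit == "Bytes" else "%.1f %s" % (size, unit)
        size /= 1024.0

print(format_size(10))         # 10 Bytes
print(format_size(50 * 1024))  # 50.0 KiB
```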
**To create a WorkSpace**
This example creates a WorkSpace for user ``jimsmith`` in the specified directory, from the specified bundle.
Command::
aws workspaces create-workspaces --cli-input-json file://create-workspaces.json
Input (this is the contents of the ``create-workspaces.json`` file)::
{
"Workspaces" : [
{
"DirectoryId" : "d-906732325d",
"UserName" : "jimsmith",
"BundleId" : "wsb-b0s22j3d7"
}
]
}
Output::
{
"PendingRequests" : [
{
"UserName" : "jimsmith",
"DirectoryId" : "d-906732325d",
"State" : "PENDING",
"WorkspaceId" : "ws-0d4y2sbl5",
"BundleId" : "wsb-b0s22j3d7"
}
],
"FailedRequests" : []
}
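When scripting against ``create-workspaces``, note that per-request failures appear in ``FailedRequests`` rather than as a non-zero exit code, so the response body should be inspected. A minimal sketch over the sample response above:

```python
import json

# Sample response as returned by create-workspaces (see the Output above)
response = json.loads("""
{
  "PendingRequests": [
    {"UserName": "jimsmith", "DirectoryId": "d-906732325d",
     "State": "PENDING", "WorkspaceId": "ws-0d4y2sbl5",
     "BundleId": "wsb-b0s22j3d7"}
  ],
  "FailedRequests": []
}
""")

# Any entry in FailedRequests should be treated as an error condition
failed = response["FailedRequests"]
pending_ids = [w["WorkspaceId"] for w in response["PendingRequests"]]
print(pending_ids)  # ['ws-0d4y2sbl5']
```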
**To describe your WorkSpaces**
This example describes all of your WorkSpaces in the region.
Command::
aws workspaces describe-workspaces
Output::
{
"Workspaces" : [
{
"UserName" : "johndoe",
"DirectoryId" : "d-906732325d",
"State" : "AVAILABLE",
"WorkspaceId" : "ws-3lvdznndy",
"SubnetId" : "subnet-435c036b",
"IpAddress" : "50.0.1.10",
"BundleId" : "wsb-86y2d88pq"
},
{
"UserName": "jimsmith",
"DirectoryId": "d-906732325d",
"State": "PENDING",
"WorkspaceId": "ws-0d4y2sbl5",
"BundleId": "wsb-b0s22j3d7"
},
{
"UserName" : "marym",
"DirectoryId" : "d-906732325d",
"State" : "AVAILABLE",
"WorkspaceId" : "ws-b3vg4shrh",
"SubnetId" : "subnet-775a6531",
"IpAddress" : "50.0.0.5",
"BundleId" : "wsb-3t36q0xfc"
}
]
}
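The JSON above can also be post-processed in a script; the following sketch keeps only the ``AVAILABLE`` WorkSpaces from the sample output (fields abbreviated):

```python
# Abbreviated records from the describe-workspaces output above
workspaces = [
    {"UserName": "johndoe", "State": "AVAILABLE", "WorkspaceId": "ws-3lvdznndy"},
    {"UserName": "jimsmith", "State": "PENDING", "WorkspaceId": "ws-0d4y2sbl5"},
    {"UserName": "marym", "State": "AVAILABLE", "WorkspaceId": "ws-b3vg4shrh"},
]

# Keep only WorkSpaces that are ready for use
available = [w["WorkspaceId"] for w in workspaces if w["State"] == "AVAILABLE"]
print(available)  # ['ws-3lvdznndy', 'ws-b3vg4shrh']
```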
**To terminate a WorkSpace**
This example terminates the specified WorkSpace.
Command::
aws workspaces terminate-workspaces --terminate-workspace-requests ws-b3vg4shrh
Output::
{
"FailedRequests": []
}

**To describe your WorkSpace directories**
This example describes all of your WorkSpace directories.
Command::
aws workspaces describe-workspace-directories
Output::
{
"Directories" : [
{
"CustomerUserName" : "Administrator",
"DirectoryId" : "d-906735683d",
"DirectoryName" : "example.awsapps.com",
"SubnetIds" : [
"subnet-af0e2a87",
"subnet-657e7a23"
],
"WorkspaceCreationProperties" :
{
"EnableInternetAccess" : false,
"EnableWorkDocs" : false,
"UserEnabledAsLocalAdministrator" : true
},
"Alias" : "example",
"State" : "REGISTERED",
"DirectoryType" : "SIMPLE_AD",
"RegistrationCode" : "SLiad+S393HD",
"IamRoleId" : "arn:aws:iam::972506530580:role/workspaces_DefaultRole",
"DnsIpAddresses" : [
"10.0.2.190",
"10.0.1.202"
],
"WorkspaceSecurityGroupId" : "sg-6e40640b"
},
{
"CustomerUserName" : "Administrator",
"DirectoryId" : "d-906732325d",
"DirectoryName" : "exampledomain.com",
"SubnetIds" : [
"subnet-775a6531",
"subnet-435c036b"
],
"WorkspaceCreationProperties" :
{
"EnableInternetAccess" : false,
"EnableWorkDocs" : true,
"UserEnabledAsLocalAdministrator" : true
},
"Alias" : "example-domain",
"State" : "REGISTERED",
"DirectoryType" : "AD_CONNECTOR",
"RegistrationCode" : "SLiad+UBZGNH",
"IamRoleId" : "arn:aws:iam::972506530580:role/workspaces_DefaultRole",
"DnsIpAddresses" : [
"50.0.2.223",
"50.0.2.184"
]
}
]
}
**To describe your WorkSpace bundles**
This example describes all of the WorkSpace bundles that are provided by AWS.
Command::
aws workspaces describe-workspace-bundles --owner AMAZON
Output::
{
"Bundles": [
{
"ComputeType": {
"Name": "PERFORMANCE"
},
"Description": "Performance Bundle",
"BundleId": "wsb-b0s22j3d7",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "100"
},
"Name": "Performance"
},
{
"ComputeType": {
"Name": "VALUE"
},
"Description": "Value Base Bundle",
"BundleId": "wsb-92tn3b7gx",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "10"
},
"Name": "Value"
},
{
"ComputeType": {
"Name": "STANDARD"
},
"Description": "Standard Bundle",
"BundleId": "wsb-3t36q0xfc",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "50"
},
"Name": "Standard"
},
{
"ComputeType": {
"Name": "PERFORMANCE"
},
"Description": "Performance Plus Bundle",
"BundleId": "wsb-1b5w6vnz6",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "100"
},
"Name": "Performance Plus"
},
{
"ComputeType": {
"Name": "VALUE"
},
"Description": "Value Plus Office 2013",
"BundleId": "wsb-fgy4lgypc",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "10"
},
"Name": "Value Plus Office 2013"
},
{
"ComputeType": {
"Name": "PERFORMANCE"
},
"Description": "Performance Plus Office 2013",
"BundleId": "wsb-vbsjd64y6",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "100"
},
"Name": "Performance Plus Office 2013"
},
{
"ComputeType": {
"Name": "VALUE"
},
"Description": "Value Plus Bundle",
"BundleId": "wsb-kgjp98lt8",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "10"
},
"Name": "Value Plus"
},
{
"ComputeType": {
"Name": "STANDARD"
},
"Description": "Standard Plus Office 2013",
"BundleId": "wsb-5h1pf1zxc",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "50"
},
"Name": "Standard Plus Office 2013"
},
{
"ComputeType": {
"Name": "STANDARD"
},
"Description": "Standard Plus Bundle",
"BundleId": "wsb-vlsvncjjf",
"Owner": "Amazon",
"UserStorage": {
"Capacity": "50"
},
"Name": "Standard Plus"
}
]
}
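A script consuming this output might group bundles by compute type; a sketch over a subset of the bundles above:

```python
# A subset of the describe-workspace-bundles output above
bundles = [
    {"BundleId": "wsb-b0s22j3d7", "ComputeType": {"Name": "PERFORMANCE"}},
    {"BundleId": "wsb-92tn3b7gx", "ComputeType": {"Name": "VALUE"}},
    {"BundleId": "wsb-3t36q0xfc", "ComputeType": {"Name": "STANDARD"}},
    {"BundleId": "wsb-1b5w6vnz6", "ComputeType": {"Name": "PERFORMANCE"}},
]

# Map each compute type to the bundle IDs that use it
by_type = {}
for b in bundles:
    by_type.setdefault(b["ComputeType"]["Name"], []).append(b["BundleId"])

print(sorted(by_type))  # ['PERFORMANCE', 'STANDARD', 'VALUE']
```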
**To create, update, or delete a resource record set**
The following ``change-resource-record-sets`` command creates a resource record set using the ``hosted-zone-id`` ``Z1R8UBAEXAMPLE`` and the JSON-formatted configuration in the file ``C:\awscli\route53\change-resource-record-sets.json``::
aws route53 change-resource-record-sets --hosted-zone-id Z1R8UBAEXAMPLE --change-batch file://C:\awscli\route53\change-resource-record-sets.json
For more information, see `POST ChangeResourceRecordSets`_ in the *Amazon Route 53 API Reference*.
.. _`POST ChangeResourceRecordSets`: http://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html
The configuration in the JSON file depends on the kind of resource record set you want to create:
- Basic
- Weighted
- Alias
- Weighted Alias
- Latency
- Latency Alias
- Failover
- Failover Alias
**Basic Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"TTL": time to live in seconds,
"ResourceRecords": [
{
"Value": "applicable value for the record type"
},
{...}
]
}
},
{...}
]
}
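A change batch matching the basic syntax can be generated rather than hand-written. This sketch builds an ``UPSERT`` for a single ``A`` record; the domain name and address are placeholders:

```python
import json

def basic_change_batch(action, name, record_type, ttl, values, comment=""):
    """Build a change batch following the Basic Syntax above (illustrative sketch)."""
    return {
        "Comment": comment,
        "Changes": [{
            "Action": action,
            "ResourceRecordSet": {
                "Name": name,
                "Type": record_type,
                "TTL": ttl,
                "ResourceRecords": [{"Value": v} for v in values],
            },
        }],
    }

# Placeholder domain and address; write the result to the file passed via --change-batch
batch = basic_change_batch("UPSERT", "www.example.com", "A", 300, ["192.0.2.44"])
print(json.dumps(batch, indent=2))
```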
**Weighted Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"SetIdentifier": "unique description for this resource record set",
"Weight": value between 0 and 255,
"TTL": time to live in seconds,
"ResourceRecords": [
{
"Value": "applicable value for the record type"
},
{...}
],
"HealthCheckId": "optional ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**Alias Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"AliasTarget": {
"HostedZoneId": "hosted zone ID for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or Amazon Route 53 hosted zone",
"DNSName": "DNS domain name for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or another resource record set in this hosted zone",
"EvaluateTargetHealth": true|false
},
"HealthCheckId": "optional ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**Weighted Alias Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"SetIdentifier": "unique description for this resource record set",
"Weight": value between 0 and 255,
"AliasTarget": {
"HostedZoneId": "hosted zone ID for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or Amazon Route 53 hosted zone",
"DNSName": "DNS domain name for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or another resource record set in this hosted zone",
"EvaluateTargetHealth": true|false
},
"HealthCheckId": "optional ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**Latency Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"SetIdentifier": "unique description for this resource record set",
"Region": "Amazon EC2 region name",
"TTL": time to live in seconds,
"ResourceRecords": [
{
"Value": "applicable value for the record type"
},
{...}
],
"HealthCheckId": "optional ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**Latency Alias Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"SetIdentifier": "unique description for this resource record set",
"Region": "Amazon EC2 region name",
"AliasTarget": {
"HostedZoneId": "hosted zone ID for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or Amazon Route 53 hosted zone",
"DNSName": "DNS domain name for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or another resource record set in this hosted zone",
"EvaluateTargetHealth": true|false
},
"HealthCheckId": "optional ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**Failover Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"SetIdentifier": "unique description for this resource record set",
"Failover": "PRIMARY" | "SECONDARY",
"TTL": time to live in seconds,
"ResourceRecords": [
{
"Value": "applicable value for the record type"
},
{...}
],
"HealthCheckId": "ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**Failover Alias Syntax**::
{
"Comment": "optional comment about the changes in this change batch request",
"Changes": [
{
"Action": "CREATE"|"DELETE"|"UPSERT",
"ResourceRecordSet": {
"Name": "DNS domain name",
"Type": "SOA"|"A"|"TXT"|"NS"|"CNAME"|"MX"|"PTR"|"SRV"|"SPF"|"AAAA",
"SetIdentifier": "unique description for this resource record set",
"Failover": "PRIMARY" | "SECONDARY",
"AliasTarget": {
"HostedZoneId": "hosted zone ID for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or Amazon Route 53 hosted zone",
"DNSName": "DNS domain name for your CloudFront distribution, Amazon S3 bucket, Elastic Load Balancing load balancer, or another resource record set in this hosted zone",
"EvaluateTargetHealth": true|false
},
"HealthCheckId": "optional ID of an Amazon Route 53 health check"
}
},
{...}
]
}
**To delete a hosted zone**
The following ``delete-hosted-zone`` command deletes the hosted zone with an ``id`` of ``Z36KTIQEXAMPLE``::
aws route53 delete-hosted-zone --id Z36KTIQEXAMPLE
**To get information about a health check**
The following ``get-health-check`` command gets information about the health check that has a ``health-check-id`` of ``02ec8401-9879-4259-91fa-04e66d094674``::
aws route53 get-health-check --health-check-id 02ec8401-9879-4259-91fa-04e66d094674
The following command lists up to 100 hosted zones ordered by domain name::
aws route53 list-hosted-zones-by-name
Output::
{
"HostedZones": [
{
"ResourceRecordSetCount": 2,
"CallerReference": "test20150527-2",
"Config": {
"Comment": "test2",
"PrivateZone": false
},
"Id": "/hostedzone/Z119WBBTVP5WFX",
"Name": "2.example.com."
},
{
"ResourceRecordSetCount": 2,
"CallerReference": "test20150527-1",
"Config": {
"Comment": "test",
"PrivateZone": false
},
"Id": "/hostedzone/Z3P5QSUBK4POTI",
"Name": "www.example.com."
}
],
"IsTruncated": false,
"MaxItems": "100"
}
The following command lists hosted zones ordered by name, beginning with ``www.example.com``::
aws route53 list-hosted-zones-by-name --dns-name www.example.com
Output::
{
"HostedZones": [
{
"ResourceRecordSetCount": 2,
"CallerReference": "mwunderl20150527-1",
"Config": {
"Comment": "test",
"PrivateZone": false
},
"Id": "/hostedzone/Z3P5QSUBK4POTI",
"Name": "www.example.com."
}
],
"DNSName": "www.example.com",
"IsTruncated": false,
"MaxItems": "100"
}

**To create a hosted zone**
The following ``create-hosted-zone`` command adds a hosted zone named ``example.com`` using the caller reference ``2014-04-01-18:47``. The optional comment includes a space, so it must be enclosed in quotation marks::
aws route53 create-hosted-zone --name example.com --caller-reference 2014-04-01-18:47 --hosted-zone-config Comment="command-line version"
For more information, see `Working with Hosted Zones`_ in the *Amazon Route 53 Developer Guide*.
.. _`Working with Hosted Zones`: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHZWorkingWith.html
**To list the health checks associated with the current AWS account**
The following ``list-health-checks`` command lists detailed information about the first 100 health checks that are associated with the current AWS account::
aws route53 list-health-checks
If you have more than 100 health checks, or if you want to list them in groups smaller than 100, include the ``--max-items`` parameter. For example, to list health checks one at a time, use the following command::
aws route53 list-health-checks --max-items 1
To view the next health check, take the value of ``NextToken`` from the response to the previous command, and include it in the ``--starting-token`` parameter, for example::
aws route53 list-health-checks --max-items 1 --starting-token 02ec8401-9879-4259-91fa-094674111111
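The ``NextToken``/``--starting-token`` handoff described above is the usual token-pagination loop. The following pure-Python sketch shows the control flow only; ``fetch_page`` is a stand-in for an actual ``aws route53 list-health-checks`` invocation, and the IDs are hypothetical:

```python
def fetch_page(pages, token):
    """Stand-in for one paginated call; real code would shell out to the CLI or use an SDK."""
    index = 0 if token is None else token
    next_token = index + 1 if index + 1 < len(pages) else None
    return pages[index], next_token

# Two hypothetical pages of health check IDs
pages = [["hc-1"], ["hc-2"]]

collected, token = [], None
while True:
    items, token = fetch_page(pages, token)
    collected.extend(items)
    if token is None:  # no NextToken in the response: we have every item
        break

print(collected)  # ['hc-1', 'hc-2']
```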
**To create a health check**
The following ``create-health-check`` command creates a health check using the caller reference ``2014-04-01-18:47`` and the JSON-formatted configuration in the file ``C:\awscli\route53\create-health-check.json``::
aws route53 create-health-check --caller-reference 2014-04-01-18:47 --health-check-config file://C:\awscli\route53\create-health-check.json
JSON syntax::
{
"IPAddress": "IP address of the endpoint to check",
"Port": port on the endpoint to check--required when Type is "TCP",
"Type": "HTTP"|"HTTPS"|"HTTP_STR_MATCH"|"HTTPS_STR_MATCH"|"TCP",
"ResourcePath": "path of the file that you want Amazon Route 53 to request--all Types except TCP",
"FullyQualifiedDomainName": "domain name of the endpoint to check--all Types except TCP",
"SearchString": "if Type is HTTP_STR_MATCH or HTTPS_STR_MATCH, the string to search for in the response body from the specified resource",
"RequestInterval": 10 | 30,
"FailureThreshold": integer between 1 and 10
}
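The configuration file can also be generated from a script. This sketch emits an HTTP health check config following the syntax above; the endpoint values are placeholders, and the assertions encode the documented constraints on ``RequestInterval`` and ``FailureThreshold``:

```python
import json

# Placeholder endpoint values; adjust for your own resource
health_check = {
    "IPAddress": "192.0.2.17",
    "Port": 80,
    "Type": "HTTP",
    "ResourcePath": "/docs",
    "FullyQualifiedDomainName": "example.com",
    "RequestInterval": 30,
    "FailureThreshold": 3,
}

# Per the syntax above: RequestInterval is 10 or 30, FailureThreshold is 1-10
assert health_check["RequestInterval"] in (10, 30)
assert 1 <= health_check["FailureThreshold"] <= 10

print(json.dumps(health_check, indent=2))
```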
To add the health check to a Route 53 resource record set, use the ``change-resource-record-sets`` command.
For more information, see `Amazon Route 53 Health Checks and DNS Failover`_ in the *Amazon Route 53 Developer Guide*.
.. _`Amazon Route 53 Health Checks and DNS Failover`: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
**To delete a health check**
The following ``delete-health-check`` command deletes the health check with a ``health-check-id`` of ``e75b48d9-547a-4c3d-88a5-ae4002397608``::
aws route53 delete-health-check --health-check-id e75b48d9-547a-4c3d-88a5-ae4002397608
**To get the status of a change to resource record sets**
The following ``get-change`` command gets the status and other information about the ``change-resource-record-sets`` request that has an ``Id`` of ``/change/CWPIK4URU2I5S``::
aws route53 get-change --id /change/CWPIK4URU2I5S
**To list the hosted zones associated with the current AWS account**
The following ``list-hosted-zones`` command lists summary information about the first 100 hosted zones that are associated with the current AWS account::
aws route53 list-hosted-zones
If you have more than 100 hosted zones, or if you want to list them in groups smaller than 100, include the ``--max-items`` parameter. For example, to list hosted zones one at a time, use the following command::
aws route53 list-hosted-zones --max-items 1
To view information about the next hosted zone, take the value of ``NextToken`` from the response to the previous command, and include it in the ``--starting-token`` parameter, for example::
aws route53 list-hosted-zones --max-items 1 --starting-token Z3M3LMPEXAMPLE
**To get information about a hosted zone**
The following ``get-hosted-zone`` command gets information about the hosted zone with an ``id`` of ``Z1R8UBAEXAMPLE``::
aws route53 get-hosted-zone --id Z1R8UBAEXAMPLE
**To list the resource record sets in a hosted zone**
The following ``list-resource-record-sets`` command lists summary information about the first 100 resource record sets in a specified hosted zone.::
aws route53 list-resource-record-sets --hosted-zone-id Z2LD58HEXAMPLE
If the hosted zone contains more than 100 resource record sets, or if you want to list them in groups smaller than 100, include the ``--max-items`` parameter. For example, to list resource record sets one at a time, use the following command::
aws route53 list-resource-record-sets --hosted-zone-id Z2LD58HEXAMPLE --max-items 1
To view information about the next resource record set in the hosted zone, take the value of ``NextToken`` from the response to the previous command, and include it in the ``--starting-token`` parameter, for example::
aws route53 list-resource-record-sets --hosted-zone-id Z2LD58HEXAMPLE --max-items 1 --starting-token None___None___None___1
**1. To add tags to a cluster**
- Command::
aws emr add-tags --resource-id j-xxxxxxx --tags name="John Doe" age=29 sex=male address="123 East NW Seattle"
- Output::
None
**2. To list tags of a cluster**
- Command::
aws emr describe-cluster --cluster-id j-XXXXXXYY --query Cluster.Tags
- Output::
[
{
"Value": "male",
"Key": "sex"
},
{
"Value": "123 East NW Seattle",
"Key": "address"
},
{
"Value": "John Doe",
"Key": "name"
},
{
"Value": "29",
"Key": "age"
}
]
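The ``Tags`` output is a list of ``Key``/``Value`` pairs; converting it to a plain mapping is a common post-processing step:

```python
# Tags as returned by describe-cluster --query Cluster.Tags (see the Output above)
tags = [
    {"Value": "male", "Key": "sex"},
    {"Value": "123 East NW Seattle", "Key": "address"},
    {"Value": "John Doe", "Key": "name"},
    {"Value": "29", "Key": "age"},
]

# Flatten the Key/Value pairs into an ordinary dict
tag_map = {t["Key"]: t["Value"] for t in tags}
print(tag_map["name"])  # John Doe
```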
The following command lists all of the steps in a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr list-steps --cluster-id j-3SD91U2E1L2QX
The following command sets the visibility of an EMR cluster with the ID ``j-301CDNY0J5XM4`` to all users::
aws emr modify-cluster-attributes --cluster-id j-301CDNY0J5XM4 --visible-to-all-users
The following command describes a step with the step ID ``s-3LZC0QUT43AM`` in a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr describe-step --cluster-id j-3SD91U2E1L2QX --step-id s-3LZC0QUT43AM
Output::
{
"Step": {
"Status": {
"Timeline": {
"EndDateTime": 1433200470.481,
"CreationDateTime": 1433199926.597,
"StartDateTime": 1433200404.959
},
"State": "COMPLETED",
"StateChangeReason": {}
},
"Config": {
"Args": [
"s3://us-west-2.elasticmapreduce/libs/hive/hive-script",
"--base-path",
"s3://us-west-2.elasticmapreduce/libs/hive/",
"--install-hive",
"--hive-versions",
"0.13.1"
],
"Jar": "s3://us-west-2.elasticmapreduce/libs/script-runner/script-runner.jar",
"Properties": {}
},
"Id": "s-3LZC0QUT43AM",
"ActionOnFailure": "TERMINATE_CLUSTER",
"Name": "Setup hive"
}
}
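The ``Timeline`` values are epoch seconds, so step run time and queue time can be computed directly from the output above:

```python
# Timeline from the describe-step output above (epoch seconds)
timeline = {
    "EndDateTime": 1433200470.481,
    "CreationDateTime": 1433199926.597,
    "StartDateTime": 1433200404.959,
}

# Time the step spent running, and time it waited before starting
run_seconds = timeline["EndDateTime"] - timeline["StartDateTime"]
queued_seconds = timeline["StartDateTime"] - timeline["CreationDateTime"]
print(round(run_seconds, 1))     # 65.5
print(round(queued_seconds, 1))  # 478.4
```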
The following command opens an ssh connection with the master instance in a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr ssh --cluster-id j-3SD91U2E1L2QX --key-pair-file ~/.ssh/mykey.pem
The ``--key-pair-file`` option takes a local path to a private key file.
Output::
ssh -o StrictHostKeyChecking=no -o ServerAliveInterval=10 -i /home/local/user/.ssh/mykey.pem hadoop@ec2-52-52-41-150.us-west-2.compute.amazonaws.com
Warning: Permanently added 'ec2-52-52-41-150.us-west-2.compute.amazonaws.com,52.52.41.150' (ECDSA) to the list of known hosts.
Last login: Mon Jun 1 23:15:38 2015
__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2015.03-release-notes/
26 package(s) needed for security, out of 39 available
Run "sudo yum update" to apply all updates.
--------------------------------------------------------------------------------
Welcome to Amazon Elastic MapReduce running Hadoop and Amazon Linux.
Hadoop is installed in /home/hadoop. Log files are in /mnt/var/log/hadoop. Check
/mnt/var/log/hadoop/steps for diagnosing step failures.
The Hadoop UI can be accessed via the following commands:
ResourceManager lynx http://ip-172-21-11-216:9026/
NameNode lynx http://ip-172-21-11-216:9101/
--------------------------------------------------------------------------------
[hadoop@ip-172-31-16-216 ~]$
- Command::
aws emr describe-cluster --cluster-id j-XXXXXXXX
- Output::
For a release-label-based cluster:
{
"Cluster": {
"Status": {
"Timeline": {
"ReadyDateTime": 1436475075.199,
"CreationDateTime": 1436474656.563
},
"State": "WAITING",
"StateChangeReason": {
"Message": "Waiting for steps to run"
}
},
"Ec2InstanceAttributes": {
"ServiceAccessSecurityGroup": "sg-xxxxxxxx",
"EmrManagedMasterSecurityGroup": "sg-xxxxxxxx",
"IamInstanceProfile": "EMR_EC2_DefaultRole",
"Ec2KeyName": "myKey",
"Ec2AvailabilityZone": "us-east-1c",
"EmrManagedSlaveSecurityGroup": "sg-yyyyyyyyy"
},
"Name": "My Cluster",
"ServiceRole": "EMR_DefaultRole",
"Tags": [],
"TerminationProtected": true,
"ReleaseLabel": "emr-4.0.0",
"NormalizedInstanceHours": 96,
"InstanceGroups": [
{
"RequestedInstanceCount": 2,
"Status": {
"Timeline": {
"ReadyDateTime": 1436475074.245,
"CreationDateTime": 1436474656.564,
"EndDateTime": 1436638158.387
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "CORE",
"InstanceGroupType": "CORE",
"Id": "ig-YYYYYYY",
"Configurations": [],
"InstanceType": "m3.large",
"Market": "ON_DEMAND",
"RunningInstanceCount": 2
},
{
"RequestedInstanceCount": 1,
"Status": {
"Timeline": {
"ReadyDateTime": 1436475074.245,
"CreationDateTime": 1436474656.564,
"EndDateTime": 1436638158.387
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "MASTER",
"InstanceGroupType": "MASTER",
"Id": "ig-XXXXXXXXX",
"Configurations": [],
"InstanceType": "m3.large",
"Market": "ON_DEMAND",
"RunningInstanceCount": 1
}
],
"Applications": [
{
"Name": "Hadoop"
}
],
"VisibleToAllUsers": true,
"BootstrapActions": [],
"MasterPublicDnsName": "ec2-54-147-144-78.compute-1.amazonaws.com",
"AutoTerminate": false,
"Id": "j-XXXXXXXX",
"Configurations": [
{
"Properties": {
"fs.s3.consistent.retryPeriodSeconds": "20",
"fs.s3.enableServerSideEncryption": "true",
"fs.s3.consistent": "false",
"fs.s3.consistent.retryCount": "2"
},
"Classification": "emrfs-site"
}
]
}
}
For an AMI-based cluster:
{
"Cluster": {
"Status": {
"Timeline": {
"ReadyDateTime": 1399400564.432,
"CreationDateTime": 1399400268.62
},
"State": "WAITING",
"StateChangeReason": {
"Message": "Waiting for steps to run"
}
},
"Ec2InstanceAttributes": {
"IamInstanceProfile": "EMR_EC2_DefaultRole",
"Ec2AvailabilityZone": "us-east-1c"
},
"Name": "My Cluster",
"Tags": [],
"TerminationProtected": true,
"RunningAmiVersion": "2.5.4",
"InstanceGroups": [
{
"RequestedInstanceCount": 1,
"Status": {
"Timeline": {
"ReadyDateTime": 1399400558.848,
"CreationDateTime": 1399400268.621
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "Master instance group",
"InstanceGroupType": "MASTER",
"InstanceType": "m1.small",
"Id": "ig-ABCD",
"Market": "ON_DEMAND",
"RunningInstanceCount": 1
},
{
"RequestedInstanceCount": 2,
"Status": {
"Timeline": {
"ReadyDateTime": 1399400564.439,
"CreationDateTime": 1399400268.621
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "Core instance group",
"InstanceGroupType": "CORE",
"InstanceType": "m1.small",
"Id": "ig-DEF",
"Market": "ON_DEMAND",
"RunningInstanceCount": 2
}
],
"Applications": [
{
"Version": "1.0.3",
"Name": "hadoop"
}
],
"BootstrapActions": [],
"VisibleToAllUsers": false,
"RequestedAmiVersion": "2.4.2",
"LogUri": "s3://myLogUri/",
"AutoTerminate": false,
"Id": "j-XXXXXXXX"
}
}
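Cluster-level totals can be derived from ``InstanceGroups``; a sketch over abbreviated fields from the release-label example above:

```python
# Abbreviated instance groups from the describe-cluster output above
instance_groups = [
    {"InstanceGroupType": "CORE", "InstanceType": "m3.large", "RunningInstanceCount": 2},
    {"InstanceGroupType": "MASTER", "InstanceType": "m3.large", "RunningInstanceCount": 1},
]

# Total running instances, and the instance type used by each group role
total_running = sum(g["RunningInstanceCount"] for g in instance_groups)
types = {g["InstanceGroupType"]: g["InstanceType"] for g in instance_groups}
print(total_running)    # 3
print(types["MASTER"])  # m3.large
```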
Note: some of these examples assume you have specified your EMR service role and EC2 instance profile in the AWS CLI configuration file. If you have not done this, you must specify each required IAM role or use the ``--use-default-roles`` parameter when creating your cluster. You can learn more about specifying parameter values for EMR commands here:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-aws-cli-config.html
**1. Quick start: to create an Amazon EMR cluster**
- Command::
aws emr create-cluster --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**2. Create an Amazon EMR cluster with ServiceRole and InstanceProfile**
- Command::
aws emr create-cluster --release-label emr-4.0.0 --service-role EMR_DefaultRole --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
**3. Create an Amazon EMR cluster with default roles**
- Command::
aws emr create-cluster --release-label emr-4.0.0 --use-default-roles --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**4. Create an Amazon EMR cluster with MASTER, CORE, and TASK instance groups**
- Command::
aws emr create-cluster --release-label emr-4.0.0 --auto-terminate --instance-groups Name=Master,InstanceGroupType=MASTER,InstanceType=m3.xlarge,InstanceCount=1 Name=Core,InstanceGroupType=CORE,InstanceType=m3.xlarge,InstanceCount=2 Name=Task,InstanceGroupType=TASK,InstanceType=m3.xlarge,InstanceCount=2
**5. Specify whether the cluster should terminate after completing all the steps**
- Create an Amazon EMR cluster that will terminate after completing all the steps::
aws emr create-cluster --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**6. Specify EC2 Attributes**
- Create an Amazon EMR cluster with Amazon EC2 Key Pair "myKey" and instance profile "myProfile"::
aws emr create-cluster --ec2-attributes KeyName=myKey,InstanceProfile=myProfile --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Create an Amazon EMR cluster in an Amazon VPC subnet::
aws emr create-cluster --ec2-attributes SubnetId=subnet-xxxxx --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Create an Amazon EMR cluster in an AvailabilityZone. For example, us-east-1b::
aws emr create-cluster --ec2-attributes AvailabilityZone=us-east-1b --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Create an Amazon EMR cluster specifying the Amazon EC2 security groups::
aws emr create-cluster --release-label emr-4.0.0 --service-role myServiceRole --ec2-attributes InstanceProfile=myRole,EmrManagedMasterSecurityGroup=sg-master1,EmrManagedSlaveSecurityGroup=sg-slave1,AdditionalMasterSecurityGroups=[sg-addMaster1,sg-addMaster2,sg-addMaster3,sg-addMaster4],AdditionalSlaveSecurityGroups=[sg-addSlave1,sg-addSlave2,sg-addSlave3,sg-addSlave4] --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Create an Amazon EMR cluster specifying only the EMR managed Amazon EC2 security groups::
aws emr create-cluster --release-label emr-4.0.0 --service-role myServiceRole --ec2-attributes InstanceProfile=myRole,EmrManagedMasterSecurityGroup=sg-master1,EmrManagedSlaveSecurityGroup=sg-slave1 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Create an Amazon EMR cluster specifying only the additional Amazon EC2 security groups::
aws emr create-cluster --release-label emr-4.0.0 --service-role myServiceRole --ec2-attributes InstanceProfile=myRole,AdditionalMasterSecurityGroups=[sg-addMaster1,sg-addMaster2,sg-addMaster3,sg-addMaster4],AdditionalSlaveSecurityGroups=[sg-addSlave1,sg-addSlave2,sg-addSlave3,sg-addSlave4] --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Create an Amazon EMR cluster in a VPC private subnet and use a specific Amazon EC2 security group to enable the Amazon EMR service access (required for clusters in private subnets)::
aws emr create-cluster --release-label emr-4.2.0 --service-role myServiceRole --ec2-attributes InstanceProfile=myRole,ServiceAccessSecurityGroup=sg-service-access,EmrManagedMasterSecurityGroup=sg-master,EmrManagedSlaveSecurityGroup=sg-slave --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- JSON equivalent (contents of ec2_attributes.json)::
[
{
"SubnetId": "subnet-xxxxx",
"KeyName": "myKey",
"InstanceProfile":"myRole",
"EmrManagedMasterSecurityGroup": "sg-master1",
"EmrManagedSlaveSecurityGroup": "sg-slave1",
"ServiceAccessSecurityGroup": "sg-service-access",
"AdditionalMasterSecurityGroups": ["sg-addMaster1","sg-addMaster2","sg-addMaster3","sg-addMaster4"],
"AdditionalSlaveSecurityGroups": ["sg-addSlave1","sg-addSlave2","sg-addSlave3","sg-addSlave4"]
}
]
NOTE: JSON arguments must include options and values as their own items in the list.
- Command (using ec2_attributes.json)::
aws emr create-cluster --release-label emr-4.0.0 --service-role myServiceRole --ec2-attributes file://./ec2_attributes.json --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
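Hand-edited JSON files such as ``ec2_attributes.json`` are easy to break with a missing comma. A quick local parse before launching the cluster catches that; the following is a sketch only, and all IDs are the placeholders from the example above:

```shell
# Recreate ec2_attributes.json from the example and verify it parses
# before handing it to --ec2-attributes (all IDs are placeholders):
cat > ec2_attributes.json <<'EOF'
[
  {
    "SubnetId": "subnet-xxxxx",
    "KeyName": "myKey",
    "InstanceProfile": "myRole",
    "EmrManagedMasterSecurityGroup": "sg-master1",
    "EmrManagedSlaveSecurityGroup": "sg-slave1",
    "ServiceAccessSecurityGroup": "sg-service-access",
    "AdditionalMasterSecurityGroups": ["sg-addMaster1", "sg-addMaster2"],
    "AdditionalSlaveSecurityGroups": ["sg-addSlave1", "sg-addSlave2"]
  }
]
EOF
python3 -m json.tool ec2_attributes.json > /dev/null && echo "valid JSON"
```

If the file fails to parse, ``json.tool`` points at the offending line, which is faster than having ``create-cluster`` reject the whole call.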
**7. Enable debugging and specify a Log URI**
- Command::
aws emr create-cluster --enable-debugging --log-uri s3://myBucket/myLog --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**8. Add tags when creating an Amazon EMR cluster**
- Add a list of tags::
aws emr create-cluster --tags name="John Doe" age=29 address="123 East NW Seattle" --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
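Because the ``name`` and ``address`` values above contain spaces, quoting matters: each ``key=value`` pair must reach the CLI as a single argument. This can be checked locally with ``printf``, which applies its format once per argument:

```shell
# Each quoted key=value pair is delivered to the command as one argument;
# the angle brackets make the argument boundaries visible:
printf '<%s>\n' name="John Doe" age=29 address="123 East NW Seattle"
```

This prints three lines, one per tag, confirming the shell did not split ``John Doe`` into two arguments.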
- List tags of an Amazon EMR cluster::
aws emr describe-cluster --cluster-id j-XXXXXXYY --query Cluster.Tags
**9. Add a list of bootstrap actions when creating an Amazon EMR Cluster**
- Command::
aws emr create-cluster --bootstrap-actions Path=s3://mybucket/myscript1,Name=BootstrapAction1,Args=[arg1,arg2] Path=s3://mybucket/myscript2,Name=BootstrapAction2,Args=[arg1,arg2] --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**10. Configure Hadoop MapReduce component in an EMR release**
The following example changes the maximum number of map tasks, sets the DataNode heap size, and sets a NameNode JVM option:
- Specifying configurations from a local file::
aws emr create-cluster --configurations file://configurations.json --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Specifying configurations from a file in Amazon S3::
aws emr create-cluster --configurations https://s3.amazonaws.com/myBucket/configurations.json --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Contents of configurations.json::
[
{
"Classification": "mapred-site",
"Properties": {
"mapred.tasktracker.map.tasks.maximum": "2"
}
},
{
"Classification": "hadoop-env",
"Properties": {},
"Configurations": [
{
"Classification": "export",
"Properties": {
"HADOOP_DATANODE_HEAPSIZE": "2048",
"HADOOP_NAMENODE_OPTS": "-XX:GCTimeRatio=19"
}
}
]
}
]
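A detail that is easy to miss: the values in ``Properties`` must be JSON strings, even for numeric settings. A local sketch that writes the file and verifies it parses:

```shell
# Write configurations.json with all property values as strings, then
# confirm the file parses and list the classifications it defines:
cat > configurations.json <<'EOF'
[
  {
    "Classification": "mapred-site",
    "Properties": {
      "mapred.tasktracker.map.tasks.maximum": "2"
    }
  },
  {
    "Classification": "hadoop-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "HADOOP_DATANODE_HEAPSIZE": "2048",
          "HADOOP_NAMENODE_OPTS": "-XX:GCTimeRatio=19"
        }
      }
    ]
  }
]
EOF
python3 -c '
import json
configs = json.load(open("configurations.json"))
print(",".join(c["Classification"] for c in configs))'
```

The final command prints ``mapred-site,hadoop-env``, confirming both top-level classifications are present.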
**11. Create an Amazon EMR cluster with applications**
- Create an Amazon EMR cluster with Hadoop, Hive and Pig installed::
aws emr create-cluster --applications Name=Hadoop Name=Hive Name=Pig --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Create an Amazon EMR cluster with Spark installed::
aws emr create-cluster --release-label emr-4.0.0 --applications Name=Spark --ec2-attributes KeyName=myKey --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Create an Amazon EMR cluster with MapR M7 edition::
aws emr create-cluster --applications Name=MapR,Args=--edition,m7,--version,4.0.2 --ami-version 3.3.2 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**12. Restore HBase data from backup when creating an Amazon EMR cluster**
Restoring from an HBase backup is supported only with AMI versions (``--ami-version``), not with release labels.
- Command::
aws emr create-cluster --applications Name=HBase --restore-from-hbase-backup Dir=s3://myBucket/myBackup,BackupVersion=myBackupVersion --ami-version 3.1.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**13. To add Custom JAR steps to a cluster when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://myBucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://myBucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3 --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Custom JAR steps required parameters::
Jar
- Custom JAR steps optional parameters::
Type, Name, ActionOnFailure, Args
**14. To add Streaming steps when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --steps Type=STREAMING,Name='Streaming Program',ActionOnFailure=CONTINUE,Args=[-files,s3://elasticmapreduce/samples/wordcount/wordSplitter.py,-mapper,wordSplitter.py,-reducer,aggregate,-input,s3://elasticmapreduce/samples/wordcount/input,-output,s3://mybucket/wordcount/output] --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
- Streaming steps required parameters::
Type, Args
- Streaming steps optional parameters::
Name, ActionOnFailure
- JSON equivalent (contents of step.json)::
[
{
"Name": "JSON Streaming Step",
"Args": ["-files","s3://elasticmapreduce/samples/wordcount/wordSplitter.py","-mapper","wordSplitter.py","-reducer","aggregate","-input","s3://elasticmapreduce/samples/wordcount/input","-output","s3://mybucket/wordcount/output"],
"ActionOnFailure": "CONTINUE",
"Type": "STREAMING"
}
]
NOTE: JSON arguments must include options and values as their own items in the list.
- Command (using step.json)::
aws emr create-cluster --steps file://./step.json --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
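The NOTE above is the most common stumbling block: every option and value in ``Args`` must be its own list item. A local sketch that writes ``step.json`` and asserts that shape:

```shell
# Write step.json and check that the ten Args entries (five options, five
# values) are separate list items rather than one space-joined string:
cat > step.json <<'EOF'
[
  {
    "Name": "JSON Streaming Step",
    "Type": "STREAMING",
    "ActionOnFailure": "CONTINUE",
    "Args": ["-files", "s3://elasticmapreduce/samples/wordcount/wordSplitter.py",
             "-mapper", "wordSplitter.py",
             "-reducer", "aggregate",
             "-input", "s3://elasticmapreduce/samples/wordcount/input",
             "-output", "s3://mybucket/wordcount/output"]
  }
]
EOF
python3 -c '
import json
step = json.load(open("step.json"))[0]
assert len(step["Args"]) == 10, "each option and value must be its own item"
print("ok")'
```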
**15. To use multiple files in a Streaming step (JSON only)**
- JSON (multiplefiles.json)::
[
{
"Name": "JSON Streaming Step",
"Type": "STREAMING",
"ActionOnFailure": "CONTINUE",
"Args": [
"-files",
"s3://mybucket/mapper.py,s3://mybucket/reducer.py",
"-mapper",
"mapper.py",
"-reducer",
"reducer.py",
"-input",
"s3://mybucket/input",
"-output",
"s3://mybucket/output"]
}
]
- Command::
aws emr create-cluster --steps file://./multiplefiles.json --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
**16. To add Hive steps when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://mybucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs] --applications Name=Hive --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Hive steps required parameters::
Type, Args
- Hive steps optional parameters::
Name, ActionOnFailure
**17. To add Pig steps when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --steps Type=PIG,Name='Pig program',ActionOnFailure=CONTINUE,Args=[-f,s3://elasticmapreduce/samples/pig-apache/do-reports2.pig,-p,INPUT=s3://elasticmapreduce/samples/pig-apache/input,-p,OUTPUT=s3://mybucket/pig-apache/output] --applications Name=Pig --release-label emr-4.0.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Pig steps required parameters::
Type, Args
- Pig steps optional parameters::
Name, ActionOnFailure
**18. To add Impala steps when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --steps Type=CUSTOM_JAR,Name='Wikipedia Impala program',ActionOnFailure=CONTINUE,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args="/home/hadoop/impala/examples/wikipedia/wikipedia-with-s3distcp.sh" Type=IMPALA,Name='Impala program',ActionOnFailure=CONTINUE,Args=-f,--impala-script,s3://myimpala/input,--console-output-path,s3://myimpala/output --applications Name=Impala --ami-version 3.1.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
- Impala steps required parameters::
Type, Args
- Impala steps optional parameters::
Name, ActionOnFailure
**19. To enable consistent view in EMRFS and change the RetryCount and Retry Period settings when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs Consistent=true,RetryCount=5,RetryPeriod=30
- Required parameters::
Consistent=true
- JSON equivalent (contents of emrfs.json)::
{
"Consistent": true,
"RetryCount": 5,
"RetryPeriod": 30
}
- Command (Using emrfs.json)::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs file://emrfs.json
**20. To enable consistent view with arguments, for example changing the DynamoDB read and write capacity, when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs Consistent=true,RetryCount=5,RetryPeriod=30,Args=[fs.s3.consistent.metadata.read.capacity=600,fs.s3.consistent.metadata.write.capacity=300]
- Required parameters::
Consistent=true
- JSON equivalent (contents of emrfs.json)::
{
"Consistent": true,
"RetryCount": 5,
"RetryPeriod": 30,
"Args":["fs.s3.consistent.metadata.read.capacity=600", "fs.s3.consistent.metadata.write.capacity=300"]
}
- Command (Using emrfs.json)::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs file://emrfs.json
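As with the other JSON inputs, writing ``emrfs.json`` by hand is error-prone; a sketch that recreates the file from example 20 and verifies the required ``Consistent`` flag survives the round trip:

```shell
# Write emrfs.json and confirm Consistent is the JSON boolean true,
# which corresponds to the required Consistent=true shorthand parameter:
cat > emrfs.json <<'EOF'
{
  "Consistent": true,
  "RetryCount": 5,
  "RetryPeriod": 30,
  "Args": ["fs.s3.consistent.metadata.read.capacity=600",
           "fs.s3.consistent.metadata.write.capacity=300"]
}
EOF
python3 -c '
import json
emrfs = json.load(open("emrfs.json"))
assert emrfs["Consistent"] is True
print("ok")'
```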
**21. To enable Amazon S3 server-side encryption in EMRFS when creating an Amazon EMR cluster**
- Command (Use Encryption=ServerSide)::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs Encryption=ServerSide
- Required parameters::
Encryption=ServerSide
- Optional parameters::
Args
- JSON equivalent (contents of emrfs.json)::
{
"Encryption": "ServerSide",
"Args": ["fs.s3.serverSideEncryptionAlgorithm=AES256"]
}
**22. To enable Amazon S3 client-side encryption using a key managed by AWS Key Management Service (KMS) in EMRFS when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs Encryption=ClientSide,ProviderType=KMS,KMSKeyId=myKMSKeyId
- Required parameters::
Encryption=ClientSide, ProviderType=KMS, KMSKeyId
- Optional parameters::
Args
- JSON equivalent (contents of emrfs.json)::
{
"Encryption": "ClientSide",
"ProviderType": "KMS",
"KMSKeyId": "myKMSKeyId"
}
**23. To enable Amazon S3 client-side encryption with a custom encryption provider in EMRFS when creating an Amazon EMR cluster**
- Command::
aws emr create-cluster --instance-type m3.xlarge --release-label emr-4.0.0 --emrfs Encryption=ClientSide,ProviderType=Custom,CustomProviderLocation=s3://mybucket/myfolder/provider.jar,CustomProviderClass=classname
- Required parameters::
Encryption=ClientSide, ProviderType=Custom, CustomProviderLocation, CustomProviderClass
- Optional parameters::
Args
- JSON equivalent (contents of emrfs.json)::
{
"Encryption": "ClientSide",
"ProviderType": "Custom",
"CustomProviderLocation": "s3://mybucket/myfolder/provider.jar",
"CustomProviderClass": "classname"
}
**24. To enable Amazon S3 client-side encryption with a custom encryption provider in EMRFS and pass arguments expected by the class**
- Command::
aws emr create-cluster --release-label emr-4.0.0 --instance-type m3.xlarge --instance-count 2 --emrfs Encryption=ClientSide,ProviderType=Custom,CustomProviderLocation=s3://mybucket/myfolder/myprovider.jar,CustomProviderClass=classname,Args=[myProvider.arg1=value1,myProvider.arg2=value2]
- Required parameters::
Encryption=ClientSide, ProviderType=Custom, CustomProviderLocation, CustomProviderClass
- Optional parameters::
Args (expected by CustomProviderClass, passed to emrfs-site.xml using configure-hadoop bootstrap action)
- JSON equivalent (contents of emrfs.json)::
{
"Encryption": "ClientSide",
"ProviderType": "Custom",
"CustomProviderLocation": "s3://mybucket/myfolder/provider.jar",
"CustomProviderClass": "classname",
"Args": ["myProvider.arg1=value1", "myProvider.arg2=value2"]
}

The following command uploads a file named ``healthcheck.sh`` to the master instance in a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr put --cluster-id j-3SD91U2E1L2QX --key-pair-file ~/.ssh/mykey.pem --src ~/scripts/healthcheck.sh --dest /home/hadoop/bin/healthcheck.sh
The following downloads the ``hadoop-examples.jar`` archive from the master instance in a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr get --cluster-id j-3SD91U2E1L2QX --key-pair-file ~/.ssh/mykey.pem --src /home/hadoop-examples.jar --dest ~
**1. To add Custom JAR steps to a cluster**
- Command::
aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3
- Required parameters::
Jar
- Optional parameters::
Type, Name, ActionOnFailure, Args
- Output::
{
"StepIds":[
"s-XXXXXXXX",
"s-YYYYYYYY"
]
}
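Scripts that submit steps usually need the returned IDs for later ``describe-step`` or ``wait`` calls. The CLI can emit them directly with ``--query StepIds --output text``; the equivalent parsing of the sample response above can be sketched locally (no AWS call involved):

```shell
# Extract the step IDs from the sample add-steps response:
response='{"StepIds":["s-XXXXXXXX","s-YYYYYYYY"]}'
echo "$response" | python3 -c '
import json, sys
for step_id in json.load(sys.stdin)["StepIds"]:
    print(step_id)'
```

This prints the two placeholder IDs, one per line.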
**2. To add Streaming steps to a cluster**
- Command::
aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=STREAMING,Name='Streaming Program',ActionOnFailure=CONTINUE,Args=[-files,s3://elasticmapreduce/samples/wordcount/wordSplitter.py,-mapper,wordSplitter.py,-reducer,aggregate,-input,s3://elasticmapreduce/samples/wordcount/input,-output,s3://mybucket/wordcount/output]
- Required parameters::
Type, Args
- Optional parameters::
Name, ActionOnFailure
- JSON equivalent (contents of step.json)::
[
{
"Name": "JSON Streaming Step",
"Args": ["-files","s3://elasticmapreduce/samples/wordcount/wordSplitter.py","-mapper","wordSplitter.py","-reducer","aggregate","-input","s3://elasticmapreduce/samples/wordcount/input","-output","s3://mybucket/wordcount/output"],
"ActionOnFailure": "CONTINUE",
"Type": "STREAMING"
}
]
NOTE: JSON arguments must include options and values as their own items in the list.
- Command (using step.json)::
aws emr add-steps --cluster-id j-XXXXXXXX --steps file://./step.json
- Output::
{
"StepIds":[
"s-XXXXXXXX",
"s-YYYYYYYY"
]
}
**3. To add a Streaming step with multiple files to a cluster (JSON only)**
- JSON (multiplefiles.json)::
[
{
"Name": "JSON Streaming Step",
"Type": "STREAMING",
"ActionOnFailure": "CONTINUE",
"Args": [
"-files",
"s3://mybucket/mapper.py,s3://mybucket/reducer.py",
"-mapper",
"mapper.py",
"-reducer",
"reducer.py",
"-input",
"s3://mybucket/input",
"-output",
"s3://mybucket/output"]
}
]
- Command::
aws emr add-steps --cluster-id j-XXXXXXXX --steps file://./multiplefiles.json
- Required parameters::
Type, Args
- Optional parameters::
Name, ActionOnFailure
- Output::
{
"StepIds":[
"s-XXXXXXXX",
]
}
**4. To add Hive steps to a cluster**
- Command::
aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,Args=[-f,s3://mybucket/myhivescript.q,-d,INPUT=s3://mybucket/myhiveinput,-d,OUTPUT=s3://mybucket/myhiveoutput,arg1,arg2] Type=HIVE,Name='Hive steps',ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://mybucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs]
- Required parameters::
Type, Args
- Optional parameters::
Name, ActionOnFailure
- Output::
{
"StepIds":[
"s-XXXXXXXX",
"s-YYYYYYYY"
]
}
**5. To add Pig steps to a cluster**
- Command::
aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=PIG,Name='Pig program',ActionOnFailure=CONTINUE,Args=[-f,s3://mybucket/mypigscript.pig,-p,INPUT=s3://mybucket/mypiginput,-p,OUTPUT=s3://mybucket/mypigoutput,arg1,arg2] Type=PIG,Name='Pig program',Args=[-f,s3://elasticmapreduce/samples/pig-apache/do-reports2.pig,-p,INPUT=s3://elasticmapreduce/samples/pig-apache/input,-p,OUTPUT=s3://mybucket/pig-apache/output,arg1,arg2]
- Required parameters::
Type, Args
- Optional parameters::
Name, ActionOnFailure
- Output::
{
"StepIds":[
"s-XXXXXXXX",
"s-YYYYYYYY"
]
}
**6. To add Impala steps to a cluster**
- Command::
aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=IMPALA,Name='Impala program',ActionOnFailure=CONTINUE,Args=--impala-script,s3://myimpala/input,--console-output-path,s3://myimpala/output
- Required parameters::
Type, Args
- Optional parameters::
Name, ActionOnFailure
- Output::
{
"StepIds":[
"s-XXXXXXXX",
"s-YYYYYYYY"
]
}
create-cluster
--release-label <value> | --ami-version <value>
--instance-type <value> | --instance-groups <value>
--instance-count <value>
[--auto-terminate | --no-auto-terminate]
[--use-default-roles]
[--service-role <value>]
[--configurations <value>]
[--name <value>]
[--log-uri <value>]
[--additional-info <value>]
[--ec2-attributes <value>]
[--termination-protected | --no-termination-protected]
[--visible-to-all-users | --no-visible-to-all-users]
[--enable-debugging | --no-enable-debugging]
[--tags <value>]
[--applications <value>]
[--emrfs <value>]
[--bootstrap-actions <value>]
[--steps <value>]
[--restore-from-hbase-backup <value>]
**1. To schedule a full HBase backup**
- Command::
aws emr schedule-hbase-backup --cluster-id j-XXXXXXYY --type full --dir s3://myBucket/backup --interval 10 --unit hours --start-time 2014-04-21T05:26:10Z --consistent
- Output::
None
**2. To schedule an incremental HBase backup**
- Command::
aws emr schedule-hbase-backup --cluster-id j-XXXXXXYY --type incremental --dir s3://myBucket/backup --interval 30 --unit minutes --start-time 2014-04-21T05:26:10Z --consistent
- Output::
None

The following command waits until a cluster with the cluster ID ``j-3SD91U2E1L2QX`` is up and running::
aws emr wait cluster-running --cluster-id j-3SD91U2E1L2QX
The following command lists all active EMR clusters in the current region::
aws emr list-clusters --active
Output::
{
"Clusters": [
{
"Status": {
"Timeline": {
"ReadyDateTime": 1433200405.353,
"CreationDateTime": 1433199926.596
},
"State": "WAITING",
"StateChangeReason": {
"Message": "Waiting after step completed"
}
},
"NormalizedInstanceHours": 6,
"Id": "j-3SD91U2E1L2QX",
"Name": "my-cluster"
}
]
}
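When only the IDs matter, the server-side filter ``--query 'Clusters[].Id' --output text`` avoids any post-processing. For completeness, the same extraction over a trimmed form of the sample response above looks like this:

```shell
# Pull the cluster IDs out of a (trimmed) list-clusters response:
response='{"Clusters":[{"Id":"j-3SD91U2E1L2QX","Name":"my-cluster"}]}'
echo "$response" | python3 -c '
import json, sys
for cluster in json.load(sys.stdin)["Clusters"]:
    print(cluster["Id"])'
```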
The following command removes a tag with the key ``prod`` from a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr remove-tags --resource-id j-3SD91U2E1L2QX --tag-keys prod
**1. To create the default IAM role for EC2**
- Command::
aws emr create-default-roles
- Output::
If the roles already exist, the command returns nothing.
If they do not exist, the output will be:
[
{
"RolePolicy": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:*",
"dynamodb:*",
"ec2:Describe*",
"elasticmapreduce:Describe*",
"elasticmapreduce:ListBootstrapActions",
"elasticmapreduce:ListClusters",
"elasticmapreduce:ListInstanceGroups",
"elasticmapreduce:ListInstances",
"elasticmapreduce:ListSteps",
"kinesis:CreateStream",
"kinesis:DeleteStream",
"kinesis:DescribeStream",
"kinesis:GetRecords",
"kinesis:GetShardIterator",
"kinesis:MergeShards",
"kinesis:PutRecord",
"kinesis:SplitShard",
"rds:Describe*",
"s3:*",
"sdb:*",
"sns:*",
"sqs:*"
],
"Resource": "*",
"Effect": "Allow"
}
]
},
"Role": {
"AssumeRolePolicyDocument": {
"Version": "2008-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
},
"RoleId": "AROAIQ5SIQUGL5KMYBJX6",
"CreateDate": "2015-06-09T17:09:04.602Z",
"RoleName": "EMR_EC2_DefaultRole",
"Path": "/",
"Arn": "arn:aws:iam::176430881729:role/EMR_EC2_DefaultRole"
}
},
{
"RolePolicy": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CancelSpotInstanceRequests",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:DeleteTags",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeAccountAttributes",
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"ec2:DescribeKeyPairs",
"ec2:DescribePrefixLists",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSpotInstanceRequests",
"ec2:DescribeSpotPriceHistory",
"ec2:DescribeSubnets",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcEndpoints",
"ec2:DescribeVpcEndpointServices",
"ec2:DescribeVpcs",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:RequestSpotInstances",
"ec2:RunInstances",
"ec2:TerminateInstances",
"iam:GetRole",
"iam:GetRolePolicy",
"iam:ListInstanceProfiles",
"iam:ListRolePolicies",
"iam:PassRole",
"s3:CreateBucket",
"s3:Get*",
"s3:List*",
"sdb:BatchPutAttributes",
"sdb:Select",
"sqs:CreateQueue",
"sqs:Delete*",
"sqs:GetQueue*",
"sqs:ReceiveMessage"
],
"Resource": "*",
"Effect": "Allow"
}
]
},
"Role": {
"AssumeRolePolicyDocument": {
"Version": "2008-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "elasticmapreduce.amazonaws.com"
}
}
]
},
"RoleId": "AROAI3SRVPPVSRDLARBPY",
"CreateDate": "2015-06-09T17:09:10.401Z",
"RoleName": "EMR_DefaultRole",
"Path": "/",
"Arn": "arn:aws:iam::176430881729:role/EMR_DefaultRole"
}
}
]
The following command opens a SOCKS connection with the master instance in a cluster with the cluster ID ``j-3SD91U2E1L2QX``::
aws emr socks --cluster-id j-3SD91U2E1L2QX --key-pair-file ~/.ssh/mykey.pem
The key pair file option takes a local path to a private key file.

The following command lists all of the instances in a cluster with the cluster ID ``j-3C6XNQ39VR9WL``::
aws emr list-instances --cluster-id j-3C6XNQ39VR9WL
Output::
{
"Instances": [
{
"Status": {
"Timeline": {
"ReadyDateTime": 1433200400.03,
"CreationDateTime": 1433199960.152
},
"State": "RUNNING",
"StateChangeReason": {}
},
"Ec2InstanceId": "i-f19ecfee",
"PublicDnsName": "ec2-52-52-41-150.us-west-2.compute.amazonaws.com",
"PrivateDnsName": "ip-172-21-11-216.us-west-2.compute.internal",
"PublicIpAddress": "52.52.41.150",
"Id": "ci-3NNHQUQ2TWB6Y",
"PrivateIpAddress": "172.21.11.216"
},
{
"Status": {
"Timeline": {
"ReadyDateTime": 1433200400.031,
"CreationDateTime": 1433199949.102
},
"State": "RUNNING",
"StateChangeReason": {}
},
"Ec2InstanceId": "i-1feee4c2",
"PublicDnsName": "ec2-52-63-246-32.us-west-2.compute.amazonaws.com",
"PrivateDnsName": "ip-172-31-24-130.us-west-2.compute.internal",
"PublicIpAddress": "52.63.246.32",
"Id": "ci-GAOCMKNKDCV7",
"PrivateIpAddress": "172.21.11.215"
},
{
"Status": {
"Timeline": {
"ReadyDateTime": 1433200400.031,
"CreationDateTime": 1433199949.102
},
"State": "RUNNING",
"StateChangeReason": {}
},
"Ec2InstanceId": "i-15cfeee3",
"PublicDnsName": "ec2-52-25-246-63.us-west-2.compute.amazonaws.com",
"PrivateDnsName": "ip-172-31-24-129.us-west-2.compute.internal",
"PublicIpAddress": "52.25.246.63",
"Id": "ci-2W3TDFFB47UAD",
"PrivateIpAddress": "172.21.11.214"
}
]
}
awscli-1.10.1/awscli/examples/storagegateway/ 0000777 4542626 0000144 00000000000 12652514126 022260 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/storagegateway/describe-gateway-information.rst 0000666 4542626 0000144 00000001221 12652514124 030546 0 ustar pysdk-ci amazon 0000000 0000000 **To describe a gateway**
The following ``describe-gateway-information`` command returns metadata about the specified gateway.
To specify which gateway to describe, use the Amazon Resource Name (ARN) of the gateway in the command.
This example specifies a gateway with the id ``sgw-12A3456B`` in account ``123456789012``::
aws storagegateway describe-gateway-information --gateway-arn "arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-12A3456B"
This command outputs a JSON block that contains metadata about the gateway such as its name,
network interfaces, configured time zone, and the state (whether the gateway is running or not).
awscli-1.10.1/awscli/examples/storagegateway/list-gateways.rst 0000666 4542626 0000144 00000000404 12652514124 025603 0 ustar pysdk-ci amazon 0000000 0000000 **To list gateways for an account**
The following ``list-gateways`` command lists all the gateways defined for an account::
aws storagegateway list-gateways
This command outputs a JSON block that contains a list of gateway Amazon Resource Names (ARNs).
awscli-1.10.1/awscli/examples/storagegateway/list-volumes.rst 0000666 4542626 0000144 00000001101 12652514124 025444 0 ustar pysdk-ci amazon 0000000 0000000 **To list the volumes configured for a gateway**
The following ``list-volumes`` command returns a list of volumes configured for the specified gateway.
To specify which gateway to describe, use the Amazon Resource Name (ARN) of the gateway in the command.
This example specifies a gateway with the id ``sgw-12A3456B`` in account ``123456789012``::
aws storagegateway list-volumes --gateway-arn "arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-12A3456B"
This command outputs a JSON block that contains a list of volumes, including the type and ARN for each volume.
The following command creates an SNS topic named ``my-topic``::
aws sns create-topic --name my-topic
Output::
{
"ResponseMetadata": {
"RequestId": "1469e8d7-1642-564e-b85d-a19b4b341f83"
},
"TopicArn": "arn:aws:sns:us-west-2:0123456789012:my-topic"
}
For more information, see `Using the AWS Command Line Interface with Amazon SQS and Amazon SNS`_ in the *AWS Command Line Interface User Guide*.
.. _`Using the AWS Command Line Interface with Amazon SQS and Amazon SNS`: http://docs.aws.amazon.com/cli/latest/userguide/cli-sqs-queue-sns-topic.html
The following command retrieves a list of subscriptions for an SNS topic::
aws sns list-subscriptions-by-topic --topic-arn "arn:aws:sns:us-west-2:0123456789012:my-topic"
Output::
{
"Subscriptions": [
{
"Owner": "0123456789012",
"Endpoint": "my-email@example.com",
"Protocol": "email",
"TopicArn": "arn:aws:sns:us-west-2:0123456789012:my-topic",
"SubscriptionArn": "arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f"
}
]
}
awscli-1.10.1/awscli/examples/sns/get-topic-attributes.rst 0000666 4542626 0000144 00000002430 12652514124 024643 0 ustar pysdk-ci amazon 0000000 0000000 The following command gets the attributes of a topic named ``my-topic``::
aws sns get-topic-attributes --topic-arn "arn:aws:sns:us-west-2:0123456789012:my-topic"
Output::
{
"Attributes": {
"SubscriptionsConfirmed": "1",
"DisplayName": "my-topic",
"SubscriptionsDeleted": "0",
"EffectiveDeliveryPolicy": "{\"http\":{\"defaultHealthyRetryPolicy\":{\"minDelayTarget\":20,\"maxDelayTarget\":20,\"numRetries\":3,\"numMaxDelayRetries\":0,\"numNoDelayRetries\":0,\"numMinDelayRetries\":0,\"backoffFunction\":\"linear\"},\"disableSubscriptionOverrides\":false}}",
"Owner": "0123456789012",
"Policy": "{\"Version\":\"2008-10-17\",\"Id\":\"__default_policy_ID\",\"Statement\":[{\"Sid\":\"__default_statement_ID\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"*\"},\"Action\":[\"SNS:Subscribe\",\"SNS:ListSubscriptionsByTopic\",\"SNS:DeleteTopic\",\"SNS:GetTopicAttributes\",\"SNS:Publish\",\"SNS:RemovePermission\",\"SNS:AddPermission\",\"SNS:Receive\",\"SNS:SetTopicAttributes\"],\"Resource\":\"arn:aws:sns:us-west-2:0123456789012:my-topic\",\"Condition\":{\"StringEquals\":{\"AWS:SourceOwner\":\"0123456789012\"}}}]}",
"TopicArn": "arn:aws:sns:us-west-2:0123456789012:my-topic",
"SubscriptionsPending": "0"
}
}
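Note that attributes such as ``Policy`` and ``EffectiveDeliveryPolicy`` arrive as JSON-encoded strings inside the JSON response, so they need a second decode before their fields can be read. A sketch with a trimmed sample:

```shell
# The Policy attribute is a string containing JSON, so decode twice:
attrs='{"Attributes":{"Policy":"{\"Version\":\"2008-10-17\",\"Id\":\"__default_policy_ID\"}"}}'
printf '%s\n' "$attrs" | python3 -c '
import json, sys
attributes = json.load(sys.stdin)["Attributes"]
policy = json.loads(attributes["Policy"])  # second decode of the embedded string
print(policy["Version"])'
```

This prints ``2008-10-17``, the ``Version`` field of the embedded policy document.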
}

The following command gets the attributes of a subscription to a topic named ``my-topic``::
aws sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f"
The ``subscription-arn`` is available in the output of ``aws sns list-subscriptions``.
Output::
{
"Attributes": {
"Endpoint": "my-email@example.com",
"Protocol": "email",
"RawMessageDelivery": "false",
"ConfirmationWasAuthenticated": "false",
"Owner": "0123456789012",
"SubscriptionArn": "arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f",
"TopicArn": "arn:aws:sns:us-west-2:0123456789012:my-topic"
}
}

awscli-1.10.1/awscli/examples/sns/publish.rst

The following command publishes a message to an SNS topic named ``my-topic``::
aws sns publish --topic-arn "arn:aws:sns:us-west-2:0123456789012:my-topic" --message file://message.txt
``message.txt`` is a text file containing the message to publish::
Hello World
Second Line
Putting the message in a text file allows you to include line breaks.

awscli-1.10.1/awscli/examples/sns/unsubscribe.rst

The following command deletes a subscription::
aws sns unsubscribe --subscription-arn "arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f"
awscli-1.10.1/awscli/examples/sns/list-subscriptions.rst

The following command retrieves a list of SNS subscriptions::
aws sns list-subscriptions
Output::
{
"Subscriptions": [
{
"Owner": "0123456789012",
"Endpoint": "my-email@example.com",
"Protocol": "email",
"TopicArn": "arn:aws:sns:us-west-2:0123456789012:my-topic",
"SubscriptionArn": "arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f"
}
]
}
awscli-1.10.1/awscli/examples/sns/list-topics.rst

The following command retrieves a list of SNS topics::
aws sns list-topics
Output::
{
"Topics": [
{
"TopicArn": "arn:aws:sns:us-west-2:0123456789012:my-topic"
}
]
}
awscli-1.10.1/awscli/examples/sns/delete-topic.rst

The following command deletes an SNS topic named ``my-topic``::
aws sns delete-topic --topic-arn "arn:aws:sns:us-west-2:0123456789012:my-topic"

awscli-1.10.1/awscli/examples/sns/subscribe.rst

The following command subscribes an email address to a topic::
aws sns subscribe --topic-arn arn:aws:sns:us-west-2:0123456789012:my-topic --protocol email --notification-endpoint my-email@example.com
Output::
{
"SubscriptionArn": "pending confirmation"
}
awscli-1.10.1/awscli/examples/sns/confirm-subscription.rst

The following command confirms a subscription to an SNS topic named ``my-topic``::
aws sns confirm-subscription --topic-arn arn:aws:sns:us-west-2:0123456789012:my-topic --token 2336412f37fb687f5d51e6e241d7700ae02f7124d8268910b858cb4db727ceeb2474bb937929d3bdd7ce5d0cce19325d036bc858d3c217426bcafa9c501a2cace93b83f1dd3797627467553dc438a8c974119496fc3eff026eaa5d14472ded6f9a5c43aec62d83ef5f49109da7176391
The token is included in the confirmation message sent to the notification endpoint specified in the subscribe call.
Output::
{
"SubscriptionArn": "arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f"
}
awscli-1.10.1/awscli/examples/opsworks/create-app.rst

**To create an app**
The following example creates a PHP app named SimplePHPApp from code stored in a GitHub repository.
The command uses the shorthand form of the application source definition. ::
aws opsworks --region us-east-1 create-app --stack-id f6673d70-32e6-4425-8999-265dd002fec7 --name SimplePHPApp --type php --app-source Type=git,Url=git://github.com/amazonwebservices/opsworks-demo-php-simple-app.git,Revision=version1
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"AppId": "6cf5163c-a951-444f-a8f7-3716be75f2a2"
}
**To create an app with an attached database**
The following example creates a JSP app from code stored in .zip archive in a public S3 bucket.
It attaches an RDS DB instance to serve as the app's data store. The application and database sources are defined in separate
JSON files that are in the directory from which you run the command. ::
aws opsworks --region us-east-1 create-app --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8 --name SimpleJSP --type java --app-source file://appsource.json --data-sources file://datasource.json
The application source information is in ``appsource.json`` and contains the following. ::
{
"Type": "archive",
"Url": "https://s3.amazonaws.com/jsp_example/simplejsp.zip"
}
The database source information is in ``datasource.json`` and contains the following. ::
[
{
"Type": "RdsDbInstance",
"Arn": "arn:aws:rds:us-west-2:123456789012:db:clitestdb",
"DatabaseName": "mydb"
}
]
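The two parameter files can also be generated programmatically. A minimal Python sketch (writing the same content as the example files above into the working directory) illustrates this:

```python
import json

# The same content as the two parameter files shown above.
app_source = {
    "Type": "archive",
    "Url": "https://s3.amazonaws.com/jsp_example/simplejsp.zip",
}
data_sources = [
    {
        "Type": "RdsDbInstance",
        "Arn": "arn:aws:rds:us-west-2:123456789012:db:clitestdb",
        "DatabaseName": "mydb",
    }
]

# Write the files that the create-app command references via the file:// prefix.
with open("appsource.json", "w") as f:
    json.dump(app_source, f, indent=2)
with open("datasource.json", "w") as f:
    json.dump(data_sources, f, indent=2)
```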
**Note**: For an RDS DB instance, you must first use ``register-rds-db-instance`` to register the instance with the stack.
For MySQL App Server instances, set ``Type`` to ``OpsworksMysqlInstance``. These instances are
created by AWS OpsWorks,
so they do not have to be registered.
*Output*::
{
"AppId": "26a61ead-d201-47e3-b55c-2a7c666942f8"
}
For more information, see `Adding Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Adding Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html
awscli-1.10.1/awscli/examples/opsworks/deregister-rds-db-instance.rst

**To deregister an Amazon RDS DB instance from a stack**
The following example deregisters an RDS DB instance, identified by its ARN, from its stack. ::
aws opsworks deregister-rds-db-instance --region us-east-1 --rds-db-instance-arn arn:aws:rds:us-west-2:123456789012:db:clitestdb
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Deregistering Amazon RDS Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering Amazon RDS Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-dereg.html#resources-dereg-rds
awscli-1.10.1/awscli/examples/opsworks/delete-app.rst

**To delete an app**
The following example deletes a specified app, which is identified by its app ID.
You can obtain an app ID by going to the app's details page on the AWS OpsWorks console or by
running the ``describe-apps`` command. ::
aws opsworks delete-app --region us-east-1 --app-id 577943b9-2ec1-4baf-a7bf-1d347601edc5
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps.html
awscli-1.10.1/awscli/examples/opsworks/update-rds-db-instance.rst

**To update a registered Amazon RDS DB instance**
The following example updates an Amazon RDS instance's master password value.
Note that this command does not change the RDS instance's master password, just the password that
you provide to AWS OpsWorks.
If this password does not match the RDS instance's password,
your application will not be able to connect to the database. ::
aws opsworks --region us-east-1 update-rds-db-instance --rds-db-instance-arn arn:aws:rds:us-west-2:123456789012:db:clitestdb --db-password 123456789
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Registering Amazon RDS Instances with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon RDS Instances with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-rds
awscli-1.10.1/awscli/examples/opsworks/register-rds-db-instance.rst

**To register an Amazon RDS instance with a stack**
The following example registers an Amazon RDS DB instance, identified by its Amazon Resource Name (ARN), with a specified stack.
It also specifies the instance's master username and password. Note that AWS OpsWorks does not validate either of these
values. If either one is incorrect, your application will not be able to connect to the database. ::
aws opsworks register-rds-db-instance --region us-east-1 --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06 --rds-db-instance-arn arn:aws:rds:us-west-2:123456789012:db:clitestdb --db-user cliuser --db-password some23!pwd
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Registering Amazon RDS Instances with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon RDS Instances with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-rds
awscli-1.10.1/awscli/examples/opsworks/assign-volume.rst

**To assign a registered volume to an instance**
The following example assigns a registered Amazon Elastic Block Store (Amazon EBS) volume to an instance.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigns when
you register the volume with a stack, not the Amazon Elastic Compute Cloud (Amazon EC2) volume ID.
Before you run ``assign-volume``, you must first run ``update-volume`` to assign a mount point to the volume. ::
aws opsworks --region us-east-1 assign-volume --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2 --volume-id 26cf1d32-6876-42fa-bbf1-9cadc0bff938
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Assigning Amazon EBS Volumes to an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Assigning Amazon EBS Volumes to an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-attach.html#resources-attach-ebs
awscli-1.10.1/awscli/examples/opsworks/update-layer.rst

**To update a layer**
The following example updates a specified layer to use Amazon EBS-optimized instances. ::
aws opsworks --region us-east-1 update-layer --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4 --use-ebs-optimized-instances
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Editing an OpsWorks Layer's Configuration`_ in the *AWS OpsWorks User Guide*.
.. _`Editing an OpsWorks Layer's Configuration`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html
awscli-1.10.1/awscli/examples/opsworks/associate-elastic-ip.rst

**To associate an Elastic IP address with an instance**
The following example associates an Elastic IP address with a specified instance. ::
aws opsworks --region us-east-1 associate-elastic-ip --instance-id dfe18b02-5327-493d-91a4-c5c0c448927f --elastic-ip 54.148.130.96
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.10.1/awscli/examples/opsworks/describe-elastic-ips.rst

**To describe Elastic IP addresses**
The following ``describe-elastic-ips`` command describes the Elastic IP addresses associated with a specified instance. ::
aws opsworks --region us-east-1 describe-elastic-ips --instance-id b62f3e04-e9eb-436c-a91f-d9e9a396b7b0
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"ElasticIps": [
{
"Ip": "192.0.2.0",
"Domain": "standard",
"Region": "us-west-2"
}
]
}
**More Information**
For more information, see Instances_ in the *AWS OpsWorks User Guide*.
.. _Instances: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances.html
awscli-1.10.1/awscli/examples/opsworks/set-time-based-auto-scaling.rst

**To set the time-based scaling configuration for an instance**
The following example sets the time-based configuration for a specified instance.
You must first use ``create-instance`` to add the instance to the layer. ::
aws opsworks --region us-east-1 set-time-based-auto-scaling --instance-id 69b6237c-08c0-4edb-a6af-78f3d01cedf2 --auto-scaling-schedule file://schedule.json
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
The example puts the schedule in a separate file in the working directory named ``schedule.json``.
For this example, the instance is on for a few hours around midday UTC (Coordinated Universal Time) on Monday and Tuesday. ::
{
"Monday": {
"10": "on",
"11": "on",
"12": "on",
"13": "on"
},
"Tuesday": {
"10": "on",
"11": "on",
"12": "on",
"13": "on"
}
}
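A schedule file like the one above can be generated rather than written by hand. A minimal Python sketch (building the same hours and days as the example, then writing ``schedule.json``) illustrates this:

```python
import json

# Build the same schedule as the JSON file above: the instance is on
# from 10:00 through 13:00 UTC on Monday and Tuesday.
hours_on = [str(h) for h in range(10, 14)]  # "10" through "13"
schedule = {day: {h: "on" for h in hours_on} for day in ("Monday", "Tuesday")}

# Write the file that set-time-based-auto-scaling references via file://.
with open("schedule.json", "w") as f:
    json.dump(schedule, f, indent=2)
```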
*Output*: None.
**More Information**
For more information, see `Using Automatic Time-based Scaling`_ in the *AWS OpsWorks User Guide*.
.. _`Using Automatic Time-based Scaling`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-timebased.html
awscli-1.10.1/awscli/examples/opsworks/delete-user-profile.rst

**To delete a user profile and remove an IAM user from AWS OpsWorks**
The following example deletes the user profile for a specified AWS Identity and Access Management
(IAM) user, who
is identified by Amazon Resource Name (ARN). The operation removes the user from AWS OpsWorks, but
does not delete the IAM user. You must use the IAM console, CLI, or API for that task. ::
aws opsworks --region us-east-1 delete-user-profile --iam-user-arn arn:aws:iam::123456789102:user/cli-user-test
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Importing Users into AWS OpsWorks`_ in the *AWS OpsWorks User Guide*.
.. _`Importing Users into AWS OpsWorks`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html
awscli-1.10.1/awscli/examples/opsworks/describe-deployments.rst

**To describe deployments**
The following ``describe-deployments`` command describes the deployments in a specified stack. ::
aws opsworks --region us-east-1 describe-deployments --stack-id 38ee91e2-abdc-4208-a107-0b7168b3cc7a
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Deployments": [
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Status": "successful",
"CompletedAt": "2013-07-25T18:57:49+00:00",
"DeploymentId": "6ed0df4c-9ef7-4812-8dac-d54a05be1029",
"Command": {
"Args": {},
"Name": "undeploy"
},
"CreatedAt": "2013-07-25T18:57:34+00:00",
"Duration": 15,
"InstanceIds": [
"8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"9e588a25-35b2-4804-bd43-488f85ebe5b7"
]
},
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Status": "successful",
"CompletedAt": "2013-07-25T18:56:41+00:00",
"IamUserArn": "arn:aws:iam::123456789012:user/someuser",
"DeploymentId": "19d3121e-d949-4ff2-9f9d-94eac087862a",
"Command": {
"Args": {},
"Name": "deploy"
},
"InstanceIds": [
"8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"9e588a25-35b2-4804-bd43-488f85ebe5b7"
],
"Duration": 72,
"CreatedAt": "2013-07-25T18:55:29+00:00"
}
]
}
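Output like this is convenient to post-process. A minimal Python sketch (using an abridged copy of the response above, not a live call) pulls out each deployment's command name, status, and duration:

```python
import json

# Abridged copy of the describe-deployments response shown above.
output = json.loads("""
{
  "Deployments": [
    {"Command": {"Name": "undeploy"}, "Status": "successful", "Duration": 15},
    {"Command": {"Name": "deploy"},   "Status": "successful", "Duration": 72}
  ]
}
""")

# One (command, status, duration) tuple per deployment, newest first.
summary = [
    (d["Command"]["Name"], d["Status"], d["Duration"])
    for d in output["Deployments"]
]
total_seconds = sum(d["Duration"] for d in output["Deployments"])
print(summary)
print(total_seconds)  # 87
```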
**More Information**
For more information, see `Deploying Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Deploying Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-deploying.html
awscli-1.10.1/awscli/examples/opsworks/update-app.rst

**To update an app**
The following example updates a specified app to change its name. ::
aws opsworks --region us-east-1 update-app --app-id 26a61ead-d201-47e3-b55c-2a7c666942f8 --name NewAppName
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Editing Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Editing Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-editing.html
awscli-1.10.1/awscli/examples/opsworks/attach-elastic-load-balancer.rst

**To attach a load balancer to a layer**
The following example attaches a load balancer, identified by its name, to a specified layer. ::
aws opsworks --region us-east-1 attach-elastic-load-balancer --elastic-load-balancer-name Java-LB --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Elastic Load Balancing`_ in the *AWS OpsWorks User Guide*.
.. _`Elastic Load Balancing`: http://docs.aws.amazon.com/opsworks/latest/userguide/load-balancer-elb.html
awscli-1.10.1/awscli/examples/opsworks/disassociate-elastic-ip.rst

**To disassociate an Elastic IP address from an instance**
The following example disassociates an Elastic IP address from a specified instance. ::
aws opsworks --region us-east-1 disassociate-elastic-ip --elastic-ip 54.148.130.96
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.10.1/awscli/examples/opsworks/create-stack.rst

**To create a stack**
The following ``create-stack`` command creates a stack named CLI Stack. ::
aws opsworks create-stack --name "CLI Stack" --stack-region "us-east-1" --service-role-arn arn:aws:iam::123456789012:role/aws-opsworks-service-role --default-instance-profile-arn arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role --region us-east-1
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
The ``service-role-arn`` and ``default-instance-profile-arn`` parameters are required. You typically
use the ones that AWS OpsWorks
creates for you when you create your first stack. To get the Amazon Resource Names (ARNs) for your
account, go to the `IAM console`_, choose ``Roles`` in the navigation panel,
choose the role or profile, and choose the ``Summary`` tab.
.. _`IAM console`: https://console.aws.amazon.com/iam/home
*Output*::
{
"StackId": "f6673d70-32e6-4425-8999-265dd002fec7"
}
**More Information**
For more information, see `Create a New Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Create a New Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html
awscli-1.10.1/awscli/examples/opsworks/describe-stacks.rst

**To describe stacks**
The following ``describe-stacks`` command describes an account's stacks. ::
aws opsworks --region us-east-1 describe-stacks
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Stacks": [
{
"ServiceRoleArn": "arn:aws:iam::444455556666:role/aws-opsworks-service-role",
"StackId": "aeb7523e-7c8b-49d4-b866-03aae9d4fbcb",
"DefaultRootDeviceType": "instance-store",
"Name": "TomStack-sd",
"ConfigurationManager": {
"Version": "11.4",
"Name": "Chef"
},
"UseCustomCookbooks": true,
"CustomJson": "{\n \"tomcat\": {\n \"base_version\": 7,\n \"java_opts\": \"-Djava.awt.headless=true -Xmx256m\"\n },\n \"datasources\": {\n \"ROOT\": \"jdbc/mydb\"\n }\n}",
"Region": "us-east-1",
"DefaultInstanceProfileArn": "arn:aws:iam::444455556666:instance-profile/aws-opsworks-ec2-role",
"CustomCookbooksSource": {
"Url": "git://github.com/example-repo/tomcustom.git",
"Type": "git"
},
"DefaultAvailabilityZone": "us-east-1a",
"HostnameTheme": "Layer_Dependent",
"Attributes": {
"Color": "rgb(45, 114, 184)"
},
"DefaultOs": "Amazon Linux",
"CreatedAt": "2013-08-01T22:53:42+00:00"
},
{
"ServiceRoleArn": "arn:aws:iam::444455556666:role/aws-opsworks-service-role",
"StackId": "40738975-da59-4c5b-9789-3e422f2cf099",
"DefaultRootDeviceType": "instance-store",
"Name": "MyStack",
"ConfigurationManager": {
"Version": "11.4",
"Name": "Chef"
},
"UseCustomCookbooks": false,
"Region": "us-east-1",
"DefaultInstanceProfileArn": "arn:aws:iam::444455556666:instance-profile/aws-opsworks-ec2-role",
"CustomCookbooksSource": {},
"DefaultAvailabilityZone": "us-east-1a",
"HostnameTheme": "Layer_Dependent",
"Attributes": {
"Color": "rgb(45, 114, 184)"
},
"DefaultOs": "Amazon Linux",
"CreatedAt": "2013-10-25T19:24:30+00:00"
}
]
}
**More Information**
For more information, see `Stacks`_ in the *AWS OpsWorks User Guide*.
.. _`Stacks`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks.html
awscli-1.10.1/awscli/examples/opsworks/deregister-elastic-ip.rst

**To deregister an Elastic IP address from a stack**
The following example deregisters an Elastic IP address, identified by its IP address, from its stack. ::
aws opsworks deregister-elastic-ip --region us-east-1 --elastic-ip 54.148.130.96
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Deregistering Elastic IP Addresses`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering Elastic IP Addresses`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-dereg.html#resources-dereg-eip
awscli-1.10.1/awscli/examples/opsworks/delete-stack.rst

**To delete a stack**
The following example deletes a specified stack, which is identified by its stack ID.
You can obtain a stack ID by clicking **Stack Settings** on the AWS OpsWorks console or by
running the ``describe-stacks`` command.
**Note:** Before deleting a stack, you must use ``delete-app``, ``delete-instance``, and ``delete-layer``
to delete all of the stack's apps, instances, and layers. ::
aws opsworks delete-stack --region us-east-1 --stack-id 154a9d89-7e9e-433b-8de8-617e53756c84
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Shut Down a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Shut Down a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-shutting.html
awscli-1.10.1/awscli/examples/opsworks/register-elastic-ip.rst

**To register an Elastic IP address with a stack**
The following example registers an Elastic IP address, identified by its IP address, with a specified stack.
**Note:** The Elastic IP address must be in the same region as the stack. ::
aws opsworks register-elastic-ip --region us-east-1 --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06 --elastic-ip 54.148.130.96
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output* ::
{
"ElasticIp": "54.148.130.96"
}
**More Information**
For more information, see `Registering Elastic IP Addresses with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Elastic IP Addresses with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-eip
awscli-1.10.1/awscli/examples/opsworks/update-volume.rst

**To update a registered volume**
The following example updates a registered Amazon Elastic Block Store (Amazon EBS) volume's mount point.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigns to the volume when
you register it with a stack, not the Amazon Elastic Compute Cloud (Amazon EC2) volume ID. ::
aws opsworks --region us-east-1 update-volume --volume-id 8430177d-52b7-4948-9c62-e195af4703df --mount-point /mnt/myvol
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Assigning Amazon EBS Volumes to an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Assigning Amazon EBS Volumes to an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-attach.html#resources-attach-ebs
awscli-1.10.1/awscli/examples/opsworks/set-permission.rst

**To grant per-stack AWS OpsWorks permission levels**
When you import an AWS Identity and Access Management (IAM) user into AWS OpsWorks by calling ``create-user-profile``, the user has only those
permissions that are granted by the attached IAM policies.
You can grant AWS OpsWorks permissions by modifying a user's policies.
However, it is often easier to import a user and then use the ``set-permission`` command to grant
the user one of the standard permission levels for each stack to which the user will need access.
The following example grants permission for the specified stack for a user, who
is identified by Amazon Resource Name (ARN). The example grants the user a Manage permissions level, with sudo and SSH privileges on the stack's
instances. ::
aws opsworks set-permission --region us-east-1 --stack-id 71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f --level manage --iam-user-arn arn:aws:iam::123456789102:user/cli-user-test --allow-ssh --allow-sudo
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Granting AWS OpsWorks Users Per-Stack Permissions`_ in the *AWS OpsWorks User Guide*.
.. _`Granting AWS OpsWorks Users Per-Stack Permissions`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-console.html
awscli-1.10.1/awscli/examples/opsworks/set-load-based-auto-scaling.rst

**To set the load-based scaling configuration for a layer**
The following example enables load-based scaling for a specified layer and sets the configuration
for that layer.
You must use ``create-instance`` to add load-based instances to the layer. ::
aws opsworks --region us-east-1 set-load-based-auto-scaling --layer-id 523569ae-2faf-47ac-b39e-f4c4b381f36d --enable --up-scaling file://upscale.json --down-scaling file://downscale.json
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
The example puts the upscaling threshold settings in a separate file in the working directory named ``upscale.json``, which contains the following. ::
{
"InstanceCount": 2,
"ThresholdsWaitTime": 3,
"IgnoreMetricsTime": 3,
"CpuThreshold": 85,
"MemoryThreshold": 85,
"LoadThreshold": 85
}
The example puts the downscaling threshold settings in a separate file in the working directory named ``downscale.json``, which contains the following. ::
{
"InstanceCount": 2,
"ThresholdsWaitTime": 3,
"IgnoreMetricsTime": 3,
"CpuThreshold": 35,
"MemoryThreshold": 30,
"LoadThreshold": 30
}
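Because the two files interact, it is worth checking that each upscaling threshold sits above its downscaling counterpart before running the command; otherwise the layer can oscillate between adding and removing instances. A minimal Python sketch (using the threshold values from the example files above) performs that check:

```python
# The threshold values from upscale.json and downscale.json above.
upscale = {"CpuThreshold": 85, "MemoryThreshold": 85, "LoadThreshold": 85}
downscale = {"CpuThreshold": 35, "MemoryThreshold": 30, "LoadThreshold": 30}

# Every upscaling trigger must exceed its downscaling counterpart,
# or instances would be added and removed at overlapping load levels.
for key in ("CpuThreshold", "MemoryThreshold", "LoadThreshold"):
    assert upscale[key] > downscale[key], f"{key}: upscale must exceed downscale"
print("thresholds are consistent")
```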
*Output*: None.
**More Information**
For more information, see `Using Automatic Load-based Scaling`_ in the *AWS OpsWorks User Guide*.
.. _`Using Automatic Load-based Scaling`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-loadbased.html
awscli-1.10.1/awscli/examples/opsworks/describe-apps.rst

**To describe apps**
The following ``describe-apps`` command describes the apps in a specified stack. ::
aws opsworks --region us-east-1 describe-apps --stack-id 38ee91e2-abdc-4208-a107-0b7168b3cc7a
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: This particular stack has one app.
::
{
"Apps": [
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"AppSource": {
"Url": "https://s3-us-west-2.amazonaws.com/opsworks-tomcat/simplejsp.zip",
"Type": "archive"
},
"Name": "SimpleJSP",
"EnableSsl": false,
"SslConfiguration": {},
"AppId": "da1decc1-0dff-43ea-ad7c-bb667cd87c8b",
"Attributes": {
"RailsEnv": null,
"AutoBundleOnDeploy": "true",
"DocumentRoot": "ROOT"
},
"Shortname": "simplejsp",
"Type": "other",
"CreatedAt": "2013-08-01T21:46:54+00:00"
}
]
}
**More Information**
For more information, see Apps_ in the *AWS OpsWorks User Guide*.
.. _Apps: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps.html
awscli-1.10.1/awscli/examples/opsworks/describe-stack-summary.rst

**To describe a stack's configuration**
The following ``describe-stack-summary`` command returns a summary of the specified stack's configuration. ::
aws opsworks --region us-east-1 describe-stack-summary --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"StackSummary": {
"StackId": "8c428b08-a1a1-46ce-a5f8-feddc43771b8",
"InstancesCount": {
"Booting": 1
},
"Name": "CLITest",
"AppsCount": 1,
"LayersCount": 1,
"Arn": "arn:aws:opsworks:us-west-2:123456789012:stack/8c428b08-a1a1-46ce-a5f8-feddc43771b8/"
}
}
**More Information**
For more information, see `Stacks`_ in the *AWS OpsWorks User Guide*.
.. _`Stacks`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks.html
awscli-1.10.1/awscli/examples/opsworks/stop-instance.rst

**To stop an instance**
The following example stops a specified instance, which is identified by its instance ID.
You can obtain an instance ID by going to the instance's details page on the AWS OpsWorks console or by
running the ``describe-instances`` command. ::
aws opsworks stop-instance --region us-east-1 --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
You can restart a stopped instance by calling ``start-instance`` or by deleting the instance by calling
``delete-instance``.
*Output*: None.
**More Information**
For more information, see `Stopping an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Stopping an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-stop
**To deregister a registered instance from a stack**
The following ``deregister-instance`` command deregisters a registered instance from its stack. ::
aws opsworks --region us-east-1 deregister-instance --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Deregistering a Registered Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering a Registered Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-unassign.html
**To delete an instance**
The following example deletes a specified instance, which is identified by its instance ID.
It also deletes any attached Amazon Elastic Block Store (Amazon EBS) volumes or Elastic IP addresses.
You can obtain an instance ID by going to the instance's details page on the AWS OpsWorks console or by
running the ``describe-instances`` command.
If the instance is online, you must first stop the instance by calling ``stop-instance``, and then
wait until the instance has stopped. You can use ``describe-instances`` to check the instance status. ::
aws opsworks delete-instance --region us-east-1 --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
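The stop-and-wait step can be scripted by polling ``describe-instances`` for the instance's ``Status``. A minimal sketch, reusing the placeholder instance ID from above; the polling interval and retry count are arbitrary choices:

```shell
# Poll describe-instances until the instance's Status reaches "stopped",
# so that delete-instance can safely follow.
wait_until_stopped() {
    instance_id=$1
    tries=0
    while [ "$tries" -lt 60 ]; do
        status=$(aws opsworks --region us-east-1 describe-instances \
            --instance-ids "$instance_id" \
            --query 'Instances[0].Status' --output text)
        [ "$status" = "stopped" ] && return 0
        tries=$((tries + 1))
        sleep 10
    done
    return 1
}

# Usage (not run here):
#   aws opsworks --region us-east-1 stop-instance --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
#   wait_until_stopped 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
#   aws opsworks --region us-east-1 delete-instance --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
```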
To retain the instance's Amazon EBS volumes or Elastic IP addresses,
use the ``--no-delete-volumes`` or ``--no-delete-elastic-ip`` arguments, respectively.
*Output*: None.
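Combining those flags, a delete call that retains both resource types might look like the following hypothetical sketch, shown as a dry run that prints the assembled command instead of executing it:

```shell
# Delete an instance while retaining its EBS volumes and Elastic IP
# addresses. Printed rather than executed; drop the echo to run it.
cmd="aws opsworks delete-instance --region us-east-1 --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771 --no-delete-volumes --no-delete-elastic-ip"
echo "$cmd"
```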
**More Information**
For more information, see `Deleting AWS OpsWorks Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Deleting AWS OpsWorks Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-delete.html
**To describe a stack's volumes**
The following example describes a stack's EBS volumes. ::
aws opsworks --region us-east-1 describe-volumes --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Volumes": [
{
"Status": "in-use",
"AvailabilityZone": "us-west-2a",
"Name": "CLITest",
"InstanceId": "dfe18b02-5327-493d-91a4-c5c0c448927f",
"VolumeType": "standard",
"VolumeId": "56b66fbd-e1a1-4aff-9227-70f77118d4c5",
"Device": "/dev/sdi",
"Ec2VolumeId": "vol-295c1638",
"MountPoint": "/mnt/myvolume",
"Size": 1
}
]
}
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
**To describe the time-based scaling configuration of an instance**
The following example describes a specified instance's time-based scaling configuration.
The instance is identified by its instance ID, which you can find on the instance's
details page or by running ``describe-instances``. ::
aws opsworks describe-time-based-auto-scaling --region us-east-1 --instance-ids 701f2ffe-5d8e-4187-b140-77b75f55de8d
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: The example has a single time-based instance. ::
{
"TimeBasedAutoScalingConfigurations": [
{
"InstanceId": "701f2ffe-5d8e-4187-b140-77b75f55de8d",
"AutoScalingSchedule": {
"Monday": {
"11": "on",
"10": "on",
"13": "on",
"12": "on"
},
"Tuesday": {
"11": "on",
"10": "on",
"13": "on",
"12": "on"
}
}
}
]
}
**More Information**
For more information, see `How Automatic Time-based Scaling Works`_ in the *AWS OpsWorks User Guide*.
.. _`How Automatic Time-based Scaling Works`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html#workinginstances-autoscaling-timebased
**To describe commands**
The following ``describe-commands`` command describes the commands that have run on a specified instance. ::
aws opsworks --region us-east-1 describe-commands --instance-id 8c2673b9-3fe5-420d-9cfa-78d875ee7687
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Commands": [
{
"Status": "successful",
"CompletedAt": "2013-07-25T18:57:47+00:00",
"InstanceId": "8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"DeploymentId": "6ed0df4c-9ef7-4812-8dac-d54a05be1029",
"AcknowledgedAt": "2013-07-25T18:57:41+00:00",
"LogUrl": "https://s3.amazonaws.com/prod_stage-log/logs/008c1a91-ec59-4d51-971d-3adff54b00cc?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=1375394373&Signature=HkXil6UuNfxTCC37EPQAa462E1E%3D&response-cache-control=private&response-content-encoding=gzip&response-content-type=text%2Fplain",
"Type": "undeploy",
"CommandId": "008c1a91-ec59-4d51-971d-3adff54b00cc",
"CreatedAt": "2013-07-25T18:57:34+00:00",
"ExitCode": 0
},
{
"Status": "successful",
"CompletedAt": "2013-07-25T18:55:40+00:00",
"InstanceId": "8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"DeploymentId": "19d3121e-d949-4ff2-9f9d-94eac087862a",
"AcknowledgedAt": "2013-07-25T18:55:32+00:00",
"LogUrl": "https://s3.amazonaws.com/prod_stage-log/logs/899d3d64-0384-47b6-a586-33433aad117c?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=1375394373&Signature=xMsJvtLuUqWmsr8s%2FAjVru0BtRs%3D&response-cache-control=private&response-content-encoding=gzip&response-content-type=text%2Fplain",
"Type": "deploy",
"CommandId": "899d3d64-0384-47b6-a586-33433aad117c",
"CreatedAt": "2013-07-25T18:55:29+00:00",
"ExitCode": 0
}
]
}
**More Information**
For more information, see `AWS OpsWorks Lifecycle Events`_ in the *AWS OpsWorks User Guide*.
.. _`AWS OpsWorks Lifecycle Events`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
**To stop a stack's instances**
The following example stops all of a stack's 24/7 instances.
To stop a particular instance, use ``stop-instance``. ::
aws opsworks --region us-east-1 stop-stack --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Stopping an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Stopping an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-stop
**To delete a layer**
The following example deletes a specified layer, which is identified by its layer ID.
You can obtain a layer ID by going to the layer's details page on the AWS OpsWorks console or by
running the ``describe-layers`` command.
**Note:** Before deleting a layer, you must use ``delete-instance`` to delete all of the layer's instances. ::
aws opsworks delete-layer --region us-east-1 --layer-id a919454e-b816-4598-b29a-5796afb498ed
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
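Those two steps can be combined in a script that deletes each of the layer's instances and then the layer itself. A hypothetical sketch (IDs are placeholders, and every instance is assumed to be stopped already):

```shell
# Delete all of a layer's instances, then delete the layer.
# Assumes each instance has already been stopped.
delete_layer_and_instances() {
    layer_id=$1
    for id in $(aws opsworks --region us-east-1 describe-instances \
        --layer-id "$layer_id" \
        --query 'Instances[].InstanceId' --output text); do
        aws opsworks --region us-east-1 delete-instance --instance-id "$id"
    done
    aws opsworks --region us-east-1 delete-layer --layer-id "$layer_id"
}

# Usage (not run here):
#   delete_layer_and_instances a919454e-b816-4598-b29a-5796afb498ed
```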
**More Information**
For more information, see `Deleting AWS OpsWorks Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Deleting AWS OpsWorks Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-delete.html
**To describe a layer's load-based scaling configuration**
The following example describes a specified layer's load-based scaling configuration.
The layer is identified by its layer ID, which you can find on the layer's
details page or by running ``describe-layers``. ::
aws opsworks describe-load-based-auto-scaling --region us-east-1 --layer-ids 6bec29c9-c866-41a0-aba5-fa3e374ce2a1
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: The example layer has a single load-based instance. ::
{
"LoadBasedAutoScalingConfigurations": [
{
"DownScaling": {
"IgnoreMetricsTime": 10,
"ThresholdsWaitTime": 10,
"InstanceCount": 1,
"CpuThreshold": 30.0
},
"Enable": true,
"UpScaling": {
"IgnoreMetricsTime": 5,
"ThresholdsWaitTime": 5,
"InstanceCount": 1,
"CpuThreshold": 80.0
},
"LayerId": "6bec29c9-c866-41a0-aba5-fa3e374ce2a1"
}
]
}
**More Information**
For more information, see `How Automatic Load-based Scaling Works`_ in the *AWS OpsWorks User Guide*.
.. _`How Automatic Load-based Scaling Works`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html#workinginstances-autoscaling-loadbased
**To create a user profile**
You import an AWS Identity and Access Management (IAM) user into AWS OpsWorks by calling ``create-user-profile`` to create a user profile.
The following example creates a user profile for the ``cli-user-test`` IAM user, who
is identified by an Amazon Resource Name (ARN). The example assigns the user an SSH username of ``myusername`` and enables self-management,
which allows the user to specify an SSH public key. ::
aws opsworks --region us-east-1 create-user-profile --iam-user-arn arn:aws:iam::123456789102:user/cli-user-test --ssh-username myusername --allow-self-management
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"IamUserArn": "arn:aws:iam::123456789102:user/cli-user-test"
}
**Tip**: This command imports an IAM user into AWS OpsWorks, but only with the permissions that are
granted by the attached policies. You can grant per-stack AWS OpsWorks permissions by using the ``set-permission`` command.
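A hypothetical sketch of granting such per-stack access with the ``set-permission`` CLI command (the stack ID is a placeholder):

```shell
# Grant an imported IAM user manage-level permissions on one stack,
# including SSH and sudo access.
grant_manage() {
    stack_id=$1
    user_arn=$2
    aws opsworks --region us-east-1 set-permission \
        --stack-id "$stack_id" \
        --iam-user-arn "$user_arn" \
        --level manage --allow-ssh --allow-sudo
}

# Usage (not run here):
#   grant_manage d72553d4-8727-448c-9b00-f024f0ba1b06 \
#       arn:aws:iam::123456789102:user/cli-user-test
```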
**More Information**
For more information, see `Importing Users into AWS OpsWorks`_ in the *AWS OpsWorks User Guide*.
.. _`Importing Users into AWS OpsWorks`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html
**To start an instance**
The following ``start-instance`` command starts a specified 24/7 instance. ::
aws opsworks start-instance --instance-id f705ee48-9000-4890-8bd3-20eb05825aaf
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None. Use describe-instances_ to check the instance's status.
.. _describe-instances: http://docs.aws.amazon.com/cli/latest/reference/opsworks/describe-instances.html
**Tip**: You can start every offline instance in a stack with one command by calling start-stack_.
.. _start-stack: http://docs.aws.amazon.com/cli/latest/reference/opsworks/start-stack.html
**More Information**
For more information, see `Manually Starting, Stopping, and Rebooting 24/7 Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Manually Starting, Stopping, and Rebooting 24/7 Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html
**To start a stack's instances**
The following example starts all of a stack's 24/7 instances.
To start a particular instance, use ``start-instance``. ::
aws opsworks --region us-east-1 start-stack --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Starting an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Starting an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-start
**To update a user's profile**
The following example updates the ``development`` user's profile to use a specified SSH public key.
The user's AWS credentials are represented by the ``development`` profile in the ``credentials`` file
(``~/.aws/credentials``), and the key is in a ``.pem`` file in the working directory. ::
aws opsworks --region us-east-1 --profile development update-my-user-profile --ssh-public-key file://development_key.pem
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Editing AWS OpsWorks User Settings`_ in the *AWS OpsWorks User Guide*.
.. _`Editing AWS OpsWorks User Settings`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-edit.html
**To obtain a user's per-stack AWS OpsWorks permission level**
The following example shows how to obtain an AWS Identity and Access Management (IAM) user's permission level on a specified stack. ::
aws opsworks --region us-east-1 describe-permissions --iam-user-arn arn:aws:iam::123456789012:user/cli-user-test --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Permissions": [
{
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"IamUserArn": "arn:aws:iam::123456789012:user/cli-user-test",
"Level": "manage",
"AllowSudo": true,
"AllowSsh": true
}
]
}
**More Information**
For more information, see `Granting Per-Stack Permissions Levels`_ in the *AWS OpsWorks User Guide*.
.. _`Granting Per-Stack Permissions Levels`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-console.html
**To register instances with a stack**
The following examples show a variety of ways to register instances that were created outside of AWS OpsWorks with a stack.
You can run ``register`` from the instance to be registered, or from a separate workstation.
For more information, see `Registering Amazon EC2 and On-premises Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon EC2 and On-premises Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-register-registering.html
**Note**: For brevity, the examples omit the ``region`` argument. AWS OpsWorks CLI commands should set ``region``
to ``us-east-1`` regardless of the stack's location.
*To register an Amazon EC2 instance*
To indicate that you are registering an EC2 instance, set the ``--infrastructure-class`` argument
to ``ec2``.
The following example registers an EC2 instance with the specified stack from a separate workstation.
The instance is identified by its EC2 ID, ``i-12345678``. The example uses the workstation's default SSH username and attempts
to log in to the instance using authentication techniques that do not require a password,
such as a default private SSH key. If that fails, ``register`` queries for the password. ::
aws opsworks register --infrastructure-class=ec2 --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb i-12345678
The following example registers an EC2 instance with the specified stack from a separate workstation.
It uses the ``--ssh-username`` and ``--ssh-private-key`` arguments to explicitly
specify the SSH username and private key file that the command uses to log into the instance.
``ec2-user`` is the standard username for Amazon Linux instances. Use ``ubuntu`` for Ubuntu instances. ::
aws opsworks register --infrastructure-class=ec2 --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --ssh-username ec2-user --ssh-private-key ssh_private_key i-12345678
The following example registers the EC2 instance that is running the ``register`` command.
Log in to the instance with SSH and run ``register`` with the ``--local`` argument instead of an instance ID or hostname. ::
aws opsworks register --infrastructure-class ec2 --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --local
*To register an on-premises instance*
To indicate that you are registering an on-premises instance, set the ``--infrastructure-class`` argument
to ``on-premises``.
The following example registers an existing on-premises instance with a specified stack from a separate workstation.
The instance is identified by its IP address, ``192.0.2.3``. The example uses the workstation's default SSH username and attempts
to log in to the instance using authentication techniques that do not require a password,
such as a default private SSH key. If that fails, ``register`` queries for the password. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb 192.0.2.3
The following example registers an on-premises instance with a specified stack from a separate workstation.
The instance is identified by its hostname, ``host1``. The ``--override-...`` arguments direct AWS OpsWorks
to display ``webserver1`` as the host name and ``192.0.2.3`` and ``10.0.0.2`` as the instance's public and
private IP addresses, respectively. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --override-hostname webserver1 --override-public-ip 192.0.2.3 --override-private-ip 10.0.0.2 host1
The following example registers an on-premises instance with a specified stack from a separate workstation.
The instance is identified by its IP address. ``register`` logs into the instance using the specified SSH username and private key file. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --ssh-username admin --ssh-private-key ssh_private_key 192.0.2.3
The following example registers an existing on-premises instance with a specified stack from a separate workstation.
The command logs into the instance using a custom SSH command string that specifies
the SSH password and the instance's IP address. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --override-ssh "sshpass -p 'mypassword' ssh your-user@192.0.2.3"
The following example registers the on-premises instance that is running the ``register`` command.
Log in to the instance with SSH and run ``register`` with the ``--local`` argument instead of an instance ID or hostname. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --local
*Output*: The following is typical output for registering an EC2 instance.
::
Warning: Permanently added '52.11.41.206' (ECDSA) to the list of known hosts.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6403k 100 6403k 0 0 2121k 0 0:00:03 0:00:03 --:--:-- 2121k
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Initializing AWS OpsWorks environment
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Running on Ubuntu
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Checking if OS is supported
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Running on supported OS
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Setup motd
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Executing: ln -sf --backup /etc/motd.opsworks-static /etc/motd
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Enabling multiverse repositories
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Customizing APT environment
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Installing system packages
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Executing: dpkg --configure -a
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Executing with retry: apt-get update
[Tue, 24 Feb 2015 20:49:13 +0000] opsworks-init: Executing: apt-get install -y ruby ruby-dev libicu-dev libssl-dev libxslt-dev libxml2-dev libyaml-dev monit
[Tue, 24 Feb 2015 20:50:13 +0000] opsworks-init: Using assets bucket from environment: 'opsworks-instance-assets-us-east-1.s3.amazonaws.com'.
[Tue, 24 Feb 2015 20:50:13 +0000] opsworks-init: Installing Ruby for the agent
[Tue, 24 Feb 2015 20:50:13 +0000] opsworks-init: Executing: /tmp/opsworks-agent-installer.YgGq8wF3UUre6yDy/opsworks-agent-installer/opsworks-agent/bin/installer_wrapper.sh -r -R opsworks-instance-assets-us-east-1.s3.amazonaws.com
[Tue, 24 Feb 2015 20:50:44 +0000] opsworks-init: Starting the installer
Instance successfully registered. Instance ID: 4d6d1710-ded9-42a1-b08e-b043ad7af1e2
Connection to 52.11.41.206 closed.
**More Information**
For more information, see `Registering an Instance with an AWS OpsWorks Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering an Instance with an AWS OpsWorks Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-register.html
**To unassign a registered instance from its layers**
The following ``unassign-instance`` command unassigns an instance from its attached layers. ::
aws opsworks --region us-east-1 unassign-instance --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Unassigning a Registered Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Unassigning a Registered Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-unassign.html
**To update an Elastic IP address name**
The following example updates the name of a specified Elastic IP address. ::
aws opsworks --region us-east-1 update-elastic-ip --elastic-ip 54.148.130.96 --name NewIPName
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
**To unassign a volume from its instance**
The following example unassigns a registered Amazon Elastic Block Store (Amazon EBS) volume from its instance.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigns when
you register the volume with a stack, not the Amazon Elastic Compute Cloud (Amazon EC2) volume ID. ::
aws opsworks --region us-east-1 unassign-volume --volume-id 8430177d-52b7-4948-9c62-e195af4703df
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
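If you only have the Amazon EC2 volume ID, you can look up the AWS OpsWorks GUID with ``describe-volumes``. A hypothetical sketch (the stack ID is a placeholder, and the JMESPath filter uses the ``Ec2VolumeId`` and ``VolumeId`` fields returned by ``describe-volumes``):

```shell
# Map an EC2 volume ID (vol-xxxxxxxx) to the OpsWorks volume GUID by
# filtering describe-volumes output with a JMESPath query.
lookup_volume_guid() {
    stack_id=$1
    ec2_volume_id=$2
    aws opsworks --region us-east-1 describe-volumes \
        --stack-id "$stack_id" \
        --query "Volumes[?Ec2VolumeId=='$ec2_volume_id'].VolumeId" \
        --output text
}

# Usage (not run here):
#   lookup_volume_guid 8c428b08-a1a1-46ce-a5f8-feddc43771b8 vol-295c1638
```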
**More Information**
For more information, see `Unassigning Amazon EBS Volumes`_ in the *AWS OpsWorks User Guide*.
.. _`Unassigning Amazon EBS Volumes`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-detach.html#resources-detach-ebs
**To create a layer**
The following ``create-layer`` command creates a PHP App Server layer named MyPHPLayer in a specified stack. ::
aws opsworks create-layer --region us-east-1 --stack-id f6673d70-32e6-4425-8999-265dd002fec7 --type php-app --name MyPHPLayer --shortname myphplayer
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"LayerId": "0b212672-6b4b-40e4-8a34-5a943cf2e07a"
}
**More Information**
For more information, see `How to Create a Layer`_ in the *AWS OpsWorks User Guide*.
.. _`How to Create a Layer`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-create.html
**To obtain a user's profile**
The following example shows how to obtain the profile
of the AWS Identity and Access Management (IAM) user that is running the command. ::
aws opsworks --region us-east-1 describe-my-user-profile
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: For brevity, most of the user's SSH public key is replaced by an ellipsis (...). ::
{
"UserProfile": {
"IamUserArn": "arn:aws:iam::123456789012:user/myusername",
"SshPublicKey": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQ...3LQ4aX9jpxQw== rsa-key-20141104",
"Name": "myusername",
"SshUsername": "myusername"
}
}
**More Information**
For more information, see `Importing Users into AWS OpsWorks`_ in the *AWS OpsWorks User Guide*.
.. _`Importing Users into AWS OpsWorks`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html
**To get the next hostname for a layer**
The following example gets the next generated hostname for a specified layer. The layer used for
this example is a Java Application Server layer with one instance. The stack's hostname theme is
the default, Layer_Dependent. ::
aws opsworks --region us-east-1 get-hostname-suggestion --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Hostname": "java-app2",
"LayerId": "888c5645-09a5-4d0e-95a8-812ef1db76a4"
}
**More Information**
For more information, see `Create a New Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Create a New Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html
**To update an instance**
The following example updates a specified instance's type. ::
aws opsworks --region us-east-1 update-instance --instance-id dfe18b02-5327-493d-91a4-c5c0c448927f --instance-type c3.xlarge
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Editing the Instance Configuration`_ in the *AWS OpsWorks User Guide*.
.. _`Editing the Instance Configuration`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-properties.html
**To reboot an instance**
The following example reboots an instance. ::
aws opsworks --region us-east-1 reboot-instance --instance-id dfe18b02-5327-493d-91a4-c5c0c448927f
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Rebooting an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Rebooting an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-reboot
**To create an instance**
The following ``create-instance`` command creates an m1.large Amazon Linux instance named myinstance1 in a specified stack.
The instance is assigned to one layer. ::
aws opsworks --region us-east-1 create-instance --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --layer-ids 5c8c272a-f2d5-42e3-8245-5bf3927cb65b --hostname myinstance1 --instance-type m1.large --os "Amazon Linux"
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
To use an autogenerated name, call `get-hostname-suggestion`_, which generates
a hostname based on the theme that you specified when you created the stack.
Then pass that name to the ``--hostname`` argument.
.. _get-hostname-suggestion: http://docs.aws.amazon.com/cli/latest/reference/opsworks/get-hostname-suggestion.html
*Output*::
{
"InstanceId": "5f9adeaa-c94c-42c6-aeef-28a5376002cd"
}
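Chaining the two calls can be scripted as follows. This is a hypothetical sketch; the stack and layer IDs are placeholders, and ``create_named_instance`` is a helper name introduced here.

```shell
# Generate the next themed hostname for a layer, then create an
# instance with that name.
create_named_instance() {
    stack_id=$1
    layer_id=$2
    hostname=$(aws opsworks --region us-east-1 get-hostname-suggestion \
        --layer-id "$layer_id" --query 'Hostname' --output text)
    aws opsworks --region us-east-1 create-instance \
        --stack-id "$stack_id" --layer-ids "$layer_id" \
        --hostname "$hostname" --instance-type m1.large --os "Amazon Linux"
}

# Usage (not run here):
#   create_named_instance 935450cc-61e0-4b03-a3e0-160ac817d2bb \
#       5c8c272a-f2d5-42e3-8245-5bf3927cb65b
```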
**More Information**
For more information, see `Adding an Instance to a Layer`_ in the *AWS OpsWorks User Guide*.
.. _`Adding an Instance to a Layer`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html
**To describe a stack's elastic load balancers**
The following ``describe-elastic-load-balancers`` command describes a specified stack's load balancers. ::
aws opsworks --region us-east-1 describe-elastic-load-balancers --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: This particular stack has one load balancer.
::
{
"ElasticLoadBalancers": [
{
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"LayerId": "344973cb-bf2b-4cd0-8d93-51cd819bab04",
"ElasticLoadBalancerName": "my-balancer",
"Region": "us-west-2",
"DnsName": "my-balancer-2094040179.us-west-2.elb.amazonaws.com",
"VpcId": "vpc-b319f9d4",
"AvailabilityZones": [
"us-west-2a",
"us-west-2b"
],
"SubnetIds": [
"subnet-60e4ea04",
"subnet-66e1c110"
],
"Ec2InstanceIds": []
}
]
}
**More Information**
For more information, see `Elastic Load Balancing`_ in the *AWS OpsWorks User Guide*.
.. _`Elastic Load Balancing`: http://docs.aws.amazon.com/opsworks/latest/userguide/layers-elb.html
**To deregister an Amazon EBS volume**
The following example deregisters an EBS volume from its stack.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigned when
you registered the volume with the stack, not the EC2 volume ID. ::
aws opsworks deregister-volume --region us-east-1 --volume-id 5c48ef52-3144-4bf5-beaa-fda4deb23d4d
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Deregistering Amazon EBS Volumes`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering Amazon EBS Volumes`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-dereg.html#resources-dereg-ebs
**To describe a stack's layers**
The following ``describe-layers`` command describes the layers in a specified stack::
aws opsworks --region us-east-1 describe-layers --stack-id 38ee91e2-abdc-4208-a107-0b7168b3cc7a
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"Layers": [
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Type": "db-master",
"DefaultSecurityGroupNames": [
"AWS-OpsWorks-DB-Master-Server"
],
"Name": "MySQL",
"Packages": [],
"DefaultRecipes": {
"Undeploy": [],
"Setup": [
"opsworks_initial_setup",
"ssh_host_keys",
"ssh_users",
"mysql::client",
"dependencies",
"ebs",
"opsworks_ganglia::client",
"mysql::server",
"dependencies",
"deploy::mysql"
],
"Configure": [
"opsworks_ganglia::configure-client",
"ssh_users",
"agent_version",
"deploy::mysql"
],
"Shutdown": [
"opsworks_shutdown::default",
"mysql::stop"
],
"Deploy": [
"deploy::default",
"deploy::mysql"
]
},
"CustomRecipes": {
"Undeploy": [],
"Setup": [],
"Configure": [],
"Shutdown": [],
"Deploy": []
},
"EnableAutoHealing": false,
"LayerId": "41a20847-d594-4325-8447-171821916b73",
"Attributes": {
"MysqlRootPasswordUbiquitous": "true",
"RubygemsVersion": null,
"RailsStack": null,
"HaproxyHealthCheckMethod": null,
"RubyVersion": null,
"BundlerVersion": null,
"HaproxyStatsPassword": null,
"PassengerVersion": null,
"MemcachedMemory": null,
"EnableHaproxyStats": null,
"ManageBundler": null,
"NodejsVersion": null,
"HaproxyHealthCheckUrl": null,
"MysqlRootPassword": "*****FILTERED*****",
"GangliaPassword": null,
"GangliaUser": null,
"HaproxyStatsUrl": null,
"GangliaUrl": null,
"HaproxyStatsUser": null
},
"Shortname": "db-master",
"AutoAssignElasticIps": false,
"CustomSecurityGroupIds": [],
"CreatedAt": "2013-07-25T18:11:19+00:00",
"VolumeConfigurations": [
{
"MountPoint": "/vol/mysql",
"Size": 10,
"NumberOfDisks": 1
}
]
},
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Type": "custom",
"DefaultSecurityGroupNames": [
"AWS-OpsWorks-Custom-Server"
],
"Name": "TomCustom",
"Packages": [],
"DefaultRecipes": {
"Undeploy": [],
"Setup": [
"opsworks_initial_setup",
"ssh_host_keys",
"ssh_users",
"mysql::client",
"dependencies",
"ebs",
"opsworks_ganglia::client"
],
"Configure": [
"opsworks_ganglia::configure-client",
"ssh_users",
"agent_version"
],
"Shutdown": [
"opsworks_shutdown::default"
],
"Deploy": [
"deploy::default"
]
},
"CustomRecipes": {
"Undeploy": [],
"Setup": [
"tomcat::setup"
],
"Configure": [
"tomcat::configure"
],
"Shutdown": [],
"Deploy": [
"tomcat::deploy"
]
},
"EnableAutoHealing": true,
"LayerId": "e6cbcd29-d223-40fc-8243-2eb213377440",
"Attributes": {
"MysqlRootPasswordUbiquitous": null,
"RubygemsVersion": null,
"RailsStack": null,
"HaproxyHealthCheckMethod": null,
"RubyVersion": null,
"BundlerVersion": null,
"HaproxyStatsPassword": null,
"PassengerVersion": null,
"MemcachedMemory": null,
"EnableHaproxyStats": null,
"ManageBundler": null,
"NodejsVersion": null,
"HaproxyHealthCheckUrl": null,
"MysqlRootPassword": null,
"GangliaPassword": null,
"GangliaUser": null,
"HaproxyStatsUrl": null,
"GangliaUrl": null,
"HaproxyStatsUser": null
},
"Shortname": "tomcustom",
"AutoAssignElasticIps": false,
"CustomSecurityGroupIds": [],
"CreatedAt": "2013-07-25T18:12:53+00:00",
"VolumeConfigurations": []
}
]
}
**More Information**
For more information, see Layers_ in the *AWS OpsWorks User Guide*.
.. _Layers: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers.html
**To register an Amazon EBS volume with a stack**
The following example registers an Amazon EBS volume, identified by its volume ID, with a specified stack. ::
aws opsworks register-volume --region us-east-1 --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06 --ec2-volume-id vol-295c1638
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"VolumeId": "ee08039c-7cb7-469f-be10-40fb7f0c05e8"
}
**More Information**
For more information, see `Registering Amazon EBS Volumes with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon EBS Volumes with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-ebs
**To detach a load balancer from its layer**
The following example detaches a load balancer, identified by its name, from its layer. ::
aws opsworks --region us-east-1 detach-elastic-load-balancer --elastic-load-balancer-name Java-LB --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Elastic Load Balancing`_ in the *AWS OpsWorks User Guide*.
.. _`Elastic Load Balancing`: http://docs.aws.amazon.com/opsworks/latest/userguide/load-balancer-elb.html
**To assign a registered instance to a layer**
The following example assigns a registered instance to a custom layer. ::
aws opsworks --region us-east-1 assign-instance --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2 --layer-ids 26cf1d32-6876-42fa-bbf1-9cadc0bff938
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: None.
**More Information**
For more information, see `Assigning a Registered Instance to a Layer`_ in the *AWS OpsWorks User Guide*.
.. _`Assigning a Registered Instance to a Layer`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-assign.html
**To describe a stack's registered Amazon RDS instances**
The following example describes the Amazon RDS instances registered with a specified stack. ::
aws opsworks --region us-east-1 describe-rds-db-instances --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: The following is the output for a stack with one registered RDS instance. ::
{
"RdsDbInstances": [
{
"Engine": "mysql",
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"MissingOnRds": false,
"Region": "us-west-2",
"RdsDbInstanceArn": "arn:aws:rds:us-west-2:123456789012:db:clitestdb",
"DbPassword": "*****FILTERED*****",
"Address": "clitestdb.cdlqlk5uwd0k.us-west-2.rds.amazonaws.com",
"DbUser": "cliuser",
"DbInstanceIdentifier": "clitestdb"
}
]
}
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
**To describe instances**
The following ``describe-instances`` command describes the instances in a specified stack::
aws opsworks --region us-east-1 describe-instances --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: The following output example is for a stack with two instances. The first is a registered
EC2 instance, and the second was created by AWS OpsWorks.
::
{
"Instances": [
{
"StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f",
"PrivateDns": "ip-10-31-39-66.us-west-2.compute.internal",
"LayerIds": [
"26cf1d32-6876-42fa-bbf1-9cadc0bff938"
],
"EbsOptimized": false,
"ReportedOs": {
"Version": "14.04",
"Name": "ubuntu",
"Family": "debian"
},
"Status": "online",
"InstanceId": "4d6d1710-ded9-42a1-b08e-b043ad7af1e2",
"SshKeyName": "US-West-2",
"InfrastructureClass": "ec2",
"RootDeviceVolumeId": "vol-d08ec6c1",
"SubnetId": "subnet-b8de0ddd",
"InstanceType": "t1.micro",
"CreatedAt": "2015-02-24T20:52:49+00:00",
"AmiId": "ami-35501205",
"Hostname": "ip-192-0-2-0",
"Ec2InstanceId": "i-5cd23551",
"PublicDns": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com",
"SecurityGroupIds": [
"sg-c4d3f0a1"
],
"Architecture": "x86_64",
"RootDeviceType": "ebs",
"InstallUpdatesOnBoot": true,
"Os": "Custom",
"VirtualizationType": "paravirtual",
"AvailabilityZone": "us-west-2a",
"PrivateIp": "10.31.39.66",
"PublicIp": "192.0.2.06",
"RegisteredBy": "arn:aws:iam::123456789102:user/AWS/OpsWorks/OpsWorks-EC2Register-i-5cd23551"
},
{
"StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f",
"PrivateDns": "ip-10-31-39-158.us-west-2.compute.internal",
"SshHostRsaKeyFingerprint": "69:6b:7b:8b:72:f3:ed:23:01:00:05:bc:9f:a4:60:c1",
"LayerIds": [
"26cf1d32-6876-42fa-bbf1-9cadc0bff938"
],
"EbsOptimized": false,
"ReportedOs": {},
"Status": "booting",
"InstanceId": "9b137a0d-2f5d-4cc0-9704-13da4b31fdcb",
"SshKeyName": "US-West-2",
"InfrastructureClass": "ec2",
"RootDeviceVolumeId": "vol-e09dd5f1",
"SubnetId": "subnet-b8de0ddd",
"InstanceProfileArn": "arn:aws:iam::123456789102:instance-profile/aws-opsworks-ec2-role",
"InstanceType": "c3.large",
"CreatedAt": "2015-02-24T21:29:33+00:00",
"AmiId": "ami-9fc29baf",
"SshHostDsaKeyFingerprint": "fc:87:95:c3:f5:e1:3b:9f:d2:06:6e:62:9a:35:27:e8",
"Ec2InstanceId": "i-8d2dca80",
"PublicDns": "ec2-192-0-2-1.us-west-2.compute.amazonaws.com",
"SecurityGroupIds": [
"sg-b022add5",
"sg-b122add4"
],
"Architecture": "x86_64",
"RootDeviceType": "ebs",
"InstallUpdatesOnBoot": true,
"Os": "Amazon Linux 2014.09",
"VirtualizationType": "paravirtual",
"AvailabilityZone": "us-west-2a",
"Hostname": "custom11",
"PrivateIp": "10.31.39.158",
"PublicIp": "192.0.2.0"
}
]
}
**More Information**
For more information, see Instances_ in the *AWS OpsWorks User Guide*.
.. _Instances: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances.html
**To describe user profiles**
The following ``describe-user-profiles`` command describes the account's user profiles. ::
aws opsworks --region us-east-1 describe-user-profiles
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*::
{
"UserProfiles": [
{
"IamUserArn": "arn:aws:iam::123456789012:user/someuser",
"SshPublicKey": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAkOuP7i80q3Cko...",
"AllowSelfManagement": true,
"Name": "someuser",
"SshUsername": "someuser"
},
{
"IamUserArn": "arn:aws:iam::123456789012:user/cli-user-test",
"AllowSelfManagement": true,
"Name": "cli-user-test",
"SshUsername": "myusername"
}
]
}
**More Information**
For more information, see `Managing AWS OpsWorks Users`_ in the *AWS OpsWorks User Guide*.
.. _`Managing AWS OpsWorks Users`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage.html
**To describe RAID arrays**
The following example describes the RAID arrays attached to the instances in a specified stack. ::
aws opsworks --region us-east-1 describe-raid-arrays --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
*Output*: The following is the output for a stack with one RAID array. ::
{
"RaidArrays": [
{
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"AvailabilityZone": "us-west-2a",
"Name": "Created for php-app1",
"NumberOfDisks": 2,
"InstanceId": "9f14adbc-ced5-43b6-bf01-e7d0db6cf2f7",
"RaidLevel": 0,
"VolumeType": "standard",
"RaidArrayId": "f2d4e470-5972-4676-b1b8-bae41ec3e51c",
"Device": "/dev/md0",
"MountPoint": "/mnt/workspace",
"CreatedAt": "2015-02-26T23:53:09+00:00",
"Size": 100
}
]
}
For more information, see `EBS Volumes`_ in the *AWS OpsWorks User Guide*.
.. _`EBS Volumes`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html#workinglayers-basics-edit-ebs
**To deploy apps and run stack commands**
The following examples show how to use the ``create-deployment`` command to deploy apps and run stack commands. Notice that each quote (``"``) character in the JSON object that specifies the command is preceded by an escape character (``\``). Without the escape characters, the command might return an invalid JSON error.
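One way to catch escaping mistakes before calling ``create-deployment`` is to run the escaped string through a JSON parser first. This is only a sketch; any JSON validator works, and the ``CMD`` variable name is illustrative:

```shell
# Hypothetical pre-check: confirm the escaped command string is valid JSON
# before passing it to --command.
CMD="{\"Name\":\"deploy\", \"Args\":{\"migrate\":[\"true\"]}}"
echo "$CMD" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

Alternatively, complex parameters such as ``--command`` accept ``file://`` input, which avoids shell escaping altogether.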
**Note**: AWS OpsWorks CLI commands should set the region to ``us-east-1`` regardless of the stack's location.
**Deploy an App**
The following ``create-deployment`` command deploys an app to a specified stack. ::
aws opsworks --region us-east-1 create-deployment --stack-id cfb7e082-ad1d-4599-8e81-de1c39ab45bf --app-id 307be5c8-d55d-47b5-bd6e-7bd417c6c7eb --command "{\"Name\":\"deploy\"}"
*Output*::
{
"DeploymentId": "5746c781-df7f-4c87-84a7-65a119880560"
}
**Deploy a Rails App and Migrate the Database**
The following ``create-deployment`` command deploys a Ruby on Rails app to a specified stack and migrates the
database. ::
aws opsworks --region us-east-1 create-deployment --stack-id cfb7e082-ad1d-4599-8e81-de1c39ab45bf --app-id 307be5c8-d55d-47b5-bd6e-7bd417c6c7eb --command "{\"Name\":\"deploy\", \"Args\":{\"migrate\":[\"true\"]}}"
*Output*::
{
"DeploymentId": "5746c781-df7f-4c87-84a7-65a119880560"
}
For more information on deployment, see `Deploying Apps`_ in the *AWS OpsWorks User Guide*.
**Execute a Recipe**
The following ``create-deployment`` command runs a custom recipe, ``phpapp::appsetup``, on the instances in a specified
stack. ::
aws opsworks --region us-east-1 create-deployment --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --command "{\"Name\":\"execute_recipes\", \"Args\":{\"recipes\":[\"phpapp::appsetup\"]}}"
*Output*::
{
"DeploymentId": "5cbaa7b9-4e09-4e53-aa1b-314fbd106038"
}
For more information, see `Run Stack Commands`_ in the *AWS OpsWorks User Guide*.
**Install Dependencies**
The following ``create-deployment`` command installs dependencies, such as packages or Ruby gems, on the instances in a
specified stack. ::
aws opsworks --region us-east-1 create-deployment --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --command "{\"Name\":\"install_dependencies\"}"
*Output*::
{
"DeploymentId": "aef5b255-8604-4928-81b3-9b0187f962ff"
}
**More Information**
For more information, see `Run Stack Commands`_ in the *AWS OpsWorks User Guide*.
.. _`Deploying Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-deploying.html
.. _`Run Stack Commands`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-commands.html
The following example enables ``PUT``, ``POST``, and ``DELETE`` requests from *www.example.com*, and enables ``GET``
requests from any domain::
aws s3api put-bucket-cors --bucket MyBucket --cors-configuration file://cors.json
The file ``cors.json`` contains the following configuration::
{
"CORSRules": [
{
"AllowedOrigins": ["http://www.example.com"],
"AllowedHeaders": ["*"],
"AllowedMethods": ["PUT", "POST", "DELETE"],
"MaxAgeSeconds": 3000,
"ExposeHeaders": ["x-amz-server-side-encryption"]
},
{
"AllowedOrigins": ["*"],
"AllowedHeaders": ["Authorization"],
"AllowedMethods": ["GET"],
"MaxAgeSeconds": 3000
}
]
}
The following command deletes a tagging configuration from a bucket named ``my-bucket``::
aws s3api delete-bucket-tagging --bucket my-bucket
This example grants ``full control`` to two AWS users (*user1@example.com* and *user2@example.com*) and ``read``
permission to everyone::
aws s3api put-bucket-acl --bucket MyBucket --grant-full-control emailaddress=user1@example.com,emailaddress=user2@example.com --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html for details on custom ACLs (the s3api ACL
commands, such as ``put-bucket-acl``, use the same shorthand argument notation).
The following command uploads the first part in a multipart upload initiated with the ``create-multipart-upload`` command::
aws s3api upload-part --bucket my-bucket --key 'multipart/01' --part-number 1 --body part01 --upload-id "dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R"
The ``body`` option takes the name or path of a local file for upload (do not use the ``file://`` prefix). The minimum part size is 5 MB. The upload ID is returned by ``create-multipart-upload`` and can also be retrieved with ``list-multipart-uploads``. The bucket and key are specified when you create the multipart upload.
Output::
{
"ETag": "\"e868e0f4719e394144ef36531ee6824c\""
}
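The remaining parts are uploaded the same way with increasing part numbers. If you are starting from one large local file, the parts themselves can be produced with ``split`` before the individual ``upload-part`` calls. This is a sketch; ``big-file`` and the ``part`` prefix are placeholder names:

```shell
# Create a 12 MiB test file, then cut it into 5 MiB chunks
# (partaa, partab, partac); each chunk becomes the --body of one upload-part call.
dd if=/dev/zero of=big-file bs=1048576 count=12 2>/dev/null
split -b 5242880 big-file part
ls part*
```

Byte counts are used with ``-b`` to sidestep GNU/BSD differences in size suffixes; every part except the last must meet the 5 MB minimum.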
Save the ETag value of each part for later; they are required to complete the multipart upload.

The following command retrieves the notification configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-notification-configuration --bucket my-bucket
Output::
{
"TopicConfigurations": [
{
"Id": "YmQzMmEwM2EjZWVlI0NGItNzVtZjI1MC00ZjgyLWZDBiZWNl",
"TopicArn": "arn:aws:sns:us-west-2:123456789012:my-notification-topic",
"Events": [
"s3:ObjectCreated:*"
]
}
]
}
The following command lists all of the parts that have been uploaded for a multipart upload with key ``multipart/01`` in the bucket ``my-bucket``::
aws s3api list-parts --bucket my-bucket --key 'multipart/01' --upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R
Output::
{
"Owner": {
"DisplayName": "aws-account-name",
"ID": "100719349fc3b6dcd7c820a124bf7aecd408092c3d7b51b38494939801fc248b"
},
"Initiator": {
"DisplayName": "username",
"ID": "arn:aws:iam::0123456789012:user/username"
},
"Parts": [
{
"LastModified": "2015-06-02T18:07:35.000Z",
"PartNumber": 1,
"ETag": "\"e868e0f4719e394144ef36531ee6824c\"",
"Size": 5242880
},
{
"LastModified": "2015-06-02T18:07:42.000Z",
"PartNumber": 2,
"ETag": "\"6bb2b12753d66fe86da4998aa33fffb0\"",
"Size": 5242880
},
{
"LastModified": "2015-06-02T18:07:47.000Z",
"PartNumber": 3,
"ETag": "\"d0a0112e841abec9c9ec83406f0159c8\"",
"Size": 5242880
}
],
"StorageClass": "STANDARD"
} awscli-1.10.1/awscli/examples/s3api/create-multipart-upload.rst 0000666 4542626 0000144 00000001124 12652514124 025543 0 ustar pysdk-ci amazon 0000000 0000000 The following command creates a multipart upload in the bucket ``my-bucket`` with the key ``multipart/01``::
aws s3api create-multipart-upload --bucket my-bucket --key 'multipart/01'
Output::
{
"Bucket": "my-bucket",
"UploadId": "dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R",
"Key": "multipart/01"
}
The completed file will be named ``01`` in a folder called ``multipart`` in the bucket ``my-bucket``. Save the upload ID, key, and bucket name for use with the ``upload-part`` command.

The following command applies a notification configuration to a bucket named ``my-bucket``::
aws s3api put-bucket-notification-configuration --bucket my-bucket --notification-configuration file://notification.json
The file ``notification.json`` is a JSON document in the current folder that specifies an SNS topic and an event type to monitor::
{
"TopicConfigurations": [
{
"TopicArn": "arn:aws:sns:us-west-2:123456789012:s3-notification-topic",
"Events": [
"s3:ObjectCreated:*"
]
}
]
}
The SNS topic must have an IAM policy attached to it that allows Amazon S3 to publish to it::
{
"Version": "2008-10-17",
"Id": "example-ID",
"Statement": [
{
"Sid": "example-statement-ID",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": [
"SNS:Publish"
],
"Resource": "arn:aws:sns:us-west-2:123456789012:my-bucket",
"Condition": {
"ArnLike": {
"aws:SourceArn": "arn:aws:s3:*:*:my-bucket"
}
}
}
]
}

The following command retrieves the lifecycle configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-lifecycle --bucket my-bucket
Output::
{
"Rules": [
{
"ID": "Move to Glacier after sixty days (objects in logs/2015/)",
"Prefix": "logs/2015/",
"Status": "Enabled",
"Transition": {
"Days": 60,
"StorageClass": "GLACIER"
}
},
{
"Expiration": {
"Date": "2016-01-01T00:00:00.000Z"
},
"ID": "Delete 2014 logs in 2016.",
"Prefix": "logs/2014/",
"Status": "Enabled"
}
]
}
The following command retrieves metadata for an object in a bucket named ``my-bucket``::
aws s3api head-object --bucket my-bucket --key index.html
Output::
{
"AcceptRanges": "bytes",
"ContentType": "text/html",
"LastModified": "Thu, 16 Apr 2015 18:19:14 GMT",
"ContentLength": 77,
"VersionId": "null",
"ETag": "\"30a6ec7e1a9ad79c203d05a589c8b400\"",
"Metadata": {}
} awscli-1.10.1/awscli/examples/s3api/get-object-acl.rst 0000666 4542626 0000144 00000001426 12652514124 023564 0 ustar pysdk-ci amazon 0000000 0000000 The following command retrieves the access control list for an object in a bucket named ``my-bucket``::
aws s3api get-object-acl --bucket my-bucket --key index.html
Output::
{
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd538e11f6b6606438875e7c86c5b672f46db45460ddcd087d36c32"
},
"Grants": [
{
"Grantee": {
"DisplayName": "my-username",
"ID": "7009a8971cd538e11f6b6606438875e7c86c5b672f46db45460ddcd087d36c32"
},
"Permission": "FULL_CONTROL"
},
{
"Grantee": {
"URI": "http://acs.amazonaws.com/groups/global/AllUsers"
},
"Permission": "READ"
}
]
}

The following command enables versioning on a bucket named ``my-bucket``::
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
The following command deletes a lifecycle configuration from a bucket named ``my-bucket``::
aws s3api delete-bucket-lifecycle --bucket my-bucket
The following command applies a lifecycle configuration to a bucket named ``my-bucket``::
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json
The file ``lifecycle.json`` is a JSON document in the current folder that specifies two rules::
{
"Rules": [
{
"ID": "Move rotated logs to Glacier",
"Prefix": "rotated/",
"Status": "Enabled",
"Transitions": [
{
"Date": "2015-11-10T00:00:00.000Z",
"StorageClass": "GLACIER"
}
]
},
{
"Status": "Enabled",
"Prefix": "",
"NoncurrentVersionTransitions": [
{
"NoncurrentDays": 2,
"StorageClass": "GLACIER"
}
],
"ID": "Move old versions to Glacier"
}
]
}
The first rule moves files with the prefix ``rotated`` to Glacier on the specified date. The second rule moves old object versions to Glacier when they are no longer current. For information on acceptable timestamp formats, see `Specifying Parameter Values`_ in the *AWS CLI User Guide*.
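The ``Date`` values above are full ISO 8601 timestamps at midnight UTC. A sketch of generating one on the command line (assumes GNU ``date``; the day itself is taken from the example rule):

```shell
# Emit a lifecycle-style timestamp for a given day, midnight UTC.
date -u -d '2015-11-10 00:00:00' '+%Y-%m-%dT%H:%M:%S.000Z'
# prints 2015-11-10T00:00:00.000Z
```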
.. _`Specifying Parameter Values`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html awscli-1.10.1/awscli/examples/s3api/copy-object.rst 0000666 4542626 0000144 00000000602 12652514124 023215 0 ustar pysdk-ci amazon 0000000 0000000 The following command copies an object from ``bucket-1`` to ``bucket-2``::
aws s3api copy-object --copy-source bucket-1/test.txt --key test.txt --bucket bucket-2
Output::
{
"CopyObjectResult": {
"LastModified": "2015-11-10T01:07:25.000Z",
"ETag": "\"589c8b79c230a6ecd5a7e1d040a9a030\""
},
"VersionId": "YdnYvTCVDqRRFA.NFJjy36p0hxifMlkA"
}
The following command retrieves the lifecycle configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket
Output::
{
"Rules": [
{
"ID": "Move rotated logs to Glacier",
"Prefix": "rotated/",
"Status": "Enabled",
"Transitions": [
{
"Date": "2015-11-10T00:00:00.000Z",
"StorageClass": "GLACIER"
}
]
},
{
"Status": "Enabled",
"Prefix": "",
"NoncurrentVersionTransitions": [
{
"NoncurrentDays": 0,
"StorageClass": "GLACIER"
}
],
"ID": "Move old versions to Glacier"
}
]
}

The following command applies a static website configuration to a bucket named ``my-bucket``::
aws s3api put-bucket-website --bucket my-bucket --website-configuration file://website.json
The file ``website.json`` is a JSON document in the current folder that specifies index and error pages for the website::
{
"IndexDocument": {
"Suffix": "index.html"
},
"ErrorDocument": {
"Key": "error.html"
}
}
The following command lists all of the active multipart uploads for a bucket named ``my-bucket``::
aws s3api list-multipart-uploads --bucket my-bucket
Output::
{
"Uploads": [
{
"Initiator": {
"DisplayName": "username",
"ID": "arn:aws:iam::0123456789012:user/username"
},
"Initiated": "2015-06-02T18:01:30.000Z",
"UploadId": "dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R",
"StorageClass": "STANDARD",
"Key": "multipart/01",
"Owner": {
"DisplayName": "aws-account-name",
"ID": "100719349fc3b6dcd7c820a124bf7aecd408092c3d7b51b38494939801fc248b"
}
}
],
"CommonPrefixes": []
}
In-progress multipart uploads incur storage costs in Amazon S3. Complete or abort an active multipart upload to remove its parts from your account.

The following command creates a torrent for an object in a bucket named ``my-bucket``::
aws s3api get-object-torrent --bucket my-bucket --key large-video-file.mp4 large-video-file.torrent
The torrent file is saved locally in the current folder. Note that the output filename (``large-video-file.torrent``) is specified without an option name and must be the last argument in the command.

The following command uses ``list-buckets`` to display the names of all your Amazon S3 buckets (across all
regions)::
aws s3api list-buckets --query 'Buckets[].Name'
The query option filters the output of ``list-buckets`` down to only the bucket names.
For more information about buckets, see `Working with Amazon S3 Buckets`_ in the *Amazon S3 Developer Guide*.
.. _`Working with Amazon S3 Buckets`: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
awscli-1.10.1/awscli/examples/s3api/get-bucket-tagging.rst 0000666 4542626 0000144 00000000435 12652514124 024453 0 ustar pysdk-ci amazon 0000000 0000000 The following command retrieves the tagging configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-tagging --bucket my-bucket
Output::
{
"TagSet": [
{
"Value": "marketing",
"Key": "organization"
}
]
}
The following example uses the ``get-object`` command to download an object from Amazon S3::
aws s3api get-object --bucket text-content --key dir/my_images.tar.bz2 my_images.tar.bz2
Note that the outfile parameter is specified without an option name such as ``--outfile``. The name of the output file must be the last parameter in the command.
For more information about retrieving objects, see `Getting Objects`_ in the *Amazon S3 Developer Guide*.
.. _`Getting Objects`: http://docs.aws.amazon.com/AmazonS3/latest/dev/GettingObjectsUsingAPIs.html
awscli-1.10.1/awscli/examples/s3api/delete-objects.rst 0000666 4542626 0000144 00000001056 12652514124 023674 0 ustar pysdk-ci amazon 0000000 0000000 The following command deletes an object from a bucket named ``my-bucket``::
aws s3api delete-objects --bucket my-bucket --delete file://delete.json
``delete.json`` is a JSON document in the current directory that specifies the object to delete::
{
"Objects": [
{
"Key": "test1.txt"
}
],
"Quiet": false
}
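Such a document can also be generated in the shell with a here-document. This is a sketch; the second key, ``test2.txt``, is hypothetical and is added only to show the multi-object form:

```shell
# Write a delete.json listing two keys, then confirm it parses as JSON.
cat > delete.json <<'EOF'
{
  "Objects": [
    {"Key": "test1.txt"},
    {"Key": "test2.txt"}
  ],
  "Quiet": false
}
EOF
python3 -m json.tool < delete.json > /dev/null && echo "delete.json ok"
```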
Output::
{
"Deleted": [
{
"DeleteMarkerVersionId": "mYAT5Mc6F7aeUL8SS7FAAqUPO1koHwzU",
"Key": "test1.txt",
"DeleteMarker": true
}
]
}

The following command retrieves the access control list for a bucket named ``my-bucket``::
aws s3api get-bucket-acl --bucket my-bucket
Output::
{
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd538e11f6b6606438875e7c86c5b672f46db45460ddcd087d36c32"
},
"Grants": [
{
"Grantee": {
"DisplayName": "my-username",
"ID": "7009a8971cd538e11f6b6606438875e7c86c5b672f46db45460ddcd087d36c32"
},
"Permission": "FULL_CONTROL"
}
]
}
awscli-1.10.1/awscli/examples/s3api/get-bucket-website.rst

The following command retrieves the static website configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-website --bucket my-bucket
Output::
{
"IndexDocument": {
"Suffix": "index.html"
},
"ErrorDocument": {
"Key": "error.html"
}
}
awscli-1.10.1/awscli/examples/s3api/get-bucket-cors.rst

The following command retrieves the Cross-Origin Resource Sharing configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-cors --bucket my-bucket
Output::
{
"CORSRules": [
{
"AllowedHeaders": [
"*"
],
"ExposeHeaders": [
"x-amz-server-side-encryption"
],
"AllowedMethods": [
"PUT",
"POST",
"DELETE"
],
"MaxAgeSeconds": 3000,
"AllowedOrigins": [
"http://www.example.com"
]
},
{
"AllowedHeaders": [
"Authorization"
],
"MaxAgeSeconds": 3000,
"AllowedMethods": [
"GET"
],
"AllowedOrigins": [
"*"
]
}
]
}
awscli-1.10.1/awscli/examples/s3api/put-bucket-logging.rst

The example below sets the logging policy for *MyBucket*. The AWS user *user@example.com* will have full control over
the log files, and all users will have access to them. First, grant S3 permission with ``put-bucket-acl``::
aws s3api put-bucket-acl --bucket MyBucket --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery
Then apply the logging policy::
aws s3api put-bucket-logging --bucket MyBucket --bucket-logging-status file://logging.json
``logging.json`` is a JSON document in the current folder that contains the logging policy::
{
"LoggingEnabled": {
"TargetBucket": "MyBucket",
"TargetPrefix": "MyBucketLogs/",
"TargetGrants": [
{
"Grantee": {
"Type": "AmazonCustomerByEmail",
"EmailAddress": "user@example.com"
},
"Permission": "FULL_CONTROL"
},
{
"Grantee": {
"Type": "Group",
"URI": "http://acs.amazonaws.com/groups/global/AllUsers"
},
"Permission": "READ"
}
]
}
}
.. note:: The ``put-bucket-acl`` command is required to grant Amazon S3's log delivery system the necessary permissions (write and read-acp).
awscli-1.10.1/awscli/examples/s3api/delete-bucket-replication.rst

The following command deletes a replication configuration from a bucket named ``my-bucket``::
aws s3api delete-bucket-replication --bucket my-bucket
awscli-1.10.1/awscli/examples/s3api/put-object.rst

The following example uses the ``put-object`` command to upload an object to Amazon S3::
aws s3api put-object --bucket text-content --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2
The following example shows an upload of a video file (the video file is specified using Windows file system syntax)::
aws s3api put-object --bucket text-content --key dir-1/big-video-file.mp4 --body e:\media\videos\f-sharp-3-data-services.mp4
For more information about uploading objects, see `Uploading Objects`_ in the *Amazon S3 Developer Guide*.
.. _`Uploading Objects`: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html
awscli-1.10.1/awscli/examples/s3api/get-bucket-policy.rst

The following command retrieves the bucket policy for a bucket named ``my-bucket``::
aws s3api get-bucket-policy --bucket my-bucket
Output::
{
"Policy": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my-bucket/*\"},{\"Sid\":\"\",\"Effect\":\"Deny\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my-bucket/secret/*\"}]}"
}
Get and put a bucket policy
---------------------------
The following example shows how you can download an Amazon S3 bucket policy,
make modifications to the file, and then use ``put-bucket-policy`` to
apply the modified bucket policy. To download the bucket policy to a file,
you can run::
aws s3api get-bucket-policy --bucket mybucket --query Policy --output text > policy.json
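The downloaded file contains the policy as a single line of JSON. Assuming Python is available on your workstation, one optional way to pretty-print it before editing is::

    python -m json.tool policy.json > policy-pretty.json

The AWS CLI accepts the policy document with or without whitespace, so this step is purely for readability.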
You can then modify the ``policy.json`` file as needed. Finally you can apply
this modified policy back to the S3 bucket by running::
aws s3api put-bucket-policy --bucket mybucket --policy file://policy.json
awscli-1.10.1/awscli/examples/s3api/get-bucket-notification.rst

The following command retrieves the notification configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-notification --bucket my-bucket
Output::
{
"TopicConfiguration": {
"Topic": "arn:aws:sns:us-west-2:123456789012:my-notification-topic",
"Id": "YmQzMmEwM2EjZWVlI0NGItNzVtZjI1MC00ZjgyLWZDBiZWNl",
"Event": "s3:ObjectCreated:*",
"Events": [
"s3:ObjectCreated:*"
]
}
}
awscli-1.10.1/awscli/examples/s3api/get-bucket-versioning.rst

The following command retrieves the versioning configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-versioning --bucket my-bucket
Output::
{
"Status": "Enabled"
}
awscli-1.10.1/awscli/examples/s3api/put-bucket-tagging.rst

The following command applies a tagging configuration to a bucket named ``my-bucket``::
aws s3api put-bucket-tagging --bucket my-bucket --tagging file://tagging.json
The file ``tagging.json`` is a JSON document in the current folder that specifies tags::
{
"TagSet": [
{
"Key": "organization",
"Value": "marketing"
}
]
}
awscli-1.10.1/awscli/examples/s3api/put-object-acl.rst

The following command grants ``full control`` to two AWS users (*user1@example.com* and *user2@example.com*) and ``read``
permission to everyone::
aws s3api put-object-acl --bucket MyBucket --key file.txt --grant-full-control emailaddress=user1@example.com,emailaddress=user2@example.com --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html for details on custom ACLs (the s3api ACL
commands, such as ``put-object-acl``, use the same shorthand argument notation).
awscli-1.10.1/awscli/examples/s3api/abort-multipart-upload.rst

The following command aborts a multipart upload for the key ``multipart/01`` in the bucket ``my-bucket``::
aws s3api abort-multipart-upload --bucket my-bucket --key 'multipart/01' --upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R
The upload ID required by this command is output by ``create-multipart-upload`` and can also be retrieved with ``list-multipart-uploads``.
awscli-1.10.1/awscli/examples/s3api/put-bucket-replication.rst

The following command applies a replication configuration to a bucket named ``my-bucket``::
aws s3api put-bucket-replication --bucket my-bucket --replication-configuration file://replication.json
The file ``replication.json`` is a JSON document in the current folder that specifies a replication rule::
{
"Role": "arn:aws:iam::123456789012:role/s3-replication-role",
"Rules": [
{
"Prefix": "",
"Status": "Enabled",
"Destination": {
"Bucket": "arn:aws:s3:::my-bucket-backup",
"StorageClass": "STANDARD"
}
}
]
}
The destination bucket must be in a different region and have versioning enabled. The service role must have permission to write to the destination bucket and have a trust relationship that allows Amazon S3 to assume it.
Example service role permissions::
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
Trust relationship::
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
awscli-1.10.1/awscli/examples/s3api/delete-object.rst

The following command deletes an object named ``test.txt`` from a bucket named ``my-bucket``::
aws s3api delete-object --bucket my-bucket --key test.txt
If bucket versioning is enabled, the output will contain the version ID of the delete marker::
{
"VersionId": "9_gKg5vG56F.TTEUdwkxGpJ3tNDlWlGq",
"DeleteMarker": true
}
For more information about deleting objects, see `Deleting Objects`_ in the *Amazon S3 Developer Guide*.
.. _`Deleting Objects`: http://docs.aws.amazon.com/AmazonS3/latest/dev/DeletingObjects.html
awscli-1.10.1/awscli/examples/s3api/list-objects.rst

The following example uses the ``list-objects`` command to display the names of all the objects in the specified bucket::
aws s3api list-objects --bucket text-content --query 'Contents[].{Key: Key, Size: Size}'
The example uses the ``--query`` argument to filter the output of
``list-objects`` down to the key value and size for each object.
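The ``--query`` option accepts other JMESPath expressions as well. For example, one way to report the combined size in bytes of every object in the bucket is JMESPath's built-in ``sum`` function::

    aws s3api list-objects --bucket text-content --query 'sum(Contents[].Size)'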
For more information about objects, see `Working with Amazon S3 Objects`_ in the *Amazon S3 Developer Guide*.
.. _`Working with Amazon S3 Objects`: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html
awscli-1.10.1/awscli/examples/s3api/create-bucket.rst

The following command creates a bucket named ``my-bucket``::
aws s3api create-bucket --bucket my-bucket --region us-east-1
Output::
{
"Location": "/my-bucket"
}
The following command creates a bucket named ``my-bucket`` in the
``eu-west-1`` region. Regions outside of ``us-east-1`` require the appropriate
``LocationConstraint`` to be specified in order to create the bucket in the
desired region::
$ aws s3api create-bucket --bucket my-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
Output::
{
"Location": "http://my-bucket.s3.amazonaws.com/"
}
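Because ``us-east-1`` is the one region that must be created without a ``LocationConstraint``, a script that takes the target region from a shell variable (``$region`` here is an assumption of this sketch) might branch on it::

    if [ "$region" = "us-east-1" ]; then
        aws s3api create-bucket --bucket my-bucket --region us-east-1
    else
        aws s3api create-bucket --bucket my-bucket --region "$region" \
            --create-bucket-configuration LocationConstraint="$region"
    fi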
awscli-1.10.1/awscli/examples/s3api/put-bucket-policy.rst

This example allows all users to retrieve any object in *MyBucket* except those in the *MySecretFolder*. It also
grants ``put`` and ``delete`` permission to the root user of the AWS account ``1234-5678-9012``::
aws s3api put-bucket-policy --bucket MyBucket --policy file://policy.json
``policy.json``::
{
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MyBucket/*"
},
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MyBucket/MySecretFolder/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": [
"s3:DeleteObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::MyBucket/*"
}
]
}
awscli-1.10.1/awscli/examples/s3api/put-bucket-notification.rst

The following command applies a notification configuration to a bucket named ``my-bucket``::
aws s3api put-bucket-notification --bucket my-bucket --notification-configuration file://notification.json
The file ``notification.json`` is a JSON document in the current folder that specifies an SNS topic and an event type to monitor::
{
"TopicConfiguration": {
"Event": "s3:ObjectCreated:*",
"Topic": "arn:aws:sns:us-west-2:123456789012:s3-notification-topic"
}
}
The SNS topic must have an IAM policy attached to it that allows Amazon S3 to publish to it::
{
"Version": "2008-10-17",
"Id": "example-ID",
"Statement": [
{
"Sid": "example-statement-ID",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": [
"SNS:Publish"
],
"Resource": "arn:aws:sns:us-west-2:123456789012:my-bucket",
"Condition": {
"ArnLike": {
"aws:SourceArn": "arn:aws:s3:*:*:my-bucket"
}
}
}
]
}
awscli-1.10.1/awscli/examples/s3api/delete-bucket-policy.rst

The following command deletes a bucket policy from a bucket named ``my-bucket``::
aws s3api delete-bucket-policy --bucket my-bucket
awscli-1.10.1/awscli/examples/s3api/put-bucket-lifecycle.rst

The following command applies a lifecycle configuration to the bucket ``my-bucket``::
aws s3api put-bucket-lifecycle --bucket my-bucket --lifecycle-configuration file://lifecycle.json
The file ``lifecycle.json`` is a JSON document in the current folder that specifies two rules::
{
"Rules": [
{
"ID": "Move to Glacier after sixty days (objects in logs/2015/)",
"Prefix": "logs/2015/",
"Status": "Enabled",
"Transition": {
"Days": 60,
"StorageClass": "GLACIER"
}
},
{
"Expiration": {
"Date": "2016-01-01T00:00:00.000Z"
},
"ID": "Delete 2014 logs in 2016.",
"Prefix": "logs/2014/",
"Status": "Enabled"
}
]
}
The first rule moves files to Amazon Glacier after sixty days. The second rule deletes files from Amazon S3 on the specified date. For information on acceptable timestamp formats, see `Specifying Parameter Values`_ in the *AWS CLI User Guide*.
Each rule in the above example specifies a policy (``Transition`` or ``Expiration``) and file prefix (folder name) to which it applies. You can also create a rule that applies to an entire bucket by specifying a blank prefix::
{
"Rules": [
{
"ID": "Move to Glacier after sixty days (all objects in bucket)",
"Prefix": "",
"Status": "Enabled",
"Transition": {
"Days": 60,
"StorageClass": "GLACIER"
}
}
]
}
.. _`Specifying Parameter Values`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html
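To confirm that the rules were applied, you can read the configuration back with the corresponding ``get`` command, which returns the same ``Rules`` structure that was uploaded::

    aws s3api get-bucket-lifecycle --bucket my-bucket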
awscli-1.10.1/awscli/examples/s3api/get-bucket-location.rst

The following command retrieves the location constraint for a bucket named ``my-bucket``, if a constraint exists::
aws s3api get-bucket-location --bucket my-bucket
Output::
{
"LocationConstraint": "us-west-2"
}
awscli-1.10.1/awscli/examples/s3api/get-bucket-replication.rst

The following command retrieves the replication configuration for a bucket named ``my-bucket``::
aws s3api get-bucket-replication --bucket my-bucket
Output::
{
"ReplicationConfiguration": {
"Rules": [
{
"Status": "Enabled",
"Prefix": "",
"Destination": {
"Bucket": "arn:aws:s3:::my-bucket-backup",
"StorageClass": "STANDARD"
},
"ID": "ZmUwNzE4ZmQ4tMjVhOS00MTlkLOGI4NDkzZTIWJjNTUtYTA1"
}
],
"Role": "arn:aws:iam::123456789012:role/s3-replication-role"
}
}
awscli-1.10.1/awscli/examples/s3api/list-object-versions.rst

The following command retrieves version information for an object in a bucket named ``my-bucket``::
aws s3api list-object-versions --bucket my-bucket --key index.html
Output::
{
"DeleteMarkers": [
{
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd660687538875e7c86c5b672fe116bd438f46db45460ddcd036c32"
},
"IsLatest": true,
"VersionId": "B2VsEK5saUNNHKcOAJj7hIE86RozToyq",
"Key": "index.html",
"LastModified": "2015-11-10T00:57:03.000Z"
},
{
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd660687538875e7c86c5b672fe116bd438f46db45460ddcd036c32"
},
"IsLatest": false,
"VersionId": ".FLQEZscLIcfxSq.jsFJ.szUkmng2Yw6",
"Key": "index.html",
"LastModified": "2015-11-09T23:32:20.000Z"
}
],
"Versions": [
{
"LastModified": "2015-11-10T00:20:11.000Z",
"VersionId": "Rb_l2T8UHDkFEwCgJjhlgPOZC0qJ.vpD",
"ETag": "\"0622528de826c0df5db1258a23b80be5\"",
"StorageClass": "STANDARD",
"Key": "index.html",
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd660687538875e7c86c5b672fe116bd438f46db45460ddcd036c32"
},
"IsLatest": false,
"Size": 38
},
{
"LastModified": "2015-11-09T23:26:41.000Z",
"VersionId": "rasWWGpgk9E4s0LyTJgusGeRQKLVIAFf",
"ETag": "\"06225825b8028de826c0df5db1a23be5\"",
"StorageClass": "STANDARD",
"Key": "index.html",
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd660687538875e7c86c5b672fe116bd438f46db45460ddcd036c32"
},
"IsLatest": false,
"Size": 38
},
{
"LastModified": "2015-11-09T22:50:50.000Z",
"VersionId": "null",
"ETag": "\"d1f45267a863c8392e07d24dd592f1b9\"",
"StorageClass": "STANDARD",
"Key": "index.html",
"Owner": {
"DisplayName": "my-username",
"ID": "7009a8971cd660687538875e7c86c5b672fe116bd438f46db45460ddcd036c32"
},
"IsLatest": false,
"Size": 533823
}
]
}
awscli-1.10.1/awscli/examples/s3api/delete-bucket-cors.rst

The following command deletes a Cross-Origin Resource Sharing configuration from a bucket named ``my-bucket``::
aws s3api delete-bucket-cors --bucket my-bucket
awscli-1.10.1/awscli/examples/s3api/delete-bucket.rst

The following command deletes a bucket named ``my-bucket``::
aws s3api delete-bucket --bucket my-bucket --region us-east-1
awscli-1.10.1/awscli/examples/s3api/delete-bucket-website.rst

The following command deletes a website configuration from a bucket named ``my-bucket``::
aws s3api delete-bucket-website --bucket my-bucket
awscli-1.10.1/awscli/examples/s3api/head-bucket.rst

The following command verifies access to a bucket named ``my-bucket``::
aws s3api head-bucket --bucket my-bucket
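The command also sets the CLI's exit status, so a shell script can use it as a quick existence and access check (a sketch; the error output is discarded)::

    if aws s3api head-bucket --bucket my-bucket 2>/dev/null; then
        echo "bucket exists and is accessible"
    else
        echo "bucket is missing or access is denied"
    fi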
If the bucket exists and you have access to it, no output is returned. Otherwise, an error message will be shown. For example::
A client error (404) occurred when calling the HeadBucket operation: Not Found
awscli-1.10.1/awscli/examples/s3api/complete-multipart-upload.rst

The following command completes a multipart upload for the key ``multipart/01`` in the bucket ``my-bucket``::
aws s3api complete-multipart-upload --multipart-upload file://mpustruct --bucket my-bucket --key 'multipart/01' --upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R
The upload ID required by this command is output by ``create-multipart-upload`` and can also be retrieved with ``list-multipart-uploads``.
The ``--multipart-upload`` option in the above command takes a JSON structure that describes the parts of the multipart upload that should be reassembled into the complete file. In this example, the ``file://`` prefix is used to load the JSON structure from a file in the local folder named ``mpustruct``.
mpustruct::
{
"Parts": [
{
"ETag": "e868e0f4719e394144ef36531ee6824c",
"PartNumber": 1
},
{
"ETag": "6bb2b12753d66fe86da4998aa33fffb0",
"PartNumber": 2
},
{
"ETag": "d0a0112e841abec9c9ec83406f0159c8",
"PartNumber": 3
}
]
}
The ETag value for each part is output each time you upload a part with the ``upload-part`` command, and can also be retrieved by calling ``list-parts`` or calculated by taking the MD5 checksum of each part.
Output::
{
"ETag": "\"3944a9f7a4faab7f78788ff6210f63f0-3\"",
"Bucket": "my-bucket",
"Location": "https://my-bucket.s3.amazonaws.com/multipart%2F01",
"Key": "multipart/01"
}
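As noted above, each part's ETag can be computed locally. Assuming the parts exist as local files named ``part1``, ``part2``, and ``part3`` (hypothetical names) and were uploaded without server-side encryption, in which case the ETag is the plain MD5 digest, you can check them with ``md5sum``::

    md5sum part1 part2 part3

Each checksum should match the quoted ``ETag`` value that ``upload-part`` returned for the corresponding part.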
awscli-1.10.1/awscli/examples/ses/verify-domain-dkim.rst

**To generate a verified domain's DKIM tokens for DKIM signing with Amazon SES**
The following example uses the ``verify-domain-dkim`` command to generate DKIM tokens for a domain that has been verified with Amazon SES::
aws ses verify-domain-dkim --domain example.com
Output::
{
"DkimTokens": [
"EXAMPLEq76owjnks3lnluwg65scbemvw",
"EXAMPLEi3dnsj67hstzaj673klariwx2",
"EXAMPLEwfbtcukvimehexktmdtaz6naj"
]
}
To set up DKIM, you must use the returned DKIM tokens to update your domain's DNS settings with CNAME records that point to DKIM public keys hosted by Amazon SES. For more information, see `Easy DKIM in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Easy DKIM in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/easy-dkim.html
awscli-1.10.1/awscli/examples/ses/set-identity-dkim-enabled.rst

**To enable or disable Easy DKIM for an Amazon SES verified identity**
The following example uses the ``set-identity-dkim-enabled`` command to disable DKIM for a verified email address::
aws ses set-identity-dkim-enabled --identity user@example.com --no-dkim-enabled
For more information about Easy DKIM, see `Easy DKIM in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Easy DKIM in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/easy-dkim.html
awscli-1.10.1/awscli/examples/ses/get-send-statistics.rst

**To get your Amazon SES sending statistics**
The following example uses the ``get-send-statistics`` command to return your Amazon SES sending statistics ::
aws ses get-send-statistics
Output::
{
"SendDataPoints": [
{
"Complaints": 0,
"Timestamp": "2013-06-12T19:32:00Z",
"DeliveryAttempts": 2,
"Bounces": 0,
"Rejects": 0
},
{
"Complaints": 0,
"Timestamp": "2013-06-12T00:47:00Z",
"DeliveryAttempts": 1,
"Bounces": 0,
"Rejects": 0
}
]
}
The result is a list of data points, representing the last two weeks of sending activity. Each data point in the list
contains statistics for a 15-minute interval.
In this example, there are only two data points because the only emails that the user sent in the last two weeks fell
within two 15-minute intervals.
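The ``--query`` option can aggregate the data points client-side. For example, one way to total delivery attempts across every interval in the two-week window is JMESPath's ``sum`` function::

    aws ses get-send-statistics --query 'sum(SendDataPoints[].DeliveryAttempts)'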
For more information, see `Monitoring Your Amazon SES Usage Statistics`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Monitoring Your Amazon SES Usage Statistics`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-usage-statistics.html
awscli-1.10.1/awscli/examples/ses/get-identity-notification-attributes.rst

**To get the Amazon SES notification attributes for a list of identities**
The following example uses the ``get-identity-notification-attributes`` command to retrieve the Amazon SES notification attributes for a list of identities::
aws ses get-identity-notification-attributes --identities "user1@example.com" "user2@example.com"
Output::
{
"NotificationAttributes": {
"user1@example.com": {
"ForwardingEnabled": false,
"ComplaintTopic": "arn:aws:sns:us-east-1:EXAMPLE65304:MyTopic",
"BounceTopic": "arn:aws:sns:us-east-1:EXAMPLE65304:MyTopic",
"DeliveryTopic": "arn:aws:sns:us-east-1:EXAMPLE65304:MyTopic"
},
"user2@example.com": {
"ForwardingEnabled": true
}
}
}
This command returns the status of email feedback forwarding and, if applicable, the Amazon Resource Names (ARNs) of the Amazon SNS topics that bounce, complaint, and delivery notifications are sent to.
If you call this command with an identity that you have never submitted for verification, that identity won't appear in the output.
For more information about notifications, see `Using Notifications With Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Using Notifications With Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/notifications.html
awscli-1.10.1/awscli/examples/ses/get-send-quota.rst

**To get your Amazon SES sending limits**
The following example uses the ``get-send-quota`` command to return your Amazon SES sending limits::
aws ses get-send-quota
Output::
{
"Max24HourSend": 200.0,
"SentLast24Hours": 1.0,
"MaxSendRate": 1.0
}
``Max24HourSend`` is your sending quota, which is the maximum number of emails that you can send in a 24-hour period.
The sending quota reflects a rolling time period. Every time you try to send an email, Amazon SES checks how many
emails you sent in the previous 24 hours. As long as the total number of emails that you have sent is less than
your quota, your send request will be accepted and your email will be sent.
``SentLast24Hours`` is the number of emails that you have sent in the previous 24 hours.
``MaxSendRate`` is the maximum number of emails that you can send per second.
Note that sending limits are based on recipients rather than on messages. For example, an email that has 10 recipients
counts as 10 against your sending quota.
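For scripting, individual values can be extracted with ``--query``. For example, a sketch that prints how many more messages can be sent in the current 24-hour window (using ``bc`` because the API returns decimal values)::

    sent=$(aws ses get-send-quota --query 'SentLast24Hours' --output text)
    max=$(aws ses get-send-quota --query 'Max24HourSend' --output text)
    echo "Remaining: $(echo "$max - $sent" | bc)"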
For more information, see `Managing Your Amazon SES Sending Limits`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Managing Your Amazon SES Sending Limits`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/manage-sending-limits.html
awscli-1.10.1/awscli/examples/ses/verify-domain-identity.rst

**To verify a domain with Amazon SES**
The following example uses the ``verify-domain-identity`` command to verify a domain::
aws ses verify-domain-identity --domain example.com
Output::
{
"VerificationToken": "eoEmxw+YaYhb3h3iVJHuXMJXqeu1q1/wwmvjuEXAMPLE"
}
To complete domain verification, you must add a TXT record with the returned verification token to your domain's DNS settings. For more information, see `Verifying Domains in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Verifying Domains in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domains.html
awscli-1.10.1/awscli/examples/ses/verify-email-identity.rst

**To verify an email address with Amazon SES**
The following example uses the ``verify-email-identity`` command to verify an email address::
aws ses verify-email-identity --email-address user@example.com
Before you can send an email using Amazon SES, you must verify the address or domain that you are sending the email
from to prove that you own it. If you do not have production access yet, you also need to verify any email addresses
that you send emails to except for email addresses provided by the Amazon SES mailbox simulator.
After ``verify-email-identity`` is called, the email address will receive a verification email. The user must click on the link in
the email to complete the verification process.
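While you wait for the recipient of the verification email to click the link, you can poll the state with ``get-identity-verification-attributes``; the ``VerificationStatus`` changes from ``Pending`` to ``Success`` once verification completes::

    aws ses get-identity-verification-attributes --identities user@example.com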
For more information, see `Verifying Email Addresses in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Verifying Email Addresses in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-email-addresses.html
awscli-1.10.1/awscli/examples/ses/send-email.rst

**To send a formatted email using Amazon SES**
The following example uses the ``send-email`` command to send a formatted email::
aws ses send-email --from sender@example.com --destination file://c:\temp\destination.json --message file://c:\temp\message.json
Output::
{
"MessageId": "EXAMPLEf3a5efcd1-51adec81-d2a4-4e3f-9fe2-5d85c1b23783-000000"
}
The destination and the message are JSON data structures saved in .json files in a directory called c:\\temp. These files are as follows:
``destination.json``::
{
"ToAddresses": ["recipient1@example.com", "recipient2@example.com"],
"CcAddresses": ["recipient3@example.com"],
"BccAddresses": []
}
``message.json``::
{
"Subject": {
"Data": "Test email sent using the AWS CLI",
"Charset": "UTF-8"
},
"Body": {
"Text": {
"Data": "This is the message body in text format.",
"Charset": "UTF-8"
},
"Html": {
"Data": "This message body contains HTML formatting. It can, for example, contain links like this one: Amazon SES Developer Guide.",
"Charset": "UTF-8"
}
}
}
Replace the sender and recipient email addresses with the ones you want to use. Note that the sender's email address must be verified with Amazon SES. Until you are granted production access to Amazon SES, you must also verify the email address of each recipient
unless the recipient is the Amazon SES mailbox simulator. For more information on verification, see `Verifying Email Addresses and Domains in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
The Message ID in the output indicates that the call to ``send-email`` was successful.
If you don't receive the email, check your junk mail folder.
For more information on sending formatted email, see `Sending Formatted Email Using the Amazon SES API`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Verifying Email Addresses and Domains in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html
.. _`Sending Formatted Email Using the Amazon SES API`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-formatted.html
awscli-1.10.1/awscli/examples/ses/delete-identity.rst

**To delete an identity**
The following example uses the ``delete-identity`` command to delete an identity from the list of identities verified with Amazon SES::
aws ses delete-identity --identity user@example.com
For more information about verified identities, see `Verifying Email Addresses and Domains in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Verifying Email Addresses and Domains in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html
awscli-1.10.1/awscli/examples/ses/get-identity-dkim-attributes.rst

**To get the Amazon SES Easy DKIM attributes for a list of identities**
The following example uses the ``get-identity-dkim-attributes`` command to retrieve the Amazon SES Easy DKIM attributes for a list of identities::
aws ses get-identity-dkim-attributes --identities "example.com" "user@example.com"
Output::
{
"DkimAttributes": {
"example.com": {
"DkimTokens": [
"EXAMPLEjcs5xoyqytjsotsijas7236gr",
"EXAMPLEjr76cvoc6mysspnioorxsn6ep",
"EXAMPLEkbmkqkhlm2lyz77ppkulerm4k"
],
"DkimEnabled": true,
"DkimVerificationStatus": "Success"
},
"user@example.com": {
"DkimEnabled": false,
"DkimVerificationStatus": "NotStarted"
}
}
}
If you call this command with an identity that you have never submitted for verification, that identity won't appear in the output.
For more information about Easy DKIM, see `Easy DKIM in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Easy DKIM in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/easy-dkim.html
awscli-1.10.1/awscli/examples/ses/set-identity-feedback-forwarding-enabled.rst

**To enable or disable bounce and complaint email feedback forwarding for an Amazon SES verified identity**
The following example uses the ``set-identity-feedback-forwarding-enabled`` command to enable a verified email address to receive bounce and complaint notifications by email::
aws ses set-identity-feedback-forwarding-enabled --identity user@example.com --forwarding-enabled
You are required to receive bounce and complaint notifications via either Amazon SNS or email feedback forwarding, so you can only disable email feedback forwarding if you select an Amazon SNS topic for both bounce and complaint notifications.
For more information about notifications, see `Using Notifications With Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Using Notifications With Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/notifications.html
**To get the Amazon SES verification status for a list of identities**
The following example uses the ``get-identity-verification-attributes`` command to retrieve the Amazon SES verification status for a list of identities::
aws ses get-identity-verification-attributes --identities "user1@example.com" "user2@example.com"
Output::
{
"VerificationAttributes": {
"user1@example.com": {
"VerificationStatus": "Success"
},
"user2@example.com": {
"VerificationStatus": "Pending"
}
}
}
If you call this command with an identity that you have never submitted for verification, that identity won't appear in the output.
For more information about verified identities, see `Verifying Email Addresses and Domains in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Verifying Email Addresses and Domains in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html
**To set the Amazon SNS topic to which Amazon SES will publish bounce, complaint, and/or delivery notifications for a verified identity**
The following example uses the ``set-identity-notification-topic`` command to specify the Amazon SNS topic to which a verified email address will receive bounce notifications::
aws ses set-identity-notification-topic --identity user@example.com --notification-type Bounce --sns-topic arn:aws:sns:us-east-1:EXAMPLE65304:MyTopic
For more information about notifications, see `Using Notifications With Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Using Notifications With Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/notifications.html
**To list all identities (email addresses and domains) for a specific AWS account**
The following example uses the ``list-identities`` command to list all identities that have been submitted for verification with Amazon SES::
aws ses list-identities
Output::
{
"Identities": [
"user@example.com",
"example.com"
]
}
The list that is returned contains all identities regardless of verification status (verified, pending verification, failure, etc.).
In this example, email addresses *and* domains are returned because we did not specify the ``--identity-type`` parameter.
For more information about verification, see `Verifying Email Addresses and Domains in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Verifying Email Addresses and Domains in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html
**To send a raw email using Amazon SES**
The following example uses the ``send-raw-email`` command to send an email with a TXT attachment::
aws ses send-raw-email --raw-message file://c:\temp\message.json
Output::
{
"MessageId": "EXAMPLEf3f73d99b-c63fb06f-d263-41f8-a0fb-d0dc67d56c07-000000"
}
The raw message is a JSON data structure saved in the message.json file. It contains the following::
{
"Data": "From: sender@example.com\nTo: recipient@example.com\nSubject: Test email sent using the AWS CLI (contains an attachment)\nMIME-Version: 1.0\nContent-type: Multipart/Mixed; boundary=\"NextPart\"\n\n--NextPart\nContent-Type: text/plain\n\nThis is the message body.\n\n--NextPart\nContent-Type: text/plain;\nContent-Disposition: attachment; filename=\"attachment.txt\"\n\nThis is the text in the attachment.\n\n--NextPart--"
}
As you can see, ``Data`` is a single long string that contains the entire raw email content in MIME format, including an attachment called ``attachment.txt``.
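Rather than hand-writing the MIME string, you can generate an equivalent ``message.json`` with Python's standard ``email`` package (a sketch; the addresses and filenames are the same placeholders used above):

```python
import json
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart message with a plain-text body and a text attachment.
msg = MIMEMultipart("mixed")
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Test email sent using the AWS CLI (contains an attachment)"

msg.attach(MIMEText("This is the message body.", "plain"))

attachment = MIMEText("This is the text in the attachment.", "plain")
attachment.add_header("Content-Disposition", "attachment", filename="attachment.txt")
msg.attach(attachment)

# Wrap the raw MIME text in the JSON structure that --raw-message expects.
with open("message.json", "w") as f:
    json.dump({"Data": msg.as_string()}, f)
```

The resulting file can then be passed to ``send-raw-email`` with ``--raw-message file://message.json``.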
Replace sender@example.com and recipient@example.com with the addresses you want to use. Note that the sender's email address must be verified with Amazon SES. Until you are granted production access to Amazon SES, you must also verify the email address of the recipient
unless the recipient is the Amazon SES mailbox simulator. For more information on verification, see `Verifying Email Addresses and Domains in Amazon SES`_ in the *Amazon Simple Email Service Developer Guide*.
The message ID in the output indicates that the call to ``send-raw-email`` was successful.
If you don't receive the email, check your junk mail folder.
For more information on sending raw email, see `Sending Raw Email Using the Amazon SES API`_ in the *Amazon Simple Email Service Developer Guide*.
.. _`Sending Raw Email Using the Amazon SES API`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-raw.html
.. _`Verifying Email Addresses and Domains in Amazon SES`: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html
The following command cancels the specified job::
aws importexport cancel-job --job-id EX1ID
Only jobs that were created by the AWS account you're currently using can be canceled. Jobs that have already completed cannot be canceled.
The following command lists the jobs you've created::
aws importexport list-jobs
The output for the list-jobs command looks like the following::
JOBS 2015-05-27T18:58:21Z False EX1ID Import
You can only list jobs created by users under the AWS account you are currently using. Listing jobs returns useful information, like job IDs, which are necessary for other AWS Import/Export commands.
The following command creates an import job from a manifest file::
aws importexport create-job --job-type import --manifest file://manifest --no-validate-only
The file ``manifest`` is a YAML-formatted text file in the current directory with the following content::
manifestVersion: 2.0;
returnAddress:
name: Jane Roe
company: Example Corp.
street1: 123 Any Street
city: Anytown
stateOrProvince: WA
postalCode: 91011-1111
phoneNumber: 206-555-1111
country: USA
deviceId: 49382
eraseDevice: yes
notificationEmail: john.doe@example.com;jane.roe@example.com
bucket: myBucket
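Before submitting a job, you can sanity-check the manifest for required top-level keys with a few lines of Python (a sketch; the key list here is illustrative, not the authoritative manifest schema):

```python
# A minimal pre-flight check of an Import/Export manifest. The
# REQUIRED_KEYS set below is an illustrative subset, not the full
# schema from the AWS Import/Export Developer Guide.
manifest_text = """\
manifestVersion: 2.0;
returnAddress:
  name: Jane Roe
deviceId: 49382
bucket: myBucket
"""

REQUIRED_KEYS = {"manifestVersion", "returnAddress", "deviceId", "bucket"}

def top_level_keys(text):
    """Collect keys that start at column zero (nested keys are skipped)."""
    keys = set()
    for line in text.splitlines():
        if line and not line[0].isspace() and ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return keys

missing = REQUIRED_KEYS - top_level_keys(manifest_text)
print("missing keys:", sorted(missing))  # -> missing keys: []
```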
For more information on the manifest file format, see `Creating Import Manifests`_ in the *AWS Import/Export Developer Guide*.
.. _`Creating Import Manifests`: http://docs.aws.amazon.com/AWSImportExport/latest/DG/ImportManifestFile.html
You can also pass the manifest as a string in quotes::
aws importexport create-job --job-type import --manifest 'manifestVersion: 2.0;
returnAddress:
name: Jane Roe
company: Example Corp.
street1: 123 Any Street
city: Anytown
stateOrProvince: WA
postalCode: 91011-1111
phoneNumber: 206-555-1111
country: USA
deviceId: 49382
eraseDevice: yes
notificationEmail: john.doe@example.com;jane.roe@example.com
bucket: myBucket'
For information on quoting string arguments and using files, see `Specifying Parameter Values`_ in the *AWS CLI User Guide*.
.. _`Specifying Parameter Values`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html
The following command creates a pre-paid shipping label for the specified job::
aws importexport get-shipping-label --job-ids EX1ID --name "Jane Roe" --company "Example Corp." --phone-number "206-555-1111" --country "USA" --state-or-province "WA" --city "Anytown" --postal-code "91011-1111" --street-1 "123 Any Street"
The output for the get-shipping-label command looks like the following::
https://s3.amazonaws.com/myBucket/shipping-label-EX1ID.pdf
The link in the output contains the pre-paid shipping label generated in a PDF. It also contains shipping instructions with a unique bar code to identify and authenticate your device. For more information about using the pre-paid shipping label and shipping your device, see `Shipping Your Storage Device`_ in the *AWS Import/Export Developer Guide*.
.. _`Shipping Your Storage Device`: http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_ShippingYourStorageDevice.html
The following command returns the status of the specified job::
aws importexport get-status --job-id EX1ID
The output for the get-status command looks like the following::
2015-05-27T18:58:21Z manifestVersion:2.0
generator:Text editor
bucket:myBucket
deviceId:49382
eraseDevice:yes
notificationEmail:john.doe@example.com;jane.roe@example.com
trueCryptPassword:password123
acl:private
serviceLevel:standard
returnAddress:
name: Jane Roe
company: Example Corp.
street1: 123 Any Street
street2:
street3:
city: Anytown
stateOrProvince: WA
postalCode: 91011-1111
country:USA
phoneNumber:206-555-1111 0 EX1ID Import NotReceived AWS has not received your device. Pending The specified job has not started.
ktKDXpdbEXAMPLEyGFJmQO744UHw= version:2.0
signingMethod:HmacSHA1
jobId:EX1ID
signature:ktKDXpdbEXAMPLEyGFJmQO744UHw=
When you ship your device, it will be delivered to a sorting facility, and then forwarded on to an AWS data center. Note that when you send a get-status command, the status of your job will not show as ``At AWS`` until the shipment has been received at the AWS data center.
The following command updates the specified job::
aws importexport update-job --job-id EX1ID --job-type import --manifest file://manifest.txt --no-validate-only
The output for the update-jobs command looks like the following::
True **** Device will be erased before being returned. ****
With this command, you can either modify the original manifest you submitted, or you can start over and create a new manifest file. In either case, the original manifest is discarded.
The following command uploads an archive in the current folder named ``archive.zip`` to a vault named ``my-vault``::
aws glacier upload-archive --account-id - --vault-name my-vault --body archive.zip
Output::
{
"archiveId": "kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw",
"checksum": "969fb39823836d81f0cc028195fcdbcbbe76cdde932d4646fa7de5f21e18aa67",
"location": "/0123456789012/vaults/my-vault/archives/kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw"
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
To retrieve an uploaded archive, initiate a retrieval job with the `aws glacier initiate-job`_ command.
.. _`aws glacier initiate-job`: http://docs.aws.amazon.com/cli/latest/reference/glacier/initiate-job.html

The following command lists the uploaded parts for a multipart upload to a vault named ``my-vault``::
aws glacier list-parts --account-id - --vault-name my-vault --upload-id "SYZi7qnL-YGqGwAm8Kn3BLP2ElNCvnB-5961R09CSaPmPwkYGHOqeN_nX3-Vhnd2yF0KfB5FkmbnBU9GubbdrCs8ut-D"
Output::
{
"MultipartUploadId": "SYZi7qnL-YGqGwAm8Kn3BLP2ElNCvnB-5961R09CSaPmPwkYGHOqeN_nX3-Vhnd2yF0KfB5FkmbnBU9GubbdrCs8ut-D",
"Parts": [
{
"RangeInBytes": "0-1048575",
"SHA256TreeHash": "e1f2a7cd6e047350f69b9f8cfa60fa606fe2f02802097a9a026360a7edc1f553"
},
{
"RangeInBytes": "1048576-2097151",
"SHA256TreeHash": "43cf3061fb95796aed99a11a6aa3cd8f839eed15e655ab0a597126210636aee6"
}
],
"VaultARN": "arn:aws:glacier:us-west-2:0123456789012:vaults/my-vault",
"CreationDate": "2015-07-18T00:05:23.830Z",
"PartSizeInBytes": 1048576
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
For more information on multipart uploads to Amazon Glacier using the AWS CLI, see `Using Amazon Glacier`_ in the *AWS CLI User Guide*.
.. _`Using Amazon Glacier`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html

The following command retrieves information about an inventory retrieval job on a vault named ``my-vault``::
aws glacier describe-job --account-id - --vault-name my-vault --job-id zbxcm3Z_3z5UkoroF7SuZKrxgGoDc3RloGduS7Eg-RO47Yc6FxsdGBgf_Q2DK5Ejh18CnTS5XW4_XqlNHS61dsO4CnMW
Output::
{
"InventoryRetrievalParameters": {
"Format": "JSON"
},
"VaultARN": "arn:aws:glacier:us-west-2:0123456789012:vaults/my-vault",
"Completed": false,
"JobId": "zbxcm3Z_3z5UkoroF7SuZKrxgGoDc3RloGduS7Eg-RO47Yc6FxsdGBgf_Q2DK5Ejh18CnTS5XW4_XqlNHS61dsO4CnMW",
"Action": "InventoryRetrieval",
"CreationDate": "2015-07-17T20:23:41.616Z",
"StatusCode": "InProgress"
}
The job ID can be found in the output of ``aws glacier initiate-job`` and ``aws glacier list-jobs``.
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command gets a description of the notification configuration for a vault named ``my-vault``::
aws glacier get-vault-notifications --account-id - --vault-name my-vault
Output::
{
"vaultNotificationConfig": {
"Events": [
"InventoryRetrievalCompleted",
"ArchiveRetrievalCompleted"
],
"SNSTopic": "arn:aws:sns:us-west-2:0123456789012:my-vault"
}
}
If no notifications have been configured for the vault, an error is returned. Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command initiates a multipart upload to a vault named ``my-vault`` with a part size of 1 MiB (1024 x 1024 bytes) per part::
aws glacier initiate-multipart-upload --account-id - --part-size 1048576 --vault-name my-vault --archive-description "multipart upload test"
The archive description parameter is optional. Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
This command outputs an upload ID when successful. Use the upload ID when uploading each part of your archive with ``aws glacier upload-multipart-part``. For more information on multipart uploads to Amazon Glacier using the AWS CLI, see `Using Amazon Glacier`_ in the *AWS CLI User Guide*.
.. _`Using Amazon Glacier`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html

The following command creates a new vault named ``my-vault``::
aws glacier create-vault --vault-name my-vault --account-id -
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.

The following command initiates a job to get an inventory of the vault ``my-vault``::
aws glacier initiate-job --account-id - --vault-name my-vault --job-parameters '{"Type": "inventory-retrieval"}'
Output::
{
"location": "/0123456789012/vaults/my-vault/jobs/zbxcm3Z_3z5UkoroF7SuZKrxgGoDc3RloGduS7Eg-RO47Yc6FxsdGBgf_Q2DK5Ejh18CnTS5XW4_XqlNHS61dsO4CnMW",
"jobId": "zbxcm3Z_3z5UkoroF7SuZKrxgGoDc3RloGduS7Eg-RO47Yc6FxsdGBgf_Q2DK5Ejh18CnTS5XW4_XqlNHS61dsO4CnMW"
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command initiates a job to retrieve an archive from the vault ``my-vault``::
aws glacier initiate-job --account-id - --vault-name my-vault --job-parameters file://job-archive-retrieval.json
``job-archive-retrieval.json`` is a JSON file in the local folder that specifies the type of job, archive ID, and some optional parameters::
{
"Type": "archive-retrieval",
"ArchiveId": "kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw",
"Description": "Retrieve archive on 2015-07-17",
"SNSTopic": "arn:aws:sns:us-west-2:0123456789012:my-topic"
}
Archive IDs are available in the output of ``aws glacier upload-archive`` and ``aws glacier get-job-output``.
Output::
{
"location": "/011685312445/vaults/mwunderl/jobs/l7IL5-EkXyEY9Ws95fClzIbk2O5uLYaFdAYOi-azsX_Z8V6NH4yERHzars8wTKYQMX6nBDI9cMNHzyZJO59-8N9aHWav",
"jobId": "l7IL5-EkXy2O5uLYaFdAYOiEY9Ws95fClzIbk-azsX_Z8V6NH4yERHzars8wTKYQMX6nBDI9cMNHzyZJO59-8N9aHWav"
}
See `Initiate Job`_ in the *Amazon Glacier API Reference* for details on the job parameters format.
.. _`Initiate Job`: http://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html

The following command configures SNS notifications for a vault named ``my-vault``::
aws glacier set-vault-notifications --account-id - --vault-name my-vault --vault-notification-config file://notificationconfig.json
``notificationconfig.json`` is a JSON file in the current folder that specifies an SNS topic and the events to publish::
{
"SNSTopic": "arn:aws:sns:us-west-2:0123456789012:my-vault",
"Events": ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"]
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.

The following command lists in-progress and recently completed jobs for a vault named ``my-vault``::
aws glacier list-jobs --account-id - --vault-name my-vault
Output::
{
"JobList": [
{
"VaultARN": "arn:aws:glacier:us-west-2:0123456789012:vaults/my-vault",
"RetrievalByteRange": "0-3145727",
"SNSTopic": "arn:aws:sns:us-west-2:0123456789012:my-vault",
"Completed": false,
"SHA256TreeHash": "9628195fcdbcbbe76cdde932d4646fa7de5f219fb39823836d81f0cc0e18aa67",
"JobId": "l7IL5-EkXyEY9Ws95fClzIbk2O5uLYaFdAYOi-azsX_Z8V6NH4yERHzars8wTKYQMX6nBDI9cMNHzyZJO59-8N9aHWav",
"ArchiveId": "kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw",
"JobDescription": "Retrieve archive on 2015-07-17",
"ArchiveSizeInBytes": 3145728,
"Action": "ArchiveRetrieval",
"ArchiveSHA256TreeHash": "9628195fcdbcbbe76cdde932d4646fa7de5f219fb39823836d81f0cc0e18aa67",
"CreationDate": "2015-07-17T21:16:13.840Z",
"StatusCode": "InProgress"
},
{
"InventoryRetrievalParameters": {
"Format": "JSON"
},
"VaultARN": "arn:aws:glacier:us-west-2:0123456789012:vaults/my-vault",
"Completed": false,
"JobId": "zbxcm3Z_3z5UkoroF7SuZKrxgGoDc3RloGduS7Eg-RO47Yc6FxsdGBgf_Q2DK5Ejh18CnTS5XW4_XqlNHS61dsO4CnMW",
"Action": "InventoryRetrieval",
"CreationDate": "2015-07-17T20:23:41.616Z",
"StatusCode": ""InProgress""
}
]
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command lists the tags applied to a vault named ``my-vault``::
aws glacier list-tags-for-vault --account-id - --vault-name my-vault
Output::
{
"Tags": {
"date": "july2015",
"id": "1234"
}
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command shows all of the in-progress multipart uploads for a vault named ``my-vault``::
aws glacier list-multipart-uploads --account-id - --vault-name my-vault
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
For more information on multipart uploads to Amazon Glacier using the AWS CLI, see `Using Amazon Glacier`_ in the *AWS CLI User Guide*.
.. _`Using Amazon Glacier`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html

The following command deletes an in-progress multipart upload to a vault named ``my-vault``::
aws glacier abort-multipart-upload --account-id - --vault-name my-vault --upload-id 19gaRezEXAMPLES6Ry5YYdqthHOC_kGRCT03L9yetr220UmPtBYKk-OssZtLqyFu7sY1_lR7vgFuJV6NtcV5zpsJ
This command does not produce any output. Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account. The upload ID is returned by the ``aws glacier initiate-multipart-upload`` command and can also be obtained by using ``aws glacier list-multipart-uploads``.
For more information on multipart uploads to Amazon Glacier using the AWS CLI, see `Using Amazon Glacier`_ in the *AWS CLI User Guide*.
.. _`Using Amazon Glacier`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html

The following command lists the vaults in the default account and region::
aws glacier list-vaults --account-id -
Output::
{
"VaultList": [
{
"SizeInBytes": 3178496,
"VaultARN": "arn:aws:glacier:us-west-2:0123456789012:vaults/my-vault",
"LastInventoryDate": "2015-04-07T00:26:19.028Z",
"VaultName": "my-vault",
"NumberOfArchives": 1,
"CreationDate": "2015-04-06T21:23:45.708Z"
}
]
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.

The following command removes a tag with the key ``date`` from a vault named ``my-vault``::
aws glacier remove-tags-from-vault --account-id - --vault-name my-vault --tag-keys date
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command configures a data retrieval policy for the in-use account::
aws glacier set-data-retrieval-policy --account-id - --policy file://data-retrieval-policy.json
``data-retrieval-policy.json`` is a JSON file in the current folder that specifies a data retrieval policy::
{
"Rules":[
{
"Strategy":"BytesPerHour",
"BytesPerHour":10737418240
}
]
}
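The ``BytesPerHour`` value above is simply 10 GiB expressed in bytes; a quick Python check that also rebuilds the same policy structure:

```python
import json

# BytesPerHour in the policy above is 10 GiB expressed in bytes.
bytes_per_hour = 10 * 1024 ** 3
assert bytes_per_hour == 10737418240

policy = {"Rules": [{"Strategy": "BytesPerHour",
                     "BytesPerHour": bytes_per_hour}]}
print(json.dumps(policy))
```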
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command sets the data retrieval policy to ``FreeTier`` using inline JSON::
aws glacier set-data-retrieval-policy --account-id - --policy '{"Rules":[{"Strategy":"FreeTier"}]}'
See `Set Data Retrieval Policy`_ in the *Amazon Glacier API Reference* for details on the policy format.
.. _`Set Data Retrieval Policy`: http://docs.aws.amazon.com/amazonglacier/latest/dev/api-SetDataRetrievalPolicy.html
The following command gets the data retrieval policy for the in-use account::
aws glacier get-data-retrieval-policy --account-id -
Output::
{
"Policy": {
"Rules": [
{
"BytesPerHour": 10737418240,
"Strategy": "BytesPerHour"
}
]
}
}
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The following command deletes a vault named ``my-vault``::
aws glacier delete-vault --vault-name my-vault --account-id -
This command does not produce any output. Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.

The following command retrieves data about a vault named ``my-vault``::
aws glacier describe-vault --vault-name my-vault --account-id -
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.

The following command uploads the first 1 MiB (1024 x 1024 bytes) part of an archive::
aws glacier upload-multipart-part --body part1 --range 'bytes 0-1048575/*' --account-id - --vault-name my-vault --upload-id 19gaRezEXAMPLES6Ry5YYdqthHOC_kGRCT03L9yetr220UmPtBYKk-OssZtLqyFu7sY1_lR7vgFuJV6NtcV5zpsJ
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The body parameter takes a path to a part file on the local filesystem. The range parameter takes an HTTP content range indicating the bytes that the part occupies in the completed archive. The upload ID is returned by the ``aws glacier initiate-multipart-upload`` command and can also be obtained by using ``aws glacier list-multipart-uploads``.
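The content range for every part follows directly from the archive size and part size; a hypothetical Python helper that generates each ``--range`` value for a 3 MiB archive uploaded in 1 MiB parts:

```python
# Generate the HTTP content-range string for each part of a multipart
# upload, given the total archive size and the part size in bytes.
# (Illustrative helper; not part of the AWS CLI.)
def part_ranges(archive_size, part_size):
    ranges = []
    for start in range(0, archive_size, part_size):
        end = min(start + part_size, archive_size) - 1
        ranges.append(f"bytes {start}-{end}/*")
    return ranges

# A 3 MiB archive uploaded in 1 MiB parts:
for r in part_ranges(3 * 1048576, 1048576):
    print(r)
# bytes 0-1048575/*
# bytes 1048576-2097151/*
# bytes 2097152-3145727/*
```

Note that these ranges match the ``RangeInBytes`` values shown in the ``list-parts`` output above.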
For more information on multipart uploads to Amazon Glacier using the AWS CLI, see `Using Amazon Glacier`_ in the *AWS CLI User Guide*.
.. _`Using Amazon Glacier`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html

The following command completes a multipart upload for a 3 MiB archive::
aws glacier complete-multipart-upload --archive-size 3145728 --checksum 9628195fcdbcbbe76cdde456d4646fa7de5f219fb39823836d81f0cc0e18aa67 --upload-id 19gaRezEXAMPLES6Ry5YYdqthHOC_kGRCT03L9yetr220UmPtBYKk-OssZtLqyFu7sY1_lR7vgFuJV6NtcV5zpsJ --account-id - --vault-name my-vault
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
The upload ID is returned by the ``aws glacier initiate-multipart-upload`` command and can also be obtained by using ``aws glacier list-multipart-uploads``. The checksum parameter takes a SHA-256 tree hash of the archive in hexadecimal.
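The checksum can be computed locally; the following Python sketch implements the tree-hash algorithm described in the *Amazon Glacier Developer Guide* (hash each 1 MiB chunk with SHA-256, then combine the hashes pairwise until one remains):

```python
import hashlib

def tree_hash(data, chunk_size=1024 * 1024):
    """SHA-256 tree hash of `data`, returned as a hex string (a sketch
    of the algorithm in the Amazon Glacier Developer Guide)."""
    # Hash each 1 MiB chunk; an empty archive is treated as one empty chunk.
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)] or [b""]
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    # Combine adjacent hashes pairwise until a single root hash remains.
    while len(hashes) > 1:
        paired = []
        for i in range(0, len(hashes), 2):
            if i + 1 < len(hashes):
                paired.append(hashlib.sha256(hashes[i] + hashes[i + 1]).digest())
            else:
                paired.append(hashes[i])  # odd hash is carried up unchanged
        hashes = paired
    return hashes[0].hex()
```

For archives of 1 MiB or less, the tree hash equals the plain SHA-256 of the data; the result is what you pass to ``--checksum``.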
For more information on multipart uploads to Amazon Glacier using the AWS CLI, including instructions on calculating a tree hash, see `Using Amazon Glacier`_ in the *AWS CLI User Guide*.
.. _`Using Amazon Glacier`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html

The following command saves the output from a vault inventory job to a file in the current directory named ``output.json``::
aws glacier get-job-output --account-id - --vault-name my-vault --job-id zbxcm3Z_3z5UkoroF7SuZKrxgGoDc3RloGduS7Eg-RO47Yc6FxsdGBgf_Q2DK5Ejh18CnTS5XW4_XqlNHS61dsO4CnMW output.json
The ``job-id`` is available in the output of ``aws glacier list-jobs``. Note that the output file name is a positional argument that is not prefixed by an option name. Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
Output::
{
"status": 200,
"acceptRanges": "bytes",
"contentType": "application/json"
}
``output.json``::
{"VaultARN":"arn:aws:glacier:us-west-2:0123456789012:vaults/my-vault","InventoryDate":"2015-04-07T00:26:18Z","ArchiveList":[{"ArchiveId":"kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw","ArchiveDescription":"multipart upload test","CreationDate":"2015-04-06T22:24:34Z","Size":3145728,"SHA256TreeHash":"9628195fcdbcbbe76cdde932d4646fa7de5f219fb39823836d81f0cc0e18aa67"}]} awscli-1.10.1/awscli/examples/glacier/add-tags-to-vault.rst 0000666 4542626 0000144 00000000453 12652514124 024627 0 ustar pysdk-ci amazon 0000000 0000000 The following command adds two tags to a vault named ``my-vault``::
aws glacier add-tags-to-vault --account-id - --vault-name my-vault --tags id=1234,date=july2015
Amazon Glacier requires an account ID argument when performing operations, but you can use a hyphen to specify the in-use account.
awscli-1.10.1/awscli/examples/sqs/ 0000777 4542626 0000144 00000000000 12652514126 020040 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/sqs/create-queue.rst 0000666 4542626 0000144 00000001247 12652514124 023161 0 ustar pysdk-ci amazon 0000000 0000000 **To create a queue**
This example creates a queue with the specified name, sets the message retention period to 3 days (3 days * 24 hours * 60 minutes * 60 seconds), and sets the queue's dead letter queue to the specified queue with a maximum receive count of 1,000 messages.
Command::
aws sqs create-queue --queue-name MyQueue --attributes file://create-queue.json
Input file (create-queue.json)::
{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"MessageRetentionPeriod": "259200"
}
Output::
{
"QueueUrl": "https://queue.amazonaws.com/80398EXAMPLE/MyQueue"
}
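The retention arithmetic and the escaped ``RedrivePolicy`` string can be reproduced programmatically. The following is a minimal sketch that only generates ``create-queue.json`` (queue names and values are taken from the example above; it does not call the CLI):

```shell
# Message retention: 3 days expressed in seconds.
retention=$((3 * 24 * 60 * 60))   # 259200

# RedrivePolicy is itself JSON embedded as a string, so its inner quotes
# must be escaped; json.dumps produces that escaping automatically.
python3 - "$retention" > create-queue.json <<'PY'
import json, sys

redrive = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue",
    "maxReceiveCount": "1000",
}
print(json.dumps({
    "RedrivePolicy": json.dumps(redrive),
    "MessageRetentionPeriod": sys.argv[1],
}, indent=4))
PY
```

The resulting file can then be passed to ``create-queue`` as ``file://create-queue.json``, as in the command above.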
**To receive a message**
This example receives up to 10 available messages, returning all available attributes.
Command::
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names All --message-attribute-names All --max-number-of-messages 10
Output::
{
"Messages": [
{
"Body": "My first message.",
"ReceiptHandle": "AQEBzbVv...fqNzFw==",
"MD5OfBody": "1000f835...a35411fa",
"MD5OfMessageAttributes": "9424c491...26bc3ae7",
"MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE",
"Attributes": {
"ApproximateFirstReceiveTimestamp": "1442428276921",
"SenderId": "AIDAIAZKMSNQ7TEXAMPLE",
"ApproximateReceiveCount": "5",
"SentTimestamp": "1442428276921"
},
"MessageAttributes": {
"PostalCode": {
"DataType": "String",
"StringValue": "ABC123"
},
"City": {
"DataType": "String",
"StringValue": "Any City"
}
}
}
]
}
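A received message is later deleted or has its visibility changed via its ``ReceiptHandle``, not its ``MessageId``. A sketch extracting the handle from saved output (the abridged JSON below reuses values from the example above and assumes the output was redirected to a file):

```shell
# Abridged receive-message output saved to a file.
cat > receive-output.json <<'EOF'
{"Messages": [{"Body": "My first message.",
               "ReceiptHandle": "AQEBzbVv...fqNzFw==",
               "MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE"}]}
EOF

# Pull out the receipt handle; this is the value that delete-message and
# change-message-visibility expect.
handle=$(python3 -c "import json; print(json.load(open('receive-output.json'))['Messages'][0]['ReceiptHandle'])")
echo "$handle"
```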
This example receives the next available message, returning only the ``SenderId`` and ``SentTimestamp`` attributes as well as the ``PostalCode`` message attribute.
Command::
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names SenderId SentTimestamp --message-attribute-names PostalCode
Output::
{
"Messages": [
{
"Body": "My first message.",
"ReceiptHandle": "AQEB6nR4...HzlvZQ==",
"MD5OfBody": "1000f835...a35411fa",
"MD5OfMessageAttributes": "b8e89563...e088e74f",
"MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE",
"Attributes": {
"SenderId": "AIDAIAZKMSNQ7TEXAMPLE",
"SentTimestamp": "1442428276921"
},
"MessageAttributes": {
"PostalCode": {
"DataType": "String",
"StringValue": "ABC123"
}
}
}
]
}

**To send multiple messages as a batch**
This example sends two messages with the specified message bodies, delay periods, and message attributes to the specified queue.
Command::
aws sqs send-message-batch --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --entries file://send-message-batch.json
Input file (send-message-batch.json)::
[
{
"Id": "FuelReport-0001-2015-09-16T140731Z",
"MessageBody": "Fuel report for account 0001 on 2015-09-16 at 02:07:31 PM.",
"DelaySeconds": 10,
"MessageAttributes": {
"SellerName": {
"DataType": "String",
"StringValue": "Example Store"
},
"City": {
"DataType": "String",
"StringValue": "Any City"
},
"Region": {
"DataType": "String",
"StringValue": "WA"
},
"PostalCode": {
"DataType": "String",
"StringValue": "99065"
},
"PricePerGallon": {
"DataType": "Number",
"StringValue": "1.99"
}
}
},
{
"Id": "FuelReport-0002-2015-09-16T140930Z",
"MessageBody": "Fuel report for account 0002 on 2015-09-16 at 02:09:30 PM.",
"DelaySeconds": 10,
"MessageAttributes": {
"SellerName": {
"DataType": "String",
"StringValue": "Example Fuels"
},
"City": {
"DataType": "String",
"StringValue": "North Town"
},
"Region": {
"DataType": "String",
"StringValue": "WA"
},
"PostalCode": {
"DataType": "String",
"StringValue": "99123"
},
"PricePerGallon": {
"DataType": "Number",
"StringValue": "1.87"
}
}
}
]
Output::
{
"Successful": [
{
"MD5OfMessageBody": "203c4a38...7943237e",
"MD5OfMessageAttributes": "10809b55...baf283ef",
"Id": "FuelReport-0001-2015-09-16T140731Z",
"MessageId": "d175070c-d6b8-4101-861d-adeb3EXAMPLE"
},
{
"MD5OfMessageBody": "2cf0159a...c1980595",
"MD5OfMessageAttributes": "55623928...ae354a25",
"Id": "FuelReport-0002-2015-09-16T140930Z",
"MessageId": "f9b7d55d-0570-413e-b9c5-a9264EXAMPLE"
}
]
}
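Each entry's ``Id`` must be unique within the batch; the ``FuelReport-<account>-<timestamp>`` scheme above is just this example's convention, not an SQS requirement. A sketch generating such an Id (the naming pattern is an assumption carried over from the example):

```shell
# Build a batch-entry Id in the example's FuelReport-<account>-<timestamp>
# form; SQS only requires that Ids be unique within a single batch request.
account=0001
stamp=$(date -u +%Y-%m-%dT%H%M%SZ)
entry_id="FuelReport-${account}-${stamp}"
echo "$entry_id"
```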
**To list queues**
This example lists all queues.
Command::
aws sqs list-queues
Output::
{
"QueueUrls": [
"https://queue.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyOtherQueue",
"https://queue.amazonaws.com/80398EXAMPLE/TestQueue1",
"https://queue.amazonaws.com/80398EXAMPLE/TestQueue2"
]
}
This example lists only queues whose names start with ``My``.
Command::
aws sqs list-queues --queue-name-prefix My
Output::
{
"QueueUrls": [
"https://queue.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyOtherQueue"
]
}

**To set queue attributes**
This example sets the specified queue to a delivery delay of 10 seconds, a maximum message size of 128 KB (128 KB * 1,024 bytes), a message retention period of 3 days (3 days * 24 hours * 60 minutes * 60 seconds), a receive message wait time of 20 seconds, and a default visibility timeout of 60 seconds. This example also associates the specified dead letter queue with a maximum receive count of 1,000 messages.
Command::
aws sqs set-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue --attributes file://set-queue-attributes.json
Input file (set-queue-attributes.json)::
{
"DelaySeconds": "10",
"MaximumMessageSize": "131072",
"MessageRetentionPeriod": "259200",
"ReceiveMessageWaitTimeSeconds": "20",
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"VisibilityTimeout": "60"
}
Output::
None.
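The derived values in the description above can be checked with shell arithmetic. This sketch only computes the numbers used in ``set-queue-attributes.json``:

```shell
# Derive the attribute values described above.
max_size=$((128 * 1024))            # 128 KB in bytes -> 131072
retention=$((3 * 24 * 60 * 60))     # 3 days in seconds -> 259200
echo "MaximumMessageSize=$max_size MessageRetentionPeriod=$retention"
```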
**To send a message**
This example sends a message with the specified message body, delay period, and message attributes to the specified queue.
Command::
aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --message-body "Information about the largest city in Any Region." --delay-seconds 10 --message-attributes file://send-message.json
Input file (send-message.json)::
{
"City": {
"DataType": "String",
"StringValue": "Any City"
},
"Greeting": {
"DataType": "Binary",
"BinaryValue": "Hello, World!"
},
"Population": {
"DataType": "Number",
"StringValue": "1250800"
}
}
Output::
{
"MD5OfMessageBody": "51b0a325...39163aa0",
"MD5OfMessageAttributes": "00484c68...59e48f06",
"MessageId": "da68f62c-0c07-4bee-bf5f-7e856EXAMPLE"
}
**To delete a queue**
This example deletes the specified queue.
Command::
aws sqs delete-queue --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewerQueue
Output::
None.

**To delete multiple messages as a batch**
This example deletes the specified messages.
Command::
aws sqs delete-message-batch --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --entries file://delete-message-batch.json
Input file (delete-message-batch.json)::
[
{
"Id": "FirstMessage",
"ReceiptHandle": "AQEB1mgl...Z4GuLw=="
},
{
"Id": "SecondMessage",
"ReceiptHandle": "AQEBLsYM...VQubAA=="
}
]
Output::
{
"Successful": [
{
"Id": "FirstMessage"
},
{
"Id": "SecondMessage"
}
]
}

**To remove a permission**
This example removes the permission with the specified label from the specified queue.
Command::
aws sqs remove-permission --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --label SendMessagesFromMyQueue
Output::
None.

**To get a queue's attributes**
This example gets all of the specified queue's attributes.
Command::
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names All
Output::
{
"Attributes": {
"ApproximateNumberOfMessagesNotVisible": "0",
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":1000}",
"MessageRetentionPeriod": "345600",
"ApproximateNumberOfMessagesDelayed": "0",
"MaximumMessageSize": "262144",
"CreatedTimestamp": "1442426968",
"ApproximateNumberOfMessages": "0",
"ReceiveMessageWaitTimeSeconds": "0",
"DelaySeconds": "0",
"VisibilityTimeout": "30",
"LastModifiedTimestamp": "1442426968",
"QueueArn": "arn:aws:sqs:us-east-1:80398EXAMPLE:MyNewQueue"
}
}
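``CreatedTimestamp`` and ``LastModifiedTimestamp`` in the output are Unix epoch seconds. A sketch converting one to a readable UTC time (the value is taken from the output above):

```shell
# CreatedTimestamp from the output above, in epoch seconds.
created=1442426968
when=$(python3 -c "import datetime, sys; print(datetime.datetime.utcfromtimestamp(int(sys.argv[1])).isoformat())" "$created")
echo "$when"
```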
This example gets only the specified queue's maximum message size and visibility timeout attributes.
Command::
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue --attribute-names MaximumMessageSize VisibilityTimeout
Output::
{
"Attributes": {
"VisibilityTimeout": "30",
"MaximumMessageSize": "262144"
}
}
**To change a message's visibility timeout**
This example changes the specified message's visibility timeout to 10 hours (10 hours * 60 minutes * 60 seconds).
Command::
aws sqs change-message-visibility --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --receipt-handle AQEBTpyI...t6HyQg== --visibility-timeout 36000
Output::
None.

**To change multiple messages' visibility timeouts as a batch**
This example changes the visibility timeouts of the two specified messages to 10 hours (10 hours * 60 minutes * 60 seconds).
Command::
aws sqs change-message-visibility-batch --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --entries file://change-message-visibility-batch.json
Input file (change-message-visibility-batch.json)::
[
{
"Id": "FirstMessage",
"ReceiptHandle": "AQEBhz2q...Jf3kaw==",
"VisibilityTimeout": 36000
},
{
"Id": "SecondMessage",
"ReceiptHandle": "AQEBkTUH...HifSnw==",
"VisibilityTimeout": 36000
}
]
Output::
{
"Successful": [
{
"Id": "SecondMessage"
},
{
"Id": "FirstMessage"
}
]
}
**To list dead letter source queues**
This example lists the queues that are associated with the specified dead letter source queue.
Command::
aws sqs list-dead-letter-source-queues --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue
Output::
{
"queueUrls": [
"https://queue.amazonaws.com/80398EXAMPLE/MyQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyOtherQueue"
]
}

**To get a queue URL**
This example gets the specified queue's URL.
Command::
aws sqs get-queue-url --queue-name MyQueue
Output::
{
"QueueUrl": "https://queue.amazonaws.com/80398EXAMPLE/MyQueue"
}

**To purge a queue**
This example deletes all messages in the specified queue.
Command::
aws sqs purge-queue --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue
Output::
None.

**To delete a message**
This example deletes the specified message.
Command::
aws sqs delete-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --receipt-handle AQEBRXTo...q2doVA==
Output::
None.

**To add a permission to a queue**
This example enables the specified AWS account to send messages to the specified queue.
Command::
aws sqs add-permission --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --label SendMessagesFromMyQueue --aws-account-ids 12345EXAMPLE --actions SendMessage
Output::
None.

**To get status information for an AWS Config rule**
The following command returns the status information for an AWS Config rule named ``MyConfigRule``::
aws configservice describe-config-rule-evaluation-status --config-rule-names MyConfigRule
Output::
{
"ConfigRulesEvaluationStatus": [
{
"ConfigRuleArn": "arn:aws:config:us-east-1:123456789012:config-rule/config-rule-abcdef",
"FirstActivatedTime": 1450311703.844,
"ConfigRuleId": "config-rule-abcdef",
"LastSuccessfulInvocationTime": 1450314643.156,
"ConfigRuleName": "MyConfigRule"
}
]
}

**To create a delivery channel**
The following command provides the settings for the delivery channel as JSON code::
aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json
``deliveryChannel.json`` is a JSON file that specifies the Amazon S3 bucket and Amazon SNS topic to which AWS Config will deliver configuration information::
{
"name": "default",
"s3BucketName": "config-bucket-123456789012",
"snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic"
}
If the command succeeds, AWS Config returns no output. To verify the settings of your delivery channel, run the `describe-delivery-channels`__ command.
.. __: http://docs.aws.amazon.com/cli/latest/reference/configservice/describe-delivery-channels.html

**To get compliance information for your AWS resources**
The following command returns compliance information for each EC2 instance that is recorded by AWS Config and that violates one or more rules::
aws configservice describe-compliance-by-resource --resource-type AWS::EC2::Instance --compliance-types NON_COMPLIANT
In the output, the value for each ``CappedCount`` attribute indicates how many rules the resource violates. For example, the following output indicates that instance ``i-1a2b3c4d`` violates 2 rules.
Output::
{
"ComplianceByResources": [
{
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-1a2b3c4d",
"Compliance": {
"ComplianceContributorCount": {
"CappedCount": 2,
"CapExceeded": false
},
"ComplianceType": "NON_COMPLIANT"
}
},
{
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-2a2b3c4d ",
"Compliance": {
"ComplianceContributorCount": {
"CappedCount": 3,
"CapExceeded": false
},
"ComplianceType": "NON_COMPLIANT"
}
}
]
}

**To list resources that AWS Config has discovered**
The following command lists the EC2 instances that AWS Config has discovered::
aws configservice list-discovered-resources --resource-type AWS::EC2::Instance
Output::
{
"resourceIdentifiers": [
{
"resourceType": "AWS::EC2::Instance",
"resourceId": "i-1a2b3c4d"
},
{
"resourceType": "AWS::EC2::Instance",
"resourceId": "i-2a2b3c4d"
},
{
"resourceType": "AWS::EC2::Instance",
"resourceId": "i-3a2b3c4d"
}
]
}

**To get details about the delivery channel**
The following command returns details about the delivery channel::
aws configservice describe-delivery-channels
Output::
{
"DeliveryChannels": [
{
"snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic",
"name": "default",
"s3BucketName": "config-bucket-123456789012"
}
]
}

**To delete an AWS Config rule**
The following command deletes an AWS Config rule named ``MyConfigRule``::
aws configservice delete-config-rule --config-rule-name MyConfigRule

**To get compliance information for your AWS Config rules**
The following command returns compliance information for each AWS Config rule that is violated by one or more AWS resources::
aws configservice describe-compliance-by-config-rule --compliance-types NON_COMPLIANT
In the output, the value for each ``CappedCount`` attribute indicates how many resources do not comply with the related rule. For example, the following output indicates that 3 resources do not comply with the rule named ``InstanceTypesAreT2micro``.
Output::
{
"ComplianceByConfigRules": [
{
"Compliance": {
"ComplianceContributorCount": {
"CappedCount": 3,
"CapExceeded": false
},
"ComplianceType": "NON_COMPLIANT"
},
"ConfigRuleName": "InstanceTypesAreT2micro"
},
{
"Compliance": {
"ComplianceContributorCount": {
"CappedCount": 10,
"CapExceeded": false
},
"ComplianceType": "NON_COMPLIANT"
},
"ConfigRuleName": "RequiredTagsForVolumes"
}
]
}

**To get status information for the configuration recorder**
The following command returns the status of the default configuration recorder::
aws configservice describe-configuration-recorder-status
Output::
{
"ConfigurationRecordersStatus": [
{
"name": "default",
"lastStatus": "SUCCESS",
"recording": true,
"lastStatusChangeTime": 1452193834.344,
"lastStartTime": 1441039997.819,
"lastStopTime": 1441039992.835
}
]
}

**To get the configuration history of an AWS resource**
The following command returns a list of configuration items for an EC2 instance with an ID of ``i-1a2b3c4d``::
aws configservice get-resource-config-history --resource-type AWS::EC2::Instance --resource-id i-1a2b3c4d

**To get details about the configuration recorder**
The following command returns details about the default configuration recorder::
aws configservice describe-configuration-recorders
Output::
{
"ConfigurationRecorders": [
{
"recordingGroup": {
"allSupported": true,
"resourceTypes": [],
"includeGlobalResourceTypes": true
},
"roleARN": "arn:aws:iam::123456789012:role/config-ConfigRole-A1B2C3D4E5F6",
"name": "default"
}
]
}

**To delete a delivery channel**
The following command deletes the default delivery channel::
aws configservice delete-delivery-channel --delivery-channel-name default

**To get the compliance summary for all resource types**
The following command returns the number of AWS resources that are noncompliant and the number that are compliant::
aws configservice get-compliance-summary-by-resource-type
In the output, the value for each ``CappedCount`` attribute indicates how many resources are compliant or noncompliant.
Output::
{
"ComplianceSummariesByResourceType": [
{
"ComplianceSummary": {
"NonCompliantResourceCount": {
"CappedCount": 16,
"CapExceeded": false
},
"ComplianceSummaryTimestamp": 1453237464.543,
"CompliantResourceCount": {
"CappedCount": 10,
"CapExceeded": false
}
}
}
]
}
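The counts in this output are easy to pull out programmatically. A sketch that parses a saved copy of the summary (abridged to the fields used; it assumes the command's output was redirected to ``summary.json``):

```shell
# Abridged copy of the summary output above.
cat > summary.json <<'EOF'
{"ComplianceSummariesByResourceType": [{"ComplianceSummary": {
    "NonCompliantResourceCount": {"CappedCount": 16, "CapExceeded": false},
    "CompliantResourceCount": {"CappedCount": 10, "CapExceeded": false}}}]}
EOF

# Extract the noncompliant count from the first summary entry.
noncompliant=$(python3 -c "import json; s = json.load(open('summary.json'))['ComplianceSummariesByResourceType'][0]['ComplianceSummary']; print(s['NonCompliantResourceCount']['CappedCount'])")
echo "$noncompliant"
```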
**To get the compliance summary for a specific resource type**
The following command returns the number of EC2 instances that are noncompliant and the number that are compliant::
aws configservice get-compliance-summary-by-resource-type --resource-types AWS::EC2::Instance
In the output, the value for each ``CappedCount`` attribute indicates how many resources are compliant or noncompliant.
Output::
{
"ComplianceSummariesByResourceType": [
{
"ResourceType": "AWS::EC2::Instance",
"ComplianceSummary": {
"NonCompliantResourceCount": {
"CappedCount": 3,
"CapExceeded": false
},
"ComplianceSummaryTimestamp": 1452204923.518,
"CompliantResourceCount": {
"CappedCount": 7,
"CapExceeded": false
}
}
}
]
}

**To record all supported resources**
The following command creates a configuration recorder that tracks changes to all supported resource types, including global resource types::
aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role --recording-group allSupported=true,includeGlobalResourceTypes=true
**To record specific types of resources**
The following command creates a configuration recorder that tracks changes to only those types of resources that are specified in the JSON file for the ``--recording-group`` option::
aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role --recording-group file://recordingGroup.json
``recordingGroup.json`` is a JSON file that specifies the types of resources that AWS Config will record::
{
"allSupported": false,
"includeGlobalResourceTypes": false,
"resourceTypes": [
"AWS::EC2::EIP",
"AWS::EC2::Instance",
"AWS::EC2::NetworkAcl",
"AWS::EC2::SecurityGroup",
"AWS::CloudTrail::Trail",
"AWS::EC2::Volume",
"AWS::EC2::VPC",
"AWS::IAM::User",
"AWS::IAM::Policy"
]
}
Before you can specify resource types for the ``resourceTypes`` key, you must set the ``allSupported`` and ``includeGlobalResourceTypes`` options to ``false`` or omit them.
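That constraint can be checked locally before calling the CLI. A sketch validating a ``recordingGroup.json`` (the validation logic is an assumption based on the rule stated above, not part of the CLI; the resource types listed are abbreviated from the example):

```shell
# Write a recording group that lists specific resource types.
cat > recordingGroup.json <<'EOF'
{
    "allSupported": false,
    "includeGlobalResourceTypes": false,
    "resourceTypes": ["AWS::EC2::Instance", "AWS::EC2::VPC"]
}
EOF

# resourceTypes may only be set when both flags are false (or omitted).
python3 - <<'PY'
import json

rg = json.load(open("recordingGroup.json"))
if rg.get("resourceTypes"):
    assert not rg.get("allSupported") and not rg.get("includeGlobalResourceTypes")
print("recordingGroup.json is consistent")
PY
```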
If the command succeeds, AWS Config returns no output. To verify the settings of your configuration recorder, run the `describe-configuration-recorders`__ command.
.. __: http://docs.aws.amazon.com/cli/latest/reference/configservice/describe-configuration-recorders.html

**To add an AWS managed Config rule**
The following command provides JSON code to add an AWS managed Config rule::
aws configservice put-config-rule --config-rule file://RequiredTagsForEC2Instances.json
``RequiredTagsForEC2Instances.json`` is a JSON file that contains the rule configuration::
{
"ConfigRuleName": "RequiredTagsForEC2Instances",
"Description": "Checks whether the CostCenter and Owner tags are applied to EC2 instances.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::EC2::Instance"
]
},
"Source": {
"Owner": "AWS",
"SourceIdentifier": "REQUIRED_TAGS"
},
"InputParameters": "{\"tag1Key\":\"CostCenter\",\"tag2Key\":\"Owner\"}"
}
For the ``ComplianceResourceTypes`` attribute, this JSON code limits the scope to resources of the ``AWS::EC2::Instance`` type, so AWS Config will evaluate only EC2 instances against the rule. Because the rule is a managed rule, the ``Owner`` attribute is set to ``AWS``, and the ``SourceIdentifier`` attribute is set to the rule identifier, ``REQUIRED_TAGS``. For the ``InputParameters`` attribute, the tag keys that the rule requires, ``CostCenter`` and ``Owner``, are specified.
If the command succeeds, AWS Config returns no output. To verify the rule configuration, run the `describe-config-rules`__ command, and specify the rule name.
.. __: http://docs.aws.amazon.com/cli/latest/reference/configservice/describe-config-rules.html
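``InputParameters`` is, like ``RedrivePolicy`` in the SQS examples, a JSON object passed as an escaped string. A sketch producing that value (the parameter names are the ones from the rule above; this only builds the string, it does not call the CLI):

```shell
# Serialize the rule parameters to the string form InputParameters expects.
params=$(python3 -c 'import json; print(json.dumps({"tag1Key": "CostCenter", "tag2Key": "Owner"}))')
echo "$params"
```

When this value is embedded as a string inside the outer rule document, serializing it again yields the ``\"...\"`` escaping shown in the file above.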
**To add a customer managed Config rule**
The following command provides JSON code to add a customer managed Config rule::
aws configservice put-config-rule --config-rule file://InstanceTypesAreT2micro.json
``InstanceTypesAreT2micro.json`` is a JSON file that contains the rule configuration::
{
"ConfigRuleName": "InstanceTypesAreT2micro",
"Description": "Evaluates whether EC2 instances are the t2.micro type.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::EC2::Instance"
]
},
"Source": {
"Owner": "CUSTOM_LAMBDA",
"SourceIdentifier": "arn:aws:lambda:us-east-1:123456789012:function:InstanceTypeCheck",
"SourceDetails": [
{
"EventSource": "aws.config",
"MessageType": "ConfigurationItemChangeNotification"
}
]
},
"InputParameters": "{\"desiredInstanceType\":\"t2.micro\"}"
}
For the ``ComplianceResourceTypes`` attribute, this JSON code limits the scope to resources of the ``AWS::EC2::Instance`` type, so AWS Config will evaluate only EC2 instances against the rule. Because this rule is a customer managed rule, the ``Owner`` attribute is set to ``CUSTOM_LAMBDA``, and the ``SourceIdentifier`` attribute is set to the ARN of the AWS Lambda function. The ``SourceDetails`` object is required. The parameters that are specified for the ``InputParameters`` attribute are passed to the AWS Lambda function when AWS Config invokes it to evaluate resources against the rule.
If the command succeeds, AWS Config returns no output. To verify the rule configuration, run the `describe-config-rules`__ command, and specify the rule name.
.. __: http://docs.aws.amazon.com/cli/latest/reference/configservice/describe-config-rules.html
**To start the configuration recorder**
The following command starts the default configuration recorder::
aws configservice start-configuration-recorder --configuration-recorder-name default
If the command succeeds, AWS Config returns no output. To verify that AWS Config is recording your resources, run the `get-status`__ command.
.. __: http://docs.aws.amazon.com/cli/latest/reference/configservice/get-status.html

**To get the compliance summary for your AWS Config rules**
The following command returns the number of rules that are compliant and the number that are noncompliant::
aws configservice get-compliance-summary-by-config-rule
In the output, the value for each ``CappedCount`` attribute indicates how many rules are compliant or noncompliant.
Output::
{
"ComplianceSummary": {
"NonCompliantResourceCount": {
"CappedCount": 3,
"CapExceeded": false
},
"ComplianceSummaryTimestamp": 1452204131.493,
"CompliantResourceCount": {
"CappedCount": 2,
"CapExceeded": false
}
}
}

**To get status information for the delivery channel**
The following command returns the status of the delivery channel::
aws configservice describe-delivery-channel-status
Output::
{
"DeliveryChannelsStatus": [
{
"configStreamDeliveryInfo": {
"lastStatusChangeTime": 1452193834.381,
"lastStatus": "SUCCESS"
},
"configHistoryDeliveryInfo": {
"lastSuccessfulTime": 1450317838.412,
"lastStatus": "SUCCESS",
"lastAttemptTime": 1450317838.412
},
"configSnapshotDeliveryInfo": {
"lastSuccessfulTime": 1452185597.094,
"lastStatus": "SUCCESS",
"lastAttemptTime": 1452185597.094
},
"name": "default"
}
]
}

**To get details for an AWS Config rule**
The following command returns details for an AWS Config rule named ``InstanceTypesAreT2micro``::
aws configservice describe-config-rules --config-rule-names InstanceTypesAreT2micro
Output::
{
"ConfigRules": [
{
"ConfigRuleState": "ACTIVE",
"Description": "Evaluates whether EC2 instances are the t2.micro type.",
"ConfigRuleName": "InstanceTypesAreT2micro",
"ConfigRuleArn": "arn:aws:config:us-east-1:123456789012:config-rule/config-rule-abcdef",
"Source": {
"Owner": "CUSTOM_LAMBDA",
"SourceIdentifier": "arn:aws:lambda:us-east-1:123456789012:function:InstanceTypeCheck",
"SourceDetails": [
{
"EventSource": "aws.config",
"MessageType": "ConfigurationItemChangeNotification"
}
]
},
"InputParameters": "{\"desiredInstanceType\":\"t2.micro\"}",
"Scope": {
"ComplianceResourceTypes": [
"AWS::EC2::Instance"
]
},
"ConfigRuleId": "config-rule-abcdef"
}
]
}

**To subscribe to AWS Config**
The following command creates the default delivery channel and configuration recorder. The command also specifies the Amazon S3 bucket and Amazon SNS topic to which AWS Config will deliver configuration information::
aws configservice subscribe --s3-bucket config-bucket-123456789012 --sns-topic arn:aws:sns:us-east-1:123456789012:config-topic --iam-role arn:aws:iam::123456789012:role/ConfigRole-A1B2C3D4E5F6
Output::
Using existing S3 bucket: config-bucket-123456789012
Using existing SNS topic: arn:aws:sns:us-east-1:123456789012:config-topic
Subscribe succeeded:
Configuration Recorders: [
{
"recordingGroup": {
"allSupported": true,
"resourceTypes": [],
"includeGlobalResourceTypes": false
},
"roleARN": "arn:aws:iam::123456789012:role/ConfigRole-A1B2C3D4E5F6",
"name": "default"
}
]
Delivery Channels: [
{
"snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic",
"name": "default",
"s3BucketName": "config-bucket-123456789012"
}
]

**To stop the configuration recorder**
The following command stops the default configuration recorder::
aws configservice stop-configuration-recorder --configuration-recorder-name default
If the command succeeds, AWS Config returns no output. To verify that AWS Config is not recording your resources, run the `get-status`__ command.
.. __: http://docs.aws.amazon.com/cli/latest/reference/configservice/get-status.html

**To deliver a configuration snapshot**
The following command delivers a configuration snapshot to the Amazon S3 bucket that belongs to the default delivery channel::
aws configservice deliver-config-snapshot --delivery-channel-name default
Output::
{
"configSnapshotId": "d0333b00-a683-44af-921e-examplefb794"
}

**To get the status for AWS Config**
The following command returns the status of the delivery channel and configuration recorder::
aws configservice get-status
Output::
Configuration Recorders:
name: default
recorder: ON
last status: SUCCESS
Delivery Channels:
name: default
last stream delivery status: SUCCESS
last history delivery status: SUCCESS
last snapshot delivery status: SUCCESS

**To get the evaluation results for an AWS Config rule**
The following command returns the evaluation results for all of the resources that don't comply with an AWS Config rule named ``InstanceTypesAreT2micro``::
aws configservice get-compliance-details-by-config-rule --config-rule-name InstanceTypesAreT2micro --compliance-types NON_COMPLIANT
Output::
{
"EvaluationResults": [
{
"EvaluationResultIdentifier": {
"OrderingTimestamp": 1450314635.065,
"EvaluationResultQualifier": {
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-1a2b3c4d",
"ConfigRuleName": "InstanceTypesAreT2micro"
}
},
"ResultRecordedTime": 1450314645.261,
"ConfigRuleInvokedTime": 1450314642.948,
"ComplianceType": "NON_COMPLIANT"
},
{
"EvaluationResultIdentifier": {
"OrderingTimestamp": 1450314635.065,
"EvaluationResultQualifier": {
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-2a2b3c4d",
"ConfigRuleName": "InstanceTypesAreT2micro"
}
},
"ResultRecordedTime": 1450314645.18,
"ConfigRuleInvokedTime": 1450314642.902,
"ComplianceType": "NON_COMPLIANT"
},
{
"EvaluationResultIdentifier": {
"OrderingTimestamp": 1450314635.065,
"EvaluationResultQualifier": {
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-3a2b3c4d",
"ConfigRuleName": "InstanceTypesAreT2micro"
}
},
"ResultRecordedTime": 1450314643.346,
"ConfigRuleInvokedTime": 1450314643.124,
"ComplianceType": "NON_COMPLIANT"
}
]
}

**To get the evaluation results for an AWS resource**
The following command returns the evaluation results for each rule with which the EC2 instance ``i-1a2b3c4d`` does not comply::
aws configservice get-compliance-details-by-resource --resource-type AWS::EC2::Instance --resource-id i-1a2b3c4d --compliance-types NON_COMPLIANT
Output::
{
"EvaluationResults": [
{
"EvaluationResultIdentifier": {
"OrderingTimestamp": 1450314635.065,
"EvaluationResultQualifier": {
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-1a2b3c4d",
"ConfigRuleName": "InstanceTypesAreT2micro"
}
},
"ResultRecordedTime": 1450314643.288,
"ConfigRuleInvokedTime": 1450314643.034,
"ComplianceType": "NON_COMPLIANT"
},
{
"EvaluationResultIdentifier": {
"OrderingTimestamp": 1450314635.065,
"EvaluationResultQualifier": {
"ResourceType": "AWS::EC2::Instance",
"ResourceId": "i-1a2b3c4d",
"ConfigRuleName": "RequiredTagForEC2Instances"
}
},
"ResultRecordedTime": 1450314645.261,
"ConfigRuleInvokedTime": 1450314642.948,
"ComplianceType": "NON_COMPLIANT"
}
]
}

Suppose you had the following config file::
[default]
aws_access_key_id=default_access_key
aws_secret_access_key=default_secret_key
[preview]
cloudsearch=true
[profile testing]
aws_access_key_id=testing_access_key
aws_secret_access_key=testing_secret_key
region=us-west-2
The following commands would have the corresponding output::
$ aws configure get aws_access_key_id
default_access_key
$ aws configure get default.aws_access_key_id
default_access_key
$ aws configure get aws_access_key_id --profile testing
testing_access_key
$ aws configure get profile.testing.aws_access_key_id
testing_access_key
$ aws configure get preview.cloudsearch
true
$ aws configure get preview.does-not-exist
$
$ echo $?
1
Get a configuration value from the config file.
The ``aws configure get`` command can be used to print a configuration value in
the AWS config file. The ``get`` command supports two types of configuration
values, *unqualified* and *qualified* config values.
Note that ``aws configure get`` only looks at values in the AWS configuration
file. It does **not** resolve configuration variables specified anywhere else,
including environment variables, command line arguments, etc.
Unqualified Names
-----------------
Every value in the AWS configuration file must be placed in a section (denoted
by ``[section-name]`` in the config file). To retrieve a value from the
config file, the section name and the config name must be known.
An unqualified configuration name refers to a name that is not scoped to a
specific section in the configuration file. Sections are specified by
separating parts with the ``"."`` character (``section.config-name``). An
unqualified name will be scoped to the current profile. For example,
``aws configure get aws_access_key_id`` will retrieve the ``aws_access_key_id``
from the current profile, or the ``default`` profile if no profile is
specified. You can still provide a ``--profile`` argument to the ``aws
configure get`` command. For example, ``aws configure get region --profile
testing`` will print the region value for the ``testing`` profile.
Qualified Names
---------------
A qualified name is a name that has at least one ``"."`` character in the name.
This name provides a way to specify the config section from which to retrieve
the config variable. When a qualified name is provided to ``aws configure
get``, the currently specified profile is ignored. Section names that have
the format ``[profile profile-name]`` can be specified by using the
``profile.profile-name.config-name`` syntax, and the default profile can be
specified using the ``default.config-name`` syntax.
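The resolution rules above can be sketched as follows. This is a simplified model of the documented behavior only, not the CLI's actual implementation; the ``resolve`` function and the in-memory ``config`` dict are illustrative::

```python
# Simplified sketch of how `aws configure get` resolves qualified and
# unqualified names against config-file sections. Illustrative only.

def resolve(config, name, profile="default"):
    """Return the value for a qualified or unqualified config name."""
    if "." not in name:
        # Unqualified: scoped to the current (or default) profile.
        section = profile if profile == "default" else "profile " + profile
        return config.get(section, {}).get(name)
    parts = name.split(".")
    if parts[0] == "profile":
        # profile.NAME.key -> section "[profile NAME]"
        section, key = "profile " + parts[1], parts[2]
    elif parts[0] == "default":
        # default.key -> section "[default]"
        section, key = "default", parts[1]
    else:
        # e.g. preview.cloudsearch -> section "[preview]"
        section, key = parts[0], parts[1]
    return config.get(section, {}).get(key)

config = {
    "default": {"aws_access_key_id": "default_access_key"},
    "profile testing": {"aws_access_key_id": "testing_access_key"},
    "preview": {"cloudsearch": "true"},
}

print(resolve(config, "aws_access_key_id"))                  # default_access_key
print(resolve(config, "profile.testing.aws_access_key_id"))  # testing_access_key
print(resolve(config, "preview.cloudsearch"))                # true
```

Note how a qualified name ignores the current profile entirely, while an unqualified name follows it.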
Given an empty config file, the following commands::
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
$ aws configure set default.region us-west-2
$ aws configure set default.ca_bundle /path/to/ca-bundle.pem
$ aws configure set region us-west-1 --profile testing
$ aws configure set profile.testing2.region eu-west-1
$ aws configure set preview.cloudsearch true
will produce the following config file::
[default]
region = us-west-2
ca_bundle = /path/to/ca-bundle.pem
[profile testing]
region = us-west-1
[profile testing2]
region = eu-west-1
[preview]
cloudsearch = true
and the following ``~/.aws/credentials`` file::
[default]
aws_access_key_id = default_access_key
aws_secret_access_key = default_secret_key
Set a configuration value from the config file.
The ``aws configure set`` command can be used to set a single configuration
value in the AWS config file. The ``set`` command supports both the
*qualified* and *unqualified* config values documented in the ``get`` command
(see ``aws configure get help`` for more information).
To set a single value, provide the configuration name followed by the
configuration value.
If the config file does not exist, one will automatically be created. If the
configuration value already exists in the config file, it will be updated with the
new configuration value.
Setting a value for the ``aws_access_key_id``, ``aws_secret_access_key``, or
the ``aws_session_token`` will result in the value being written to the
shared credentials file (``~/.aws/credentials``). All other values will
be written to the config file (default location is ``~/.aws/config``).
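The file-routing rule above can be sketched as follows. This is a sketch of the documented rule only, not the CLI's code; ``target_file`` is an illustrative name::

```python
# Sketch of the documented routing rule for `aws configure set`:
# credential keys go to the shared credentials file, everything else
# to the config file. Illustrative only.

CREDENTIAL_KEYS = {"aws_access_key_id", "aws_secret_access_key",
                   "aws_session_token"}

def target_file(config_name):
    # A qualified name such as default.region routes on its final part.
    key = config_name.split(".")[-1]
    if key in CREDENTIAL_KEYS:
        return "~/.aws/credentials"
    return "~/.aws/config"

print(target_file("aws_secret_access_key"))  # ~/.aws/credentials
print(target_file("default.region"))         # ~/.aws/config
```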
Configure AWS CLI options. If this command is run with no
arguments, you will be prompted for configuration values such as your AWS
Access Key Id and your AWS Secret Access Key. You can configure a named
profile using the ``--profile`` argument. If your config file does not exist
(the default location is ``~/.aws/config``), the AWS CLI will create it
for you. To keep an existing value, hit enter when prompted for the value.
When you are prompted for information, the current value will be displayed in
``[brackets]``. If the config item has no value, it will be displayed as
``[None]``. Note that the ``configure`` command only works with values from the
config file. It does not use any configuration values from environment
variables or the IAM role.
Note: the values you provide for the AWS Access Key ID and the AWS Secret
Access Key will be written to the shared credentials file
(``~/.aws/credentials``).
=======================
Configuration Variables
=======================
The following configuration variables are supported in the config file:
* **aws_access_key_id** - The AWS access key part of your credentials
* **aws_secret_access_key** - The AWS secret access key part of your credentials
* **aws_session_token** - The session token part of your credentials (session tokens only)
* **metadata_service_timeout** - The number of seconds to wait until the metadata service
request times out. This is used if you are using an IAM role to provide
your credentials.
* **metadata_service_num_attempts** - The number of attempts to try to retrieve
credentials. If you know for certain you will be using an IAM role on an
Amazon EC2 instance, you can set this value to ensure any intermittent
failures are retried. By default this value is 1.
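The retry behavior that ``metadata_service_num_attempts`` controls can be sketched as a simple bounded retry loop. This is an illustrative model, not botocore's metadata client; ``fetch_credentials`` and ``flaky_fetch`` are hypothetical names::

```python
# Sketch of bounded retries for a metadata-service credential fetch:
# try up to num_attempts times, re-raising the last error on failure.
# Illustrative only; the fetch function is a stand-in.

def fetch_credentials(fetch, num_attempts=1):
    last_error = None
    for _ in range(num_attempts):
        try:
            return fetch()
        except IOError as err:  # e.g. an intermittent timeout
            last_error = err
    raise last_error

# Simulate a metadata service that fails once, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 2:
        raise IOError("metadata request timed out")
    return {"AccessKeyId": "AKIA...", "SecretAccessKey": "..."}

creds = fetch_credentials(flaky_fetch, num_attempts=3)
print(creds["AccessKeyId"])
```

With the default of 1 attempt, the first transient timeout above would have surfaced as an error instead of being retried.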
For more information on configuration options, see `Configuring the AWS Command Line Interface`_ in the *AWS CLI User Guide*.
.. _`Configuring the AWS Command Line Interface`: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

**To describe your tags**
This example describes the tags for all your resources.
Command::
aws ec2 describe-tags
Output::
{
"Tags": [
{
"ResourceType": "image",
"ResourceId": "ami-78a54011",
"Value": "Production",
"Key": "Stack"
},
{
"ResourceType": "image",
"ResourceId": "ami-3ac33653",
"Value": "Test",
"Key": "Stack"
},
{
"ResourceType": "instance",
"ResourceId": "i-12345678",
"Value": "Production",
"Key": "Stack"
},
{
"ResourceType": "instance",
"ResourceId": "i-5f4e3d2a",
"Value": "Test",
"Key": "Stack"
},
{
"ResourceType": "instance",
"ResourceId": "i-5f4e3d2a",
"Value": "Beta Server",
"Key": "Name"
},
{
"ResourceType": "volume",
"ResourceId": "vol-1a2b3c4d",
"Value": "Project1",
"Key": "Purpose"
},
{
"ResourceType": "volume",
"ResourceId": "vol-87654321",
"Value": "Logs",
"Key": "Purpose"
}
]
}
**To describe the tags for a single resource**
This example describes the tags for the specified instance.
Command::
aws ec2 describe-tags --filters "Name=resource-id,Values=i-5f4e3d2a"
Output::
{
"Tags": [
{
"ResourceType": "instance",
"ResourceId": "i-5f4e3d2a",
"Value": "Test",
"Key": "Stack"
},
{
"ResourceType": "instance",
"ResourceId": "i-5f4e3d2a",
"Value": "Beta Server",
"Key": "Name"
}
]
}
**To describe the tags for a type of resource**
This example describes the tags for your volumes.
Command::
aws ec2 describe-tags --filters "Name=resource-type,Values=volume"
Output::
{
"Tags": [
{
"ResourceType": "volume",
"ResourceId": "vol-1a2b3c4d",
"Value": "Project1",
"Key": "Purpose"
},
{
"ResourceType": "volume",
"ResourceId": "vol-87654321",
"Value": "Logs",
"Key": "Purpose"
}
]
}
**To describe the tags for your resources based on a key and a value**
This example describes the tags for your resources that have the key ``Stack`` and a value ``Test``.
Command::
aws ec2 describe-tags --filters "Name=key,Values=Stack" "Name=value,Values=Test"
Output::
{
"Tags": [
{
"ResourceType": "image",
"ResourceId": "ami-3ac33653",
"Value": "Test",
"Key": "Stack"
},
{
"ResourceType": "instance",
"ResourceId": "i-5f4e3d2a",
"Value": "Test",
"Key": "Stack"
}
]
}
This example describes the tags for all your instances that have a tag with the key ``Purpose`` and no value.
Command::
aws ec2 describe-tags --filters "Name=resource-type,Values=instance" "Name=key,Values=Purpose" "Name=value,Values="
Output::
{
"Tags": [
{
"ResourceType": "instance",
"ResourceId": "i-1a2b3c4d",
"Value": null,
"Key": "Purpose"
}
]
}
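The filter semantics used in these examples — multiple ``--filters`` are ANDed together, while multiple ``Values`` within one filter are ORed — can be reproduced client-side. The sketch below uses a hand-copied fragment of the example output and an illustrative ``apply_filters`` helper; it is not how the EC2 service evaluates filters internally:

```python
# Client-side sketch of the describe-tags filter semantics shown above:
# each filter is ANDed; values within one filter are ORed. Illustrative only.

tags = [
    {"ResourceType": "image",    "ResourceId": "ami-3ac33653", "Key": "Stack", "Value": "Test"},
    {"ResourceType": "instance", "ResourceId": "i-12345678",   "Key": "Stack", "Value": "Production"},
    {"ResourceType": "instance", "ResourceId": "i-5f4e3d2a",   "Key": "Stack", "Value": "Test"},
]

def apply_filters(tags, filters):
    """filters: list of (field, allowed_values) pairs, ANDed together."""
    return [t for t in tags
            if all(t[field] in values for field, values in filters)]

# Equivalent of: --filters "Name=key,Values=Stack" "Name=value,Values=Test"
matches = apply_filters(tags, [("Key", ["Stack"]), ("Value", ["Test"])])
print([t["ResourceId"] for t in matches])
```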
**To describe your Spot fleet requests**
This example describes all of your Spot fleet requests.
Command::
aws ec2 describe-spot-fleet-requests
Output::
{
"SpotFleetRequestConfigs": [
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"SpotFleetRequestConfig": {
"TargetCapacity": 20,
"LaunchSpecifications": [
{
"EbsOptimized": false,
"NetworkInterfaces": [
{
"SubnetId": "subnet-a61dafcf",
"DeviceIndex": 0,
"DeleteOnTermination": false,
"AssociatePublicIpAddress": true,
"SecondaryPrivateIpAddressCount": 0
}
],
"InstanceType": "cc2.8xlarge",
"ImageId": "ami-1a2b3c4d"
},
{
"EbsOptimized": false,
"NetworkInterfaces": [
{
"SubnetId": "subnet-a61dafcf",
"DeviceIndex": 0,
"DeleteOnTermination": false,
"AssociatePublicIpAddress": true,
"SecondaryPrivateIpAddressCount": 0
}
],
"InstanceType": "r3.8xlarge",
"ImageId": "ami-1a2b3c4d"
}
],
"SpotPrice": "0.05",
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role"
},
"SpotFleetRequestState": "active"
},
{
"SpotFleetRequestId": "sfr-306341ed-9739-402e-881b-ce47bEXAMPLE",
"SpotFleetRequestConfig": {
"TargetCapacity": 20,
"LaunchSpecifications": [
{
"EbsOptimized": false,
"NetworkInterfaces": [
{
"SubnetId": "subnet-6e7f829e",
"DeviceIndex": 0,
"DeleteOnTermination": false,
"AssociatePublicIpAddress": true,
"SecondaryPrivateIpAddressCount": 0
}
],
"InstanceType": "m3.medium",
"ImageId": "ami-1a2b3c4d"
}
],
"SpotPrice": "0.05",
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role"
},
"SpotFleetRequestState": "active"
}
]
}
**To describe a Spot fleet request**
This example describes the specified Spot fleet request.
Command::
aws ec2 describe-spot-fleet-requests --spot-fleet-request-ids sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
Output::
{
"SpotFleetRequestConfigs": [
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"SpotFleetRequestConfig": {
"TargetCapacity": 20,
"LaunchSpecifications": [
{
"EbsOptimized": false,
"NetworkInterfaces": [
{
"SubnetId": "subnet-a61dafcf",
"DeviceIndex": 0,
"DeleteOnTermination": false,
"AssociatePublicIpAddress": true,
"SecondaryPrivateIpAddressCount": 0
}
],
"InstanceType": "cc2.8xlarge",
"ImageId": "ami-1a2b3c4d"
},
{
"EbsOptimized": false,
"NetworkInterfaces": [
{
"SubnetId": "subnet-a61dafcf",
"DeviceIndex": 0,
"DeleteOnTermination": false,
"AssociatePublicIpAddress": true,
"SecondaryPrivateIpAddressCount": 0
}
],
"InstanceType": "r3.8xlarge",
"ImageId": "ami-1a2b3c4d"
}
],
"SpotPrice": "0.05",
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role"
},
"SpotFleetRequestState": "active"
}
]
}
**To describe your Availability Zones**
This example describes the Availability Zones that are available to you. The response includes Availability Zones only for the current region.
Command::
aws ec2 describe-availability-zones
Output::
{
"AvailabilityZones": [
{
"State": "available",
"RegionName": "us-east-1",
"Messages": [],
"ZoneName": "us-east-1b"
},
{
"State": "available",
"RegionName": "us-east-1",
"Messages": [],
"ZoneName": "us-east-1c"
},
{
"State": "available",
"RegionName": "us-east-1",
"Messages": [],
"ZoneName": "us-east-1d"
}
]
}
**To modify a volume attribute**
This example sets the ``autoEnableIo`` attribute of the volume with the ID ``vol-1a2b3c4d`` to ``true``. If the command succeeds, no output is returned.
Command::
aws ec2 modify-volume-attribute --volume-id vol-1a2b3c4d --auto-enable-io
**To describe Dedicated hosts in your account and generate a machine-readable list**
The following command outputs a list of Dedicated host IDs in JSON.
Command::
aws ec2 describe-hosts --query 'Hosts[].HostId' --output json
Output::
[
"h-085664df5899941c",
"h-056c1b0724170dc38"
]
The following command outputs the list of Dedicated host IDs as plain text (one ID per line).
Command::
aws ec2 describe-hosts --query 'Hosts[].HostId' --output text
Output::
h-085664df5899941c
h-056c1b0724170dc38
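The ``--query`` expression ``Hosts[].HostId`` is a JMESPath projection; its effect on a response like the one above can be reproduced with a plain list comprehension. The sketch below operates on a hand-copied, trimmed response dict rather than a live API call:

```python
import json

# A trimmed copy of the describe-hosts response shown in this example.
response = json.loads("""
{
  "Hosts": [
    {"HostId": "h-085664df5899941c", "State": "available"},
    {"HostId": "h-056c1b0724170dc38", "State": "available"}
  ]
}
""")

# Equivalent of --query 'Hosts[].HostId'
host_ids = [host["HostId"] for host in response["Hosts"]]
print(json.dumps(host_ids))   # like --output json
print("\n".join(host_ids))    # like --output text
```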
**To describe available Dedicated hosts in your account**
Command::
aws ec2 describe-hosts --filter "Name=state,Values=available"
Output::
{
"Hosts": [
{
"HostId": "h-085664df5899941c"
"HostProperties: {
"Cores": 20,
"Sockets": 2,
"InstanceType": "m3.medium".
"TotalVCpus": 32
},
"Instances": [],
"State": "available",
"AvailabilityZone": "us-east-1b",
"AvailableCapacity": {
"AvailableInstanceCapacity": [
{
"AvailableCapacity": 32,
"InstanceType": "m3.medium",
"TotalCapacity": 32
}
],
"AvailableVCpus": 32
},
"AutoPlacement": "off"
}
]
}
**To replace the network ACL associated with a subnet**
This example associates the specified network ACL with the subnet for the specified network ACL association.
Command::
aws ec2 replace-network-acl-association --association-id aclassoc-e5b95c8c --network-acl-id acl-5fb85d36
Output::
{
"NewAssociationId": "aclassoc-3999875b"
}

**To reject a VPC peering connection**
This example rejects the specified VPC peering connection request.
Command::
aws ec2 reject-vpc-peering-connection --vpc-peering-connection-id pcx-1a2b3c4d
Output::
{
"Return": true
}

**To detach an Internet gateway from your VPC**
This example detaches the specified Internet gateway from the specified VPC. If the command succeeds, no output is returned.
Command::
aws ec2 detach-internet-gateway --internet-gateway-id igw-c0a643a9 --vpc-id vpc-a01106c2
**To replace a route**
This example replaces the specified route in the specified route table. The new route matches the specified CIDR and sends the traffic to the specified virtual private gateway. If the command succeeds, no output is returned.
Command::
aws ec2 replace-route --route-table-id rtb-22574640 --destination-cidr-block 10.0.0.0/16 --gateway-id vgw-9a4cacf3

**To attach a network interface to an instance**
This example attaches the specified network interface to the specified instance.
Command::
aws ec2 attach-network-interface --network-interface-id eni-e5aa89a3 --instance-id i-640a3c17 --device-index 1
Output::
{
"AttachmentId": "eni-attach-66c4350a"
}

**To delete a route**
This example deletes the specified route from the specified route table. If the command succeeds, no output is returned.
Command::
aws ec2 delete-route --route-table-id rtb-22574640 --destination-cidr-block 0.0.0.0/0
**To create a static route for a VPN connection**
This example creates a static route for the specified VPN connection. If the command succeeds, no output is returned.
Command::
aws ec2 create-vpn-connection-route --vpn-connection-id vpn-40f41529 --destination-cidr-block 11.12.0.0/16
**To describe Spot fleet history**
This example command returns the history for the specified Spot fleet starting at the specified time.
Command::
aws ec2 describe-spot-fleet-request-history --spot-fleet-request-id sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE --start-time 2015-05-26T00:00:00Z
The following example output shows the successful launches of two Spot Instances for the Spot fleet.
Output::
{
"HistoryRecords": [
{
"Timestamp": "2015-05-26T23:17:20.697Z",
"EventInformation": {
"EventSubType": "submitted"
},
"EventType": "fleetRequestChange"
},
{
"Timestamp": "2015-05-26T23:17:20.873Z",
"EventInformation": {
"EventSubType": "active"
},
"EventType": "fleetRequestChange"
},
{
"Timestamp": "2015-05-26T23:21:21.712Z",
"EventInformation": {
"InstanceId": "i-3a52c1cd",
"EventSubType": "launched"
},
"EventType": "instanceChange"
},
{
"Timestamp": "2015-05-26T23:21:21.816Z",
"EventInformation": {
"InstanceId": "i-3852c1cf",
"EventSubType": "launched"
},
"EventType": "instanceChange"
}
],
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"NextToken": "CpHNsscimcV5oH7bSbub03CI2Qms5+ypNpNm+53MNlR0YcXAkp0xFlfKf91yVxSExmbtma3awYxMFzNA663ZskT0AHtJ6TCb2Z8bQC2EnZgyELbymtWPfpZ1ZbauVg+P+TfGlWxWWB/Vr5dk5d4LfdgA/DRAHUrYgxzrEXAMPLE=",
"StartTime": "2015-05-26T00:00:00Z"
}
**To disable route propagation**
This example disables the specified virtual private gateway from propagating static routes to the specified route table. If the command succeeds, no output is returned.
Command::
aws ec2 disable-vgw-route-propagation --route-table-id rtb-22574640 --gateway-id vgw-9a4cacf3
**To confirm the product instance**
This example determines whether the specified product code is associated with the specified instance.
Command::
aws ec2 confirm-product-instance --product-code 774F4FF8 --instance-id i-5203422c
Output::
{
"OwnerId": "123456789012"
}

**To create a placement group**
This example command creates a placement group with the specified name.
Command::
aws ec2 create-placement-group --group-name my-cluster --strategy cluster
**To release a Dedicated host from your account**
The following command releases a Dedicated host from your account. Any instances running on the host must be stopped or terminated before the host can be released.
Command::
aws ec2 release-hosts --host-id=h-0029d6e3cacf1b3da
Output::
{
"Successful": [
"h-0029d6e3cacf1b3da"
],
"Unsuccessful": []
}
**To request a Spot fleet in the subnet with the lowest price**
This example command creates a Spot fleet request with two launch specifications that differ only by subnet.
The Spot fleet launches the instances in the specified subnet with the lowest price.
If the instances are launched in a default VPC, they receive a public IP address by default.
If the instances are launched in a nondefault VPC, they do not receive a public IP address by default.
Note that you can't specify different subnets from the same Availability Zone in a Spot fleet request.
Command::
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
Config.json::
{
"SpotPrice": "0.04",
"TargetCapacity": 2,
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"SubnetId": "subnet-1a2b3c4d",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
},
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"SubnetId": "subnet-3c4d5e6f",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
]
}
Output::
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"
}
**To request a Spot fleet in the Availability Zone with the lowest price**
This example command creates a Spot fleet request with two launch specifications that differ only by Availability Zone.
The Spot fleet launches the instances in the specified Availability Zone with the lowest price.
If your account supports EC2-VPC only, Amazon EC2 launches the Spot instances in the default subnet of the Availability Zone.
If your account supports EC2-Classic, Amazon EC2 launches the instances in EC2-Classic in the Availability Zone.
Command::
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
Config.json::
{
"SpotPrice": "0.04",
"TargetCapacity": 2,
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"Placement": {
"AvailabilityZone": "us-west-2a"
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
},
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupId": "sg-1a2b3c4d"
}
],
"InstanceType": "m3.medium",
"Placement": {
"AvailabilityZone": "us-west-2b"
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
]
}
**To launch Spot instances in a subnet and assign them public IP addresses**
This example command assigns public addresses to instances launched in a nondefault VPC.
Note that when you specify a network interface, you must include the subnet ID and security group ID
in the network interface specification.
Command::
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
Config.json::
{
"SpotPrice": "0.04",
"TargetCapacity": 2,
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"InstanceType": "m3.medium",
"NetworkInterfaces": [
{
"DeviceIndex": 0,
"SubnetId": "subnet-1a2b3c4d",
"Groups": [ "sg-1a2b3c4d" ],
"AssociatePublicIpAddress": true
}
],
"IamInstanceProfile": {
"Arn": "arn:aws:iam::880185128111:instance-profile/my-iam-role"
}
}
]
}
**To request a Spot fleet using the diversified allocation strategy**
This example command creates a Spot fleet request that launches 30 instances using the diversified allocation strategy.
The launch specifications differ by instance type. The Spot fleet distributes the instances
across the launch specifications such that there are 10 instances of each type.
Command::
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
Config.json::
{
"SpotPrice": "0.70",
"TargetCapacity": 30,
"AllocationStrategy": "diversified",
"IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
"LaunchSpecifications": [
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "c4.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "m3.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
},
{
"ImageId": "ami-1a2b3c4d",
"InstanceType": "r3.2xlarge",
"SubnetId": "subnet-1a2b3c4d"
}
]
}
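The even split produced by the diversified strategy can be sketched with simple arithmetic. This is illustrative only — the ``diversify`` helper is a hypothetical name, not Spot service code, and the real service also reacts to capacity and price:

```python
# Sketch of the diversified allocation strategy: spread the target
# capacity evenly across the launch specifications. Illustrative only.

def diversify(target_capacity, specs):
    base, extra = divmod(target_capacity, len(specs))
    # Give any remainder to the first few specifications.
    return {spec: base + (1 if i < extra else 0)
            for i, spec in enumerate(specs)}

allocation = diversify(30, ["c4.2xlarge", "m3.2xlarge", "r3.2xlarge"])
print(allocation)  # 10 instances of each type
```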
For more information, see `Spot Fleet Requests`_ in the *Amazon Elastic Compute Cloud User Guide*.
.. _`Spot Fleet Requests`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html
awscli-1.10.1/awscli/examples/ec2/describe-reserved-instances-offerings.rst 0000666 4542626 0000144 00000006750 12652514124 030005 0 ustar pysdk-ci amazon 0000000 0000000 **To describe Reserved Instances offerings**
This example command describes all Reserved Instances available for purchase in the region.
Command::
aws ec2 describe-reserved-instances-offerings
Output::
{
"ReservedInstancesOfferings": [
{
"OfferingType": "Partial Upfront",
"AvailabilityZone": "us-east-1b",
"InstanceTenancy": "default",
"PricingDetails": [],
"ProductDescription": "Red Hat Enterprise Linux",
"UsagePrice": 0.0,
"RecurringCharges": [
{
"Amount": 0.088,
"Frequency": "Hourly"
}
],
"Marketplace": false,
"CurrencyCode": "USD",
"FixedPrice": 631.0,
"Duration": 94608000,
"ReservedInstancesOfferingId": "9a06095a-bdc6-47fe-a94a-2a382f016040",
"InstanceType": "c1.medium"
},
{
"OfferingType": "PartialUpfront",
"AvailabilityZone": "us-east-1b",
"InstanceTenancy": "default",
"PricingDetails": [],
"ProductDescription": "Linux/UNIX",
"UsagePrice": 0.0,
"RecurringCharges": [
{
"Amount": 0.028,
"Frequency": "Hourly"
}
],
"Marketplace": false,
"CurrencyCode": "USD",
"FixedPrice": 631.0,
"Duration": 94608000,
"ReservedInstancesOfferingId": "bfbefc6c-0d10-418d-b144-7258578d329d",
"InstanceType": "c1.medium"
},
...
}
**To describe your Reserved Instances offerings using options**
This example lists Reserved Instances offered by AWS with the following specifications: t1.micro instance types, Windows (Amazon VPC) product, and No Upfront offerings.
Command::
aws ec2 describe-reserved-instances-offerings --no-include-marketplace --instance-type "t1.micro" --product-description "Windows (Amazon VPC)" --offering-type "no upfront"
Output::
{
"ReservedInstancesOfferings": [
{
"OfferingType": "No Upfront",
"AvailabilityZone": "us-east-1b",
"InstanceTenancy": "default",
"PricingDetails": [],
"ProductDescription": "Windows",
"UsagePrice": 0.0,
"RecurringCharges": [
{
"Amount": 0.015,
"Frequency": "Hourly"
}
],
"Marketplace": false,
"CurrencyCode": "USD",
"FixedPrice": 0.0,
"Duration": 31536000,
"ReservedInstancesOfferingId": "c48ab04c-fe69-4f94-8e39-a23842292823",
"InstanceType": "t1.micro"
},
...
{
"OfferingType": "No Upfront",
"AvailabilityZone": "us-east-1d",
"InstanceTenancy": "default",
"PricingDetails": [],
"ProductDescription": "Windows (Amazon VPC)",
"UsagePrice": 0.0,
"RecurringCharges": [
{
"Amount": 0.015,
"Frequency": "Hourly"
}
],
"Marketplace": false,
"CurrencyCode": "USD",
"FixedPrice": 0.0,
"Duration": 31536000,
"ReservedInstancesOfferingId": "3a98bf7d-2123-42d4-b4f5-8dbec4b06dc6",
"InstanceType": "t1.micro"
}
]
}
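To extract just the offering IDs, the CLI's built-in ``--query`` option (for example, ``--query 'ReservedInstancesOfferings[].ReservedInstancesOfferingId' --output text``) is the idiomatic route. Equivalent post-processing of saved output can be sketched as follows (the file name is illustrative):

```shell
# Sample fragment saved from the command above
cat > offerings.json <<'EOF'
{
  "ReservedInstancesOfferings": [
    {"ReservedInstancesOfferingId": "c48ab04c-fe69-4f94-8e39-a23842292823"},
    {"ReservedInstancesOfferingId": "3a98bf7d-2123-42d4-b4f5-8dbec4b06dc6"}
  ]
}
EOF
# Print one offering ID per line
python3 -c "import json; [print(o['ReservedInstancesOfferingId']) for o in json.load(open('offerings.json'))['ReservedInstancesOfferings']]"
```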
awscli-1.10.1/awscli/examples/ec2/describe-vpc-classic-link-dns-support.rst 0000666 4542626 0000144 00000000640 12652514125 027650 0 ustar pysdk-ci amazon 0000000 0000000 **To describe ClassicLink DNS support for your VPCs**
This example describes the ClassicLink DNS support status of all of your VPCs.
Command::
aws ec2 describe-vpc-classic-link-dns-support
Output::
{
"Vpcs": [
{
"VpcId": "vpc-88888888",
"ClassicLinkDnsSupported": true
},
{
"VpcId": "vpc-1a2b3c4d",
"ClassicLinkDnsSupported": false
}
]
} awscli-1.10.1/awscli/examples/ec2/cancel-conversion-task.rst 0000666 4542626 0000144 00000000422 12652514124 025001 0 ustar pysdk-ci amazon 0000000 0000000 **To cancel an active conversion of an instance or a volume**
This example cancels the upload associated with the task ID import-i-fh95npoc. If the command succeeds, no output is returned.
Command::
aws ec2 cancel-conversion-task --conversion-task-id import-i-fh95npoc
awscli-1.10.1/awscli/examples/ec2/create-flow-logs.rst 0000666 4542626 0000144 00000001226 12652514124 023606 0 ustar pysdk-ci amazon 0000000 0000000 **To create a flow log**
This example creates a flow log that captures all rejected traffic for network interface ``eni-aa22bb33``. The flow logs are delivered to a log group in CloudWatch Logs called ``my-flow-logs`` in account 123456789101, using the IAM role ``publishFlowLogs``.
Command::
aws ec2 create-flow-logs --resource-type NetworkInterface --resource-ids eni-aa22bb33 --traffic-type REJECT --log-group-name my-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs
Output::
{
"Unsuccessful": [],
"FlowLogIds": [
"fl-1a2b3c4d"
],
"ClientToken": "lO+mDZGO+HCFEXAMPLEfWNO00bInKkBcLfrC"
} awscli-1.10.1/awscli/examples/ec2/associate-route-table.rst 0000666 4542626 0000144 00000000436 12652514124 024632 0 ustar pysdk-ci amazon 0000000 0000000 **To associate a route table with a subnet**
This example associates the specified route table with the specified subnet.
Command::
aws ec2 associate-route-table --route-table-id rtb-22574640 --subnet-id subnet-9d4a7b6c
Output::
{
"AssociationId": "rtbassoc-781d0d1a"
} awscli-1.10.1/awscli/examples/ec2/modify-id-format.rst 0000666 4542626 0000144 00000000722 12652514125 023604 0 ustar pysdk-ci amazon 0000000 0000000 **To enable the longer ID format for a resource**
This example enables the longer ID format for the ``instance`` resource type. If the request is successful, no output is returned.
Command::
aws ec2 modify-id-format --resource instance --use-long-ids
**To disable the longer ID format for a resource**
This example disables the longer ID format for the ``instance`` resource type.
Command::
aws ec2 modify-id-format --resource instance --no-use-long-ids
awscli-1.10.1/awscli/examples/ec2/delete-internet-gateway.rst 0000666 4542626 0000144 00000000331 12652514124 025157 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an Internet gateway**
This example deletes the specified Internet gateway. If the command succeeds, no output is returned.
Command::
aws ec2 delete-internet-gateway --internet-gateway-id igw-c0a643a9
awscli-1.10.1/awscli/examples/ec2/modify-snapshot-attribute.rst 0000666 4542626 0000144 00000001153 12652514124 025560 0 ustar pysdk-ci amazon 0000000 0000000 **To modify a snapshot attribute**
This example modifies snapshot ``snap-1a2b3c4d`` to remove the create volume permission for a user with the account ID ``123456789012``. If the command succeeds, no output is returned.
Command::
aws ec2 modify-snapshot-attribute --snapshot-id snap-1a2b3c4d --attribute createVolumePermission --operation-type remove --user-ids 123456789012
**To make a snapshot public**
This example makes the snapshot ``snap-1a2b3c4d`` public.
Command::
aws ec2 modify-snapshot-attribute --snapshot-id snap-1a2b3c4d --attribute createVolumePermission --operation-type add --group-names all awscli-1.10.1/awscli/examples/ec2/disassociate-route-table.rst 0000666 4542626 0000144 00000000365 12652514124 025333 0 ustar pysdk-ci amazon 0000000 0000000 **To disassociate a route table**
This example disassociates the specified route table from the specified subnet. If the command succeeds, no output is returned.
Command::
aws ec2 disassociate-route-table --association-id rtbassoc-781d0d1a
awscli-1.10.1/awscli/examples/ec2/import-key-pair.rst 0000666 4542626 0000144 00000002115 12652514124 023463 0 ustar pysdk-ci amazon 0000000 0000000 **To import a public key**
First, generate a key pair with the tool of your choice. For example, use this OpenSSL command:
Command::
openssl genrsa -out my-key.pem 2048
Next, save the public key to a local file. For example, use this OpenSSL command:
Command::
openssl rsa -in my-key.pem -pubout > my-key.pub
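The ``--public-key-material`` value used in the final step is just the base64 text between the PEM header and footer lines. One way to extract it (shown with a stand-in key body, not a real key):

```shell
# A stand-in public key file; a real one comes from the openssl step above
cat > my-key.pub <<'EOF'
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAuhrGNglwb2Zz
-----END PUBLIC KEY-----
EOF
# Drop the BEGIN/END lines and join the base64 body onto one line
grep -v -- '-----' my-key.pub | tr -d '\n'; echo
```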
Finally, this example command imports the specified public key. The public key is the text in the .pub file that is between ``-----BEGIN PUBLIC KEY-----`` and ``-----END PUBLIC KEY-----``.
Command::
aws ec2 import-key-pair --key-name my-key --public-key-material MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuhrGNglwb2Zz/Qcz1zV+l12fJOnWmJxC2GMwQOjAX/L7p01o9vcLRoHXxOtcHBx0TmwMo+i85HWMUE7aJtYclVWPMOeepFmDqR1AxFhaIc9jDe88iLA07VK96wY4oNpp8+lICtgCFkuXyunsk4+KhuasN6kOpk7B2w5cUWveooVrhmJprR90FOHQB2Uhe9MkRkFjnbsA/hvZ/Ay0Cflc2CRZm/NG00lbLrV4l/SQnZmP63DJx194T6pI3vAev2+6UMWSwptNmtRZPMNADjmo50KiG2c3uiUIltiQtqdbSBMh9ztL/98AHtn88JG0s8u2uSRTNEHjG55tyuMbLD40QEXAMPLE
Output::
{
"KeyName": "my-key",
"KeyFingerprint": "1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca"
} awscli-1.10.1/awscli/examples/ec2/describe-prefix-lists.rst 0000666 4542626 0000144 00000000536 12652514124 024646 0 ustar pysdk-ci amazon 0000000 0000000 **To describe prefix lists**
This example lists all available prefix lists for the region.
Command::
aws ec2 describe-prefix-lists
Output::
{
"PrefixLists": [
{
"PrefixListName": "com.amazonaws.us-east-1.s3",
"Cidrs": [
"54.231.0.0/17"
],
"PrefixListId": "pl-63a5400a"
}
]
}
awscli-1.10.1/awscli/examples/ec2/delete-nat-gateway.rst 0000666 4542626 0000144 00000000347 12652514124 024120 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a NAT gateway**
This example deletes NAT gateway ``nat-04ae55e711cec5680``.
Command::
aws ec2 delete-nat-gateway --nat-gateway-id nat-04ae55e711cec5680
Output::
{
"NatGatewayId": "nat-04ae55e711cec5680"
}
awscli-1.10.1/awscli/examples/ec2/disable-vpc-classic-link.rst 0000666 4542626 0000144 00000000303 12652514124 025172 0 ustar pysdk-ci amazon 0000000 0000000 **To disable ClassicLink for a VPC**
This example disables ClassicLink for vpc-88888888.
Command::
aws ec2 disable-vpc-classic-link --vpc-id vpc-88888888
Output::
{
"Return": true
} awscli-1.10.1/awscli/examples/ec2/create-route.rst 0000666 4542626 0000144 00000001453 12652514124 023035 0 ustar pysdk-ci amazon 0000000 0000000 **To create a route**
This example creates a route for the specified route table. The route matches all traffic (``0.0.0.0/0``) and routes it to the specified Internet gateway. If the command succeeds, no output is returned.
Command::
aws ec2 create-route --route-table-id rtb-22574640 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-c0a643a9
This example command creates a route in route table rtb-g8ff4ea2. The route matches traffic for the CIDR block
10.0.0.0/16 and routes it to the VPC peering connection pcx-1a2b3c4d. This route enables traffic to be directed to the
peer VPC in the VPC peering connection. If the command succeeds, no output is returned.
Command::
aws ec2 create-route --route-table-id rtb-g8ff4ea2 --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-1a2b3c4d
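Routes are selected by longest-prefix match against the destination address. As an offline sanity check (a sketch using Python's standard ``ipaddress`` module, not the CLI), you can confirm that an address falls inside a route's CIDR block:

```shell
# Does the 10.0.0.0/16 route above cover the address 10.0.1.5?
python3 -c "import ipaddress; print(ipaddress.ip_address('10.0.1.5') in ipaddress.ip_network('10.0.0.0/16'))"
```

This prints ``True`` for any address in the 10.0.x.x range.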
awscli-1.10.1/awscli/examples/ec2/create-subnet.rst 0000666 4542626 0000144 00000001163 12652514124 023175 0 ustar pysdk-ci amazon 0000000 0000000 **To create a subnet**
This example creates a subnet in the specified VPC with the specified CIDR block. We recommend that you let us select an Availability Zone for you. Alternatively, you can use the ``--availability-zone`` option to specify the Availability Zone.
Command::
aws ec2 create-subnet --vpc-id vpc-a01106c2 --cidr-block 10.0.1.0/24
Output::
{
"Subnet": {
"VpcId": "vpc-a01106c2",
"CidrBlock": "10.0.1.0/24",
"State": "pending",
"AvailabilityZone": "us-east-1c",
"SubnetId": "subnet-9d4a7b6c",
"AvailableIpAddressCount": 251
}
} awscli-1.10.1/awscli/examples/ec2/describe-vpcs.rst 0000666 4542626 0000144 00000002636 12652514124 023173 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your VPCs**
This example describes your VPCs.
Command::
aws ec2 describe-vpcs
Output::
{
"Vpcs": [
{
"VpcId": "vpc-a01106c2",
"InstanceTenancy": "default",
"Tags": [
{
"Value": "MyVPC",
"Key": "Name"
}
],
"State": "available",
"DhcpOptionsId": "dopt-7a8b9c2d",
"CidrBlock": "10.0.0.0/16",
"IsDefault": false
},
{
"VpcId": "vpc-b61106d4",
"InstanceTenancy": "dedicated",
"State": "available",
"DhcpOptionsId": "dopt-97eb5efa",
"CidrBlock": "10.50.0.0/16",
"IsDefault": false
}
]
}
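To post-process this output, the CLI's ``--query`` option is usually the first choice (for example, ``--query 'length(Vpcs)'`` returns the count directly). The same count can be sketched against saved output (the file name here is illustrative):

```shell
# Sample fragment saved from the command above
cat > vpcs.json <<'EOF'
{
  "Vpcs": [
    {"VpcId": "vpc-a01106c2", "IsDefault": false},
    {"VpcId": "vpc-b61106d4", "IsDefault": false}
  ]
}
EOF
# Count the VPCs by counting VpcId keys
grep -c '"VpcId"' vpcs.json
```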
**To describe a specific VPC**
This example describes the specified VPC.
Command::
aws ec2 describe-vpcs --vpc-ids vpc-a01106c2
Output::
{
"Vpcs": [
{
"VpcId": "vpc-a01106c2",
"InstanceTenancy": "default",
"Tags": [
{
"Value": "MyVPC",
"Key": "Name"
}
],
"State": "available",
"DhcpOptionsId": "dopt-7a8b9c2d",
"CidrBlock": "10.0.0.0/16",
"IsDefault": false
}
]
} awscli-1.10.1/awscli/examples/ec2/modify-vpc-attribute.rst 0000666 4542626 0000144 00000001642 12652514124 024514 0 ustar pysdk-ci amazon 0000000 0000000 **To modify the enableDnsSupport attribute**
This example modifies the ``enableDnsSupport`` attribute. This attribute indicates whether DNS resolution is enabled for the VPC. If this attribute is ``true``, the Amazon DNS server resolves DNS hostnames for your instances to their corresponding IP addresses; otherwise, it does not. If the command succeeds, no output is returned.
Command::
aws ec2 modify-vpc-attribute --vpc-id vpc-a01106c2 --enable-dns-support "{\"Value\":false}"
**To modify the enableDnsHostnames attribute**
This example modifies the ``enableDnsHostnames`` attribute. This attribute indicates whether instances launched in the VPC get DNS hostnames. If this attribute is ``true``, instances in the VPC get DNS hostnames; otherwise, they do not. If the command succeeds, no output is returned.
Command::
aws ec2 modify-vpc-attribute --vpc-id vpc-a01106c2 --enable-dns-hostnames "{\"Value\":false}"
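The escaped quotes in the attribute argument are easy to get wrong. If the CLI reports a JSON parse error, a quick check of what the shell actually passes (a sketch, not part of the CLI) is:

```shell
# After shell unescaping, the argument must be valid JSON
arg="{\"Value\":false}"
python3 -c 'import json,sys; print(json.loads(sys.argv[1]))' "$arg"
```

A parse failure here means the CLI will reject the same string.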
awscli-1.10.1/awscli/examples/ec2/describe-vpc-endpoint-services.rst 0000666 4542626 0000144 00000000364 12652514124 026443 0 ustar pysdk-ci amazon 0000000 0000000 **To describe VPC endpoint services**
This example describes all available endpoint services for the region.
Command::
aws ec2 describe-vpc-endpoint-services
Output::
{
"ServiceNames": [
"com.amazonaws.us-east-1.s3"
]
} awscli-1.10.1/awscli/examples/ec2/stop-instances.rst 0000666 4542626 0000144 00000001365 12652514124 023412 0 ustar pysdk-ci amazon 0000000 0000000 **To stop an Amazon EC2 instance**
This example stops the specified Amazon EBS-backed instance.
Command::
aws ec2 stop-instances --instance-ids i-1a2b3c4d
Output::
{
"StoppingInstances": [
{
"InstanceId": "i-1a2b3c4d",
"CurrentState": {
"Code": 64,
"Name": "stopping"
},
"PreviousState": {
"Code": 16,
"Name": "running"
}
}
]
}
For more information, see `Stop and Start Your Instance`_ in the *Amazon Elastic Compute Cloud User Guide*.
.. _`Stop and Start Your Instance`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
awscli-1.10.1/awscli/examples/ec2/register-image.rst 0000666 4542626 0000144 00000001747 12652514124 023350 0 ustar pysdk-ci amazon 0000000 0000000 **To register an AMI using a manifest file**
This example registers an AMI using the specified manifest file in Amazon S3.
Command::
aws ec2 register-image --image-location my-s3-bucket/myimage/image.manifest.xml --name "MyImage"
Output::
{
"ImageId": "ami-61341708"
}
**To add a block device mapping**
Add the following parameter to your ``register-image`` command to add an Amazon EBS volume with the device name ``/dev/sdh`` and a volume size of 100::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdh\",\"Ebs\":{\"VolumeSize\":100}}]"
Add the following parameter to your ``register-image`` command to add ``ephemeral1`` as an instance store volume with the device name ``/dev/sdc``::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdc\",\"VirtualName\":\"ephemeral1\"}]"
Add the following parameter to your ``register-image`` command to omit a device (for example, ``/dev/sdf``)::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdf\",\"NoDevice\":\"\"}]"
awscli-1.10.1/awscli/examples/ec2/create-nat-gateway.rst 0000666 4542626 0000144 00000001205 12652514124 024113 0 ustar pysdk-ci amazon 0000000 0000000 **To create a NAT gateway**
This example creates a NAT gateway in subnet ``subnet-1a2b3c4d`` and associates an Elastic IP address with the allocation ID ``eipalloc-37fc1a52`` with the NAT gateway.
Command::
aws ec2 create-nat-gateway --subnet-id subnet-1a2b3c4d --allocation-id eipalloc-37fc1a52
Output::
{
"NatGateway": {
"NatGatewayAddresses": [
{
"AllocationId": "eipalloc-37fc1a52"
}
],
"VpcId": "vpc-1122aabb",
"State": "pending",
"NatGatewayId": "nat-08d48af2a8e83edfd",
"SubnetId": "subnet-1a2b3c4d",
"CreateTime": "2015-12-17T12:45:26.732Z"
}
} awscli-1.10.1/awscli/examples/ec2/describe-vpn-gateways.rst 0000666 4542626 0000144 00000001502 12652514125 024635 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your virtual private gateways**
This example describes your virtual private gateways.
Command::
aws ec2 describe-vpn-gateways
Output::
{
"VpnGateways": [
{
"State": "available",
"Type": "ipsec.1",
"VpnGatewayId": "vgw-f211f09b",
"VpcAttachments": [
{
"State": "attached",
"VpcId": "vpc-98eb5ef5"
}
]
},
{
"State": "available",
"Type": "ipsec.1",
"VpnGatewayId": "vgw-9a4cacf3",
"VpcAttachments": [
{
"State": "attaching",
"VpcId": "vpc-a01106c2"
}
]
}
]
} awscli-1.10.1/awscli/examples/ec2/report-instance-status.rst 0000666 4542626 0000144 00000000352 12652514124 025071 0 ustar pysdk-ci amazon 0000000 0000000 **To report status feedback for an instance**
This example command reports status feedback for the specified instance.
Command::
aws ec2 report-instance-status --instances i-570e5a28 --status impaired --reason-codes unresponsive
awscli-1.10.1/awscli/examples/ec2/unmonitor-instances.rst 0000666 4542626 0000144 00000000607 12652514124 024455 0 ustar pysdk-ci amazon 0000000 0000000 **To disable detailed monitoring for an instance**
This example command disables detailed monitoring for the specified instance.
Command::
aws ec2 unmonitor-instances --instance-ids i-570e5a28
Output::
{
"InstanceMonitorings": [
{
"InstanceId": "i-570e5a28",
"Monitoring": {
"State": "disabling"
}
}
]
}
awscli-1.10.1/awscli/examples/ec2/purchase-reserved-instances-offering.rst 0000666 4542626 0000144 00000000652 12652514124 027647 0 ustar pysdk-ci amazon 0000000 0000000 **To purchase a Reserved Instance offering**
This example command illustrates a purchase of a Reserved Instances offering, specifying an offering ID and instance count.
Command::
aws ec2 purchase-reserved-instances-offering --reserved-instances-offering-id ec06327e-dd07-46ee-9398-75b5fexample --instance-count 3
Output::
{
"ReservedInstancesId": "af9f760e-6f91-4559-85f7-4980eexample"
}
awscli-1.10.1/awscli/examples/ec2/delete-network-acl-entry.rst 0000666 4542626 0000144 00000000411 12652514124 025254 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a network ACL entry**
This example deletes ingress rule number 100 from the specified network ACL. If the command succeeds, no output is returned.
Command::
aws ec2 delete-network-acl-entry --network-acl-id acl-5fb85d36 --ingress --rule-number 100
awscli-1.10.1/awscli/examples/ec2/authorize-security-group-egress.rst 0000666 4542626 0000144 00000001352 12652514124 026733 0 ustar pysdk-ci amazon 0000000 0000000 **To add a rule that allows outbound traffic to a specific address range**
This example command adds a rule that grants access to the specified address ranges on TCP port 80.
Command::
aws ec2 authorize-security-group-egress --group-id sg-1a2b3c4d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "IpRanges": [{"CidrIp": "10.0.0.0/16"}]}]'
**To add a rule that allows outbound traffic to a specific security group**
This example command adds a rule that grants access to the specified security group on TCP port 80.
Command::
aws ec2 authorize-security-group-egress --group-id sg-1a2b3c4d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "UserIdGroupPairs": [{"GroupId": "sg-4b51a32f"}]}]'
awscli-1.10.1/awscli/examples/ec2/create-volume.rst 0000666 4542626 0000144 00000002315 12652514124 023204 0 ustar pysdk-ci amazon 0000000 0000000 **To create a new volume**
This example command creates an 80 GiB General Purpose (SSD) volume in the Availability Zone ``us-east-1a``.
Command::
aws ec2 create-volume --size 80 --region us-east-1 --availability-zone us-east-1a --volume-type gp2
Output::
{
"AvailabilityZone": "us-east-1a",
"Attachments": [],
"Tags": [],
"VolumeType": "gp2",
"VolumeId": "vol-1234abcd",
"State": "creating",
"SnapshotId": null,
"CreateTime": "YYYY-MM-DDTHH:MM:SS.000Z",
"Size": 80
}
**To create a new Provisioned IOPS (SSD) volume from a snapshot**
This example command creates a new Provisioned IOPS (SSD) volume with 1000 provisioned IOPS from a snapshot in the Availability Zone ``us-east-1a``.
Command::
aws ec2 create-volume --region us-east-1 --availability-zone us-east-1a --snapshot-id snap-abcd1234 --volume-type io1 --iops 1000
Output::
{
"AvailabilityZone": "us-east-1a",
"Attachments": [],
"Tags": [],
"VolumeType": "io1",
"VolumeId": "vol-1234abcd",
"State": "creating",
"Iops": 1000,
"SnapshotId": "snap-abcd1234",
"CreateTime": "YYYY-MM-DDTHH:MM:SS.000Z",
"Size": 500
}
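At the time of this release, ``gp2`` volumes deliver a baseline of 3 IOPS per GiB with a 100-IOPS floor (a rule of thumb; verify against the current Amazon EBS documentation). In shell arithmetic:

```shell
# Baseline IOPS estimate for the 80 GiB gp2 volume above (rule of thumb only)
size_gib=80
iops=$((size_gib * 3))
if [ "$iops" -lt 100 ]; then iops=100; fi
echo "baseline IOPS: $iops"
```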
awscli-1.10.1/awscli/examples/ec2/attach-vpn-gateway.rst 0000666 4542626 0000144 00000000532 12652514124 024137 0 ustar pysdk-ci amazon 0000000 0000000 **To attach a virtual private gateway to your VPC**
This example attaches the specified virtual private gateway to the specified VPC.
Command::
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-9a4cacf3 --vpc-id vpc-a01106c2
Output::
{
"VpcAttachement": {
"State": "attaching",
"VpcId": "vpc-a01106c2"
}
} awscli-1.10.1/awscli/examples/ec2/describe-vpc-endpoints.rst 0000666 4542626 0000144 00000001162 12652514124 025002 0 ustar pysdk-ci amazon 0000000 0000000 **To describe endpoints**
This example describes all of your endpoints.
Command::
aws ec2 describe-vpc-endpoints
Output::
{
"VpcEndpoints": [
{
"PolicyDocument": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"*\",\"Resource\":\"*\"}]}",
"VpcId": "vpc-ec43eb89",
"State": "available",
"ServiceName": "com.amazonaws.us-east-1.s3",
"RouteTableIds": [
"rtb-4e5ef02b"
],
"VpcEndpointId": "vpce-3ecf2a57",
"CreationTimestamp": "2015-05-15T09:40:50Z"
}
]
} awscli-1.10.1/awscli/examples/ec2/delete-vpc-endpoints.rst 0000666 4542626 0000144 00000000565 12652514124 024472 0 ustar pysdk-ci amazon 0000000 0000000 **To delete an endpoint**
This example deletes endpoints vpce-aa22bb33 and vpce-1a2b3c4d. If the command is partially successful or unsuccessful, a list of unsuccessful items is returned. If the command succeeds, the returned list is empty.
Command::
aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-aa22bb33 vpce-1a2b3c4d
Output::
{
"Unsuccessful": []
} awscli-1.10.1/awscli/examples/ec2/delete-security-group.rst 0000666 4542626 0000144 00000001323 12652514124 024673 0 ustar pysdk-ci amazon 0000000 0000000 **[EC2-Classic] To delete a security group**
This example deletes the security group named ``MySecurityGroup``. If the command succeeds, no output is returned.
Command::
aws ec2 delete-security-group --group-name MySecurityGroup
**[EC2-VPC] To delete a security group**
This example deletes the security group with the ID ``sg-903004f8``. Note that you can't reference a security group for EC2-VPC by name. If the command succeeds, no output is returned.
Command::
aws ec2 delete-security-group --group-id sg-903004f8
For more information, see `Using Security Groups`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Security Groups`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-sg.html
awscli-1.10.1/awscli/examples/ec2/modify-subnet-attribute.rst 0000666 4542626 0000144 00000001041 12652514124 025215 0 ustar pysdk-ci amazon 0000000 0000000 **To change a subnet's public IP addressing behavior**
This example modifies subnet-1a2b3c4d to specify that all instances launched into this subnet are assigned a public IP address. If the command succeeds, no output is returned.
Command::
aws ec2 modify-subnet-attribute --subnet-id subnet-1a2b3c4d --map-public-ip-on-launch
For more information, see `IP Addressing in Your VPC`_ in the *AWS Virtual Private Cloud User Guide*.
.. _`IP Addressing in Your VPC`: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html awscli-1.10.1/awscli/examples/ec2/attach-internet-gateway.rst 0000666 4542626 0000144 00000000420 12652514124 025160 0 ustar pysdk-ci amazon 0000000 0000000 **To attach an Internet gateway to your VPC**
This example attaches the specified Internet gateway to the specified VPC. If the command succeeds, no output is returned.
Command::
aws ec2 attach-internet-gateway --internet-gateway-id igw-c0a643a9 --vpc-id vpc-a01106c2 awscli-1.10.1/awscli/examples/ec2/delete-vpc.rst 0000666 4542626 0000144 00000000244 12652514124 022463 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a VPC**
This example deletes the specified VPC. If the command succeeds, no output is returned.
Command::
aws ec2 delete-vpc --vpc-id vpc-a01106c2
awscli-1.10.1/awscli/examples/ec2/detach-vpn-gateway.rst 0000666 4542626 0000144 00000000430 12652514125 024121 0 ustar pysdk-ci amazon 0000000 0000000 **To detach a virtual private gateway from your VPC**
This example detaches the specified virtual private gateway from the specified VPC. If the command succeeds, no output is returned.
Command::
aws ec2 detach-vpn-gateway --vpn-gateway-id vgw-9a4cacf3 --vpc-id vpc-a01106c2
awscli-1.10.1/awscli/examples/ec2/describe-id-format.rst 0000666 4542626 0000144 00000001540 12652514125 024074 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the ID format for your resources**
This example describes the ID format for all resource types that support longer IDs. The output indicates that the ``instance`` and ``reservation`` resource types can be enabled or disabled for longer IDs. The ``reservation`` resource is already enabled. The ``Deadline`` field indicates the date (in UTC) at which you're automatically switched over to using longer IDs for that resource type. If a deadline is not yet available for the resource type, this value is not returned.
Command::
aws ec2 describe-id-format
Output::
{
"Statuses": [
{
"Deadline": "2016-11-01T13:00:00.000Z",
"UseLongIds": false,
"Resource": "instance"
},
{
"Deadline": "2016-11-01T13:00:00.000Z",
"UseLongIds": true,
"Resource": "reservation"
}
]
} awscli-1.10.1/awscli/examples/ec2/describe-spot-fleet-instances.rst 0000666 4542626 0000144 00000001054 12652514124 026260 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the Spot Instances associated with a Spot fleet**
This example command lists the Spot instances associated with the specified Spot fleet.
Command::
aws ec2 describe-spot-fleet-instances --spot-fleet-request-id sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
Output::
{
"ActiveInstances": [
{
"InstanceId": "i-3852c1cf",
"InstanceType": "m3.medium",
"SpotInstanceRequestId": "sir-08b93456"
},
...
],
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"
}
awscli-1.10.1/awscli/examples/ec2/run-instances.rst 0000666 4542626 0000144 00000023500 12652514125 023225 0 ustar pysdk-ci amazon 0000000 0000000 **To launch an instance in EC2-Classic**
This example launches a single instance of type ``t1.micro``.
The key pair and security group, named ``MyKeyPair`` and ``MySecurityGroup``, must exist.
Command::
aws ec2 run-instances --image-id ami-1a2b3c4d --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups MySecurityGroup
Output::
{
"OwnerId": "123456789012",
"ReservationId": "r-5875ca20",
"Groups": [
{
"GroupName": "MySecurityGroup",
"GroupId": "sg-903004f8"
}
],
"Instances": [
{
"Monitoring": {
"State": "disabled"
},
"PublicDnsName": null,
"RootDeviceType": "ebs",
"State": {
"Code": 0,
"Name": "pending"
},
"EbsOptimized": false,
"LaunchTime": "2013-07-19T02:42:39.000Z",
"ProductCodes": [],
"StateTransitionReason": null,
"InstanceId": "i-123abc45",
"ImageId": "ami-1a2b3c4d",
"PrivateDnsName": null,
"KeyName": "MyKeyPair",
"SecurityGroups": [
{
"GroupName": "MySecurityGroup",
"GroupId": "sg-903004f8"
}
],
"ClientToken": null,
"InstanceType": "t1.micro",
"NetworkInterfaces": [],
"Placement": {
"Tenancy": "default",
"GroupName": null,
"AvailabilityZone": "us-east-1b"
},
"Hypervisor": "xen",
"BlockDeviceMappings": [],
"Architecture": "x86_64",
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"RootDeviceName": "/dev/sda1",
"VirtualizationType": "hvm",
"AmiLaunchIndex": 0
}
]
}
**To launch an instance in EC2-VPC**
This example launches a single instance of type ``t2.micro`` into the specified subnet.
The key pair named ``MyKeyPair`` and the security group sg-903004f8 must exist.
Command::
aws ec2 run-instances --image-id ami-abc12345 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids sg-903004f8 --subnet-id subnet-6e7f829e
Output::
{
"OwnerId": "123456789012",
"ReservationId": "r-5875ca20",
"Groups": [],
"Instances": [
{
"Monitoring": {
"State": "disabled"
},
"PublicDnsName": null,
"RootDeviceType": "ebs",
"State": {
"Code": 0,
"Name": "pending"
},
"EbsOptimized": false,
"LaunchTime": "2013-07-19T02:42:39.000Z",
"PrivateIpAddress": "10.0.1.114",
"ProductCodes": [],
"VpcId": "vpc-1a2b3c4d",
"InstanceId": "i-5203422c",
"ImageId": "ami-abc12345",
"PrivateDnsName": "ip-10-0-1-114.ec2.internal",
"KeyName": "MyKeyPair",
"SecurityGroups": [
{
"GroupName": "MySecurityGroup",
"GroupId": "sg-903004f8"
}
],
"ClientToken": null,
"SubnetId": "subnet-6e7f829e",
"InstanceType": "t2.micro",
"NetworkInterfaces": [
{
"Status": "in-use",
"MacAddress": "0e:ad:05:3b:60:52",
"SourceDestCheck": true,
"VpcId": "vpc-1a2b3c4d",
"Description": "null",
"NetworkInterfaceId": "eni-a7edb1c9",
"PrivateIpAddresses": [
{
"PrivateDnsName": "ip-10-0-1-114.ec2.internal",
"Primary": true,
"PrivateIpAddress": "10.0.1.114"
}
],
"PrivateDnsName": "ip-10-0-1-114.ec2.internal",
"Attachment": {
"Status": "attached",
"DeviceIndex": 0,
"DeleteOnTermination": true,
"AttachmentId": "eni-attach-52193138",
"AttachTime": "2013-07-19T02:42:39.000Z"
},
"Groups": [
{
"GroupName": "MySecurityGroup",
"GroupId": "sg-903004f8"
}
],
"SubnetId": "subnet-6e7f829e",
"OwnerId": "123456789012",
"PrivateIpAddress": "10.0.1.114"
}
],
"SourceDestCheck": true,
"Placement": {
"Tenancy": "default",
"GroupName": null,
"AvailabilityZone": "us-east-1b"
},
"Hypervisor": "xen",
"BlockDeviceMappings": [],
"Architecture": "x86_64",
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"RootDeviceName": "/dev/sda1",
"VirtualizationType": "hvm",
"AmiLaunchIndex": 0
}
]
}
The following example requests a public IP address for an instance that you're launching into a nondefault subnet:
Command::
aws ec2 run-instances --image-id ami-c3b8d6aa --count 1 --instance-type t1.micro --key-name MyKeyPair --security-group-ids sg-903004f8 --subnet-id subnet-6e7f829e --associate-public-ip-address
**To launch an instance using a block device mapping**
Add the following parameter to your ``run-instances`` command to specify block devices::
--block-device-mappings file://mapping.json
To add an Amazon EBS volume with the device name ``/dev/sdh`` and a volume size of 100, specify the following in mapping.json::
[
{
"DeviceName": "/dev/sdh",
"Ebs": {
"VolumeSize": 100
}
}
]
To add ``ephemeral1`` as an instance store volume with the device name ``/dev/sdc``, specify the following in mapping.json::
[
{
"DeviceName": "/dev/sdc",
"VirtualName": "ephemeral1"
}
]
To omit a device specified by the AMI used to launch the instance (for example, ``/dev/sdf``), specify the following in mapping.json::
[
{
"DeviceName": "/dev/sdf",
"NoDevice": ""
}
]
You can view only the Amazon EBS volumes in your block device mapping using the console or the ``describe-instances`` command. To view all volumes, including the instance store volumes, use the following command.
Command::
curl http://169.254.169.254/latest/meta-data/block-device-mapping
Output::
ami
ephemeral1
Note that ``ami`` represents the root volume. To get details about the instance store volume ``ephemeral1``, use the following command.
Command::
curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral1
Output::
sdc
**To launch an instance with a modified block device mapping**
You can change individual characteristics of existing AMI block device mappings to suit your needs. For example, you might want a larger root volume than the usual 8 GiB, or a General Purpose (SSD) volume in place of the Magnetic volume the AMI currently uses.
Use the ``describe-images`` command with the image ID of the AMI you want to use to find its existing block device mapping. You should see a block device mapping in the output::
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "snap-b047276d",
"VolumeSize": 8,
"VolumeType": "standard",
"Encrypted": false
}
}
You can modify the above mapping by changing the individual parameters. For example, to launch an instance with a modified block device mapping, add the following parameter to your ``run-instances`` command to change the above mapping's volume size and type::
--block-device-mappings file://mapping.json
Where mapping.json contains the following::
[
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "snap-b047276d",
"VolumeSize": 100,
"VolumeType": "gp2"
}
}
]
**To launch an instance with user data**
You can launch an instance and specify user data that performs instance configuration or runs a script. The user data must be passed as a normal string; the CLI handles the base64 encoding for you. The following example passes user data in a file called ``my_script.txt`` that contains a configuration script for your instance. The script runs at launch.
Command::
aws ec2 run-instances --image-id ami-abc1234 --count 1 --instance-type m4.large --key-name keypair --user-data file://my_script.txt --subnet-id subnet-abcd1234 --security-group-ids sg-abcd1234
For more information about launching instances, see `Using Amazon EC2 Instances`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Amazon EC2 Instances`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html
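Under the hood, the CLI base64-encodes the file contents before calling the API. The encoding can be reproduced, and verified to round-trip, with a short sketch:

```shell
# A minimal stand-in user data script
printf '#!/bin/bash\necho hello\n' > my_script.txt
# Encode the way the CLI does, then decode to confirm nothing was mangled
python3 - <<'EOF'
import base64
raw = open('my_script.txt', 'rb').read()
encoded = base64.b64encode(raw)
assert base64.b64decode(encoded) == raw
print(encoded.decode())
EOF
```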
**To launch an instance with an instance profile**
This example shows the use of the ``--iam-instance-profile`` option to specify an `IAM instance profile`_ by name.
.. _`IAM instance profile`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Command::
aws ec2 run-instances --iam-instance-profile Name=MyInstanceProfile --image-id ami-1a2b3c4d --count 1 --instance-type t2.micro --key-name MyKeyPair --security-groups MySecurityGroup
awscli-1.10.1/awscli/examples/ec2/modify-hosts.rst 0000666 4542626 0000144 00000002645 12652514124 023067 0 ustar pysdk-ci amazon 0000000 0000000 **To describe Dedicated hosts in your account and generate a machine-readable list**
To output a list of Dedicated host IDs in JSON (comma separated).
Command::
aws ec2 describe-hosts --query 'Hosts[].HostId' --output json
Output::
[
"h-085664df5899941c",
"h-056c1b0724170dc38"
]
To output a list of Dedicated host IDs in plaintext (one ID per line).
Command::
aws ec2 describe-hosts --query 'Hosts[].HostId' --output text
Output::
h-085664df5899941c
h-056c1b0724170dc38
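Text output is convenient to feed into shell loops. A sketch that iterates over host IDs (a static list stands in for the live ``describe-hosts`` call, which is shown as a comment):

```shell
# In a live session the list would come from:
#   ids=$(aws ec2 describe-hosts --query 'Hosts[].HostId' --output text)
ids="h-085664df5899941c
h-056c1b0724170dc38"

# Act on each Dedicated host ID in turn.
for id in $ids; do
  echo "Dedicated Host: $id"
done
```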
**To describe available Dedicated hosts in your account**
Command::
aws ec2 describe-hosts --filter "Name=state,Values=available"
Output::
{
"Hosts": [
{
"HostId": "h-085664df5899941c",
"HostProperties": {
"Cores": 20,
"Sockets": 2,
"InstanceType": "m3.medium",
"TotalVCpus": 32
},
"Instances": [],
"State": "available",
"AvailabilityZone": "us-east-1b",
"AvailableCapacity": {
"AvailableInstanceCapacity": [
{
"AvailableCapacity": 32,
"InstanceType": "m3.medium",
"TotalCapacity": 32
}
],
"AvailableVCpus": 32
},
"AutoPlacement": "off"
}
]
}
**To create a VPC peering connection between your VPCs**
This example requests a peering connection between your VPCs vpc-1a2b3c4d and vpc-11122233.
Command::
aws ec2 create-vpc-peering-connection --vpc-id vpc-1a2b3c4d --peer-vpc-id vpc-11122233
Output::
{
"VpcPeeringConnection": {
"Status": {
"Message": "Initiating Request to 444455556666",
"Code": "initiating-request"
},
"Tags": [],
"RequesterVpcInfo": {
"OwnerId": "444455556666",
"VpcId": "vpc-1a2b3c4d",
"CidrBlock": "10.0.0.0/28"
},
"VpcPeeringConnectionId": "pcx-111aaa111",
"ExpirationTime": "2014-04-02T16:13:36.000Z",
"AccepterVpcInfo": {
"OwnerId": "444455556666",
"VpcId": "vpc-11122233"
}
}
}
**To create a VPC peering connection with a VPC in another account**
This example requests a peering connection between your VPC (vpc-1a2b3c4d) and a VPC (vpc-123abc45) that belongs to AWS account 123456789012.
Command::
aws ec2 create-vpc-peering-connection --vpc-id vpc-1a2b3c4d --peer-vpc-id vpc-123abc45 --peer-owner-id 123456789012
**To delete a static route from a VPN connection**
This example deletes the specified static route from the specified VPN connection. If the command succeeds, no output is returned.
Command::
aws ec2 delete-vpn-connection-route --vpn-connection-id vpn-40f41529 --destination-cidr-block 11.12.0.0/16
**To describe all the attributes for your AWS account**
This example describes the attributes for your AWS account.
Command::
aws ec2 describe-account-attributes
Output::
{
"AccountAttributes": [
{
"AttributeName": "vpc-max-security-groups-per-interface",
"AttributeValues": [
{
"AttributeValue": "5"
}
]
},
{
"AttributeName": "max-instances",
"AttributeValues": [
{
"AttributeValue": "20"
}
]
},
{
"AttributeName": "supported-platforms",
"AttributeValues": [
{
"AttributeValue": "EC2"
},
{
"AttributeValue": "VPC"
}
]
},
{
"AttributeName": "default-vpc",
"AttributeValues": [
{
"AttributeValue": "none"
}
]
},
{
"AttributeName": "max-elastic-ips",
"AttributeValues": [
{
"AttributeValue": "5"
}
]
},
{
"AttributeName": "vpc-max-elastic-ips",
"AttributeValues": [
{
"AttributeValue": "5"
}
]
}
]
}
**To describe a single attribute for your AWS account**
This example describes the ``supported-platforms`` attribute for your AWS account.
Command::
aws ec2 describe-account-attributes --attribute-names supported-platforms
Output::
{
"AccountAttributes": [
{
"AttributeName": "supported-platforms",
"AttributeValues": [
{
"AttributeValue": "EC2"
},
{
"AttributeValue": "VPC"
}
]
}
]
}
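A ``--query`` expression can flatten this structure to just the values. The live form is shown as a comment; the same extraction is demonstrated locally against the sample output above:

```shell
# Live form:
#   aws ec2 describe-account-attributes --attribute-names supported-platforms \
#       --query 'AccountAttributes[0].AttributeValues[].AttributeValue' --output text

# Save the sample response and extract the values locally.
cat > attrs.json <<'EOF'
{"AccountAttributes": [{"AttributeName": "supported-platforms",
  "AttributeValues": [{"AttributeValue": "EC2"}, {"AttributeValue": "VPC"}]}]}
EOF
python3 -c 'import json; d = json.load(open("attrs.json")); print(" ".join(v["AttributeValue"] for v in d["AccountAttributes"][0]["AttributeValues"]))'
# prints: EC2 VPC
```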
**To accept a VPC peering connection**
This example accepts the specified VPC peering connection request.
Command::
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-1a2b3c4d
Output::
{
"VpcPeeringConnection": {
"Status": {
"Message": "Provisioning",
"Code": "provisioning"
},
"Tags": [],
"AccepterVpcInfo": {
"OwnerId": "444455556666",
"VpcId": "vpc-44455566",
"CidrBlock": "10.0.1.0/28"
},
"VpcPeeringConnectionId": "pcx-1a2b3c4d",
"RequesterVpcInfo": {
"OwnerId": "444455556666",
"VpcId": "vpc-111abc45",
"CidrBlock": "10.0.0.0/28"
}
}
}
**To remove the rule that allows outbound traffic to a specific address range**
This example command removes the rule that grants access to the specified address ranges on TCP port 80.
Command::
aws ec2 revoke-security-group-egress --group-id sg-1a2b3c4d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "IpRanges": [{"CidrIp": "10.0.0.0/16"}]}]'
**To remove the rule that allows outbound traffic to a specific security group**
This example command removes the rule that grants access to the specified security group on TCP port 80.
Command::
aws ec2 revoke-security-group-egress --group-id sg-1a2b3c4d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "UserIdGroupPairs": [{"GroupId": "sg-4b51a32f"}]}]'
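The ``--ip-permissions`` argument is a JSON array, so it can be kept in a shell variable and validated locally before use (the ``revoke`` call itself is shown as a comment because it needs live credentials):

```shell
# The permissions structure from the example above.
perms='[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "UserIdGroupPairs": [{"GroupId": "sg-4b51a32f"}]}]'

# Validate the JSON before passing it on the command line.
echo "$perms" | python3 -m json.tool > /dev/null && echo "permissions are valid JSON"

# Live form:
# aws ec2 revoke-security-group-egress --group-id sg-1a2b3c4d --ip-permissions "$perms"
```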
**To describe your Internet gateways**
This example describes your Internet gateways.
Command::
aws ec2 describe-internet-gateways
Output::
{
"InternetGateways": [
{
"Tags": [],
"InternetGatewayId": "igw-c0a643a9",
"Attachments": [
{
"State": "available",
"VpcId": "vpc-a01106c2"
}
]
},
{
"Tags": [],
"InternetGatewayId": "igw-046d7966",
"Attachments": []
}
]
}
**To describe the Internet gateway for a specific VPC**
This example describes the Internet gateway for the specified VPC.
Command::
aws ec2 describe-internet-gateways --filters "Name=attachment.vpc-id,Values=vpc-a01106c2"
Output::
{
"InternetGateways": [
{
"Tags": [],
"InternetGatewayId": "igw-c0a643a9",
"Attachments": [
{
"State": "available",
"VpcId": "vpc-a01106c2"
}
]
}
]
}
**To create a DHCP options set**
This example creates a DHCP options set.
Command::
aws ec2 create-dhcp-options --dhcp-configuration "Key=domain-name-servers,Values=10.2.5.1,10.2.5.2"
Output::
{
"DhcpOptions": {
"DhcpConfigurations": [
{
"Values": [
"10.2.5.2",
"10.2.5.1"
],
"Key": "domain-name-servers"
}
],
"DhcpOptionsId": "dopt-d9070ebb"
}
}
**To delete a network interface**
This example deletes the specified network interface. If the command succeeds, no output is returned.
Command::
aws ec2 delete-network-interface --network-interface-id eni-e5aa89a3
**To create a network ACL**
This example creates a network ACL for the specified VPC.
Command::
aws ec2 create-network-acl --vpc-id vpc-a01106c2
Output::
{
"NetworkAcl": {
"Associations": [],
"NetworkAclId": "acl-5fb85d36",
"VpcId": "vpc-a01106c2",
"Tags": [],
"Entries": [
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": true,
"RuleAction": "deny"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": false,
"RuleAction": "deny"
}
],
"IsDefault": false
}
}
**To detach a network interface from your instance**
This example detaches the specified network interface from the specified instance. If the command succeeds, no output is returned.
Command::
aws ec2 detach-network-interface --attachment-id eni-attach-66c4350a
**To unassign a secondary private IP address from a network interface**
This example unassigns the specified private IP address from the specified network interface. If the command succeeds, no output is returned.
Command::
aws ec2 unassign-private-ip-addresses --network-interface-id eni-e5aa89a3 --private-ip-addresses 10.0.0.82
**To delete a key pair**
This example deletes the key pair named ``MyKeyPair``. If the command succeeds, no output is returned.
Command::
aws ec2 delete-key-pair --key-name MyKeyPair
For more information, see `Using Key Pairs`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Key Pairs`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-keypairs.html
**To view the status of a conversion task**
This example returns the status of a conversion task with the ID import-i-ffvko9js.
Command::
aws ec2 describe-conversion-tasks --conversion-task-ids import-i-ffvko9js
Output::
{
"ConversionTasks": [
{
"ConversionTaskId": "import-i-ffvko9js",
"ImportInstance": {
"InstanceId": "i-6cc70a3f",
"Volumes": [
{
"Volume": {
"Id": "vol-99e2ebdb",
"Size": 16
},
"Status": "completed",
"Image": {
"Size": 1300687360,
"ImportManifestUrl": "https://s3.amazonaws.com/myimportbucket/411443cd-d620-4f1c-9d66-13144EXAMPLE/RHEL5.vmdkmanifest.xml?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=140EXAMPLE&Signature=XYNhznHNgCqsjDxL9wRL%2FJvEXAMPLE",
"Format": "VMDK"
},
"BytesConverted": 1300682960,
"AvailabilityZone": "us-east-1d"
}
]
},
"ExpirationTime": "2014-05-14T22:06:23Z",
"State": "completed"
}
]
}
**To display a key pair**
This example displays the fingerprint for the key pair named ``MyKeyPair``.
Command::
aws ec2 describe-key-pairs --key-name MyKeyPair
Output::
{
"KeyPairs": [
{
"KeyName": "MyKeyPair",
"KeyFingerprint": "1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f"
}
]
}
For more information, see `Using Key Pairs`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Key Pairs`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-keypairs.html
**To describe your VPN connections**
This example describes your VPN connections.
Command::
aws ec2 describe-vpn-connections
Output::
{
"VpnConnections": [
{
"VpnConnectionId": "vpn-40f41529",
"CustomerGatewayConfiguration": "...configuration information...",
"VgwTelemetry": [
{
"Status": "DOWN",
"AcceptedRouteCount": 0,
"OutsideIpAddress": "72.21.209.192",
"LastStatusChange": "2013-02-04T20:19:34.000Z",
"StatusMessage": "IPSEC IS DOWN"
},
{
"Status": "DOWN",
"AcceptedRouteCount": 0,
"OutsideIpAddress": "72.21.209.224",
"LastStatusChange": "2013-02-04T20:19:34.000Z",
"StatusMessage": "IPSEC IS DOWN"
}
],
"State": "available",
"VpnGatewayId": "vgw-9a4cacf3",
"CustomerGatewayId": "cgw-0e11f167",
"Type": "ipsec.1"
}
]
}
**To describe your available VPN connections**
This example describes your VPN connections with a state of ``available``.
Command::
aws ec2 describe-vpn-connections --filters "Name=state,Values=available"
**To describe a security group for EC2-Classic**
This example displays information about the security group named ``MySecurityGroup``.
Command::
aws ec2 describe-security-groups --group-names MySecurityGroup
Output::
{
"SecurityGroups": [
{
"IpPermissionsEgress": [],
"Description": "My security group",
"IpPermissions": [
{
"ToPort": 22,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "203.0.113.0/24"
}
],
"UserIdGroupPairs": [],
"FromPort": 22
}
],
"GroupName": "MySecurityGroup",
"OwnerId": "123456789012",
"GroupId": "sg-903004f8"
}
]
}
**To describe a security group for EC2-VPC**
This example displays information about the security group with the ID sg-903004f8. Note that you can't reference a security group for EC2-VPC by name.
Command::
aws ec2 describe-security-groups --group-ids sg-903004f8
Output::
{
"SecurityGroups": [
{
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"UserIdGroupPairs": []
}
],
"Description": "My security group",
"IpPermissions": [
{
"ToPort": 22,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "203.0.113.0/24"
}
],
"UserIdGroupPairs": [],
"FromPort": 22
}
],
"GroupName": "MySecurityGroup",
"VpcId": "vpc-1a2b3c4d",
"OwnerId": "123456789012",
"GroupId": "sg-903004f8"
}
]
}
**To describe security groups that have specific rules**
(EC2-VPC only) This example uses filters to describe security groups that have a rule that allows SSH traffic (port 22) and a rule that allows traffic from all addresses (``0.0.0.0/0``). The output is filtered to display only the names of the security groups. Security groups must match all filters to be returned in the results; however, a single rule does not have to match all filters. For example, the output returns a security group with a rule that allows SSH traffic from a specific IP address and another rule that allows HTTP traffic from all addresses.
Command::
aws ec2 describe-security-groups --filters Name=ip-permission.from-port,Values=22 Name=ip-permission.to-port,Values=22 Name=ip-permission.cidr,Values='0.0.0.0/0' --query 'SecurityGroups[*].{Name:GroupName}'
Output::
[
{
"Name": "default"
},
{
"Name": "Test SG"
},
{
"Name": "SSH-Access-Group"
}
]
**To describe tagged security groups**
This example describes all security groups that include ``test`` in the security group name, and that have the tag ``Test=To-delete``. The output is filtered to display only the names and IDs of the security groups.
Command::
aws ec2 describe-security-groups --filters Name=group-name,Values='*test*' Name=tag-key,Values=Test Name=tag-value,Values=To-delete --query 'SecurityGroups[*].{Name:GroupName,ID:GroupId}'
Output::
[
{
"Name": "testfornewinstance",
"ID": "sg-33bb22aa"
},
{
"Name": "newgrouptest",
"ID": "sg-1a2b3c4d"
}
]
For more information, see `Using Security Groups`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Security Groups`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-sg.html
**To start an Amazon EC2 instance**
This example starts the specified Amazon EBS-backed instance.
Command::
aws ec2 start-instances --instance-ids i-1a2b3c4d
Output::
{
"StartingInstances": [
{
"InstanceId": "i-1a2b3c4d",
"CurrentState": {
"Code": 0,
"Name": "pending"
},
"PreviousState": {
"Code": 80,
"Name": "stopped"
}
}
]
}
For more information, see `Stop and Start Your Instance`_ in the *Amazon Elastic Compute Cloud User Guide*.
.. _`Stop and Start Your Instance`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
awscli-1.10.1/awscli/examples/ec2/delete-placement-group.rst 0000666 4542626 0000144 00000000242 12652514124 024773 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a placement group**
This example command deletes the specified placement group.
Command::
aws ec2 delete-placement-group --group-name my-cluster
awscli-1.10.1/awscli/examples/ec2/modify-instance-attribute.rst 0000666 4542626 0000144 00000002640 12652514124 025527 0 ustar pysdk-ci amazon 0000000 0000000 **To modify the instance type**
This example modifies the instance type of the specified instance. The instance must be in the ``stopped`` state. If the command succeeds, no output is returned.
Command::
aws ec2 modify-instance-attribute --instance-id i-5203422c --instance-type "{\"Value\": \"m1.small\"}"
**To enable enhanced networking on an instance**
This example enables enhanced networking for the specified instance. The instance must be in the ``stopped`` state. If the command succeeds, no output is returned.
Command::
aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --sriov-net-support simple
**To modify the sourceDestCheck attribute**
This example sets the ``sourceDestCheck`` attribute of the specified instance to ``true``. The instance must be in a VPC. If the command succeeds, no output is returned.
Command::
aws ec2 modify-instance-attribute --instance-id i-5203422c --source-dest-check "{\"Value\": true}"
**To modify the deleteOnTermination attribute of the root volume**
This example sets the ``deleteOnTermination`` attribute for the root volume of the specified Amazon EBS-backed instance to ``false``. By default, this attribute is ``true`` for the root volume. If the command succeeds, no output is returned.
Command::
aws ec2 modify-instance-attribute --instance-id i-5203422c --block-device-mappings "[{\"DeviceName\": \"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":false}}]"
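Several of these attribute values are small JSON documents, and the backslashes exist only to survive shell quoting. A local check (no AWS call) confirms that the escaped and single-quoted forms pass the identical argument:

```shell
# The escaped form, exactly as typed in the examples above.
echo "{\"Value\": \"m1.small\"}" | python3 -m json.tool > /dev/null && echo "escaped form parses"

# Single quotes avoid the escaping entirely and pass the same string.
echo '{"Value": "m1.small"}' | python3 -m json.tool > /dev/null && echo "single-quoted form parses"
```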
**To create a key pair**
This example creates a key pair named ``MyKeyPair``.
Command::
aws ec2 create-key-pair --key-name MyKeyPair
The output is an ASCII version of the private key and key fingerprint. You need to save the key to a file.
For more information, see `Using Key Pairs`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Key Pairs`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-keypairs.html
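One common way to save the key is to extract just the key material with ``--query`` and restrict the file's permissions. The live call is shown as a comment; a stand-in file demonstrates the permission step (the file name and contents are placeholders):

```shell
# Live form:
#   aws ec2 create-key-pair --key-name MyKeyPair \
#       --query 'KeyMaterial' --output text > MyKeyPair.pem

# A stand-in file demonstrates locking down the permissions, which most
# SSH clients require before they will use a private key.
echo "placeholder private key material" > MyKeyPair.pem
chmod 400 MyKeyPair.pem
ls -l MyKeyPair.pem
```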
**To delete a virtual private gateway**
This example deletes the specified virtual private gateway. If the command succeeds, no output is returned.
Command::
aws ec2 delete-vpn-gateway --vpn-gateway-id vgw-9a4cacf3
**To delete a flow log**
This example deletes flow log ``fl-1a2b3c4d``.
Command::
aws ec2 delete-flow-logs --flow-log-id fl-1a2b3c4d
Output::
{
"Unsuccessful": []
}
**To reset a snapshot attribute**
This example resets the create volume permissions for snapshot ``snap-1a2b3c4d``. If the command succeeds, no output is returned.
Command::
aws ec2 reset-snapshot-attribute --snapshot-id snap-1a2b3c4d --attribute createVolumePermission
**To add a tag to a resource**
This example adds the tag ``Stack=production`` to the specified image, or overwrites an existing tag for the AMI where the tag key is ``Stack``. If the command succeeds, no output is returned.
Command::
aws ec2 create-tags --resources ami-78a54011 --tags Key=Stack,Value=production
**To add tags to multiple resources**
This example adds (or overwrites) two tags for an AMI and an instance. One of the tags contains just a key (``webserver``), with no value (we set the value to an empty string). The other tag consists of a key (``stack``) and value (``Production``). If the command succeeds, no output is returned.
Command::
aws ec2 create-tags --resources ami-1a2b3c4d i-10a64379 --tags Key=webserver,Value= Key=stack,Value=Production
**To add tags with special characters**
This example adds the tag ``[Group]=test`` for an instance. The square brackets ([ and ]) are special characters, and must be escaped. If you are using Windows, surround the element that has special characters with double quotes ("), and precede each double quote with a backslash (\\):
Command::
aws ec2 create-tags --resources i-1a2b3c4d --tags Key=\"[Group]\",Value=test
If you are using Windows PowerShell, surround the element with double quotes ("), precede each double quote with a backslash (\\), and then surround the entire key and value structure with single quotes ('):
Command::
aws ec2 create-tags --resources i-1a2b3c4d --tags 'Key=\"[Group]\",Value=test'
If you are using Linux or OS X, enclose the entire key and value structure with single quotes ('), and then enclose the element with the special character with double quotes ("):
Command::
aws ec2 create-tags --resources i-1a2b3c4d --tags 'Key="[Group]",Value=test'
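The quoting variants above all deliver the same ``Key``/``Value`` pair to the CLI; ``printf`` shows exactly what the shell hands over in the Linux/OS X form:

```shell
# Print the argument exactly as the CLI receives it on Linux or OS X.
printf '%s\n' 'Key="[Group]",Value=test'
# prints: Key="[Group]",Value=test
```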
**To describe all volumes**
This example command describes all of your volumes in the default region.
Command::
aws ec2 describe-volumes
Output::
{
"Volumes": [
{
"AvailabilityZone": "us-east-1a",
"Attachments": [
{
"AttachTime": "2013-12-18T22:35:00.000Z",
"InstanceId": "i-abe041d4",
"VolumeId": "vol-21083656",
"State": "attached",
"DeleteOnTermination": true,
"Device": "/dev/sda1"
}
],
"VolumeType": "standard",
"VolumeId": "vol-21083656",
"State": "in-use",
"SnapshotId": "snap-b4ef17a9",
"CreateTime": "2013-12-18T22:35:00.084Z",
"Size": 8
},
{
"AvailabilityZone": "us-east-1a",
"Attachments": [],
"VolumeType": "io1",
"VolumeId": "vol-2725bc51",
"State": "available",
"Iops": 1000,
"SnapshotId": null,
"CreateTime": "2014-02-27T00:02:41.791Z",
"Size": 100
}
]
}
**To describe volumes that are attached to a specific instance**
This example command describes all volumes that are both attached to the instance with the ID i-abe041d4 and set to delete when the instance terminates.
Command::
aws ec2 describe-volumes --region us-east-1 --filters Name=attachment.instance-id,Values=i-abe041d4 Name=attachment.delete-on-termination,Values=true
Output::
{
"Volumes": [
{
"AvailabilityZone": "us-east-1a",
"Attachments": [
{
"AttachTime": "2013-12-18T22:35:00.000Z",
"InstanceId": "i-abe041d4",
"VolumeId": "vol-21083656",
"State": "attached",
"DeleteOnTermination": true,
"Device": "/dev/sda1"
}
],
"VolumeType": "standard",
"VolumeId": "vol-21083656",
"State": "in-use",
"SnapshotId": "snap-b4ef17a9",
"CreateTime": "2013-12-18T22:35:00.084Z",
"Size": 8
}
]
}
**To describe tagged volumes and filter the output**
This example command describes all volumes that have the tag key ``Name`` and a value that begins with ``Test``. The output is filtered to display only the tags and IDs of the volumes.
Command::
aws ec2 describe-volumes --filters Name=tag-key,Values="Name" Name=tag-value,Values="Test*" --query 'Volumes[*].{ID:VolumeId,Tag:Tags}'
Output::
[
{
"Tag": [
{
"Value": "Test2",
"Key": "Name"
}
],
"ID": "vol-9de9e9d9"
},
{
"Tag": [
{
"Value": "Test1",
"Key": "Name"
}
],
"ID": "vol-b2242df9"
}
]
**To create a Spot Instance datafeed**
This example command creates a Spot Instance data feed for the account.
Command::
aws ec2 create-spot-datafeed-subscription --bucket <s3-bucket-name> --prefix spotdata
Output::
{
"SpotDatafeedSubscription": {
"OwnerId": "",
"Prefix": "spotdata",
"Bucket": "",
"State": "Active"
}
}
**To cancel a bundle task**
This example cancels bundle task ``bun-2a4e041c``.
Command::
aws ec2 cancel-bundle-task --bundle-id bun-2a4e041c
Output::
{
"BundleTask": {
"UpdateTime": "2015-09-15T13:27:40.000Z",
"InstanceId": "i-1a2b3c4d",
"Storage": {
"S3": {
"Prefix": "winami",
"Bucket": "bundletasks"
}
},
"State": "cancelling",
"StartTime": "2015-09-15T13:24:35.000Z",
"BundleId": "bun-2a4e041c"
}
}
**To disable ClassicLink DNS support for a VPC**
This example disables ClassicLink DNS support for ``vpc-88888888``.
Command::
aws ec2 disable-vpc-classic-link-dns-support --vpc-id vpc-88888888
Output::
{
"Return": true
}
**To cancel Spot Instance requests**
This example command cancels a Spot Instance request.
Command::
aws ec2 cancel-spot-instance-requests --spot-instance-request-ids sir-08b93456
Output::
{
"CancelledSpotInstanceRequests": [
{
"State": "cancelled",
"SpotInstanceRequestId": "sir-08b93456"
}
]
}
**To describe the launch permissions for an AMI**
This example describes the launch permissions for the specified AMI.
Command::
aws ec2 describe-image-attribute --image-id ami-5731123e --attribute launchPermission
Output::
{
"LaunchPermissions": [
{
"UserId": "123456789012"
}
],
"ImageId": "ami-5731123e"
}
**To describe the product codes for an AMI**
This example describes the product codes for the specified AMI. Note that this AMI has no product codes.
Command::
aws ec2 describe-image-attribute --image-id ami-5731123e --attribute productCodes
Output::
{
"ProductCodes": [],
"ImageId": "ami-5731123e"
}
**To delete a route table**
This example deletes the specified route table. If the command succeeds, no output is returned.
Command::
aws ec2 delete-route-table --route-table-id rtb-22574640
**To describe the status of an instance**
This example describes the current status of the specified instance.
Command::
aws ec2 describe-instance-status --instance-id i-5203422c
Output::
{
"InstanceStatuses": [
{
"InstanceId": "i-5203422c",
"InstanceState": {
"Code": 16,
"Name": "running"
},
"AvailabilityZone": "us-east-1d",
"SystemStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
},
"InstanceStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
}
}
]
}
**To describe your regions**
This example describes all the regions that are available to you.
Command::
aws ec2 describe-regions
Output::
{
"Regions": [
{
"Endpoint": "ec2.eu-west-1.amazonaws.com",
"RegionName": "eu-west-1"
},
{
"Endpoint": "ec2.ap-southeast-1.amazonaws.com",
"RegionName": "ap-southeast-1"
},
{
"Endpoint": "ec2.ap-southeast-2.amazonaws.com",
"RegionName": "ap-southeast-2"
},
{
"Endpoint": "ec2.eu-central-1.amazonaws.com",
"RegionName": "eu-central-1"
},
{
"Endpoint": "ec2.ap-northeast-2.amazonaws.com",
"RegionName": "ap-northeast-2"
},
{
"Endpoint": "ec2.ap-northeast-1.amazonaws.com",
"RegionName": "ap-northeast-1"
},
{
"Endpoint": "ec2.us-east-1.amazonaws.com",
"RegionName": "us-east-1"
},
{
"Endpoint": "ec2.sa-east-1.amazonaws.com",
"RegionName": "sa-east-1"
},
{
"Endpoint": "ec2.us-west-1.amazonaws.com",
"RegionName": "us-west-1"
},
{
"Endpoint": "ec2.us-west-2.amazonaws.com",
"RegionName": "us-west-2"
}
]
}
**To describe the regions with an endpoint that has a specific string**
This example describes all regions that are available to you that have the string "us" in the endpoint.
Command::
aws ec2 describe-regions --filters "Name=endpoint,Values=*us*"
Output::
{
"Regions": [
{
"Endpoint": "ec2.us-east-1.amazonaws.com",
"RegionName": "us-east-1"
},
{
"Endpoint": "ec2.us-west-2.amazonaws.com",
"RegionName": "us-west-2"
},
{
"Endpoint": "ec2.us-west-1.amazonaws.com",
"RegionName": "us-west-1"
}
]
}
**To describe your subnets**
This example describes your subnets.
Command::
aws ec2 describe-subnets
Output::
{
"Subnets": [
{
"VpcId": "vpc-a01106c2",
"CidrBlock": "10.0.1.0/24",
"MapPublicIpOnLaunch": false,
"DefaultForAz": false,
"State": "available",
"AvailabilityZone": "us-east-1c",
"SubnetId": "subnet-9d4a7b6c",
"AvailableIpAddressCount": 251
},
{
"VpcId": "vpc-b61106d4",
"CidrBlock": "10.0.0.0/24",
"MapPublicIpOnLaunch": false,
"DefaultForAz": false,
"State": "available",
"AvailabilityZone": "us-east-1d",
"SubnetId": "subnet-65ea5f08",
"AvailableIpAddressCount": 251
}
]
}
**To describe the subnets for a specific VPC**
This example describes the subnets for the specified VPC.
Command::
aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-a01106c2"
Output::
{
"Subnets": [
{
"VpcId": "vpc-a01106c2",
"CidrBlock": "10.0.1.0/24",
"MapPublicIpOnLaunch": false,
"DefaultForAz": false,
"State": "available",
"AvailabilityZone": "us-east-1c",
"SubnetId": "subnet-9d4a7b6c",
"AvailableIpAddressCount": 251
}
]
}
**To describe subnets with a specific tag**
This example lists subnets with the tag ``Name=MySubnet`` and returns the output in text format.
Command::
aws ec2 describe-subnets --filters Name=tag:Name,Values=MySubnet --output text
Output::
SUBNETS us-east-1a 251 10.0.1.0/24 False False available subnet-1a2b3c4d vpc-11223344
TAGS Name MySubnet
**To copy an AMI to another region**
This example copies the specified AMI from the ``us-east-1`` region to the ``ap-northeast-1`` region.
Command::
aws ec2 copy-image --source-image-id ami-5731123e --source-region us-east-1 --region ap-northeast-1 --name "My server"
Output::
{
"ImageId": "ami-438bea42"
}
**To attach a volume to an instance**
This example command attaches a volume (``vol-1234abcd``) to an instance (``i-abcd1234``) as ``/dev/sdf``.
Command::
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id i-abcd1234 --device /dev/sdf
Output::
{
"AttachTime": "YYYY-MM-DDTHH:MM:SS.000Z",
"InstanceId": "i-abcd1234",
"VolumeId": "vol-1234abcd",
"State": "attaching",
"Device": "/dev/sdf"
}
**To create a snapshot**
This example command creates a snapshot of the volume with a volume ID of ``vol-1234abcd`` and a short description to identify the snapshot.
Command::
aws ec2 create-snapshot --volume-id vol-1234abcd --description "This is my root volume snapshot."
Output::
{
"Description": "This is my root volume snapshot.",
"Tags": [],
"VolumeId": "vol-1234abcd",
"State": "pending",
"VolumeSize": 8,
"StartTime": "2014-02-28T21:06:01.000Z",
"OwnerId": "012345678910",
"SnapshotId": "snap-1a2b3c4d"
}
**To create an AMI from an Amazon EBS-backed instance**
This example creates an AMI from the specified instance.
Command::
aws ec2 create-image --instance-id i-10a64379 --name "My server" --description "An AMI for my server"
Output::
{
"ImageId": "ami-5731123e"
}
**To create an AMI using a block device mapping**
Add the following parameter to your ``create-image`` command to add an Amazon EBS volume with the device name ``/dev/sdh`` and a volume size of 100 GiB::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdh\",\"Ebs\":{\"VolumeSize\":100}}]"
Add the following parameter to your ``create-image`` command to add ``ephemeral1`` as an instance store volume with the device name ``/dev/sdc``::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdc\",\"VirtualName\":\"ephemeral1\"}]"
Add the following parameter to your ``create-image`` command to omit a device included on the instance (for example, ``/dev/sdf``)::
--block-device-mappings "[{\"DeviceName\": \"/dev/sdf\",\"NoDevice\":\"\"}]"
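**To create an AMI with multiple block device mappings**

The entries above can also be combined into a single ``--block-device-mappings`` array. This command is a sketch; the device names and volume size are illustrative values carried over from the examples above.
Command::
aws ec2 create-image --instance-id i-10a64379 --name "My server" --block-device-mappings "[{\"DeviceName\": \"/dev/sdh\",\"Ebs\":{\"VolumeSize\":100}},{\"DeviceName\": \"/dev/sdf\",\"NoDevice\":\"\"}]"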
awscli-1.10.1/awscli/examples/ec2/associate-dhcp-options.rst 0000666 4542626 0000144 00000001055 12652514124 025014 0 ustar pysdk-ci amazon 0000000 0000000 **To associate a DHCP options set with your VPC**
This example associates the specified DHCP options set with the specified VPC. If the command succeeds, no output is returned.
Command::
aws ec2 associate-dhcp-options --dhcp-options-id dopt-d9070ebb --vpc-id vpc-a01106c2
**To associate the default DHCP options set with your VPC**
This example associates the default DHCP options set with the specified VPC. If the command succeeds, no output is returned.
Command::
aws ec2 associate-dhcp-options --dhcp-options-id default --vpc-id vpc-a01106c2
awscli-1.10.1/awscli/examples/ec2/get-console-output.rst 0000666 4542626 0000144 00000000442 12652514124 024210 0 ustar pysdk-ci amazon 0000000 0000000 **To get the console output**
This example gets the console output for the specified Linux instance.
Command::
aws ec2 get-console-output --instance-id i-10a64379
Output::
{
"InstanceId": "i-10a64379",
"Timestamp": "2013-07-25T21:23:53.000Z",
"Output": "..."
}
awscli-1.10.1/awscli/examples/ec2/modify-image-attribute.rst 0000666 4542626 0000144 00000002124 12652514124 025002 0 ustar pysdk-ci amazon 0000000 0000000 **To make an AMI public**
This example makes the specified AMI public. If the command succeeds, no output is returned.
Command::
aws ec2 modify-image-attribute --image-id ami-5731123e --launch-permission "{\"Add\": [{\"Group\":\"all\"}]}"
**To make an AMI private**
This example makes the specified AMI private. If the command succeeds, no output is returned.
Command::
aws ec2 modify-image-attribute --image-id ami-5731123e --launch-permission "{\"Remove\": [{\"Group\":\"all\"}]}"
**To grant launch permission to an AWS account**
This example grants launch permissions to the specified AWS account. If the command succeeds, no output is returned.
Command::
aws ec2 modify-image-attribute --image-id ami-5731123e --launch-permission "{\"Add\": [{\"UserId\":\"123456789012\"}]}"
**To remove launch permissions from an AWS account**
This example removes launch permissions from the specified AWS account. If the command succeeds, no output is returned.
Command::
aws ec2 modify-image-attribute --image-id ami-5731123e --launch-permission "{\"Remove\": [{\"UserId\":\"123456789012\"}]}"
awscli-1.10.1/awscli/examples/ec2/describe-spot-datafeed-subscription.rst 0000666 4542626 0000144 00000000611 12652514124 027451 0 ustar pysdk-ci amazon 0000000 0000000 **To describe Spot Instance datafeed subscription for an account**
This example command describes the Spot Instance data feed subscription for the account.
Command::
aws ec2 describe-spot-datafeed-subscription
Output::
{
"SpotDatafeedSubscription": {
"OwnerId": "",
"Prefix": "spotdata",
"Bucket": "",
"State": "Active"
}
}
awscli-1.10.1/awscli/examples/ec2/allocate-hosts.rst 0000666 4542626 0000144 00000000573 12652514124 023362 0 ustar pysdk-ci amazon 0000000 0000000 **To allocate a Dedicated host to your account**
This example allocates to your account a single Dedicated Host in a specific Availability Zone, onto which you can launch m3.medium instances.
Command::
aws ec2 allocate-hosts --instance-type m3.medium --availability-zone us-east-1b --quantity 1
Output::
{
"HostIds": [
"h-029e7409a337631f"
]
}
awscli-1.10.1/awscli/examples/ec2/detach-classic-link-vpc.rst 0000666 4542626 0000144 00000000375 12652514124 025030 0 ustar pysdk-ci amazon 0000000 0000000 **To unlink (detach) an EC2-Classic instance from a VPC**
This example unlinks instance i-1a2b3c4d from VPC vpc-88888888.
Command::
aws ec2 detach-classic-link-vpc --instance-id i-1a2b3c4d --vpc-id vpc-88888888
Output::
{
"Return": true
} awscli-1.10.1/awscli/examples/ec2/delete-dhcp-options.rst 0000666 4542626 0000144 00000000321 12652514124 024276 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a DHCP options set**
This example deletes the specified DHCP options set. If the command succeeds, no output is returned.
Command::
aws ec2 delete-dhcp-options --dhcp-options-id dopt-d9070ebb
awscli-1.10.1/awscli/examples/ec2/reboot-instances.rst 0000666 4542626 0000144 00000000623 12652514124 023713 0 ustar pysdk-ci amazon 0000000 0000000 **To reboot an Amazon EC2 instance**
This example reboots the specified instance. If the command succeeds, no output is returned.
Command::
aws ec2 reboot-instances --instance-ids i-1a2b3c4d
For more information, see `Reboot Your Instance`_ in the *Amazon Elastic Compute Cloud User Guide*.
.. _`Reboot Your Instance`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html
awscli-1.10.1/awscli/examples/ec2/describe-network-interfaces.rst 0000666 4542626 0000144 00000007050 12652514124 026025 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your network interfaces**
This example describes all your network interfaces.
Command::
aws ec2 describe-network-interfaces
Output::
{
"NetworkInterfaces": [
{
"Status": "in-use",
"MacAddress": "02:2f:8f:b0:cf:75",
"SourceDestCheck": true,
"VpcId": "vpc-a01106c2",
"Description": "my network interface",
"Association": {
"PublicIp": "203.0.113.12",
"AssociationId": "eipassoc-0fbb766a",
"PublicDnsName": "ec2-203-0-113-12.compute-1.amazonaws.com",
"IpOwnerId": "123456789012"
},
"NetworkInterfaceId": "eni-e5aa89a3",
"PrivateIpAddresses": [
{
"PrivateDnsName": "ip-10-0-1-17.ec2.internal",
"Association": {
"PublicIp": "203.0.113.12",
"AssociationId": "eipassoc-0fbb766a",
"PublicDnsName": "ec2-203-0-113-12.compute-1.amazonaws.com",
"IpOwnerId": "123456789012"
},
"Primary": true,
"PrivateIpAddress": "10.0.1.17"
}
],
"RequesterManaged": false,
"PrivateDnsName": "ip-10-0-1-17.ec2.internal",
"AvailabilityZone": "us-east-1d",
"Attachment": {
"Status": "attached",
"DeviceIndex": 1,
"AttachTime": "2013-11-30T23:36:42.000Z",
"InstanceId": "i-640a3c17",
"DeleteOnTermination": false,
"AttachmentId": "eni-attach-66c4350a",
"InstanceOwnerId": "123456789012"
},
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-8637d3e3"
}
],
"SubnetId": "subnet-b61f49f0",
"OwnerId": "123456789012",
"TagSet": [],
"PrivateIpAddress": "10.0.1.17"
},
{
"Status": "in-use",
"MacAddress": "02:58:f5:ef:4b:06",
"SourceDestCheck": true,
"VpcId": "vpc-a01106c2",
"Description": "Primary network interface",
"Association": {
"PublicIp": "198.51.100.0",
"IpOwnerId": "amazon"
},
"NetworkInterfaceId": "eni-f9ba99bf",
"PrivateIpAddresses": [
{
"Association": {
"PublicIp": "198.51.100.0",
"IpOwnerId": "amazon"
},
"Primary": true,
"PrivateIpAddress": "10.0.1.149"
}
],
"RequesterManaged": false,
"AvailabilityZone": "us-east-1d",
"Attachment": {
"Status": "attached",
"DeviceIndex": 0,
"AttachTime": "2013-11-30T23:35:33.000Z",
"InstanceId": "i-640a3c17",
"DeleteOnTermination": true,
"AttachmentId": "eni-attach-1b9db777",
"InstanceOwnerId": "123456789012"
},
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-8637d3e3"
}
],
"SubnetId": "subnet-b61f49f0",
"OwnerId": "123456789012",
"TagSet": [],
"PrivateIpAddress": "10.0.1.149"
}
]
}
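To limit the output to the network interfaces in a single VPC, you can add a filter. This is a sketch; verify the ``vpc-id`` filter name against your CLI version.
Command::
aws ec2 describe-network-interfaces --filters Name=vpc-id,Values=vpc-a01106c2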
awscli-1.10.1/awscli/examples/ec2/describe-spot-price-history.rst 0000666 4542626 0000144 00000004111 12652514124 025772 0 ustar pysdk-ci amazon 0000000 0000000 **To describe Spot price history**
This example command returns the Spot price history for m1.xlarge instances for a particular day in January.
Command::
aws ec2 describe-spot-price-history --instance-types m1.xlarge --start-time 2014-01-06T07:08:09 --end-time 2014-01-06T08:09:10
Output::
{
"SpotPriceHistory": [
{
"Timestamp": "2014-01-06T07:10:55.000Z",
"ProductDescription": "SUSE Linux",
"InstanceType": "m1.xlarge",
"SpotPrice": "0.087000",
"AvailabilityZone": "us-west-1b"
},
{
"Timestamp": "2014-01-06T07:10:55.000Z",
"ProductDescription": "SUSE Linux",
"InstanceType": "m1.xlarge",
"SpotPrice": "0.087000",
"AvailabilityZone": "us-west-1c"
},
{
"Timestamp": "2014-01-06T05:42:36.000Z",
"ProductDescription": "SUSE Linux (Amazon VPC)",
"InstanceType": "m1.xlarge",
"SpotPrice": "0.087000",
"AvailabilityZone": "us-west-1a"
},
...
]
}
**To describe Spot price history for Linux/UNIX Amazon VPC**
This example command returns the Spot price history for m1.xlarge, Linux/UNIX Amazon VPC instances for a particular day in January.
Command::
aws ec2 describe-spot-price-history --instance-types m1.xlarge --product-description "Linux/UNIX (Amazon VPC)" --start-time 2014-01-06T07:08:09 --end-time 2014-01-06T08:09:10
Output::
{
"SpotPriceHistory": [
{
"Timestamp": "2014-01-06T04:32:53.000Z",
"ProductDescription": "Linux/UNIX (Amazon VPC)",
"InstanceType": "m1.xlarge",
"SpotPrice": "0.080000",
"AvailabilityZone": "us-west-1a"
},
{
"Timestamp": "2014-01-05T11:28:26.000Z",
"ProductDescription": "Linux/UNIX (Amazon VPC)",
"InstanceType": "m1.xlarge",
"SpotPrice": "0.080000",
"AvailabilityZone": "us-west-1c"
}
]
} awscli-1.10.1/awscli/examples/ec2/describe-dhcp-options.rst 0000666 4542626 0000144 00000001530 12652514124 024617 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your DHCP options sets**
This example describes your DHCP options sets.
Command::
aws ec2 describe-dhcp-options
Output::
{
"DhcpOptions": [
{
"DhcpConfigurations": [
{
"Values": [
"10.2.5.2",
"10.2.5.1"
],
"Key": "domain-name-servers"
}
],
"DhcpOptionsId": "dopt-d9070ebb"
},
{
"DhcpConfigurations": [
{
"Values": [
"AmazonProvidedDNS"
],
"Key": "domain-name-servers"
}
],
"DhcpOptionsId": "dopt-7a8b9c2d"
}
]
} awscli-1.10.1/awscli/examples/ec2/describe-reserved-instances-modifications.rst 0000666 4542626 0000144 00000003172 12652514124 030646 0 ustar pysdk-ci amazon 0000000 0000000 **To describe Reserved Instances modifications**
This example command describes all the Reserved Instances modification requests that have been submitted for your account.
Command::
aws ec2 describe-reserved-instances-modifications
Output::
{
"ReservedInstancesModifications": [
{
"Status": "fulfilled",
"ModificationResults": [
{
"ReservedInstancesId": "93bbbca2-62f1-4d9d-b225-16bada29e6c7",
"TargetConfiguration": {
"AvailabilityZone": "us-east-1b",
"InstanceType": "m1.large",
"InstanceCount": 3
}
},
{
"ReservedInstancesId": "1ba8e2e3-aabb-46c3-bcf5-3fe2fda922e6",
"TargetConfiguration": {
"AvailabilityZone": "us-east-1d",
"InstanceType": "m1.xlarge",
"InstanceCount": 1
}
}
],
"EffectiveDate": "2015-08-12T17:00:00.000Z",
"CreateDate": "2015-08-12T17:52:52.630Z",
"UpdateDate": "2015-08-12T18:08:06.698Z",
"ClientToken": "c9adb218-3222-4889-8216-0cf0e52dc37e",
"ReservedInstancesModificationId": "rimod-d3ed4335-b1d3-4de6-ab31-0f13aaf46687",
"ReservedInstancesIds": [
{
"ReservedInstancesId": "b847fa93-e282-4f55-b59a-1342f5bd7c02"
}
]
}
]
}
awscli-1.10.1/awscli/examples/ec2/describe-placement-groups.rst 0000666 4542626 0000144 00000000542 12652514124 025477 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your placement groups**
This example command describes all of your placement groups.
Command::
aws ec2 describe-placement-groups
Output::
{
"PlacementGroups": [
{
"GroupName": "my-cluster",
"State": "available",
"Strategy": "cluster"
},
...
]
}
awscli-1.10.1/awscli/examples/ec2/assign-private-ip-addresses.rst 0000666 4542626 0000144 00000001542 12652514124 025752 0 ustar pysdk-ci amazon 0000000 0000000 **To assign a specific secondary private IP address a network interface**
This example assigns the specified secondary private IP address to the specified network interface. If the command succeeds, no output is returned.
Command::
aws ec2 assign-private-ip-addresses --network-interface-id eni-e5aa89a3 --private-ip-addresses 10.0.0.82
**To assign secondary private IP addresses that Amazon EC2 selects to a network interface**
This example assigns two secondary private IP addresses to the specified network interface. Amazon EC2 automatically assigns these IP addresses from the available IP addresses in the CIDR block range of the subnet that the network interface is associated with. If the command succeeds, no output is returned.
Command::
aws ec2 assign-private-ip-addresses --network-interface-id eni-e5aa89a3 --secondary-private-ip-address-count 2
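**To reassign a secondary private IP address from another network interface**

If the address is currently assigned to another network interface, the assignment fails unless reassignment is allowed. As a sketch, assuming the ``--allow-reassignment`` option is available in your CLI version, the following moves the address to the specified interface.
Command::
aws ec2 assign-private-ip-addresses --network-interface-id eni-e5aa89a3 --private-ip-addresses 10.0.0.82 --allow-reassignment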
awscli-1.10.1/awscli/examples/ec2/create-route-table.rst 0000666 4542626 0000144 00000001061 12652514124 024115 0 ustar pysdk-ci amazon 0000000 0000000 **To create a route table**
This example creates a route table for the specified VPC.
Command::
aws ec2 create-route-table --vpc-id vpc-a01106c2
Output::
{
"RouteTable": {
"Associations": [],
"RouteTableId": "rtb-22574640",
"VpcId": "vpc-a01106c2",
"PropagatingVgws": [],
"Tags": [],
"Routes": [
{
"GatewayId": "local",
"DestinationCidrBlock": "10.0.0.0/16",
"State": "active"
}
]
}
} awscli-1.10.1/awscli/examples/ec2/describe-network-interface-attribute.rst 0000666 4542626 0000144 00000003575 12652514124 027653 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the attachment attribute of a network interface**
This example command describes the ``attachment`` attribute of the specified network interface.
Command::
aws ec2 describe-network-interface-attribute --network-interface-id eni-686ea200 --attribute attachment
Output::
{
"NetworkInterfaceId": "eni-686ea200",
"Attachment": {
"Status": "attached",
"DeviceIndex": 0,
"AttachTime": "2015-05-21T20:02:20.000Z",
"InstanceId": "i-d5652e23",
"DeleteOnTermination": true,
"AttachmentId": "eni-attach-43348162",
"InstanceOwnerId": "123456789012"
}
}
**To describe the description attribute of a network interface**
This example command describes the ``description`` attribute of the specified network interface.
Command::
aws ec2 describe-network-interface-attribute --network-interface-id eni-686ea200 --attribute description
Output::
{
"NetworkInterfaceId": "eni-686ea200",
"Description": {
"Value": "My description"
}
}
**To describe the groupSet attribute of a network interface**
This example command describes the ``groupSet`` attribute of the specified network interface.
Command::
aws ec2 describe-network-interface-attribute --network-interface-id eni-686ea200 --attribute groupSet
Output::
{
"NetworkInterfaceId": "eni-686ea200",
"Groups": [
{
"GroupName": "my-security-group",
"GroupId": "sg-903004f8"
}
]
}
**To describe the sourceDestCheck attribute of a network interface**
This example command describes the ``sourceDestCheck`` attribute of the specified network interface.
Command::
aws ec2 describe-network-interface-attribute --network-interface-id eni-686ea200 --attribute sourceDestCheck
Output::
{
"NetworkInterfaceId": "eni-686ea200",
"SourceDestCheck": {
"Value": true
}
}
awscli-1.10.1/awscli/examples/ec2/create-vpc.rst 0000666 4542626 0000144 00000001456 12652514124 022472 0 ustar pysdk-ci amazon 0000000 0000000 **To create a VPC**
This example creates a VPC with the specified CIDR block.
Command::
aws ec2 create-vpc --cidr-block 10.0.0.0/16
Output::
{
"Vpc": {
"InstanceTenancy": "default",
"State": "pending",
"VpcId": "vpc-a01106c2",
"CidrBlock": "10.0.0.0/16",
"DhcpOptionsId": "dopt-7a8b9c2d"
}
}
**To create a VPC with dedicated tenancy**
This example creates a VPC with the specified CIDR block and ``dedicated`` tenancy.
Command::
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --instance-tenancy dedicated
Output::
{
"Vpc": {
"InstanceTenancy": "dedicated",
"State": "pending",
"VpcId": "vpc-a01106c2",
"CidrBlock": "10.0.0.0/16",
"DhcpOptionsId": "dopt-7a8b9c2d"
}
} awscli-1.10.1/awscli/examples/ec2/describe-vpc-classic-link.rst 0000666 4542626 0000144 00000001223 12652514124 025351 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the ClassicLink status of your VPCs**
This example lists the ClassicLink status of vpc-88888888.
Command::
aws ec2 describe-vpc-classic-link --vpc-id vpc-88888888
Output::
{
"Vpcs": [
{
"ClassicLinkEnabled": true,
"VpcId": "vpc-88888888",
"Tags": [
{
"Value": "classiclinkvpc",
"Key": "Name"
}
]
}
]
}
This example lists only VPCs that are enabled for ClassicLink (the filter value of ``is-classic-link-enabled`` is set to ``true``).
Command::
aws ec2 describe-vpc-classic-link --filter "Name=is-classic-link-enabled,Values=true"
awscli-1.10.1/awscli/examples/ec2/delete-tags.rst 0000666 4542626 0000144 00000002257 12652514124 022637 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a tag from a resource**
This example deletes the tag ``Stack=Test`` from the specified image. If the command succeeds, no output is returned.
Command::
aws ec2 delete-tags --resources ami-78a54011 --tags Key=Stack,Value=Test
Specifying the tag value is optional. If you specify a value for the key, the tag is deleted only if its value matches the one you specified. If you specify the empty string as the value, the tag is deleted only if its value is the empty string. The following example specifies the empty string as the value for the tag to delete.
Command::
aws ec2 delete-tags --resources i-12345678 --tags Key=Name,Value=
This example deletes the tag with the ``purpose`` key from the specified instance, regardless of the tag's value.
Command::
aws ec2 delete-tags --resources i-12345678 --tags Key=purpose
**To delete a tag from multiple resources**
This example deletes the ``Purpose=Test`` tag from a specified instance and AMI. The tag's value can be omitted from the command. If the command succeeds, no output is returned.
Command::
aws ec2 delete-tags --resources i-12345678 ami-78a54011 --tags Key=Purpose
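You can also remove several tags in one call by listing multiple ``Key`` entries. The tag keys in this sketch are illustrative. If the command succeeds, no output is returned.
Command::
aws ec2 delete-tags --resources i-12345678 --tags Key=Stack Key=purpose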
awscli-1.10.1/awscli/examples/ec2/restore-address-to-classic.rst 0000666 4542626 0000144 00000000432 12652514124 025577 0 ustar pysdk-ci amazon 0000000 0000000 **To restore an address to EC2-Classic**
This example restores Elastic IP address 198.51.100.0 to the EC2-Classic platform.
Command::
aws ec2 restore-address-to-classic --public-ip 198.51.100.0
Output::
{
"Status": "MoveInProgress",
"PublicIp": "198.51.100.0"
}
awscli-1.10.1/awscli/examples/ec2/describe-export-tasks.rst 0000666 4542626 0000144 00000001436 12652514124 024661 0 ustar pysdk-ci amazon 0000000 0000000 **To list details about an instance export task**
This example describes the export task with ID export-i-fh8sjjsq.
Command::
aws ec2 describe-export-tasks --export-task-ids export-i-fh8sjjsq
Output::
{
"ExportTasks": [
{
"State": "active",
"InstanceExportDetails": {
"InstanceId": "i-38e485d8",
"TargetEnvironment": "vmware"
},
"ExportToS3Task": {
"S3Bucket": "myexportbucket",
"S3Key": "RHEL5export-i-fh8sjjsq.ova",
"DiskImageFormat": "vmdk",
"ContainerFormat": "ova"
},
"Description": "RHEL5 instance",
"ExportTaskId": "export-i-fh8sjjsq"
}
]
}
awscli-1.10.1/awscli/examples/ec2/create-vpc-endpoint.rst 0000666 4542626 0000144 00000001411 12652514124 024277 0 ustar pysdk-ci amazon 0000000 0000000 **To create an endpoint**
This example creates a VPC endpoint between VPC vpc-1a2b3c4d and Amazon S3 in the us-east-1 region, and associates route table rtb-11aa22bb with the endpoint.
Command::
aws ec2 create-vpc-endpoint --vpc-id vpc-1a2b3c4d --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-11aa22bb
Output::
{
"VpcEndpoint": {
"PolicyDocument": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"*\",\"Resource\":\"*\"}]}",
"VpcId": "vpc-1a2b3c4d",
"State": "available",
"ServiceName": "com.amazonaws.us-east-1.s3",
"RouteTableIds": [
"rtb-11aa22bb"
],
"VpcEndpointId": "vpce-3ecf2a57",
"CreationTimestamp": "2015-05-15T09:40:50Z"
}
} awscli-1.10.1/awscli/examples/ec2/describe-images.rst 0000666 4542626 0000144 00000003567 12652514124 023471 0 ustar pysdk-ci amazon 0000000 0000000 **To describe a specific AMI**
This example describes the specified AMI.
Command::
aws ec2 describe-images --image-ids ami-5731123e
Output::
{
"Images": [
{
"VirtualizationType": "paravirtual",
"Name": "My server",
"Hypervisor": "xen",
"ImageId": "ami-5731123e",
"RootDeviceType": "ebs",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "snap-ca7b3bd1",
"VolumeSize": 8,
"VolumeType": "standard"
}
}
],
"Architecture": "x86_64",
"ImageLocation": "123456789012/My server",
"KernelId": "aki-88aa75e1",
"OwnerId": "123456789012",
"RootDeviceName": "/dev/sda1",
"Public": false,
"ImageType": "machine",
"Description": "An AMI for my server"
}
]
}
**To describe Windows AMIs from Amazon that are backed by Amazon EBS**
This example describes Windows AMIs provided by Amazon that are backed by Amazon EBS.
Command::
aws ec2 describe-images --owners amazon --filters "Name=platform,Values=windows" "Name=root-device-type,Values=ebs"
**To describe tagged AMIs**
This example describes all AMIs that have the tag ``Custom=Linux1`` or ``Custom=Ubuntu1``. The output is filtered to display only the AMI IDs.
Command::
aws ec2 describe-images --filters Name=tag-key,Values=Custom Name=tag-value,Values=Linux1,Ubuntu1 --query 'Images[*].{ID:ImageId}'
Output::
[
{
"ID": "ami-1a2b3c4d"
},
{
"ID": "ami-ab12cd34"
}
]
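For scripting, the matching IDs can be returned as plain text rather than JSON by adding ``--output text`` to the same query (a sketch of the command above)::
aws ec2 describe-images --filters Name=tag-key,Values=Custom Name=tag-value,Values=Linux1,Ubuntu1 --query 'Images[*].ImageId' --output text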
awscli-1.10.1/awscli/examples/ec2/create-customer-gateway.rst 0000666 4542626 0000144 00000000707 12652514124 025200 0 ustar pysdk-ci amazon 0000000 0000000 **To create a customer gateway**
This example creates a customer gateway with the specified IP address for its outside interface.
Command::
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 12.1.2.3 --bgp-asn 65534
Output::
{
"CustomerGateway": {
"CustomerGatewayId": "cgw-0e11f167",
"IpAddress": "12.1.2.3",
"State": "available",
"Type": "ipsec.1",
"BgpAsn": "65534"
}
} awscli-1.10.1/awscli/examples/ec2/describe-snapshots.rst 0000666 4542626 0000144 00000003373 12652514124 024241 0 ustar pysdk-ci amazon 0000000 0000000 **To describe a snapshot**
This example command describes a snapshot with the snapshot ID of ``snap-1234abcd``.
Command::
aws ec2 describe-snapshots --snapshot-ids snap-1234abcd
Output::
{
"Snapshots": [
{
"Description": "This is my snapshot.",
"VolumeId": "vol-a1b2c3d4",
"State": "completed",
"VolumeSize": 8,
"Progress": "100%",
"StartTime": "2014-02-28T21:28:32.000Z",
"SnapshotId": "snap-1234abcd",
"OwnerId": "012345678910"
}
]
}
**To describe snapshots using filters**
This example command describes all snapshots owned by the ID 012345678910 that are in the ``pending`` status.
Command::
aws ec2 describe-snapshots --owner-ids 012345678910 --filters Name=status,Values=pending
Output::
{
"Snapshots": [
{
"Description": "This is my copied snapshot.",
"VolumeId": "vol-4d3c2b1a",
"State": "pending",
"VolumeSize": 8,
"Progress": "87%",
"StartTime": "2014-02-28T21:37:27.000Z",
"SnapshotId": "snap-d4e5f6g7",
"OwnerId": "012345678910"
}
]
}
**To describe tagged snapshots and filter the output**
This example command describes all snapshots that have the tag ``Group=Prod``. The output is filtered to display only the snapshot IDs and the time the snapshot was started.
Command::
aws ec2 describe-snapshots --filters Name=tag-key,Values="Group" Name=tag-value,Values="Prod" --query 'Snapshots[*].{ID:SnapshotId,Time:StartTime}'
Output::
[
{
"ID": "snap-12345abc",
"Time": "2014-08-04T12:48:18.000Z"
}
] awscli-1.10.1/awscli/examples/ec2/delete-vpc-peering-connection.rst 0000666 4542626 0000144 00000000350 12652514124 026245 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a VPC peering connection**
This example deletes the specified VPC peering connection.
Command::
aws ec2 delete-vpc-peering-connection --vpc-peering-connection-id pcx-1a2b3c4d
Output::
{
"Return": true
}
awscli-1.10.1/awscli/examples/ec2/enable-vgw-route-propagation.rst 0000666 4542626 0000144 00000000460 12652514124 026137 0 ustar pysdk-ci amazon 0000000 0000000 **To enable route propagation**
This example enables the specified virtual private gateway to propagate static routes to the specified route table. If the command succeeds, no output is returned.
Command::
aws ec2 enable-vgw-route-propagation --route-table-id rtb-22574640 --gateway-id vgw-9a4cacf3
awscli-1.10.1/awscli/examples/ec2/delete-customer-gateway.rst 0000666 4542626 0000144 00000000330 12652514124 025167 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a customer gateway**
This example deletes the specified customer gateway. If the command succeeds, no output is returned.
Command::
aws ec2 delete-customer-gateway --customer-gateway-id cgw-0e11f167
awscli-1.10.1/awscli/examples/ec2/describe-snapshot-attribute.rst 0000666 4542626 0000144 00000000700 12652514124 026046 0 ustar pysdk-ci amazon 0000000 0000000 **To describe snapshot attributes**
This example command describes the ``createVolumePermission`` and ``productCodes`` attributes on a snapshot with the snapshot ID of ``snap-1234abcd``.
Command::
aws ec2 describe-snapshot-attribute --snapshot-id snap-1234abcd --attribute createVolumePermission --attribute productCodes
Output::
{
"SnapshotId": "snap-1234abcd",
"CreateVolumePermissions": [],
"ProductCodes": []
} awscli-1.10.1/awscli/examples/ec2/describe-customer-gateways.rst 0000666 4542626 0000144 00000002102 12652514124 025667 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your customer gateways**
This example describes your customer gateways.
Command::
aws ec2 describe-customer-gateways
Output::
{
"CustomerGateways": [
{
"CustomerGatewayId": "cgw-b4dc3961",
"IpAddress": "203.0.113.12",
"State": "available",
"Type": "ipsec.1",
"BgpAsn": "65000"
},
{
"CustomerGatewayId": "cgw-0e11f167",
"IpAddress": "12.1.2.3",
"State": "available",
"Type": "ipsec.1",
"BgpAsn": "65534"
}
]
}
**To describe a specific customer gateway**
This example describes the specified customer gateway.
Command::
aws ec2 describe-customer-gateways --customer-gateway-ids cgw-0e11f167
Output::
{
"CustomerGateways": [
{
"CustomerGatewayId": "cgw-0e11f167",
"IpAddress": "12.1.2.3",
"State": "available",
"Type": "ipsec.1",
"BgpAsn": "65534"
}
]
} awscli-1.10.1/awscli/examples/ec2/describe-bundle-tasks.rst 0000666 4542626 0000144 00000001053 12652514124 024604 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your bundle tasks**
This example describes all of your bundle tasks.
Command::
aws ec2 describe-bundle-tasks
Output::
{
"BundleTasks": [
{
"UpdateTime": "2015-09-15T13:26:54.000Z",
"InstanceId": "i-1a2b3c4d",
"Storage": {
"S3": {
"Prefix": "winami",
"Bucket": "bundletasks"
}
},
"State": "bundling",
"StartTime": "2015-09-15T13:24:35.000Z",
"Progress": "3%",
"BundleId": "bun-2a4e041c"
}
]
} awscli-1.10.1/awscli/examples/ec2/describe-flow-logs.rst 0000666 4542626 0000144 00000001300 12652514124 024114 0 ustar pysdk-ci amazon 0000000 0000000 **To describe flow logs**
This example describes all of your flow logs.
Command::
aws ec2 describe-flow-logs
Output::
{
"FlowLogs": [
{
"ResourceId": "eni-11aa22bb",
"CreationTime": "2015-06-12T14:41:15Z",
"LogGroupName": "MyFlowLogs",
"TrafficType": "ALL",
"FlowLogStatus": "ACTIVE",
"FlowLogId": "fl-1a2b3c4d",
"DeliverLogsPermissionArn": "arn:aws:iam::123456789101:role/flow-logs-role"
}
]
}
This example uses a filter to describe only flow logs that are in the log group ``MyFlowLogs`` in Amazon CloudWatch Logs.
Command::
aws ec2 describe-flow-logs --filter "Name=log-group-name,Values=MyFlowLogs" awscli-1.10.1/awscli/examples/ec2/reset-instance-attribute.rst 0000666 4542626 0000144 00000001577 12652514124 025372 0 ustar pysdk-ci amazon 0000000 0000000 **To reset the sourceDestCheck attribute**
This example resets the ``sourceDestCheck`` attribute of the specified instance. The instance must be in a VPC. If the command succeeds, no output is returned.
Command::
aws ec2 reset-instance-attribute --instance-id i-5203422c --attribute sourceDestCheck
**To reset the kernel attribute**
This example resets the ``kernel`` attribute of the specified instance. The instance must be in the ``stopped`` state. If the command succeeds, no output is returned.
Command::
aws ec2 reset-instance-attribute --instance-id i-5203422c --attribute kernel
**To reset the ramdisk attribute**
This example resets the ``ramdisk`` attribute of the specified instance. The instance must be in the ``stopped`` state. If the command succeeds, no output is returned.
Command::
aws ec2 reset-instance-attribute --instance-id i-5203422c --attribute ramdisk
awscli-1.10.1/awscli/examples/ec2/delete-network-acl.rst 0000666 4542626 0000144 00000000304 12652514124 024116 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a network ACL**
This example deletes the specified network ACL. If the command succeeds, no output is returned.
Command::
aws ec2 delete-network-acl --network-acl-id acl-5fb85d36
awscli-1.10.1/awscli/examples/ec2/authorize-security-group-ingress.rst 0000666 4542626 0000144 00000005361 12652514124 027121 0 ustar pysdk-ci amazon 0000000 0000000 **[EC2-Classic] To add a rule that allows inbound SSH traffic**
This example enables inbound traffic on TCP port 22 (SSH). If the command succeeds, no output is returned.
Command::
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 22 --cidr 203.0.113.0/24
**[EC2-Classic] To add a rule that allows inbound HTTP traffic from a security group in another account**
This example enables inbound traffic on TCP port 80 from a source security group (otheraccountgroup) in a different AWS account (123456789012). If the command succeeds, no output is returned.
Command::
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 80 --source-group otheraccountgroup --group-owner 123456789012
**[EC2-Classic] To add a rule that allows inbound HTTPS traffic from an ELB**
This example enables inbound traffic on TCP port 443 from an ELB. If the command succeeds, no output is returned.
Command::
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 443 --source-group amazon-elb-sg --group-owner amazon-elb
**[EC2-VPC] To add a rule that allows inbound SSH traffic**
This example enables inbound traffic on TCP port 22 (SSH). Note that you can't reference a security group for EC2-VPC by name. If the command succeeds, no output is returned.
Command::
aws ec2 authorize-security-group-ingress --group-id sg-903004f8 --protocol tcp --port 22 --cidr 203.0.113.0/24
**[EC2-VPC] To add a rule that allows inbound HTTP traffic from another security group**
This example enables inbound access on TCP port 80 from the source security group sg-1a2b3c4d. Note that for EC2-VPC, the source group must be in the same VPC. If the command succeeds, no output is returned.
Command::
aws ec2 authorize-security-group-ingress --group-id sg-111aaa22 --protocol tcp --port 80 --source-group sg-1a2b3c4d
**[EC2-VPC] To add a custom ICMP rule**
This example uses the ``ip-permissions`` parameter to add an inbound rule that allows the ICMP message ``Destination Unreachable: Fragmentation Needed and Don't Fragment was Set`` (Type 3, Code 4) from anywhere. If the command succeeds, no output is returned. For more information about quoting JSON-formatted parameters, see `Quoting Strings`_.
Command::
aws ec2 authorize-security-group-ingress --group-id sg-123abc12 --ip-permissions '[{"IpProtocol": "icmp", "FromPort": 3, "ToPort": 4, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]'
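To avoid shell quoting issues, the JSON for ``--ip-permissions`` can instead be loaded from a file with the CLI's ``file://`` syntax. This is a sketch; ``perms.json`` is a hypothetical file containing the JSON array shown above.
Command::
aws ec2 authorize-security-group-ingress --group-id sg-123abc12 --ip-permissions file://perms.json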
For more information, see `Using Security Groups`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Security Groups`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-sg.html
.. _`Quoting Strings`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#quoting-strings
**To bundle an instance**
This example bundles instance ``i-1a2b3c4d`` to a bucket called ``bundletasks``. Before you specify values for your access key IDs, review and follow the guidance in `Best Practices for Managing AWS Access Keys`_.
Command::
aws ec2 bundle-instance --instance-id i-1a2b3c4d --bucket bundletasks --prefix winami --owner-akid AK12AJEXAMPLE --owner-sak example123example
Output::
{
"BundleTask": {
"UpdateTime": "2015-09-15T13:30:35.000Z",
"InstanceId": "i-1a2b3c4d",
"Storage": {
"S3": {
"Prefix": "winami",
"Bucket": "bundletasks"
}
},
"State": "pending",
"StartTime": "2015-09-15T13:30:35.000Z",
"BundleId": "bun-294e041f"
}
}
.. _`Best Practices for Managing AWS Access Keys`: http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
**To reset the launchPermission attribute**
This example resets the ``launchPermission`` attribute for the specified AMI to its default value. By default, AMIs are private. If the command succeeds, no output is returned.
Command::
aws ec2 reset-image-attribute --image-id ami-5731123e --attribute launchPermission
**To move an address to EC2-VPC**
This example moves Elastic IP address 54.123.4.56 to the EC2-VPC platform.
Command::
aws ec2 move-address-to-vpc --public-ip 54.123.4.56
Output::
{
"Status": "MoveInProgress"
}
**To delete a subnet**
This example deletes the specified subnet. If the command succeeds, no output is returned.
Command::
aws ec2 delete-subnet --subnet-id subnet-9d4a7b6c
**To describe a volume attribute**
This example command describes the ``autoEnableIO`` attribute of the volume with the ID ``vol-2725bc51``.
Command::
aws ec2 describe-volume-attribute --volume-id vol-2725bc51 --attribute autoEnableIO
Output::
{
"AutoEnableIO": {
"Value": false
},
"ProductCodes": [],
"VolumeId": "vol-2725bc51"
}
**To replace the route table associated with a subnet**
This example associates the specified route table with the subnet for the specified route table association.
Command::
aws ec2 replace-route-table-association --association-id rtbassoc-781d0d1a --route-table-id rtb-22574640
Output::
{
"NewAssociationId": "rtbassoc-3a1f0f58"
}
**To describe your VPC peering connections**
This example describes all of your VPC peering connections.
Command::
aws ec2 describe-vpc-peering-connections
Output::
{
"VpcPeeringConnections": [
{
"Status": {
"Message": "Active",
"Code": "active"
},
"Tags": [
{
"Value": "Peering-1",
"Key": "Name"
}
],
"AccepterVpcInfo": {
"OwnerId": "111122223333",
"VpcId": "vpc-1a2b3c4d",
"CidrBlock": "10.0.1.0/28"
},
"VpcPeeringConnectionId": "pcx-11122233",
"RequesterVpcInfo": {
"OwnerId": "444455556666",
"VpcId": "vpc-123abc45",
"CidrBlock": "10.0.0.0/28"
}
},
{
"Status": {
"Message": "Pending Acceptance by 123456789123",
"Code": "pending-acceptance"
},
"Tags": [
{
"Value": null,
"Key": "Name"
}
],
"RequesterVpcInfo": {
"OwnerId": "123456789123",
"VpcId": "vpc-11aa22bb",
"CidrBlock": "10.0.0.0/28"
},
"VpcPeeringConnectionId": "pcx-abababab",
"ExpirationTime": "2014-04-03T09:12:43.000Z",
"AccepterVpcInfo": {
"OwnerId": "123456789123",
"VpcId": "vpc-33cc44dd"
}
}
]
}
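If you have already captured this output, the same status check can be applied client-side. This is a minimal sketch (not part of the CLI itself), using abridged sample data shaped like the output above:

```python
# Client-side version of a status filter on captured
# describe-vpc-peering-connections output (abridged sample data).
output = {
    "VpcPeeringConnections": [
        {"VpcPeeringConnectionId": "pcx-11122233",
         "Status": {"Code": "active"}},
        {"VpcPeeringConnectionId": "pcx-abababab",
         "Status": {"Code": "pending-acceptance"}},
    ]
}

# Keep only the connections that are still waiting to be accepted.
pending = [c["VpcPeeringConnectionId"]
           for c in output["VpcPeeringConnections"]
           if c["Status"]["Code"] == "pending-acceptance"]
print(pending)
```

The ``--filters`` option shown below does the same thing server-side, which avoids transferring connections you do not need.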
**To describe specific VPC peering connections**
This example describes all of your VPC peering connections that are in the pending-acceptance state.
Command::
aws ec2 describe-vpc-peering-connections --filters Name=status-code,Values=pending-acceptance
This example describes all of your VPC peering connections that have the tag Name=Finance or Name=Accounts.
Command::
aws ec2 describe-vpc-peering-connections --filters Name=tag-key,Values=Name Name=tag-value,Values=Finance,Accounts
This example describes all of the VPC peering connections you requested for the specified VPC, vpc-1a2b3c4d.
Command::
aws ec2 describe-vpc-peering-connections --filters Name=requester-vpc-info.vpc-id,Values=vpc-1a2b3c4d
**To detach a volume from an instance**
This example command detaches the volume (``vol-1234abcd``) from the instance it is attached to.
Command::
aws ec2 detach-volume --volume-id vol-1234abcd
Output::
{
"AttachTime": "2014-02-27T19:23:06.000Z",
"InstanceId": "i-0751440e",
"VolumeId": "vol-1234abcd",
"State": "detaching",
"Device": "/dev/sdb"
}
**To describe your route tables**
This example describes your route tables.
Command::
aws ec2 describe-route-tables
Output::
{
"RouteTables": [
{
"Associations": [
{
"RouteTableAssociationId": "rtbassoc-d8ccddba",
"Main": true,
"RouteTableId": "rtb-1f382e7d"
}
],
"RouteTableId": "rtb-1f382e7d",
"VpcId": "vpc-a01106c2",
"PropagatingVgws": [],
"Tags": [],
"Routes": [
{
"GatewayId": "local",
"DestinationCidrBlock": "10.0.0.0/16",
"State": "active"
}
]
},
{
"Associations": [
{
"SubnetId": "subnet-b61f49f0",
"RouteTableAssociationId": "rtbassoc-781d0d1a",
"RouteTableId": "rtb-22574640"
}
],
"RouteTableId": "rtb-22574640",
"VpcId": "vpc-a01106c2",
"PropagatingVgws": [
{
"GatewayId": "vgw-f211f09b"
}
],
"Tags": [],
"Routes": [
{
"GatewayId": "local",
"DestinationCidrBlock": "10.0.0.0/16",
"State": "active"
},
{
"GatewayId": "igw-046d7966",
"DestinationCidrBlock": "0.0.0.0/0",
"State": "active"
}
]
}
]
}
**To replace a network ACL entry**
This example replaces an entry for the specified network ACL. The new rule 100 allows ingress traffic from 203.0.113.12/24 on UDP port 53 (DNS) into any associated subnet.
Command::
aws ec2 replace-network-acl-entry --network-acl-id acl-5fb85d36 --ingress --rule-number 100 --protocol udp --port-range From=53,To=53 --cidr-block 203.0.113.12/24 --rule-action allow
**To describe linked EC2-Classic instances**
This example lists all of your linked EC2-Classic instances.
Command::
aws ec2 describe-classic-link-instances
Output::
{
"Instances": [
{
"InstanceId": "i-1a2b3c4d",
"VpcId": "vpc-88888888",
"Groups": [
{
"GroupId": "sg-11122233"
}
],
"Tags": [
{
"Value": "ClassicInstance",
"Key": "Name"
}
]
},
{
"InstanceId": "i-ab12cd34",
"VpcId": "vpc-12312312",
"Groups": [
{
"GroupId": "sg-aabbccdd"
}
],
"Tags": [
{
"Value": "ClassicInstance2",
"Key": "Name"
}
]
}
]
}
This example lists all of your linked EC2-Classic instances, and filters the response to include only instances that are linked to VPC vpc-88888888.
Command::
aws ec2 describe-classic-link-instances --filter "Name=vpc-id,Values=vpc-88888888"
Output::
{
"Instances": [
{
"InstanceId": "i-1a2b3c4d",
"VpcId": "vpc-88888888",
"Groups": [
{
"GroupId": "sg-11122233"
}
],
"Tags": [
{
"Value": "ClassicInstance",
"Key": "Name"
}
]
}
]
}
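The server-side ``--filter`` above can also be reproduced client-side on output you have already captured. This is a sketch with abridged sample data mirroring the first, unfiltered output:

```python
# Client-side equivalent of --filter "Name=vpc-id,Values=vpc-88888888":
# keep only the linked instances in the given VPC (abridged sample data).
output = {
    "Instances": [
        {"InstanceId": "i-1a2b3c4d", "VpcId": "vpc-88888888"},
        {"InstanceId": "i-ab12cd34", "VpcId": "vpc-12312312"},
    ]
}

linked = [i["InstanceId"] for i in output["Instances"]
          if i["VpcId"] == "vpc-88888888"]
print(linked)
```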
**To disassociate an Elastic IP address in EC2-Classic**
This example disassociates an Elastic IP address from an instance in EC2-Classic. If the command succeeds, no output is returned.
Command::
aws ec2 disassociate-address --public-ip 198.51.100.0
**To disassociate an Elastic IP address in EC2-VPC**
This example disassociates an Elastic IP address from an instance in a VPC. If the command succeeds, no output is returned.
Command::
aws ec2 disassociate-address --association-id eipassoc-2bebb745
**To delete a volume**
This example command deletes an available volume with the volume ID of ``vol-1234abcd``. If the command succeeds, no output is returned.
Command::
aws ec2 delete-volume --volume-id vol-1234abcd
**To describe your Reserved Instances**
This example command describes the Reserved Instances that you own.
Command::
aws ec2 describe-reserved-instances
Output::
{
"ReservedInstances": [
{
"ReservedInstancesId": "b847fa93-e282-4f55-b59a-1342fexample",
"OfferingType": "No Upfront",
"AvailabilityZone": "us-west-1c",
"End": "2016-08-14T21:34:34.000Z",
"ProductDescription": "Linux/UNIX",
"UsagePrice": 0.00,
"RecurringCharges": [
{
"Amount": 0.104,
"Frequency": "Hourly"
}
],
"Start": "2015-08-15T21:34:35.086Z",
"State": "active",
"FixedPrice": 0.0,
"CurrencyCode": "USD",
"Duration": 31536000,
"InstanceTenancy": "default",
"InstanceType": "m3.medium",
"InstanceCount": 2
},
...
]
}
**To describe your Reserved Instances using filters**
This example filters the response to include only three-year, t2.micro Linux/UNIX Reserved Instances in us-east-1e.
Command::
aws ec2 describe-reserved-instances --filters Name=duration,Values=94608000 Name=instance-type,Values=t2.micro Name=product-description,Values=Linux/UNIX Name=availability-zone,Values=us-east-1e
Output::
{
"ReservedInstances": [
{
"ReservedInstancesId": "f127bd27-edb7-44c9-a0eb-0d7e09259af0",
"OfferingType": "All Upfront",
"AvailabilityZone": "us-east-1e",
"End": "2018-03-26T21:34:34.000Z",
"ProductDescription": "Linux/UNIX",
"UsagePrice": 0.00,
"RecurringCharges": [],
"Start": "2015-03-27T21:34:35.848Z",
"State": "active",
"FixedPrice": 151.0,
"CurrencyCode": "USD",
"Duration": 94608000,
"InstanceTenancy": "default",
"InstanceType": "t2.micro",
"InstanceCount": 1
}
]
}
For more information, see `Using Amazon EC2 Instances`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Amazon EC2 Instances`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html
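A common follow-up is totaling the hourly recurring charge of your active reservations from this output. This is a sketch (not part of the CLI) using abridged sample data shaped like the output above:

```python
# Sum hourly recurring charges across active reservations, weighted by
# InstanceCount (abridged sample data shaped like the output above).
reservations = [
    {"State": "active", "InstanceCount": 2,
     "RecurringCharges": [{"Amount": 0.104, "Frequency": "Hourly"}]},
    {"State": "retired", "InstanceCount": 1,
     "RecurringCharges": [{"Amount": 0.05, "Frequency": "Hourly"}]},
]

hourly_total = sum(
    c["Amount"] * r["InstanceCount"]
    for r in reservations if r["State"] == "active"
    for c in r["RecurringCharges"] if c["Frequency"] == "Hourly"
)
print(round(hourly_total, 3))
```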
**To describe Spot Instance requests**
This example describes all of your Spot Instance requests.
Command::
aws ec2 describe-spot-instance-requests
Output::
{
"SpotInstanceRequests": [
{
"Status": {
"UpdateTime": "2014-04-30T18:16:21.000Z",
"Code": "fulfilled",
"Message": "Your Spot request is fulfilled."
},
"ProductDescription": "Linux/UNIX",
"InstanceId": "i-20170a7c",
"SpotInstanceRequestId": "sir-08b93456",
"State": "active",
"LaunchedAvailabilityZone": "us-west-1b",
"LaunchSpecification": {
"ImageId": "ami-7aba833f",
"KeyName": "May14Key",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"VolumeType": "standard",
"VolumeSize": 8
}
}
],
"EbsOptimized": false,
"SecurityGroups": [
{
"GroupName": "launch-wizard-1",
"GroupId": "sg-e38f24a7"
}
],
"InstanceType": "m1.small"
},
"Type": "one-time",
"CreateTime": "2014-04-30T18:14:55.000Z",
"SpotPrice": "0.010000"
},
{
"Status": {
"UpdateTime": "2014-04-30T18:16:21.000Z",
"Code": "fulfilled",
"Message": "Your Spot request is fulfilled."
},
"ProductDescription": "Linux/UNIX",
"InstanceId": "i-894f53d5",
"SpotInstanceRequestId": "sir-285b1e56",
"State": "active",
"LaunchedAvailabilityZone": "us-west-1b",
"LaunchSpecification": {
"ImageId": "ami-7aba833f",
"KeyName": "May14Key",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"VolumeType": "standard",
"VolumeSize": 8
}
}
],
"EbsOptimized": false,
"SecurityGroups": [
{
"GroupName": "launch-wizard-1",
"GroupId": "sg-e38f24a7"
}
],
"InstanceType": "m1.small"
},
"Type": "one-time",
"CreateTime": "2014-04-30T18:14:55.000Z",
"SpotPrice": "0.010000"
}
]
}
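For a fulfilled request, the ``InstanceId`` field identifies the launched instance. This sketch collects those IDs from output shaped like the above (abridged sample data):

```python
# Collect the instance IDs behind fulfilled Spot requests
# (abridged sample data shaped like the output above).
requests = [
    {"SpotInstanceRequestId": "sir-08b93456", "InstanceId": "i-20170a7c",
     "Status": {"Code": "fulfilled"}},
    {"SpotInstanceRequestId": "sir-285b1e56", "InstanceId": "i-894f53d5",
     "Status": {"Code": "fulfilled"}},
]

fulfilled = [r["InstanceId"] for r in requests
             if r["Status"]["Code"] == "fulfilled"]
print(fulfilled)
```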
**To enable ClassicLink DNS support for a VPC**
This example enables ClassicLink DNS support for ``vpc-88888888``.
Command::
aws ec2 enable-vpc-classic-link-dns-support --vpc-id vpc-88888888
Output::
{
"Return": true
}
**To cancel a Spot Instance data feed subscription**
This example command deletes a Spot data feed subscription for the account. If the command succeeds, no output is returned.
Command::
aws ec2 delete-spot-datafeed-subscription
**To allocate an Elastic IP address for EC2-Classic**
This example allocates an Elastic IP address to use with an instance in EC2-Classic.
Command::
aws ec2 allocate-address
Output::
{
"PublicIp": "198.51.100.0",
"Domain": "standard"
}
**To allocate an Elastic IP address for EC2-VPC**
This example allocates an Elastic IP address to use with an instance in a VPC.
Command::
aws ec2 allocate-address --domain vpc
Output::
{
"PublicIp": "203.0.113.0",
"Domain": "vpc",
"AllocationId": "eipalloc-64d5890a"
}
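The EC2-VPC form of the output carries an ``AllocationId``, which the EC2-Classic form lacks, so a script can pick the right arguments for a later ``release-address`` call from the output alone. A sketch (the helper name is illustrative, not part of the CLI):

```python
# Sample outputs shaped like the two allocate-address responses above.
classic = {"PublicIp": "198.51.100.0", "Domain": "standard"}
vpc = {"PublicIp": "203.0.113.0", "Domain": "vpc",
       "AllocationId": "eipalloc-64d5890a"}

def release_args(address):
    # EC2-VPC addresses are released by allocation ID,
    # EC2-Classic addresses by public IP.
    if "AllocationId" in address:
        return ["--allocation-id", address["AllocationId"]]
    return ["--public-ip", address["PublicIp"]]

print(release_args(vpc))
```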
**To describe the status of a single volume**
This example command describes the status for the volume ``vol-2725bc51``.
Command::
aws ec2 describe-volume-status --volume-ids vol-2725bc51
Output::
{
"VolumeStatuses": [
{
"VolumeStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "io-enabled"
},
{
"Status": "not-applicable",
"Name": "io-performance"
}
]
},
"AvailabilityZone": "us-east-1a",
"VolumeId": "vol-2725bc51",
"Actions": [],
"Events": []
}
]
}
**To describe the status of impaired volumes**
This example command describes the status for all volumes that are impaired. In this example output, there are no impaired volumes.
Command::
aws ec2 describe-volume-status --filters Name=volume-status.status,Values=impaired
Output::
{
"VolumeStatuses": []
}
If you have a volume with a failed status check (status is impaired), see `Working with an Impaired Volume`_ in the *Amazon EC2 User Guide*.
.. _`Working with an Impaired Volume`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html#work_volumes_impaired
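Monitoring scripts typically branch on whether any volume reports an impaired status. A minimal sketch, with sample data shaped like the first output above:

```python
# Flag volumes whose overall VolumeStatus is "impaired"
# (sample data shaped like the first output above).
response = {
    "VolumeStatuses": [
        {"VolumeId": "vol-2725bc51",
         "VolumeStatus": {
             "Status": "ok",
             "Details": [{"Name": "io-enabled", "Status": "passed"}]}},
    ]
}

impaired = [v["VolumeId"] for v in response["VolumeStatuses"]
            if v["VolumeStatus"]["Status"] == "impaired"]
print(impaired)
```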
**To create an Internet gateway**
This example creates an Internet gateway.
Command::
aws ec2 create-internet-gateway
Output::
{
"InternetGateway": {
"Tags": [],
"InternetGatewayId": "igw-c0a643a9",
"Attachments": []
}
}
**To modify Reserved Instances**
This example command moves a Reserved Instance to another Availability Zone in the same region.
Command::
aws ec2 modify-reserved-instances --reserved-instances-ids b847fa93-e282-4f55-b59a-1342f5bd7c02 --target-configurations AvailabilityZone=us-west-1c,Platform=EC2-Classic,InstanceCount=10
Output::
{
"ReservedInstancesModificationId": "rimod-d3ed4335-b1d3-4de6-ab31-0f13aaf46687"
}
**To modify the network platform of Reserved Instances**
This example command converts EC2-Classic Reserved Instances to EC2-VPC.
Command::
aws ec2 modify-reserved-instances --reserved-instances-ids f127bd27-edb7-44c9-a0eb-0d7e09259af0 --target-configurations AvailabilityZone=us-west-1c,Platform=EC2-VPC,InstanceCount=5
Output::
{
"ReservedInstancesModificationId": "rimod-82fa9020-668f-4fb6-945d-61537009d291"
}
For more information, see `Modifying Your Reserved Instances`_ in the *Amazon EC2 User Guide*.
**To modify the instance types of Reserved Instances**
This example command modifies a Reserved Instance that has 10 m1.small Linux/UNIX instances in us-west-1c so that 8
m1.small instances become 2 m1.large instances, and the remaining 2 m1.small become 1 m1.medium instance in the same
Availability Zone.
Command::
aws ec2 modify-reserved-instances --reserved-instances-ids 1ba8e2e3-3556-4264-949e-63ee671405a9 --target-configurations AvailabilityZone=us-west-1c,Platform=EC2-Classic,InstanceCount=2,InstanceType=m1.large AvailabilityZone=us-west-1c,Platform=EC2-Classic,InstanceCount=1,InstanceType=m1.medium
Output::
{
"ReservedInstancesModificationId": "rimod-acc5f240-080d-4717-b3e3-1c6b11fa00b6"
}
For more information, see `Changing the Instance Type of Your Reservations`_ in the *Amazon EC2 User Guide*.
.. _`Changing the Instance Type of Your Reservations`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modification-instancemove.html
.. _`Modifying Your Reserved Instances`: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html
**To link (attach) an EC2-Classic instance to a VPC**
This example links instance i-1a2b3c4d to VPC vpc-88888888 through the VPC security group sg-12312312.
Command::
aws ec2 attach-classic-link-vpc --instance-id i-1a2b3c4d --vpc-id vpc-88888888 --groups sg-12312312
Output::
{
"Return": true
}
**To remove a rule from a security group**
This example removes TCP port 22 access for the ``203.0.113.0/24`` address range from the security group named ``MySecurityGroup``. If the command succeeds, no output is returned.
Command::
aws ec2 revoke-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 22 --cidr 203.0.113.0/24
**[EC2-VPC] To remove a rule using the IP permissions set**
This example uses the ``ip-permissions`` parameter to remove an inbound rule that allows the ICMP message ``Destination Unreachable: Fragmentation Needed and Don't Fragment was Set`` (Type 3, Code 4). If the command succeeds, no output is returned. For more information about quoting JSON-formatted parameters, see `Quoting Strings`_.
Command::
aws ec2 revoke-security-group-ingress --group-id sg-123abc12 --ip-permissions '[{"IpProtocol": "icmp", "FromPort": 3, "ToPort": 4, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]'
.. _`Quoting Strings`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#quoting-strings
**To set the instance affinity value for a specific stopped Dedicated Host**
This example modifies the affinity of an instance so that it always has affinity with the specified Dedicated Host.
Command::
aws ec2 modify-instance-placement --instance-id i-f0d45a40 --host-id h-029e7409a3350a31f
Output::
{
"Return": true
}
**To enable a VPC for ClassicLink**
This example enables vpc-88888888 for ClassicLink.
Command::
aws ec2 enable-vpc-classic-link --vpc-id vpc-88888888
Output::
{
"Return": true
}
**To create a network interface**
This example creates a network interface for the specified subnet.
Command::
aws ec2 create-network-interface --subnet-id subnet-9d4a7b6c --description "my network interface" --groups sg-903004f8 --private-ip-address 10.0.2.17
Output::
{
"NetworkInterface": {
"Status": "pending",
"MacAddress": "02:1a:80:41:52:9c",
"SourceDestCheck": true,
"VpcId": "vpc-a01106c2",
"Description": "my network interface",
"NetworkInterfaceId": "eni-e5aa89a3",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateIpAddress": "10.0.2.17"
}
],
"RequesterManaged": false,
"AvailabilityZone": "us-east-1d",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-903004f8"
}
],
"SubnetId": "subnet-9d4a7b6c",
"OwnerId": "123456789012",
"TagSet": [],
"PrivateIpAddress": "10.0.2.17"
}
}
**To enable detailed monitoring for an instance**
This example command enables detailed monitoring for the specified instance.
Command::
aws ec2 monitor-instances --instance-ids i-570e5a28
Output::
{
"InstanceMonitorings": [
{
"InstanceId": "i-570e5a28",
"Monitoring": {
"State": "pending"
}
}
]
}
**To export an instance**
This example command creates a task to export the instance i-38e485d8 to the Amazon S3 bucket
myexportbucket.
Command::
aws ec2 create-instance-export-task --description "RHEL5 instance" --instance-id i-38e485d8 --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=myexportbucket,S3Prefix=RHEL5
Output::
{
"ExportTask": {
"State": "active",
"InstanceExportDetails": {
"InstanceId": "i-38e485d8",
"TargetEnvironment": "vmware"
},
"ExportToS3Task": {
"S3Bucket": "myexportbucket",
"S3Key": "RHEL5export-i-fh8sjjsq.ova",
"DiskImageFormat": "vmdk",
"ContainerFormat": "ova"
},
"Description": "RHEL5 instance",
"ExportTaskId": "export-i-fh8sjjsq"
}
}
**To get the encrypted password**
This example gets the encrypted password.
Command::
aws ec2 get-password-data --instance-id i-5203422c
Output::
{
"InstanceId": "i-5203422c",
"Timestamp": "2013-08-07T22:18:38.000Z",
"PasswordData": "gSlJFq+VpcZXqy+iktxMF6NyxQ4qCrT4+gaOuNOenX1MmgXPTj7XEXAMPLE
UQ+YeFfb+L1U4C4AKv652Ux1iRB3CPTYP7WmU3TUnhsuBd+p6LVk7T2lKUml6OXbk6WPW1VYYm/TRPB1
e1DQ7PY4an/DgZT4mwcpRFigzhniQgDDeO1InvSDcwoUTwNs0Y1S8ouri2W4n5GNlriM3Q0AnNVelVz/
53TkDtxbNoU606M1gK9zUWSxqEgwvbV2j8c5rP0WCuaMWSFl4ziDu4bd7q+4RSyi8NUsVWnKZ4aEZffu
DPGzKrF5yLlf3etP2L4ZR6CvG7K1hx7VKOQVN32Dajw=="
}
**To get the decrypted password**
This example gets the decrypted password.
Command::
aws ec2 get-password-data --instance-id i-5203422c --priv-launch-key C:\Keys\MyKeyPair.pem
Output::
{
"InstanceId": "i-5203422c",
"Timestamp": "2013-08-30T23:18:05.000Z",
"PasswordData": "&ViJ652e*u"
}
**To modify the attachment attribute of a network interface**
This example command modifies the ``attachment`` attribute of the specified network interface.
Command::
aws ec2 modify-network-interface-attribute --network-interface-id eni-686ea200 --attachment AttachmentId=eni-attach-43348162,DeleteOnTermination=false
**To modify the description attribute of a network interface**
This example command modifies the ``description`` attribute of the specified network interface.
Command::
aws ec2 modify-network-interface-attribute --network-interface-id eni-686ea200 --description "My description"
**To modify the groupSet attribute of a network interface**
This example command modifies the ``groupSet`` attribute of the specified network interface.
Command::
aws ec2 modify-network-interface-attribute --network-interface-id eni-686ea200 --groups sg-903004f8 sg-1a2b3c4d
**To modify the sourceDestCheck attribute of a network interface**
This example command modifies the ``sourceDestCheck`` attribute of the specified network interface.
Command::
aws ec2 modify-network-interface-attribute --network-interface-id eni-686ea200 --no-source-dest-check
**To release an Elastic IP address for EC2-Classic**
This example releases an Elastic IP address for use with instances in EC2-Classic. If the command succeeds, no output is returned.
Command::
aws ec2 release-address --public-ip 198.51.100.0
**To release an Elastic IP address for EC2-VPC**
This example releases an Elastic IP address for use with instances in a VPC. If the command succeeds, no output is returned.
Command::
aws ec2 release-address --allocation-id eipalloc-64d5890a
**To describe the instance type**
This example describes the instance type of the specified instance.
Command::
aws ec2 describe-instance-attribute --instance-id i-5203422c --attribute instanceType
Output::
{
        "InstanceId": "i-5203422c",
"InstanceType": {
"Value": "t1.micro"
}
}
**To describe the disableApiTermination attribute**
This example describes the ``disableApiTermination`` attribute of the specified instance.
Command::
aws ec2 describe-instance-attribute --instance-id i-5203422c --attribute disableApiTermination
Output::
{
        "InstanceId": "i-5203422c",
"DisableApiTermination": {
"Value": "false"
}
}
**To describe the block device mapping for an instance**
This example describes the ``blockDeviceMapping`` attribute of the specified instance.
Command::
aws ec2 describe-instance-attribute --instance-id i-5203422c --attribute blockDeviceMapping
Output::
{
        "InstanceId": "i-5203422c",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"Status": "attached",
"DeleteOnTermination": true,
"VolumeId": "vol-615a1339",
"AttachTime": "2013-05-17T22:42:34.000Z"
}
},
{
"DeviceName": "/dev/sdf",
"Ebs": {
"Status": "attached",
"DeleteOnTermination": false,
"VolumeId": "vol-9f54b8dc",
"AttachTime": "2013-09-10T23:07:00.000Z"
}
}
        ]
}
**To request Spot Instances**
This example command creates a one-time Spot Instance request for five instances in the specified Availability Zone.
If your account supports EC2-VPC only, Amazon EC2 launches the instances in the default subnet of the specified Availability Zone.
If your account supports EC2-Classic, Amazon EC2 launches the instances in EC2-Classic in the specified Availability Zone.
Command::
aws ec2 request-spot-instances --spot-price "0.03" --instance-count 5 --type "one-time" --launch-specification file://specification.json
Specification.json::
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroupIds": [ "sg-1a2b3c4d" ],
"InstanceType": "m3.medium",
"Placement": {
"AvailabilityZone": "us-west-2a"
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
Output::
{
"SpotInstanceRequests": [
{
"Status": {
"UpdateTime": "2014-03-25T20:54:21.000Z",
"Code": "pending-evaluation",
"Message": "Your Spot request has been submitted for review, and is pending evaluation."
},
"ProductDescription": "Linux/UNIX",
"SpotInstanceRequestId": "sir-df6f405d",
"State": "open",
"LaunchSpecification": {
"Placement": {
"AvailabilityZone": "us-west-2a"
},
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroups": [
{
"GroupName": "my-security-group",
"GroupId": "sg-1a2b3c4d"
}
],
"Monitoring": {
"Enabled": false
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
},
"InstanceType": "m3.medium"
},
"Type": "one-time",
"CreateTime": "2014-03-25T20:54:20.000Z",
"SpotPrice": "0.050000"
},
...
]
}
This example command creates a one-time Spot Instance request for five instances in the specified subnet.
Amazon EC2 launches the instances in the specified subnet. If the VPC is a nondefault VPC, the instances
do not receive a public IP address by default.
Command::
aws ec2 request-spot-instances --spot-price "0.050" --instance-count 5 --type "one-time" --launch-specification file://specification.json
Specification.json::
{
"ImageId": "ami-1a2b3c4d",
"SecurityGroupIds": [ "sg-1a2b3c4d" ],
"InstanceType": "m3.medium",
"SubnetId": "subnet-1a2b3c4d",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
Output::
{
"SpotInstanceRequests": [
{
"Status": {
"UpdateTime": "2014-03-25T22:21:58.000Z",
"Code": "pending-evaluation",
"Message": "Your Spot request has been submitted for review, and is pending evaluation."
},
"ProductDescription": "Linux/UNIX",
"SpotInstanceRequestId": "sir-df6f405d",
"State": "open",
"LaunchSpecification": {
"Placement": {
"AvailabilityZone": "us-west-2a"
                },
                "ImageId": "ami-1a2b3c4d",
                "SecurityGroups": [
                    {
                        "GroupName": "my-security-group",
                        "GroupId": "sg-1a2b3c4d"
                    }
                ],
"SubnetId": "subnet-1a2b3c4d",
"Monitoring": {
"Enabled": false
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
},
                "InstanceType": "m3.medium"
},
"Type": "one-time",
"CreateTime": "2014-03-25T22:21:58.000Z",
"SpotPrice": "0.050000"
},
...
]
}
This example assigns a public IP address to the Spot Instances that you launch in a nondefault VPC.
Note that when you specify a network interface, you must include the subnet ID and security group ID
using the network interface.
Command::
aws ec2 request-spot-instances --spot-price "0.050" --instance-count 1 --type "one-time" --launch-specification file://specification.json
Specification.json::
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"InstanceType": "m3.medium",
"NetworkInterfaces": [
{
"DeviceIndex": 0,
"SubnetId": "subnet-1a2b3c4d",
"Groups": [ "sg-1a2b3c4d" ],
"AssociatePublicIpAddress": true
}
],
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
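Rather than writing ``specification.json`` by hand, you can generate it, which guarantees well-formed JSON. A sketch (all IDs are placeholders, as in the examples above):

```python
import json

# Build the launch specification used by the last example above and
# write it to specification.json. All IDs are placeholders.
spec = {
    "ImageId": "ami-1a2b3c4d",
    "KeyName": "my-key-pair",
    "InstanceType": "m3.medium",
    "NetworkInterfaces": [{
        "DeviceIndex": 0,
        "SubnetId": "subnet-1a2b3c4d",
        "Groups": ["sg-1a2b3c4d"],
        "AssociatePublicIpAddress": True,
    }],
    "IamInstanceProfile": {
        "Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
    },
}

with open("specification.json", "w") as f:
    json.dump(spec, f, indent=2)
```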
**To terminate an Amazon EC2 instance**
This example terminates the specified instance.
Command::
aws ec2 terminate-instances --instance-ids i-5203422c
Output::
{
"TerminatingInstances": [
{
"InstanceId": "i-5203422c",
"CurrentState": {
"Code": 32,
"Name": "shutting-down"
},
"PreviousState": {
"Code": 16,
"Name": "running"
}
}
]
}
For more information, see `Using Amazon EC2 Instances`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Amazon EC2 Instances`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html
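The numeric ``Code`` values in this output are the documented EC2 instance-state codes; the high byte is used internally and should be ignored. A small lookup sketch:

```python
# Documented EC2 instance-state codes (low byte of the Code field).
STATE_NAMES = {
    0: "pending", 16: "running", 32: "shutting-down",
    48: "terminated", 64: "stopping", 80: "stopped",
}

def state_name(code):
    # Mask off the high byte, which carries internal flags.
    return STATE_NAMES[code & 0xFF]

print(state_name(32))
```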
**To create a security group for EC2-Classic**
This example creates a security group named ``MySecurityGroup``.
Command::
aws ec2 create-security-group --group-name MySecurityGroup --description "My security group"
Output::
{
"GroupId": "sg-903004f8"
}
**To create a security group for EC2-VPC**
This example creates a security group named ``MySecurityGroup`` for the specified VPC.
Command::
aws ec2 create-security-group --group-name MySecurityGroup --description "My security group" --vpc-id vpc-1a2b3c4d
Output::
{
"GroupId": "sg-903004f8"
}
For more information, see `Using Security Groups`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Security Groups`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-sg.html
awscli-1.10.1/awscli/examples/ec2/modify-spot-fleet-request.rst 0000666 4542626 0000144 00000001227 12652514124 025472 0 ustar pysdk-ci amazon 0000000 0000000 **To modify a Spot fleet request**
This example command updates the target capacity of the specified Spot fleet request.
Command::
aws ec2 modify-spot-fleet-request --target-capacity 20 --spot-fleet-request-id sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
Output::
{
"Return": true
}
This example command decreases the target capacity of the specified Spot fleet request without terminating any Spot Instances as a result.
Command::
aws ec2 modify-spot-fleet-request --target-capacity 10 --excess-capacity-termination-policy NoTermination --spot-fleet-request-id sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
Output::
{
"Return": true
}
awscli-1.10.1/awscli/examples/ec2/describe-network-acls.rst 0000666 4542626 0000144 00000007470 12652514124 024632 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your network ACLs**
This example describes your network ACLs.
Command::
aws ec2 describe-network-acls
Output::
{
"NetworkAcls": [
{
"Associations": [],
"NetworkAclId": "acl-7aaabd18",
"VpcId": "vpc-a01106c2",
"Tags": [],
"Entries": [
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 100,
"Protocol": "-1",
"Egress": true,
"RuleAction": "allow"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": true,
"RuleAction": "deny"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 100,
"Protocol": "-1",
"Egress": false,
"RuleAction": "allow"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": false,
"RuleAction": "deny"
}
],
"IsDefault": true
},
{
"Associations": [],
"NetworkAclId": "acl-5fb85d36",
"VpcId": "vpc-a01106c2",
"Tags": [],
"Entries": [
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": true,
"RuleAction": "deny"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": false,
"RuleAction": "deny"
}
],
"IsDefault": false
},
{
"Associations": [
{
"SubnetId": "subnet-6bea5f06",
"NetworkAclId": "acl-9aeb5ef7",
"NetworkAclAssociationId": "aclassoc-67ea5f0a"
},
{
"SubnetId": "subnet-65ea5f08",
"NetworkAclId": "acl-9aeb5ef7",
"NetworkAclAssociationId": "aclassoc-66ea5f0b"
}
],
"NetworkAclId": "acl-9aeb5ef7",
"VpcId": "vpc-98eb5ef5",
"Tags": [],
"Entries": [
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 100,
"Protocol": "-1",
"Egress": true,
"RuleAction": "allow"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": true,
"RuleAction": "deny"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 100,
"Protocol": "-1",
"Egress": false,
"RuleAction": "allow"
},
{
"CidrBlock": "0.0.0.0/0",
"RuleNumber": 32767,
"Protocol": "-1",
"Egress": false,
"RuleAction": "deny"
}
],
"IsDefault": true
}
]
} awscli-1.10.1/awscli/examples/ec2/create-network-acl-entry.rst 0000666 4542626 0000144 00000000671 12652514124 025265 0 ustar pysdk-ci amazon 0000000 0000000 **To create a network ACL entry**
This example creates an entry for the specified network ACL. The rule allows ingress traffic from anywhere (0.0.0.0/0) on UDP port 53 (DNS) into any associated subnet. If the command succeeds, no output is returned.
Command::
aws ec2 create-network-acl-entry --network-acl-id acl-5fb85d36 --ingress --rule-number 100 --protocol udp --port-range From=53,To=53 --cidr-block 0.0.0.0/0 --rule-action allow
awscli-1.10.1/awscli/examples/ec2/create-vpn-connection.rst 0000666 4542626 0000144 00000003014 12652514124 024632 0 ustar pysdk-ci amazon 0000000 0000000 **To create a VPN connection with dynamic routing**
This example creates a VPN connection between the specified virtual private gateway and the specified customer gateway. The output includes the configuration information that your network administrator needs, in XML format.
Command::
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0e11f167 --vpn-gateway-id vgw-9a4cacf3
Output::
{
"VpnConnection": {
"VpnConnectionId": "vpn-40f41529"
"CustomerGatewayConfiguration": "...configuration information...",
"State": "available",
"VpnGatewayId": "vgw-f211f09b",
"CustomerGatewayId": "cgw-b4de3fdd"
}
}
**To create a VPN connection with static routing**
This example creates a VPN connection between the specified virtual private gateway and the specified customer gateway. The options specify static routing. The output includes the configuration information that your network administrator needs, in XML format.
Command::
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0e11f167 --vpn-gateway-id vgw-9a4cacf3 --options "{\"StaticRoutesOnly\":true}"
Output::
{
"VpnConnection": {
"VpnConnectionId": "vpn-40f41529"
"CustomerGatewayConfiguration": "...configuration information...",
"State": "pending",
"VpnGatewayId": "vgw-f211f09b",
"CustomerGatewayId": "cgw-b4de3fdd",
"Options": {
"StaticRoutesOnly": true
}
}
} awscli-1.10.1/awscli/examples/ec2/describe-instances.rst 0000666 4542626 0000144 00000002455 12652514124 024206 0 ustar pysdk-ci amazon 0000000 0000000 **To describe an Amazon EC2 instance**
Command::
aws ec2 describe-instances --instance-ids i-5203422c
**To describe all instances with the instance type m1.small**
Command::
aws ec2 describe-instances --filters "Name=instance-type,Values=m1.small"
**To describe all instances with an Owner tag**
Command::
aws ec2 describe-instances --filters "Name=tag-key,Values=Owner"
**To describe all instances with a Purpose=test tag**
Command::
aws ec2 describe-instances --filters "Name=tag:Purpose,Values=test"
**To describe all EC2 instances that have an instance type of m1.small or m1.medium that are also in the us-west-2c Availability Zone**
Command::
aws ec2 describe-instances --filters "Name=instance-type,Values=m1.small,m1.medium" "Name=availability-zone,Values=us-west-2c"
The following JSON input performs the same filtering.
Command::
aws ec2 describe-instances --filters file://filters.json
filters.json::
[
{
"Name": "instance-type",
"Values": ["m1.small", "m1.medium"]
},
{
"Name": "availability-zone",
"Values": ["us-west-2c"]
}
]
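If you generate a filter file like ``filters.json`` from a script, serializing it with a JSON library avoids syntax mistakes. The following is an illustrative Python sketch, not part of the CLI; the file name simply matches the example above:

```python
import json

# The same two filters shown in filters.json above.
filters = [
    {"Name": "instance-type", "Values": ["m1.small", "m1.medium"]},
    {"Name": "availability-zone", "Values": ["us-west-2c"]},
]

# Write the file that `--filters file://filters.json` will read.
with open("filters.json", "w") as f:
    json.dump(filters, f, indent=4)
```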
For more information, see `Using Amazon EC2 Instances`_ in the *AWS Command Line Interface User Guide*.
.. _`Using Amazon EC2 Instances`: http://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html
awscli-1.10.1/awscli/examples/ec2/deregister-image.rst 0000666 4542626 0000144 00000000265 12652514124 023653 0 ustar pysdk-ci amazon 0000000 0000000 **To deregister an AMI**
This example deregisters the specified AMI. If the command succeeds, no output is returned.
Command::
aws ec2 deregister-image --image-id ami-4fa54026
awscli-1.10.1/awscli/examples/ec2/delete-snapshot.rst 0000666 4542626 0000144 00000000337 12652514124 023535 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a snapshot**
This example command deletes a snapshot with the snapshot ID of ``snap-1234abcd``. If the command succeeds, no output is returned.
Command::
aws ec2 delete-snapshot --snapshot-id snap-1234abcd
awscli-1.10.1/awscli/examples/ec2/describe-addresses.rst 0000666 4542626 0000144 00000005063 12652514124 024172 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your Elastic IP addresses**
This example describes your Elastic IP addresses.
Command::
aws ec2 describe-addresses
Output::
{
"Addresses": [
{
"InstanceId": null,
"PublicIp": "198.51.100.0",
"Domain": "standard"
},
{
"PublicIp": "203.0.113.0",
"Domain": "vpc",
"AllocationId": "eipalloc-64d5890a"
}
]
}
**To describe your Elastic IP addresses for EC2-VPC**
This example describes your Elastic IP addresses for use with instances in a VPC.
Command::
aws ec2 describe-addresses --filters "Name=domain,Values=vpc"
Output::
{
"Addresses": [
{
"PublicIp": "203.0.113.0",
"Domain": "vpc",
"AllocationId": "eipalloc-64d5890a"
}
]
}
This example describes the Elastic IP address with the allocation ID ``eipalloc-282d9641``, which is associated with an instance in EC2-VPC.
Command::
aws ec2 describe-addresses --allocation-ids eipalloc-282d9641
Output::
{
"Addresses": [
{
"Domain": "vpc",
"InstanceId": "i-10a64379",
"NetworkInterfaceId": "eni-1a2b3c4d",
"AssociationId": "eipassoc-123abc12",
"NetworkInterfaceOwnerId": "1234567891012",
"PublicIp": "203.0.113.25",
"AllocationId": "eipalloc-282d9641",
"PrivateIpAddress": "10.251.50.12"
}
]
}
This example describes the Elastic IP address associated with a particular private IP address in EC2-VPC.
Command::
aws ec2 describe-addresses --filters "Name=private-ip-address,Values=10.251.50.12"
**To describe your Elastic IP addresses in EC2-Classic**
This example describes your Elastic IP addresses for use in EC2-Classic.
Command::
aws ec2 describe-addresses --filters "Name=domain,Values=standard"
Output::
{
"Addresses": [
{
"InstanceId": null,
"PublicIp": "203.0.110.25",
"Domain": "standard"
}
]
}
This example describes the Elastic IP address with the value ``203.0.110.25``, which is associated with an instance in EC2-Classic.
Command::
aws ec2 describe-addresses --public-ips 203.0.110.25
Output::
{
"Addresses": [
{
"InstanceId": "i-1a2b3c4d",
"PublicIp": "203.0.110.25",
"Domain": "standard"
}
]
}
awscli-1.10.1/awscli/examples/ec2/associate-address.rst 0000666 4542626 0000144 00000001744 12652514124 024037 0 ustar pysdk-ci amazon 0000000 0000000 **To associate an Elastic IP address in EC2-Classic**
This example associates an Elastic IP address with an instance in EC2-Classic. If the command succeeds, no output is returned.
Command::
aws ec2 associate-address --instance-id i-5203422c --public-ip 198.51.100.0
**To associate an Elastic IP address in EC2-VPC**
This example associates an Elastic IP address with an instance in a VPC.
Command::
aws ec2 associate-address --instance-id i-43a4412a --allocation-id eipalloc-64d5890a
Output::
{
"AssociationId": "eipassoc-2bebb745"
}
This example associates an Elastic IP address with a network interface.
Command::
aws ec2 associate-address --allocation-id eipalloc-64d5890a --network-interface-id eni-1a2b3c4d
This example associates an Elastic IP with a private IP address that's associated with a network interface.
Command::
aws ec2 associate-address --allocation-id eipalloc-64d5890a --network-interface-id eni-1a2b3c4d --private-ip-address 10.0.0.85
awscli-1.10.1/awscli/examples/ec2/describe-moving-addresses.rst 0000666 4542626 0000144 00000000725 12652514124 025467 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your moving addresses**
This example describes all of your moving Elastic IP addresses.
Command::
aws ec2 describe-moving-addresses
Output::
{
"MovingAddressStatuses": [
{
"PublicIp": "198.51.100.0",
"MoveStatus": "MovingToVpc"
}
]
}
This example describes all addresses that are moving to the EC2-VPC platform.
Command::
aws ec2 describe-moving-addresses --filters Name=moving-status,Values=MovingToVpc awscli-1.10.1/awscli/examples/ec2/cancel-spot-fleet-requests.rst 0000666 4542626 0000144 00000002164 12652514124 025614 0 ustar pysdk-ci amazon 0000000 0000000 **To cancel Spot fleet requests**
This example command cancels a Spot fleet request and terminates the associated Spot Instances.
Command::
aws ec2 cancel-spot-fleet-requests --spot-fleet-request-ids sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE --terminate-instances
Output::
{
"SuccessfulFleetRequests": [
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"CurrentSpotFleetRequestState": "cancelled_running",
"PreviousSpotFleetRequestState": "active"
}
],
"UnsuccessfulFleetRequests": []
}
This example command cancels a Spot fleet request without terminating the associated Spot Instances.
Command::
aws ec2 cancel-spot-fleet-requests --spot-fleet-request-ids sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE --no-terminate-instances
Output::
{
"SuccessfulFleetRequests": [
{
"SpotFleetRequestId": "sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE",
"CurrentSpotFleetRequestState": "cancelled_terminating",
"PreviousSpotFleetRequestState": "active"
}
],
"UnsuccessfulFleetRequests": []
}
awscli-1.10.1/awscli/examples/ec2/describe-vpc-attribute.rst 0000666 4542626 0000144 00000002025 12652514124 025001 0 ustar pysdk-ci amazon 0000000 0000000 **To describe the enableDnsSupport attribute**
This example describes the ``enableDnsSupport`` attribute. This attribute indicates whether DNS resolution is enabled for the VPC. If this attribute is ``true``, the Amazon DNS server resolves DNS hostnames for your instances to their corresponding IP addresses; otherwise, it does not.
Command::
aws ec2 describe-vpc-attribute --vpc-id vpc-a01106c2 --attribute enableDnsSupport
Output::
{
"VpcId": "vpc-a01106c2",
"EnableDnsSupport": {
"Value": true
}
}
**To describe the enableDnsHostnames attribute**
This example describes the ``enableDnsHostnames`` attribute. This attribute indicates whether the instances launched in the VPC get DNS hostnames. If this attribute is ``true``, instances in the VPC get DNS hostnames; otherwise, they do not.
Command::
aws ec2 describe-vpc-attribute --vpc-id vpc-a01106c2 --attribute enableDnsHostnames
Output::
{
"VpcId": "vpc-a01106c2",
"EnableDnsHostnames": {
"Value": true
}
} awscli-1.10.1/awscli/examples/ec2/modify-vpc-endpoint.rst 0000666 4542626 0000144 00000000507 12652514124 024330 0 ustar pysdk-ci amazon 0000000 0000000 **To modify an endpoint**
This example modifies endpoint vpce-1a2b3c4d by associating route table rtb-aaa222bb with the endpoint, and resetting the policy document.
Command::
aws ec2 modify-vpc-endpoint --vpc-endpoint-id vpce-1a2b3c4d --add-route-table-ids rtb-aaa222bb --reset-policy
Output::
{
"Return": true
} awscli-1.10.1/awscli/examples/ec2/copy-snapshot.rst 0000666 4542626 0000144 00000000663 12652514124 023247 0 ustar pysdk-ci amazon 0000000 0000000 **To copy a snapshot**
This example command copies a snapshot with the snapshot ID of ``snap-1234abcd`` from the ``us-west-2`` region to the ``us-east-1`` region and adds a short description to identify the snapshot.
Command::
aws --region us-east-1 ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-1234abcd --description "This is my copied snapshot."
Output::
{
"SnapshotId": "snap-2345bcde"
} awscli-1.10.1/awscli/examples/ec2/delete-vpn-connection.rst 0000666 4542626 0000144 00000000320 12652514124 024626 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a VPN connection**
This example deletes the specified VPN connection. If the command succeeds, no output is returned.
Command::
aws ec2 delete-vpn-connection --vpn-connection-id vpn-40f41529
awscli-1.10.1/awscli/examples/ec2/describe-nat-gateways.rst 0000666 4542626 0000144 00000002210 12652514124 024610 0 ustar pysdk-ci amazon 0000000 0000000 **To describe your NAT gateways**
This example describes all of your NAT gateways.
Command::
aws ec2 describe-nat-gateways
Output::
{
"NatGateways": [
{
"NatGatewayAddresses": [
{
"PublicIp": "198.11.222.333",
"NetworkInterfaceId": "eni-9dec76cd",
"AllocationId": "eipalloc-89c620ec",
"PrivateIp": "10.0.0.149"
}
],
"VpcId": "vpc-1a2b3c4d",
"State": "available",
"NatGatewayId": "nat-05dba92075d71c408",
"SubnetId": "subnet-847e4dc2",
"CreateTime": "2015-12-01T12:26:55.983Z"
},
{
"NatGatewayAddresses": [
{
"PublicIp": "1.2.3.12",
"NetworkInterfaceId": "eni-71ec7621",
"AllocationId": "eipalloc-5d42583f",
"PrivateIp": "10.0.0.77"
}
],
"VpcId": "vpc-11aa22bb",
"State": "deleting",
"NatGatewayId": "nat-0a93acc57881d4199",
"SubnetId": "subnet-7f7e4d39",
"DeleteTime": "2015-12-17T12:26:14.564Z",
"CreateTime": "2015-12-01T12:09:22.040Z"
}
]
} awscli-1.10.1/awscli/examples/ec2/create-vpn-gateway.rst 0000666 4542626 0000144 00000000507 12652514124 024140 0 ustar pysdk-ci amazon 0000000 0000000 **To create a virtual private gateway**
This example creates a virtual private gateway.
Command::
aws ec2 create-vpn-gateway --type ipsec.1
Output::
{
"VpnGateway": {
"State": "available",
"Type": "ipsec.1",
"VpnGatewayId": "vgw-9a4cacf3",
"VpcAttachments": []
}
} awscli-1.10.1/awscli/examples/ec2/cancel-export-task.rst 0000666 4542626 0000144 00000000360 12652514124 024136 0 ustar pysdk-ci amazon 0000000 0000000 **To cancel an active export task**
This example cancels an active export task with the task ID export-i-fgelt0i7. If the command succeeds, no output is returned.
Command::
aws ec2 cancel-export-task --export-task-id export-i-fgelt0i7
awscli-1.10.1/awscli/examples/logs/ 0000777 4542626 0000144 00000000000 12652514126 020176 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/logs/put-log-events.rst 0000666 4542626 0000144 00000002107 12652514124 023617 0 ustar pysdk-ci amazon 0000000 0000000 The following command puts log events to a log stream named ``20150601`` in the log group ``my-logs``::
aws logs put-log-events --log-group-name my-logs --log-stream-name 20150601 --log-events file://events
Output::
{
"nextSequenceToken": "49542672486831074009579604567656788214806863282469607346"
}
The above example reads a JSON array of events from a file named ``events`` in the current directory::
[
{
"timestamp": 1433190184356,
"message": "Example Event 1"
},
{
"timestamp": 1433190184358,
"message": "Example Event 2"
},
{
"timestamp": 1433190184360,
"message": "Example Event 3"
}
]
Each subsequent call requires the next sequence token provided by the previous call to be specified with the sequence token option::
aws logs put-log-events --log-group-name my-logs --log-stream-name 20150601 --log-events file://events2 --sequence-token "49542672486831074009579604567656788214806863282469607346"
Output::
{
"nextSequenceToken": "49542672486831074009579604567900991230369019956308219826"
}
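The token-threading pattern above generalizes to a loop: each call's ``nextSequenceToken`` becomes the ``--sequence-token`` of the next call. The following is a minimal Python sketch of that loop; ``send_batch`` and ``fake_send`` are illustrative stand-ins for the actual ``aws logs put-log-events`` call, not part of the CLI:

```python
def put_batches(batches, send_batch):
    """Send successive batches of log events, threading the
    nextSequenceToken from each response into the next call."""
    token = None  # the first call to a fresh log stream takes no token
    for events in batches:
        response = send_batch(events, token)
        token = response["nextSequenceToken"]
    return token

# Deterministic stub standing in for `aws logs put-log-events`;
# real responses carry a "nextSequenceToken" field, as shown above.
def fake_send(events, token):
    prev = int(token or 0)
    return {"nextSequenceToken": str(prev + len(events))}

final_token = put_batches(
    [[{"timestamp": 1433190184356, "message": "Example Event 1"}],
     [{"timestamp": 1433190184358, "message": "Example Event 2"},
      {"timestamp": 1433190184360, "message": "Example Event 3"}]],
    fake_send,
)
# final_token is "3" with this stub: 1 event, then 2 more
```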
awscli-1.10.1/awscli/examples/logs/describe-log-streams.rst 0000666 4542626 0000144 00000001333 12652514124 024741 0 ustar pysdk-ci amazon 0000000 0000000 The following command shows all log streams starting with the prefix ``2015`` in the log group ``my-logs``::
aws logs describe-log-streams --log-group-name my-logs --log-stream-name-prefix 2015
Output::
{
"logStreams": [
{
"creationTime": 1433189871774,
"arn": "arn:aws:logs:us-west-2:0123456789012:log-group:my-logs:log-stream:20150531",
"logStreamName": "20150531",
"storedBytes": 0
},
{
"creationTime": 1433189873898,
"arn": "arn:aws:logs:us-west-2:0123456789012:log-group:my-logs:log-stream:20150601",
"logStreamName": "20150601",
"storedBytes": 0
}
]
}
awscli-1.10.1/awscli/examples/logs/delete-retention-policy.rst 0000666 4542626 0000144 00000000264 12652514124 025474 0 ustar pysdk-ci amazon 0000000 0000000 The following command removes the retention policy that has previously been applied to a log group named ``my-logs``::
aws logs delete-retention-policy --log-group-name my-logs
awscli-1.10.1/awscli/examples/logs/create-log-stream.rst 0000666 4542626 0000144 00000000257 12652514124 024245 0 ustar pysdk-ci amazon 0000000 0000000 The following command creates a log stream named ``20150601`` in the log group ``my-logs``::
aws logs create-log-stream --log-group-name my-logs --log-stream-name 20150601
awscli-1.10.1/awscli/examples/logs/create-log-group.rst 0000666 4542626 0000144 00000000164 12652514124 024103 0 ustar pysdk-ci amazon 0000000 0000000 The following command creates a log group named ``my-logs``::
aws logs create-log-group --log-group-name my-logs
awscli-1.10.1/awscli/examples/logs/put-retention-policy.rst 0000666 4542626 0000144 00000000247 12652514124 025043 0 ustar pysdk-ci amazon 0000000 0000000 The following command adds a 5 day retention policy to a log group named ``my-logs``::
aws logs put-retention-policy --log-group-name my-logs --retention-in-days 5
awscli-1.10.1/awscli/examples/logs/delete-log-stream.rst 0000666 4542626 0000144 00000000265 12652514124 024243 0 ustar pysdk-ci amazon 0000000 0000000 The following command deletes a log stream named ``20150531`` from a log group named ``my-logs``::
aws logs delete-log-stream --log-group-name my-logs --log-stream-name 20150531
awscli-1.10.1/awscli/examples/logs/get-log-events.rst 0000666 4542626 0000144 00000001557 12652514124 023576 0 ustar pysdk-ci amazon 0000000 0000000 The following command retrieves log events from a log stream named ``20150601`` in the log group ``my-logs``::
aws logs get-log-events --log-group-name my-logs --log-stream-name 20150601
Output::
{
"nextForwardToken": "f/31961209122447488583055879464742346735121166569214640130",
"events": [
{
"ingestionTime": 1433190494190,
"timestamp": 1433190184356,
"message": "Example Event 1"
},
{
"ingestionTime": 1433190516679,
"timestamp": 1433190184356,
"message": "Example Event 1"
},
{
"ingestionTime": 1433190494190,
"timestamp": 1433190184358,
"message": "Example Event 2"
}
],
"nextBackwardToken": "b/31961209122358285602261756944988674324553373268216709120"
}
awscli-1.10.1/awscli/examples/logs/delete-log-group.rst 0000666 4542626 0000144 00000000164 12652514124 024102 0 ustar pysdk-ci amazon 0000000 0000000 The following command deletes a log group named ``my-logs``::
aws logs delete-log-group --log-group-name my-logs
awscli-1.10.1/awscli/examples/logs/describe-log-groups.rst 0000666 4542626 0000144 00000000730 12652514124 024602 0 ustar pysdk-ci amazon 0000000 0000000 The following command describes a log group named ``my-logs``::
aws logs describe-log-groups --log-group-name-prefix my-logs
Output::
{
"logGroups": [
{
"storedBytes": 0,
"metricFilterCount": 0,
"creationTime": 1433189500783,
"logGroupName": "my-logs",
"retentionInDays": 5,
"arn": "arn:aws:logs:us-west-2:0123456789012:log-group:my-logs:*"
}
]
}
awscli-1.10.1/awscli/examples/codecommit/ 0000777 4542626 0000144 00000000000 12652514126 021355 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/codecommit/update-repository-name.rst 0000666 4542626 0000144 00000001224 12652514124 026521 0 ustar pysdk-ci amazon 0000000 0000000 **To change the name of a repository**
This example changes the name of an AWS CodeCommit repository. This command produces output only if there are errors. Changing the name of the AWS CodeCommit repository will change the SSH and HTTPS URLs that users need to connect to the repository. Users will not be able to connect to this repository until they update their connection settings. Also, because the repository's ARN will change, changing the repository name will invalidate any IAM user policies that rely on this repository's ARN.
Command::
aws codecommit update-repository-name --old-name MyDemoRepo --new-name MyRenamedDemoRepo
Output::
None. awscli-1.10.1/awscli/examples/codecommit/update-repository-description.rst 0000666 4542626 0000144 00000000520 12652514124 030122 0 ustar pysdk-ci amazon 0000000 0000000 **To change the description for a repository**
This example changes the description for an AWS CodeCommit repository. This command produces output only if there are errors.
Command::
aws codecommit update-repository-description --repository-name MyDemoRepo --repository-description "This description was changed"
Output::
None. awscli-1.10.1/awscli/examples/codecommit/list-branches.rst 0000666 4542626 0000144 00000000403 12652514124 024640 0 ustar pysdk-ci amazon 0000000 0000000 **To view a list of branch names**
This example lists all branch names in an AWS CodeCommit repository.
Command::
aws codecommit list-branches --repository-name MyDemoRepo
Output::
{
"branches": [
"MyNewBranch",
"master"
]
} awscli-1.10.1/awscli/examples/codecommit/get-branch.rst 0000666 4542626 0000144 00000000511 12652514124 024114 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about a branch**
This example gets information about a branch in an AWS CodeCommit repository.
Command::
aws codecommit get-branch --repository-name MyDemoRepo --branch-name MyNewBranch
Output::
{
"BranchInfo": {
"commitID": "317f8570EXAMPLE",
"branchName": "MyNewBranch"
}
} awscli-1.10.1/awscli/examples/codecommit/create-repository.rst 0000666 4542626 0000144 00000001437 12652514124 025572 0 ustar pysdk-ci amazon 0000000 0000000 **To create a repository**
This example creates a repository and associates it with the user's AWS account.
Command::
aws codecommit create-repository --repository-name MyDemoRepo --repository-description "My demonstration repository"
Output::
{
"repositoryMetadata": {
"repositoryName": "MyDemoRepo",
"cloneUrlSsh": "ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo",
"lastModifiedDate": 1444766838.027,
"repositoryDescription": "My demonstration repository",
"cloneUrlHttp": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo",
"repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE",
"Arn": "arn:aws:codecommit:us-east-1:111111111111EXAMPLE:MyDemoRepo",
"accountId": "111111111111"
}
} awscli-1.10.1/awscli/examples/codecommit/batch-get-repositories.rst 0000666 4542626 0000144 00000003334 12652514124 026473 0 ustar pysdk-ci amazon 0000000 0000000 **To view details about multiple repositories**
This example shows details about multiple AWS CodeCommit repositories.
Command::
aws codecommit batch-get-repositories --repository-names MyDemoRepo MyOtherDemoRepo
Output::
{
"repositories": [
{
"creationDate": 1429203623.625,
"defaultBranch": "master",
"repositoryName": "MyDemoRepo",
"cloneUrlSsh": "ssh://ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos//v1/repos/MyDemoRepo",
"lastModifiedDate": 1430783812.0869999,
"repositoryDescription": "My demonstration repository",
"cloneUrlHttp": "https://codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo",
"repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE",
"Arn": "arn:aws:codecommit:us-east-1:111111111111EXAMPLE:MyDemoRepo",
"accountId": "111111111111"
},
{
"creationDate": 1429203623.627,
"defaultBranch": "master",
"repositoryName": "MyOtherDemoRepo",
"cloneUrlSsh": "ssh://ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos//v1/repos/MyOtherDemoRepo",
"lastModifiedDate": 1430783812.0889999,
"repositoryDescription": "My other demonstration repository",
"cloneUrlHttp": "https://codecommit.us-east-1.amazonaws.com/v1/repos/MyOtherDemoRepo",
"repositoryId": "cfc29ac4-b0cb-44dc-9990-f6f51EXAMPLE",
"Arn": "arn:aws:codecommit:us-east-1:111111111111EXAMPLE:MyOtherDemoRepo",
"accountId": "111111111111"
}
],
"repositoriesNotFound": []
} awscli-1.10.1/awscli/examples/codecommit/get-repository.rst 0000666 4542626 0000144 00000001544 12652514124 025105 0 ustar pysdk-ci amazon 0000000 0000000 **To get information about a repository**
This example shows details about an AWS CodeCommit repository.
Command::
aws codecommit get-repository --repository-name MyDemoRepo
Output::
{
"repositoryMetadata": {
"creationDate": 1429203623.625,
"defaultBranch": "master",
"repositoryName": "MyDemoRepo",
"cloneUrlSsh": "ssh://ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos//v1/repos/MyDemoRepo",
"lastModifiedDate": 1430783812.0869999,
"repositoryDescription": "My demonstration repository",
"cloneUrlHttp": "https://codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo",
"repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE",
"Arn": "arn:aws:codecommit:us-east-1:80398EXAMPLE:MyDemoRepo
"accountId": "111111111111"
}
} awscli-1.10.1/awscli/examples/codecommit/create-branch.rst 0000666 4542626 0000144 00000000436 12652514124 024606 0 ustar pysdk-ci amazon 0000000 0000000 **To create a branch**
This example creates a branch in an AWS CoceCommit repository. This command produces output only if there are errors.
Command::
aws codecommit create-branch --repository-name MyDemoRepo --branch-name MyNewBranch --commit-id 317f8570EXAMPLE
Output::
None. awscli-1.10.1/awscli/examples/codecommit/list-repositories.rst 0000666 4542626 0000144 00000000743 12652514124 025611 0 ustar pysdk-ci amazon 0000000 0000000 **To view a list of repositories**
This example lists all AWS CodeCommit repositories associated with the user's AWS account.
Command::
aws codecommit list-repositories
Output::
{
"repositories": [
{
"repositoryName": "MyDemoRepo"
"repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE",
},
{
"repositoryName": "MyOtherDemoRepo"
"repositoryId": "cfc29ac4-b0cb-44dc-9990-f6f51EXAMPLE"
}
]
} awscli-1.10.1/awscli/examples/codecommit/update-default-branch.rst 0000666 4542626 0000144 00000000470 12652514124 026245 0 ustar pysdk-ci amazon 0000000 0000000 **To change the default branch for a repository**
This example changes the default branch for an AWS CodeCommit repository. This command produces output only if there are errors.
Command::
aws codecommit update-default-branch --repository-name MyDemoRepo --default-branch-name MyNewBranch
Output::
None. awscli-1.10.1/awscli/examples/codecommit/delete-repository.rst 0000666 4542626 0000144 00000000364 12652514124 025567 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a repository**
This example shows how to delete an AWS CodeCommit repository.
Command::
aws codecommit delete-repository --repository-name MyDemoRepo
Output::
{
"repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE"
} awscli-1.10.1/awscli/examples/codecommit/delete-branch.rst 0000666 4542626 0000144 00000000470 12652514124 024603 0 ustar pysdk-ci amazon 0000000 0000000 **To delete a branch**
This example shows how to delete a branch in an AWS CodeCommit repository.
Command::
aws codecommit delete-branch --repository-name MyDemoRepo --branch-name MyNewBranch
Output::
{
"branch": {
"commitId": "317f8570EXAMPLE",
"branchName": "MyNewBranch"
}
} awscli-1.10.1/awscli/examples/swf/ 0000777 4542626 0000144 00000000000 12652514126 020031 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/examples/swf/list-domains.rst 0000666 4542626 0000144 00000005240 12652514124 023165 0 ustar pysdk-ci amazon 0000000 0000000 Listing your Domains
--------------------
To list the SWF domains that you have registered for your account, you can use ``swf list-domains``. There is only one
required parameter: ``--registration-status``, which you can set to either ``REGISTERED`` or ``DEPRECATED``.
Here's a typical example::
aws swf list-domains --registration-status REGISTERED
Result::
{
"domainInfos": [
{
"status": "REGISTERED",
"name": "DataFrobotz"
},
{
"status": "REGISTERED",
"name": "erontest"
}
]
}
If you set ``--registration-status`` to ``DEPRECATED``, you will see deprecated domains (domains that cannot register
new workflows or activities, but that can still be queried). For example::
aws swf list-domains --registration-status DEPRECATED
Result::
{
"domainInfos": [
{
"status": "DEPRECATED",
"name": "MyNeatNewDomain"
}
]
}
If you have many domains, you can set the ``--maximum-page-size`` option to limit the number of results returned. If
there are more results to return than the maximum number that you specified, you will receive a ``nextPageToken`` that
you can send to the next call to ``list-domains`` to retrieve additional entries.
Here's an example of using ``--maximum-page-size``::
aws swf list-domains --registration-status REGISTERED --maximum-page-size 1
Result::
{
"domainInfos": [
{
"status": "REGISTERED",
"name": "DataFrobotz"
}
],
"nextPageToken": "AAAAKgAAAAEAAAAAAAAAA2QJKNtidVgd49TTeNwYcpD+QKT2ynuEbibcQWe2QKrslMGe63gpS0MgZGpcpoKttL4OCXRFn98Xif557it+wSZUsvUDtImjDLvguyuyyFdIZtvIxIKEOPm3k2r4OjAGaFsGOuVbrKljvla7wdU7FYH3OlkNCP8b7PBj9SBkUyGoiAghET74P93AuVIIkdKGtQ=="
}
When you make the call again, this time supplying the value of ``nextPageToken`` in the ``--next-page-token`` argument,
you'll get another page of results::
aws swf list-domains --registration-status REGISTERED --maximum-page-size 1 --next-page-token "AAAAKgAAAAEAAAAAAAAAA2QJKNtidVgd49TTeNwYcpD+QKT2ynuEbibcQWe2QKrslMGe63gpS0MgZGpcpoKttL4OCXRFn98Xif557it+wSZUsvUDtImjDLvguyuyyFdIZtvIxIKEOPm3k2r4OjAGaFsGOuVbrKljvla7wdU7FYH3OlkNCP8b7PBj9SBkUyGoiAghET74P93AuVIIkdKGtQ=="
Result::
{
"domainInfos": [
{
"status": "REGISTERED",
"name": "erontest"
}
]
}
When there are no further pages of results to retrieve, ``nextPageToken`` will not be returned in the results.
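The token-passing loop above can be sketched generically. In this sketch, ``fetch_page`` is a hypothetical stand-in for one invocation of ``aws swf list-domains`` (with ``--next-page-token`` supplied on every call after the first); it is not part of the CLI:

```python
# Generic pagination sketch: keep re-issuing the request, passing the
# previous response's nextPageToken, until no token comes back.
def list_all_domains(fetch_page):
    token = None
    domains = []
    while True:
        response = fetch_page(next_page_token=token)
        domains.extend(info["name"] for info in response["domainInfos"])
        token = response.get("nextPageToken")
        if token is None:  # no token means this was the last page
            return domains

# Canned pages standing in for two successive CLI calls:
pages = [
    {"domainInfos": [{"status": "REGISTERED", "name": "DataFrobotz"}],
     "nextPageToken": "opaque-token"},
    {"domainInfos": [{"status": "REGISTERED", "name": "erontest"}]},
]

def fake_fetch(next_page_token=None):
    return pages.pop(0)

print(list_all_domains(fake_fetch))  # ['DataFrobotz', 'erontest']
```

The same pattern applies to any paginated SWF call (``list-workflow-types``, ``list-activity-types``, and so on).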
See Also
--------
- `ListDomains `__
in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/register-domain.rst

Registering a Domain
--------------------
You can use the AWS CLI to register new domains with the ``swf register-domain`` command. There are two required
parameters: ``--name``, which takes the domain name, and ``--workflow-execution-retention-period-in-days``, which takes
an integer specifying the number of days to retain workflow execution data on this domain, up to a maximum period of 90
days (for more information, see the `SWF FAQ `). If you specify zero (0)
for this value, the retention period is automatically set to the maximum duration. Otherwise, workflow execution data
will not be retained after the specified number of days have passed.
Here's an example of registering a new domain:
::
$ aws swf register-domain --name MyNeatNewDomain --workflow-execution-retention-period-in-days 0
""
When you register a domain, nothing is returned (""), but you can use
``swf list-domains`` or ``swf describe-domain`` to see the new domain.
For example:
::
$ aws swf list-domains --registration-status REGISTERED
{
"domainInfos": [
{
"status": "REGISTERED",
"name": "DataFrobotz"
},
{
"status": "REGISTERED",
"name": "MyNeatNewDomain"
},
{
"status": "REGISTERED",
"name": "erontest"
}
]
}
Using ``swf describe-domain``:
::
aws swf describe-domain --name MyNeatNewDomain
{
"domainInfo": {
"status": "REGISTERED",
"name": "MyNeatNewDomain"
},
"configuration": {
"workflowExecutionRetentionPeriodInDays": "0"
}
}
See Also
--------
- `RegisterDomain `__
in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/deprecate-domain.rst

Deprecating a Domain
--------------------
To deprecate a domain (you can still see it, but cannot create new
workflow executions or register types on it), use
``swf deprecate-domain``. It has a sole required parameter, ``--name``,
which takes the name of the domain to deprecate.
::
$ aws swf deprecate-domain --name MyNeatNewDomain
""
As with ``register-domain``, no output is returned. If you use
``list-domains`` to view the registered domains, however, you will see
that the domain has been deprecated and no longer appears in the
returned data:
::
$ aws swf list-domains --registration-status REGISTERED
{
"domainInfos": [
{
"status": "REGISTERED",
"name": "DataFrobotz"
},
{
"status": "REGISTERED",
"name": "erontest"
}
]
}
If you use ``--registration-status DEPRECATED`` with ``list-domains``,
you will see your deprecated domain:
::
$ aws swf list-domains --registration-status DEPRECATED
{
"domainInfos": [
{
"status": "DEPRECATED",
"name": "MyNeatNewDomain"
}
]
}
You can still use ``describe-domain`` to get information about a
deprecated domain:
::
$ aws swf describe-domain --name MyNeatNewDomain
{
"domainInfo": {
"status": "DEPRECATED",
"name": "MyNeatNewDomain"
},
"configuration": {
"workflowExecutionRetentionPeriodInDays": "0"
}
}
See Also
--------
- `DeprecateDomain `__
in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/count-open-workflow-executions.rst

Counting Open Workflow Executions
---------------------------------
You can use ``swf count-open-workflow-executions`` to retrieve the number of open workflow executions for a given
domain. You can specify filters to count specific classes of executions.
The ``--domain`` and ``--start-time-filter`` arguments are required. All other arguments are optional.
Here is a basic example::
aws swf count-open-workflow-executions --domain DataFrobtzz --start-time-filter "{ \"latestDate\" : 1377129600, \"oldestDate\" : 1370044800 }"
Result::
{
"count": 4,
"truncated": false
}
If "truncated" is ``true``, then "count" represents the maximum number that can be returned by Amazon SWF. Any further
results are truncated.
To reduce the number of results returned, you can:
- modify the ``--start-time-filter`` values to narrow the time range that is searched.
- use the ``--execution-filter``, ``--tag-filter`` or ``--type-filter`` arguments to further
  filter the results. These arguments are mutually exclusive: you can specify *only one of them* in a request.
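The time filters take Unix epoch timestamps. If you prefer working with calendar dates, they can be converted like this (a sketch; the two values match the timestamps used in the example above):

```python
from datetime import datetime, timezone

def to_epoch(year, month, day):
    """Return the Unix timestamp for midnight UTC on the given date."""
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

# The filter in the example above covers 2013-06-01 through 2013-08-22:
oldest = to_epoch(2013, 6, 1)    # 1370044800
latest = to_epoch(2013, 8, 22)   # 1377129600
print({"latestDate": latest, "oldestDate": oldest})
```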
For more information, see `CountOpenWorkflowExecutions`_ in the *Amazon Simple Workflow Service API Reference*
.. _`CountOpenWorkflowExecutions`: http://docs.aws.amazon.com/amazonswf/latest/apireference/API_CountOpenWorkflowExecutions.html
awscli-1.10.1/awscli/examples/swf/register-workflow-type.rst

Registering a Workflow Type
---------------------------
To register a Workflow type with the AWS CLI, use the ``swf register-workflow-type`` command::
aws swf register-workflow-type --domain DataFrobtzz --name "MySimpleWorkflow" --workflow-version "v1"
If successful, the command returns no result. On an error (for example, if you try to register the same workflow type
twice, or specify a domain that doesn't exist) you will get a response in JSON::
{
"message": "WorkflowType=[name=MySimpleWorkflow, version=v1]",
"__type": "com.amazonaws.swf.base.model#TypeAlreadyExistsFault"
}
The ``--domain``, ``--name`` and ``--workflow-version`` are required. You can also set the workflow description,
timeouts, and child workflow policy.
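A script driving this command can branch on the fault type carried in the ``__type`` field of the error document. A sketch (the JSON below is the error shown above):

```python
import json

error_doc = '''{
    "message": "WorkflowType=[name=MySimpleWorkflow, version=v1]",
    "__type": "com.amazonaws.swf.base.model#TypeAlreadyExistsFault"
}'''

error = json.loads(error_doc)
# The fault name follows the "#" in the __type field.
fault = error["__type"].split("#", 1)[1]
if fault == "TypeAlreadyExistsFault":
    print("workflow type already registered:", error["message"])
```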
See Also
--------
- `RegisterWorkflowType `__ in the
*Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/count-closed-workflow-executions.rst

Counting Closed Workflow Executions
-----------------------------------
You can use ``swf count-closed-workflow-executions`` to retrieve the number of closed workflow executions for a given
domain. You can specify filters to count specific classes of executions.
The ``--domain`` and *either* ``--close-time-filter`` or ``--start-time-filter`` arguments are required. All other
arguments are optional.
Here is a basic example::
aws swf count-closed-workflow-executions --domain DataFrobtzz --close-time-filter "{ \"latestDate\" : 1377129600, \"oldestDate\" : 1370044800 }"
Result::
{
"count": 2,
"truncated": false
}
If "truncated" is ``true``, then "count" represents the maximum number that can be returned by Amazon SWF. Any further
results are truncated.
To reduce the number of results returned, you can:
- modify the ``--close-time-filter`` or ``--start-time-filter`` values to narrow the time range that is searched. These
  two filters are mutually exclusive: you can specify *only one of them* in a request.
- use the ``--close-status-filter``, ``--execution-filter``, ``--tag-filter`` or ``--type-filter`` arguments to further
  filter the results. These arguments are also mutually exclusive: you can specify only one of them.
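Escaping the JSON for ``--close-time-filter`` by hand is error-prone; a small helper (a sketch, not part of the CLI) can generate the argument value for you:

```python
import json

def time_filter(oldest_date, latest_date):
    """Build the JSON value for --close-time-filter / --start-time-filter."""
    return json.dumps({"oldestDate": oldest_date, "latestDate": latest_date})

# Produces the same filter used in the example above:
arg = time_filter(1370044800, 1377129600)
print(arg)  # {"oldestDate": 1370044800, "latestDate": 1377129600}
```

The resulting string can be passed directly as the filter argument, letting the shell's own quoting handle the rest.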
See Also
--------
- `CountClosedWorkflowExecutions `_ in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/list-workflow-types.rst

Listing Workflow Types
----------------------
To get a list of the workflow types for a domain, use ``swf list-workflow-types``. The ``--domain`` and
``--registration-status`` arguments are required. Here's a simple example::
aws swf list-workflow-types --domain DataFrobtzz --registration-status REGISTERED
Results::
{
"typeInfos": [
{
"status": "REGISTERED",
"creationDate": 1371454149.598,
"description": "DataFrobtzz subscribe workflow",
"workflowType": {
"version": "v3",
"name": "subscribe"
}
}
]
}
As with ``list-activity-types``, you can use the ``--name`` argument to select only workflow types with a particular
name, and use the ``--maximum-page-size`` argument in coordination with ``--next-page-token`` to page results. To
reverse the order in which results are returned, use ``--reverse-order``.
See Also
--------
- `ListWorkflowTypes `_
in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/list-activity-types.rst

Listing Activity Types
----------------------
To get a list of the activity types for a domain, use ``swf list-activity-types``. The ``--domain`` and
``--registration-status`` arguments are required. Here's a simple example::
aws swf list-activity-types --domain DataFrobtzz --registration-status REGISTERED
Results::
{
"typeInfos": [
{
"status": "REGISTERED",
"creationDate": 1371454150.451,
"activityType": {
"version": "1",
"name": "confirm-user-email"
},
"description": "subscribe confirm-user-email activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454150.709,
"activityType": {
"version": "1",
"name": "confirm-user-phone"
},
"description": "subscribe confirm-user-phone activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454149.871,
"activityType": {
"version": "1",
"name": "get-subscription-info"
},
"description": "subscribe get-subscription-info activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454150.909,
"activityType": {
"version": "1",
"name": "send-subscription-success"
},
"description": "subscribe send-subscription-success activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454150.085,
"activityType": {
"version": "1",
"name": "subscribe-user-sns"
},
"description": "subscribe subscribe-user-sns activity"
}
]
}
You can use the ``--name`` argument to select only activity types with a particular name::
aws swf list-activity-types --domain DataFrobtzz --registration-status REGISTERED --name "send-subscription-success"
Results::
{
"typeInfos": [
{
"status": "REGISTERED",
"creationDate": 1371454150.909,
"activityType": {
"version": "1",
"name": "send-subscription-success"
},
"description": "subscribe send-subscription-success activity"
}
]
}
To retrieve results in pages, you can set the ``--maximum-page-size`` argument. If more results are returned than will
fit in a page of results, a "nextPageToken" will be returned in the result set::
aws swf list-activity-types --domain DataFrobtzz --registration-status REGISTERED --maximum-page-size 2
Results::
{
"nextPageToken": "AAAAKgAAAAEAAAAAAAAAA1Gp1BelJq+PmHvAnDxJYbup8+0R4LVtbXLDl7QNY7C3OpHo9Sz06D/GuFz1OyC73umBQ1tOPJ/gC/aYpzDMqUIWIA1T9W0s2DryyZX4OC/6Lhk9/o5kdsuWMSBkHhgaZjgwp3WJINIFJFdaSMxY2vYAX7AtRtpcqJuBDDRE9RaRqDGYqIYUMltarkiqpSY1ZVveBasBvlvyUb/WGAaqehiDz7/JzLT/wWNNUMOd+Nhe",
"typeInfos": [
{
"status": "REGISTERED",
"creationDate": 1371454150.451,
"activityType": {
"version": "1",
"name": "confirm-user-email"
},
"description": "subscribe confirm-user-email activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454150.709,
"activityType": {
"version": "1",
"name": "confirm-user-phone"
},
"description": "subscribe confirm-user-phone activity"
}
]
}
You can pass the nextPageToken value to the next call to ``list-activity-types`` in the ``--next-page-token`` argument,
retrieving the next page of results::
aws swf list-activity-types --domain DataFrobtzz --registration-status REGISTERED --maximum-page-size 2 \
--next-page-token "AAAAKgAAAAEAAAAAAAAAA1Gp1BelJq+PmHvAnDxJYbup8+0R4LVtbXLDl7QNY7C3OpHo9Sz06D/GuFz1OyC73umBQ1tOPJ/gC/aYpzDMqUIWIA1T9W0s2DryyZX4OC/6Lhk9/o5kdsuWMSBkHhgaZjgwp3WJINIFJFdaSMxY2vYAX7AtRtpcqJuBDDRE9RaRqDGYqIYUMltarkiqpSY1ZVveBasBvlvyUb/WGAaqehiDz7/JzLT/wWNNUMOd+Nhe"
Result::
{
"nextPageToken": "AAAAKgAAAAEAAAAAAAAAAw+7LZ4GRZPzTqBHsp2wBxWB8m1sgLCclgCuq3J+h/m3+vOfFqtkcjLwV5cc4OjNAzTCuq/XcylPumGwkjbajtqpZpbqOcVNfjFxGoi0LB2Olbvv0krbUISBvlpFPmSWpDSZJsxg5UxCcweteSlFn1PNSZ/MoinBZo8OTkjMuzcsTuKOzH9wCaR8ITcALJ3SaqHU3pyIRS5hPmFA3OLIc8zaAepjlaujo6hntNSCruB4",
"typeInfos": [
{
"status": "REGISTERED",
"creationDate": 1371454149.871,
"activityType": {
"version": "1",
"name": "get-subscription-info"
},
"description": "subscribe get-subscription-info activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454150.909,
"activityType": {
"version": "1",
"name": "send-subscription-success"
},
"description": "subscribe send-subscription-success activity"
}
]
}
If there are still more results to return, "nextPageToken" will be returned with the results. When there are no more
pages of results to return, "nextPageToken" will *not* be returned in the result set.
You can use the ``--reverse-order`` argument to reverse the order of the returned results. This also affects paged
results::
aws swf list-activity-types --domain DataFrobtzz --registration-status REGISTERED --maximum-page-size 2 --reverse-order
Results::
{
"nextPageToken": "AAAAKgAAAAEAAAAAAAAAAwXcpu5ePSyQkrC+8WMbmSrenuZC2ZkIXQYBPB/b9xIOVkj+bMEFhGj0KmmJ4rF7iddhjf7UMYCsfGkEn7mk+yMCgVc1JxDWmB0EH46bhcmcLmYNQihMDmUWocpr7To6/R7CLu0St1gkFayxOidJXErQW0zdNfQaIWAnF/cwioBbXlkz1fQzmDeU3M5oYGMPQIrUqkPq7pMEW0q0lK5eDN97NzFYdZZ/rlcLDWPZhUjY",
"typeInfos": [
{
"status": "REGISTERED",
"creationDate": 1371454150.085,
"activityType": {
"version": "1",
"name": "subscribe-user-sns"
},
"description": "subscribe subscribe-user-sns activity"
},
{
"status": "REGISTERED",
"creationDate": 1371454150.909,
"activityType": {
"version": "1",
"name": "send-subscription-success"
},
"description": "subscribe send-subscription-success activity"
}
]
}
See Also
--------
- `ListActivityTypes `_
in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/swf/describe-domain.rst

Getting Information About a Domain
----------------------------------
To get detailed information about a particular domain, use the
``swf describe-domain`` command. There is one required parameter:
``--name``, which takes the name of the domain you want information
about. For example:
::
$ aws swf describe-domain --name DataFrobotz
{
"domainInfo": {
"status": "REGISTERED",
"name": "DataFrobotz"
},
"configuration": {
"workflowExecutionRetentionPeriodInDays": "1"
}
}
You can also use ``describe-domain`` to get information about deprecated
domains:
::
$ aws swf describe-domain --name MyNeatNewDomain
{
"domainInfo": {
"status": "DEPRECATED",
"name": "MyNeatNewDomain"
},
"configuration": {
"workflowExecutionRetentionPeriodInDays": "0"
}
}
See Also
--------
- `DescribeDomain `__
in the *Amazon Simple Workflow Service API Reference*
awscli-1.10.1/awscli/examples/cloudfront/list-distributions.rst

The following command retrieves a list of distributions::
aws cloudfront list-distributions
Output::
{
"DistributionList": {
"Marker": "",
"Items": [
{
"Status": "Deployed",
"CacheBehaviors": {
"Quantity": 0
},
"Origins": {
"Items": [
{
"OriginPath": "",
"S3OriginConfig": {
"OriginAccessIdentity": ""
},
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com"
}
],
"Quantity": 1
},
"DomainName": "d2wkuj2w9l34gt.cloudfront.net",
"PriceClass": "PriceClass_All",
"Enabled": true,
"DefaultCacheBehavior": {
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TargetOriginId": "my-origin",
"ViewerProtocolPolicy": "allow-all",
"ForwardedValues": {
"Headers": {
"Quantity": 0
},
"Cookies": {
"Forward": "none"
},
"QueryString": true
},
"MaxTTL": 31536000,
"SmoothStreaming": false,
"DefaultTTL": 86400,
"AllowedMethods": {
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Items": [
"HEAD",
"GET"
],
"Quantity": 2
},
"Quantity": 2
},
"MinTTL": 3600
},
"Comment": "",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": true,
"MinimumProtocolVersion": "SSLv3"
},
"CustomErrorResponses": {
"Quantity": 0
},
"LastModifiedTime": "2015-08-31T21:11:29.093Z",
"Id": "S11A16G5KZMEQD",
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"Aliases": {
"Quantity": 0
}
}
],
"IsTruncated": false,
"MaxItems": 100,
"Quantity": 1
}
}

awscli-1.10.1/awscli/examples/cloudfront/get-distribution-config.rst

The following command gets a distribution config for a CloudFront distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront get-distribution-config --id S11A16G5KZMEQD
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``.
Output::
{
"ETag": "E37HOT42DHPVYH",
"DistributionConfig": {
"Comment": "",
"CacheBehaviors": {
"Quantity": 0
},
"Logging": {
"Bucket": "",
"Prefix": "",
"Enabled": false,
"IncludeCookies": false
},
"Origins": {
"Items": [
{
"OriginPath": "",
"S3OriginConfig": {
"OriginAccessIdentity": ""
},
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com"
}
],
"Quantity": 1
},
"DefaultRootObject": "",
"PriceClass": "PriceClass_All",
"Enabled": true,
"DefaultCacheBehavior": {
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TargetOriginId": "my-origin",
"ViewerProtocolPolicy": "allow-all",
"ForwardedValues": {
"Headers": {
"Quantity": 0
},
"Cookies": {
"Forward": "none"
},
"QueryString": true
},
"MaxTTL": 31536000,
"SmoothStreaming": false,
"DefaultTTL": 86400,
"AllowedMethods": {
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Items": [
"HEAD",
"GET"
],
"Quantity": 2
},
"Quantity": 2
},
"MinTTL": 3600
},
"CallerReference": "my-distribution-2015-09-01",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": true,
"MinimumProtocolVersion": "SSLv3"
},
"CustomErrorResponses": {
"Quantity": 0
},
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"Aliases": {
"Quantity": 0
}
}
}

awscli-1.10.1/awscli/examples/cloudfront/list-invalidations.rst

The following command retrieves a list of invalidations for a CloudFront web distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront list-invalidations --distribution-id S11A16G5KZMEQD
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``.
Output::
{
"InvalidationList": {
"Marker": "",
"Items": [
{
"Status": "Completed",
"Id": "YNY2LI2BVJ4NJU",
"CreateTime": "2015-08-31T21:15:52.042Z"
}
],
"IsTruncated": false,
"MaxItems": 100,
"Quantity": 1
}
}
awscli-1.10.1/awscli/examples/cloudfront/delete-distribution.rst

The following command deletes a CloudFront distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront delete-distribution --id S11A16G5KZMEQD --if-match 8UBQECEJX24ST
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``. The distribution must be disabled with ``update-distribution`` prior to deletion. The ETag value ``8UBQECEJX24ST`` for the ``if-match`` parameter is available in the output of ``update-distribution``, ``get-distribution`` or ``get-distribution-config``.

awscli-1.10.1/awscli/examples/cloudfront/get-distribution.rst

The following command gets a distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront get-distribution --id S11A16G5KZMEQD
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``.
Output::
{
"Distribution": {
"Status": "Deployed",
"DomainName": "d2wkuj2w9l34gt.cloudfront.net",
"InProgressInvalidationBatches": 0,
"DistributionConfig": {
"Comment": "",
"CacheBehaviors": {
"Quantity": 0
},
"Logging": {
"Bucket": "",
"Prefix": "",
"Enabled": false,
"IncludeCookies": false
},
"Origins": {
"Items": [
{
"OriginPath": "",
"S3OriginConfig": {
"OriginAccessIdentity": ""
},
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com"
}
],
"Quantity": 1
},
"DefaultRootObject": "",
"PriceClass": "PriceClass_All",
"Enabled": true,
"DefaultCacheBehavior": {
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TargetOriginId": "my-origin",
"ViewerProtocolPolicy": "allow-all",
"ForwardedValues": {
"Headers": {
"Quantity": 0
},
"Cookies": {
"Forward": "none"
},
"QueryString": true
},
"MaxTTL": 31536000,
"SmoothStreaming": false,
"DefaultTTL": 86400,
"AllowedMethods": {
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Items": [
"HEAD",
"GET"
],
"Quantity": 2
},
"Quantity": 2
},
"MinTTL": 3600
},
"CallerReference": "my-distribution-2015-09-01",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": true,
"MinimumProtocolVersion": "SSLv3"
},
"CustomErrorResponses": {
"Quantity": 0
},
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"Aliases": {
"Quantity": 0
}
},
"ActiveTrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"LastModifiedTime": "2015-08-31T21:11:29.093Z",
"Id": "S11A16G5KZMEQD"
},
"ETag": "E37HOT42DHPVYH"
}
awscli-1.10.1/awscli/examples/cloudfront/create-invalidation.rst

The following command creates an invalidation for a CloudFront distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront create-invalidation --distribution-id S11A16G5KZMEQD \
--paths /index.html /error.html
The ``--paths`` option automatically generates a random ``CallerReference`` each time.
Alternatively, you can use the following command to do the same thing, which lets you specify your own ``CallerReference``::
aws cloudfront create-invalidation --invalidation-batch file://invbatch.json --distribution-id S11A16G5KZMEQD
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``.
The file ``invbatch.json`` is a JSON document in the current folder that specifies two paths to invalidate::
{
"Paths": {
"Quantity": 2,
"Items": ["/index.html", "/error.html"]
},
"CallerReference": "my-invalidation-2015-09-01"
}
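A sketch for generating ``invbatch.json`` with a unique ``CallerReference`` (the timestamp-based reference used here is an assumption for illustration, not a CLI requirement — any string unique per distribution works):

```python
import json
import time

# Each invalidation batch needs a CallerReference that is unique for
# the distribution; deriving it from the current time is one option.
batch = {
    "Paths": {
        "Quantity": 2,
        "Items": ["/index.html", "/error.html"],
    },
    "CallerReference": "my-invalidation-%d" % int(time.time()),
}

with open("invbatch.json", "w") as f:
    json.dump(batch, f, indent=4)
```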
Output of both commands::
{
"Invalidation": {
"Status": "InProgress",
"InvalidationBatch": {
"Paths": {
"Items": [
"/index.html",
"/error.html"
],
"Quantity": 2
},
"CallerReference": "my-invalidation-2015-09-01"
},
"Id": "YNY2LI2BVJ4NJU",
"CreateTime": "2015-08-31T21:15:52.042Z"
},
"Location": "https://cloudfront.amazonaws.com/2015-04-17/distribution/S11A16G5KZMEQD/invalidation/YNY2LI2BVJ4NJU"
}

awscli-1.10.1/awscli/examples/cloudfront/get-invalidation.rst

The following command retrieves an invalidation with the ID ``YNY2LI2BVJ4NJU`` for a CloudFront web distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront get-invalidation --id YNY2LI2BVJ4NJU --distribution-id S11A16G5KZMEQD
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``. The invalidation ID is available in the output of ``create-invalidation`` and ``list-invalidations``.
Output::
{
"Invalidation": {
"Status": "Completed",
"InvalidationBatch": {
"Paths": {
"Items": [
"/index.html",
"/error.html"
],
"Quantity": 2
},
"CallerReference": "my-invalidation-2015-09-01"
},
"Id": "YNY2LI2BVJ4NJU",
"CreateTime": "2015-08-31T21:15:52.042Z"
}
}
awscli-1.10.1/awscli/examples/cloudfront/update-distribution.rst

The following command updates the Default Root Object to "index.html"
for a CloudFront distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront update-distribution --id S11A16G5KZMEQD \
--default-root-object index.html
The following command disables a CloudFront distribution with the ID ``S11A16G5KZMEQD``::
aws cloudfront update-distribution --id S11A16G5KZMEQD --distribution-config file://distconfig-disabled.json --if-match E37HOT42DHPVYH
The distribution ID is available in the output of ``create-distribution`` and ``list-distributions``. The ETag value ``E37HOT42DHPVYH`` for the ``if-match`` parameter is available in the output of ``create-distribution``, ``get-distribution`` or ``get-distribution-config``.
The file ``distconfig-disabled.json`` is a JSON document in the current folder that modifies the existing distribution config for ``S11A16G5KZMEQD`` to disable the distribution. This file was created by taking the existing config from the output of ``get-distribution-config`` and changing the ``Enabled`` key's value to ``false``::
{
"Comment": "",
"CacheBehaviors": {
"Quantity": 0
},
"Logging": {
"Bucket": "",
"Prefix": "",
"Enabled": false,
"IncludeCookies": false
},
"Origins": {
"Items": [
{
"OriginPath": "",
"S3OriginConfig": {
"OriginAccessIdentity": ""
},
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com"
}
],
"Quantity": 1
},
"DefaultRootObject": "",
"PriceClass": "PriceClass_All",
"Enabled": false,
"DefaultCacheBehavior": {
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TargetOriginId": "my-origin",
"ViewerProtocolPolicy": "allow-all",
"ForwardedValues": {
"Headers": {
"Quantity": 0
},
"Cookies": {
"Forward": "none"
},
"QueryString": true
},
"MaxTTL": 31536000,
"SmoothStreaming": false,
"DefaultTTL": 86400,
"AllowedMethods": {
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Items": [
"HEAD",
"GET"
],
"Quantity": 2
},
"Quantity": 2
},
"MinTTL": 3600
},
"CallerReference": "my-distribution-2015-09-01",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": true,
"MinimumProtocolVersion": "SSLv3"
},
"CustomErrorResponses": {
"Quantity": 0
},
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"Aliases": {
"Quantity": 0
}
}
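The one-line edit described above (flipping ``Enabled`` to ``false``) can also be scripted rather than done by hand. A minimal sketch, showing only the fields relevant to the toggle:

```python
import json

# Sketch: take the DistributionConfig saved from get-distribution-config,
# flip Enabled, and write the file that update-distribution will read.
config = {
    "CallerReference": "my-distribution-2015-09-01",
    "Enabled": True,
    # ... the rest of the config from get-distribution-config ...
}

config["Enabled"] = False  # the only change needed to disable

with open("distconfig-disabled.json", "w") as f:
    json.dump(config, f, indent=4)
```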
After disabling a CloudFront distribution you can delete it with ``delete-distribution``.
The output includes the updated distribution config. Note that the ``ETag`` value has also changed::
{
"Distribution": {
"Status": "InProgress",
"DomainName": "d2wkuj2w9l34gt.cloudfront.net",
"InProgressInvalidationBatches": 0,
"DistributionConfig": {
"Comment": "",
"CacheBehaviors": {
"Quantity": 0
},
"Logging": {
"Bucket": "",
"Prefix": "",
"Enabled": false,
"IncludeCookies": false
},
"Origins": {
"Items": [
{
"OriginPath": "",
"S3OriginConfig": {
"OriginAccessIdentity": ""
},
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com"
}
],
"Quantity": 1
},
"DefaultRootObject": "",
"PriceClass": "PriceClass_All",
"Enabled": false,
"DefaultCacheBehavior": {
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TargetOriginId": "my-origin",
"ViewerProtocolPolicy": "allow-all",
"ForwardedValues": {
"Headers": {
"Quantity": 0
},
"Cookies": {
"Forward": "none"
},
"QueryString": true
},
"MaxTTL": 31536000,
"SmoothStreaming": false,
"DefaultTTL": 86400,
"AllowedMethods": {
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Items": [
"HEAD",
"GET"
],
"Quantity": 2
},
"Quantity": 2
},
"MinTTL": 3600
},
"CallerReference": "my-distribution-2015-09-01",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": true,
"MinimumProtocolVersion": "SSLv3"
},
"CustomErrorResponses": {
"Quantity": 0
},
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"Aliases": {
"Quantity": 0
}
},
"ActiveTrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"LastModifiedTime": "2015-09-01T17:54:11.453Z",
"Id": "S11A16G5KZMEQD"
},
"ETag": "8UBQECEJX24ST"
}

awscli-1.10.1/awscli/examples/cloudfront/create-distribution.rst

You can create a CloudFront web distribution for an S3 domain (such as
my-bucket.s3.amazonaws.com) or for a custom domain (such as example.com).
The following command shows an example for an S3 domain, and optionally also
specifies a default root object::
aws cloudfront create-distribution \
--origin-domain-name my-bucket.s3.amazonaws.com \
--default-root-object index.html
Or you can use the following command together with a JSON document to do the
same thing::
aws cloudfront create-distribution --distribution-config file://distconfig.json
The file ``distconfig.json`` is a JSON document in the current folder that defines a CloudFront distribution::
{
"CallerReference": "my-distribution-2015-09-01",
"Aliases": {
"Quantity": 0
},
"DefaultRootObject": "index.html",
"Origins": {
"Quantity": 1,
"Items": [
{
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com",
"S3OriginConfig": {
"OriginAccessIdentity": ""
}
}
]
},
"DefaultCacheBehavior": {
"TargetOriginId": "my-origin",
"ForwardedValues": {
"QueryString": true,
"Cookies": {
"Forward": "none"
}
},
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"ViewerProtocolPolicy": "allow-all",
"MinTTL": 3600
},
"CacheBehaviors": {
"Quantity": 0
},
"Comment": "",
"Logging": {
"Enabled": false,
"IncludeCookies": true,
"Bucket": "",
"Prefix": ""
},
"PriceClass": "PriceClass_All",
"Enabled": true
}
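A common mistake in hand-written configs is a ``TargetOriginId`` that doesn't match any origin ``Id``, or a ``Quantity`` that disagrees with its ``Items`` list. A quick consistency check before calling ``create-distribution`` can catch both (``check_distconfig`` is a hypothetical helper sketched here, not part of the CLI):

```python
def check_distconfig(config):
    """Sanity-check a distribution config dict before submitting it."""
    origin_ids = {o["Id"] for o in config["Origins"]["Items"]}
    target = config["DefaultCacheBehavior"]["TargetOriginId"]
    if target not in origin_ids:
        raise ValueError("TargetOriginId %r matches no origin Id" % target)
    if config["Origins"]["Quantity"] != len(config["Origins"]["Items"]):
        raise ValueError("Origins Quantity does not match Items")

# Passes silently for a consistent config like the one above:
check_distconfig({
    "Origins": {"Quantity": 1, "Items": [{"Id": "my-origin"}]},
    "DefaultCacheBehavior": {"TargetOriginId": "my-origin"},
})
```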
Output::
{
"Distribution": {
"Status": "InProgress",
"DomainName": "d2wkuj2w9l34gt.cloudfront.net",
"InProgressInvalidationBatches": 0,
"DistributionConfig": {
"Comment": "",
"CacheBehaviors": {
"Quantity": 0
},
"Logging": {
"Bucket": "",
"Prefix": "",
"Enabled": false,
"IncludeCookies": false
},
"Origins": {
"Items": [
{
"OriginPath": "",
"S3OriginConfig": {
"OriginAccessIdentity": ""
},
"Id": "my-origin",
"DomainName": "my-bucket.s3.amazonaws.com"
}
],
"Quantity": 1
},
"DefaultRootObject": "",
"PriceClass": "PriceClass_All",
"Enabled": true,
"DefaultCacheBehavior": {
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TargetOriginId": "my-origin",
"ViewerProtocolPolicy": "allow-all",
"ForwardedValues": {
"Headers": {
"Quantity": 0
},
"Cookies": {
"Forward": "none"
},
"QueryString": true
},
"MaxTTL": 31536000,
"SmoothStreaming": false,
"DefaultTTL": 86400,
"AllowedMethods": {
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Items": [
"HEAD",
"GET"
],
"Quantity": 2
},
"Quantity": 2
},
"MinTTL": 3600
},
"CallerReference": "my-distribution-2015-09-01",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": true,
"MinimumProtocolVersion": "SSLv3"
},
"CustomErrorResponses": {
"Quantity": 0
},
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"Aliases": {
"Quantity": 0
}
},
"ActiveTrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"LastModifiedTime": "2015-08-31T21:11:29.093Z",
"Id": "S11A16G5KZMEQD"
},
"ETag": "E37HOT42DHPVYH",
"Location": "https://cloudfront.amazonaws.com/2015-04-17/distribution/S11A16G5KZMEQD"
}
awscli-1.10.1/awscli/examples/datapipeline/add-tags.rst

**To add a tag to a pipeline**
This example adds the specified tag to the specified pipeline::
aws datapipeline add-tags --pipeline-id df-00627471SOVYZEXAMPLE --tags key=environment,value=production key=owner,value=sales
To view the tags, use the ``describe-pipelines`` command. For example, the tags added in the example command appear as follows in the output for ``describe-pipelines``::
{
...
"tags": [
{
"value": "production",
"key": "environment"
},
{
"value": "sales",
"key": "owner"
}
]
...
}
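The ``key=...,value=...`` shorthand corresponds one-to-one with the tag objects in the output above. A sketch of how that shorthand maps to JSON (``parse_tag`` is a hypothetical helper for illustration, not part of the CLI):

```python
def parse_tag(shorthand):
    """Turn 'key=environment,value=production' into a tag dict."""
    tag = {}
    for part in shorthand.split(","):
        k, _, v = part.partition("=")
        tag[k] = v
    return tag

tags = [parse_tag(s) for s in
        ("key=environment,value=production", "key=owner,value=sales")]
print(tags)
# [{'key': 'environment', 'value': 'production'}, {'key': 'owner', 'value': 'sales'}]
```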
awscli-1.10.1/awscli/examples/datapipeline/activate-pipeline.rst

**To activate a pipeline**
This example activates the specified pipeline::
aws datapipeline activate-pipeline --pipeline-id df-00627471SOVYZEXAMPLE
To activate the pipeline at a specific date and time, use the following command::
aws datapipeline activate-pipeline --pipeline-id df-00627471SOVYZEXAMPLE --start-timestamp 2015-04-07T00:00:00Z
awscli-1.10.1/awscli/examples/datapipeline/deactivate-pipeline.rst

**To deactivate a pipeline**
This example deactivates the specified pipeline::
aws datapipeline deactivate-pipeline --pipeline-id df-00627471SOVYZEXAMPLE
To deactivate the pipeline only after all running activities finish, use the following command::
aws datapipeline deactivate-pipeline --pipeline-id df-00627471SOVYZEXAMPLE --no-cancel-active
awscli-1.10.1/awscli/examples/datapipeline/list-runs.rst

**To list your pipeline runs**
This example lists the runs for the specified pipeline::
aws datapipeline list-runs --pipeline-id df-00627471SOVYZEXAMPLE
The following is example output::
Name Scheduled Start Status ID Started Ended
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. EC2ResourceObj 2015-04-12T17:33:02 CREATING @EC2ResourceObj_2015-04-12T17:33:02 2015-04-12T17:33:10
2. S3InputLocation 2015-04-12T17:33:02 FINISHED @S3InputLocation_2015-04-12T17:33:02 2015-04-12T17:33:09 2015-04-12T17:33:09
3. S3OutputLocation 2015-04-12T17:33:02 WAITING_ON_DEPENDENCIES @S3OutputLocation_2015-04-12T17:33:02 2015-04-12T17:33:09
4. ShellCommandActivityObj 2015-04-12T17:33:02 WAITING_FOR_RUNNER @ShellCommandActivityObj_2015-04-12T17:33:02 2015-04-12T17:33:09
awscli-1.10.1/awscli/examples/datapipeline/put-pipeline-definition.rst

**To upload a pipeline definition**
This example uploads the specified pipeline definition to the specified pipeline::
aws datapipeline put-pipeline-definition --pipeline-id df-00627471SOVYZEXAMPLE --pipeline-definition file://my-pipeline-definition.json
The following is example output::
{
"validationErrors": [],
"errored": false,
"validationWarnings": []
}
awscli-1.10.1/awscli/examples/datapipeline/create-pipeline.rst

**To create a pipeline**
This example creates a pipeline::
aws datapipeline create-pipeline --name my-pipeline --unique-id my-pipeline-token
The following is example output::
{
"pipelineId": "df-00627471SOVYZEXAMPLE"
}
awscli-1.10.1/awscli/examples/datapipeline/get-pipeline-definition.rst

**To get a pipeline definition**
This example gets the pipeline definition for the specified pipeline::
aws datapipeline get-pipeline-definition --pipeline-id df-00627471SOVYZEXAMPLE
The following is example output::
{
"parameters": [
{
"type": "AWS::S3::ObjectKey",
"id": "myS3OutputLoc",
"description": "S3 output folder"
},
{
"default": "s3://us-east-1.elasticmapreduce.samples/pig-apache-logs/data",
"type": "AWS::S3::ObjectKey",
"id": "myS3InputLoc",
"description": "S3 input folder"
},
{
"default": "grep -rc \"GET\" ${INPUT1_STAGING_DIR}/* > ${OUTPUT1_STAGING_DIR}/output.txt",
"type": "String",
"id": "myShellCmd",
"description": "Shell command to run"
}
],
"objects": [
{
"type": "Ec2Resource",
"terminateAfter": "20 Minutes",
"instanceType": "t1.micro",
"id": "EC2ResourceObj",
"name": "EC2ResourceObj"
},
{
"name": "Default",
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"schedule": {
"ref": "DefaultSchedule"
},
"role": "DataPipelineDefaultRole",
"scheduleType": "cron",
"id": "Default"
},
{
"directoryPath": "#{myS3OutputLoc}/#{format(@scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"type": "S3DataNode",
"id": "S3OutputLocation",
"name": "S3OutputLocation"
},
{
"directoryPath": "#{myS3InputLoc}",
"type": "S3DataNode",
"id": "S3InputLocation",
"name": "S3InputLocation"
},
{
"startAt": "FIRST_ACTIVATION_DATE_TIME",
"name": "Every 15 minutes",
"period": "15 minutes",
"occurrences": "4",
"type": "Schedule",
"id": "DefaultSchedule"
},
{
"name": "ShellCommandActivityObj",
"command": "#{myShellCmd}",
"output": {
"ref": "S3OutputLocation"
},
"input": {
"ref": "S3InputLocation"
},
"stage": "true",
"type": "ShellCommandActivity",
"id": "ShellCommandActivityObj",
"runsOn": {
"ref": "EC2ResourceObj"
}
}
],
"values": {
"myS3OutputLoc": "s3://my-s3-bucket/",
"myS3InputLoc": "s3://us-east-1.elasticmapreduce.samples/pig-apache-logs/data",
"myShellCmd": "grep -rc \"GET\" ${INPUT1_STAGING_DIR}/* > ${OUTPUT1_STAGING_DIR}/output.txt"
}
}
awscli-1.10.1/awscli/examples/datapipeline/list-pipelines.rst

**To list your pipelines**
This example lists your pipelines::
aws datapipeline list-pipelines
The following is example output::
{
"pipelineIdList": [
{
"id": "df-00627471SOVYZEXAMPLE",
"name": "my-pipeline"
},
{
"id": "df-09028963KNVMREXAMPLE",
"name": "ImportDDB"
},
{
"id": "df-0870198233ZYVEXAMPLE",
"name": "CrossRegionDDB"
},
{
"id": "df-00189603TB4MZEXAMPLE",
"name": "CopyRedshift"
}
]
}
awscli-1.10.1/awscli/examples/datapipeline/remove-tags.rst

**To remove a tag from a pipeline**
This example removes the specified tag from the specified pipeline::
aws datapipeline remove-tags --pipeline-id df-00627471SOVYZEXAMPLE --tag-keys environment
awscli-1.10.1/awscli/examples/datapipeline/delete-pipeline.rst

**To delete a pipeline**
This example deletes the specified pipeline::
aws datapipeline delete-pipeline --pipeline-id df-00627471SOVYZEXAMPLE
awscli-1.10.1/awscli/examples/datapipeline/describe-pipelines.rst

**To describe your pipelines**
This example describes the specified pipeline::
aws datapipeline describe-pipelines --pipeline-ids df-00627471SOVYZEXAMPLE
The following is example output::
{
"pipelineDescriptionList": [
{
"fields": [
{
"stringValue": "PENDING",
"key": "@pipelineState"
},
{
"stringValue": "my-pipeline",
"key": "name"
},
{
"stringValue": "2015-04-07T16:05:58",
"key": "@creationTime"
},
{
"stringValue": "df-00627471SOVYZEXAMPLE",
"key": "@id"
},
{
"stringValue": "123456789012",
"key": "pipelineCreator"
},
{
"stringValue": "PIPELINE",
"key": "@sphere"
},
{
"stringValue": "123456789012",
"key": "@userId"
},
{
"stringValue": "123456789012",
"key": "@accountId"
},
{
"stringValue": "my-pipeline-token",
"key": "uniqueId"
}
],
"pipelineId": "df-00627471SOVYZEXAMPLE",
"name": "my-pipeline",
"tags": []
}
]
}
awscli-1.10.1/awscli/paramfile.py

# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
from botocore.vendored import requests
from awscli.compat import six
from awscli.compat import compat_open
logger = logging.getLogger(__name__)
# These are special cased arguments that do _not_ get the
# special param file processing. This is typically because it
# refers to an actual URI of some sort and we don't want to actually
# download the content (i.e TemplateURL in cloudformation).
PARAMFILE_DISABLED = set([
'apigateway.put-integration.uri',
'cloudformation.create-stack.template-url',
'cloudformation.update-stack.template-url',
'cloudformation.validate-template.template-url',
'cloudformation.estimate-template-cost.template-url',
'cloudformation.create-stack.stack-policy-url',
'cloudformation.update-stack.stack-policy-url',
'cloudformation.set-stack-policy.stack-policy-url',
'cloudformation.update-stack.stack-policy-during-update-url',
# We will want to change the event name to ``s3`` as opposed to
# custom in the near future along with ``s3`` to ``s3api``.
'custom.cp.website-redirect',
'custom.mv.website-redirect',
'custom.sync.website-redirect',
'iam.create-open-id-connect-provider.url',
'machinelearning.predict.predict-endpoint',
'sqs.add-permission.queue-url',
'sqs.change-message-visibility.queue-url',
'sqs.change-message-visibility-batch.queue-url',
'sqs.delete-message.queue-url',
'sqs.delete-message-batch.queue-url',
'sqs.delete-queue.queue-url',
'sqs.get-queue-attributes.queue-url',
'sqs.list-dead-letter-source-queues.queue-url',
'sqs.receive-message.queue-url',
'sqs.remove-permission.queue-url',
'sqs.send-message.queue-url',
'sqs.send-message-batch.queue-url',
'sqs.set-queue-attributes.queue-url',
'sqs.purge-queue.queue-url',
's3.copy-object.website-redirect-location',
's3.create-multipart-upload.website-redirect-location',
's3.put-object.website-redirect-location',
# Double check that this has been renamed!
'sns.subscribe.notification-endpoint',
])
class ResourceLoadingError(Exception):
pass
def get_paramfile(path):
"""Load parameter based on a resource URI.
It is possible to pass parameters to operations by referring
    to files or URIs. If such a reference is detected, this
function attempts to retrieve the data from the file or URI
and returns it. If there are any errors or if the ``path``
does not appear to refer to a file or URI, a ``None`` is
returned.
:type path: str
:param path: The resource URI, e.g. file://foo.txt. This value
may also be a non resource URI, in which case ``None`` is returned.
:return: The loaded value associated with the resource URI.
If the provided ``path`` is not a resource URI, then a
value of ``None`` is returned.
"""
data = None
if isinstance(path, six.string_types):
for prefix, function_spec in PREFIX_MAP.items():
if path.startswith(prefix):
function, kwargs = function_spec
data = function(prefix, path, **kwargs)
return data
def get_file(prefix, path, mode):
file_path = os.path.expandvars(os.path.expanduser(path[len(prefix):]))
try:
with compat_open(file_path, mode) as f:
return f.read()
except UnicodeDecodeError:
raise ResourceLoadingError(
'Unable to load paramfile (%s), text contents could '
'not be decoded. If this is a binary file, please use the '
'fileb:// prefix instead of the file:// prefix.' % file_path)
except (OSError, IOError) as e:
raise ResourceLoadingError('Unable to load paramfile %s: %s' % (
path, e))
def get_uri(prefix, uri):
try:
r = requests.get(uri)
if r.status_code == 200:
return r.text
else:
raise ResourceLoadingError(
"received non 200 status code of %s" % (
r.status_code))
except Exception as e:
raise ResourceLoadingError('Unable to retrieve %s: %s' % (uri, e))
PREFIX_MAP = {
'file://': (get_file, {'mode': 'r'}),
'fileb://': (get_file, {'mode': 'rb'}),
'http://': (get_uri, {}),
'https://': (get_uri, {}),
}
awscli-1.10.1/awscli/shorthand.py

# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Module for parsing shorthand syntax.
This module parses any CLI options that use a "shorthand"
syntax::
--foo A=b,C=d
|------|
|
Shorthand syntax
This module provides two main classes to do this.
First, there's a ``ShorthandParser`` class. This class works
on a purely syntactic level. It looks only at the string value
provided to it in order to figure out how the string should be parsed.
However, because there was a pre-existing shorthand parser, we need
to remain backwards compatible with the previous parser. One of the
things the previous parser did was use the associated JSON model to
control how the expression was parsed.
In order to accommodate this a post processing class is provided that
takes the parsed values from the ``ShorthandParser`` as well as the
corresponding JSON model for the CLI argument and makes any adjustments
necessary to maintain backwards compatibility. This is done in the
``BackCompatVisitor`` class.
"""
import re
import string
_EOF = object()
class _NamedRegex(object):
def __init__(self, name, regex_str):
self.name = name
self.regex = re.compile(regex_str, re.UNICODE)
def match(self, value):
return self.regex.match(value)
class ShorthandParseError(Exception):
def __init__(self, value, expected, actual, index):
self.value = value
self.expected = expected
self.actual = actual
self.index = index
msg = self._construct_msg()
super(ShorthandParseError, self).__init__(msg)
def _construct_msg(self):
consumed, remaining, num_spaces = self.value, '', self.index
if '\n' in self.value[:self.index]:
# If there's newlines in the consumed expression, we want
# to make sure we're only counting the spaces
# from the last newline:
# foo=bar,\n
# bar==baz
# ^
last_newline = self.value[:self.index].rindex('\n')
num_spaces = self.index - last_newline - 1
if '\n' in self.value[self.index:]:
            # If there's a newline in the remaining text, divide the
            # value into consumed and remaining:
# foo==bar,\n
# ^
# bar=baz
next_newline = self.index + self.value[self.index:].index('\n')
consumed = self.value[:next_newline]
remaining = self.value[next_newline:]
msg = (
"Expected: '%s', received: '%s' for input:\n"
"%s\n"
"%s"
"%s"
) % (self.expected, self.actual, consumed,
' ' * num_spaces + '^', remaining)
return msg
class ShorthandParser(object):
"""Parses shorthand syntax in the CLI.
Note that this parser does not rely on any JSON models to control
how to parse the shorthand syntax.
"""
    _SINGLE_QUOTED = _NamedRegex('single quoted', r'\'(?:\\\\|\\\'|[^\'])*\'')
_DOUBLE_QUOTED = _NamedRegex('double quoted', r'"(?:\\\\|\\"|[^"])*"')
_START_WORD = u'\!\#-&\(-\+\--\<\>-Z\\\\-z\u007c-\uffff'
_FIRST_FOLLOW_CHARS = u'\s\!\#-&\(-\+\--\\\\\^-\|~-\uffff'
_SECOND_FOLLOW_CHARS = u'\s\!\#-&\(-\+\--\<\>-\uffff'
_ESCAPED_COMMA = '(\\\\,)'
_FIRST_VALUE = _NamedRegex(
'first',
u'({escaped_comma}|[{start_word}])'
u'({escaped_comma}|[{follow_chars}])*'.format(
escaped_comma=_ESCAPED_COMMA,
start_word=_START_WORD,
follow_chars=_FIRST_FOLLOW_CHARS,
))
_SECOND_VALUE = _NamedRegex(
'second',
u'({escaped_comma}|[{start_word}])'
u'({escaped_comma}|[{follow_chars}])*'.format(
escaped_comma=_ESCAPED_COMMA,
start_word=_START_WORD,
follow_chars=_SECOND_FOLLOW_CHARS,
))
def __init__(self):
self._tokens = []
def parse(self, value):
"""Parse shorthand syntax.
For example::
parser = ShorthandParser()
parser.parse('a=b') # {'a': 'b'}
parser.parse('a=b,c') # {'a': ['b', 'c']}
        :type value: str
:param value: Any value that needs to be parsed.
:return: Parsed value, which will be a dictionary.
"""
self._input_value = value
self._index = 0
return self._parameter()
def _parameter(self):
# parameter = keyval *("," keyval)
params = {}
params.update(self._keyval())
while self._index < len(self._input_value):
self._expect(',', consume_whitespace=True)
params.update(self._keyval())
return params
def _keyval(self):
# keyval = key "=" [values]
key = self._key()
self._expect('=', consume_whitespace=True)
values = self._values()
return {key: values}
def _key(self):
        # key = 1*(alpha / %x30-39 / %x2d / %x5f / %x2e) ; [a-zA-Z0-9\-_.]
valid_chars = string.ascii_letters + string.digits + '-_.'
start = self._index
while not self._at_eof():
if self._current() not in valid_chars:
break
self._index += 1
return self._input_value[start:self._index]
def _values(self):
# values = csv-list / explicit-list / hash-literal
if self._at_eof():
return ''
elif self._current() == '[':
return self._explicit_list()
elif self._current() == '{':
return self._hash_literal()
else:
return self._csv_value()
def _csv_value(self):
# Supports either:
# foo=bar -> 'bar'
# ^
# foo=bar,baz -> ['bar', 'baz']
# ^
first_value = self._first_value()
self._consume_whitespace()
if self._at_eof() or self._input_value[self._index] != ',':
return first_value
self._expect(',', consume_whitespace=True)
csv_list = [first_value]
# Try to parse remaining list values.
# It's possible we don't parse anything:
# a=b,c=d
# ^-here
# In the case above, we'll hit the ShorthandParser,
# backtrack to the comma, and return a single scalar
# value 'b'.
while True:
try:
current = self._second_value()
self._consume_whitespace()
if self._at_eof():
csv_list.append(current)
break
self._expect(',', consume_whitespace=True)
csv_list.append(current)
except ShorthandParseError:
# Backtrack to the previous comma.
# This can happen when we reach this case:
# foo=a,b,c=d,e=f
# ^-start
# foo=a,b,c=d,e=f
# ^-error, "expected ',' received '='
# foo=a,b,c=d,e=f
# ^-backtrack to here.
if self._at_eof():
raise
self._backtrack_to(',')
break
if len(csv_list) == 1:
# Then this was a foo=bar case, so we expect
# this to parse to a scalar value 'bar', i.e
            # {"foo": "bar"} instead of {"foo": ["bar"]}
return first_value
return csv_list
def _value(self):
result = self._FIRST_VALUE.match(self._input_value[self._index:])
if result is not None:
consumed = self._consume_matched_regex(result)
return consumed.replace('\\,', ',').rstrip()
return ''
def _explicit_list(self):
        # explicit-list = "[" [value *("," value)] "]"
self._expect('[', consume_whitespace=True)
values = []
while self._current() != ']':
val = self._explicit_values()
values.append(val)
self._consume_whitespace()
if self._current() != ']':
self._expect(',')
self._consume_whitespace()
self._expect(']')
return values
def _explicit_values(self):
# values = csv-list / explicit-list / hash-literal
if self._current() == '[':
return self._explicit_list()
elif self._current() == '{':
return self._hash_literal()
else:
return self._first_value()
def _hash_literal(self):
self._expect('{', consume_whitespace=True)
keyvals = {}
while self._current() != '}':
key = self._key()
self._expect('=', consume_whitespace=True)
v = self._explicit_values()
self._consume_whitespace()
if self._current() != '}':
self._expect(',')
self._consume_whitespace()
keyvals[key] = v
self._expect('}')
return keyvals
def _first_value(self):
# first-value = value / single-quoted-val / double-quoted-val
if self._current() == "'":
return self._single_quoted_value()
elif self._current() == '"':
return self._double_quoted_value()
return self._value()
def _single_quoted_value(self):
# single-quoted-value = %x27 *(val-escaped-single) %x27
# val-escaped-single = %x20-26 / %x28-7F / escaped-escape /
# (escape single-quote)
return self._consume_quoted(self._SINGLE_QUOTED, escaped_char="'")
def _consume_quoted(self, regex, escaped_char=None):
value = self._must_consume_regex(regex)[1:-1]
if escaped_char is not None:
value = value.replace("\\%s" % escaped_char, escaped_char)
value = value.replace("\\\\", "\\")
return value
def _double_quoted_value(self):
return self._consume_quoted(self._DOUBLE_QUOTED, escaped_char='"')
def _second_value(self):
if self._current() == "'":
return self._single_quoted_value()
elif self._current() == '"':
return self._double_quoted_value()
else:
consumed = self._must_consume_regex(self._SECOND_VALUE)
return consumed.replace('\\,', ',').rstrip()
def _expect(self, char, consume_whitespace=False):
if consume_whitespace:
self._consume_whitespace()
if self._index >= len(self._input_value):
raise ShorthandParseError(self._input_value, char,
'EOF', self._index)
actual = self._input_value[self._index]
if actual != char:
raise ShorthandParseError(self._input_value, char,
actual, self._index)
self._index += 1
if consume_whitespace:
self._consume_whitespace()
def _must_consume_regex(self, regex):
result = regex.match(self._input_value[self._index:])
if result is not None:
return self._consume_matched_regex(result)
raise ShorthandParseError(self._input_value, '<%s>' % regex.name,
'', self._index)
def _consume_matched_regex(self, result):
start, end = result.span()
v = self._input_value[self._index+start:self._index+end]
self._index += (end - start)
return v
def _current(self):
# If the index is at the end of the input value,
# then _EOF will be returned.
if self._index < len(self._input_value):
return self._input_value[self._index]
return _EOF
def _at_eof(self):
return self._index >= len(self._input_value)
def _backtrack_to(self, char):
while self._index >= 0 and self._input_value[self._index] != char:
self._index -= 1
def _consume_whitespace(self):
while self._current() != _EOF and self._current() in string.whitespace:
self._index += 1
class ModelVisitor(object):
def visit(self, params, model):
self._visit({}, model, '', params)
def _visit(self, parent, shape, name, value):
method = getattr(self, '_visit_%s' % shape.type_name,
self._visit_scalar)
method(parent, shape, name, value)
def _visit_structure(self, parent, shape, name, value):
if not isinstance(value, dict):
return
for member_name, member_shape in shape.members.items():
self._visit(value, member_shape, member_name,
value.get(member_name))
def _visit_list(self, parent, shape, name, value):
if not isinstance(value, list):
return
for i, element in enumerate(value):
self._visit(value, shape.member, i, element)
def _visit_map(self, parent, shape, name, value):
if not isinstance(value, dict):
return
value_shape = shape.value
for k, v in value.items():
self._visit(value, value_shape, k, v)
def _visit_scalar(self, parent, shape, name, value):
pass
class BackCompatVisitor(ModelVisitor):
def _visit_list(self, parent, shape, name, value):
if not isinstance(value, list):
# Convert a -> [a] because they specified
# "foo=bar", but "bar" should really be ["bar"].
if value is not None:
parent[name] = [value]
else:
return super(BackCompatVisitor, self)._visit_list(
parent, shape, name, value)
def _visit_scalar(self, parent, shape, name, value):
if value is None:
return
type_name = shape.type_name
if type_name in ['integer', 'long']:
parent[name] = int(value)
elif type_name in ['double', 'float']:
parent[name] = float(value)
elif type_name == 'boolean':
# We want to make sure we only set a value
# only if "true"/"false" is specified.
if value.lower() == 'true':
parent[name] = True
elif value.lower() == 'false':
parent[name] = False
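As a rough illustration of the key=value semantics described in the module docstring, here is a toy parser. It handles only the simplest ``a=b,c=d`` form and none of the quoting, CSV-list, explicit-list, or hash-literal rules that the real ``ShorthandParser`` implements:

```python
# Toy illustration of shorthand key=value parsing (NOT the real
# ShorthandParser: no quoting, no lists, no backtracking).
def toy_shorthand(value):
    params = {}
    for pair in value.split(','):
        key, _, val = pair.partition('=')
        params[key] = val
    return params

print(toy_shorthand('a=b'))      # {'a': 'b'}
print(toy_shorthand('a=b,c=d'))  # {'a': 'b', 'c': 'd'}
```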
awscli-1.10.1/awscli/topics/s3-config.rst

:title: AWS CLI S3 Configuration
:description: Advanced configuration for AWS S3 Commands
:category: S3
:related command: s3 cp, s3 sync, s3 mv, s3 rm
The ``aws s3`` transfer commands, which include the ``cp``, ``sync``, ``mv``,
and ``rm`` commands, have additional configuration values you can use to
control S3 transfers. This topic guide discusses these parameters as well as
best practices and guidelines for setting these values.
Before discussing the specifics of these values, note that these values are
entirely optional. You should be able to use the ``aws s3`` transfer commands
without having to configure any of these values. These configuration values
are provided in the case where you need to modify one of these values, either
for performance reasons or to account for the specific environment where these
``aws s3`` commands are being run.
Configuration Values
====================
These are the configuration values you can set for S3:
* ``max_concurrent_requests`` - The maximum number of concurrent requests.
* ``max_queue_size`` - The maximum number of tasks in the task queue.
* ``multipart_threshold`` - The size threshold the CLI uses for multipart
transfers of individual files.
* ``multipart_chunksize`` - When using multipart transfers, this is the chunk
size that the CLI uses for multipart transfers of individual files.
These values must be set under the top level ``s3`` key in the AWS Config File,
which has a default location of ``~/.aws/config``. Below is an example
configuration::
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
s3 =
max_concurrent_requests = 20
max_queue_size = 10000
multipart_threshold = 64MB
multipart_chunksize = 16MB
Note that all the S3 configuration values are indented and nested under the top
level ``s3`` key.
You can also set these values programmatically using the ``aws configure set``
command. For example, to set the above values for the default profile, you
could instead run these commands::
$ aws configure set default.s3.max_concurrent_requests 20
$ aws configure set default.s3.max_queue_size 10000
$ aws configure set default.s3.multipart_threshold 64MB
$ aws configure set default.s3.multipart_chunksize 16MB
max_concurrent_requests
-----------------------
**Default** - ``10``
The ``aws s3`` transfer commands are multithreaded. At any given time,
multiple requests to Amazon S3 are in flight. For example, if you are
uploading a directory via ``aws s3 cp localdir s3://bucket/ --recursive``, the
AWS CLI could be uploading the local files ``localdir/file1``,
``localdir/file2``, and ``localdir/file3`` in parallel. The
``max_concurrent_requests`` specifies the maximum number of transfer commands
that are allowed at any given time.
You may need to change this value for a few reasons:
* Decreasing this value - On some environments, the default of 10 concurrent
requests can overwhelm a system. This may cause connection timeouts or
slow the responsiveness of the system. Lowering this value will make the
S3 transfer commands less resource intensive. The tradeoff is that
S3 transfers may take longer to complete.
* Increasing this value - In some scenarios, you may want the S3 transfers
to complete as quickly as possible, using as much network bandwidth
as necessary. In this scenario, the default number of concurrent requests
may not be sufficient to utilize all the network bandwidth available.
Increasing this value may improve the time it takes to complete an
S3 transfer.
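The bounded concurrency described above can be pictured with a small thread-pool sketch (illustrative only; this is not how the CLI is implemented internally):

```python
# Illustrative only: capping in-flight work with a thread pool,
# analogous to max_concurrent_requests (not the CLI's actual code).
from concurrent.futures import ThreadPoolExecutor

def transfer(name):
    # Stand-in for a single S3 transfer request.
    return 'done:' + name

files = ['localdir/file1', 'localdir/file2', 'localdir/file3']
# max_workers plays the role of max_concurrent_requests here.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(transfer, files))
print(results)
```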
max_queue_size
--------------
**Default** - ``1000``
The AWS CLI internally uses a producer-consumer model, where we queue up S3
tasks that are then executed by consumers, which in this case utilize a bounded
thread pool, controlled by ``max_concurrent_requests``. A task generally maps
to a single S3 operation. For example, a task could be a ``PutObjectTask``,
or a ``GetObjectTask``, or an ``UploadPartTask``. The enqueuing rate can be
much faster than the rate at which consumers are executing tasks. To avoid
unbounded growth, the task queue size is capped to a specific size. This
configuration value changes the value of that maximum number.
You generally will not need to change this value. This value also corresponds
to the number of tasks we are aware of that need to be executed. This means
that by default we can only see 1000 tasks ahead. Until the S3 command knows
the total number of tasks to execute, the progress line will show a total of
``...``. Increasing this value means that we will be able to more quickly know
the total number of tasks needed, assuming that the enqueuing rate is quicker
than the rate of task consumption. The tradeoff is that a larger max queue
size will require more memory.
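The producer-consumer model described above can be sketched with Python's bounded ``queue.Queue`` (a toy illustration, not the CLI's internals):

```python
# Toy producer/consumer with a bounded queue, analogous to
# max_queue_size capping pending tasks (not the CLI's actual code).
import queue
import threading

tasks = queue.Queue(maxsize=1000)   # like max_queue_size = 1000
done = []

def consumer():
    while True:
        task = tasks.get()
        if task is None:            # sentinel: stop consuming
            break
        done.append('ran:' + task)
        tasks.task_done()

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    tasks.put('task%d' % i)         # blocks if the queue is full
tasks.put(None)
t.join()
print(done)
```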
multipart_threshold
-------------------
**Default** - ``8MB``
When uploading, downloading, or copying a file, the S3 commands
will switch to multipart operations if the file reaches a given
size threshold. The ``multipart_threshold`` controls this value.
You can specify this value in one of two ways:
* The file size in bytes. For example, ``1048576``.
* The file size with a size suffix. You can use ``KB``, ``MB``, ``GB``,
``TB``. For example: ``10MB``, ``1GB``. Note that S3 imposes
constraints on valid values that can be used for multipart
operations.
multipart_chunksize
-------------------
**Default** - ``8MB``
Once the S3 commands have decided to use multipart operations, the
file is divided into chunks. This configuration option specifies what
the chunk size (also referred to as the part size) should be. This
value can be specified using the same semantics as ``multipart_threshold``,
that is, either as an integer number of bytes or with a size suffix.
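As an illustration of the two ways of specifying sizes, here is a hedged sketch of converting a human-readable value such as ``8MB`` to a byte count (the CLI has its own parser; this is not it, and it assumes 1024-based units):

```python
# Hedged sketch: convert "8MB"-style values to bytes, matching the
# semantics described above (assumes 1024-based units; not CLI code).
SUFFIXES = {'KB': 1024, 'MB': 1024 ** 2, 'GB': 1024 ** 3, 'TB': 1024 ** 4}

def to_bytes(value):
    value = value.strip().upper()
    for suffix, multiplier in SUFFIXES.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * multiplier
    return int(value)   # plain byte count, e.g. "1048576"

print(to_bytes('8MB'))      # 8388608
print(to_bytes('1048576'))  # 1048576
```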
awscli-1.10.1/awscli/topics/config-vars.rst

:title: AWS CLI Configuration Variables
:description: Configuration Variables for the AWS CLI
:category: General
:related command: configure, configure get, configure set
:related topic: s3-config
Configuration values for the AWS CLI can come from several sources:
* As a command line option
* As an environment variable
* As a value in the AWS CLI config file
* As a value in the AWS Shared Credential file
Some options are only available in the AWS CLI config file. This topic guide covers
all the configuration variables available in the AWS CLI.
Note that if you are just looking to get the minimum required configuration to
run the AWS CLI, we recommend running ``aws configure``, which will prompt you
for the necessary configuration values.
Config File Format
==================
The AWS CLI config file, which defaults to ``~/.aws/config`` has the following
format::
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
The ``default`` section refers to the configuration values for the default
profile. You can create profiles, which represent logical groups of
configuration. Profiles that aren't the default profile are specified by
creating a section titled "profile profilename"::
[profile testing]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Nested Values
-------------
Some service specific configuration, discussed in more detail below, has a
single top level key, with nested sub values. These sub values are denoted by
indentation::
[profile testing]
aws_access_key_id = foo
aws_secret_access_key = bar
region = us-west-2
s3 =
max_concurrent_requests=10
max_queue_size=1000
General Options
===============
The AWS CLI has a few general options:
=========== ========= ===================== ===================== ============================
Variable Option Config Entry Environment Variable Description
=========== ========= ===================== ===================== ============================
profile --profile N/A AWS_DEFAULT_PROFILE Default profile name
----------- --------- --------------------- --------------------- ----------------------------
region --region region AWS_DEFAULT_REGION Default AWS Region
----------- --------- --------------------- --------------------- ----------------------------
output --output output AWS_DEFAULT_OUTPUT Default output style
=========== ========= ===================== ===================== ============================
The third column, Config Entry, is the value you would specify in the AWS CLI
config file. By default, this file is located at ``~/.aws/config``. If you need
to change this location, you can set the ``AWS_CONFIG_FILE`` environment
variable.
The valid values of the ``output`` configuration variable are:
* json
* table
* text
When you specify a profile, either using ``--profile profile-name`` or by
setting a value for the ``AWS_DEFAULT_PROFILE`` environment variable, the profile
name you provide is used to find the corresponding section in the AWS CLI
config file. For example, specifying ``--profile development`` will instruct
the AWS CLI to look for a section in the AWS CLI config file of
``[profile development]``.
Precedence
----------
The above configuration values have the following precedence:
* Command line options
* Environment variables
* Configuration file
Credentials
===========
Credentials can be specified in several ways:
* Environment variables
* The AWS Shared Credential File
* The AWS CLI config file
=========== ===================== ===================== ============================
Variable Creds/Config Entry Environment Variable Description
=========== ===================== ===================== ============================
access_key aws_access_key_id AWS_ACCESS_KEY_ID AWS Access Key
----------- --------------------- --------------------- ----------------------------
secret_key aws_secret_access_key AWS_SECRET_ACCESS_KEY AWS Secret Key
----------- --------------------- --------------------- ----------------------------
token aws_session_token AWS_SESSION_TOKEN AWS Token (temp credentials)
=========== ===================== ===================== ============================
The second column gives the name that you can use in either the AWS CLI
config file or the AWS Shared credentials file (``~/.aws/credentials``).
The Shared Credentials File
---------------------------
The shared credentials file has a default location of
``~/.aws/credentials``. You can change the location of the shared
credentials file by setting the ``AWS_SHARED_CREDENTIALS_FILE``
environment variable.
This file is an INI formatted file with section names
corresponding to profiles. Within each section, the three configuration
variables shown above can be specified: ``aws_access_key_id``,
``aws_secret_access_key``, ``aws_session_token``. **These are the only
supported values in the shared credential file.** Also note that the
section names are different from those in the AWS CLI config file
(``~/.aws/config``).
In the AWS CLI config file, you create a new profile by creating a section of
``[profile profile-name]``, for example::

    [profile development]
    aws_access_key_id=foo
    aws_secret_access_key=bar
In the shared credentials file, profiles are not prefixed with ``profile``,
for example::

    [development]
    aws_access_key_id=foo
    aws_secret_access_key=bar
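The difference in section naming is easy to see with Python's standard ``configparser`` module; the file contents below are hypothetical examples:

```python
import configparser

# ~/.aws/config uses "[profile name]" sections for named profiles.
config_text = """
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
"""

# ~/.aws/credentials uses plain "[name]" sections.
credentials_text = """
[development]
aws_access_key_id=foo
aws_secret_access_key=bar
"""

config = configparser.ConfigParser()
config.read_string(config_text)

credentials = configparser.ConfigParser()
credentials.read_string(credentials_text)

print(config.sections())       # ['profile development']
print(credentials.sections())  # ['development']
```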
Precedence
----------
Credentials from environment variables have precedence over credentials from
the shared credentials file and the AWS CLI config file. Credentials specified
in the shared credentials file have precedence over credentials in the AWS CLI
config file.
Using AWS IAM Roles
-------------------
If you are on an Amazon EC2 instance that was launched with an IAM role, the
AWS CLI will automatically retrieve credentials for you. You do not need
to configure any credentials.
Additionally, you can specify a role for the AWS CLI to assume, and the AWS
CLI will automatically make the corresponding ``AssumeRole`` calls for you.
Note that configuration variables for using IAM roles can only be in the AWS
CLI config file.
You can specify the following configuration values for configuring an IAM role
in the AWS CLI config file:
* ``role_arn`` - The ARN of the role you want to assume.
* ``source_profile`` - The AWS CLI profile that contains credentials we should
use for the initial ``assume-role`` call.
* ``external_id`` - A unique identifier that is used by third parties to assume
a role in their customers' accounts. This maps to the ``ExternalId``
parameter in the ``AssumeRole`` operation. This is an optional parameter.
* ``mfa_serial`` - The identification number of the MFA device to use when
assuming a role. This is an optional parameter. Specify this value if the
trust policy of the role being assumed includes a condition that requires MFA
authentication. The value is either the serial number for a hardware device
(such as GAHT12345678) or an Amazon Resource Name (ARN) for a virtual device
(such as arn:aws:iam::123456789012:mfa/user).
* ``role_session_name`` - The name applied to this assume-role session. This
value affects the assumed role user ARN (such as
arn:aws:sts::123456789012:assumed-role/role_name/role_session_name). This
maps to the ``RoleSessionName`` parameter in the ``AssumeRole`` operation.
This is an optional parameter. If you do not provide this value, a
session name will be automatically generated.
If MFA authentication is not required, then you only need to specify a
``role_arn`` and a ``source_profile``.
When you specify a profile that has IAM role configuration, the AWS CLI
will make an ``AssumeRole`` call to retrieve temporary credentials. These
credentials are then cached (in ``~/.aws/cache``). Subsequent AWS CLI commands
will use the cached temporary credentials until they expire, at which point
the AWS CLI will automatically refresh them.
If you specify an ``mfa_serial``, then the first time an ``AssumeRole`` call is
made, you will be prompted to enter the MFA code. Subsequent commands will use
the cached temporary credentials. However, when the temporary credentials
expire, you will be re-prompted for another MFA code.
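The caching behavior can be sketched roughly as follows. This is an illustrative sketch only: the cache file path and the JSON layout shown here are assumptions, not the CLI's exact cache format.

```python
import json
import os
import time

# Hypothetical cache location; the real CLI manages files under ~/.aws/cache.
CACHE_FILE = os.path.expanduser("~/.aws/cache/example-role.json")

def get_temporary_credentials(assume_role):
    """Return cached temporary credentials, refreshing them when expired."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as fp:
            creds = json.load(fp)
        if creds["Expiration"] > time.time():
            return creds  # still valid; no AssumeRole call needed
    creds = assume_role()  # e.g. an STS AssumeRole call (may prompt for MFA)
    os.makedirs(os.path.dirname(CACHE_FILE), exist_ok=True)
    with open(CACHE_FILE, "w") as fp:
        json.dump(creds, fp)
    return creds
```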
Example configuration::

    # In ~/.aws/credentials:
    [development]
    aws_access_key_id=foo
    aws_secret_access_key=bar

    # In ~/.aws/config
    [profile crossaccount]
    role_arn=arn:aws:iam:...
    source_profile=development
Service Specific Configuration
==============================
aws s3
------
These values are only applicable for the ``aws s3`` commands. These
configuration values are sub values that must be specified under the top level
``s3`` key.
These are the configuration values you can set for S3:
* ``max_concurrent_requests`` - The maximum number of concurrent requests.
* ``max_queue_size`` - The maximum number of tasks in the task queue.
* ``multipart_threshold`` - The size threshold where the CLI uses multipart
transfers.
* ``multipart_chunksize`` - When using multipart transfers, this is the chunk
size that will be used.
Example config::

    [profile development]
    aws_access_key_id=foo
    aws_secret_access_key=bar
    s3 =
      max_concurrent_requests = 20
      max_queue_size = 10000
      multipart_threshold = 64MB
      multipart_chunksize = 16MB
For a more in depth discussion of these S3 configuration values, see ``aws help
s3-config``.
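Values such as ``64MB`` are human-readable sizes. A minimal parser for this notation might look like the following (a sketch, not the CLI's own parser):

```python
SIZE_SUFFIXES = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3, "TB": 1024 ** 4}

def human_readable_to_bytes(value):
    """Convert a size string such as '64MB' into a number of bytes."""
    value = value.strip().upper()
    for suffix, multiplier in SIZE_SUFFIXES.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * multiplier
    return int(value)  # plain byte count, e.g. '1048576'

print(human_readable_to_bytes("64MB"))  # 67108864
```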
awscli-1.10.1/awscli/topics/return-codes.rst

:title: AWS CLI Return Codes
:description: Describes the various return codes of the AWS CLI
:category: General
:related command: s3, s3 cp, s3 sync, s3 mv, s3 rm
The following return codes may be returned at the end of execution
of a CLI command:
* ``0`` -- Command was successful. There were no errors thrown by either
the CLI or by the service the request was made to.
* ``1`` -- Limited to ``s3`` commands; one or more S3 transfers
failed for the command executed.
* ``2`` -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command
line failed to be parsed. Parsing failures can be caused by,
but are not limited to, missing required subcommands or arguments
or using unknown commands or arguments.
Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to ``s3`` commands.
It can mean that one or more files marked
for transfer were skipped during the transfer process, while all
other files marked for transfer were successfully transferred.
Files that are skipped during the transfer process include:
files that do not exist, files that are character special devices,
block special devices, FIFOs, or sockets, and files that the user cannot
read from.
* ``130`` -- The process received a SIGINT (Ctrl-C).
* ``255`` -- Command failed. There were errors thrown by either the CLI or
by the service the request was made to.
To determine the return code of a command, run the following right after
running a CLI command. Note that this will work only on POSIX systems::
$ echo $?
Output (if successful)::
0
On Windows PowerShell, the return code can be determined by running::
> echo $lastexitcode
Output (if successful)::
0
On Windows Command Prompt, the return code can be determined by running::
> echo %errorlevel%
Output (if successful)::
0
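From a script, the same return code is available programmatically. For example, in Python (using ``sys.executable`` here as a stand-in for an actual ``aws`` invocation):

```python
import subprocess
import sys

# Run a command and inspect its return code, as `echo $?` would show it.
result = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"],
                        check=False)
print(result.returncode)  # 0

# A failing command surfaces its non-zero return code the same way.
failed = subprocess.run([sys.executable, "-c", "raise SystemExit(255)"],
                        check=False)
print(failed.returncode)  # 255
```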
awscli-1.10.1/awscli/topics/topic-tags.json

{
"config-vars": {
"category": [
"General"
],
"description": [
"Configuration Variables for the AWS CLI"
],
"related command": [
"configure",
"configure get",
"configure set"
],
"related topic": [
"s3-config"
],
"title": [
"AWS CLI Configuration Variables"
]
},
"return-codes": {
"category": [
"General"
],
"description": [
"Describes the various return codes of the AWS CLI"
],
"related command": [
"s3",
"s3 cp",
"s3 sync",
"s3 mv",
"s3 rm"
],
"title": [
"AWS CLI Return Codes"
]
},
"s3-config": {
"category": [
"S3"
],
"description": [
"Advanced configuration for AWS S3 Commands"
],
"related command": [
"s3 cp",
"s3 sync",
"s3 mv",
"s3 rm"
],
"title": [
"AWS CLI S3 Configuration"
]
}
}

awscli-1.10.1/awscli/customizations/globalargs.py

# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
import os
from botocore.client import Config
from botocore.endpoint import DEFAULT_TIMEOUT
from botocore.handlers import disable_signing
import jmespath
from awscli.compat import urlparse
def register_parse_global_args(cli):
cli.register('top-level-args-parsed', resolve_types)
cli.register('top-level-args-parsed', no_sign_request)
cli.register('top-level-args-parsed', resolve_verify_ssl)
cli.register('top-level-args-parsed', resolve_cli_read_timeout)
cli.register('top-level-args-parsed', resolve_cli_connect_timeout)
def resolve_types(parsed_args, **kwargs):
# This emulates the "type" arg from argparse, but does so in a way
# that plugins can also hook into this process.
_resolve_arg(parsed_args, 'query')
_resolve_arg(parsed_args, 'endpoint_url')
def _resolve_arg(parsed_args, name):
value = getattr(parsed_args, name, None)
if value is not None:
new_value = getattr(sys.modules[__name__], '_resolve_%s' % name)(value)
setattr(parsed_args, name, new_value)
def _resolve_query(value):
try:
return jmespath.compile(value)
except Exception as e:
raise ValueError("Bad value for --query %s: %s" % (value, str(e)))
def _resolve_endpoint_url(value):
parsed = urlparse.urlparse(value)
# Our http library requires you specify an endpoint url
# that contains a scheme, so we'll verify that up front.
if not parsed.scheme:
raise ValueError('Bad value for --endpoint-url "%s": scheme is '
'missing. Must be of the form '
'http://<hostname>/ or https://<hostname>/' % value)
return value
def resolve_verify_ssl(parsed_args, session, **kwargs):
arg_name = 'verify_ssl'
arg_value = getattr(parsed_args, arg_name, None)
if arg_value is not None:
verify = None
# Only consider setting a custom ca_bundle if they
# haven't provided --no-verify-ssl.
if not arg_value:
verify = False
else:
verify = getattr(parsed_args, 'ca_bundle', None) or \
session.get_config_variable('ca_bundle')
setattr(parsed_args, arg_name, verify)
def no_sign_request(parsed_args, session, **kwargs):
if not parsed_args.sign_request:
# In order to make signing disabled for all requests
# we need to use botocore's ``disable_signing()`` handler.
session.register('choose-signer', disable_signing)
def resolve_cli_connect_timeout(parsed_args, session, **kwargs):
arg_name = 'connect_timeout'
_resolve_timeout(session, parsed_args, arg_name)
def resolve_cli_read_timeout(parsed_args, session, **kwargs):
arg_name = 'read_timeout'
_resolve_timeout(session, parsed_args, arg_name)
def _resolve_timeout(session, parsed_args, arg_name):
arg_value = getattr(parsed_args, arg_name, None)
if arg_value is None:
arg_value = DEFAULT_TIMEOUT
arg_value = int(arg_value)
if arg_value == 0:
arg_value = None
setattr(parsed_args, arg_name, arg_value)
# Update in the default client config so that the timeout will be used
# by all clients created from then on.
_update_default_client_config(session, arg_name, arg_value)
def _update_default_client_config(session, arg_name, arg_value):
current_default_config = session.get_default_client_config()
new_default_config = Config(**{arg_name: arg_value})
if current_default_config is not None:
new_default_config = current_default_config.merge(new_default_config)
session.set_default_client_config(new_default_config)
awscli-1.10.1/awscli/customizations/removals.py

# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
Remove deprecated commands
--------------------------
This customization removes commands that are either deprecated or not
yet fully supported.
"""
import logging
from functools import partial
LOG = logging.getLogger(__name__)
def register_removals(event_handler):
cmd_remover = CommandRemover(event_handler)
cmd_remover.remove(on_event='building-command-table.ses',
remove_commands=['delete-verified-email-address',
'list-verified-email-addresses',
'verify-email-address'])
cmd_remover.remove(on_event='building-command-table.ec2',
remove_commands=['import-instance', 'import-volume'])
cmd_remover.remove(on_event='building-command-table.emr',
remove_commands=['run-job-flow', 'describe-job-flows',
'add-job-flow-steps',
'terminate-job-flows',
'list-bootstrap-actions',
'list-instance-groups',
'set-termination-protection',
'set-visible-to-all-users'])
class CommandRemover(object):
def __init__(self, events):
self._events = events
def remove(self, on_event, remove_commands):
self._events.register(on_event,
self._create_remover(remove_commands))
def _create_remover(self, commands_to_remove):
return partial(_remove_commands, commands_to_remove=commands_to_remove)
def _remove_commands(command_table, commands_to_remove, **kwargs):
# Hooked up to building-command-table.
for command in commands_to_remove:
try:
LOG.debug("Removing operation: %s", command)
del command_table[command]
except KeyError:
LOG.warning("Attempting to delete command that does not exist: %s",
command)
awscli-1.10.1/awscli/customizations/cloudsearch.py

# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.flatten import FlattenArguments, SEP
from botocore.compat import OrderedDict
LOG = logging.getLogger(__name__)
DEFAULT_VALUE_TYPE_MAP = {
'Int': int,
'Double': float,
'IntArray': int,
'DoubleArray': float
}
def index_hydrate(params, container, cli_type, key, value):
"""
Hydrate an index-field option value to construct something like::
{
'index_field': {
'DoubleOptions': {
'DefaultValue': 0.0
}
}
}
"""
if 'IndexField' not in params:
params['IndexField'] = {}
if 'IndexFieldType' not in params['IndexField']:
raise RuntimeError('You must pass the --type option.')
# Find the type and transform it for the type options field name
# E.g: int-array => IntArray
_type = params['IndexField']['IndexFieldType']
_type = ''.join([i.capitalize() for i in _type.split('-')])
# ``index_field`` of type ``latlon`` is mapped to ``Latlon``.
# However, it is defined as ``LatLon`` in the model so it needs to
# be changed.
if _type == 'Latlon':
_type = 'LatLon'
# Transform string value to the correct type?
if key.split(SEP)[-1] == 'DefaultValue':
value = DEFAULT_VALUE_TYPE_MAP.get(_type, lambda x: x)(value)
# Set the proper options field
if _type + 'Options' not in params['IndexField']:
params['IndexField'][_type + 'Options'] = {}
params['IndexField'][_type + 'Options'][key.split(SEP)[-1]] = value
FLATTEN_CONFIG = {
"define-expression": {
"expression": {
"keep": False,
"flatten": OrderedDict([
# Order is crucial here! We're
# flattening ExpressionValue to be "expression",
# but this is the name ("expression") of our parent
# key, the top level nested param.
("ExpressionName", {"name": "name"}),
("ExpressionValue", {"name": "expression"}),]),
}
},
"define-index-field": {
"index-field": {
"keep": False,
# We use an ordered dict because `type` needs to be parsed before
# any of the Options values.
"flatten": OrderedDict([
("IndexFieldName", {"name": "name"}),
("IndexFieldType", {"name": "type"}),
("IntOptions.DefaultValue", {"name": "default-value",
"type": "string",
"hydrate": index_hydrate}),
("IntOptions.FacetEnabled", {"name": "facet-enabled",
"hydrate": index_hydrate }),
("IntOptions.SearchEnabled", {"name": "search-enabled",
"hydrate": index_hydrate}),
("IntOptions.ReturnEnabled", {"name": "return-enabled",
"hydrate": index_hydrate}),
("IntOptions.SortEnabled", {"name": "sort-enabled",
"hydrate": index_hydrate}),
("TextOptions.HighlightEnabled", {"name": "highlight-enabled",
"hydrate": index_hydrate}),
("TextOptions.AnalysisScheme", {"name": "analysis-scheme",
"hydrate": index_hydrate})
])
}
}
}
def initialize(cli):
"""
The entry point for CloudSearch customizations.
"""
flattened = FlattenArguments('cloudsearch', FLATTEN_CONFIG)
flattened.register(cli)
awscli-1.10.1/awscli/customizations/iamvirtmfa.py

# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization makes it easier to deal with the bootstrapping
data returned by the ``iam create-virtual-mfa-device`` command.
You can choose to bootstrap via a QRCode or via a Base32String.
You specify your choice via the ``--bootstrap-method`` option
which should be either "QRCodePNG" or "Base32StringSeed". You
then specify the path to where you would like your bootstrapping
data saved using the ``--outfile`` option. The command will
pull the appropriate data field out of the response and write it
to the specified file. It will also remove the two bootstrap data
fields from the response.
"""
import base64
from awscli.customizations.arguments import StatefulArgument
from awscli.customizations.arguments import resolve_given_outfile_path
from awscli.customizations.arguments import is_parsed_result_successful
CHOICES = ('QRCodePNG', 'Base32StringSeed')
OUTPUT_HELP = ('The output path and file name where the bootstrap '
'information will be stored.')
BOOTSTRAP_HELP = ('Method to use to seed the virtual MFA. '
'Valid values are: %s | %s' % CHOICES)
class FileArgument(StatefulArgument):
def add_to_params(self, parameters, value):
# Validate the file here so we can raise an error prior
# to calling the service.
value = resolve_given_outfile_path(value)
super(FileArgument, self).add_to_params(parameters, value)
class IAMVMFAWrapper(object):
def __init__(self, event_handler):
self._event_handler = event_handler
self._outfile = FileArgument(
'outfile', help_text=OUTPUT_HELP, required=True)
self._method = StatefulArgument(
'bootstrap-method', help_text=BOOTSTRAP_HELP,
choices=CHOICES, required=True)
self._event_handler.register(
'building-argument-table.iam.create-virtual-mfa-device',
self._add_options)
self._event_handler.register(
'after-call.iam.CreateVirtualMFADevice', self._save_file)
def _add_options(self, argument_table, **kwargs):
argument_table['outfile'] = self._outfile
argument_table['bootstrap-method'] = self._method
def _save_file(self, parsed, **kwargs):
if not is_parsed_result_successful(parsed):
return
method = self._method.value
outfile = self._outfile.value
if method in parsed['VirtualMFADevice']:
body = parsed['VirtualMFADevice'][method]
with open(outfile, 'wb') as fp:
fp.write(base64.b64decode(body))
for choice in CHOICES:
if choice in parsed['VirtualMFADevice']:
del parsed['VirtualMFADevice'][choice]
awscli-1.10.1/awscli/customizations/streamingoutputarg.py

# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from botocore.model import Shape
from awscli.arguments import BaseCLIArgument
def add_streaming_output_arg(argument_table, operation_model,
session, **kwargs):
# Implementation detail: hooked up to 'building-argument-table'
# event.
if _has_streaming_output(operation_model):
streaming_argument_name = _get_streaming_argument_name(operation_model)
argument_table['outfile'] = StreamingOutputArgument(
response_key=streaming_argument_name,
operation_model=operation_model,
session=session, name='outfile')
def _has_streaming_output(model):
return model.has_streaming_output
def _get_streaming_argument_name(model):
return model.output_shape.serialization['payload']
class StreamingOutputArgument(BaseCLIArgument):
BUFFER_SIZE = 32768
HELP = 'Filename where the content will be saved'
def __init__(self, response_key, operation_model, name,
session, buffer_size=None):
self._name = name
self.argument_model = Shape('StreamingOutputArgument',
{'type': 'string'})
if buffer_size is None:
buffer_size = self.BUFFER_SIZE
self._buffer_size = buffer_size
# This is the key in the response body where we can find the
# streamed contents.
self._response_key = response_key
self._output_file = None
self._name = name
self._required = True
self._operation_model = operation_model
self._session = session
@property
def cli_name(self):
# Because this is a parameter, not an option, it shouldn't have the
# '--' prefix. We want to use the self.py_name to indicate that it's an
# argument.
return self._name
@property
def cli_type_name(self):
return 'string'
@property
def required(self):
return self._required
@required.setter
def required(self, value):
self._required = value
@property
def documentation(self):
return self.HELP
def add_to_parser(self, parser):
parser.add_argument(self._name, metavar=self.py_name,
help=self.HELP)
def add_to_params(self, parameters, value):
self._output_file = value
service_name = self._operation_model.service_model.endpoint_prefix
operation_name = self._operation_model.name
self._session.register('after-call.%s.%s' % (
service_name, operation_name), self.save_file)
def save_file(self, parsed, **kwargs):
if self._response_key not in parsed:
# If the response key is not in parsed, then
# we've received an error message and we'll let the AWS CLI
# error handler print out an error message. We have no
# file to save in this situation.
return
body = parsed[self._response_key]
buffer_size = self._buffer_size
with open(self._output_file, 'wb') as fp:
data = body.read(buffer_size)
while data:
fp.write(data)
data = body.read(buffer_size)
# We don't want to include the streaming param in
# the returned response.
del parsed[self._response_key]
awscli-1.10.1/awscli/customizations/waiters.py

# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from botocore import xform_name
from botocore.exceptions import DataNotFoundError
from awscli.clidriver import ServiceOperation
from awscli.customizations.commands import BasicCommand, BasicHelp, \
BasicDocHandler
def register_add_waiters(cli):
cli.register('building-command-table', add_waiters)
def add_waiters(command_table, session, command_object, **kwargs):
# Check if the command object passed in has a ``service_object``. We
# only want to add wait commands to top level model-driven services.
# These require service objects.
service_model = getattr(command_object, 'service_model', None)
if service_model is not None:
# Get a client out of the service object.
waiter_model = get_waiter_model_from_service_model(session,
service_model)
if waiter_model is None:
return
waiter_names = waiter_model.waiter_names
# If there are waiters make a wait command.
if waiter_names:
command_table['wait'] = WaitCommand(
session, waiter_model, service_model)
def get_waiter_model_from_service_model(session, service_model):
try:
model = session.get_waiter_model(service_model.service_name,
service_model.api_version)
except DataNotFoundError:
return None
return model
class WaitCommand(BasicCommand):
NAME = 'wait'
DESCRIPTION = 'Wait until a particular condition is satisfied.'
def __init__(self, session, waiter_model, service_model):
self._model = waiter_model
self._service_model = service_model
self.waiter_cmd_builder = WaiterStateCommandBuilder(
session=session,
model=self._model,
service_model=self._service_model
)
super(WaitCommand, self).__init__(session)
def _run_main(self, parsed_args, parsed_globals):
if parsed_args.subcommand is None:
raise ValueError("usage: aws [options] "
"[parameters]\naws: error: too few arguments")
def _build_subcommand_table(self):
subcommand_table = super(WaitCommand, self)._build_subcommand_table()
self.waiter_cmd_builder.build_all_waiter_state_cmds(subcommand_table)
self._add_lineage(subcommand_table)
return subcommand_table
def create_help_command(self):
return BasicHelp(self._session, self,
command_table=self.subcommand_table,
arg_table=self.arg_table,
event_handler_class=WaiterCommandDocHandler)
class WaiterStateCommandBuilder(object):
def __init__(self, session, model, service_model):
self._session = session
self._model = model
self._service_model = service_model
def build_all_waiter_state_cmds(self, subcommand_table):
"""This adds waiter state commands to the subcommand table passed in.
This is the method that adds waiter state commands like
``instance-running`` to ``ec2 wait``.
"""
waiter_names = self._model.waiter_names
for waiter_name in waiter_names:
waiter_cli_name = xform_name(waiter_name, '-')
subcommand_table[waiter_cli_name] = \
self._build_waiter_state_cmd(waiter_name)
def _build_waiter_state_cmd(self, waiter_name):
# Get the waiter
waiter_config = self._model.get_waiter(waiter_name)
# Create the cli name for the waiter operation
waiter_cli_name = xform_name(waiter_name, '-')
# Obtain the name of the service operation that is used to implement
# the specified waiter.
operation_name = waiter_config.operation
# Create an operation object to make a command for the waiter. The
# operation object is used to generate the arguments for the waiter
# state command.
operation_model = self._service_model.operation_model(operation_name)
waiter_state_command = WaiterStateCommand(
name=waiter_cli_name, parent_name='wait',
operation_caller=WaiterCaller(self._session, waiter_name),
session=self._session,
operation_model=operation_model,
)
# Build the top level description for the waiter state command.
# Most waiters do not have a description so they need to be generated
# using the waiter configuration.
waiter_state_doc_builder = WaiterStateDocBuilder(waiter_config)
description = waiter_state_doc_builder.build_waiter_state_description()
waiter_state_command.DESCRIPTION = description
return waiter_state_command
class WaiterStateDocBuilder(object):
SUCCESS_DESCRIPTIONS = {
'error': u'%s is thrown ',
'path': u'%s ',
'pathAll': u'%s for all elements ',
'pathAny': u'%s for any element ',
'status': u'%s response is received '
}
def __init__(self, waiter_config):
self._waiter_config = waiter_config
def build_waiter_state_description(self):
description = self._waiter_config.description
# Use the description provided in the waiter config file. If no
# description is provided, use a heuristic to generate a description
# for the waiter.
if not description:
description = u'Wait until '
# Look at all of the acceptors and find the success state
# acceptor.
for acceptor in self._waiter_config.acceptors:
# Build the description off of the success acceptor.
if acceptor.state == 'success':
description += self._build_success_description(acceptor)
break
# Include what operation is being used.
description += self._build_operation_description(
self._waiter_config.operation)
description += self._build_polling_description(
self._waiter_config.delay, self._waiter_config.max_attempts)
return description
def _build_success_description(self, acceptor):
matcher = acceptor.matcher
# Pick the description template to use based on what the matcher is.
success_description = self.SUCCESS_DESCRIPTIONS[matcher]
resource_description = None
# If success is based off of the state of a resource include the
# description about what resource is looked at.
if matcher in ['path', 'pathAny', 'pathAll']:
resource_description = u'JMESPath query %s returns ' % \
acceptor.argument
# Prepend the resource description to the template description
success_description = resource_description + success_description
# Complete the description by filling in the expected success state.
full_success_description = success_description % acceptor.expected
return full_success_description
def _build_operation_description(self, operation):
operation_name = xform_name(operation).replace('_', '-')
return u'when polling with ``%s``.' % operation_name
def _build_polling_description(self, delay, max_attempts):
description = (
' It will poll every %s seconds until a successful state '
'has been reached. This will exit with a return code of 255 '
'after %s failed checks.'
% (delay, max_attempts))
return description
class WaiterCaller(object):
def __init__(self, session, waiter_name):
self._session = session
self._waiter_name = waiter_name
def invoke(self, service_name, operation_name, parameters, parsed_globals):
self._session.unregister(
'after-call', unique_id='awscli-error-handler')
client = self._session.create_client(
service_name, region_name=parsed_globals.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
waiter = client.get_waiter(xform_name(self._waiter_name))
waiter.wait(**parameters)
return 0
class WaiterStateCommand(ServiceOperation):
DESCRIPTION = ''
def create_help_command(self):
help_command = super(WaiterStateCommand, self).create_help_command()
# Change the operation object's description by changing it to the
# description for a waiter state command.
self._operation_model.documentation = self.DESCRIPTION
# Change the output shape because waiters provide no output.
self._operation_model.output_shape = None
return help_command
class WaiterCommandDocHandler(BasicDocHandler):
def doc_synopsis_start(self, help_command, **kwargs):
pass
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
pass
def doc_options_start(self, help_command, **kwargs):
pass
def doc_option(self, arg_name, help_command, **kwargs):
pass
awscli-1.10.1/awscli/customizations/toplevelbool.py
# language governing permissions and limitations under the License.
"""
Top Level Boolean Parameters
----------------------------
This customization will take a parameter that has
a structure of a single boolean element and allow the argument
to be specified without a value.
Instead of having to say::
--ebs-optimized '{"Value": true}'
--ebs-optimized '{"Value": false}'
You can instead say ``--ebs-optimized``/``--no-ebs-optimized``.
"""
import logging
from functools import partial
from awscli.argprocess import detect_shape_structure
from awscli import arguments
from awscli.customizations.utils import validate_mutually_exclusive_handler
LOG = logging.getLogger(__name__)
# This sentinel object is used to distinguish when
# a parameter is not specified vs. specified with no value
# (a value of None).
_NOT_SPECIFIED = object()
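The sentinel lets the parser distinguish three cases: flag absent, flag given bare, and flag given with a value. A minimal standalone sketch of the same pattern with `argparse` (the `--ebs-optimized` flag is used here purely as an illustration):

```python
import argparse

_NOT_SPECIFIED = object()  # unique sentinel; never equal to any user input

parser = argparse.ArgumentParser()
# nargs='?' makes the value optional: a bare flag parses as None (the
# implicit const), while an omitted flag keeps the sentinel default.
parser.add_argument('--ebs-optimized', default=_NOT_SPECIFIED, nargs='?')

assert parser.parse_args([]).ebs_optimized is _NOT_SPECIFIED
assert parser.parse_args(['--ebs-optimized']).ebs_optimized is None
assert parser.parse_args(
    ['--ebs-optimized', 'Value=false']).ebs_optimized == 'Value=false'
```

A plain `default=None` could not make this distinction, which is why an `object()` sentinel is used.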
def register_bool_params(event_handler):
event_handler.register('building-argument-table.ec2.*',
partial(pull_up_bool,
event_handler=event_handler))
def _qualifies_for_simplification(arg_model):
if detect_shape_structure(arg_model) == 'structure(scalar)':
members = arg_model.members
if (len(members) == 1 and
list(members.keys())[0] == 'Value' and
list(members.values())[0].type_name == 'boolean'):
return True
return False
def pull_up_bool(argument_table, event_handler, **kwargs):
# List of tuples of (positive_bool, negative_bool)
# This is used to validate that we don't specify
# an --option and a --no-option.
boolean_pairs = []
event_handler.register(
'operation-args-parsed.ec2.*',
partial(validate_boolean_mutex_groups,
boolean_pairs=boolean_pairs))
for key, value in list(argument_table.items()):
if hasattr(value, 'argument_model'):
arg_model = value.argument_model
if _qualifies_for_simplification(arg_model):
# Swap out the existing CLIArgument for two args:
# one that supports --option and --option Value=true|false
# and another arg of --no-option.
new_arg = PositiveBooleanArgument(
value.name, arg_model, value._operation_model,
value._event_emitter,
group_name=value.name,
serialized_name=value._serialized_name)
argument_table[value.name] = new_arg
negative_name = 'no-%s' % value.name
negative_arg = NegativeBooleanParameter(
negative_name, arg_model, value._operation_model,
value._event_emitter,
action='store_true', dest='no_%s' % new_arg.py_name,
group_name=value.name,
serialized_name=value._serialized_name)
argument_table[negative_name] = negative_arg
# If we've pulled up a structure(scalar) arg
# into a pair of top level boolean args, we need
# to validate that a user only provides the argument
# once. They can't say --option/--no-option, nor
# can they say --option --option Value=false.
boolean_pairs.append((new_arg, negative_arg))
def validate_boolean_mutex_groups(boolean_pairs, parsed_args, **kwargs):
# Validate we didn't pass in an --option and a --no-option.
for positive, negative in boolean_pairs:
if getattr(parsed_args, positive.py_name) is not _NOT_SPECIFIED and \
getattr(parsed_args, negative.py_name) is not _NOT_SPECIFIED:
raise ValueError(
'Cannot specify both the "%s" option and '
'the "%s" option.' % (positive.cli_name, negative.cli_name))
class PositiveBooleanArgument(arguments.CLIArgument):
def __init__(self, name, argument_model, operation_model,
event_emitter, serialized_name, group_name):
super(PositiveBooleanArgument, self).__init__(
name, argument_model, operation_model, event_emitter,
serialized_name=serialized_name)
self._group_name = group_name
@property
def group_name(self):
return self._group_name
def add_to_parser(self, parser):
# We need to support three forms:
# --option-name
# --option-name Value=true
# --option-name Value=false
parser.add_argument(self.cli_name,
help=self.documentation,
action='store',
default=_NOT_SPECIFIED,
nargs='?')
def add_to_params(self, parameters, value):
if value is _NOT_SPECIFIED:
return
elif value is None:
# Then this means that the user explicitly
# specified this arg with no value,
# e.g. --boolean-parameter
# which means we should add a true value
# to the parameters dict.
parameters[self._serialized_name] = {'Value': True}
else:
# Otherwise the arg was specified with a value.
parameters[self._serialized_name] = self._unpack_argument(
value)
class NegativeBooleanParameter(arguments.BooleanArgument):
def __init__(self, name, argument_model, operation_model,
event_emitter, serialized_name, action='store_true',
dest=None, group_name=None):
super(NegativeBooleanParameter, self).__init__(
name, argument_model, operation_model, event_emitter,
default=_NOT_SPECIFIED, serialized_name=serialized_name)
self._group_name = group_name
def add_to_params(self, parameters, value):
if value is not _NOT_SPECIFIED and value:
parameters[self._serialized_name] = {'Value': False}
awscli-1.10.1/awscli/customizations/paginate.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""This module has customizations to unify paging parameters.
For any operation that can be paginated, we will:
* Hide the service specific pagination params. This can vary across
services and we're going to replace them with a consistent set of
arguments. The arguments will still work, but they are not
documented. This allows us to add a pagination config after
the fact and still remain backwards compatible with users that
were manually doing pagination.
* Add a ``--starting-token`` and a ``--max-items`` argument.
"""
import logging
from functools import partial
from botocore import xform_name
from botocore.exceptions import DataNotFoundError
from botocore import model
from awscli.arguments import BaseCLIArgument
logger = logging.getLogger(__name__)
STARTING_TOKEN_HELP = """
A token to specify where to start paginating. This is the
NextToken from a previously truncated response.
"""
MAX_ITEMS_HELP = """
The total number of items to return. If the total number
of items available is more than the value specified in
max-items then a NextToken will
be provided in the output that you can use to resume pagination.
This NextToken response element should not be
used directly outside of the AWS CLI.
"""
PAGE_SIZE_HELP = """
The size of each page.
"""
def register_pagination(event_handlers):
event_handlers.register('building-argument-table', unify_paging_params)
event_handlers.register_last('doc-description', add_paging_description)
def get_paginator_config(session, service_name, operation_name):
try:
paginator_model = session.get_paginator_model(service_name)
except DataNotFoundError:
return None
try:
operation_paginator_config = paginator_model.get_paginator(
operation_name)
except ValueError:
return None
return operation_paginator_config
def add_paging_description(help_command, **kwargs):
# This customization is only applied to the description of
# Operations, so we must filter out all other events.
if not isinstance(help_command.obj, model.OperationModel):
return
service_name = help_command.obj.service_model.service_name
paginator_config = get_paginator_config(
help_command.session, service_name, help_command.obj.name)
if not paginator_config:
return
help_command.doc.style.new_paragraph()
help_command.doc.writeln(
('``%s`` is a paginated operation. Multiple API calls may be issued '
'in order to retrieve the entire data set of results. You can '
'disable pagination by providing the ``--no-paginate`` argument.')
% help_command.name)
# Only include result key information if it is present.
if paginator_config.get('result_key'):
queries = paginator_config['result_key']
if type(queries) is not list:
queries = [queries]
queries = ", ".join([('``%s``' % s) for s in queries])
help_command.doc.writeln(
('When using ``--output text`` and the ``--query`` argument on a '
'paginated response, the ``--query`` argument must extract data '
'from the results of the following query expressions: %s')
% queries)
def unify_paging_params(argument_table, operation_model, event_name,
session, **kwargs):
paginator_config = get_paginator_config(
session, operation_model.service_model.service_name,
operation_model.name)
if paginator_config is None:
# We only apply these customizations to paginated responses.
return
logger.debug("Modifying paging parameters for operation: %s",
operation_model.name)
_remove_existing_paging_arguments(argument_table, paginator_config)
parsed_args_event = event_name.replace('building-argument-table.',
'operation-args-parsed.')
shadowed_args = {}
add_paging_argument(argument_table, 'starting-token',
PageArgument('starting-token', STARTING_TOKEN_HELP,
parse_type='string',
serialized_name='StartingToken'),
shadowed_args)
input_members = operation_model.input_shape.members
type_name = 'integer'
if 'limit_key' in paginator_config:
limit_key_shape = input_members[paginator_config['limit_key']]
type_name = limit_key_shape.type_name
if type_name not in PageArgument.type_map:
raise TypeError(
('Unsupported pagination type {0} for operation {1}'
' and parameter {2}').format(
type_name, operation_model.name,
paginator_config['limit_key']))
add_paging_argument(argument_table, 'page-size',
PageArgument('page-size', PAGE_SIZE_HELP,
parse_type=type_name,
serialized_name='PageSize'),
shadowed_args)
add_paging_argument(argument_table, 'max-items',
PageArgument('max-items', MAX_ITEMS_HELP,
parse_type=type_name,
serialized_name='MaxItems'),
shadowed_args)
session.register(
parsed_args_event,
partial(check_should_enable_pagination,
list(_get_all_cli_input_tokens(paginator_config)),
shadowed_args, argument_table))
def add_paging_argument(argument_table, arg_name, argument, shadowed_args):
if arg_name in argument_table:
# If there's already an entry in the arg table for this argument,
# this means we're shadowing an argument for this operation. We
# need to store this later in case pagination is turned off because
# we put these arguments back.
# See the comment in check_should_enable_pagination() for more info.
shadowed_args[arg_name] = argument_table[arg_name]
argument_table[arg_name] = argument
def check_should_enable_pagination(input_tokens, shadowed_args, argument_table,
parsed_args, parsed_globals, **kwargs):
normalized_paging_args = ['start_token', 'max_items']
for token in input_tokens:
py_name = token.replace('-', '_')
if getattr(parsed_args, py_name) is not None and \
py_name not in normalized_paging_args:
# The user has specified a manual (undocumented) pagination arg.
# We need to automatically turn pagination off.
logger.debug("User has specified a manual pagination arg. "
"Automatically setting --no-paginate.")
parsed_globals.paginate = False
# Because we've now disabled pagination, there's a chance that
# we were shadowing arguments. For example, we inject a
# --max-items argument in unify_paging_params(). If the
# operation also provides its own MaxItems (which we
# expose as --max-items) then our custom pagination arg
# was shadowing the customer's arg. When we turn pagination
# off we need to put back the original argument which is
# what we're doing here.
for key, value in shadowed_args.items():
argument_table[key] = value
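The shadow-and-restore dance above reduces to plain dict operations. A standalone sketch (argument names and the string stand-ins for argument objects are hypothetical):

```python
def add_paging_argument(argument_table, arg_name, argument, shadowed_args):
    # Remember any operation-native argument we are about to replace.
    if arg_name in argument_table:
        shadowed_args[arg_name] = argument_table[arg_name]
    argument_table[arg_name] = argument

table = {'max-items': 'native MaxItems arg'}
shadowed = {}
add_paging_argument(table, 'max-items', 'injected paging arg', shadowed)
assert table['max-items'] == 'injected paging arg'

# When pagination is later disabled, put the originals back.
table.update(shadowed)
assert table['max-items'] == 'native MaxItems arg'
```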
def _remove_existing_paging_arguments(argument_table, pagination_config):
for cli_name in _get_all_cli_input_tokens(pagination_config):
argument_table[cli_name]._UNDOCUMENTED = True
def _get_all_cli_input_tokens(pagination_config):
# Get all input tokens including the limit_key
# if it exists.
tokens = _get_input_tokens(pagination_config)
for token_name in tokens:
cli_name = xform_name(token_name, '-')
yield cli_name
if 'limit_key' in pagination_config:
key_name = pagination_config['limit_key']
cli_name = xform_name(key_name, '-')
yield cli_name
def _get_input_tokens(pagination_config):
tokens = pagination_config['input_token']
if not isinstance(tokens, list):
return [tokens]
return tokens
def _get_cli_name(param_objects, token_name):
for param in param_objects:
if param.name == token_name:
return param.cli_name.lstrip('-')
class PageArgument(BaseCLIArgument):
type_map = {
'string': str,
'integer': int,
}
def __init__(self, name, documentation, parse_type, serialized_name):
self.argument_model = model.Shape('PageArgument', {'type': 'string'})
self._name = name
self._serialized_name = serialized_name
self._documentation = documentation
self._parse_type = parse_type
self._required = False
@property
def cli_name(self):
return '--' + self._name
@property
def cli_type_name(self):
return self._parse_type
@property
def required(self):
return self._required
@required.setter
def required(self, value):
self._required = value
@property
def documentation(self):
return self._documentation
def add_to_parser(self, parser):
parser.add_argument(self.cli_name, dest=self.py_name,
type=self.type_map[self._parse_type])
def add_to_params(self, parameters, value):
if value is not None:
pagination_config = parameters.get('PaginationConfig', {})
pagination_config[self._serialized_name] = value
parameters['PaginationConfig'] = pagination_config
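`add_to_params` funnels all three paging arguments into one nested `PaginationConfig` dict. A standalone sketch of that accumulation (the token and item values below are made up):

```python
def add_paging_value(parameters, serialized_name, value):
    # Mirrors PageArgument.add_to_params: create-or-extend the nested dict.
    if value is not None:
        pagination_config = parameters.get('PaginationConfig', {})
        pagination_config[serialized_name] = value
        parameters['PaginationConfig'] = pagination_config

params = {}
add_paging_value(params, 'StartingToken', 'abc123')
add_paging_value(params, 'MaxItems', 50)
add_paging_value(params, 'PageSize', None)  # unset args are ignored
assert params == {
    'PaginationConfig': {'StartingToken': 'abc123', 'MaxItems': 50}}
```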
awscli-1.10.1/awscli/customizations/sessendemail.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization provides a simpler interface for the ``ses send-email``
command. This simplified form is based on the legacy CLI. The simple format
will be::
aws ses send-email --subject SUBJECT --from FROM_EMAIL
--to-addresses addr ... --cc-addresses addr ...
--bcc-addresses addr ... --reply-to-addresses addr ...
--return-path addr --text TEXTBODY --html HTMLBODY
"""
from awscli.customizations import utils
from awscli.arguments import CustomArgument
from awscli.customizations.utils import validate_mutually_exclusive_handler
TO_HELP = ('The email addresses of the primary recipients. '
'You can specify multiple recipients as space-separated values')
CC_HELP = ('The email addresses of copy recipients (Cc). '
'You can specify multiple recipients as space-separated values')
BCC_HELP = ('The email addresses of blind-carbon-copy recipients (Bcc). '
'You can specify multiple recipients as space-separated values')
SUBJECT_HELP = 'The subject of the message'
TEXT_HELP = 'The raw text body of the message'
HTML_HELP = 'The HTML body of the message'
def register_ses_send_email(event_handler):
event_handler.register('building-argument-table.ses.send-email',
_promote_args)
event_handler.register(
'operation-args-parsed.ses.send-email',
validate_mutually_exclusive_handler(
['destination'], ['to', 'cc', 'bcc']))
event_handler.register(
'operation-args-parsed.ses.send-email',
validate_mutually_exclusive_handler(
['message'], ['text', 'html']))
def _promote_args(argument_table, **kwargs):
argument_table['message'].required = False
argument_table['destination'].required = False
utils.rename_argument(argument_table, 'source',
new_name='from')
argument_table['to'] = AddressesArgument(
'to', 'ToAddresses', help_text=TO_HELP)
argument_table['cc'] = AddressesArgument(
'cc', 'CcAddresses', help_text=CC_HELP)
argument_table['bcc'] = AddressesArgument(
'bcc', 'BccAddresses', help_text=BCC_HELP)
argument_table['subject'] = BodyArgument(
'subject', 'Subject', help_text=SUBJECT_HELP)
argument_table['text'] = BodyArgument(
'text', 'Text', help_text=TEXT_HELP)
argument_table['html'] = BodyArgument(
'html', 'Html', help_text=HTML_HELP)
def _build_destination(params, key, value):
# Build up the Destination data structure
if 'Destination' not in params:
params['Destination'] = {}
params['Destination'][key] = value
def _build_message(params, key, value):
# Build up the Message data structure
if 'Message' not in params:
params['Message'] = {'Subject': {}, 'Body': {}}
if key in ('Text', 'Html'):
params['Message']['Body'][key] = {'Data': value}
elif key == 'Subject':
params['Message']['Subject'] = {'Data': value}
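The two builders accumulate the promoted scalar args into SES's nested request shape. A standalone sketch of `_build_message` showing the structure that results (example subject and body text are made up):

```python
def build_message(params, key, value):
    # Mirrors _build_message: lazily create the Message skeleton, then
    # slot the value into Body or Subject depending on the key.
    if 'Message' not in params:
        params['Message'] = {'Subject': {}, 'Body': {}}
    if key in ('Text', 'Html'):
        params['Message']['Body'][key] = {'Data': value}
    elif key == 'Subject':
        params['Message']['Subject'] = {'Data': value}

params = {}
build_message(params, 'Subject', 'Hello')
build_message(params, 'Text', 'plain body')
assert params == {'Message': {'Subject': {'Data': 'Hello'},
                              'Body': {'Text': {'Data': 'plain body'}}}}
```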
class AddressesArgument(CustomArgument):
def __init__(self, name, json_key, help_text='', dest=None, default=None,
action=None, required=None, choices=None, cli_type_name=None):
super(AddressesArgument, self).__init__(name=name, help_text=help_text,
required=required, nargs='+')
self._json_key = json_key
def add_to_params(self, parameters, value):
if value:
_build_destination(parameters, self._json_key, value)
class BodyArgument(CustomArgument):
def __init__(self, name, json_key, help_text='', required=None):
super(BodyArgument, self).__init__(name=name, help_text=help_text,
required=required)
self._json_key = json_key
def add_to_params(self, parameters, value):
if value:
_build_message(parameters, self._json_key, value)
awscli-1.10.1/awscli/customizations/utils.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
Utility functions to make it easier to work with customizations.
"""
import copy
from botocore.exceptions import ClientError
def rename_argument(argument_table, existing_name, new_name):
current = argument_table[existing_name]
argument_table[new_name] = current
current.name = new_name
del argument_table[existing_name]
def make_hidden_alias(argument_table, existing_name, alias_name):
"""Create a hidden alias for an existing argument.
This will copy an existing argument object in an arg table,
and add a new entry to the arg table with a different name.
The new argument will also be undocumented.
This is needed if you want to check an existing argument,
but you still need the other one to work for backwards
compatibility reasons.
"""
current = argument_table[existing_name]
copy_arg = copy.copy(current)
copy_arg._UNDOCUMENTED = True
copy_arg.name = alias_name
argument_table[alias_name] = copy_arg
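Both helpers above are small mutations of the name-to-argument mapping. A standalone sketch using a stand-in argument class (the names `source`, `from`, and `src` are illustrative only):

```python
import copy

class Arg(object):
    def __init__(self, name):
        self.name = name
        self._UNDOCUMENTED = False

table = {'source': Arg('source')}

# rename_argument: same object, new key, old key removed.
current = table.pop('source')
current.name = 'from'
table['from'] = current

# make_hidden_alias: shallow copy under a second, undocumented name.
alias = copy.copy(table['from'])
alias._UNDOCUMENTED = True
alias.name = 'src'
table['src'] = alias

assert set(table) == {'from', 'src'}
assert table['src']._UNDOCUMENTED and not table['from']._UNDOCUMENTED
```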
def rename_command(command_table, existing_name, new_name):
current = command_table[existing_name]
command_table[new_name] = current
current.name = new_name
del command_table[existing_name]
def validate_mutually_exclusive_handler(*groups):
def _handler(parsed_args, **kwargs):
return validate_mutually_exclusive(parsed_args, *groups)
return _handler
def validate_mutually_exclusive(parsed_args, *groups):
"""Validate mutually exclusive groups in the parsed args."""
args_dict = vars(parsed_args)
all_args = set(arg for group in groups for arg in group)
if not any(k in all_args for k in args_dict if args_dict[k] is not None):
# If none of the specified args are in a mutually exclusive group
# there is nothing left to validate.
return
current_group = None
for key in [k for k in args_dict if args_dict[k] is not None]:
key_group = _get_group_for_key(key, groups)
if key_group is None:
# If the key is not part of a mutex group, we can move on.
continue
if current_group is None:
current_group = key_group
elif not key_group == current_group:
raise ValueError('The key "%s" cannot be specified when one '
'of the following keys are also specified: '
'%s' % (key, ', '.join(current_group)))
def _get_group_for_key(key, groups):
for group in groups:
if key in group:
return group
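The validator walks every supplied argument and requires that they all come from a single group. A condensed standalone sketch of the same check, driven with `argparse.Namespace` (the argument names mirror the SES groups used elsewhere in this package):

```python
from argparse import Namespace

def validate_mutually_exclusive(parsed_args, *groups):
    # Every supplied (non-None) arg must come from at most one group.
    supplied = [k for k, v in vars(parsed_args).items() if v is not None]
    current_group = None
    for key in supplied:
        key_group = next((g for g in groups if key in g), None)
        if key_group is None:
            continue                    # not part of any mutex group
        if current_group is None:
            current_group = key_group
        elif key_group is not current_group:
            raise ValueError('"%s" conflicts with one of: %s'
                             % (key, ', '.join(current_group)))

# 'destination' and 'to' live in different groups, so this must raise.
args = Namespace(destination='dest@example.com', to='to@example.com',
                 region=None)
try:
    validate_mutually_exclusive(args, ['destination'], ['to', 'cc', 'bcc'])
    raised = False
except ValueError:
    raised = True
assert raised
```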
def s3_bucket_exists(s3_client, bucket_name):
bucket_exists = True
try:
# See if the bucket exists by running a head bucket
s3_client.head_bucket(Bucket=bucket_name)
except ClientError as e:
# If a client error is thrown, check that it was a 404 error.
# If it was a 404 error, then the bucket does not exist.
error_code = int(e.response['Error']['Code'])
if error_code == 404:
bucket_exists = False
return bucket_exists
def create_client_from_parsed_globals(session, service_name, parsed_globals,
overrides=None):
"""Creates a service client, taking parsed_globals into account
Any values specified in overrides will override the returned dict. Note
that this override occurs after 'region' from parsed_globals has been
translated into 'region_name' in the resulting dict.
"""
client_args = {}
if 'region' in parsed_globals:
client_args['region_name'] = parsed_globals.region
if 'endpoint_url' in parsed_globals:
client_args['endpoint_url'] = parsed_globals.endpoint_url
if 'verify_ssl' in parsed_globals:
client_args['verify'] = parsed_globals.verify_ssl
if overrides:
client_args.update(overrides)
return session.create_client(service_name, **client_args)
awscli-1.10.1/awscli/customizations/kms.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
def register_fix_kms_create_grant_docs(cli):
# Docs may refer to the actual API name (not the CLI command).
# In that case we want to remove the translation map.
cli.register('doc-title.kms.create-grant', remove_translation_map)
def remove_translation_map(help_command, **kwargs):
help_command.doc.translation_map = {}
awscli-1.10.1/awscli/customizations/s3endpoint.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Disable endpoint url customizations for s3.
There's a customization in botocore such that for S3 operations
we try to fix the S3 endpoint url based on whether a bucket is
dns compatible. We also try to map the endpoint url to the
standard S3 region (s3.amazonaws.com). This normally happens
even if a user provides an --endpoint-url (if the bucket is
DNS compatible).
This customization ensures that if a user specifies
an --endpoint-url, then we turn off the botocore customization
that messes with endpoint url.
"""
from functools import partial
from botocore.utils import fix_s3_host
def register_s3_endpoint(cli):
handler = partial(on_top_level_args_parsed, event_handler=cli)
cli.register('top-level-args-parsed', handler)
def on_top_level_args_parsed(parsed_args, event_handler, **kwargs):
# The fix_s3_host has logic to set the endpoint to the
# standard region endpoint for s3 (s3.amazonaws.com) under
# certain conditions. We're making sure that if
# the user provides an --endpoint-url, that entire handler
# is disabled.
if parsed_args.command in ['s3', 's3api'] and \
parsed_args.endpoint_url is not None:
event_handler.unregister('before-sign.s3', fix_s3_host)
awscli-1.10.1/awscli/customizations/route53.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
def register_create_hosted_zone_doc_fix(cli):
# We can remove this customization once we begin documenting
# members of complex parameters because the member's docstring
# has the necessary documentation.
cli.register(
'doc-option.route53.create-hosted-zone.hosted-zone-config',
add_private_zone_note)
def add_private_zone_note(help_command, **kwargs):
note = (
'<note>Do not include <code>PrivateZone</code> in this '
'input structure. Its value is returned in the output to the '
'command.</note>'
)
help_command.doc.include_doc_string(note)
awscli-1.10.1/awscli/customizations/ec2bundleinstance.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from hashlib import sha1
import hmac
import base64
import datetime
from awscli.compat import six
from awscli.arguments import CustomArgument
logger = logging.getLogger('ec2bundleinstance')
# This customization adds the following scalar parameters to the
# bundle-instance operation:
# --bucket:
BUCKET_DOCS = ('The bucket in which to store the AMI. '
'You can specify a bucket that you already own or '
'a new bucket that Amazon EC2 creates on your behalf. '
'If you specify a bucket that belongs to someone else, '
'Amazon EC2 returns an error.')
# --prefix:
PREFIX_DOCS = ('The prefix for the image component names being stored '
'in Amazon S3.')
# --owner-akid
OWNER_AKID_DOCS = 'The access key ID of the owner of the Amazon S3 bucket.'
# --policy
POLICY_DOCS = (
"An Amazon S3 upload policy that gives "
"Amazon EC2 permission to upload items into Amazon S3 "
"on the user's behalf. If you provide this parameter, "
"you must also provide "
"your secret access key, so we can create a policy "
"signature for you (the secret access key is not passed "
"to Amazon EC2). If you do not provide this parameter, "
"we generate an upload policy for you automatically. "
"For more information about upload policies see the "
"sections about policy construction and signatures in the "
'Amazon Simple Storage Service Developer Guide.')
# --owner-sak
OWNER_SAK_DOCS = ('The AWS secret access key for the owner of the '
'Amazon S3 bucket specified in the --bucket '
'parameter. This parameter is required so that a '
'signature can be computed for the policy.')
def _add_params(argument_table, **kwargs):
# Add the scalar parameters and also change the complex storage
# param to not be required so the user doesn't get an error from
# argparse if they only supply scalar params.
storage_arg = argument_table['storage']
storage_arg.required = False
arg = BundleArgument(storage_param='Bucket',
name='bucket',
help_text=BUCKET_DOCS)
argument_table['bucket'] = arg
arg = BundleArgument(storage_param='Prefix',
name='prefix',
help_text=PREFIX_DOCS)
argument_table['prefix'] = arg
arg = BundleArgument(storage_param='AWSAccessKeyId',
name='owner-akid',
help_text=OWNER_AKID_DOCS)
argument_table['owner-akid'] = arg
arg = BundleArgument(storage_param='_SAK',
name='owner-sak',
help_text=OWNER_SAK_DOCS)
argument_table['owner-sak'] = arg
arg = BundleArgument(storage_param='UploadPolicy',
name='policy',
help_text=POLICY_DOCS)
argument_table['policy'] = arg
def _check_args(parsed_args, **kwargs):
# This function checks the parsed args. If the user specified
# the --storage option with any of the scalar options we
# raise an error.
logger.debug(parsed_args)
arg_dict = vars(parsed_args)
if arg_dict['storage']:
for key in ('bucket', 'prefix', 'owner-akid',
'owner-sak', 'policy'):
if arg_dict[key]:
msg = ('Mixing the --storage option '
'with the simple, scalar options is '
'not recommended.')
raise ValueError(msg)
POLICY = ('{{"expiration": "{expires}",'
'"conditions": ['
'{{"bucket": "{bucket}"}},'
'{{"acl": "ec2-bundle-read"}},'
'["starts-with", "$key", "{prefix}"]'
']}}'
)
def _generate_policy(params):
# Called if there is no policy supplied by the user.
# Creates a policy that provides access for 24 hours.
delta = datetime.timedelta(hours=24)
expires = datetime.datetime.utcnow() + delta
expires_iso = expires.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
policy = POLICY.format(expires=expires_iso,
bucket=params['Bucket'],
prefix=params['Prefix'])
params['UploadPolicy'] = policy
def _generate_signature(params):
# If we have a policy and a sak, create the signature.
policy = params.get('UploadPolicy')
sak = params.get('_SAK')
if policy and sak:
policy = base64.b64encode(six.b(policy)).decode('utf-8')
new_hmac = hmac.new(sak.encode('utf-8'), digestmod=sha1)
new_hmac.update(six.b(policy))
ps = base64.encodestring(new_hmac.digest()).strip().decode('utf-8')
params['UploadPolicySignature'] = ps
del params['_SAK']
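The policy/signature pair above can be exercised standalone. A sketch using `base64.encodebytes` in place of the legacy `encodestring` (bucket, prefix, expiry, and key values below are all hypothetical):

```python
import base64
import hashlib
import hmac

policy_template = ('{{"expiration": "{expires}",'
                   '"conditions": ['
                   '{{"bucket": "{bucket}"}},'
                   '{{"acl": "ec2-bundle-read"}},'
                   '["starts-with", "$key", "{prefix}"]'
                   ']}}')
# Doubled braces survive .format() as literal JSON braces.
policy = policy_template.format(expires='2016-01-01T00:00:00.000000Z',
                                bucket='my-bundle-bucket', prefix='image/')
assert policy.startswith('{"expiration": "2016-01-01')

# Sign the base64-encoded policy with HMAC-SHA1, then base64 the digest.
encoded_policy = base64.b64encode(policy.encode('utf-8')).decode('utf-8')
mac = hmac.new(b'example-secret-key', digestmod=hashlib.sha1)
mac.update(encoded_policy.encode('utf-8'))
signature = base64.encodebytes(mac.digest()).strip().decode('utf-8')
assert len(base64.b64decode(signature)) == 20  # SHA-1 digest is 20 bytes
```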
def _check_params(params, **kwargs):
# Called just before call but prior to building the params.
# Adds information not supplied by the user.
storage = params['Storage']['S3']
if 'UploadPolicy' not in storage:
_generate_policy(storage)
if 'UploadPolicySignature' not in storage:
_generate_signature(storage)
EVENTS = [
('building-argument-table.ec2.bundle-instance', _add_params),
('operation-args-parsed.ec2.bundle-instance', _check_args),
('before-parameter-build.ec2.BundleInstance', _check_params),
]
def register_bundleinstance(event_handler):
# Register all of the events for customizing BundleInstance
for event, handler in EVENTS:
event_handler.register(event, handler)
class BundleArgument(CustomArgument):
def __init__(self, storage_param, *args, **kwargs):
super(BundleArgument, self).__init__(*args, **kwargs)
self._storage_param = storage_param
def _build_storage(self, params, value):
# Build up the Storage data structure
if 'Storage' not in params:
params['Storage'] = {'S3': {}}
params['Storage']['S3'][self._storage_param] = value
def add_to_params(self, parameters, value):
if value:
self._build_storage(parameters, value)
awscli-1.10.1/awscli/customizations/rds.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization splits the modify-option-group into two separate commands:
* ``add-option-group``
* ``remove-option-group``
In both commands the ``--options-to-remove`` and ``--options-to-add`` args will
be renamed to just ``--options``.
All the remaining args will be available in both commands (which proxy
modify-option-group).
"""
from awscli.clidriver import ServiceOperation
from awscli.clidriver import CLIOperationCaller
from awscli.customizations import utils
def register_rds_modify_split(cli):
cli.register('building-command-table.rds', _building_command_table)
cli.register('building-argument-table.rds.add-option-to-option-group',
_rename_add_option)
cli.register('building-argument-table.rds.remove-option-from-option-group',
_rename_remove_option)
def _rename_add_option(argument_table, **kwargs):
utils.rename_argument(argument_table, 'options-to-include',
new_name='options')
del argument_table['options-to-remove']
def _rename_remove_option(argument_table, **kwargs):
utils.rename_argument(argument_table, 'options-to-remove',
new_name='options')
del argument_table['options-to-include']
def _building_command_table(command_table, session, **kwargs):
# Hooked up to building-command-table.rds
# We don't need the modify-option-group operation.
del command_table['modify-option-group']
# We're going to replace modify-option-group with two commands:
# add-option-group and remove-option-group
rds_model = session.get_service_model('rds')
modify_operation_model = rds_model.operation_model('ModifyOptionGroup')
command_table['add-option-to-option-group'] = ServiceOperation(
parent_name='rds', name='add-option-to-option-group',
operation_caller=CLIOperationCaller(session),
session=session,
operation_model=modify_operation_model)
command_table['remove-option-from-option-group'] = ServiceOperation(
parent_name='rds', name='remove-option-from-option-group',
session=session,
operation_model=modify_operation_model,
operation_caller=CLIOperationCaller(session))
awscli-1.10.1/awscli/customizations/assumerole.py
import os
import json
import logging
from botocore.exceptions import ProfileNotFound
LOG = logging.getLogger(__name__)
def register_assume_role_provider(event_handlers):
event_handlers.register('session-initialized',
inject_assume_role_provider_cache,
unique_id='inject_assume_role_cred_provider_cache')
def inject_assume_role_provider_cache(session, **kwargs):
try:
cred_chain = session.get_component('credential_provider')
except ProfileNotFound:
# If a user has provided a profile that does not exist,
# trying to retrieve components/config on the session
# will raise ProfileNotFound. Sometimes this is invalid:
#
# "ec2 describe-instances --profile unknown"
#
# and sometimes this is perfectly valid:
#
# "configure set region us-west-2 --profile brand-new-profile"
#
# Because we can't know (and don't want to know) whether
# the customer is trying to do something valid, we just
# immediately return. If it's invalid something else
# up the stack will raise ProfileNotFound, otherwise
# the configure (and other) commands will work as expected.
LOG.debug("ProfileNotFound caught when trying to inject "
"assume-role cred provider cache. Not configuring "
"JSONFileCache for assume-role.")
return
provider = cred_chain.get_provider('assume-role')
provider.cache = JSONFileCache()
class JSONFileCache(object):
"""JSON file cache.
This provides a dict like interface that stores JSON serializable
objects.
The objects are serialized to JSON and stored in a file. These
values can be retrieved at a later time.
"""
CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))
def __init__(self, working_dir=CACHE_DIR):
self._working_dir = working_dir
def __contains__(self, cache_key):
actual_key = self._convert_cache_key(cache_key)
return os.path.isfile(actual_key)
def __getitem__(self, cache_key):
"""Retrieve value from a cache key."""
actual_key = self._convert_cache_key(cache_key)
try:
with open(actual_key) as f:
return json.load(f)
except (OSError, ValueError, IOError):
raise KeyError(cache_key)
def __setitem__(self, cache_key, value):
full_key = self._convert_cache_key(cache_key)
try:
file_content = json.dumps(value)
except (TypeError, ValueError):
raise ValueError("Value cannot be cached, must be "
"JSON serializable: %s" % value)
if not os.path.isdir(self._working_dir):
os.makedirs(self._working_dir)
with os.fdopen(os.open(full_key,
os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:
f.write(file_content)
def _convert_cache_key(self, cache_key):
full_path = os.path.join(self._working_dir, cache_key + '.json')
return full_path
awscli-1.10.1/awscli/customizations/flatten.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.arguments import CustomArgument
LOG = logging.getLogger(__name__)
# Nested argument member separator
SEP = '.'
class FlattenedArgument(CustomArgument):
"""
A custom argument which has been flattened from an existing structure. When
added to the call params it is hydrated back into the structure.
Supports both an object and a list of objects, in which case the flattened
parameters will hydrate a list with a single object in it.
"""
def __init__(self, name, container, prop, help_text='', required=None,
type=None, hydrate=None, hydrate_value=None):
self.type = type
self._container = container
self._property = prop
self._hydrate = hydrate
self._hydrate_value = hydrate_value
super(FlattenedArgument, self).__init__(name=name, help_text=help_text,
required=required)
@property
def cli_type_name(self):
return self.type
def add_to_params(self, parameters, value):
"""
Hydrate the original structure with the value of this flattened
argument.
TODO: This does not hydrate nested structures (``XmlName1.XmlName2``)!
To do this for now you must provide your own ``hydrate`` method.
"""
container = self._container.argument_model.name
cli_type = self._container.cli_type_name
key = self._property
LOG.debug('Hydrating {0}[{1}]'.format(container, key))
if value is not None:
# Convert type if possible
if self.type == 'boolean':
value = not value.lower() == 'false'
elif self.type in ['integer', 'long']:
value = int(value)
elif self.type in ['float', 'double']:
value = float(value)
if self._hydrate:
self._hydrate(parameters, container, cli_type, key, value)
else:
if container not in parameters:
if cli_type == 'list':
parameters[container] = [{}]
else:
parameters[container] = {}
if self._hydrate_value:
value = self._hydrate_value(value)
if cli_type == 'list':
parameters[container][0][key] = value
else:
parameters[container][key] = value
class FlattenArguments(object):
"""
Flatten arguments for one or more commands for a particular service from
a given configuration which maps service call parameters to flattened
names. Takes in a configuration dict of the form::
{
"command-cli-name": {
"argument-cli-name": {
"keep": False,
"flatten": {
"XmlName": {
"name": "flattened-cli-name",
"type": "Optional custom type",
"required": "Optional custom required",
"help_text": "Optional custom docs",
"hydrate_value": Optional function to hydrate value,
"hydrate": Optional function to hydrate
},
...
}
},
...
},
...
}
The ``type``, ``required`` and ``help_text`` arguments are entirely
optional and by default are pulled from the model. You should only set them
if you wish to override the default values in the model.
The ``keep`` argument determines whether the original command is still
accessible vs. whether it is removed. It defaults to ``False`` if not
present, which removes the original argument.
The keys inside of ``flatten`` (e.g. ``XmlName`` above) can include nested
references to structures via a colon. For example, ``XmlName1:XmlName2``
for the following structure::
{
"XmlName1": {
"XmlName2": ...
}
}
The ``hydrate_value`` function takes in a value and should return a value.
It is only called when the value is not ``None``. Example::
"hydrate_value": lambda (value): value.upper()
The ``hydrate`` function takes in a list of existing parameters, the name
of the container, its type, the name of the container key and its set
value. For the example above, the container would be
``'argument-cli-name'``, the key would be ``'XmlName'`` and the value
whatever the user passed in. Example::
def my_hydrate(params, container, cli_type, key, value):
if container not in params:
params[container] = {'default': 'values'}
params[container][key] = value
It's possible for ``cli_type`` to be ``list``, in which case you should
ensure that a list of one or more objects is hydrated rather than a
single object.
"""
def __init__(self, service_name, configs):
self.configs = configs
self.service_name = service_name
def register(self, cli):
"""
Register with a CLI instance, listening for events that build the
argument table for operations in the configuration dict.
"""
# Flatten each configured operation when they are built
service = self.service_name
for operation in self.configs:
cli.register('building-argument-table.{0}.{1}'.format(service,
operation),
self.flatten_args)
def flatten_args(self, command, argument_table, **kwargs):
# For each argument with a bag of parameters
for name, argument in self.configs[command.name].items():
argument_from_table = argument_table[name]
overwritten = False
LOG.debug('Flattening {0} argument {1} into {2}'.format(
command.name, name,
', '.join([v['name'] for k, v in argument['flatten'].items()])
))
# For each parameter to flatten out
for sub_argument, new_config in argument['flatten'].items():
config = new_config.copy()
config['container'] = argument_from_table
config['prop'] = sub_argument
# Handle nested arguments
_arg = self._find_nested_arg(
argument_from_table.argument_model, sub_argument
)
# Pull out docs and required attribute
self._merge_member_config(_arg, sub_argument, config)
# Create and set the new flattened argument
new_arg = FlattenedArgument(**config)
argument_table[new_config['name']] = new_arg
if name == new_config['name']:
overwritten = True
# Delete the original argument?
if not overwritten and ('keep' not in argument or
not argument['keep']):
del argument_table[name]
def _find_nested_arg(self, argument, name):
"""
Find and return a nested argument, if it exists. If no nested argument
is requested then the original argument is returned. If the nested
argument cannot be found, then a ValueError is raised.
"""
if SEP in name:
# Find the actual nested argument to pull out
LOG.debug('Finding nested argument in {0}'.format(name))
for piece in name.split(SEP)[:-1]:
for member_name, member in argument.members.items():
if member_name == piece:
argument = member
break
else:
raise ValueError('Invalid piece {0}'.format(piece))
return argument
def _merge_member_config(self, argument, name, config):
"""
Merges an existing config taken from the configuration dict with an
existing member of an existing argument object. This pulls in
attributes like ``required`` and ``help_text`` if they have not been
overridden in the configuration dict. Modifies the config in-place.
"""
# Pull out docs and required attribute
for member_name, member in argument.members.items():
if member_name == name.split(SEP)[-1]:
if 'help_text' not in config:
config['help_text'] = member.documentation
if 'required' not in config:
config['required'] = member_name in argument.required_members
if 'type' not in config:
config['type'] = member.type_name
break
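The hydration behavior described above can be sketched standalone; the parameter names below are illustrative, not taken from a real service model:

```python
# Fold flattened CLI values back into the nested structure the API
# expects, handling both object and list containers.
def hydrate(params, container, cli_type, key, value):
    if container not in params:
        params[container] = [{}] if cli_type == 'list' else {}
    target = (params[container][0] if cli_type == 'list'
              else params[container])
    target[key] = value

# Object container: two flattened options land in one structure.
params = {}
hydrate(params, 'Settings', 'structure', 'Name', 'demo')
hydrate(params, 'Settings', 'structure', 'Size', 100)

# List container: the flattened options hydrate a single-element list.
list_params = {}
hydrate(list_params, 'Rules', 'list', 'Name', 'rule-1')
```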
awscli-1.10.1/awscli/customizations/awslambda.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import zipfile
import copy
from contextlib import closing
from botocore.vendored import six
from awscli.arguments import CustomArgument, CLIArgument
from awscli.customizations import utils
ERROR_MSG = (
"--zip-file must be a file with the fileb:// prefix.\n"
"Example usage: --zip-file fileb://path/to/file.zip")
ZIP_DOCSTRING = ('<p>The path to the zip file of the code you are uploading. '
                 'Example: fileb://code.zip</p>')
def register_lambda_create_function(cli):
cli.register('building-argument-table.lambda.create-function',
_extract_code_and_zip_file_arguments)
cli.register('building-argument-table.lambda.update-function-code',
_modify_zipfile_docstring)
cli.register('process-cli-arg.lambda.update-function-code',
validate_is_zip_file)
def validate_is_zip_file(cli_argument, value, **kwargs):
if cli_argument.name == 'zip-file':
_should_contain_zip_content(value)
def _extract_code_and_zip_file_arguments(session, argument_table, **kwargs):
argument_table['zip-file'] = ZipFileArgument(
'zip-file', help_text=ZIP_DOCSTRING, cli_type_name='blob')
code_argument = argument_table['code']
code_model = copy.deepcopy(code_argument.argument_model)
del code_model.members['ZipFile']
argument_table['code'] = CodeArgument(
name='code',
argument_model=code_model,
operation_model=code_argument._operation_model,
is_required=False,
event_emitter=session.get_component('event_emitter'),
serialized_name='Code'
)
def _modify_zipfile_docstring(session, argument_table, **kwargs):
if 'zip-file' in argument_table:
argument_table['zip-file'].documentation = ZIP_DOCSTRING
def _should_contain_zip_content(value):
if not isinstance(value, bytes):
# If it's not bytes it's basically impossible for
# this to be valid zip content, but we'll at least
# still try to load the contents as a zip file
# to be absolutely sure.
value = value.encode('utf-8')
fileobj = six.BytesIO(value)
try:
with closing(zipfile.ZipFile(fileobj)) as f:
f.infolist()
except zipfile.BadZipfile:
raise ValueError(ERROR_MSG)
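A self-contained check mirroring ``_should_contain_zip_content``: build a real zip archive in memory, confirm it validates, and confirm that junk bytes are rejected (the entry name is hypothetical):

```python
import zipfile
from contextlib import closing
from io import BytesIO

def looks_like_zip(value):
    # Same probe as above: try to read the archive's member list.
    try:
        with closing(zipfile.ZipFile(BytesIO(value))) as f:
            f.infolist()
        return True
    except zipfile.BadZipfile:
        return False

buf = BytesIO()
with closing(zipfile.ZipFile(buf, 'w')) as z:
    z.writestr('lambda_function.py', 'def handler(event, context): pass\n')
```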
class ZipFileArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
_should_contain_zip_content(value)
zip_file_param = {'ZipFile': value}
if parameters.get('Code'):
parameters['Code'].update(zip_file_param)
else:
parameters['Code'] = zip_file_param
class CodeArgument(CLIArgument):
def add_to_params(self, parameters, value):
if value is None:
return
unpacked = self._unpack_argument(value)
if 'ZipFile' in unpacked:
raise ValueError("ZipFile cannot be provided "
"as part of the --code argument. "
"Please use the '--zip-file' "
"option instead to specify a zip file.")
if parameters.get('Code'):
parameters['Code'].update(unpacked)
else:
parameters['Code'] = unpacked
awscli-1.10.1/awscli/customizations/preview.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""This module enables the preview-mode customization.
If a service is marked as being in preview mode, then any attempts
to call operations on that service will print a message pointing
the user to alternate solutions. A user can still access this
service by enabling the service in their config file via:
[preview]
servicename=true
or by running:
aws configure set preview.servicename true
Also any service that is marked as being in preview will *not*
be listed in the help docs, unless the service has been enabled
in the config file as shown above.
"""
import logging
import sys
import textwrap
logger = logging.getLogger(__name__)
PREVIEW_SERVICES = [
'cloudfront',
'sdb',
'efs',
]
def register_preview_commands(events):
events.register('building-command-table.main', mark_as_preview)
def mark_as_preview(command_table, session, **kwargs):
# These are services that are marked as preview but are
# explicitly enabled in the config file.
allowed_services = _get_allowed_services(session)
for preview_service in PREVIEW_SERVICES:
is_enabled = False
if preview_service in allowed_services:
# Then we don't need to swap it as a preview
# service, the user has specifically asked to
# enable this service.
logger.debug("Preview service enabled through config file: %s",
preview_service)
is_enabled = True
original_command = command_table[preview_service]
preview_cls = type(
'PreviewCommand',
(PreviewModeCommandMixin, original_command.__class__), {})
command_table[preview_service] = preview_cls(
cli_name=original_command.name,
session=session,
service_name=original_command.service_model.service_name,
is_enabled=is_enabled)
# We also want to register a handler that will update the
# description in the docs to say that this is a preview service.
session.get_component('event_emitter').register_last(
'doc-description.%s' % preview_service,
update_description_with_preview)
def update_description_with_preview(help_command, **kwargs):
style = help_command.doc.style
style.start_note()
style.bold(PreviewModeCommandMixin.HELP_SNIPPET.strip())
# bcdoc does not currently allow for what I'd like to do
# which is have a code block like:
#
# ::
# [preview]
# service=true
#
# aws configure set preview.service true
#
# So for now we're just going to add the configure command
# to enable this.
style.doc.write("You can enable this service by running: ")
# The service name will always be the first element in the
# event class for the help object
service_name = help_command.event_class.split('.')[0]
style.code("aws configure set preview.%s true" % service_name)
style.end_note()
def _get_allowed_services(session):
    # For a service to be marked as preview, it must be in the
    # [preview] section and it must have the exact value 'true'.
allowed = []
preview_services = session.full_config.get('preview', {})
for preview, value in preview_services.items():
if value == 'true':
allowed.append(preview)
return allowed
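A standalone sketch of the opt-in check above; in this simplified version, as in the function itself, the comparison is an exact string match against ``'true'``:

```python
def get_allowed_services(full_config):
    # Collect services under [preview] whose value is the string 'true'.
    preview_services = full_config.get('preview', {})
    return [name for name, value in preview_services.items()
            if value == 'true']

# Hypothetical parsed config: only cloudfront has opted in.
config = {'preview': {'cloudfront': 'true', 'sdb': 'false'}}
allowed = get_allowed_services(config)
```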
class PreviewModeCommandMixin(object):
ENABLE_DOCS = textwrap.dedent("""\
However, if you'd like to use the "aws {service}" commands with the
AWS CLI, you can enable this service by adding the following to your CLI
config file:
[preview]
{service}=true
or by running:
aws configure set preview.{service} true
""")
HELP_SNIPPET = ("AWS CLI support for this service is only "
"available in a preview stage.\n")
def __init__(self, *args, **kwargs):
self._is_enabled = kwargs.pop('is_enabled')
super(PreviewModeCommandMixin, self).__init__(*args, **kwargs)
def __call__(self, args, parsed_globals):
if self._is_enabled or self._is_help_command(args):
return super(PreviewModeCommandMixin, self).__call__(
args, parsed_globals)
else:
return self._display_opt_in_message()
def _is_help_command(self, args):
return args and args[-1] == 'help'
def _display_opt_in_message(self):
sys.stderr.write(self.HELP_SNIPPET)
sys.stderr.write("\n")
# Then let them know how to enable this service.
sys.stderr.write(self.ENABLE_DOCS.format(service=self._service_name))
return 1
awscli-1.10.1/awscli/customizations/opsworks.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import datetime
import json
import logging
import os
import platform
import re
import shlex
import socket
import subprocess
import tempfile
import textwrap
from awscli.compat import shlex_quote, urlopen
from awscli.customizations.commands import BasicCommand
from awscli.customizations.utils import create_client_from_parsed_globals
from awscli.errorhandler import ClientError
LOG = logging.getLogger(__name__)
IAM_USER_POLICY_NAME = "OpsWorks-Instance"
IAM_USER_POLICY_TIMEOUT = datetime.timedelta(minutes=15)
IAM_PATH = '/AWS/OpsWorks/'
HOSTNAME_RE = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.I)
INSTANCE_ID_RE = re.compile(r"^i-[0-9a-f]+$")
IP_ADDRESS_RE = re.compile(r"^\d+\.\d+\.\d+\.\d+$")
IDENTITY_URL = \
    "http://169.254.169.254/latest/dynamic/instance-identity/document"
REMOTE_SCRIPT = """
set -e
umask 007
AGENT_TMP_DIR=$(mktemp -d /tmp/opsworks-agent-installer.XXXXXXXXXXXXXXXX)
curl --retry 5 -L %(agent_installer_url)s | tar xz -C $AGENT_TMP_DIR
cat > $AGENT_TMP_DIR/opsworks-agent-installer/preconfig <<EOF
%(preconfig)s
EOF
exec sudo /bin/sh -c "\
OPSWORKS_ASSETS_DOWNLOAD_BUCKET=%(assets_download_bucket)s \
$AGENT_TMP_DIR/opsworks-agent-installer/opsworks-agent-installer/boot-registration; \
rm -rf $AGENT_TMP_DIR"
"""
class OpsWorksRegister(BasicCommand):
    NAME = "register"
    ARG_TABLE = [
        {'name': 'target', 'positional_arg': True, 'nargs': '?',
         'synopsis': '[<target>]',
         'help_text': """Either the EC2 instance ID or the hostname of the
             instance or machine to be registered with OpsWorks.
             Cannot be used together with `--local`."""},
    ]
def __init__(self, session):
super(OpsWorksRegister, self).__init__(session)
self._stack = None
self._ec2_instance = None
self._prov_params = None
self._use_address = None
self._use_hostname = None
self._name_for_iam = None
def _create_clients(self, args, parsed_globals):
self.iam = self._session.create_client('iam')
self.opsworks = create_client_from_parsed_globals(
self._session, 'opsworks', parsed_globals)
def _run_main(self, args, parsed_globals):
self._create_clients(args, parsed_globals)
self.prevalidate_arguments(args)
self.retrieve_stack(args)
self.validate_arguments(args)
self.determine_details(args)
self.create_iam_entities()
self.setup_target_machine(args)
def prevalidate_arguments(self, args):
"""
Validates command line arguments before doing anything else.
"""
if not args.target and not args.local:
raise ValueError("One of target or --local is required.")
elif args.target and args.local:
raise ValueError(
"Arguments target and --local are mutually exclusive.")
if args.local and platform.system() != 'Linux':
raise ValueError(
"Non-Linux instances are not supported by AWS OpsWorks.")
if args.ssh and (args.username or args.private_key):
raise ValueError(
"Argument --override-ssh cannot be used together with "
"--ssh-username or --ssh-private-key.")
if args.infrastructure_class == 'ec2':
if args.private_ip:
raise ValueError(
"--override-private-ip is not supported for EC2.")
if args.public_ip:
raise ValueError(
"--override-public-ip is not supported for EC2.")
if args.hostname:
if not HOSTNAME_RE.match(args.hostname):
raise ValueError(
"Invalid hostname: '%s'. Hostnames must consist of "
"letters, digits and dashes only and must not start or "
"end with a dash." % args.hostname)
def retrieve_stack(self, args):
"""
        Retrieves the stack from the API, thereby ensuring that it exists.
Provides `self._stack`, `self._prov_params`, `self._use_address`, and
`self._ec2_instance`.
"""
LOG.debug("Retrieving stack and provisioning parameters")
self._stack = self.opsworks.describe_stacks(
StackIds=[args.stack_id]
)['Stacks'][0]
self._prov_params = \
self.opsworks.describe_stack_provisioning_parameters(
StackId=self._stack['StackId']
)
if args.infrastructure_class == 'ec2' and not args.local:
LOG.debug("Retrieving EC2 instance information")
ec2 = self._session.create_client(
'ec2', region_name=self._stack['Region'])
# `desc_args` are arguments for the describe_instances call,
# whereas `conditions` is a list of lambdas for further filtering
# on the results of the call.
desc_args = {'Filters': []}
conditions = []
# make sure that the platforms (EC2/VPC) and VPC IDs of the stack
# and the instance match
if 'VpcId' in self._stack:
desc_args['Filters'].append(
{'Name': 'vpc-id', 'Values': [self._stack['VpcId']]}
)
else:
# Cannot search for non-VPC instances directly, thus filter
# afterwards
conditions.append(lambda instance: 'VpcId' not in instance)
# target may be an instance ID, an IP address, or a name
if INSTANCE_ID_RE.match(args.target):
desc_args['InstanceIds'] = [args.target]
elif IP_ADDRESS_RE.match(args.target):
# Cannot search for either private or public IP at the same
# time, thus filter afterwards
conditions.append(
lambda instance:
instance.get('PrivateIpAddress') == args.target or
instance.get('PublicIpAddress') == args.target)
# also use the given address to connect
self._use_address = args.target
else:
# names are tags
desc_args['Filters'].append(
{'Name': 'tag:Name', 'Values': [args.target]}
)
# find all matching instances
instances = [
i
for r in ec2.describe_instances(**desc_args)['Reservations']
for i in r['Instances']
if all(c(i) for c in conditions)
]
if not instances:
raise ValueError(
"Did not find any instance matching %s." % args.target)
elif len(instances) > 1:
raise ValueError(
"Found multiple instances matching %s: %s." % (
args.target,
", ".join(i['InstanceId'] for i in instances)))
self._ec2_instance = instances[0]
def validate_arguments(self, args):
"""
Validates command line arguments using the retrieved information.
"""
if args.hostname:
instances = self.opsworks.describe_instances(
StackId=self._stack['StackId']
)['Instances']
if any(args.hostname.lower() == instance['Hostname']
for instance in instances):
raise ValueError(
"Invalid hostname: '%s'. Hostnames must be unique within "
"a stack." % args.hostname)
if args.infrastructure_class == 'ec2' and args.local:
# make sure the regions match
region = json.loads(urlopen(IDENTITY_URL).read())['region']
if region != self._stack['Region']:
raise ValueError(
"The stack's and the instance's region must match.")
def determine_details(self, args):
"""
Determine details (like the address to connect to and the hostname to
use) from the given arguments and the retrieved data.
Provides `self._use_address` (if not provided already),
`self._use_hostname` and `self._name_for_iam`.
"""
# determine the address to connect to
if not self._use_address:
if args.local:
pass
elif args.infrastructure_class == 'ec2':
if 'PublicIpAddress' in self._ec2_instance:
self._use_address = self._ec2_instance['PublicIpAddress']
elif 'PrivateIpAddress' in self._ec2_instance:
LOG.warn(
"Instance does not have a public IP address. Trying "
"to use the private address to connect.")
self._use_address = self._ec2_instance['PrivateIpAddress']
else:
# Should never happen
raise ValueError(
"The instance does not seem to have an IP address.")
elif args.infrastructure_class == 'on-premises':
self._use_address = args.target
# determine the names to use
if args.hostname:
self._use_hostname = args.hostname
self._name_for_iam = args.hostname
elif args.local:
self._use_hostname = None
self._name_for_iam = socket.gethostname()
else:
self._use_hostname = None
self._name_for_iam = args.target
def create_iam_entities(self):
"""
Creates an IAM group, user and corresponding credentials.
Provides `self.access_key`.
"""
LOG.debug("Creating the IAM group if necessary")
group_name = "OpsWorks-%s" % clean_for_iam(self._stack['StackId'])
try:
self.iam.create_group(GroupName=group_name, Path=IAM_PATH)
LOG.debug("Created IAM group %s", group_name)
except ClientError as e:
if e.error_code == 'EntityAlreadyExists':
LOG.debug("IAM group %s exists, continuing", group_name)
# group already exists, good
pass
else:
raise
# create the IAM user, trying alternatives if it already exists
LOG.debug("Creating an IAM user")
base_username = "OpsWorks-%s-%s" % (
shorten_name(clean_for_iam(self._stack['Name']), 25),
shorten_name(clean_for_iam(self._name_for_iam), 25)
)
for try_ in range(20):
username = base_username + ("+%s" % try_ if try_ else "")
try:
self.iam.create_user(UserName=username, Path=IAM_PATH)
except ClientError as e:
if e.error_code == 'EntityAlreadyExists':
LOG.debug(
"IAM user %s already exists, trying another name",
username
)
# user already exists, try the next one
pass
else:
raise
else:
LOG.debug("Created IAM user %s", username)
break
else:
raise ValueError("Couldn't find an unused IAM user name.")
LOG.debug("Adding the user to the group and attaching a policy")
self.iam.add_user_to_group(GroupName=group_name, UserName=username)
self.iam.put_user_policy(
PolicyName=IAM_USER_POLICY_NAME,
PolicyDocument=self._iam_policy_document(
self._stack['Arn'], IAM_USER_POLICY_TIMEOUT),
UserName=username
)
LOG.debug("Creating an access key")
self.access_key = self.iam.create_access_key(
UserName=username
)['AccessKey']
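The collision-avoidance naming in the loop above can be sketched on its own: the first candidate is the base name, and later attempts append ``+1``, ``+2``, and so on (the stack and host names below are hypothetical):

```python
def candidate_usernames(base_username, attempts=20):
    # Mirrors the username loop above: "base", "base+1", "base+2", ...
    return [base_username + ("+%s" % try_ if try_ else "")
            for try_ in range(attempts)]

names = candidate_usernames("OpsWorks-MyStack-web1", attempts=3)
```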
def setup_target_machine(self, args):
"""
        Sets up the target machine by copying over the credentials and
        starting the installation process.
"""
remote_script = REMOTE_SCRIPT % {
'agent_installer_url':
self._prov_params['AgentInstallerUrl'],
'preconfig':
self._to_ruby_yaml(self._pre_config_document(args)),
'assets_download_bucket':
self._prov_params['Parameters']['assets_download_bucket']
}
if args.local:
LOG.debug("Running the installer locally")
subprocess.check_call(["/bin/sh", "-c", remote_script])
else:
LOG.debug("Connecting to the target machine to run the installer.")
self.ssh(args, remote_script)
def ssh(self, args, remote_script):
"""
Runs a (sh) script on a remote machine via SSH.
"""
if platform.system() == 'Windows':
try:
script_file = tempfile.NamedTemporaryFile("wt", delete=False)
script_file.write(remote_script)
script_file.close()
if args.ssh:
call = args.ssh
else:
call = 'plink'
if args.username:
call += ' -l "%s"' % args.username
if args.private_key:
call += ' -i "%s"' % args.private_key
call += ' "%s"' % self._use_address
call += ' -m'
call += ' "%s"' % script_file.name
subprocess.check_call(call, shell=True)
finally:
os.remove(script_file.name)
else:
if args.ssh:
call = shlex.split(str(args.ssh))
else:
call = ['ssh', '-tt']
if args.username:
call.extend(['-l', args.username])
if args.private_key:
call.extend(['-i', args.private_key])
call.append(self._use_address)
remote_call = ["/bin/sh", "-c", remote_script]
call.append(" ".join(shlex_quote(word) for word in remote_call))
subprocess.check_call(call)
def _pre_config_document(self, args):
parameters = dict(
access_key_id=self.access_key['AccessKeyId'],
secret_access_key=self.access_key['SecretAccessKey'],
stack_id=self._stack['StackId'],
**self._prov_params["Parameters"]
)
if self._use_hostname:
parameters['hostname'] = self._use_hostname
if args.private_ip:
parameters['private_ip'] = args.private_ip
if args.public_ip:
parameters['public_ip'] = args.public_ip
parameters['import'] = args.infrastructure_class == 'ec2'
LOG.debug("Using pre-config: %r", parameters)
return parameters
@staticmethod
def _iam_policy_document(arn, timeout=None):
statement = {
"Action": "opsworks:RegisterInstance",
"Effect": "Allow",
"Resource": arn,
}
if timeout is not None:
valid_until = datetime.datetime.utcnow() + timeout
statement["Condition"] = {
"DateLessThan": {
"aws:CurrentTime":
valid_until.strftime("%Y-%m-%dT%H:%M:%SZ")
}
}
policy_document = {
"Statement": [statement],
"Version": "2012-10-17"
}
return json.dumps(policy_document)
@staticmethod
def _to_ruby_yaml(parameters):
return "\n".join(":%s: %s" % (k, json.dumps(v))
for k, v in sorted(parameters.items()))
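The two static helpers above build, respectively, a time-limited IAM policy document and a Ruby-flavored YAML fragment for the agent pre-configuration. A self-contained sketch of the same transformations (the module-level function names here are hypothetical stand-ins for the static methods):

```python
import datetime
import json

def iam_policy_document(arn, timeout=None):
    # Same shape as _iam_policy_document above: a single Allow statement,
    # optionally constrained to expire after `timeout`.
    statement = {
        "Action": "opsworks:RegisterInstance",
        "Effect": "Allow",
        "Resource": arn,
    }
    if timeout is not None:
        valid_until = datetime.datetime.utcnow() + timeout
        statement["Condition"] = {"DateLessThan": {
            "aws:CurrentTime": valid_until.strftime("%Y-%m-%dT%H:%M:%SZ")}}
    return json.dumps({"Statement": [statement], "Version": "2012-10-17"})

def to_ruby_yaml(parameters):
    # Keys become Ruby symbols (":key:"); values are JSON-encoded, which
    # is valid YAML for scalars like strings, numbers, and booleans.
    return "\n".join(":%s: %s" % (k, json.dumps(v))
                     for k, v in sorted(parameters.items()))
```

For example, `to_ruby_yaml({"stack_id": "abc", "import": True})` yields the two lines `:import: true` and `:stack_id: "abc"`.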
def clean_for_iam(name):
"""
Cleans a name to fit IAM's naming requirements.
"""
return re.sub(r'[^A-Za-z0-9+=,.@_-]+', '-', name)
def shorten_name(name, max_length):
"""
Shortens a name to the given number of characters.
"""
if len(name) <= max_length:
return name
q, r = divmod(max_length - 3, 2)
return name[:q + r] + "..." + name[-q:]
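The two module-level helpers above can be exercised directly; the example below restates them so it runs standalone:

```python
import re

def clean_for_iam(name):
    # Replace any run of characters IAM disallows with a single hyphen.
    return re.sub(r'[^A-Za-z0-9+=,.@_-]+', '-', name)

def shorten_name(name, max_length):
    # Keep the head and tail of the name, eliding the middle with "...".
    if len(name) <= max_length:
        return name
    q, r = divmod(max_length - 3, 2)
    return name[:q + r] + "..." + name[-q:]
```

For instance, `clean_for_iam("my stack/name!")` gives `"my-stack-name-"`, and `shorten_name("registration-host", 10)` gives `"regi...ost"`.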
# awscli-1.10.1/awscli/customizations/generatecliskeleton.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import json
import sys
from botocore.utils import ArgumentGenerator
from awscli.customizations.arguments import OverrideRequiredArgsArgument
def register_generate_cli_skeleton(cli):
cli.register('building-argument-table', add_generate_skeleton)
def add_generate_skeleton(session, operation_model, argument_table, **kwargs):
# This argument cannot support operations with streaming output which
# is designated by the argument name `outfile`.
if 'outfile' not in argument_table:
generate_cli_skeleton_argument = GenerateCliSkeletonArgument(
session, operation_model)
generate_cli_skeleton_argument.add_to_arg_table(argument_table)
class GenerateCliSkeletonArgument(OverrideRequiredArgsArgument):
"""This argument writes a generated JSON skeleton to stdout
The argument, if present in the command line, will prevent the intended
command from taking place. Instead, it will generate a JSON skeleton and
print it to standard output. This JSON skeleton then can be filled out
and can be used as input to ``--cli-input-json`` in order to run the
command with the filled out JSON skeleton.
"""
ARG_DATA = {
'name': 'generate-cli-skeleton',
'help_text': 'Prints a sample input JSON to standard output. Note the '
'specified operation is not run if this argument is '
'specified. The sample input can be used as an argument '
'for ``--cli-input-json``.',
'action': 'store_true',
'group_name': 'generate_cli_skeleton'
}
def __init__(self, session, operation_model):
super(GenerateCliSkeletonArgument, self).__init__(session)
self._operation_model = operation_model
def _register_argument_action(self):
self._session.register(
'calling-command.*', self.generate_json_skeleton)
super(GenerateCliSkeletonArgument, self)._register_argument_action()
def generate_json_skeleton(self, call_parameters, parsed_args,
parsed_globals, **kwargs):
# Only generate the skeleton if ``--generate-cli-skeleton`` was
# included in the command line.
if getattr(parsed_args, 'generate_cli_skeleton', False):
# Obtain the model of the operation
operation_model = self._operation_model
# Generate the skeleton based on the ``input_shape``.
argument_generator = ArgumentGenerator()
operation_input_shape = operation_model.input_shape
# If the ``input_shape`` is ``None``, generate an empty
# dictionary.
if operation_input_shape is None:
skeleton = {}
else:
skeleton = argument_generator.generate_skeleton(
operation_input_shape)
# Write the generated skeleton to standard output.
sys.stdout.write(json.dumps(skeleton, indent=4))
sys.stdout.write('\n')
# This is the return code
return 0
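Conceptually, skeleton generation walks the operation's input shape and emits an empty value for every member. A toy stand-in (not botocore's actual `ArgumentGenerator`) for shapes described as plain dicts:

```python
import json

def generate_skeleton(shape):
    # shape: {'type': 'structure'|'list'|'integer'|'string', ...}
    # (a hypothetical, simplified shape format for illustration)
    stype = shape.get('type')
    if stype == 'structure':
        return {name: generate_skeleton(member)
                for name, member in shape.get('members', {}).items()}
    if stype == 'list':
        return [generate_skeleton(shape['member'])]
    if stype == 'integer':
        return 0
    return ''  # strings and anything else default to an empty string

shape = {'type': 'structure', 'members': {
    'Bucket': {'type': 'string'},
    'MaxKeys': {'type': 'integer'}}}
print(json.dumps(generate_skeleton(shape), indent=4))
```

The caller can then fill in the printed skeleton and feed it back via `--cli-input-json`, which is exactly the round trip the help text above describes.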
# awscli-1.10.1/awscli/customizations/s3/executor.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import logging
import sys
import threading
from awscli.customizations.s3.utils import uni_print, bytes_print, \
IORequest, IOCloseRequest, StablePriorityQueue, set_file_utime
from awscli.customizations.s3.tasks import OrderableTask
from awscli.compat import queue
LOGGER = logging.getLogger(__name__)
class ShutdownThreadRequest(OrderableTask):
PRIORITY = 11
def __init__(self, priority_override=None):
if priority_override is not None:
self.PRIORITY = priority_override
class Executor(object):
"""
This class is in charge of all of the threads. It starts up the threads
and cleans up the threads when finished. The two types of threads the
``Executor`` runs are a worker thread and a print thread.
"""
STANDARD_PRIORITY = 11
IMMEDIATE_PRIORITY = 1
def __init__(self, num_threads, result_queue, quiet,
only_show_errors, max_queue_size, write_queue):
self._max_queue_size = max_queue_size
LOGGER.debug("Using max queue size for s3 tasks of: %s",
self._max_queue_size)
self.queue = StablePriorityQueue(maxsize=self._max_queue_size,
max_priority=20)
self.num_threads = num_threads
self.result_queue = result_queue
self.quiet = quiet
self.only_show_errors = only_show_errors
self.threads_list = []
self.write_queue = write_queue
self.print_thread = PrintThread(self.result_queue, self.quiet,
self.only_show_errors)
self.print_thread.daemon = True
self.io_thread = IOWriterThread(self.write_queue)
@property
def num_tasks_failed(self):
tasks_failed = 0
if self.print_thread is not None:
tasks_failed = self.print_thread.num_errors_seen
return tasks_failed
@property
def num_tasks_warned(self):
tasks_warned = 0
if self.print_thread is not None:
tasks_warned = self.print_thread.num_warnings_seen
return tasks_warned
def start(self):
self.io_thread.start()
# Note that we're *not* adding the IO thread to the threads_list.
# There's a specific shutdown order we need and we're going to be
# explicit about it rather than relying on the threads_list order.
# See .join() for more info.
self.print_thread.start()
LOGGER.debug("Using a threadpool size of: %s", self.num_threads)
for i in range(self.num_threads):
worker = Worker(queue=self.queue)
worker.setDaemon(True)
self.threads_list.append(worker)
worker.start()
def submit(self, task):
"""
This is the function used to submit a task to the ``Executor``.
"""
LOGGER.debug("Submitting task: %s", task)
self.queue.put(task)
def initiate_shutdown(self, priority=STANDARD_PRIORITY):
"""Instruct all threads to shutdown.
This is a graceful shutdown. It will wait until all
currently queued tasks have been completed before the threads
shutdown. If the task queue is completely full, it may
take a while for the threads to shutdown.
This method does not block. Once ``initiate_shutdown`` has
been called, you can call ``wait_until_shutdown`` to block
until the Executor has been shutdown.
"""
# Implementation detail: we only queue the worker threads
# to shutdown. The print/io threads are shutdown in the
# ``wait_until_shutdown`` method.
for i in range(self.num_threads):
LOGGER.debug(
"Queueing end sentinel for worker thread (priority: %s)",
priority)
self.queue.put(ShutdownThreadRequest(priority))
def wait_until_shutdown(self):
"""Block until the Executor is fully shutdown.
This will wait until all worker threads are shutdown, along
with any additional helper threads used by the executor.
"""
for thread in self.threads_list:
LOGGER.debug("Waiting for thread to shutdown: %s", thread)
while True:
thread.join(timeout=1)
if not thread.is_alive():
break
LOGGER.debug("Thread has been shutdown: %s", thread)
LOGGER.debug("Queueing end sentinel for result thread.")
self.result_queue.put(ShutdownThreadRequest())
LOGGER.debug("Queueing end sentinel for IO thread.")
self.write_queue.put(ShutdownThreadRequest())
LOGGER.debug("Waiting for result thread to shutdown.")
self.print_thread.join()
LOGGER.debug("Waiting for IO thread to shutdown.")
self.io_thread.join()
LOGGER.debug("All threads have been shutdown.")
class IOWriterThread(threading.Thread):
def __init__(self, queue):
threading.Thread.__init__(self)
self.queue = queue
self.fd_descriptor_cache = {}
def run(self):
while True:
task = self.queue.get(True)
if isinstance(task, ShutdownThreadRequest):
LOGGER.debug("Shutdown request received in IO thread, "
"shutting down.")
self._cleanup()
return
try:
self._handle_task(task)
except Exception as e:
LOGGER.debug(
"Error processing IO request: %s", e, exc_info=True)
def _handle_task(self, task):
if isinstance(task, IORequest):
filename, offset, data, is_stream = task
if is_stream:
self._handle_stream_task(data)
else:
self._handle_file_write_task(filename, offset, data)
elif isinstance(task, IOCloseRequest):
self._handle_io_close_request(task)
def _handle_io_close_request(self, task):
LOGGER.debug("IOCloseRequest received for %s, closing file.",
task.filename)
fileobj = self.fd_descriptor_cache.get(task.filename)
if fileobj is not None:
fileobj.close()
del self.fd_descriptor_cache[task.filename]
if task.desired_mtime is not None:
set_file_utime(task.filename, task.desired_mtime)
def _handle_stream_task(self, data):
fileobj = sys.stdout
bytes_print(data)
fileobj.flush()
def _handle_file_write_task(self, filename, offset, data):
fileobj = self.fd_descriptor_cache.get(filename)
if fileobj is None:
fileobj = open(filename, 'rb+')
self.fd_descriptor_cache[filename] = fileobj
LOGGER.debug("Writing data to: %s, offset: %s",
filename, offset)
fileobj.seek(offset)
fileobj.write(data)
def _cleanup(self):
for fileobj in self.fd_descriptor_cache.values():
fileobj.close()
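``IOWriterThread`` assembles multipart downloads by seeking to each part's absolute offset in the target file, so parts may arrive in any order. The pattern in ``_handle_file_write_task``, reduced to a standalone sketch (`write_parts` is a hypothetical helper name):

```python
import os
import tempfile

def write_parts(filename, parts):
    # parts: iterable of (offset, data) pairs; order does not matter
    # because each part is written at its own absolute offset.
    with open(filename, 'rb+') as fileobj:
        for offset, data in parts:
            fileobj.seek(offset)
            fileobj.write(data)

# Demo: write the second part first, then the first part.
fd, path = tempfile.mkstemp()
os.close(fd)
write_parts(path, [(5, b'World'), (0, b'Hello')])
with open(path, 'rb') as f:
    assert f.read() == b'HelloWorld'
os.remove(path)
```

Opening with `'rb+'` (rather than `'wb'`) is the important detail: it lets later writes land at earlier offsets without truncating data already written.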
class Worker(threading.Thread):
"""
This thread is in charge of performing the tasks provided via
the main queue ``queue``.
"""
def __init__(self, queue):
threading.Thread.__init__(self)
# This is the queue where work (tasks) are submitted.
self.queue = queue
def run(self):
while True:
try:
function = self.queue.get(True)
if isinstance(function, ShutdownThreadRequest):
LOGGER.debug("Shutdown request received in worker thread, "
"shutting down worker thread.")
break
try:
LOGGER.debug("Worker thread invoking task: %s", function)
function()
except Exception as e:
LOGGER.debug('Error calling task: %s', e, exc_info=True)
except queue.Empty:
pass
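The ``Worker`` loop above runs tasks until it dequeues a ``ShutdownThreadRequest`` sentinel, swallowing and logging task errors so one failure cannot kill the thread. The same sentinel pattern in miniature:

```python
import threading
import queue

SHUTDOWN = object()  # sentinel, analogous to ShutdownThreadRequest
results = []
tasks = queue.Queue()

def worker():
    while True:
        task = tasks.get(True)
        if task is SHUTDOWN:
            break
        try:
            results.append(task())
        except Exception:
            pass  # the real code logs the error and keeps going

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for n in (1, 2, 3):
    tasks.put(lambda n=n: n * 10)
for _ in threads:
    tasks.put(SHUTDOWN)  # one sentinel per worker, as in initiate_shutdown
for t in threads:
    t.join()
```

Queueing one sentinel per worker is what makes the shutdown graceful: every previously queued task is drained before any thread exits.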
class PrintThread(threading.Thread):
"""
This thread controls the printing of results. When a task is
completely finished, its result is permanently written to standard
out. Otherwise, it is part of a multipart upload/download and only
the most current part of the upload/download is shown.
Result Queue
------------
Result queue items are PrintTask objects that have the following
attributes:
* message: An arbitrary string associated with the entry. This
can be used to communicate the result of the task.
* error: Boolean indicating whether or not the task completed
successfully.
* total_parts: The total number of parts for multipart transfers (
deprecated, will be removed in the future).
* warning: Boolean indicating whether or not a file generated a
warning.
"""
def __init__(self, result_queue, quiet, only_show_errors):
threading.Thread.__init__(self)
self._progress_dict = {}
self._result_queue = result_queue
self._quiet = quiet
self._only_show_errors = only_show_errors
self._progress_length = 0
self._num_parts = 0
self._file_count = 0
self._lock = threading.Lock()
self._needs_newline = False
self._total_parts = '...'
self._total_files = '...'
# This is a public attribute that clients can inspect to determine
# whether or not we saw any results indicating that an error occurred.
self.num_errors_seen = 0
self.num_warnings_seen = 0
def set_total_parts(self, total_parts):
with self._lock:
self._total_parts = total_parts
def set_total_files(self, total_files):
with self._lock:
self._total_files = total_files
def run(self):
while True:
try:
print_task = self._result_queue.get(True)
if isinstance(print_task, ShutdownThreadRequest):
if self._needs_newline:
sys.stdout.write('\n')
LOGGER.debug("Shutdown request received in print thread, "
"shutting down print thread.")
break
LOGGER.debug("Received print task: %s", print_task)
try:
self._process_print_task(print_task)
except Exception as e:
LOGGER.debug("Error processing print task: %s", e,
exc_info=True)
except queue.Empty:
pass
def _process_print_task(self, print_task):
print_str = print_task.message
print_to_stderr = False
if print_task.error:
self.num_errors_seen += 1
print_to_stderr = True
final_str = ''
if print_task.warning:
self.num_warnings_seen += 1
print_to_stderr = True
final_str += print_str.ljust(self._progress_length, ' ')
final_str += '\n'
elif print_task.total_parts:
# Normalize keys so failures and success
# look the same.
op_list = print_str.split(':')
print_str = ':'.join(op_list[1:])
total_part = print_task.total_parts
self._num_parts += 1
if print_str in self._progress_dict:
self._progress_dict[print_str]['parts'] += 1
else:
self._progress_dict[print_str] = {}
self._progress_dict[print_str]['parts'] = 1
self._progress_dict[print_str]['total'] = total_part
else:
print_components = print_str.split(':')
final_str += print_str.ljust(self._progress_length, ' ')
final_str += '\n'
key = ':'.join(print_components[1:])
if key in self._progress_dict:
self._progress_dict.pop(print_str, None)
else:
self._num_parts += 1
self._file_count += 1
# If the message is an error or warning, print it to standard error.
if print_to_stderr and not self._quiet:
uni_print(final_str, sys.stderr)
final_str = ''
is_done = self._total_files == self._file_count
if not is_done:
final_str += self._make_progress_bar()
if not (self._quiet or self._only_show_errors):
uni_print(final_str)
self._needs_newline = not final_str.endswith('\n')
def _make_progress_bar(self):
"""Creates the progress bar string to print out."""
prog_str = "Completed %s " % self._num_parts
num_files = self._total_files
if self._total_files != '...':
prog_str += "of %s " % self._total_parts
num_files = self._total_files - self._file_count
prog_str += "part(s) with %s file(s) remaining" % \
num_files
length_prog = len(prog_str)
prog_str += '\r'
prog_str = prog_str.ljust(self._progress_length, ' ')
self._progress_length = length_prog
return prog_str
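``_make_progress_bar`` relies on a trailing carriage return plus left-justified padding so each progress line overwrites the previous one in place. That padding logic in isolation (a sketch; `overwrite_line` is a hypothetical helper, and it pads before appending `'\r'` rather than after as the method above does):

```python
def overwrite_line(text, prev_length):
    # Pad with spaces to blot out any longer previous line, then return
    # to column 0 with '\r' so the *next* write overwrites this one.
    padded = text.ljust(prev_length, ' ')
    return padded + '\r', len(text)

line1, length = overwrite_line('Completed 10 part(s)', 0)
line2, length = overwrite_line('Done', length)  # padded to 20 chars
```

Without the padding, a short line written over a longer one would leave stale characters from the previous update visible at the end.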
# awscli-1.10.1/awscli/customizations/s3/comparator.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.compat import advance_iterator
LOG = logging.getLogger(__name__)
class Comparator(object):
"""
This class performs all of the comparisons behind the sync operation
"""
def __init__(self, file_at_src_and_dest_sync_strategy,
file_not_at_dest_sync_strategy,
file_not_at_src_sync_strategy):
self._sync_strategy = file_at_src_and_dest_sync_strategy
self._not_at_dest_sync_strategy = file_not_at_dest_sync_strategy
self._not_at_src_sync_strategy = file_not_at_src_sync_strategy
def call(self, src_files, dest_files):
"""
This function performs the actual comparisons. The parameters it takes
are the generated files for both the source and the destination. The
key concept in this function is that no matter where the files are
coming from, they are listed in the same order, least to greatest in
collation order. This allows for easy comparisons to determine if a
file needs to be added or deleted. Comparison keys are used to
determine if two files are the same and each file has a unique
comparison key. If they are the same, compare the size and last
modified times to see if a file needs to be updated. Ultimately, it
will yield a sequence of FileInfo objects that will be sent to
the ``S3Handler``.
:param src_files: The generated FileInfo objects from the source.
:param dest_files: The generated FileInfo objects from the dest.
:returns: Yields the FileInfo objects of the files that need to be
operated on
Algorithm:
Try to take the next file from both lists. If a list is empty, signal
the corresponding done flag. If both generated lists are not done,
compare the compare_keys. If equal, compare size and time to see if
the file needs to be updated. If the source compare_key is less than
the dest compare_key, the file needs to be added to the destination.
Take the next source file but not the next destination file. If the
source compare_key is greater than the dest compare_key, that
destination file needs to be deleted from the destination. Take the
next dest file but not the source file. If the source list is empty,
delete the rest of the files in the dest list from the destination.
If the dest list is empty, add the rest of the files in the source
list to the destination.
"""
# :var src_done: True if there are no more files from the source left.
src_done = False
# :var dest_done: True if there are no more files from the dest left.
dest_done = False
# :var src_take: Take the next source file from the generated files if
# true
src_take = True
# :var dest_take: Take the next dest file from the generated files if
# true
dest_take = True
while True:
try:
if (not src_done) and src_take:
src_file = advance_iterator(src_files)
except StopIteration:
src_file = None
src_done = True
try:
if (not dest_done) and dest_take:
dest_file = advance_iterator(dest_files)
except StopIteration:
dest_file = None
dest_done = True
if (not src_done) and (not dest_done):
src_take = True
dest_take = True
compare_keys = self.compare_comp_key(src_file, dest_file)
if compare_keys == 'equal':
should_sync = self._sync_strategy.determine_should_sync(
src_file, dest_file
)
if should_sync:
yield src_file
elif compare_keys == 'less_than':
src_take = True
dest_take = False
should_sync = self._not_at_dest_sync_strategy.determine_should_sync(src_file, None)
if should_sync:
yield src_file
elif compare_keys == 'greater_than':
src_take = False
dest_take = True
should_sync = self._not_at_src_sync_strategy.determine_should_sync(None, dest_file)
if should_sync:
yield dest_file
elif (not src_done) and dest_done:
src_take = True
should_sync = self._not_at_dest_sync_strategy.determine_should_sync(src_file, None)
if should_sync:
yield src_file
elif src_done and (not dest_done):
dest_take = True
should_sync = self._not_at_src_sync_strategy.determine_should_sync(None, dest_file)
if should_sync:
yield dest_file
else:
break
def compare_comp_key(self, src_file, dest_file):
"""
Determines if the source compare_key is less than, equal to,
or greater than the destination compare_key
"""
src_comp_key = src_file.compare_key
dest_comp_key = dest_file.compare_key
if (src_comp_key == dest_comp_key):
return 'equal'
elif (src_comp_key < dest_comp_key):
return 'less_than'
else:
return 'greater_than'
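The Comparator's algorithm is a standard merge of two sorted streams. Stripped of the sync strategies, it reduces to this sketch (`merge_compare` is a hypothetical helper yielding `('add' | 'delete' | 'both', key)` tuples):

```python
def merge_compare(src_keys, dest_keys):
    # Both inputs must already be sorted; walk them in lockstep,
    # just like Comparator.call does with its take/done flags.
    src_iter, dest_iter = iter(src_keys), iter(dest_keys)
    src, dest = next(src_iter, None), next(dest_iter, None)
    while src is not None or dest is not None:
        if dest is None or (src is not None and src < dest):
            yield ('add', src)       # only in source: copy to destination
            src = next(src_iter, None)
        elif src is None or dest < src:
            yield ('delete', dest)   # only in dest: candidate for deletion
            dest = next(dest_iter, None)
        else:
            yield ('both', src)      # in both: maybe update, per strategy
            src = next(src_iter, None)
            dest = next(dest_iter, None)

print(list(merge_compare(['a', 'c'], ['b', 'c', 'd'])))
```

Because both sides are in collation order, a single forward pass suffices: each key is examined exactly once, with no lookups into the other list.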
# awscli-1.10.1/awscli/customizations/s3/filegenerator.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import sys
import stat
from dateutil.parser import parse
from dateutil.tz import tzlocal
from awscli.customizations.s3.utils import find_bucket_key, get_file_stat
from awscli.customizations.s3.utils import BucketLister, create_warning, \
find_dest_path_comp_key, EPOCH_TIME
from awscli.errorhandler import ClientError
from awscli.compat import six
from awscli.compat import queue
_open = open
def is_special_file(path):
"""
This function checks to see if a special file. It checks if the
file is a character special device, block special device, FIFO, or
socket.
"""
mode = os.stat(path).st_mode
# Character special device.
if stat.S_ISCHR(mode):
return True
# Block special device
if stat.S_ISBLK(mode):
return True
# FIFO.
if stat.S_ISFIFO(mode):
return True
# Socket.
if stat.S_ISSOCK(mode):
return True
return False
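A quick sanity check on the classification above: an ordinary file on disk should never be treated as special. The example restates the check so it runs standalone:

```python
import os
import stat
import tempfile

def is_special_file(path):
    # Character/block devices, FIFOs, and sockets are "special".
    mode = os.stat(path).st_mode
    return (stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or
            stat.S_ISFIFO(mode) or stat.S_ISSOCK(mode))

fd, path = tempfile.mkstemp()
os.close(fd)
assert not is_special_file(path)  # a regular file is never special
os.remove(path)
```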
def is_readable(path):
"""
This function checks to see if a file or a directory can be read.
This is tested by performing an operation that requires read access
on the file or the directory.
"""
if os.path.isdir(path):
try:
os.listdir(path)
except (OSError, IOError):
return False
else:
try:
with _open(path, 'r') as fd:
pass
except (OSError, IOError):
return False
return True
# This class is provided primarily to provide a detailed error message.
class FileDecodingError(Exception):
"""Raised when there was an issue decoding the file."""
ADVICE = (
"Please check your locale settings. The filename was decoded as: %s\n"
"On posix platforms, check the LC_CTYPE environment variable."
% (sys.getfilesystemencoding())
)
def __init__(self, directory, filename):
self.directory = directory
self.file_name = filename
self.error_message = (
'There was an error trying to decode the file %s in '
'directory "%s". \n%s' % (repr(self.file_name),
self.directory,
self.ADVICE)
)
super(FileDecodingError, self).__init__(self.error_message)
class FileStat(object):
def __init__(self, src, dest=None, compare_key=None, size=None,
last_update=None, src_type=None, dest_type=None,
operation_name=None, response_data=None):
self.src = src
self.dest = dest
self.compare_key = compare_key
self.size = size
self.last_update = last_update
self.src_type = src_type
self.dest_type = dest_type
self.operation_name = operation_name
self.response_data = response_data
class FileGenerator(object):
"""
This is a class that creates a generator to yield files based on information
returned from the ``FileFormat`` class. It is universal in the sense that
it will handle s3 files, local files, local directories, and s3 objects
under the same common prefix. The generator yields corresponding
``FileInfo`` objects to send to a ``Comparator`` or ``S3Handler``.
"""
def __init__(self, client, operation_name, follow_symlinks=True,
page_size=None, result_queue=None, request_parameters=None):
self._client = client
self.operation_name = operation_name
self.follow_symlinks = follow_symlinks
self.page_size = page_size
self.result_queue = result_queue
if not result_queue:
self.result_queue = queue.Queue()
self.request_parameters = {}
if request_parameters is not None:
self.request_parameters = request_parameters
def call(self, files):
"""
This is the generalized function to yield the ``FileInfo`` objects.
``dir_op`` and ``use_src_name`` flags affect which files are used and
ensure the proper destination paths and compare keys are formed.
"""
function_table = {'s3': self.list_objects, 'local': self.list_files}
source = files['src']['path']
src_type = files['src']['type']
dest_type = files['dest']['type']
file_iterator = function_table[src_type](source, files['dir_op'])
for src_path, extra_information in file_iterator:
dest_path, compare_key = find_dest_path_comp_key(files, src_path)
file_stat_kwargs = {
'src': src_path, 'dest': dest_path, 'compare_key': compare_key,
'src_type': src_type, 'dest_type': dest_type,
'operation_name': self.operation_name
}
self._inject_extra_information(file_stat_kwargs, extra_information)
yield FileStat(**file_stat_kwargs)
def _inject_extra_information(self, file_stat_kwargs, extra_information):
src_type = file_stat_kwargs['src_type']
file_stat_kwargs['size'] = extra_information['Size']
file_stat_kwargs['last_update'] = extra_information['LastModified']
# S3 objects require the response data retrieved from HeadObject
# and ListObject
if src_type == 's3':
file_stat_kwargs['response_data'] = extra_information
def list_files(self, path, dir_op):
"""
This function yields the appropriate local file or local files
under a directory depending on if the operation is on a directory.
For directories a depth first search is implemented in order to
follow the same sorted pattern as an s3 list objects operation
outputs. It yields the file's source path, size, and last
update.
"""
join, isdir, isfile = os.path.join, os.path.isdir, os.path.isfile
error, listdir = os.error, os.listdir
if not self.should_ignore_file(path):
if not dir_op:
size, last_update = get_file_stat(path)
last_update = self._validate_update_time(last_update, path)
yield path, {'Size': size, 'LastModified': last_update}
else:
# We need to list files in byte order based on the full
# expanded path of the key: 'test/1/2/3.txt' However,
# listdir() will only give us the contents of a single directory
# at a time, so we'll get 'test'. At the same time we don't
# want to load the entire list of files into memory. This
# is handled by first going through the current directory
# contents and adding the directory separator to any
# directories. We can then sort the contents,
# and ensure byte order.
listdir_names = listdir(path)
names = []
for name in listdir_names:
if not self.should_ignore_file_with_decoding_warnings(
path, name):
file_path = join(path, name)
if isdir(file_path):
name = name + os.path.sep
names.append(name)
self.normalize_sort(names, os.sep, '/')
for name in names:
file_path = join(path, name)
if isdir(file_path):
# Anything in a directory will have a prefix of
# this current directory and will come before the
# remaining contents in this directory. This
# means we need to recurse into this sub directory
# before yielding the rest of this directory's
# contents.
for x in self.list_files(file_path, dir_op):
yield x
else:
size, last_update = get_file_stat(file_path)
last_update = self._validate_update_time(
last_update, path)
yield (
file_path,
{'Size': size, 'LastModified': last_update}
)
def _validate_update_time(self, update_time, path):
# If the update time is None we know we ran into an invalid timestamp.
if update_time is None:
warning = create_warning(
path=path,
error_message="File has an invalid timestamp. Passing epoch "
"time as timestamp.",
skip_file=False)
self.result_queue.put(warning)
return EPOCH_TIME
return update_time
def normalize_sort(self, names, os_sep, character):
"""
The purpose of this function is to ensure that the same path separator
is used when sorting. On Windows, the path separator is a backslash as
opposed to a forward slash, which can lead to differences in sorting
between s3 and a Windows machine.
"""
names.sort(key=lambda item: item.replace(os_sep, character))
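The separator substitution matters because backslash (`\`, 0x5C) sorts after digits and uppercase letters, while forward slash (`/`, 0x2F) sorts before them, so a plain sort on Windows paths can disagree with S3's key order. A concrete demonstration:

```python
def normalize_sort(names, os_sep, character):
    # Sort as if every path used `character` as its separator.
    names.sort(key=lambda item: item.replace(os_sep, character))

names = ['foo1', 'foo\\bar\\']
plain = sorted(names)            # backslash order: 'foo1' comes first
normalize_sort(names, '\\', '/')  # s3-style order: 'foo\\bar\\' comes first
```

After normalizing, the directory entry sorts ahead of `foo1`, matching the byte order of the corresponding S3 keys `foo/bar/...` and `foo1`.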
def should_ignore_file_with_decoding_warnings(self, dirname, filename):
"""
We can get a UnicodeDecodeError if we try to listdir() and
can't decode the contents with sys.getfilesystemencoding(). In this
case listdir() returns the bytestring, which means that
join(dirname, filename) could raise a UnicodeDecodeError. When this
happens we warn using a FileDecodingError that provides more
information into what's going on.
"""
if not isinstance(filename, six.text_type):
decoding_error = FileDecodingError(dirname, filename)
warning = create_warning(repr(filename),
decoding_error.error_message)
self.result_queue.put(warning)
return True
path = os.path.join(dirname, filename)
return self.should_ignore_file(path)
def should_ignore_file(self, path):
"""
This function checks whether a file should be ignored in the
file generation process. This includes symlinks that are not to be
followed and files that generate warnings.
"""
if not self.follow_symlinks:
if os.path.isdir(path) and path.endswith(os.sep):
# Trailing slash must be removed to check if it is a symlink.
path = path[:-1]
if os.path.islink(path):
return True
warning_triggered = self.triggers_warning(path)
if warning_triggered:
return True
return False
def triggers_warning(self, path):
"""
This function checks the specific types and properties of a file.
If the file would cause trouble, the function adds a
warning to the result queue to be printed out and returns a boolean
value notifying whether the file caused a warning to be generated.
Files that generate warnings are skipped. Currently, this function
checks for files that do not exist and files that the user does
not have read access to.
"""
if not os.path.exists(path):
warning = create_warning(path, "File does not exist.")
self.result_queue.put(warning)
return True
if is_special_file(path):
warning = create_warning(path,
("File is character special device, "
"block special device, FIFO, or "
"socket."))
self.result_queue.put(warning)
return True
if not is_readable(path):
warning = create_warning(path, "File/Directory is not readable.")
self.result_queue.put(warning)
return True
return False
def list_objects(self, s3_path, dir_op):
"""
This function yields the appropriate object or objects under a
common prefix, depending on whether the operation is on objects under a
common prefix. It yields the file's source path, size, and last
update.
"""
# Short circuit path: if we are not recursing into the s3
# bucket and a specific path was given, we can just yield
# that path and not have to call any operation in s3.
bucket, prefix = find_bucket_key(s3_path)
if not dir_op and prefix:
yield self._list_single_object(s3_path)
else:
lister = BucketLister(self._client)
for key in lister.list_objects(bucket=bucket, prefix=prefix,
page_size=self.page_size):
source_path, response_data = key
if response_data['Size'] == 0 and source_path.endswith('/'):
if self.operation_name == 'delete':
# This is to filter out manually created folders
# in S3. They have a size zero and would be
# undesirably downloaded. Local directories
# are automatically created when they do not
# exist locally. But user should be able to
# delete them.
yield source_path, response_data
elif not dir_op and s3_path != source_path:
pass
else:
yield source_path, response_data
def _list_single_object(self, s3_path):
# When we know we're dealing with a single object, we can avoid
# a ListObjects operation (which causes concern for anyone setting
# IAM policies with the smallest set of permissions needed) and
# instead use a HeadObject request.
bucket, key = find_bucket_key(s3_path)
try:
params = {'Bucket': bucket, 'Key': key}
params.update(self.request_parameters.get('HeadObject', {}))
response = self._client.head_object(**params)
except ClientError as e:
# We want to try to give a more helpful error message.
# This is what the customer is going to see so we want to
# give as much detail as we have.
copy_fields = e.__dict__.copy()
if not e.error_message == 'Not Found':
raise
if e.http_status_code == 404:
# The key does not exist so we'll raise a more specific
# error message here.
copy_fields['error_message'] = 'Key "%s" does not exist' % key
else:
reason = six.moves.http_client.responses[
e.http_status_code]
copy_fields['error_code'] = reason
copy_fields['error_message'] = reason
raise ClientError(**copy_fields)
response['Size'] = int(response.pop('ContentLength'))
last_update = parse(response['LastModified'])
response['LastModified'] = last_update.astimezone(tzlocal())
return s3_path, response
# File: awscli-1.10.1/awscli/customizations/s3/utils.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import argparse
from datetime import datetime
import mimetypes
import hashlib
import math
import errno
import os
import sys
from collections import namedtuple, deque
from functools import partial
from dateutil.parser import parse
from dateutil.tz import tzlocal, tzutc
from botocore.compat import unquote_str
from awscli.compat import six
from awscli.compat import PY3
from awscli.compat import queue
HUMANIZE_SUFFIXES = ('KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB')
MAX_PARTS = 10000
EPOCH_TIME = datetime(1970, 1, 1, tzinfo=tzutc())
# The maximum file size you can upload via S3 per request.
# See: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html
# and: http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
MAX_SINGLE_UPLOAD_SIZE = 5 * (1024 ** 3)
SIZE_SUFFIX = {
'kb': 1024,
'mb': 1024 ** 2,
'gb': 1024 ** 3,
'tb': 1024 ** 4,
'kib': 1024,
'mib': 1024 ** 2,
'gib': 1024 ** 3,
'tib': 1024 ** 4,
}
def human_readable_size(value):
"""Convert an size in bytes into a human readable format.
For example::
>>> human_readable_size(1)
'1 Byte'
>>> human_readable_size(10)
'10 Bytes'
>>> human_readable_size(1024)
'1.0 KiB'
>>> human_readable_size(1024 * 1024)
'1.0 MiB'
:param value: The size in bytes
:return: The size in a human readable format based on base-2 units.
"""
one_decimal_point = '%.1f'
base = 1024
bytes_int = float(value)
if bytes_int == 1:
return '1 Byte'
elif bytes_int < base:
return '%d Bytes' % bytes_int
for i, suffix in enumerate(HUMANIZE_SUFFIXES):
unit = base ** (i+2)
if round((bytes_int / unit) * base) < base:
return '%.1f %s' % ((base * bytes_int / unit), suffix)
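The loop above picks the first suffix whose rounded value stays below the base. A standalone sketch of the same logic (reimplemented here rather than imported from awscli, so it can run on its own):

```python
# Self-contained sketch of base-2 size formatting, mirroring the
# function above.
SUFFIXES = ('KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB')

def human_readable_size(value):
    base = 1024
    value = float(value)
    if value == 1:
        return '1 Byte'
    if value < base:
        return '%d Bytes' % value
    for i, suffix in enumerate(SUFFIXES):
        # unit is the next suffix's boundary; stop at the first suffix
        # whose rounded value is still below the base.
        unit = base ** (i + 2)
        if round((value / unit) * base) < base:
            return '%.1f %s' % (base * value / unit, suffix)

print(human_readable_size(1024))         # 1.0 KiB
print(human_readable_size(5 * 1024**3))  # 5.0 GiB
```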
def human_readable_to_bytes(value):
"""Converts a human readable size to bytes.
:param value: A string such as "10MB". If a suffix is not included,
then the value is assumed to be an integer representing the size
in bytes.
:returns: The converted value in bytes as an integer
"""
value = value.lower()
if value[-2:] == 'ib':
# Assume IEC suffix.
suffix = value[-3:].lower()
else:
suffix = value[-2:].lower()
has_size_identifier = (
len(value) >= 2 and suffix in SIZE_SUFFIX)
if not has_size_identifier:
try:
return int(value)
except ValueError:
raise ValueError("Invalid size value: %s" % value)
else:
multiplier = SIZE_SUFFIX[suffix]
return int(value[:-len(suffix)]) * multiplier
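Note that both decimal-style spellings ("10MB") and IEC spellings ("10MiB") map to the same base-2 multipliers. A self-contained sketch of the parsing logic:

```python
# Self-contained sketch of the suffix parsing above; both 'mb' and
# 'mib' resolve to the same base-2 multiplier.
SIZE_SUFFIX = {
    'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3, 'tb': 1024 ** 4,
    'kib': 1024, 'mib': 1024 ** 2, 'gib': 1024 ** 3, 'tib': 1024 ** 4,
}

def human_readable_to_bytes(value):
    value = value.lower()
    # IEC suffixes are three characters ('kib'), others two ('kb').
    suffix = value[-3:] if value.endswith('ib') else value[-2:]
    if len(value) >= 2 and suffix in SIZE_SUFFIX:
        return int(value[:-len(suffix)]) * SIZE_SUFFIX[suffix]
    try:
        return int(value)  # No suffix: treat as a raw byte count.
    except ValueError:
        raise ValueError("Invalid size value: %s" % value)

print(human_readable_to_bytes('10MB'))   # 10485760
print(human_readable_to_bytes('10MiB'))  # 10485760
```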
class AppendFilter(argparse.Action):
"""
This class is used as an action when parsing the parameters.
Specifically it is used for actions corresponding to exclude
and include filters. What it does is that it appends a list
consisting of the name of the parameter and its value onto
a list containing these [parameter, value] lists. In this
case, the name of the parameter will either be --include or
--exclude and the value will be the rule to apply. This will
format all of the rules entered on the command line
in a way compatible with the Filter class. Note that rules that
appear later in the command line take precedence over rules that
appear earlier.
"""
def __call__(self, parser, namespace, values, option_string=None):
filter_list = getattr(namespace, self.dest)
if filter_list:
filter_list.append([option_string, values[0]])
else:
filter_list = [[option_string, values[0]]]
setattr(namespace, self.dest, filter_list)
class MD5Error(Exception):
"""
Exception for md5's that do not match.
"""
pass
class StablePriorityQueue(queue.Queue):
"""Priority queue that maintains FIFO order for same priority items.
This class was written to handle the tasks created in
awscli.customizations.s3.tasks, but it's possible to use this
class outside of that context. In order for this to be the case,
the following conditions should be met:
* Objects that are queued should have a PRIORITY attribute.
This should be an integer value not to exceed the max_priority
value passed into the ``__init__``. Objects with lower
priority numbers are retrieved before objects with higher
priority numbers.
* A relatively small max_priority should be chosen. ``get()``
calls are O(max_priority).
Any object that does not have a ``PRIORITY`` attribute or whose
priority exceeds ``max_priority`` will be queued at the highest
(least important) priority available.
"""
def __init__(self, maxsize=0, max_priority=20):
queue.Queue.__init__(self, maxsize=maxsize)
self.priorities = [deque([]) for i in range(max_priority + 1)]
self.default_priority = max_priority
def _qsize(self):
size = 0
for bucket in self.priorities:
size += len(bucket)
return size
def _put(self, item):
priority = min(getattr(item, 'PRIORITY', self.default_priority),
self.default_priority)
self.priorities[priority].append(item)
def _get(self):
for bucket in self.priorities:
if not bucket:
continue
return bucket.popleft()
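A minimal, self-contained sketch of how the bucket-of-deques scheme behaves (using the stdlib ``queue`` module directly instead of ``awscli.compat``): items with a lower ``PRIORITY`` come out first, and items sharing a priority keep their insertion order:

```python
import queue
from collections import deque

# Minimal version of the stable priority queue above: one deque per
# priority level gives O(max_priority) get() while preserving FIFO
# order within a level.
class StablePriorityQueue(queue.Queue):
    def __init__(self, maxsize=0, max_priority=20):
        queue.Queue.__init__(self, maxsize=maxsize)
        self.priorities = [deque() for _ in range(max_priority + 1)]
        self.default_priority = max_priority

    def _qsize(self):
        return sum(len(bucket) for bucket in self.priorities)

    def _put(self, item):
        # Items without a PRIORITY attribute, or with one past the max,
        # go into the least important bucket.
        priority = min(getattr(item, 'PRIORITY', self.default_priority),
                       self.default_priority)
        self.priorities[priority].append(item)

    def _get(self):
        for bucket in self.priorities:
            if bucket:
                return bucket.popleft()

class Task:
    def __init__(self, name, priority):
        self.name, self.PRIORITY = name, priority

q = StablePriorityQueue()
for t in [Task('a', 5), Task('b', 1), Task('c', 5)]:
    q.put(t)
print([q.get().name for _ in range(3)])  # ['b', 'a', 'c']
```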
def find_bucket_key(s3_path):
"""
This is a helper function that, given an s3 path of the form
bucket/key, returns the bucket and the key represented by the
s3 path.
"""
s3_components = s3_path.split('/')
bucket = s3_components[0]
s3_key = ""
if len(s3_components) > 1:
s3_key = '/'.join(s3_components[1:])
return bucket, s3_key
def split_s3_bucket_key(s3_path):
"""Split s3 path into bucket and key prefix.
This will also handle the s3:// prefix.
:return: Tuple of ('bucketname', 'keyname')
"""
if s3_path.startswith('s3://'):
s3_path = s3_path[5:]
return find_bucket_key(s3_path)
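The two path helpers above compose as follows; this standalone sketch mirrors their logic so it can be run in isolation:

```python
# Sketch of the path-splitting helpers above: the first component is
# the bucket, the remainder (if any) is the key, and an s3:// scheme
# prefix is stripped first.
def find_bucket_key(s3_path):
    parts = s3_path.split('/')
    return parts[0], '/'.join(parts[1:]) if len(parts) > 1 else ''

def split_s3_bucket_key(s3_path):
    if s3_path.startswith('s3://'):
        s3_path = s3_path[5:]
    return find_bucket_key(s3_path)

print(split_s3_bucket_key('s3://mybucket/some/key.txt'))
# ('mybucket', 'some/key.txt')
print(split_s3_bucket_key('mybucket'))
# ('mybucket', '')
```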
def get_file_stat(path):
"""
This is a helper function that, given a local path, returns the size of
the file in bytes and the time of last modification.
"""
try:
stats = os.stat(path)
except (IOError, OSError) as e:
raise ValueError('Could not retrieve file stat of "%s": %s' % (
path, e))
try:
update_time = datetime.fromtimestamp(stats.st_mtime, tzlocal())
except ValueError:
# Python's fromtimestamp raises value errors when the timestamp is out
# of range of the platform's C localtime() function. This can cause
# issues when syncing from systems with a wide range of valid timestamps
# to systems with a lower range. Some systems support 64-bit timestamps,
# for instance, while others only support 32-bit. We don't want to fail
in these cases, so instead we pass along None.
update_time = None
return stats.st_size, update_time
def find_dest_path_comp_key(files, src_path=None):
"""
This is a helper function that determines the destination path and compare
key given parameters received from the ``FileFormat`` class.
"""
src = files['src']
dest = files['dest']
src_type = src['type']
dest_type = dest['type']
if src_path is None:
src_path = src['path']
sep_table = {'s3': '/', 'local': os.sep}
if files['dir_op']:
rel_path = src_path[len(src['path']):]
else:
rel_path = src_path.split(sep_table[src_type])[-1]
compare_key = rel_path.replace(sep_table[src_type], '/')
if files['use_src_name']:
dest_path = dest['path']
dest_path += rel_path.replace(sep_table[src_type],
sep_table[dest_type])
else:
dest_path = dest['path']
return dest_path, compare_key
def create_warning(path, error_message, skip_file=True):
"""
This creates a ``PrintTask`` for whenever a warning is to be thrown.
"""
print_string = "warning: "
if skip_file:
print_string = print_string + "Skipping file " + path + ". "
print_string = print_string + error_message
warning_message = PrintTask(message=print_string, error=False,
warning=True)
return warning_message
def find_chunksize(size, current_chunksize):
"""
The purpose of this function is to determine a chunksize so that
the number of parts in a multipart upload is not greater than
the ``MAX_PARTS``. If the ``chunksize`` is greater than
``MAX_SINGLE_UPLOAD_SIZE`` it returns ``MAX_SINGLE_UPLOAD_SIZE``.
"""
chunksize = current_chunksize
num_parts = int(math.ceil(size / float(chunksize)))
while num_parts > MAX_PARTS:
chunksize *= 2
num_parts = int(math.ceil(size / float(chunksize)))
if chunksize > MAX_SINGLE_UPLOAD_SIZE:
return MAX_SINGLE_UPLOAD_SIZE
else:
return chunksize
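The doubling loop above can be exercised standalone; this sketch reimplements it with the same constants so the behavior is visible:

```python
import math

# Sketch of the chunk sizing above: double the chunksize until the
# part count fits under MAX_PARTS, then cap at the single-request
# upload limit.
MAX_PARTS = 10000
MAX_SINGLE_UPLOAD_SIZE = 5 * (1024 ** 3)

def find_chunksize(size, chunksize):
    while int(math.ceil(size / float(chunksize))) > MAX_PARTS:
        chunksize *= 2
    return min(chunksize, MAX_SINGLE_UPLOAD_SIZE)

# A 1 TiB upload at an 8 MiB chunksize would need 131072 parts, so
# the chunksize is doubled (to 128 MiB) until the count fits.
size = 1024 ** 4
chunk = find_chunksize(size, 8 * 1024 ** 2)
print(chunk, math.ceil(size / chunk))  # 134217728 8192
```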
class MultiCounter(object):
"""
This class is used as a way to keep track of how many multipart
operations are in progress. It is also used to track how many
part operations are occurring.
"""
def __init__(self):
self.count = 0
def uni_print(statement, out_file=None):
"""
This function is used to properly write unicode to a file, usually
stdout or stderr. It ensures that the proper encoding is used if the
statement is not a string type.
"""
if out_file is None:
out_file = sys.stdout
try:
# Otherwise we assume that out_file is a
# text writer type that accepts str/unicode instead
# of bytes.
out_file.write(statement)
except UnicodeEncodeError:
# Some file like objects like cStringIO will
# try to decode as ascii on python2.
#
# This can also fail if our encoding associated
# with the text writer cannot encode the unicode
# ``statement`` we've been given. This commonly
# happens on windows where we have some S3 key
# previously encoded with utf-8 that can't be
# encoded using whatever codepage the user has
# configured in their console.
#
# At this point we've already failed to do what's
# been requested. We now try to make a best effort
# attempt at printing the statement to the outfile.
# We're using 'ascii' as the default because if the
# stream doesn't give us any encoding information
# we want to pick an encoding that has the highest
# chance of printing successfully.
new_encoding = getattr(out_file, 'encoding', 'ascii')
new_statement = statement.encode(
new_encoding, 'replace').decode(new_encoding)
out_file.write(new_statement)
out_file.flush()
def bytes_print(statement):
"""
This function is used to properly write bytes to standard out.
"""
if PY3:
if getattr(sys.stdout, 'buffer', None):
sys.stdout.buffer.write(statement)
else:
# If it is not possible to write to the standard out buffer,
# the next best option is to decode and write to standard out.
sys.stdout.write(statement.decode('utf-8'))
else:
sys.stdout.write(statement)
def guess_content_type(filename):
"""Given a filename, guess it's content type.
If the type cannot be guessed, a value of None is returned.
"""
return mimetypes.guess_type(filename)[0]
def relative_path(filename, start=os.path.curdir):
"""Cross platform relative path of a filename.
If no relative path can be calculated (i.e. different
drives on Windows), then instead of raising a ValueError,
the absolute path is returned.
"""
try:
dirname, basename = os.path.split(filename)
relative_dir = os.path.relpath(dirname, start)
return os.path.join(relative_dir, basename)
except ValueError:
return os.path.abspath(filename)
def set_file_utime(filename, desired_time):
"""
Set the utime of a file, and if it fails, raise a more explicit error.
:param filename: the file to modify
:param desired_time: the epoch timestamp to set for atime and mtime.
:raises: SetFileUtimeError: if you do not have permission (errno 1)
:raises: OSError: for all errors other than errno 1
"""
try:
os.utime(filename, (desired_time, desired_time))
except OSError as e:
# Only raise a more explicit exception when it is a permission issue.
if e.errno != errno.EPERM:
raise e
raise SetFileUtimeError(
("The file was downloaded, but attempting to modify the "
"utime of the file failed. Is the file owned by another user?"))
class SetFileUtimeError(Exception):
pass
class ReadFileChunk(object):
def __init__(self, filename, start_byte, size):
self._filename = filename
self._start_byte = start_byte
self._fileobj = open(self._filename, 'rb')
self._size = self._calculate_file_size(self._fileobj, requested_size=size,
start_byte=start_byte)
self._fileobj.seek(self._start_byte)
self._amount_read = 0
def _calculate_file_size(self, fileobj, requested_size, start_byte):
actual_file_size = os.fstat(fileobj.fileno()).st_size
max_chunk_size = actual_file_size - start_byte
return min(max_chunk_size, requested_size)
def read(self, amount=None):
if amount is None:
remaining = self._size - self._amount_read
data = self._fileobj.read(remaining)
self._amount_read += remaining
return data
else:
actual_amount = min(self._size - self._amount_read, amount)
data = self._fileobj.read(actual_amount)
self._amount_read += actual_amount
return data
def seek(self, where):
self._fileobj.seek(self._start_byte + where)
self._amount_read = where
def close(self):
self._fileobj.close()
def tell(self):
return self._amount_read
def __len__(self):
# __len__ is defined because requests will try to determine the length
# of the stream to set a content length. In the normal case
# of the file it will just stat the file, but we need to change that
# behavior. By providing a __len__, requests will use that instead
# of stat'ing the file.
return self._size
def __enter__(self):
return self
def __exit__(self, *args, **kwargs):
self._fileobj.close()
def __iter__(self):
# This is a workaround for http://bugs.python.org/issue17575
# Basically httplib will try to iterate over the contents, even
# if it's a file-like object. This wasn't noticed because we've
# already exhausted the stream, so iterating over the file immediately
# stops, which is what we're simulating here.
return iter([])
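The windowed-read behavior of ``ReadFileChunk`` can be demonstrated with a simplified, self-contained version (reimplemented here so the example runs standalone): the object exposes only the bytes in ``[start_byte, start_byte + size)`` of the underlying file.

```python
import os
import tempfile

# Simplified sketch of the chunked reader above: reads are clamped so
# the caller can never see past the requested window.
class ReadFileChunk:
    def __init__(self, filename, start_byte, size):
        self._fileobj = open(filename, 'rb')
        actual = os.fstat(self._fileobj.fileno()).st_size
        self._size = min(actual - start_byte, size)
        self._fileobj.seek(start_byte)
        self._read = 0

    def read(self, amount=None):
        remaining = self._size - self._read
        amount = remaining if amount is None else min(amount, remaining)
        self._read += amount
        return self._fileobj.read(amount)

    def __len__(self):
        # Lets HTTP clients use the window size as the content length
        # instead of stat'ing the whole file.
        return self._size

    def close(self):
        self._fileobj.close()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'0123456789')
chunk = ReadFileChunk(f.name, start_byte=2, size=4)
print(len(chunk), chunk.read())  # 4 b'2345'
chunk.close()
os.unlink(f.name)
```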
def _date_parser(date_string):
return parse(date_string).astimezone(tzlocal())
class BucketLister(object):
"""List keys in a bucket."""
def __init__(self, client, date_parser=_date_parser):
self._client = client
self._date_parser = date_parser
def list_objects(self, bucket, prefix=None, page_size=None):
kwargs = {'Bucket': bucket, 'PaginationConfig': {'PageSize': page_size}}
if prefix is not None:
kwargs['Prefix'] = prefix
paginator = self._client.get_paginator('list_objects')
pages = paginator.paginate(**kwargs)
for page in pages:
contents = page.get('Contents', [])
for content in contents:
source_path = bucket + '/' + content['Key']
content['LastModified'] = self._date_parser(
content['LastModified'])
yield source_path, content
class PrintTask(namedtuple('PrintTask',
['message', 'error', 'total_parts', 'warning'])):
def __new__(cls, message, error=False, total_parts=None, warning=None):
"""
:param message: An arbitrary string associated with the entry. This
can be used to communicate the result of the task.
:param error: Boolean indicating a failure.
:param total_parts: The total number of parts for multipart transfers.
:param warning: Boolean indicating a warning
"""
return super(PrintTask, cls).__new__(cls, message, error, total_parts,
warning)
IORequest = namedtuple('IORequest',
['filename', 'offset', 'data', 'is_stream'])
# Used to signal that IO for the filename is finished, and that
# any associated resources may be cleaned up.
_IOCloseRequest = namedtuple('IOCloseRequest', ['filename', 'desired_mtime'])
class IOCloseRequest(_IOCloseRequest):
def __new__(cls, filename, desired_mtime=None):
return super(IOCloseRequest, cls).__new__(cls, filename, desired_mtime)
class RequestParamsMapper(object):
"""A utility class that maps CLI params to request params
Each method in the class maps to a particular operation and will set
the request parameters depending on the operation and CLI parameters
provided. For each of the class's methods the parameters are as follows:
:type request_params: dict
:param request_params: A dictionary to be filled out with the appropriate
parameters for the specified client operation using the current CLI
parameters
:type cli_params: dict
:param cli_params: A dictionary of the current CLI params that will be
used to generate the request parameters for the specified operation
For example, take the mapping of request parameters for PutObject::
>>> cli_request_params = {'sse': 'AES256', 'storage_class': 'GLACIER'}
>>> request_params = {}
>>> RequestParamsMapper.map_put_object_params(
request_params, cli_request_params)
>>> print(request_params)
{'StorageClass': 'GLACIER', 'ServerSideEncryption': 'AES256'}
Note that existing parameters in ``request_params`` will be overridden if
a parameter in ``cli_params`` maps to the existing parameter.
"""
@classmethod
def map_put_object_params(cls, request_params, cli_params):
"""Map CLI params to PutObject request params"""
cls._set_general_object_params(request_params, cli_params)
cls._set_metadata_params(request_params, cli_params)
cls._set_sse_request_params(request_params, cli_params)
cls._set_sse_c_request_params(request_params, cli_params)
@classmethod
def map_get_object_params(cls, request_params, cli_params):
"""Map CLI params to GetObject request params"""
cls._set_sse_c_request_params(request_params, cli_params)
@classmethod
def map_copy_object_params(cls, request_params, cli_params):
"""Map CLI params to CopyObject request params"""
cls._set_general_object_params(request_params, cli_params)
cls._set_metadata_directive_param(request_params, cli_params)
cls._set_metadata_params(request_params, cli_params)
cls._auto_populate_metadata_directive(request_params)
cls._set_sse_request_params(request_params, cli_params)
cls._set_sse_c_and_copy_source_request_params(
request_params, cli_params)
@classmethod
def map_head_object_params(cls, request_params, cli_params):
"""Map CLI params to HeadObject request params"""
cls._set_sse_c_request_params(request_params, cli_params)
@classmethod
def map_create_multipart_upload_params(cls, request_params, cli_params):
"""Map CLI params to CreateMultipartUpload request params"""
cls._set_general_object_params(request_params, cli_params)
cls._set_sse_request_params(request_params, cli_params)
cls._set_sse_c_request_params(request_params, cli_params)
cls._set_metadata_params(request_params, cli_params)
@classmethod
def map_upload_part_params(cls, request_params, cli_params):
"""Map CLI params to UploadPart request params"""
cls._set_sse_c_request_params(request_params, cli_params)
@classmethod
def map_upload_part_copy_params(cls, request_params, cli_params):
"""Map CLI params to UploadPartCopy request params"""
cls._set_sse_c_and_copy_source_request_params(
request_params, cli_params)
@classmethod
def _set_general_object_params(cls, request_params, cli_params):
# Parameters set in this method should be applicable to the following
# operations involving objects: PutObject, CopyObject, and
# CreateMultipartUpload.
general_param_translation = {
'acl': 'ACL',
'storage_class': 'StorageClass',
'website_redirect': 'WebsiteRedirectLocation',
'content_type': 'ContentType',
'cache_control': 'CacheControl',
'content_disposition': 'ContentDisposition',
'content_encoding': 'ContentEncoding',
'content_language': 'ContentLanguage',
'expires': 'Expires'
}
for cli_param_name in general_param_translation:
if cli_params.get(cli_param_name):
request_param_name = general_param_translation[cli_param_name]
request_params[request_param_name] = cli_params[cli_param_name]
cls._set_grant_params(request_params, cli_params)
@classmethod
def _set_grant_params(cls, request_params, cli_params):
if cli_params.get('grants'):
for grant in cli_params['grants']:
try:
permission, grantee = grant.split('=', 1)
except ValueError:
raise ValueError('grants should be of the form '
'permission=principal')
request_params[cls._permission_to_param(permission)] = grantee
@classmethod
def _permission_to_param(cls, permission):
if permission == 'read':
return 'GrantRead'
if permission == 'full':
return 'GrantFullControl'
if permission == 'readacl':
return 'GrantReadACP'
if permission == 'writeacl':
return 'GrantWriteACP'
raise ValueError('permission must be one of: '
'read|readacl|writeacl|full')
@classmethod
def _set_metadata_params(cls, request_params, cli_params):
if cli_params.get('metadata'):
request_params['Metadata'] = cli_params['metadata']
@classmethod
def _auto_populate_metadata_directive(cls, request_params):
if request_params.get('Metadata') and \
not request_params.get('MetadataDirective'):
request_params['MetadataDirective'] = 'REPLACE'
@classmethod
def _set_metadata_directive_param(cls, request_params, cli_params):
if cli_params.get('metadata_directive'):
request_params['MetadataDirective'] = cli_params[
'metadata_directive']
@classmethod
def _set_sse_request_params(cls, request_params, cli_params):
if cli_params.get('sse'):
request_params['ServerSideEncryption'] = cli_params['sse']
if cli_params.get('sse_kms_key_id'):
request_params['SSEKMSKeyId'] = cli_params['sse_kms_key_id']
@classmethod
def _set_sse_c_request_params(cls, request_params, cli_params):
if cli_params.get('sse_c'):
request_params['SSECustomerAlgorithm'] = cli_params['sse_c']
request_params['SSECustomerKey'] = cli_params['sse_c_key']
@classmethod
def _set_sse_c_copy_source_request_params(cls, request_params, cli_params):
if cli_params.get('sse_c_copy_source'):
request_params['CopySourceSSECustomerAlgorithm'] = cli_params[
'sse_c_copy_source']
request_params['CopySourceSSECustomerKey'] = cli_params[
'sse_c_copy_source_key']
@classmethod
def _set_sse_c_and_copy_source_request_params(cls, request_params,
cli_params):
cls._set_sse_c_request_params(request_params, cli_params)
cls._set_sse_c_copy_source_request_params(request_params, cli_params)
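The grant mapping above can be exercised in isolation. This sketch reimplements the two helpers as a single module-level function (the names here are illustrative, not part of the awscli API):

```python
# Sketch of the grant parsing above: each --grants entry of the form
# permission=grantee becomes a Grant* request parameter.
PERMISSION_PARAMS = {
    'read': 'GrantRead',
    'full': 'GrantFullControl',
    'readacl': 'GrantReadACP',
    'writeacl': 'GrantWriteACP',
}

def set_grant_params(request_params, grants):
    for grant in grants:
        try:
            # Split only on the first '=', since the grantee value may
            # itself contain '=' (e.g. 'id=abc123').
            permission, grantee = grant.split('=', 1)
        except ValueError:
            raise ValueError('grants should be of the form '
                             'permission=principal')
        try:
            request_params[PERMISSION_PARAMS[permission]] = grantee
        except KeyError:
            raise ValueError('permission must be one of: '
                             'read|readacl|writeacl|full')

params = {}
set_grant_params(params, ['read=emailaddress=user@example.com'])
print(params)  # {'GrantRead': 'emailaddress=user@example.com'}
```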
# File: awscli-1.10.1/awscli/customizations/s3/transferconfig.py
# Copyright 2013-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.s3.utils import human_readable_to_bytes
# If the user does not specify any overrides,
# these are the default values we use for the s3 transfer
# commands.
DEFAULTS = {
'multipart_threshold': 8 * (1024 ** 2),
'multipart_chunksize': 8 * (1024 ** 2),
'max_concurrent_requests': 10,
'max_queue_size': 1000,
}
class InvalidConfigError(Exception):
pass
class RuntimeConfig(object):
POSITIVE_INTEGERS = ['multipart_chunksize', 'multipart_threshold',
'max_concurrent_requests', 'max_queue_size']
HUMAN_READABLE_SIZES = ['multipart_chunksize', 'multipart_threshold']
@staticmethod
def defaults():
return DEFAULTS.copy()
def build_config(self, **kwargs):
"""Create and convert a runtime config dictionary.
This method will merge and convert S3 runtime configuration
data into a single dictionary that can then be passed to classes
that use this runtime config.
:param kwargs: Any key in the ``DEFAULTS`` dict.
:return: A dictionary of the merged and converted values.
"""
runtime_config = DEFAULTS.copy()
if kwargs:
runtime_config.update(kwargs)
self._convert_human_readable_sizes(runtime_config)
self._validate_config(runtime_config)
return runtime_config
def _convert_human_readable_sizes(self, runtime_config):
for attr in self.HUMAN_READABLE_SIZES:
value = runtime_config.get(attr)
if value is not None and not isinstance(value, int):
runtime_config[attr] = human_readable_to_bytes(value)
def _validate_config(self, runtime_config):
for attr in self.POSITIVE_INTEGERS:
value = runtime_config.get(attr)
if value is not None:
try:
runtime_config[attr] = int(value)
if not runtime_config[attr] > 0:
self._error_positive_value(attr, value)
except ValueError:
self._error_positive_value(attr, value)
def _error_positive_value(self, name, value):
raise InvalidConfigError(
"Value for %s must be a positive integer: %s" % (name, value))
# File: awscli-1.10.1/awscli/customizations/s3/s3.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations import utils
from awscli.customizations.commands import BasicCommand
from awscli.customizations.s3.subcommands import ListCommand, WebsiteCommand, \
CpCommand, MvCommand, RmCommand, SyncCommand, MbCommand, RbCommand
from awscli.customizations.s3.syncstrategy.register import \
register_sync_strategies
def awscli_initialize(cli):
"""
This function is required to use the plugin. It calls the functions
required to add all necessary commands and parameters to the CLI.
This function is necessary to install the plugin using a
configuration file.
"""
cli.register("building-command-table.main", add_s3)
cli.register('building-command-table.sync', register_sync_strategies)
def s3_plugin_initialize(event_handlers):
"""
This is a wrapper to make the plugin built-in to the cli as opposed
to specifying it in the configuration file.
"""
awscli_initialize(event_handlers)
def add_s3(command_table, session, **kwargs):
"""
This creates a new service object for the s3 plugin. It moves the
old s3 commands to the ``s3api`` namespace.
"""
utils.rename_command(command_table, 's3', 's3api')
command_table['s3'] = S3(session)
class S3(BasicCommand):
NAME = 's3'
DESCRIPTION = BasicCommand.FROM_FILE('s3/_concepts.rst')
SYNOPSIS = "aws s3 [ ...]"
SUBCOMMANDS = [
{'name': 'ls', 'command_class': ListCommand},
{'name': 'website', 'command_class': WebsiteCommand},
{'name': 'cp', 'command_class': CpCommand},
{'name': 'mv', 'command_class': MvCommand},
{'name': 'rm', 'command_class': RmCommand},
{'name': 'sync', 'command_class': SyncCommand},
{'name': 'mb', 'command_class': MbCommand},
{'name': 'rb', 'command_class': RbCommand}
]
def _run_main(self, parsed_args, parsed_globals):
if parsed_args.subcommand is None:
raise ValueError("usage: aws [options] "
"[parameters]\naws: error: too few arguments")
# File: awscli-1.10.1/awscli/customizations/s3/fileinfobuilder.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.s3.fileinfo import FileInfo
class FileInfoBuilder(object):
"""
This class takes a ``FileBase`` object's attributes and generates
a ``FileInfo`` object so that the operation can be performed.
"""
def __init__(self, client, source_client=None,
parameters=None, is_stream=False):
self._client = client
self._source_client = client
if source_client is not None:
self._source_client = source_client
self._parameters = parameters
self._is_stream = is_stream
def call(self, files):
for file_base in files:
file_info = self._inject_info(file_base)
yield file_info
def _inject_info(self, file_base):
file_info_attr = {}
file_info_attr['src'] = file_base.src
file_info_attr['dest'] = file_base.dest
file_info_attr['compare_key'] = file_base.compare_key
file_info_attr['size'] = file_base.size
file_info_attr['last_update'] = file_base.last_update
file_info_attr['src_type'] = file_base.src_type
file_info_attr['dest_type'] = file_base.dest_type
file_info_attr['operation_name'] = file_base.operation_name
file_info_attr['parameters'] = self._parameters
file_info_attr['is_stream'] = self._is_stream
file_info_attr['associated_response_data'] = file_base.response_data
# This is a bit quirky. The below conditional hinges on the --delete
# flag being set, which only occurs during a sync command. The source
# client in a sync delete refers to the source of the sync rather than
# the source of the delete. What this means is that the client that
# gets called during the delete process would point to the wrong region.
# Normally this doesn't matter because DNS will re-route the request
# to the correct region. In the case of s3v4 signing, however, this
# would result in a failed delete. The conditional below fixes this
# issue by swapping clients only in the case of a sync delete since
# swapping which client is used in the delete function would then break
# moving under s3v4.
if (file_base.operation_name == 'delete' and
self._parameters.get('delete')):
file_info_attr['client'] = self._source_client
file_info_attr['source_client'] = self._client
else:
file_info_attr['client'] = self._client
file_info_attr['source_client'] = self._source_client
return FileInfo(**file_info_attr)
# File: awscli-1.10.1/awscli/customizations/s3/filters.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import fnmatch
import os
from awscli.customizations.s3.utils import split_s3_bucket_key
LOG = logging.getLogger(__name__)
def create_filter(parameters):
"""Given the CLI parameters dict, create a Filter object."""
# We need to evaluate all the filters based on the source
# directory.
if parameters['filters']:
cli_filters = parameters['filters']
real_filters = []
for filter_type, filter_pattern in cli_filters:
real_filters.append((filter_type.lstrip('-'),
filter_pattern))
source_location = parameters['src']
if source_location.startswith('s3://'):
# This gives us (bucket, keyname) and we want
# the bucket to be the root dir.
src_rootdir = _get_s3_root(source_location,
parameters['dir_op'])
else:
src_rootdir = _get_local_root(parameters['src'], parameters['dir_op'])
destination_location = parameters['dest']
if destination_location.startswith('s3://'):
dst_rootdir = _get_s3_root(parameters['dest'],
parameters['dir_op'])
else:
dst_rootdir = _get_local_root(parameters['dest'],
parameters['dir_op'])
return Filter(real_filters, src_rootdir, dst_rootdir)
else:
return Filter({}, None, None)
def _get_s3_root(source_location, dir_op):
# Obtain the bucket and the key.
bucket, key = split_s3_bucket_key(source_location)
if not dir_op and not key.endswith('/'):
# If we are not performing an operation on a directory and the key
# is of the form ``prefix/key``, we only want ``prefix`` included in
# the s3 root and not ``key``.
key = '/'.join(key.split('/')[:-1])
# Rejoin the bucket and key back together.
s3_path = '/'.join([bucket, key])
return s3_path
def _get_local_root(source_location, dir_op):
if dir_op:
rootdir = os.path.abspath(source_location)
else:
rootdir = os.path.abspath(os.path.dirname(source_location))
return rootdir
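The two root helpers above can be exercised in isolation. The sketch below mirrors the key-trimming in ``_get_s3_root`` (the helper name is illustrative, not part of the awscli API): for a non-directory operation the last key component is dropped so only the prefix anchors the filter patterns.

```python
def s3_root(bucket, key, dir_op):
    # For single-object operations, drop the final key component so the
    # root is the containing prefix; directory operations keep the key.
    if not dir_op and not key.endswith('/'):
        key = '/'.join(key.split('/')[:-1])
    return '/'.join([bucket, key])

print(s3_root('mybucket', 'photos/2016/cat.jpg', dir_op=False))  # mybucket/photos/2016
print(s3_root('mybucket', 'photos/2016/', dir_op=True))          # mybucket/photos/2016/
```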
class Filter(object):
"""
This is a universal exclude/include filter.
"""
def __init__(self, patterns, rootdir, dst_rootdir):
"""
:var patterns: A list of patterns. A pattern consists of a list
whose first member is a string 'exclude' or 'include'.
The second member is the actual rule.
:var rootdir: The root directory where the patterns are evaluated.
This will generally be the directory of the source location.
:var dst_rootdir: The destination root directory where the patterns are
evaluated. This is only useful when the --delete option is
also specified.
"""
self._original_patterns = patterns
self.patterns = self._full_path_patterns(patterns, rootdir)
self.dst_patterns = self._full_path_patterns(patterns, dst_rootdir)
def _full_path_patterns(self, original_patterns, rootdir):
# We need to transform the patterns into patterns that have
# the root dir prefixed, so things like ``--exclude "*"``
# will actually be ['exclude', '/path/to/root/*']
full_patterns = []
for pattern in original_patterns:
full_patterns.append(
(pattern[0], os.path.join(rootdir, pattern[1])))
return full_patterns
def call(self, file_infos):
"""
This function iterates over the yielded file_info objects. It
determines the type of the file and applies pattern matching to
determine if a rule applies. While iterating through the patterns, the
file is assigned a boolean flag that determines whether it should be
yielded on past the filter. Anything matched by an exclude filter
has its flag set to False. Anything matched by an include filter
has its flag set to True. All files begin with the flag set to True.
Rules listed later override flags set by rules listed
before them.
"""
for file_info in file_infos:
file_path = file_info.src
file_status = (file_info, True)
for pattern, dst_pattern in zip(self.patterns, self.dst_patterns):
current_file_status = self._match_pattern(pattern, file_info)
if current_file_status is not None:
file_status = current_file_status
dst_current_file_status = self._match_pattern(dst_pattern, file_info)
if dst_current_file_status is not None:
file_status = dst_current_file_status
LOG.debug("%s final filtered status, should_include: %s",
file_path, file_status[1])
if file_status[1]:
yield file_info
def _match_pattern(self, pattern, file_info):
file_status = None
file_path = file_info.src
pattern_type = pattern[0]
if file_info.src_type == 'local':
path_pattern = pattern[1].replace('/', os.sep)
else:
path_pattern = pattern[1].replace(os.sep, '/')
is_match = fnmatch.fnmatch(file_path, path_pattern)
if is_match and pattern_type == 'include':
file_status = (file_info, True)
LOG.debug("%s matched include filter: %s",
file_path, path_pattern)
elif is_match and pattern_type == 'exclude':
file_status = (file_info, False)
LOG.debug("%s matched exclude filter: %s",
file_path, path_pattern)
else:
LOG.debug("%s did not match %s filter: %s",
file_path, pattern_type, path_pattern)
return file_status
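The include/exclude semantics of the ``Filter`` class above can be sketched without any S3 machinery. The helper below is a minimal, hypothetical stand-in (not awscli API): patterns are anchored at the root directory, files default to included, and later rules override earlier ones.

```python
import fnmatch


def apply_filters(paths, filters, rootdir):
    # Anchor each pattern at the root dir, as _full_path_patterns does.
    full = [(ftype, rootdir.rstrip('/') + '/' + pat) for ftype, pat in filters]
    for path in paths:
        include = True  # files are included by default
        for ftype, pattern in full:
            if fnmatch.fnmatch(path, pattern):
                # The last matching rule wins.
                include = (ftype == 'include')
        if include:
            yield path


files = ['/data/a.log', '/data/a.txt', '/data/keep.log']
rules = [('exclude', '*.log'), ('include', 'keep.*')]
print(list(apply_filters(files, rules, '/data')))
# ['/data/a.txt', '/data/keep.log']
```

The exclude rule drops both ``.log`` files, then the later include rule pulls ``keep.log`` back in, matching the "later rules override" behavior documented in ``Filter.call``.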
awscli-1.10.1/awscli/customizations/s3/fileinfo.py 0000666 4542626 0000144 00000032046 12652514124 023204 0 ustar pysdk-ci amazon 0000000 0000000 import os
import logging
import sys
import time
from functools import partial
import errno
import hashlib
from dateutil.parser import parse
from dateutil.tz import tzlocal
from botocore.compat import quote
from awscli.customizations.s3.utils import find_bucket_key, \
uni_print, guess_content_type, MD5Error, bytes_print, set_file_utime, \
RequestParamsMapper
LOGGER = logging.getLogger(__name__)
class CreateDirectoryError(Exception):
pass
def read_file(filename):
"""
This reads the file into a form that can be sent to S3
"""
with open(filename, 'rb') as in_file:
return in_file.read()
def save_file(filename, response_data, last_update, is_stream=False):
"""
This writes to the file upon downloading. It reads the data in the
response, makes a new directory if needed, and then writes the
data to the file. It also sets the file's last modified time to that
of the S3 object.
"""
body = response_data['Body']
etag = response_data['ETag'][1:-1]
if not is_stream:
d = os.path.dirname(filename)
try:
if not os.path.exists(d):
os.makedirs(d)
except OSError as e:
if e.errno != errno.EEXIST:
raise CreateDirectoryError(
"Could not create directory %s: %s" % (d, e))
md5 = hashlib.md5()
file_chunks = iter(partial(body.read, 1024 * 1024), b'')
if is_stream:
# Need to save the data to be able to check the etag for a stream
# because once the data is written to the stream there is no
# undoing it.
payload = write_to_file(None, etag, md5, file_chunks, True)
else:
with open(filename, 'wb') as out_file:
write_to_file(out_file, etag, md5, file_chunks)
if _can_validate_md5_with_etag(etag, response_data):
if etag != md5.hexdigest():
if not is_stream:
os.remove(filename)
raise MD5Error(filename)
if not is_stream:
last_update_tuple = last_update.timetuple()
mod_timestamp = time.mktime(last_update_tuple)
set_file_utime(filename, int(mod_timestamp))
else:
# Now write the output to stdout since the md5 is correct.
bytes_print(payload)
sys.stdout.flush()
def _can_validate_md5_with_etag(etag, response_data):
sse = response_data.get('ServerSideEncryption', None)
sse_customer_algorithm = response_data.get('SSECustomerAlgorithm', None)
if not _is_multipart_etag(etag) and sse != 'aws:kms' and \
sse_customer_algorithm is None:
return True
return False
def write_to_file(out_file, etag, md5, file_chunks, is_stream=False):
"""
Updates the md5 for each file chunk. It writes the chunks to the file
if given a file object; if it is a stream, it returns a byte string to
be written to the stream later.
"""
body = b''
for chunk in file_chunks:
if not _is_multipart_etag(etag):
md5.update(chunk)
if is_stream:
body += chunk
else:
out_file.write(chunk)
return body
def _is_multipart_etag(etag):
return '-' in etag
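The ETag validation used by ``save_file`` above rests on one property: for a single-part object, S3's ETag is the hex MD5 of the body, so a download can be checksummed locally; multipart ETags contain a ``-`` and are skipped. A minimal sketch (example values, not awscli code):

```python
import hashlib


def is_multipart_etag(etag):
    # Multipart ETags look like '<md5-of-part-md5s>-<part-count>'.
    return '-' in etag


body = b'hello world'
etag = hashlib.md5(body).hexdigest()
print(is_multipart_etag(etag))                                  # False
print(is_multipart_etag('9bb58f26192e4ba00f01e2e7b136bbd8-5'))  # True
print(etag == hashlib.md5(body).hexdigest())                    # True: download validates
```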
class TaskInfo(object):
"""
This class contains important details related to performing a task. This
object is usually only used for creating buckets, removing buckets, and
listing objects/buckets. This object contains the attributes and
functions needed to perform the task. Note that just instantiating one
of these objects will not be enough to run a listing or bucket command
unless a ``client`` is specified upon instantiation.
:param src: the source path
:type src: string
:param src_type: if the source file is s3 or local.
:type src_type: string
:param operation_name: the operation being performed.
:type operation_name: string
:param client: The client used to make the operation's service calls.
Note that a local file will always have its absolute path, and an s3 file
will have its path in the form of bucket/key
"""
def __init__(self, src, src_type, operation_name, client):
self.src = src
self.src_type = src_type
self.operation_name = operation_name
self.client = client
def make_bucket(self):
"""
This operation makes a bucket.
"""
bucket, key = find_bucket_key(self.src)
bucket_config = {'LocationConstraint': self.client.meta.region_name}
params = {'Bucket': bucket}
if self.client.meta.region_name != 'us-east-1':
params['CreateBucketConfiguration'] = bucket_config
self.client.create_bucket(**params)
def remove_bucket(self):
"""
This operation removes a bucket.
"""
bucket, key = find_bucket_key(self.src)
self.client.delete_bucket(Bucket=bucket)
def is_glacier_compatible(self):
# These operations do not involve transferring glacier objects
# so they are always glacier compatible.
return True
class FileInfo(TaskInfo):
"""
This is a child object of the ``TaskInfo`` object. It can perform more
operations such as ``upload``, ``download``, ``copy``, ``delete``,
``move``. Similarly to ``TaskInfo`` objects, attributes like ``client``
need to be set in order to perform operations.
:param dest: the destination path
:type dest: string
:param compare_key: the name of the file relative to the specified
directory/prefix. This variable is used when performing syncing
or if the destination file is adopting the source file's name.
:type compare_key: string
:param size: The size of the file in bytes.
:type size: integer
:param last_update: the local time of last modification.
:type last_update: datetime object
:param dest_type: if the destination is s3 or local.
:type dest_type: string
:param parameters: a dictionary of important values; this is assigned in
the ``BasicTask`` object.
:param associated_response_data: The response data used by
the ``FileGenerator`` to create this task. It is either a dictionary
from a ``ListObjects`` response or the response from a ``HeadObject`` call. It
will only be filled if the task was generated from an S3 bucket.
"""
def __init__(self, src, dest=None, compare_key=None, size=None,
last_update=None, src_type=None, dest_type=None,
operation_name=None, client=None, parameters=None,
source_client=None, is_stream=False,
associated_response_data=None):
super(FileInfo, self).__init__(src, src_type=src_type,
operation_name=operation_name,
client=client)
self.dest = dest
self.dest_type = dest_type
self.compare_key = compare_key
self.size = size
self.last_update = last_update
# Usually inject ``parameters`` from ``BasicTask`` class.
self.parameters = {}
if parameters is not None:
self.parameters = parameters
self.source_client = source_client
self.is_stream = is_stream
self.associated_response_data = associated_response_data
def set_size_from_s3(self):
"""
This runs a ``HeadObject`` on the s3 object and sets the size.
"""
bucket, key = find_bucket_key(self.src)
params = {'Bucket': bucket,
'Key': key}
RequestParamsMapper.map_head_object_params(params, self.parameters)
response_data = self.client.head_object(**params)
self.size = int(response_data['ContentLength'])
def is_glacier_compatible(self):
"""Determines if a file info object is glacier compatible
Operations will fail if the S3 object has a storage class of GLACIER
and it involves copying from S3 to S3, downloading from S3, or moving
where S3 is the source (the delete would actually succeed, but we do
not want to fail to transfer the file and then successfully delete it).
:returns: True if the FileInfo's operation will not fail because the
operation is on a glacier object. False if it will fail.
"""
if self._is_glacier_object(self.associated_response_data):
if self.operation_name in ['copy', 'download']:
return False
elif self.operation_name == 'move':
if self.src_type == 's3':
return False
return True
def _is_glacier_object(self, response_data):
if response_data:
if response_data.get('StorageClass') == 'GLACIER' and \
not self._is_restored(response_data):
return True
return False
def _is_restored(self, response_data):
# Returns True if this is a glacier object that has been
# restored back to S3.
# 'Restore' looks like: 'ongoing-request="false", expiry-date="..."'
return 'ongoing-request="false"' in response_data.get('Restore', '')
def upload(self, payload=None):
"""
Redirects the file to the multipart upload function if the file is
large. If it is small enough, it puts the file as an object in s3.
"""
if payload:
self._handle_upload(payload)
else:
with open(self.src, 'rb') as body:
self._handle_upload(body)
def _handle_upload(self, body):
bucket, key = find_bucket_key(self.dest)
params = {
'Bucket': bucket,
'Key': key,
'Body': body,
}
self._inject_content_type(params)
RequestParamsMapper.map_put_object_params(params, self.parameters)
response_data = self.client.put_object(**params)
def _inject_content_type(self, params):
if not self.parameters['guess_mime_type']:
return
filename = self.src
# Add a content type param if we can guess the type.
try:
guessed_type = guess_content_type(filename)
if guessed_type is not None:
params['ContentType'] = guessed_type
# This catches a bug in the mimetypes library where some MIME types,
# specifically on Windows machines, cause a UnicodeDecodeError
# because the MIME type in the Windows registry has an encoding
# that cannot be properly encoded using the default system encoding.
# https://bugs.python.org/issue9291
#
# So instead of hard failing, just log the issue and fall back to the
# default guessed content type of None.
except UnicodeDecodeError:
LOGGER.debug(
'Unable to guess content type for %s due to '
'UnicodeDecodeError: ', filename, exc_info=True
)
def download(self):
"""
Redirects the file to the multipart download function if the file is
large. If it is small enough, it gets the file as an object from s3.
"""
bucket, key = find_bucket_key(self.src)
params = {'Bucket': bucket, 'Key': key}
RequestParamsMapper.map_get_object_params(params, self.parameters)
response_data = self.client.get_object(**params)
save_file(self.dest, response_data, self.last_update,
self.is_stream)
def copy(self):
"""
Copies an object in s3 to another location in s3.
"""
source_bucket, source_key = find_bucket_key(self.src)
copy_source = {'Bucket': source_bucket, 'Key': source_key}
bucket, key = find_bucket_key(self.dest)
params = {'Bucket': bucket,
'CopySource': copy_source, 'Key': key}
self._inject_content_type(params)
RequestParamsMapper.map_copy_object_params(params, self.parameters)
response_data = self.client.copy_object(**params)
def delete(self):
"""
Deletes the file from s3 or local. The src file and type are taken
from the file info object.
"""
if self.src_type == 's3':
bucket, key = find_bucket_key(self.src)
params = {'Bucket': bucket, 'Key': key}
self.source_client.delete_object(**params)
else:
os.remove(self.src)
def move(self):
"""
Implements a move command for s3.
"""
src = self.src_type
dest = self.dest_type
if src == 'local' and dest == 's3':
self.upload()
elif src == 's3' and dest == 's3':
self.copy()
elif src == 's3' and dest == 'local':
self.download()
else:
raise Exception("Invalid path arguments for mv")
self.delete()
def create_multipart_upload(self):
bucket, key = find_bucket_key(self.dest)
params = {'Bucket': bucket, 'Key': key}
self._inject_content_type(params)
RequestParamsMapper.map_create_multipart_upload_params(
params, self.parameters)
response_data = self.client.create_multipart_upload(**params)
upload_id = response_data['UploadId']
return upload_id
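The glacier rules spread across ``FileInfo.is_glacier_compatible`` and its helpers above reduce to a small decision function. The sketch below restates them outside the class (hypothetical helper, illustrative response dicts): copy, download, and S3-sourced moves fail on unrestored GLACIER objects; everything else proceeds.

```python
def is_glacier_compatible(operation, src_type, response):
    # A restored glacier object carries 'ongoing-request="false"' in Restore.
    restored = 'ongoing-request="false"' in response.get('Restore', '')
    glacier = response.get('StorageClass') == 'GLACIER' and not restored
    if glacier:
        if operation in ('copy', 'download'):
            return False
        if operation == 'move' and src_type == 's3':
            return False
    return True


resp = {'StorageClass': 'GLACIER'}
print(is_glacier_compatible('download', 's3', resp))   # False: would fail
print(is_glacier_compatible('delete', 's3', resp))     # True: deletes succeed
restored = {'StorageClass': 'GLACIER',
            'Restore': 'ongoing-request="false", expiry-date="..."'}
print(is_glacier_compatible('download', 's3', restored))  # True: restored
```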
awscli-1.10.1/awscli/customizations/s3/s3handler.py 0000666 4542626 0000144 00000061030 12652514124 023267 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from collections import namedtuple
import logging
import math
import os
import sys
from awscli.customizations.s3.utils import find_chunksize, \
find_bucket_key, relative_path, PrintTask, create_warning
from awscli.customizations.s3.executor import Executor
from awscli.customizations.s3 import tasks
from awscli.customizations.s3.transferconfig import RuntimeConfig
from awscli.compat import six
from awscli.compat import queue
LOGGER = logging.getLogger(__name__)
# Maximum object size allowed in S3.
# See: http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
MAX_UPLOAD_SIZE = 5 * (1024 ** 4)
CommandResult = namedtuple('CommandResult',
['num_tasks_failed', 'num_tasks_warned'])
class S3Handler(object):
"""
This class sets up the process to perform the tasks sent to it. It
creates ``self.executor``, from which threads inside the
class pull tasks to complete.
"""
MAX_IO_QUEUE_SIZE = 20
def __init__(self, session, params, result_queue=None,
runtime_config=None):
self.session = session
if runtime_config is None:
runtime_config = RuntimeConfig.defaults()
self._runtime_config = runtime_config
# The write_queue has potential for optimizations, so the constant
# for maxsize is scoped to this class (as opposed to constants.py)
# so we have the ability to change this value later.
self.write_queue = queue.Queue(maxsize=self.MAX_IO_QUEUE_SIZE)
self.result_queue = result_queue
if not self.result_queue:
self.result_queue = queue.Queue()
self.params = {
'dryrun': False, 'quiet': False, 'acl': None,
'guess_mime_type': True, 'sse_c_copy_source': None,
'sse_c_copy_source_key': None, 'sse': None,
'sse_c': None, 'sse_c_key': None, 'sse_kms_key_id': None,
'storage_class': None, 'website_redirect': None,
'content_type': None, 'cache_control': None,
'content_disposition': None, 'content_encoding': None,
'content_language': None, 'expires': None, 'grants': None,
'only_show_errors': False, 'is_stream': False,
'paths_type': None, 'expected_size': None, 'metadata': None,
'metadata_directive': None, 'ignore_glacier_warnings': False
}
self.params['region'] = params['region']
for key in self.params.keys():
if key in params:
self.params[key] = params[key]
self.multi_threshold = self._runtime_config['multipart_threshold']
self.chunksize = self._runtime_config['multipart_chunksize']
LOGGER.debug("Using a multipart threshold of %s and a part size of %s",
self.multi_threshold, self.chunksize)
self.executor = Executor(
num_threads=self._runtime_config['max_concurrent_requests'],
result_queue=self.result_queue,
quiet=self.params['quiet'],
only_show_errors=self.params['only_show_errors'],
max_queue_size=self._runtime_config['max_queue_size'],
write_queue=self.write_queue
)
self._multipart_uploads = []
self._multipart_downloads = []
def call(self, files):
"""
This function pulls a ``FileInfo`` or ``TaskInfo`` object from
a list ``files``. Each object is then evaluated to determine whether
it requires a multipart operation, and the necessary attributes are
added if so. Each object is then wrapped in a ``BasicTask`` object,
which is essentially a unit of work for a thread to execute. These
tasks are then submitted to the main executor.
"""
try:
self.executor.start()
total_files, total_parts = self._enqueue_tasks(files)
self.executor.print_thread.set_total_files(total_files)
self.executor.print_thread.set_total_parts(total_parts)
self.executor.initiate_shutdown()
self.executor.wait_until_shutdown()
self._shutdown()
except Exception as e:
LOGGER.debug('Exception caught during task execution: %s',
str(e), exc_info=True)
self.result_queue.put(PrintTask(message=str(e), error=True))
self.executor.initiate_shutdown(
priority=self.executor.IMMEDIATE_PRIORITY)
self._shutdown()
self.executor.wait_until_shutdown()
except KeyboardInterrupt:
self.result_queue.put(PrintTask(message=("Cleaning up. "
"Please wait..."),
error=True))
self.executor.initiate_shutdown(
priority=self.executor.IMMEDIATE_PRIORITY)
self._shutdown()
self.executor.wait_until_shutdown()
return CommandResult(self.executor.num_tasks_failed,
self.executor.num_tasks_warned)
def _shutdown(self):
# And finally we need to make a pass through all the existing
# multipart uploads and abort any pending multipart uploads.
self._abort_pending_multipart_uploads()
self._remove_pending_downloads()
def _abort_pending_multipart_uploads(self):
# For the purpose of aborting uploads, we consider any
# upload context with an upload id.
for upload, filename in self._multipart_uploads:
if upload.is_cancelled():
try:
upload.wait_for_upload_id()
except tasks.UploadCancelledError:
pass
else:
# This means that the upload went from STARTED -> CANCELLED.
# This could happen if a part thread decided to cancel the
# upload. We need to explicitly abort the upload here.
self._cancel_upload(upload.wait_for_upload_id(), filename)
upload.cancel_upload(self._cancel_upload, args=(filename,))
def _remove_pending_downloads(self):
# The downloads case is easier than the uploads case because we don't
# need to make any service calls. To properly cleanup we just need
# to go through the multipart downloads that were in progress but
# cancelled and remove the local file.
for context, local_filename in self._multipart_downloads:
if (context.is_cancelled() or context.is_started()) and \
os.path.exists(local_filename):
# The file is in an inconsistent state (not all the parts
# were written to the file) so we should remove the
# local file rather than leave it in a bad state. We don't
# want to remove the files if the download has *not* been
# started because we haven't touched the file yet, so it's
# better to leave the old version of the file rather than
# deleting the file entirely.
os.remove(local_filename)
context.cancel()
def _cancel_upload(self, upload_id, filename):
bucket, key = find_bucket_key(filename.dest)
params = {
'Bucket': bucket,
'Key': key,
'UploadId': upload_id,
}
LOGGER.debug("Aborting multipart upload for: %s", key)
filename.client.abort_multipart_upload(**params)
def _enqueue_tasks(self, files):
total_files = 0
total_parts = 0
for filename in files:
num_uploads = 1
is_multipart_task = self._is_multipart_task(filename)
too_large = False
if hasattr(filename, 'size'):
too_large = filename.size > MAX_UPLOAD_SIZE
if too_large and filename.operation_name == 'upload':
warning_message = "File exceeds s3 upload limit of 5 TB."
warning = create_warning(relative_path(filename.src),
message=warning_message)
self.result_queue.put(warning)
# Warn and skip over glacier incompatible tasks.
elif not filename.is_glacier_compatible():
LOGGER.debug(
'Encountered glacier object s3://%s. Not performing '
'%s on object.' % (filename.src, filename.operation_name))
if not self.params['ignore_glacier_warnings']:
warning = create_warning(
's3://'+filename.src,
'Object is of storage class GLACIER. Unable to '
'perform %s operations on GLACIER objects. You must '
'restore the object to be able to perform the '
'operation.' %
filename.operation_name
)
self.result_queue.put(warning)
continue
elif is_multipart_task and not self.params['dryrun']:
# If we're in dryrun mode, then we don't need the
# real multipart tasks. We can just use a BasicTask
# in the else clause below, which will print out the
# fact that it's transferring a file rather than
# the specific part tasks required to perform the
# transfer.
num_uploads = self._enqueue_multipart_tasks(filename)
else:
task = tasks.BasicTask(
session=self.session, filename=filename,
parameters=self.params,
result_queue=self.result_queue)
self.executor.submit(task)
total_files += 1
total_parts += num_uploads
return total_files, total_parts
def _is_multipart_task(self, filename):
# First we need to determine if it's an operation that even
# qualifies for multipart upload.
if hasattr(filename, 'size'):
above_multipart_threshold = filename.size > self.multi_threshold
if above_multipart_threshold:
if filename.operation_name in ('upload', 'download',
'move', 'copy'):
return True
else:
return False
else:
return False
def _enqueue_multipart_tasks(self, filename):
num_uploads = 1
if filename.operation_name == 'upload':
num_uploads = self._enqueue_multipart_upload_tasks(filename)
elif filename.operation_name == 'move':
if filename.src_type == 'local' and filename.dest_type == 's3':
num_uploads = self._enqueue_multipart_upload_tasks(
filename, remove_local_file=True)
elif filename.src_type == 's3' and filename.dest_type == 'local':
num_uploads = self._enqueue_range_download_tasks(
filename, remove_remote_file=True)
elif filename.src_type == 's3' and filename.dest_type == 's3':
num_uploads = self._enqueue_multipart_copy_tasks(
filename, remove_remote_file=True)
else:
raise ValueError("Unknown transfer type of %s -> %s" %
(filename.src_type, filename.dest_type))
elif filename.operation_name == 'copy':
num_uploads = self._enqueue_multipart_copy_tasks(
filename, remove_remote_file=False)
elif filename.operation_name == 'download':
num_uploads = self._enqueue_range_download_tasks(filename)
return num_uploads
def _enqueue_range_download_tasks(self, filename, remove_remote_file=False):
chunksize = find_chunksize(filename.size, self.chunksize)
num_downloads = int(filename.size / chunksize)
context = tasks.MultipartDownloadContext(num_downloads)
create_file_task = tasks.CreateLocalFileTask(
context=context, filename=filename,
result_queue=self.result_queue)
self.executor.submit(create_file_task)
self._do_enqueue_range_download_tasks(
filename=filename, chunksize=chunksize,
num_downloads=num_downloads, context=context,
remove_remote_file=remove_remote_file
)
complete_file_task = tasks.CompleteDownloadTask(
context=context, filename=filename, result_queue=self.result_queue,
params=self.params, io_queue=self.write_queue)
self.executor.submit(complete_file_task)
self._multipart_downloads.append((context, filename.dest))
if remove_remote_file:
remove_task = tasks.RemoveRemoteObjectTask(
filename=filename, context=context)
self.executor.submit(remove_task)
return num_downloads
def _do_enqueue_range_download_tasks(self, filename, chunksize,
num_downloads, context,
remove_remote_file=False):
for i in range(num_downloads):
task = tasks.DownloadPartTask(
part_number=i, chunk_size=chunksize,
result_queue=self.result_queue, filename=filename,
context=context, io_queue=self.write_queue,
params=self.params)
self.executor.submit(task)
def _enqueue_multipart_upload_tasks(self, filename,
remove_local_file=False):
# First we need to create a CreateMultipartUpload task,
# then create UploadTask objects for each of the parts.
# And finally enqueue a CompleteMultipartUploadTask.
chunksize = find_chunksize(filename.size, self.chunksize)
num_uploads = int(math.ceil(filename.size /
float(chunksize)))
upload_context = self._enqueue_upload_start_task(
chunksize, num_uploads, filename)
self._enqueue_upload_tasks(
num_uploads, chunksize, upload_context, filename, tasks.UploadPartTask)
self._enqueue_upload_end_task(filename, upload_context)
if remove_local_file:
remove_task = tasks.RemoveFileTask(local_filename=filename.src,
upload_context=upload_context)
self.executor.submit(remove_task)
return num_uploads
def _enqueue_multipart_copy_tasks(self, filename,
remove_remote_file=False):
chunksize = find_chunksize(filename.size, self.chunksize)
num_uploads = int(math.ceil(filename.size / float(chunksize)))
upload_context = self._enqueue_upload_start_task(
chunksize, num_uploads, filename)
self._enqueue_upload_tasks(
num_uploads, chunksize, upload_context, filename, tasks.CopyPartTask)
self._enqueue_upload_end_task(filename, upload_context)
if remove_remote_file:
remove_task = tasks.RemoveRemoteObjectTask(
filename=filename, context=upload_context)
self.executor.submit(remove_task)
return num_uploads
def _enqueue_upload_start_task(self, chunksize, num_uploads, filename):
upload_context = tasks.MultipartUploadContext(
expected_parts=num_uploads)
create_multipart_upload_task = tasks.CreateMultipartUploadTask(
session=self.session, filename=filename,
parameters=self.params,
result_queue=self.result_queue, upload_context=upload_context)
self.executor.submit(create_multipart_upload_task)
return upload_context
def _enqueue_upload_tasks(self, num_uploads, chunksize, upload_context,
filename, task_class):
for i in range(1, (num_uploads + 1)):
self._enqueue_upload_single_part_task(
part_number=i,
chunk_size=chunksize,
upload_context=upload_context,
filename=filename,
task_class=task_class
)
def _enqueue_upload_single_part_task(self, part_number, chunk_size,
upload_context, filename, task_class,
payload=None):
kwargs = {'part_number': part_number, 'chunk_size': chunk_size,
'result_queue': self.result_queue,
'upload_context': upload_context, 'filename': filename,
'params': self.params}
if payload:
kwargs['payload'] = payload
task = task_class(**kwargs)
self.executor.submit(task)
def _enqueue_upload_end_task(self, filename, upload_context):
complete_multipart_upload_task = tasks.CompleteMultipartUploadTask(
session=self.session, filename=filename, parameters=self.params,
result_queue=self.result_queue, upload_context=upload_context)
self.executor.submit(complete_multipart_upload_task)
self._multipart_uploads.append((upload_context, filename))
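The part counts computed in ``_enqueue_multipart_upload_tasks`` and ``_enqueue_multipart_copy_tasks`` above are just ceil(size / chunksize), written in the Python 2-compatible float form. A quick worked example (the sizes are illustrative; ``find_chunksize`` may grow the chunk size for very large files so the part count stays within S3's limits):

```python
import math

size = 5 * 1024 ** 3        # a 5 GiB file
chunksize = 8 * 1024 ** 2   # 8 MiB parts
# Same expression as in the handler: float division keeps the ceil
# correct under Python 2 integer division.
num_uploads = int(math.ceil(size / float(chunksize)))
print(num_uploads)  # 640
```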
class S3StreamHandler(S3Handler):
"""
This class is an alternative ``S3Handler`` to be used when the operation
involves a stream since the logic is different when uploading and
downloading streams.
"""
# This ensures that the number of multipart chunks waiting in the
# executor queue and in the threads is limited.
MAX_EXECUTOR_QUEUE_SIZE = 2
EXECUTOR_NUM_THREADS = 6
def __init__(self, session, params, result_queue=None,
runtime_config=None):
if runtime_config is None:
# Rather than using the .defaults(), streaming
# has different default values so that it does not
# consume large amounts of memory.
runtime_config = RuntimeConfig().build_config(
max_queue_size=self.MAX_EXECUTOR_QUEUE_SIZE,
max_concurrent_requests=self.EXECUTOR_NUM_THREADS)
super(S3StreamHandler, self).__init__(session, params, result_queue,
runtime_config)
def _enqueue_tasks(self, files):
total_files = 0
total_parts = 0
for filename in files:
num_uploads = 1
# If uploading stream, it is required to read from the stream
# to determine if the stream needs to be multipart uploaded.
payload = None
if filename.operation_name == 'upload':
payload, is_multipart_task = \
self._pull_from_stream(self.multi_threshold)
else:
# Set the file size for the ``FileInfo`` object since
# streams do not use a ``FileGenerator`` that usually
# determines the size.
filename.set_size_from_s3()
is_multipart_task = self._is_multipart_task(filename)
if is_multipart_task and not self.params['dryrun']:
# If we're in dryrun mode, then we don't need the
# real multipart tasks. We can just use a BasicTask
# in the else clause below, which will print out the
# fact that it's transferring a file rather than
# the specific part tasks required to perform the
# transfer.
num_uploads = self._enqueue_multipart_tasks(filename, payload)
else:
task = tasks.BasicTask(
session=self.session, filename=filename,
parameters=self.params,
result_queue=self.result_queue,
payload=payload)
self.executor.submit(task)
total_files += 1
total_parts += num_uploads
return total_files, total_parts
def _pull_from_stream(self, amount_requested):
"""
This function pulls data from stdin until it hits the amount
requested or there is no more data left to pull from stdin. The
function wraps the data into a ``BytesIO`` object that is returned
along with a boolean telling whether the amount requested is
the amount returned.
"""
stream_filein = sys.stdin
if six.PY3:
stream_filein = sys.stdin.buffer
payload = stream_filein.read(amount_requested)
payload_file = six.BytesIO(payload)
return payload_file, len(payload) == amount_requested
def _enqueue_multipart_tasks(self, filename, payload=None):
num_uploads = 1
if filename.operation_name == 'upload':
num_uploads = self._enqueue_multipart_upload_tasks(filename,
payload=payload)
elif filename.operation_name == 'download':
num_uploads = self._enqueue_range_download_tasks(filename)
return num_uploads
def _enqueue_range_download_tasks(self, filename, remove_remote_file=False):
# Create the context for the multipart download.
chunksize = find_chunksize(filename.size, self.chunksize)
num_downloads = int(filename.size / chunksize)
context = tasks.MultipartDownloadContext(num_downloads)
# No file is needed for downloading a stream. So just announce
# that it has been made since it is required for the context to
# begin downloading.
context.announce_file_created()
# Submit download part tasks to the executor.
self._do_enqueue_range_download_tasks(
filename=filename, chunksize=chunksize,
num_downloads=num_downloads, context=context,
remove_remote_file=remove_remote_file
)
return num_downloads
def _enqueue_multipart_upload_tasks(self, filename, payload=None):
# First we need to create a CreateMultipartUpload task,
# then create UploadTask objects for each of the parts.
# And finally enqueue a CompleteMultipartUploadTask.
chunksize = self.chunksize
# Determine an appropriate chunksize if given an expected size.
if self.params['expected_size']:
chunksize = find_chunksize(int(self.params['expected_size']),
self.chunksize)
num_uploads = '...'
# Submit a task to begin the multipart upload.
upload_context = self._enqueue_upload_start_task(
chunksize, num_uploads, filename)
# Now submit a task to upload the initial chunk of data pulled
# from the stream that was used to determine if a multipart upload
# was needed.
self._enqueue_upload_single_part_task(
part_number=1, chunk_size=chunksize,
upload_context=upload_context, filename=filename,
task_class=tasks.UploadPartTask, payload=payload
)
# Submit tasks to upload the rest of the chunks of the data coming in
# from standard input.
num_uploads = self._enqueue_upload_tasks(
num_uploads, chunksize, upload_context,
filename, tasks.UploadPartTask
)
# Submit a task to notify the multipart upload is complete.
self._enqueue_upload_end_task(filename, upload_context)
return num_uploads
def _enqueue_upload_tasks(self, num_uploads, chunksize, upload_context,
filename, task_class):
        # The previous upload occurred right after the multipart
# upload started for a stream.
num_uploads = 1
while True:
# Pull more data from standard input.
payload, is_remaining = self._pull_from_stream(chunksize)
# Submit an upload part task for the recently pulled data.
self._enqueue_upload_single_part_task(
part_number=num_uploads+1,
chunk_size=chunksize,
upload_context=upload_context,
filename=filename,
task_class=task_class,
payload=payload
)
num_uploads += 1
if not is_remaining:
break
# Once there is no more data left, announce to the context how
# many parts are being uploaded so it knows when it can quit.
upload_context.announce_total_parts(num_uploads)
return num_uploads
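Stripped of the task and context machinery, the control flow of the loop above looks like this (illustrative names only; part 1 is assumed to have been consumed when the multipart upload started, matching the `num_uploads = 1` seed):

```python
import io

def count_stream_parts(stream, chunksize):
    num_uploads = 1  # part 1 was pulled before this loop started
    while True:
        payload = stream.read(chunksize)
        is_remaining = len(payload) == chunksize
        num_uploads += 1  # a part is submitted before the EOF check
        if not is_remaining:
            break
    return num_uploads

parts = count_stream_parts(io.BytesIO(b'x' * 10), 4)
```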
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import sys
from botocore.client import Config
from dateutil.parser import parse
from dateutil.tz import tzlocal
from awscli.compat import six
from awscli.compat import queue
from awscli.customizations.commands import BasicCommand
from awscli.customizations.s3.comparator import Comparator
from awscli.customizations.s3.fileinfobuilder import FileInfoBuilder
from awscli.customizations.s3.fileformat import FileFormat
from awscli.customizations.s3.filegenerator import FileGenerator
from awscli.customizations.s3.fileinfo import TaskInfo, FileInfo
from awscli.customizations.s3.filters import create_filter
from awscli.customizations.s3.s3handler import S3Handler, S3StreamHandler
from awscli.customizations.s3.utils import find_bucket_key, uni_print, \
AppendFilter, find_dest_path_comp_key, human_readable_size, \
RequestParamsMapper
from awscli.customizations.s3.syncstrategy.base import MissingFileSync, \
SizeAndLastModifiedSync, NeverSync
from awscli.customizations.s3 import transferconfig
RECURSIVE = {'name': 'recursive', 'action': 'store_true', 'dest': 'dir_op',
'help_text': (
"Command is performed on all files or objects "
"under the specified directory or prefix.")}
HUMAN_READABLE = {'name': 'human-readable', 'action': 'store_true',
'help_text': "Displays file sizes in human readable format."}
SUMMARIZE = {'name': 'summarize', 'action': 'store_true',
'help_text': (
"Displays summary information "
"(number of objects, total size).")}
DRYRUN = {'name': 'dryrun', 'action': 'store_true',
'help_text': (
"Displays the operations that would be performed using the "
"specified command without actually running them.")}
QUIET = {'name': 'quiet', 'action': 'store_true',
'help_text': (
"Does not display the operations performed from the specified "
"command.")}
FORCE = {'name': 'force', 'action': 'store_true',
'help_text': (
"Deletes all objects in the bucket including the bucket itself. "
"Note that versioned objects will not be deleted in this "
"process which would cause the bucket deletion to fail because "
"the bucket would not be empty. To delete versioned "
"objects use the ``s3api delete-object`` command with "
"the ``--version-id`` parameter.")}
FOLLOW_SYMLINKS = {'name': 'follow-symlinks', 'action': 'store_true',
'default': True, 'group_name': 'follow_symlinks',
'help_text': (
"Symbolic links are followed "
"only when uploading to S3 from the local filesystem. "
"Note that S3 does not support symbolic links, so the "
"contents of the link target are uploaded under the "
"name of the link. When neither ``--follow-symlinks`` "
"nor ``--no-follow-symlinks`` is specifed, the default "
"is to follow symlinks.")}
NO_FOLLOW_SYMLINKS = {'name': 'no-follow-symlinks', 'action': 'store_false',
'dest': 'follow_symlinks', 'default': True,
'group_name': 'follow_symlinks'}
NO_GUESS_MIME_TYPE = {'name': 'no-guess-mime-type', 'action': 'store_false',
'dest': 'guess_mime_type', 'default': True,
'help_text': (
"Do not try to guess the mime type for "
"uploaded files. By default the mime type of a "
"file is guessed when it is uploaded.")}
CONTENT_TYPE = {'name': 'content-type',
'help_text': (
"Specify an explicit content type for this operation. "
"This value overrides any guessed mime types.")}
EXCLUDE = {'name': 'exclude', 'action': AppendFilter, 'nargs': 1,
'dest': 'filters',
'help_text': (
"Exclude all files or objects from the command that matches "
"the specified pattern.")}
INCLUDE = {'name': 'include', 'action': AppendFilter, 'nargs': 1,
'dest': 'filters',
'help_text': (
"Don't exclude files or objects "
"in the command that match the specified pattern. "
'See Use of '
'Exclude and Include Filters for details.')}
ACL = {'name': 'acl',
'choices': ['private', 'public-read', 'public-read-write',
'authenticated-read', 'bucket-owner-read',
'bucket-owner-full-control', 'log-delivery-write'],
'help_text': (
"Sets the ACL for the object when the command is "
"performed. If you use this parameter you must have the "
'"s3:PutObjectAcl" permission included in the list of actions '
"for your IAM policy. "
"Only accepts values of ``private``, ``public-read``, "
"``public-read-write``, ``authenticated-read``, "
"``bucket-owner-read``, ``bucket-owner-full-control`` and "
"``log-delivery-write``. "
'See Canned ACL for details')}
GRANTS = {
    'name': 'grants', 'nargs': '+',
    'help_text': (
        'Grant specific permissions to individual users or groups. You '
        'can supply a list of grants of the form ``--grants '
        'Permission=Grantee_Type=Grantee_ID [Permission=Grantee_Type='
        'Grantee_ID ...]``. To specify the same permission type for '
        'multiple grantees, specify the permission as ``--grants '
        'Permission=Grantee_Type=Grantee_ID,Grantee_Type=Grantee_ID,...``. '
        'Each value contains the following elements: '
        '``Permission`` - Specifies the granted permissions, and can be '
        'set to read, readacl, writeacl, or full. '
        '``Grantee_Type`` - Specifies how the grantee is to be '
        'identified, and can be set to uri, emailaddress, or id. '
        '``Grantee_ID`` - Specifies the grantee based on Grantee_Type. '
        'For more information on Amazon S3 access control, see '
        'Access Control')}
SSE = {
'name': 'sse', 'nargs': '?', 'const': 'AES256',
'choices': ['AES256', 'aws:kms'],
'help_text': (
'Specifies server-side encryption of the object in S3. '
'Valid values are ``AES256`` and ``aws:kms``. If the parameter is '
'specified but no value is provided, ``AES256`` is used.'
)
}
SSE_C = {
'name': 'sse-c', 'nargs': '?', 'const': 'AES256', 'choices': ['AES256'],
'help_text': (
        'Specifies server-side encryption using customer-provided keys '
        'of the object in S3. ``AES256`` is the only valid value. '
        'If the parameter is specified but no value is provided, '
        '``AES256`` is used. If you provide this value, ``--sse-c-key`` '
        'must be specified as well.'
)
}
SSE_C_KEY = {
'name': 'sse-c-key',
'help_text': (
'The customer-provided encryption key to use to server-side '
'encrypt the object in S3. If you provide this value, '
        '``--sse-c`` must be specified as well.'
)
}
SSE_KMS_KEY_ID = {
'name': 'sse-kms-key-id',
'help_text': (
'The AWS KMS key ID that should be used to server-side '
'encrypt the object in S3. Note that you should only '
        'provide this parameter if the KMS key ID is different from '
        'the default S3 master KMS key.'
)
}
SSE_C_COPY_SOURCE = {
'name': 'sse-c-copy-source', 'nargs': '?',
'const': 'AES256', 'choices': ['AES256'],
'help_text': (
'This parameter should only be specified when copying an S3 object '
'that was encrypted server-side with a customer-provided '
'key. It specifies the algorithm to use when decrypting the source '
'object. ``AES256`` is the only valid '
'value. If the parameter is specified but no value is provided, '
'``AES256`` is used. If you provide this value, '
        '``--sse-c-copy-source-key`` must be specified as well. '
)
}
SSE_C_COPY_SOURCE_KEY = {
'name': 'sse-c-copy-source-key',
'help_text': (
'This parameter should only be specified when copying an S3 object '
'that was encrypted server-side with a customer-provided '
'key. Specifies the customer-provided encryption key for Amazon S3 '
'to use to decrypt the source object. The encryption key provided '
'must be one that was used when the source object was created. '
        'If you provide this value, ``--sse-c-copy-source`` must be '
        'specified as well.'
)
}
STORAGE_CLASS = {'name': 'storage-class',
'choices': ['STANDARD', 'REDUCED_REDUNDANCY', 'STANDARD_IA'],
'help_text': (
"The type of storage to use for the object. "
"Valid choices are: STANDARD | REDUCED_REDUNDANCY "
"| STANDARD_IA. "
"Defaults to 'STANDARD'")}
WEBSITE_REDIRECT = {'name': 'website-redirect',
'help_text': (
"If the bucket is configured as a website, "
"redirects requests for this object to another object "
"in the same bucket or to an external URL. Amazon S3 "
"stores the value of this header in the object "
"metadata.")}
CACHE_CONTROL = {'name': 'cache-control',
'help_text': (
"Specifies caching behavior along the "
"request/reply chain.")}
CONTENT_DISPOSITION = {'name': 'content-disposition',
'help_text': (
"Specifies presentational information "
"for the object.")}
CONTENT_ENCODING = {'name': 'content-encoding',
'help_text': (
"Specifies what content encodings have been "
"applied to the object and thus what decoding "
"mechanisms must be applied to obtain the media-type "
"referenced by the Content-Type header field.")}
CONTENT_LANGUAGE = {'name': 'content-language',
'help_text': ("The language the content is in.")}
SOURCE_REGION = {'name': 'source-region',
'help_text': (
"When transferring objects from an s3 bucket to an s3 "
"bucket, this specifies the region of the source bucket."
" Note the region specified by ``--region`` or through "
"configuration of the CLI refers to the region of the "
"destination bucket. If ``--source-region`` is not "
"specified the region of the source will be the same "
"as the region of the destination bucket.")}
EXPIRES = {
'name': 'expires',
'help_text': (
"The date and time at which the object is no longer cacheable.")
}
METADATA = {
'name': 'metadata', 'cli_type_name': 'map',
'schema': {
'type': 'map',
'key': {'type': 'string'},
'value': {'type': 'string'}
},
'help_text': (
"A map of metadata to store with the objects in S3. This will be "
"applied to every object which is part of this request. In a sync, this"
"means that files which haven't changed won't receive the new metadata."
" When copying between two s3 locations, the metadata-directive "
"argument will default to 'REPLACE' unless otherwise specified."
)
}
METADATA_DIRECTIVE = {
'name': 'metadata-directive', 'choices': ['COPY', 'REPLACE'],
'help_text': (
'Specifies whether the metadata is copied from the source object '
'or replaced with metadata provided when copying S3 objects. '
'Note that if the object is copied over in parts, the source '
'object\'s metadata will not be copied over, no matter the value for '
'``--metadata-directive``, and instead the desired metadata values '
'must be specified as parameters on the command line. '
'Valid values are ``COPY`` and ``REPLACE``. If this parameter is not '
'specified, ``COPY`` will be used by default. If ``REPLACE`` is used, '
'the copied object will only have the metadata values that were'
' specified by the CLI command. Note that if you are '
'using any of the following parameters: ``--content-type``, '
        '``--content-language``, ``--content-encoding``, '
        '``--content-disposition``, ``--cache-control``, or ``--expires``, you '
'will need to specify ``--metadata-directive REPLACE`` for '
'non-multipart copies if you want the copied objects to have the '
'specified metadata values.')
}
INDEX_DOCUMENT = {'name': 'index-document',
'help_text': (
'A suffix that is appended to a request that is for '
'a directory on the website endpoint (e.g. if the '
'suffix is index.html and you make a request to '
'samplebucket/images/ the data that is returned '
'will be for the object with the key name '
                      'images/index.html). The suffix must not be empty and '
'must not include a slash character.')}
ERROR_DOCUMENT = {'name': 'error-document',
'help_text': (
'The object key name to use when '
'a 4XX class error occurs.')}
ONLY_SHOW_ERRORS = {'name': 'only-show-errors', 'action': 'store_true',
'help_text': (
'Only errors and warnings are displayed. All other '
'output is suppressed.')}
EXPECTED_SIZE = {'name': 'expected-size',
'help_text': (
'This argument specifies the expected size of a stream '
'in terms of bytes. Note that this argument is needed '
'only when a stream is being uploaded to s3 and the size '
'is larger than 5GB. Failure to include this argument '
                     'under these conditions may result in a failed upload '
                     'due to too many parts in the upload.')}
PAGE_SIZE = {'name': 'page-size', 'cli_type_name': 'integer',
'help_text': (
'The number of results to return in each response to a list '
'operation. The default value is 1000 (the maximum allowed). '
'Using a lower value may help if an operation times out.')}
IGNORE_GLACIER_WARNINGS = {
'name': 'ignore-glacier-warnings', 'action': 'store_true',
'help_text': (
        'Turns off glacier warnings. Warnings about operations that cannot '
        'be performed because they involve copying, downloading, or moving '
'a glacier object will no longer be printed to standard error and '
'will no longer cause the return code of the command to be ``2``.'
)
}
TRANSFER_ARGS = [DRYRUN, QUIET, INCLUDE, EXCLUDE, ACL,
FOLLOW_SYMLINKS, NO_FOLLOW_SYMLINKS, NO_GUESS_MIME_TYPE,
SSE, SSE_C, SSE_C_KEY, SSE_KMS_KEY_ID, SSE_C_COPY_SOURCE,
SSE_C_COPY_SOURCE_KEY, STORAGE_CLASS, GRANTS,
WEBSITE_REDIRECT, CONTENT_TYPE, CACHE_CONTROL,
CONTENT_DISPOSITION, CONTENT_ENCODING, CONTENT_LANGUAGE,
EXPIRES, SOURCE_REGION, ONLY_SHOW_ERRORS,
PAGE_SIZE, IGNORE_GLACIER_WARNINGS]
def get_client(session, region, endpoint_url, verify, config=None):
return session.create_client('s3', region_name=region,
endpoint_url=endpoint_url, verify=verify,
config=config)
class S3Command(BasicCommand):
def _run_main(self, parsed_args, parsed_globals):
self.client = get_client(self._session, parsed_globals.region,
parsed_globals.endpoint_url,
parsed_globals.verify_ssl)
class ListCommand(S3Command):
NAME = 'ls'
DESCRIPTION = ("List S3 objects and common prefixes under a prefix or "
"all S3 buckets. Note that the --output argument "
"is ignored for this command.")
USAGE = " or NONE"
ARG_TABLE = [{'name': 'paths', 'nargs': '?', 'default': 's3://',
'positional_arg': True, 'synopsis': USAGE}, RECURSIVE,
PAGE_SIZE, HUMAN_READABLE, SUMMARIZE]
def _run_main(self, parsed_args, parsed_globals):
super(ListCommand, self)._run_main(parsed_args, parsed_globals)
self._empty_result = False
self._at_first_page = True
self._size_accumulator = 0
self._total_objects = 0
self._human_readable = parsed_args.human_readable
path = parsed_args.paths
if path.startswith('s3://'):
path = path[5:]
bucket, key = find_bucket_key(path)
if not bucket:
self._list_all_buckets()
elif parsed_args.dir_op:
# Then --recursive was specified.
self._list_all_objects_recursive(bucket, key,
parsed_args.page_size)
else:
self._list_all_objects(bucket, key, parsed_args.page_size)
if parsed_args.summarize:
self._print_summary()
if key:
# User specified a key to look for. We should return an rc of one
# if there are no matching keys and/or prefixes or return an rc
# of zero if there are matching keys or prefixes.
return self._check_no_objects()
else:
            # This covers the case when the user is trying to list all of
            # the buckets or is trying to list the objects of a bucket
# (without specifying a key). For both situations, a rc of 0
# should be returned because applicable errors are supplied by
# the server (i.e. bucket not existing). These errors will be
# thrown before reaching the automatic return of rc of zero.
return 0
def _list_all_objects(self, bucket, key, page_size=None):
paginator = self.client.get_paginator('list_objects')
iterator = paginator.paginate(Bucket=bucket,
Prefix=key, Delimiter='/',
PaginationConfig={'PageSize': page_size})
for response_data in iterator:
self._display_page(response_data)
def _display_page(self, response_data, use_basename=True):
common_prefixes = response_data.get('CommonPrefixes', [])
contents = response_data.get('Contents', [])
if not contents and not common_prefixes:
self._empty_result = True
return
for common_prefix in common_prefixes:
prefix_components = common_prefix['Prefix'].split('/')
prefix = prefix_components[-2]
pre_string = "PRE".rjust(30, " ")
print_str = pre_string + ' ' + prefix + '/\n'
uni_print(print_str)
for content in contents:
last_mod_str = self._make_last_mod_str(content['LastModified'])
self._size_accumulator += int(content['Size'])
self._total_objects += 1
size_str = self._make_size_str(content['Size'])
if use_basename:
filename_components = content['Key'].split('/')
filename = filename_components[-1]
else:
filename = content['Key']
print_str = last_mod_str + ' ' + size_str + ' ' + \
filename + '\n'
uni_print(print_str)
self._at_first_page = False
def _list_all_buckets(self):
response_data = self.client.list_buckets()
buckets = response_data['Buckets']
for bucket in buckets:
last_mod_str = self._make_last_mod_str(bucket['CreationDate'])
print_str = last_mod_str + ' ' + bucket['Name'] + '\n'
uni_print(print_str)
def _list_all_objects_recursive(self, bucket, key, page_size=None):
paginator = self.client.get_paginator('list_objects')
iterator = paginator.paginate(Bucket=bucket,
Prefix=key,
PaginationConfig={'PageSize': page_size})
for response_data in iterator:
self._display_page(response_data, use_basename=False)
def _check_no_objects(self):
if self._empty_result and self._at_first_page:
# Nothing was returned in the first page of results when listing
# the objects.
return 1
return 0
def _make_last_mod_str(self, last_mod):
"""
This function creates the last modified time string whenever objects
or buckets are being listed
"""
last_mod = parse(last_mod)
last_mod = last_mod.astimezone(tzlocal())
last_mod_tup = (str(last_mod.year), str(last_mod.month).zfill(2),
str(last_mod.day).zfill(2),
str(last_mod.hour).zfill(2),
str(last_mod.minute).zfill(2),
str(last_mod.second).zfill(2))
last_mod_str = "%s-%s-%s %s:%s:%s" % last_mod_tup
return last_mod_str.ljust(19, ' ')
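For a fixed datetime, the zfill-based tuple formatting above is equivalent to a single strftime call; a quick stdlib check (naive datetime, no timezone conversion, sample date is illustrative):

```python
from datetime import datetime

last_mod = datetime(2016, 2, 3, 9, 5, 7)
last_mod_tup = (str(last_mod.year), str(last_mod.month).zfill(2),
                str(last_mod.day).zfill(2), str(last_mod.hour).zfill(2),
                str(last_mod.minute).zfill(2), str(last_mod.second).zfill(2))
manual = ("%s-%s-%s %s:%s:%s" % last_mod_tup).ljust(19, ' ')
via_strftime = last_mod.strftime('%Y-%m-%d %H:%M:%S').ljust(19, ' ')
```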
def _make_size_str(self, size):
"""
This function creates the size string when objects are being listed.
"""
if self._human_readable:
size_str = human_readable_size(size)
else:
size_str = str(size)
return size_str.rjust(10, ' ')
def _print_summary(self):
"""
This function prints a summary of total objects and total bytes
"""
print_str = str(self._total_objects)
uni_print("\nTotal Objects: ".rjust(15, ' ') + print_str + "\n")
if self._human_readable:
print_str = human_readable_size(self._size_accumulator)
else:
print_str = str(self._size_accumulator)
uni_print("Total Size: ".rjust(15, ' ') + print_str + "\n")
class WebsiteCommand(S3Command):
NAME = 'website'
DESCRIPTION = 'Set the website configuration for a bucket.'
    USAGE = '<S3Uri>'
ARG_TABLE = [{'name': 'paths', 'nargs': 1, 'positional_arg': True,
'synopsis': USAGE}, INDEX_DOCUMENT, ERROR_DOCUMENT]
def _run_main(self, parsed_args, parsed_globals):
super(WebsiteCommand, self)._run_main(parsed_args, parsed_globals)
bucket = self._get_bucket_name(parsed_args.paths[0])
website_configuration = self._build_website_configuration(parsed_args)
self.client.put_bucket_website(
Bucket=bucket, WebsiteConfiguration=website_configuration)
return 0
def _build_website_configuration(self, parsed_args):
website_config = {}
if parsed_args.index_document is not None:
website_config['IndexDocument'] = \
{'Suffix': parsed_args.index_document}
if parsed_args.error_document is not None:
website_config['ErrorDocument'] = \
{'Key': parsed_args.error_document}
return website_config
def _get_bucket_name(self, path):
# We support either:
# s3://bucketname
# bucketname
#
# We also strip off the trailing slash if a user
        # accidentally appends a slash.
if path.startswith('s3://'):
path = path[5:]
if path.endswith('/'):
path = path[:-1]
return path
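The normalization above is easy to check in isolation; a standalone copy of the same logic:

```python
def get_bucket_name(path):
    # Accept both 's3://bucketname' and bare 'bucketname', and
    # tolerate a single accidental trailing slash.
    if path.startswith('s3://'):
        path = path[5:]
    if path.endswith('/'):
        path = path[:-1]
    return path
```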
class S3TransferCommand(S3Command):
def _run_main(self, parsed_args, parsed_globals):
super(S3TransferCommand, self)._run_main(parsed_args, parsed_globals)
self._convert_path_args(parsed_args)
params = self._build_call_parameters(parsed_args, {})
cmd_params = CommandParameters(self.NAME, params,
self.USAGE)
cmd_params.add_region(parsed_globals)
cmd_params.add_endpoint_url(parsed_globals)
cmd_params.add_verify_ssl(parsed_globals)
cmd_params.add_page_size(parsed_args)
cmd_params.add_paths(parsed_args.paths)
self._handle_rm_force(parsed_globals, cmd_params.parameters)
runtime_config = transferconfig.RuntimeConfig().build_config(
**self._session.get_scoped_config().get('s3', {}))
cmd = CommandArchitecture(self._session, self.NAME,
cmd_params.parameters,
runtime_config)
cmd.set_clients()
cmd.create_instructions()
return cmd.run()
def _build_call_parameters(self, args, command_params):
"""
        This takes all of the arguments in the namespace and puts them
        into a dictionary
"""
for name, value in vars(args).items():
command_params[name] = value
return command_params
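`vars()` on an argparse namespace returns its attribute dictionary, which is what the loop above copies into `command_params`. A small demonstration with hypothetical arguments (not the real s3 arg table):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--dryrun', action='store_true')
parser.add_argument('paths', nargs='*')
args = parser.parse_args(['--dryrun', 's3://bucket/key'])

command_params = {}
for name, value in vars(args).items():
    command_params[name] = value
```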
def _convert_path_args(self, parsed_args):
if not isinstance(parsed_args.paths, list):
parsed_args.paths = [parsed_args.paths]
for i in range(len(parsed_args.paths)):
path = parsed_args.paths[i]
if isinstance(path, six.binary_type):
dec_path = path.decode(sys.getfilesystemencoding())
enc_path = dec_path.encode('utf-8')
new_path = enc_path.decode('utf-8')
parsed_args.paths[i] = new_path
def _handle_rm_force(self, parsed_globals, parameters):
"""
        This function recursively deletes objects in a bucket if the force
        parameter was supplied when using the remove bucket command. It will
refuse to delete if a key is specified in the s3path.
"""
# XXX: This shouldn't really be here. This was originally moved from
# the CommandParameters class to here, but this is still not the ideal
# place for this code. This should be moved
# to either the CommandArchitecture class, or the RbCommand class where
# the actual operations against S3 are performed. This may require
# some refactoring though to move this to either of those classes.
# For now, moving this out of CommandParameters allows for that class
# to be kept simple.
if 'force' in parameters:
if parameters['force']:
bucket, key = find_bucket_key(parameters['src'][5:])
if key:
raise ValueError('Please specify a valid bucket name only.'
' E.g. s3://%s' % bucket)
path = 's3://' + bucket
del_objects = RmCommand(self._session)
del_objects([path, '--recursive'], parsed_globals)
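The bucket/key split that gates the force delete can be sketched without the real `find_bucket_key` helper; here `str.partition` approximates it (an assumption for illustration, not awscli's implementation):

```python
def validate_force_path(src):
    # src is expected to look like 's3://bucket'; any key component
    # means the user pointed at an object, which rb --force refuses.
    bucket, _, key = src[5:].partition('/')
    if key:
        raise ValueError('Please specify a valid bucket name only.'
                         ' E.g. s3://%s' % bucket)
    return 's3://' + bucket
```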
class CpCommand(S3TransferCommand):
NAME = 'cp'
DESCRIPTION = "Copies a local file or S3 object to another location " \
"locally or in S3."
USAGE = " or " \
"or "
ARG_TABLE = [{'name': 'paths', 'nargs': 2, 'positional_arg': True,
'synopsis': USAGE}] + TRANSFER_ARGS + \
[METADATA, METADATA_DIRECTIVE, EXPECTED_SIZE, RECURSIVE]
class MvCommand(S3TransferCommand):
NAME = 'mv'
DESCRIPTION = "Moves a local file or S3 object to " \
"another location locally or in S3."
USAGE = " or " \
"or "
ARG_TABLE = [{'name': 'paths', 'nargs': 2, 'positional_arg': True,
'synopsis': USAGE}] + TRANSFER_ARGS +\
[METADATA, METADATA_DIRECTIVE, RECURSIVE]
class RmCommand(S3TransferCommand):
NAME = 'rm'
DESCRIPTION = "Deletes an S3 object."
USAGE = ""
ARG_TABLE = [{'name': 'paths', 'nargs': 1, 'positional_arg': True,
'synopsis': USAGE}, DRYRUN, QUIET, RECURSIVE, INCLUDE,
EXCLUDE, ONLY_SHOW_ERRORS, PAGE_SIZE]
class SyncCommand(S3TransferCommand):
NAME = 'sync'
DESCRIPTION = "Syncs directories and S3 prefixes. Recursively copies " \
"new and updated files from the source directory to " \
"the destination. Only creates folders in the destination " \
"if they contain one or more files."
USAGE = " or " \
" or "
ARG_TABLE = [{'name': 'paths', 'nargs': 2, 'positional_arg': True,
'synopsis': USAGE}] + TRANSFER_ARGS + \
[METADATA, METADATA_DIRECTIVE]
class MbCommand(S3TransferCommand):
NAME = 'mb'
DESCRIPTION = "Creates an S3 bucket."
USAGE = ""
ARG_TABLE = [{'name': 'paths', 'nargs': 1, 'positional_arg': True,
'synopsis': USAGE}]
class RbCommand(S3TransferCommand):
NAME = 'rb'
DESCRIPTION = (
"Deletes an empty S3 bucket. A bucket must be completely empty "
"of objects and versioned objects before it can be deleted. "
"However, the ``--force`` parameter can be used to delete "
"the non-versioned objects in the bucket before the bucket is "
"deleted."
)
USAGE = ""
ARG_TABLE = [{'name': 'paths', 'nargs': 1, 'positional_arg': True,
'synopsis': USAGE}, FORCE]
class CommandArchitecture(object):
"""
This class drives the actual command. A command is performed in two
steps. First a list of instructions is generated. This list of
instructions identifies which type of components are required based on the
name of the command and the parameters passed to the command line. After
    the instructions are generated, the second step uses the
    list of instructions to wire together an assortment of generators to
perform the command.
"""
def __init__(self, session, cmd, parameters, runtime_config=None):
self.session = session
self.cmd = cmd
self.parameters = parameters
self.instructions = []
self._runtime_config = runtime_config
self._endpoint = None
self._source_endpoint = None
self._client = None
self._source_client = None
def set_clients(self):
client_config = None
if self.parameters.get('sse') == 'aws:kms':
client_config = Config(signature_version='s3v4')
self._client = get_client(
self.session,
region=self.parameters['region'],
endpoint_url=self.parameters['endpoint_url'],
verify=self.parameters['verify_ssl'],
config=client_config
)
self._source_client = get_client(
self.session,
region=self.parameters['region'],
endpoint_url=self.parameters['endpoint_url'],
verify=self.parameters['verify_ssl'],
config=client_config
)
if self.parameters['source_region']:
if self.parameters['paths_type'] == 's3s3':
self._source_client = get_client(
self.session,
region=self.parameters['source_region'],
endpoint_url=None,
verify=self.parameters['verify_ssl'],
config=client_config
)
def create_instructions(self):
"""
This function creates the instructions based on the command name and
extra parameters. Note that all commands must have an s3_handler
instruction in the instructions and must be at the end of the
instruction list because it sends the request to S3 and does not
yield anything.
"""
if self.needs_filegenerator():
self.instructions.append('file_generator')
if self.parameters.get('filters'):
self.instructions.append('filters')
if self.cmd == 'sync':
self.instructions.append('comparator')
self.instructions.append('file_info_builder')
self.instructions.append('s3_handler')
def needs_filegenerator(self):
if self.cmd in ['mb', 'rb'] or self.parameters['is_stream']:
return False
else:
return True
def choose_sync_strategies(self):
"""Determines the sync strategy for the command.
        It defaults to the default sync strategies, but a customizable sync
        strategy can override the default strategy if it returns an instance
        of itself when the event is emitted.
"""
sync_strategies = {}
# Set the default strategies.
sync_strategies['file_at_src_and_dest_sync_strategy'] = \
SizeAndLastModifiedSync()
sync_strategies['file_not_at_dest_sync_strategy'] = MissingFileSync()
sync_strategies['file_not_at_src_sync_strategy'] = NeverSync()
        # Determine what strategies to override, if any.
responses = self.session.emit(
'choosing-s3-sync-strategy', params=self.parameters)
if responses is not None:
for response in responses:
override_sync_strategy = response[1]
if override_sync_strategy is not None:
sync_type = override_sync_strategy.sync_type
sync_type += '_sync_strategy'
sync_strategies[sync_type] = override_sync_strategy
return sync_strategies
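The override handshake above expects `session.emit` to return `(handler, response)` pairs. A toy emitter result shows a plugin response replacing a default (`FakeStrategy` and the string placeholder are illustrative, not the real sync classes):

```python
class FakeStrategy(object):
    sync_type = 'file_at_src_and_dest'

def choose(responses):
    # Seed the default, then let non-None responses override it.
    strategies = {'file_at_src_and_dest_sync_strategy': 'default'}
    if responses is not None:
        for response in responses:
            override_sync_strategy = response[1]
            if override_sync_strategy is not None:
                sync_type = override_sync_strategy.sync_type + '_sync_strategy'
                strategies[sync_type] = override_sync_strategy
    return strategies

plugin_strategy = FakeStrategy()
chosen = choose([(None, None), (None, plugin_strategy)])
```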
def run(self):
"""
This function wires together all of the generators and completes
the command. First a dictionary is created that is indexed first by
the command name. Then using the instruction, another dictionary
can be indexed to obtain the objects corresponding to the
particular instruction for that command. To begin the wiring,
either a ``FileFormat`` or ``TaskInfo`` object, depending on the
command, is put into a list. Then the function enters a while loop
that pops off an instruction. It then determines the object needed
and calls the call function of the object using the list as the input.
Depending on the number of objects in the input list and the number
of components in the list corresponding to the instruction, the call
method of the component can be called two different ways. If the
number of inputs is equal to the number of components a 1:1 mapping of
inputs to components is used when calling the call function. If the
there are more inputs than components, then a 2:1 mapping of inputs to
components is used where the component call method takes two inputs
instead of one. Whatever files are yielded from the call function
is appended to a list and used as the input for the next repetition
of the while loop until there are no more instructions.
"""
src = self.parameters['src']
dest = self.parameters['dest']
paths_type = self.parameters['paths_type']
files = FileFormat().format(src, dest, self.parameters)
rev_files = FileFormat().format(dest, src, self.parameters)
cmd_translation = {}
cmd_translation['locals3'] = {'cp': 'upload', 'sync': 'upload',
'mv': 'move'}
cmd_translation['s3s3'] = {'cp': 'copy', 'sync': 'copy', 'mv': 'move'}
cmd_translation['s3local'] = {'cp': 'download', 'sync': 'download',
'mv': 'move'}
cmd_translation['s3'] = {
'rm': 'delete',
'mb': 'make_bucket',
'rb': 'remove_bucket'
}
result_queue = queue.Queue()
operation_name = cmd_translation[paths_type][self.cmd]
fgen_kwargs = {
'client': self._source_client, 'operation_name': operation_name,
'follow_symlinks': self.parameters['follow_symlinks'],
'page_size': self.parameters['page_size'],
'result_queue': result_queue
}
rgen_kwargs = {
'client': self._client, 'operation_name': '',
'follow_symlinks': self.parameters['follow_symlinks'],
'page_size': self.parameters['page_size'],
'result_queue': result_queue
}
fgen_request_parameters = {}
fgen_head_object_params = {}
fgen_request_parameters['HeadObject'] = fgen_head_object_params
fgen_kwargs['request_parameters'] = fgen_request_parameters
        # SSE-C may be needed for HeadObject for copies/downloads/deletes.
# If the operation is s3 to s3, the FileGenerator should use the
# copy source key and algorithm. Otherwise, use the regular
# SSE-C key and algorithm. Note the reverse FileGenerator does
# not need any of these because it is used only for sync operations
# which only use ListObjects which does not require HeadObject.
RequestParamsMapper.map_head_object_params(
fgen_head_object_params, self.parameters)
if paths_type == 's3s3':
RequestParamsMapper.map_head_object_params(
fgen_head_object_params, {
'sse_c': self.parameters.get('sse_c_copy_source'),
'sse_c_key': self.parameters.get('sse_c_copy_source_key')
}
)
file_generator = FileGenerator(**fgen_kwargs)
rev_generator = FileGenerator(**rgen_kwargs)
taskinfo = [TaskInfo(src=files['src']['path'],
src_type='s3',
operation_name=operation_name,
client=self._client)]
stream_dest_path, stream_compare_key = find_dest_path_comp_key(files)
stream_file_info = [FileInfo(src=files['src']['path'],
dest=stream_dest_path,
compare_key=stream_compare_key,
src_type=files['src']['type'],
dest_type=files['dest']['type'],
operation_name=operation_name,
client=self._client,
is_stream=True)]
file_info_builder = FileInfoBuilder(
self._client, self._source_client, self.parameters)
s3handler = S3Handler(self.session, self.parameters,
runtime_config=self._runtime_config,
result_queue=result_queue)
s3_stream_handler = S3StreamHandler(self.session, self.parameters,
result_queue=result_queue)
sync_strategies = self.choose_sync_strategies()
command_dict = {}
if self.cmd == 'sync':
command_dict = {'setup': [files, rev_files],
'file_generator': [file_generator,
rev_generator],
'filters': [create_filter(self.parameters),
create_filter(self.parameters)],
'comparator': [Comparator(**sync_strategies)],
'file_info_builder': [file_info_builder],
's3_handler': [s3handler]}
elif self.cmd == 'cp' and self.parameters['is_stream']:
command_dict = {'setup': [stream_file_info],
's3_handler': [s3_stream_handler]}
elif self.cmd == 'cp':
command_dict = {'setup': [files],
'file_generator': [file_generator],
'filters': [create_filter(self.parameters)],
'file_info_builder': [file_info_builder],
's3_handler': [s3handler]}
elif self.cmd == 'rm':
command_dict = {'setup': [files],
'file_generator': [file_generator],
'filters': [create_filter(self.parameters)],
'file_info_builder': [file_info_builder],
's3_handler': [s3handler]}
elif self.cmd == 'mv':
command_dict = {'setup': [files],
'file_generator': [file_generator],
'filters': [create_filter(self.parameters)],
'file_info_builder': [file_info_builder],
's3_handler': [s3handler]}
elif self.cmd == 'mb':
command_dict = {'setup': [taskinfo],
's3_handler': [s3handler]}
elif self.cmd == 'rb':
command_dict = {'setup': [taskinfo],
's3_handler': [s3handler]}
files = command_dict['setup']
while self.instructions:
instruction = self.instructions.pop(0)
file_list = []
components = command_dict[instruction]
for i in range(len(components)):
if len(files) > len(components):
file_list.append(components[i].call(*files))
else:
file_list.append(components[i].call(files[i]))
files = file_list
# This is kinda quirky, but each call through the instructions
# replaces the files attr with the return value of the
# file_list. The very last call is a single list of
# [s3_handler], and the s3_handler returns the number of
# tasks failed and the number of tasks warned.
# This means that files[0] now contains a namedtuple with
# the number of failed tasks and the number of warned tasks.
# In terms of the RC, we're keeping it simple and saying
# that > 0 failed tasks will give a 1 RC and > 0 warned
# tasks will give a 2 RC. Otherwise a RC of zero is returned.
rc = 0
if files[0].num_tasks_failed > 0:
rc = 1
if files[0].num_tasks_warned > 0:
rc = 2
return rc
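The 1:1 versus 2:1 mapping rules described in the docstring above can be sketched with hypothetical stand-in components (``Doubler`` and ``Merger`` are illustrations only, not awscli classes):

```python
# Hypothetical sketch of the instruction-dispatch loop described above.
# With more inputs than components, each component's call() receives all
# input lists at once (2:1 mapping); otherwise each component is paired
# with exactly one input list (1:1 mapping).

class Doubler(object):
    def call(self, items):
        return [i * 2 for i in items]

class Merger(object):
    def call(self, a, b):
        return list(a) + list(b)

def run_pipeline(instructions, command_dict, files):
    while instructions:
        instruction = instructions.pop(0)
        components = command_dict[instruction]
        file_list = []
        for i in range(len(components)):
            if len(files) > len(components):
                # 2:1 mapping: one component sees every input list.
                file_list.append(components[i].call(*files))
            else:
                # 1:1 mapping: each component gets its own input list.
                file_list.append(components[i].call(files[i]))
        files = file_list
    return files

command_dict = {'setup': [[1, 2], [3, 4]],
                'merge': [Merger()],
                'double': [Doubler()]}
files = command_dict['setup']
result = run_pipeline(['merge', 'double'], command_dict, files)
print(result)  # [[2, 4, 6, 8]]
```

As in the real command, the output of each stage becomes the sole input of the next, and the final stage's return value is what the caller inspects.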
class CommandParameters(object):
"""
This class is used to do some initial error checking based on the
parameters and arguments passed to the command line.
"""
def __init__(self, cmd, parameters, usage):
"""
Stores command name and parameters. Ensures that the ``dir_op`` flag
is true if a certain command is being used.
:param cmd: The name of the command, e.g. "rm".
:param parameters: A dictionary of parameters.
:param usage: A usage string
"""
self.cmd = cmd
self.parameters = parameters
self.usage = usage
if 'dir_op' not in parameters:
self.parameters['dir_op'] = False
if 'follow_symlinks' not in parameters:
self.parameters['follow_symlinks'] = True
if 'source_region' not in parameters:
self.parameters['source_region'] = None
if self.cmd in ['sync', 'mb', 'rb']:
self.parameters['dir_op'] = True
def add_paths(self, paths):
"""
Reformats the parameters dictionary by including a key and
value for the source and the destination. If a destination is
not used, the destination is the same as the source to ensure
the destination always has some value.
"""
self.check_path_type(paths)
self._normalize_s3_trailing_slash(paths)
src_path = paths[0]
self.parameters['src'] = src_path
if len(paths) == 2:
self.parameters['dest'] = paths[1]
elif len(paths) == 1:
self.parameters['dest'] = paths[0]
self._validate_streaming_paths()
self._validate_path_args()
self._validate_sse_c_args()
def _validate_streaming_paths(self):
self.parameters['is_stream'] = False
if self.parameters['src'] == '-' or self.parameters['dest'] == '-':
self.parameters['is_stream'] = True
self.parameters['dir_op'] = False
self.parameters['only_show_errors'] = True
if self.parameters['is_stream'] and self.cmd != 'cp':
raise ValueError("Streaming currently is only compatible with "
"single file cp commands")
def _validate_path_args(self):
# If we're using a mv command, you can't copy the object onto itself.
params = self.parameters
if self.cmd == 'mv' and self._same_path(params['src'], params['dest']):
raise ValueError("Cannot mv a file onto itself: '%s' - '%s'" % (
params['src'], params['dest']))
# If the user provided local path does not exist, hard fail because
# we know that we will not be able to upload the file.
if 'locals3' == params['paths_type'] and not params['is_stream']:
if not os.path.exists(params['src']):
raise RuntimeError(
'The user-provided path %s does not exist.' %
params['src'])
# If the operation is downloading to a directory that does not exist,
# create the directories so no warnings are thrown during the syncing
# process.
elif 's3local' == params['paths_type'] and params['dir_op']:
if not os.path.exists(params['dest']):
os.makedirs(params['dest'])
def _same_path(self, src, dest):
if not self.parameters['paths_type'] == 's3s3':
return False
elif src == dest:
return True
elif dest.endswith('/'):
src_base = os.path.basename(src)
return src == os.path.join(dest, src_base)
def _normalize_s3_trailing_slash(self, paths):
for i, path in enumerate(paths):
if path.startswith('s3://'):
bucket, key = find_bucket_key(path[5:])
if not key and not path.endswith('/'):
# If only a bucket was specified, we need
# to normalize the path and ensure it ends
# with a '/', s3://bucket -> s3://bucket/
path += '/'
paths[i] = path
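The normalization above can be exercised standalone; this sketch re-implements the bucket/key split inline rather than calling the real ``find_bucket_key`` helper:

```python
# Standalone sketch of the trailing-slash normalization above: a bare
# bucket path gains a trailing '/', while paths with a key are untouched.
def normalize_s3_trailing_slash(paths):
    for i, path in enumerate(paths):
        if path.startswith('s3://'):
            # Split "bucket/key" on the first '/'; key is '' for a bare bucket.
            bucket, _, key = path[5:].partition('/')
            if not key and not path.endswith('/'):
                # s3://bucket -> s3://bucket/
                path += '/'
            paths[i] = path
    return paths

print(normalize_s3_trailing_slash(
    ['s3://bucket', 's3://bucket/key', 'local/file']))
# ['s3://bucket/', 's3://bucket/key', 'local/file']
```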
def check_path_type(self, paths):
"""
This initial check ensures that the path types for the specified
command are correct.
"""
template_type = {'s3s3': ['cp', 'sync', 'mv'],
's3local': ['cp', 'sync', 'mv'],
'locals3': ['cp', 'sync', 'mv'],
's3': ['mb', 'rb', 'rm'],
'local': [], 'locallocal': []}
paths_type = ''
usage = "usage: aws s3 %s %s" % (self.cmd,
self.usage)
for i in range(len(paths)):
if paths[i].startswith('s3://'):
paths_type = paths_type + 's3'
else:
paths_type = paths_type + 'local'
if self.cmd in template_type[paths_type]:
self.parameters['paths_type'] = paths_type
else:
raise TypeError("%s\nError: Invalid argument type" % usage)
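The classification above reduces to concatenating a type label per path and looking the result up in the allowed-types table; a self-contained sketch:

```python
# Sketch of check_path_type: classify each path as 's3' or 'local',
# join the labels, and verify the command accepts that combination.
def derive_paths_type(cmd, paths):
    template_type = {'s3s3': ['cp', 'sync', 'mv'],
                     's3local': ['cp', 'sync', 'mv'],
                     'locals3': ['cp', 'sync', 'mv'],
                     's3': ['mb', 'rb', 'rm'],
                     'local': [], 'locallocal': []}
    paths_type = ''.join(
        's3' if p.startswith('s3://') else 'local' for p in paths)
    if cmd not in template_type[paths_type]:
        raise TypeError("Invalid argument type")
    return paths_type

print(derive_paths_type('cp', ['s3://bucket/key', '.']))  # s3local
print(derive_paths_type('rm', ['s3://bucket/key']))       # s3
```

Note that purely local operations ('local', 'locallocal') map to empty lists, so every command is rejected for them.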
def add_region(self, parsed_globals):
self.parameters['region'] = parsed_globals.region
def add_endpoint_url(self, parsed_globals):
"""
Adds endpoint_url to the parameters.
"""
if 'endpoint_url' in parsed_globals:
self.parameters['endpoint_url'] = getattr(parsed_globals,
'endpoint_url')
else:
self.parameters['endpoint_url'] = None
def add_verify_ssl(self, parsed_globals):
self.parameters['verify_ssl'] = parsed_globals.verify_ssl
def add_page_size(self, parsed_args):
self.parameters['page_size'] = getattr(parsed_args, 'page_size', None)
def _validate_sse_c_args(self):
self._validate_sse_c_arg()
self._validate_sse_c_arg('sse_c_copy_source')
self._validate_sse_c_copy_source_for_paths()
def _validate_sse_c_arg(self, sse_c_type='sse_c'):
sse_c_key_type = sse_c_type + '_key'
sse_c_type_param = '--' + sse_c_type.replace('_', '-')
sse_c_key_type_param = '--' + sse_c_key_type.replace('_', '-')
if self.parameters.get(sse_c_type):
if not self.parameters.get(sse_c_key_type):
raise ValueError(
'If %s is specified, %s must be specified '
'as well.' % (sse_c_type_param, sse_c_key_type_param)
)
if self.parameters.get(sse_c_key_type):
if not self.parameters.get(sse_c_type):
raise ValueError(
'If %s is specified, %s must be specified '
'as well.' % (sse_c_key_type_param, sse_c_type_param)
)
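The pairing rule enforced above (an SSE-C algorithm flag requires its key flag, and vice versa) can be condensed to a single symmetric check; a sketch:

```python
# Sketch of the SSE-C pairing rule above: --sse-c and --sse-c-key must
# be given together (likewise for the copy-source variants).
def validate_sse_c_pair(parameters, sse_c_type='sse_c'):
    sse_c_key_type = sse_c_type + '_key'
    has_type = bool(parameters.get(sse_c_type))
    has_key = bool(parameters.get(sse_c_key_type))
    if has_type != has_key:
        raise ValueError('--%s and --%s must be specified together.' % (
            sse_c_type.replace('_', '-'), sse_c_key_type.replace('_', '-')))

validate_sse_c_pair({'sse_c': 'AES256', 'sse_c_key': 'secret'})  # ok
try:
    validate_sse_c_pair({'sse_c': 'AES256'})
except ValueError as e:
    print(e)  # the paired key flag is missing
```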
def _validate_sse_c_copy_source_for_paths(self):
if self.parameters.get('sse_c_copy_source'):
if self.parameters['paths_type'] != 's3s3':
raise ValueError(
'--sse-c-copy-source is only supported for '
'copy operations.'
)
awscli-1.10.1/awscli/customizations/s3/__init__.py 0000666 4542626 0000144 00000001065 12652514124 023145 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
awscli-1.10.1/awscli/customizations/s3/fileformat.py 0000666 4542626 0000144 00000013615 12652514124 023542 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
class FileFormat(object):
def format(self, src, dest, parameters):
"""
This function formats the source and destination
path to the proper form for a file generator.
Note that a file is designated as an s3 file if it begins with s3://
:param src: The path of the source
:type src: string
:param dest: The path of the dest
:type dest: string
:param parameters: A dictionary that will be formed when the arguments
of the command line have been parsed. For this
function the dictionary should have the key 'dir_op'
which is a boolean value that is true when
the operation is being performed on a local directory/
all objects under a common prefix in s3 or false when
it is on a single file/object.
:returns: A dictionary that will be passed to a file generator.
The dictionary contains the keys src, dest, dir_op, and
use_src_name. src is a dictionary containing the source path
and whether its located locally or in s3. dest is a dictionary
containing the destination path and whether its located
locally or in s3.
"""
src_type, src_path = self.identify_type(src)
dest_type, dest_path = self.identify_type(dest)
format_table = {'s3': self.s3_format, 'local': self.local_format}
# :var dir_op: True when the operation being performed is on a
# directory/objects under a common prefix or false when it
# is a single file
dir_op = parameters['dir_op']
src_path = format_table[src_type](src_path, dir_op)[0]
# :var use_src_name: True when the destination file/object will take on
# the name of the source file/object. False when it
# will take on the name the user specified in the
# command line.
dest_path, use_src_name = format_table[dest_type](dest_path, dir_op)
files = {'src': {'path': src_path, 'type': src_type},
'dest': {'path': dest_path, 'type': dest_type},
'dir_op': dir_op, 'use_src_name': use_src_name}
return files
def local_format(self, path, dir_op):
"""
This function formats the path of local files and returns whether the
destination will keep its own name or take the source's name along with
the edited path.
Formatting Rules:
1) If a destination file is taking on a source name, it must end
with the appropriate operating system separator
General Options:
1) If the operation is on a directory, the destination file will
always use the name of the corresponding source file.
2) If the path of the destination exists and is a directory it
will always use the name of the source file.
3) If the destination path ends with the appropriate operating
system separator but is not an existing directory, the
appropriate directories will be made and the file will use the
source's name.
4) If the destination path does not end with the appropriate
operating system separator and is not an existing directory, the
appropriate directories will be created and the file name will
be of the one provided.
"""
full_path = os.path.abspath(path)
if (os.path.exists(full_path) and os.path.isdir(full_path)) or dir_op:
full_path += os.sep
return full_path, True
else:
if path.endswith(os.sep):
full_path += os.sep
return full_path, True
else:
return full_path, False
def s3_format(self, path, dir_op):
"""
This function formats the path of source files and returns whether the
destination will keep its own name or take the source's name along
with the edited path.
Formatting Rules:
1) If a destination file is taking on a source name, it must end
with a forward slash.
General Options:
1) If the operation is on objects under a common prefix,
the destination file will always use the name of the
corresponding source file.
2) If the path ends with a forward slash, the appropriate prefixes
will be formed and will use the name of the source.
3) If the path does not end with a forward slash, the appropriate
prefix will be formed but use the name provided as opposed
to the source name.
"""
if dir_op:
if not path.endswith('/'):
path += '/'
return path, True
else:
if not path.endswith('/'):
return path, False
else:
return path, True
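The rules in the docstring above boil down to two signals, ``dir_op`` and the trailing slash; a self-contained sketch of the same decision table:

```python
# Self-contained sketch of s3_format above: returns the (possibly
# adjusted) key path and whether the destination should take the
# source's name.
def s3_format(path, dir_op):
    if dir_op:
        # Directory-style operations always use the source name.
        if not path.endswith('/'):
            path += '/'
        return path, True
    # Single-object operations: a trailing slash means "use source name".
    return (path, True) if path.endswith('/') else (path, False)

print(s3_format('bucket/prefix', dir_op=True))    # ('bucket/prefix/', True)
print(s3_format('bucket/key.txt', dir_op=False))  # ('bucket/key.txt', False)
print(s3_format('bucket/prefix/', dir_op=False))  # ('bucket/prefix/', True)
```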
def identify_type(self, path):
"""
It identifies whether the path is from local or s3. Returns the
adjusted pathname and a string stating whether the file is from local
or s3. If from s3 it strips off the s3:// from the beginning of the
path
"""
if path.startswith('s3://'):
return 's3', path[5:]
else:
return 'local', path
awscli-1.10.1/awscli/customizations/s3/syncstrategy/ 0000777 4542626 0000144 00000000000 12652514126 023573 5 ustar pysdk-ci amazon 0000000 0000000 awscli-1.10.1/awscli/customizations/s3/syncstrategy/register.py 0000666 4542626 0000144 00000003721 12652514124 025772 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.s3.syncstrategy.sizeonly import SizeOnlySync
from awscli.customizations.s3.syncstrategy.exacttimestamps import \
ExactTimestampsSync
from awscli.customizations.s3.syncstrategy.delete import DeleteSync
def register_sync_strategy(session, strategy_cls,
sync_type='file_at_src_and_dest'):
"""Registers a single sync strategy
:param session: The session that the sync strategy is being registered to.
:param strategy_cls: The class of the sync strategy to be registered.
:param sync_type: A string representing when to perform the sync strategy.
See ``__init__`` method of ``BaseSyncStrategy`` for possible options.
"""
strategy = strategy_cls(sync_type)
strategy.register_strategy(session)
def register_sync_strategies(command_table, session, **kwargs):
"""Registers the different sync strategies.
To register a sync strategy add
``register_sync_strategy(session, YourSyncStrategyClass, sync_type)``
to the list of registered strategies in this function.
"""
# Register the size only sync strategy.
register_sync_strategy(session, SizeOnlySync)
# Register the exact timestamps sync strategy.
register_sync_strategy(session, ExactTimestampsSync)
# Register the delete sync strategy.
register_sync_strategy(session, DeleteSync, 'file_not_at_src')
# Register additional sync strategies here...
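The registration above hinges on two session events. A minimal sketch of that wiring, using a hypothetical ``MiniSession`` event registry and a made-up ``NewerOnlySync`` strategy (neither is part of awscli or botocore):

```python
# Minimal sketch of the event wiring used by register_strategy: handlers
# subscribe to named events, and emit() collects their return values.
class MiniSession(object):
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, **kwargs):
        return [h(**kwargs) for h in self._handlers.get(event, [])]

class NewerOnlySync(object):
    # Hypothetical strategy: contributes one CLI flag, and elects itself
    # only when that flag was passed.
    def add_sync_argument(self, arg_table, **kwargs):
        arg_table.append({'name': 'newer-only', 'action': 'store_true'})

    def use_sync_strategy(self, params, **kwargs):
        return self if params.get('newer_only') else None

session = MiniSession()
strategy = NewerOnlySync()
session.register('building-arg-table.sync', strategy.add_sync_argument)
session.register('choosing-s3-sync-strategy', strategy.use_sync_strategy)

arg_table = []
session.emit('building-arg-table.sync', arg_table=arg_table)
chosen = session.emit('choosing-s3-sync-strategy',
                      params={'newer_only': True})
print(arg_table[0]['name'], chosen[0] is strategy)  # newer-only True
```

In the real CLI, ``BaseSync.register_strategy`` performs the two ``session.register`` calls shown here.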
awscli-1.10.1/awscli/customizations/s3/syncstrategy/delete.py 0000666 4542626 0000144 00000002320 12652514124 025402 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.s3.syncstrategy.base import BaseSync
LOG = logging.getLogger(__name__)
DELETE = {'name': 'delete', 'action': 'store_true',
'help_text': (
"Files that exist in the destination but not in the source are "
"deleted during sync.")}
class DeleteSync(BaseSync):
ARGUMENT = DELETE
def determine_should_sync(self, src_file, dest_file):
dest_file.operation_name = 'delete'
LOG.debug("syncing: (None) -> %s (remove), file does not "
"exist at source (%s) and delete mode enabled",
dest_file.src, dest_file.dest)
return True
awscli-1.10.1/awscli/customizations/s3/syncstrategy/sizeonly.py 0000666 4542626 0000144 00000002424 12652514124 026021 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.s3.syncstrategy.base import BaseSync
LOG = logging.getLogger(__name__)
SIZE_ONLY = {'name': 'size-only', 'action': 'store_true',
'help_text': (
'Makes the size of each key the only criteria used to '
'decide whether to sync from source to destination.')}
class SizeOnlySync(BaseSync):
ARGUMENT = SIZE_ONLY
def determine_should_sync(self, src_file, dest_file):
same_size = self.compare_size(src_file, dest_file)
should_sync = not same_size
if should_sync:
LOG.debug("syncing: %s -> %s, size_changed: %s",
src_file.src, src_file.dest, not same_size)
return should_sync
awscli-1.10.1/awscli/customizations/s3/syncstrategy/__init__.py 0000666 4542626 0000144 00000001065 12652514124 025704 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
awscli-1.10.1/awscli/customizations/s3/syncstrategy/exacttimestamps.py 0000666 4542626 0000144 00000003226 12652514124 027361 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.s3.syncstrategy.base import SizeAndLastModifiedSync
LOG = logging.getLogger(__name__)
EXACT_TIMESTAMPS = {'name': 'exact-timestamps', 'action': 'store_true',
'help_text': (
'When syncing from S3 to local, same-sized '
'items will be ignored only when the timestamps '
'match exactly. The default behavior is to ignore '
'same-sized items unless the local version is newer '
'than the S3 version.')}
class ExactTimestampsSync(SizeAndLastModifiedSync):
ARGUMENT = EXACT_TIMESTAMPS
def compare_time(self, src_file, dest_file):
src_time = src_file.last_update
dest_time = dest_file.last_update
delta = dest_time - src_time
cmd = src_file.operation_name
if cmd == 'download':
return self.total_seconds(delta) == 0
else:
return super(ExactTimestampsSync, self).compare_time(src_file,
dest_file)
awscli-1.10.1/awscli/customizations/s3/syncstrategy/base.py 0000666 4542626 0000144 00000023600 12652514124 025056 0 ustar pysdk-ci amazon 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
LOG = logging.getLogger(__name__)
VALID_SYNC_TYPES = ['file_at_src_and_dest', 'file_not_at_dest',
'file_not_at_src']
class BaseSync(object):
"""Base sync strategy
To create a new sync strategy, subclass from this class.
"""
# This is the argument that will be added to the ``SyncCommand`` arg table.
# This argument will represent the sync strategy when the arguments for
# the sync command are parsed. ``ARGUMENT`` follows the same format as
# a member of ``ARG_TABLE`` in ``BasicCommand`` class as specified in
# ``awscli/customizations/commands.py``.
#
# For example, if I wanted to perform the sync strategy whenever I type
# ``--my-sync-strategy``, I would say:
#
# ARGUMENT =
# {'name': 'my-sync-strategy', 'action': 'store_true',
# 'help_text': 'Performs my sync strategy'}
#
# Typically, the argument's ``action`` should be ``store_true`` to
# minimize amount of extra code in making a custom sync strategy.
ARGUMENT = None
# At this point all that needs to be done is implement the
# ``determine_should_sync`` method (see method for more information).
def __init__(self, sync_type='file_at_src_and_dest'):
"""
:type sync_type: string
:param sync_type: This determines where the sync strategy will be
used. There are three strings to choose from:
'file_at_src_and_dest': apply sync strategy on a file that
exists both at the source and the destination.
'file_not_at_dest': apply sync strategy on a file that
exists at the source but not the destination.
'file_not_at_src': apply sync strategy on a file that
exists at the destination but not the source.
"""
self._check_sync_type(sync_type)
self._sync_type = sync_type
def _check_sync_type(self, sync_type):
if sync_type not in VALID_SYNC_TYPES:
raise ValueError("Unknown sync_type: %s.\n"
"Valid options are %s." %
(sync_type, VALID_SYNC_TYPES))
@property
def sync_type(self):
return self._sync_type
def register_strategy(self, session):
"""Registers the sync strategy class to the given session."""
session.register('building-arg-table.sync',
self.add_sync_argument)
session.register('choosing-s3-sync-strategy', self.use_sync_strategy)
def determine_should_sync(self, src_file, dest_file):
"""Subclasses should implement this method.
This function takes two ``FileStat`` objects (one from the source and
one from the destination). Then makes a decision on whether a given
operation (e.g. an upload, copy, download) should be allowed
to take place.
The function currently raises a ``NotImplementedError``. So this
method must be overwritten when this class is subclassed. Note
that this method must return a Boolean as documented below.
:type src_file: ``FileStat`` object
:param src_file: A representation of the operation that is to be
performed on a specific file existing in the source. Note if
the file does not exist at the source, ``src_file`` is None.
:type dest_file: ``FileStat`` object
:param dest_file: A representation of the operation that is to be
performed on a specific file existing in the destination. Note if
the file does not exist at the destination, ``dest_file`` is None.
:rtype: Boolean
:return: True if an operation based on the ``FileStat`` should be
allowed to occur.
False if an operation based on the ``FileStat`` should not be
allowed to occur. Note the operation being referred to depends on
the ``sync_type`` of the sync strategy:
'file_at_src_and_dest': refers to ``src_file``
'file_not_at_dest': refers to ``src_file``
'file_not_at_src': refers to ``dest_file``
"""
raise NotImplementedError("determine_should_sync")
@property
def arg_name(self):
# Retrieves the ``name`` of the sync strategy's ``ARGUMENT``.
name = None
if self.ARGUMENT is not None:
name = self.ARGUMENT.get('name', None)
return name
@property
def arg_dest(self):
# Retrieves the ``dest`` of the sync strategy's ``ARGUMENT``.
dest = None
if self.ARGUMENT is not None:
dest = self.ARGUMENT.get('dest', None)
return dest
def add_sync_argument(self, arg_table, **kwargs):
# This function adds sync strategy's argument to the ``SyncCommand``
# argument table.
if self.ARGUMENT is not None:
arg_table.append(self.ARGUMENT)
def use_sync_strategy(self, params, **kwargs):
# This function determines which sync strategy the ``SyncCommand`` will
# use. The sync strategy object must be returned by this method
# if it is to be chosen as the sync strategy to use.
#
# ``params`` is a dictionary that specifies all of the arguments
# the sync command is able to process as well as their values.
#
# Since ``ARGUMENT`` was added to the ``SyncCommand`` arg table,
# the argument will be present in ``params``.
#
# If the argument was included in the actual ``aws s3 sync`` command
# its value will show up as ``True`` in ``params`` otherwise its value
# will be ``False`` in ``params`` assuming the argument's ``action``
# is ``store_true``.
#
# Note: If the ``action`` of ``ARGUMENT`` was not set to
# ``store_true``, this method will need to be overwritten.
#
name_in_params = None
# Check if a ``dest`` was specified in ``ARGUMENT`` as if it is
# specified, the boolean value will be located at the argument's
# ``dest`` value in the ``params`` dictionary.
if self.arg_dest is not None:
name_in_params = self.arg_dest
# Then check ``name`` of ``ARGUMENT``, the boolean value will be
# located at the argument's ``name`` value in the ``params``
# dictionary.
elif self.arg_name is not None:
# ``name`` has all ``-`` replaced with ``_`` in ``params``.
name_in_params = self.arg_name.replace('-', '_')
if name_in_params is not None:
if params.get(name_in_params):
# Return the sync strategy object to be used for syncing.
return self
return None
def total_seconds(self, td):
"""
timedelta's total_seconds() function for Python 2.6 users
:param td: The difference between two datetime objects.
"""
return (td.microseconds + (td.seconds + td.days * 24 *
3600) * 10**6) / 10**6
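The fallback above reproduces, for Python 2.6, what ``datetime.timedelta.total_seconds()`` provides on 2.7 and later; a quick standalone check:

```python
import datetime

# Same arithmetic as the total_seconds() helper above: convert days and
# seconds to microseconds, sum, then scale back down to seconds.
def total_seconds(td):
    return (td.microseconds + (td.seconds + td.days * 24 *
                               3600) * 10**6) / 10**6

delta = datetime.timedelta(days=1, seconds=30)
print(total_seconds(delta))  # 86430.0 under Python 3's true division
```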
def compare_size(self, src_file, dest_file):
"""
:returns: True if the sizes are the same.
False otherwise.
"""
return src_file.size == dest_file.size
def compare_time(self, src_file, dest_file):
"""
:returns: True if the file does not need updating based on time of
last modification and type of operation.
False if the file does need updating based on the time of
last modification and type of operation.
"""
src_time = src_file.last_update
dest_time = dest_file.last_update
delta = dest_time - src_time
cmd = src_file.operation_name
if cmd == "upload" or cmd == "copy":
if self.total_seconds(delta) >= 0:
# Destination is newer than source.
return True
else:
# Destination is older than source, so
# we have a more recently updated file
# at the source location.
return False
elif cmd == "download":
if self.total_seconds(delta) <= 0:
return True
else:
# delta is positive, so the destination
# is newer than the source.
return False
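The decision made by ``compare_time`` above can be driven with a hypothetical ``FakeStat`` stand-in for ``FileStat`` (only the two attributes the comparison reads):

```python
import datetime
from collections import namedtuple

# Hypothetical stand-in for FileStat: just the attributes compare_time reads.
FakeStat = namedtuple('FakeStat', ['last_update', 'operation_name'])

def total_seconds(td):
    return (td.microseconds + (td.seconds + td.days * 24 *
                               3600) * 10**6) / 10**6

def compare_time(src_file, dest_file):
    # Returns True when the file does NOT need updating.
    delta = dest_file.last_update - src_file.last_update
    cmd = src_file.operation_name
    if cmd in ('upload', 'copy'):
        # Up to date when the destination is at least as new as the source.
        return total_seconds(delta) >= 0
    elif cmd == 'download':
        # Up to date when the local copy is no newer than the S3 object.
        return total_seconds(delta) <= 0

now = datetime.datetime(2016, 1, 1)
older = now - datetime.timedelta(seconds=60)
src = FakeStat(last_update=now, operation_name='upload')
dest = FakeStat(last_update=older, operation_name='upload')
print(compare_time(src, dest))  # False: source is newer, so re-upload
```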
class SizeAndLastModifiedSync(BaseSync):
def determine_should_sync(self, src_file, dest_file):
same_size = self.compare_size(src_file, dest_file)
same_last_modified_time = self.compare_time(src_file, dest_file)
should_sync = (not same_size) or (not same_last_modified_time)
if should_sync:
LOG.debug(
"syncing: %s -> %s, size: %s -> %s, modified time: %s -> %s",
src_file.src, src_file.dest,
src_file.size, dest_file.size,
src_file.last_update, dest_file.last_update)
return should_sync
class NeverSync(BaseSync):
def __init__(self, sync_type='file_not_at_src'):
super(NeverSync, self).__init__(sync_type)
def determine_should_sync(self, src_file, dest_file):
return False
class MissingFileSync(BaseSync):
def __init__(self, sync_type='file_not_at_dest'):
super(MissingFileSync, self).__init__(sync_type)
def determine_should_sync(self, src_file, dest_file):
LOG.debug("syncing: %s -> %s, file does not exist at destination",
src_file.src, src_file.dest)
return True
awscli-1.10.1/awscli/customizations/s3/tasks.py 0000666 4542626 0000144 00000074711 12652514124 022543 0 ustar pysdk-ci amazon 0000000 0000000 import logging
import math
import os
import time
import socket
import threading
from botocore.vendored import requests
from botocore.exceptions import IncompleteReadError
from botocore.vendored.requests.packages.urllib3.exceptions import \
ReadTimeoutError
from awscli.customizations.s3.utils import find_bucket_key, MD5Error, \
ReadFileChunk, relative_path, IORequest, IOCloseRequest, PrintTask, \
RequestParamsMapper
LOGGER = logging.getLogger(__name__)
class UploadCancelledError(Exception):
pass
class DownloadCancelledError(Exception):
pass
class RetriesExeededError(Exception):
pass
def print_operation(filename, failed, dryrun=False):
"""
Helper function used to print out what an operation did and whether
it failed.
"""
print_str = filename.operation_name
if dryrun:
print_str = '(dryrun) ' + print_str
if failed:
print_str += " failed"
print_str += ": "
if filename.src_type == "s3":
print_str = print_str + "s3://" + filename.src
else:
print_str += relative_path(filename.src)
if filename.operation_name not in ["delete", "make_bucket",
"remove_bucket"]:
if filename.dest_type == "s3":
print_str += " to s3://" + filename.dest
else:
print_str += " to " + relative_path(filename.dest)
return print_str
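The formatting rules of ``print_operation`` can be demonstrated with a hypothetical ``FakeFile`` carrying just the attributes the function reads (this simplified copy substitutes the raw ``src``/``dest`` strings for the real ``relative_path`` helper):

```python
from collections import namedtuple

# Hypothetical stand-in with the attributes print_operation reads.
FakeFile = namedtuple('FakeFile', ['operation_name', 'src', 'src_type',
                                   'dest', 'dest_type'])

def print_operation(filename, failed, dryrun=False):
    # Simplified copy of the helper above; relative_path is replaced
    # by the raw path strings for illustration.
    print_str = filename.operation_name
    if dryrun:
        print_str = '(dryrun) ' + print_str
    if failed:
        print_str += " failed"
    print_str += ": "
    if filename.src_type == "s3":
        print_str += "s3://" + filename.src
    else:
        print_str += filename.src
    if filename.operation_name not in ["delete", "make_bucket",
                                       "remove_bucket"]:
        if filename.dest_type == "s3":
            print_str += " to s3://" + filename.dest
        else:
            print_str += " to " + filename.dest
    return print_str

f = FakeFile('upload', 'local/file.txt', 'local', 'bucket/file.txt', 's3')
print(print_operation(f, failed=False, dryrun=True))
# (dryrun) upload: local/file.txt to s3://bucket/file.txt
```

Destination-less operations (delete, make_bucket, remove_bucket) skip the " to ..." suffix entirely.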
class OrderableTask(object):
PRIORITY = 10
class BasicTask(OrderableTask):
"""
This class is a wrapper for all ``TaskInfo`` and ``FileInfo`` objects.
It is practically a thread of execution. It also injects the necessary
attributes like the ``session`` object in order for the filename to
perform its designated operation.
"""
def __init__(self, session, filename, parameters,
result_queue, payload=None):
self.session = session
self.filename = filename
self.filename.parameters = parameters
self.parameters = parameters
self.result_queue = result_queue
self.payload = payload
def __call__(self):
self._execute_task(attempts=3)
def _execute_task(self, attempts, last_error=''):
if attempts == 0:
# We've run out of retries.
self._queue_print_message(self.filename, failed=True,
dryrun=self.parameters['dryrun'],
error_message=last_error)
return
filename = self.filename
kwargs = {}
if self.payload:
kwargs['payload'] = self.payload
try:
if not self.parameters['dryrun']:
getattr(filename, filename.operation_name)(**kwargs)
except requests.ConnectionError as e:
connect_error = str(e)
LOGGER.debug("%s %s failure: %s",
filename.src, filename.operation_name, connect_error)
self._execute_task(attempts - 1, last_error=str(e))
except MD5Error as e:
LOGGER.debug("%s %s failure: Data was corrupted: %s",
filename.src, filename.operation_name, e)
self._execute_task(attempts - 1, last_error=str(e))
except Exception as e:
LOGGER.debug(str(e), exc_info=True)
self._queue_print_message(filename, failed=True,
dryrun=self.parameters['dryrun'],
error_message=str(e))
else:
self._queue_print_message(filename, failed=False,
dryrun=self.parameters['dryrun'])
def _queue_print_message(self, filename, failed, dryrun,
error_message=None):
try:
if filename.operation_name != 'list_objects':
message = print_operation(filename, failed,
self.parameters['dryrun'])
if error_message is not None:
message += ' ' + error_message
result = {'message': message, 'error': failed}
self.result_queue.put(PrintTask(**result))
except Exception as e:
LOGGER.debug('%s' % str(e))
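``_execute_task`` retries by counting ``attempts`` down and, once they are exhausted, surfacing the last error message rather than raising. A minimal standalone sketch of that pattern (the ``run_with_retries`` helper is hypothetical, not awscli API — the real code recurses and queues a ``PrintTask`` instead of returning):

```python
def run_with_retries(operation, attempts=3):
    # Each failure consumes one attempt; when attempts hit zero the
    # last error message is reported instead of an exception escaping.
    last_error = ''
    while attempts > 0:
        try:
            return True, operation()
        except Exception as e:
            last_error = str(e)
            attempts -= 1
    return False, last_error

calls = {'n': 0}
def flaky():
    # Fails twice, then succeeds -- within the three-attempt budget.
    calls['n'] += 1
    if calls['n'] < 3:
        raise OSError('connection reset')
    return 'ok'

print(run_with_retries(flaky))  # succeeds on the third attempt
```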
class CopyPartTask(OrderableTask):
def __init__(self, part_number, chunk_size,
result_queue, upload_context, filename, params):
self._result_queue = result_queue
self._upload_context = upload_context
self._part_number = part_number
self._chunk_size = chunk_size
self._filename = filename
self._params = params
def _is_last_part(self, part_number):
return self._part_number == int(
math.ceil(self._filename.size / float(self._chunk_size)))
def _total_parts(self):
return int(math.ceil(
self._filename.size / float(self._chunk_size)))
def __call__(self):
LOGGER.debug("Uploading part copy %s for filename: %s",
self._part_number, self._filename.src)
total_file_size = self._filename.size
start_range = (self._part_number - 1) * self._chunk_size
if self._is_last_part(self._part_number):
end_range = total_file_size - 1
else:
end_range = start_range + self._chunk_size - 1
range_param = 'bytes=%s-%s' % (start_range, end_range)
try:
LOGGER.debug("Waiting for upload id.")
upload_id = self._upload_context.wait_for_upload_id()
bucket, key = find_bucket_key(self._filename.dest)
src_bucket, src_key = find_bucket_key(self._filename.src)
params = {'Bucket': bucket, 'Key': key,
'PartNumber': self._part_number,
'UploadId': upload_id,
'CopySource': {'Bucket': src_bucket, 'Key': src_key},
'CopySourceRange': range_param}
RequestParamsMapper.map_upload_part_copy_params(
params, self._params)
response_data = self._filename.client.upload_part_copy(**params)
etag = response_data['CopyPartResult']['ETag'][1:-1]
self._upload_context.announce_finished_part(
etag=etag, part_number=self._part_number)
message = print_operation(self._filename, 0)
result = {'message': message, 'total_parts': self._total_parts(),
'error': False}
self._result_queue.put(PrintTask(**result))
except UploadCancelledError as e:
# We don't need to do anything in this case. The task has
# been cancelled, and whichever task cancelled it has
# already queued a message.
LOGGER.debug("Not uploading part copy, task has been cancelled.")
except Exception as e:
LOGGER.debug('Error during upload part copy: %s', e,
exc_info=True)
message = print_operation(self._filename, failed=True,
dryrun=False)
message += '\n' + str(e)
result = {'message': message, 'error': True}
self._result_queue.put(PrintTask(**result))
self._upload_context.cancel_upload()
else:
LOGGER.debug("Copy part number %s completed for filename: %s",
self._part_number, self._filename.src)
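``CopyPartTask`` numbers parts from 1 and clamps the final part's byte range to the last byte of the object, since the object size is rarely an exact multiple of the chunk size. A sketch of that range arithmetic (``copy_part_range`` is a hypothetical name for illustration):

```python
import math

def copy_part_range(part_number, chunk_size, total_size):
    # Part numbers are 1-based, matching CopyPartTask.__call__ above.
    # The last part's range ends at the final byte of the object.
    last_part = int(math.ceil(total_size / float(chunk_size)))
    start = (part_number - 1) * chunk_size
    if part_number == last_part:
        end = total_size - 1
    else:
        end = start + chunk_size - 1
    return 'bytes=%s-%s' % (start, end)

# A 12 MiB object copied in 5 MiB parts yields 3 parts.
print(copy_part_range(3, 5 * 1024 ** 2, 12 * 1024 ** 2))
```

The resulting string is exactly the ``CopySourceRange`` parameter passed to ``upload_part_copy``.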
class UploadPartTask(OrderableTask):
"""
This is a task used to upload a part of a multipart upload.
This task pulls from a ``part_queue`` which represents the
queue for a specific multipart upload. This pulling from a
``part_queue`` is necessary in order to keep track of and
complete the multipart upload initiated by the ``FileInfo``
object.
"""
def __init__(self, part_number, chunk_size, result_queue, upload_context,
filename, params, payload=None):
self._result_queue = result_queue
self._upload_context = upload_context
self._part_number = part_number
self._chunk_size = chunk_size
self._filename = filename
self._params = params
self._payload = payload
def _read_part(self):
actual_filename = self._filename.src
in_file_part_number = self._part_number - 1
starting_byte = in_file_part_number * self._chunk_size
return ReadFileChunk(actual_filename, starting_byte, self._chunk_size)
def __call__(self):
LOGGER.debug("Uploading part %s for filename: %s",
self._part_number, self._filename.src)
try:
LOGGER.debug("Waiting for upload id.")
upload_id = self._upload_context.wait_for_upload_id()
bucket, key = find_bucket_key(self._filename.dest)
if self._filename.is_stream:
body = self._payload
total = self._upload_context.expected_parts
else:
total = int(math.ceil(
self._filename.size/float(self._chunk_size)))
body = self._read_part()
params = {'Bucket': bucket, 'Key': key,
'PartNumber': self._part_number,
'UploadId': upload_id,
'Body': body}
RequestParamsMapper.map_upload_part_params(params, self._params)
try:
response_data = self._filename.client.upload_part(**params)
finally:
body.close()
etag = response_data['ETag'][1:-1]
self._upload_context.announce_finished_part(
etag=etag, part_number=self._part_number)
message = print_operation(self._filename, 0)
result = {'message': message, 'total_parts': total,
'error': False}
self._result_queue.put(PrintTask(**result))
except UploadCancelledError as e:
# We don't need to do anything in this case. The task has
# been cancelled, and whichever task cancelled it has
# already queued a message.
LOGGER.debug("Not uploading part, task has been cancelled.")
except Exception as e:
LOGGER.debug('Error during part upload: %s', e,
exc_info=True)
message = print_operation(self._filename, failed=True,
dryrun=False)
message += '\n' + str(e)
result = {'message': message, 'error': True}
self._result_queue.put(PrintTask(**result))
self._upload_context.cancel_upload()
else:
LOGGER.debug("Part number %s completed for filename: %s",
self._part_number, self._filename.src)
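``UploadPartTask`` also numbers parts from 1, but ``_read_part`` reads the file from a zero-based offset, so part N starts at ``(N - 1) * chunk_size``. A sketch of that bookkeeping (``upload_part_layout`` is a hypothetical name, not awscli API):

```python
import math

def upload_part_layout(part_number, chunk_size, file_size):
    # Total parts is the chunk count rounded up; the read offset for
    # part N is zero-based, hence the (N - 1) shift seen in _read_part.
    total_parts = int(math.ceil(file_size / float(chunk_size)))
    starting_byte = (part_number - 1) * chunk_size
    return total_parts, starting_byte

# A 250-byte file in 100-byte chunks: 3 parts; part 3 starts at byte 200.
print(upload_part_layout(3, 100, 250))
```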
class CreateLocalFileTask(OrderableTask):
def __init__(self, context, filename, result_queue):
self._context = context
self._filename = filename
self._result_queue = result_queue
def __call__(self):
dirname = os.path.dirname(self._filename.dest)
try:
if not os.path.isdir(dirname):
try:
os.makedirs(dirname)
except OSError:
# It's possible that between the isdir() check and the
# makedirs() call, another thread has come along and created
# the directory. In that case the directory already exists
# and we can move on.
pass
# Always create the file. Even if it exists, we need to
# wipe out the existing contents.
with open(self._filename.dest, 'wb'):
pass
except Exception as e:
message = print_operation(self._filename, failed=True,
dryrun=False)
message += '\n' + str(e)
result = {'message': message, 'error': True}
self._result_queue.put(PrintTask(**result))
self._context.cancel()
else:
self._context.announce_file_created()
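The check-then-create sequence in ``CreateLocalFileTask`` is deliberately tolerant of the race where two download tasks target files in the same new directory. The sketch below (``ensure_parent_dir`` is a hypothetical helper) exercises that pattern from several threads at once:

```python
import os
import tempfile
import threading

def ensure_parent_dir(path):
    # Same idea as CreateLocalFileTask: tolerate another thread having
    # created the directory between the isdir() check and makedirs().
    dirname = os.path.dirname(path)
    if dirname and not os.path.isdir(dirname):
        try:
            os.makedirs(dirname)
        except OSError:
            pass  # lost the race; the directory now exists

root = tempfile.mkdtemp()
dest = os.path.join(root, 'a', 'b', 'part.bin')
threads = [threading.Thread(target=ensure_parent_dir, args=(dest,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(os.path.isdir(os.path.dirname(dest)))
```

On Python 3.2+ the same effect is available as ``os.makedirs(dirname, exist_ok=True)``; the try/except form here matches the 2.6-compatible code above.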
class CompleteDownloadTask(OrderableTask):
def __init__(self, context, filename, result_queue, params, io_queue):
self._context = context
self._filename = filename
self._result_queue = result_queue
self._parameters = params
self._io_queue = io_queue
def __call__(self):
# Once the file has finished downloading, we have a few things to do:
# 1) Fix up the last modified time to match s3.
# 2) Tell the result_queue we're done.
# 3) Queue an IO request to the IO thread letting it know we're
# done with the file.
self._context.wait_for_completion()
last_update_tuple = self._filename.last_update.timetuple()
mod_timestamp = time.mktime(last_update_tuple)
desired_mtime = int(mod_timestamp)
message = print_operation(self._filename, False,
self._parameters['dryrun'])
print_task = {'message': message, 'error': False}
self._result_queue.put(PrintTask(**print_task))
self._io_queue.put(IOCloseRequest(self._filename.dest, desired_mtime))
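The mtime fix-up above converts S3's ``LastModified`` datetime into a POSIX timestamp so the local file can be stamped to match. A self-contained sketch of that conversion (the sample datetime is arbitrary):

```python
import os
import time
import tempfile
from datetime import datetime

# Convert a LastModified-style datetime to an integer timestamp,
# exactly as CompleteDownloadTask does before queueing IOCloseRequest.
last_update = datetime(2016, 1, 15, 12, 30, 0)
desired_mtime = int(time.mktime(last_update.timetuple()))

# Apply it to a scratch file; the IO thread does the equivalent when
# it processes the IOCloseRequest.
fd, path = tempfile.mkstemp()
os.close(fd)
os.utime(path, (desired_mtime, desired_mtime))
print(int(os.path.getmtime(path)) == desired_mtime)
```

Note ``time.mktime`` interprets the timetuple in local time; the real code inherits whatever timezone the parsed ``LastModified`` value carries.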
class DownloadPartTask(OrderableTask):
"""
This task downloads and writes a part to a file. This task pulls
from a ``part_queue`` which represents the queue for a specific
multipart download. This pulling from a ``part_queue`` is necessary
in order to keep track of and complete the multipart download initiated by
the ``FileInfo`` object.
"""
# Amount to read from response body at a time.
ITERATE_CHUNK_SIZE = 1024 * 1024
READ_TIMEOUT = 60
TOTAL_ATTEMPTS = 5
def __init__(self, part_number, chunk_size, result_queue,
filename, context, io_queue, params):
self._part_number = part_number
self._chunk_size = chunk_size
self._result_queue = result_queue
self._filename = filename
self._client = filename.client
self._context = context
self._io_queue = io_queue
self._params = params
def __call__(self):
try:
self._download_part()
except Exception as e:
LOGGER.debug(
'Exception caught downloading byte range: %s',
e, exc_info=True)
self._context.cancel()
raise e
def _download_part(self):
total_file_size = self._filename.size
start_range = self._part_number * self._chunk_size
if self._part_number == int(total_file_size / self._chunk_size) - 1:
end_range = ''
else:
end_range = start_range + self._chunk_size - 1
range_param = 'bytes=%s-%s' % (start_range, end_range)
LOGGER.debug("Downloading bytes range of %s for file %s", range_param,
self._filename.dest)
bucket, key = find_bucket_key(self._filename.src)
params = {'Bucket': bucket,
'Key': key,
'Range': range_param}
RequestParamsMapper.map_get_object_params(params, self._params)
for i in range(self.TOTAL_ATTEMPTS):
try:
LOGGER.debug("Making GetObject requests with byte range: %s",
range_param)
response_data = self._client.get_object(**params)
LOGGER.debug("Response received from GetObject")
body = response_data['Body']
self._queue_writes(body)
self._context.announce_completed_part(self._part_number)
message = print_operation(self._filename, 0)
total_parts = int(self._filename.size / self._chunk_size)
result = {'message': message, 'error': False,
'total_parts': total_parts}
self._result_queue.put(PrintTask(**result))
LOGGER.debug("Task complete: %s", self)
return
except (socket.timeout, socket.error, ReadTimeoutError) as e:
LOGGER.debug("Timeout error caught, retrying request, "
"(attempt %s / %s)", i + 1, self.TOTAL_ATTEMPTS,
exc_info=True)
continue
except IncompleteReadError as e:
LOGGER.debug("Incomplete read detected: %s, (attempt %s / %s)",
e, i + 1, self.TOTAL_ATTEMPTS)
continue
raise RetriesExeededError("Maximum number of attempts exceeded: %s" %
self.TOTAL_ATTEMPTS)
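Note that ``_download_part`` numbers parts from 0 (unlike the 1-based upload tasks) and leaves the last part's range open-ended (``bytes=N-``) so any trailing bytes past a whole multiple of the chunk size are still fetched. A sketch of that range logic, mirroring the code above (``download_part_range`` is a hypothetical name):

```python
def download_part_range(part_number, chunk_size, total_size):
    # Parts are 0-based here; the last part's Range header is left
    # open-ended ("bytes=N-") so trailing bytes are included.
    start = part_number * chunk_size
    if part_number == int(total_size / chunk_size) - 1:
        end = ''
    else:
        end = start + chunk_size - 1
    return 'bytes=%s-%s' % (start, end)

# 250 bytes in 100-byte chunks: part 1 is the last and is open-ended.
print(download_part_range(1, 100, 250))
```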
def _queue_writes(self, body):
self._context.wait_for_file_created()
LOGGER.debug("Writing part number %s to file: %s",
self._part_number, self._filename.dest)
iterate_chunk_size = self.ITERATE_CHUNK_SIZE
body.set_socket_timeout(self.READ_TIMEOUT)
if self._filename.is_stream:
self._queue_writes_for_stream(body)
else:
self._queue_writes_in_chunks(body, iterate_chunk_size)
def _queue_writes_for_stream(self, body):
# We have to handle an output stream differently. The main reason is
# that we cannot seek() in the output stream. This means that we need
# to queue the writes in order. If we queued IO writes in chunks
# smaller than the part size, then in the case of a retry we would
# need a range GET for only the remaining parts. The alternative,
# which is what we do here, is to queue the entire chunk-size write.
self._context.wait_for_turn(self._part_number)
chunk = body.read()
offset = self._part_number * self._chunk_size
LOGGER.debug("Submitting IORequest to write queue.")
self._io_queue.put(
IORequest(self._filename.dest, offset, chunk,
self._filename.is_stream)
)
self._context.done_with_turn()
def _queue_writes_in_chunks(self, body, iterate_chunk_size):
amount_read = 0
current = body.read(iterate_chunk_size)
while current:
offset = self._part_number * self._chunk_size + amount_read
LOGGER.debug("Submitting IORequest to write queue.")
self._io_queue.put(
IORequest(self._filename.dest, offset, current,
self._filename.is_stream)
)
LOGGER.debug("Request successfully submitted.")
amount_read += len(current)
current = body.read(iterate_chunk_size)
LOGGER.debug("Done queueing writes for part number %s to file: %s",
self._part_number, self._filename.dest)
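The read loop in ``_queue_writes_in_chunks`` pulls fixed-size chunks from the response body and tracks the absolute file offset for each write. The same loop, sketched as a generator over an in-memory body (``iter_chunks`` is a hypothetical helper):

```python
import io

def iter_chunks(body, offset_base, iterate_chunk_size):
    # Pull fixed-size chunks and yield (absolute_offset, data) pairs,
    # as _queue_writes_in_chunks does when building IORequests.
    amount_read = 0
    current = body.read(iterate_chunk_size)
    while current:
        yield offset_base + amount_read, current
        amount_read += len(current)
        current = body.read(iterate_chunk_size)

body = io.BytesIO(b'abcdefghij')
print(list(iter_chunks(body, 100, 4)))
# prints: [(100, b'abcd'), (104, b'efgh'), (108, b'ij')]
```

Tracking ``amount_read`` rather than assuming full chunks matters because the final read is usually short, as in the last pair above.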
class CreateMultipartUploadTask(BasicTask):
def __init__(self, session, filename, parameters, result_queue,
upload_context):
super(CreateMultipartUploadTask, self).__init__(
session, filename, parameters, result_queue)
self._upload_context = upload_context
def __call__(self):
LOGGER.debug("Creating multipart upload for file: %s",
self.filename.src)
try:
upload_id = self.filename.create_multipart_upload()
LOGGER.debug("Announcing upload id: %s", upload_id)
self._upload_context.announce_upload_id(upload_id)
except Exception as e:
LOGGER.debug("Error trying to create multipart upload: %s",
e, exc_info=True)
self._upload_context.cancel_upload()
message = print_operation(self.filename, True,
self.parameters['dryrun'])
message += '\n' + str(e)
result = {'message': message, 'error': True}
self.result_queue.put(PrintTask(**result))
raise e
class RemoveRemoteObjectTask(OrderableTask):
def __init__(self, filename, context):
self._context = context
self._filename = filename
def __call__(self):
LOGGER.debug("Waiting for download to finish.")
self._context.wait_for_completion()
bucket, key = find_bucket_key(self._filename.src)
params = {'Bucket': bucket, 'Key': key}
self._filename.source_client.delete_object(**params)
class CompleteMultipartUploadTask(BasicTask):
def __init__(self, session, filename, parameters, result_queue,
upload_context):
super(CompleteMultipartUploadTask, self).__init__(
session, filename, parameters, result_queue)
self._upload_context = upload_context
def __call__(self):
LOGGER.debug("Completing multipart upload for file: %s",
self.filename.src)
upload_id = self._upload_context.wait_for_upload_id()
parts = self._upload_context.wait_for_parts_to_finish()
LOGGER.debug("Received upload id and parts list.")
bucket, key = find_bucket_key(self.filename.dest)
params = {
'Bucket': bucket, 'Key': key,
'UploadId': upload_id,
'MultipartUpload': {'Parts': parts},
}
try:
response_data = self.filename.client.complete_multipart_upload(
**params)
except Exception as e:
LOGGER.debug("Error trying to complete multipart upload: %s",
e, exc_info=True)
message = print_operation(
self.filename, failed=True,
dryrun=self.parameters['dryrun'])
message += '\n' + str(e)
result = {
'message': message,
'error': True
}
else:
LOGGER.debug("Multipart upload completed for: %s",
self.filename.src)
message = print_operation(self.filename, False,
self.parameters['dryrun'])
result = {'message': message, 'error': False}
self._upload_context.announce_completed()
self.result_queue.put(PrintTask(**result))
class RemoveFileTask(BasicTask):
def __init__(self, local_filename, upload_context):
self._local_filename = local_filename
self._upload_context = upload_context
# This 'filename' attr has to be here because other objects
# introspect task objects. This should eventually be removed,
# but it's needed for now.
self.filename = None
def __call__(self):
LOGGER.debug("Waiting for upload to complete.")
self._upload_context.wait_for_completion()
LOGGER.debug("Removing local file: %s", self._local_filename)
os.remove(self._local_filename)
class MultipartUploadContext(object):
"""Context object for a multipart upload.
Performing a multipart upload usually consists of three parts:
* CreateMultipartUpload
* UploadPart
* CompleteMultipartUpload
Each of those three parts are not independent of each other. In order
to upload a part, you need to know the upload id (created during the
CreateMultipartUpload operation). In order to complete a multipart
you need the etags from all the parts (created during the UploadPart
operations). This context object provides the necessary building blocks
to allow for the three stages to efficiently communicate with each other.
This class is thread safe.
"""
# These are the valid states for this object.
_UNSTARTED = '_UNSTARTED'
_STARTED = '_STARTED'
_CANCELLED = '_CANCELLED'
_COMPLETED = '_COMPLETED'
def __init__(self, expected_parts='...'):
self._upload_id = None
self._expected_parts = expected_parts
self._parts = []
self._lock = threading.Lock()
self._upload_id_condition = threading.Condition(self._lock)
self._parts_condition = threading.Condition(self._lock)
self._upload_complete_condition = threading.Condition(self._lock)
self._state = self._UNSTARTED
@property
def expected_parts(self):
return self._expected_parts
def announce_upload_id(self, upload_id):
with self._upload_id_condition:
self._upload_id = upload_id
self._state = self._STARTED
self._upload_id_condition.notifyAll()
def announce_finished_part(self, etag, part_number):
with self._parts_condition:
self._parts.append({'ETag': etag, 'PartNumber': part_number})
self._parts_condition.notifyAll()
def announce_total_parts(self, total_parts):
with self._parts_condition:
self._expected_parts = total_parts
self._parts_condition.notifyAll()
def wait_for_parts_to_finish(self):
with self._parts_condition:
while self._expected_parts == '...' or \
len(self._parts) < self._expected_parts:
if self._state == self._CANCELLED:
raise UploadCancelledError("Upload has been cancelled.")
self._parts_condition.wait(timeout=1)
return list(sorted(self._parts, key=lambda p: p['PartNumber']))
def wait_for_upload_id(self):
with self._upload_id_condition:
while self._upload_id is None and self._state != self._CANCELLED:
self._upload_id_condition.wait(timeout=1)
if self._state == self._CANCELLED:
raise UploadCancelledError("Upload has been cancelled.")
return self._upload_id
def wait_for_completion(self):
with self._upload_complete_condition:
while not self._state == self._COMPLETED:
if self._state == self._CANCELLED:
raise UploadCancelledError("Upload has been cancelled.")
self._upload_complete_condition.wait(timeout=1)
def cancel_upload(self, canceller=None, args=None, kwargs=None):
"""Cancel the upload.
If the upload is already in progress (via ``self.in_progress()``),
you can provide a ``canceller`` argument that can be used to cancel
the multipart upload request (typically this would call something
like AbortMultipartUpload). The ``canceller`` argument is a function
that takes a single argument, which is the upload id::
def my_canceller(upload_id):
cancel.upload(bucket, key, upload_id)
The ``canceller`` callable will only be called if the
task is in progress. If the task has not been started or is
complete, then ``canceller`` will not be called.
Note that ``canceller`` is called while an exclusive lock is held,
so you cannot make any calls into the MultipartUploadContext object
in the ``canceller`` callable.
"""
with self._lock:
if self._state == self._STARTED and canceller is not None:
if args is None:
args = ()
if kwargs is None:
kwargs = {}
canceller(self._upload_id, *args, **kwargs)
self._state = self._CANCELLED
def in_progress(self):
"""Determines whether or not the multipart upload is in process.
Note that this has a very short gap from the time that a
CreateMultipartUpload is called to the time the
MultipartUploadContext object is told about the upload
where this method will return False even though the multipart
upload is in fact in progress. This is solely based on whether
or not the MultipartUploadContext has been notified about an
upload id.
"""
with self._lock:
return self._state == self._STARTED
def is_complete(self):
with self._lock:
return self._state == self._COMPLETED
def is_cancelled(self):
with self._lock:
return self._state == self._CANCELLED
def announce_completed(self):
"""Let the context object know that the upload is complete.
This should be called after a CompleteMultipartUpload operation.
"""
with self._upload_complete_condition:
self._state = self._COMPLETED
self._upload_complete_condition.notifyAll()
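The heart of ``MultipartUploadContext`` is condition-variable signalling: upload-part tasks block in ``wait_for_upload_id()`` until the create task calls ``announce_upload_id()``. A stripped-down sketch of that handoff (``UploadIdLatch`` is a hypothetical class, without the cancellation path the real context adds):

```python
import threading

class UploadIdLatch(object):
    # Minimal version of the wait_for_upload_id() handshake:
    # consumers block on a Condition until a producer announces the id.
    def __init__(self):
        self._cond = threading.Condition()
        self._upload_id = None

    def announce(self, upload_id):
        with self._cond:
            self._upload_id = upload_id
            self._cond.notify_all()

    def wait(self):
        with self._cond:
            while self._upload_id is None:
                self._cond.wait(timeout=1)
            return self._upload_id

latch = UploadIdLatch()
results = []
consumer = threading.Thread(target=lambda: results.append(latch.wait()))
consumer.start()
latch.announce('upload-123')
consumer.join()
print(results)
```

The real context also periodically re-checks a cancelled flag inside the wait loop, which is why its waits take a one-second timeout rather than blocking indefinitely.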
class MultipartDownloadContext(object):
_STATES = {
'UNSTARTED': 'UNSTARTED',
'STARTED': 'STARTED',
'COMPLETED': 'COMPLETED',
'CANCELLED': 'CANCELLED'
}
def __init__(self, num_parts, lock=None):
self.num_parts = num_parts
if lock is None:
lock = threading.Lock()
self._lock = lock
self._created_condition = threading.Condition(self._lock)
self._submit_write_condition = threading.Condition(self._lock)
self._completed_condition = threading.Condition(self._lock)
self._state = self._STATES['UNSTARTED']
self._finished_parts = set()
self._current_stream_part_number = 0
def announce_completed_part(self, part_number):
with self._completed_condition:
self._finished_parts.add(part_number)
if len(self._finished_parts) == self.num_parts:
self._state = self._STATES['COMPLETED']
self._completed_condition.notifyAll()
def announce_file_created(self):
with self._created_condition:
self._state = self._STATES['STARTED']
self._created_condition.notifyAll()
def wait_for_file_created(self):
with self._created_condition:
while not self._state == self._STATES['STARTED']:
if self._state == self._STATES['CANCELLED']:
raise DownloadCancelledError(
"Download has been cancelled.")
self._created_condition.wait(timeout=1)
def wait_for_completion(self):
with self._completed_condition:
while not self._state == self._STATES['COMPLETED']:
if self._state == self._STATES['CANCELLED']:
raise DownloadCancelledError(
"Download has been cancelled.")
self._completed_condition.wait(timeout=1)
def wait_for_turn(self, part_number):
with self._submit_write_condition:
while self._current_stream_part_number != part_number:
if self._state == self._STATES['CANCELLED']:
raise DownloadCancelledError(
"Download has been cancelled.")
self._submit_write_condition.wait(timeout=0.2)
def done_with_turn(self):
with self._submit_write_condition:
self._current_stream_part_number += 1
self._submit_write_condition.notifyAll()
def cancel(self):
with self._lock:
self._state = self._STATES['CANCELLED']
def is_cancelled(self):
with self._lock:
return self._state == self._STATES['CANCELLED']
def is_started(self):
with self._lock:
return self._state == self._STATES['STARTED']
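``wait_for_turn``/``done_with_turn`` above serialize stream writes: because stdout cannot seek, part N's write may only be queued after parts 0..N-1. A sketch of that turn-taking (``TurnTracker`` is a hypothetical class, omitting the cancellation check):

```python
import threading

class TurnTracker(object):
    # Sketch of wait_for_turn()/done_with_turn(): stream writes are
    # queued strictly in part order because the output cannot seek().
    def __init__(self):
        self._cond = threading.Condition()
        self._current = 0

    def wait_for_turn(self, part_number):
        with self._cond:
            while self._current != part_number:
                self._cond.wait(timeout=0.2)

    def done_with_turn(self):
        with self._cond:
            self._current += 1
            self._cond.notify_all()

tracker = TurnTracker()
order = []

def write_part(n):
    tracker.wait_for_turn(n)
    order.append(n)
    tracker.done_with_turn()

# Start the parts out of order; they still complete in order.
threads = [threading.Thread(target=write_part, args=(n,))
           for n in (2, 0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(order)
```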
awscli-1.10.1/awscli/customizations/iot_data.py 0000666 4542626 0000144 00000002317 12652514124 022646 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
def register_custom_endpoint_note(event_emitter):
event_emitter.register_last(
'doc-description.iot-data', add_custom_endpoint_url_note)
def add_custom_endpoint_url_note(help_command, **kwargs):
style = help_command.doc.style
style.start_note()
style.doc.writeln(
'The default endpoint data.iot.[region].amazonaws.com is intended '
'for testing purposes only. For production code it is strongly '
'recommended to use the custom endpoint for your account '
'(retrievable via the iot describe-endpoint command) to ensure best '
'availability and reachability of the service.'
)
style.end_note()
awscli-1.10.1/awscli/customizations/emr/ 0000777 4542626 0000144 00000000000 12652514126 021272 5 ustar pysdk-ci amazon 0000000 0000000
awscli-1.10.1/awscli/customizations/emr/createcluster.py 0000666 4542626 0000144 00000047601 12652514124 024517 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import re
from awscli.customizations.commands import BasicCommand
from awscli.customizations.emr import applicationutils
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrfsutils
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import hbaseutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr import instancegroupsutils
from awscli.customizations.emr import steputils
from awscli.customizations.emr.command import Command
from awscli.customizations.emr.constants import EC2_ROLE_NAME
from awscli.customizations.emr.constants import EMR_ROLE_NAME
from botocore.compat import json
class CreateCluster(Command):
NAME = 'create-cluster'
DESCRIPTION = helptext.CREATE_CLUSTER_DESCRIPTION
ARG_TABLE = [
{'name': 'release-label',
'help_text': helptext.RELEASE_LABEL},
{'name': 'ami-version',
'help_text': helptext.AMI_VERSION},
{'name': 'instance-groups',
'schema': argumentschema.INSTANCE_GROUPS_SCHEMA,
'help_text': helptext.INSTANCE_GROUPS},
{'name': 'instance-type',
'help_text': helptext.INSTANCE_TYPE},
{'name': 'instance-count',
'help_text': helptext.INSTANCE_COUNT},
{'name': 'auto-terminate', 'action': 'store_true',
'group_name': 'auto_terminate',
'help_text': helptext.AUTO_TERMINATE},
{'name': 'no-auto-terminate', 'action': 'store_true',
'group_name': 'auto_terminate'},
{'name': 'name',
'default': 'Development Cluster',
'help_text': helptext.CLUSTER_NAME},
{'name': 'log-uri',
'help_text': helptext.LOG_URI},
{'name': 'service-role',
'help_text': helptext.SERVICE_ROLE},
{'name': 'use-default-roles', 'action': 'store_true',
'help_text': helptext.USE_DEFAULT_ROLES},
{'name': 'configurations',
'help_text': helptext.CONFIGURATIONS},
{'name': 'ec2-attributes',
'help_text': helptext.EC2_ATTRIBUTES,
'schema': argumentschema.EC2_ATTRIBUTES_SCHEMA},
{'name': 'termination-protected', 'action': 'store_true',
'group_name': 'termination_protected',
'help_text': helptext.TERMINATION_PROTECTED},
{'name': 'no-termination-protected', 'action': 'store_true',
'group_name': 'termination_protected'},
{'name': 'visible-to-all-users', 'action': 'store_true',
'group_name': 'visibility',
'help_text': helptext.VISIBILITY},
{'name': 'no-visible-to-all-users', 'action': 'store_true',
'group_name': 'visibility'},
{'name': 'enable-debugging', 'action': 'store_true',
'group_name': 'debug',
'help_text': helptext.DEBUGGING},
{'name': 'no-enable-debugging', 'action': 'store_true',
'group_name': 'debug'},
{'name': 'tags', 'nargs': '+',
'help_text': helptext.TAGS,
'schema': argumentschema.TAGS_SCHEMA},
{'name': 'bootstrap-actions',
'help_text': helptext.BOOTSTRAP_ACTIONS,
'schema': argumentschema.BOOTSTRAP_ACTIONS_SCHEMA},
{'name': 'applications',
'help_text': helptext.APPLICATIONS,
'schema': argumentschema.APPLICATIONS_SCHEMA},
{'name': 'emrfs',
'help_text': helptext.EMR_FS,
'schema': argumentschema.EMR_FS_SCHEMA},
{'name': 'steps',
'schema': argumentschema.STEPS_SCHEMA,
'help_text': helptext.STEPS},
{'name': 'additional-info',
'help_text': helptext.ADDITIONAL_INFO},
{'name': 'restore-from-hbase-backup',
'schema': argumentschema.HBASE_RESTORE_FROM_BACKUP_SCHEMA,
'help_text': helptext.RESTORE_FROM_HBASE}
]
SYNOPSIS = BasicCommand.FROM_FILE('emr', 'create-cluster-synopsis.rst')
EXAMPLES = BasicCommand.FROM_FILE('emr', 'create-cluster-examples.rst')
def _run_main_command(self, parsed_args, parsed_globals):
params = {}
params['Name'] = parsed_args.name
self._validate_release_label_ami_version(parsed_args)
service_role_validation_message = (
" Either choose --use-default-roles or use both --service-role "
" and --ec2-attributes InstanceProfile=.")
if parsed_args.use_default_roles is True and \
parsed_args.service_role is not None:
raise exceptions.MutualExclusiveOptionError(
option1="--use-default-roles",
option2="--service-role",
message=service_role_validation_message)
if parsed_args.use_default_roles is True and \
parsed_args.ec2_attributes is not None and \
'InstanceProfile' in parsed_args.ec2_attributes:
raise exceptions.MutualExclusiveOptionError(
option1="--use-default-roles",
option2="--ec2-attributes InstanceProfile",
message=service_role_validation_message)
instances_config = {}
instances_config['InstanceGroups'] = \
instancegroupsutils.validate_and_build_instance_groups(
instance_groups=parsed_args.instance_groups,
instance_type=parsed_args.instance_type,
instance_count=parsed_args.instance_count)
if parsed_args.release_label is not None:
params["ReleaseLabel"] = parsed_args.release_label
if parsed_args.configurations is not None:
try:
params["Configurations"] = json.loads(
parsed_args.configurations)
except ValueError:
raise ValueError('aws: error: invalid json argument for '
'option --configurations')
if (parsed_args.release_label is None and
parsed_args.ami_version is not None):
is_valid_ami_version = re.match(r'\d?\..*', parsed_args.ami_version)
if is_valid_ami_version is None:
raise exceptions.InvalidAmiVersionError(
ami_version=parsed_args.ami_version)
params['AmiVersion'] = parsed_args.ami_version
emrutils.apply_dict(
params, 'AdditionalInfo', parsed_args.additional_info)
emrutils.apply_dict(params, 'LogUri', parsed_args.log_uri)
if parsed_args.use_default_roles is True:
parsed_args.service_role = EMR_ROLE_NAME
if parsed_args.ec2_attributes is None:
parsed_args.ec2_attributes = {}
parsed_args.ec2_attributes['InstanceProfile'] = EC2_ROLE_NAME
emrutils.apply_dict(params, 'ServiceRole', parsed_args.service_role)
if (
parsed_args.no_auto_terminate is False and
parsed_args.auto_terminate is False):
parsed_args.no_auto_terminate = True
instances_config['KeepJobFlowAliveWhenNoSteps'] = \
emrutils.apply_boolean_options(
parsed_args.no_auto_terminate,
'--no-auto-terminate',
parsed_args.auto_terminate,
'--auto-terminate')
instances_config['TerminationProtected'] = \
emrutils.apply_boolean_options(
parsed_args.termination_protected,
'--termination-protected',
parsed_args.no_termination_protected,
'--no-termination-protected')
if (parsed_args.visible_to_all_users is False and
parsed_args.no_visible_to_all_users is False):
parsed_args.visible_to_all_users = True
params['VisibleToAllUsers'] = \
emrutils.apply_boolean_options(
parsed_args.visible_to_all_users,
'--visible-to-all-users',
parsed_args.no_visible_to_all_users,
'--no-visible-to-all-users')
params['Tags'] = emrutils.parse_tags(parsed_args.tags)
params['Instances'] = instances_config
if parsed_args.ec2_attributes is not None:
self._build_ec2_attributes(
cluster=params, parsed_attrs=parsed_args.ec2_attributes)
debugging_enabled = emrutils.apply_boolean_options(
parsed_args.enable_debugging,
'--enable-debugging',
parsed_args.no_enable_debugging,
'--no-enable-debugging')
if parsed_args.log_uri is None and debugging_enabled is True:
raise exceptions.LogUriError
if debugging_enabled is True:
self._update_cluster_dict(
cluster=params,
key='Steps',
value=[
self._build_enable_debugging(parsed_args, parsed_globals)])
if parsed_args.applications is not None:
if parsed_args.release_label is None:
app_list, ba_list, step_list = \
applicationutils.build_applications(
region=self.region,
parsed_applications=parsed_args.applications,
ami_version=params['AmiVersion'])
self._update_cluster_dict(
params, 'NewSupportedProducts', app_list)
self._update_cluster_dict(
params, 'BootstrapActions', ba_list)
self._update_cluster_dict(
params, 'Steps', step_list)
else:
params["Applications"] = []
for application in parsed_args.applications:
params["Applications"].append(application)
hbase_restore_config = parsed_args.restore_from_hbase_backup
if hbase_restore_config is not None:
args = hbaseutils.build_hbase_restore_from_backup_args(
dir=hbase_restore_config.get('Dir'),
backup_version=hbase_restore_config.get('BackupVersion'))
step_config = emrutils.build_step(
jar=constants.HBASE_JAR_PATH,
name=constants.HBASE_RESTORE_STEP_NAME,
action_on_failure=constants.CANCEL_AND_WAIT,
args=args)
self._update_cluster_dict(
params, 'Steps', [step_config])
if parsed_args.bootstrap_actions is not None:
self._build_bootstrap_actions(
cluster=params,
parsed_boostrap_actions=parsed_args.bootstrap_actions)
if parsed_args.emrfs is not None:
self._handle_emrfs_parameters(
cluster=params,
emrfs_args=parsed_args.emrfs,
release_label=parsed_args.release_label)
if parsed_args.steps is not None:
steps_list = steputils.build_step_config_list(
parsed_step_list=parsed_args.steps,
region=self.region,
release_label=parsed_args.release_label)
self._update_cluster_dict(
cluster=params, key='Steps', value=steps_list)
self._validate_required_applications(parsed_args)
run_job_flow_response = emrutils.call(
self._session, 'run_job_flow', params, self.region,
parsed_globals.endpoint_url, parsed_globals.verify_ssl)
constructed_result = self._construct_result(run_job_flow_response)
emrutils.display_response(self._session, 'run_job_flow',
constructed_result, parsed_globals)
return 0
def _construct_result(self, run_job_flow_result):
jobFlowId = None
if run_job_flow_result is not None:
jobFlowId = run_job_flow_result.get('JobFlowId')
if jobFlowId is not None:
return {'ClusterId': jobFlowId}
else:
return {}
def _build_ec2_attributes(self, cluster, parsed_attrs):
keys = parsed_attrs.keys()
instances = cluster['Instances']
if 'AvailabilityZone' in keys and 'SubnetId' in keys:
raise exceptions.SubnetAndAzValidationError
emrutils.apply_params(
src_params=parsed_attrs, src_key='KeyName',
dest_params=instances, dest_key='Ec2KeyName')
emrutils.apply_params(
src_params=parsed_attrs, src_key='SubnetId',
dest_params=instances, dest_key='Ec2SubnetId')
if 'AvailabilityZone' in keys:
instances['Placement'] = dict()
emrutils.apply_params(
src_params=parsed_attrs, src_key='AvailabilityZone',
dest_params=instances['Placement'],
dest_key='AvailabilityZone')
emrutils.apply_params(
src_params=parsed_attrs, src_key='InstanceProfile',
dest_params=cluster, dest_key='JobFlowRole')
emrutils.apply_params(
src_params=parsed_attrs, src_key='EmrManagedMasterSecurityGroup',
dest_params=instances, dest_key='EmrManagedMasterSecurityGroup')
emrutils.apply_params(
src_params=parsed_attrs, src_key='EmrManagedSlaveSecurityGroup',
dest_params=instances, dest_key='EmrManagedSlaveSecurityGroup')
emrutils.apply_params(
src_params=parsed_attrs, src_key='ServiceAccessSecurityGroup',
dest_params=instances, dest_key='ServiceAccessSecurityGroup')
emrutils.apply_params(
src_params=parsed_attrs, src_key='AdditionalMasterSecurityGroups',
dest_params=instances, dest_key='AdditionalMasterSecurityGroups')
emrutils.apply_params(
src_params=parsed_attrs, src_key='AdditionalSlaveSecurityGroups',
dest_params=instances, dest_key='AdditionalSlaveSecurityGroups')
emrutils.apply(params=cluster, key='Instances', value=instances)
return cluster
def _build_bootstrap_actions(
self, cluster, parsed_boostrap_actions):
cluster_ba_list = cluster.get('BootstrapActions')
if cluster_ba_list is None:
cluster_ba_list = []
bootstrap_actions = []
if len(cluster_ba_list) + len(parsed_boostrap_actions) \
> constants.MAX_BOOTSTRAP_ACTION_NUMBER:
raise ValueError('aws: error: maximum number of '
'bootstrap actions for a cluster exceeded.')
for ba in parsed_boostrap_actions:
ba_config = {}
if ba.get('Name') is not None:
ba_config['Name'] = ba.get('Name')
else:
ba_config['Name'] = constants.BOOTSTRAP_ACTION_NAME
script_arg_config = {}
emrutils.apply_params(
src_params=ba, src_key='Path',
dest_params=script_arg_config, dest_key='Path')
emrutils.apply_params(
src_params=ba, src_key='Args',
dest_params=script_arg_config, dest_key='Args')
emrutils.apply(
params=ba_config,
key='ScriptBootstrapAction',
value=script_arg_config)
bootstrap_actions.append(ba_config)
result = cluster_ba_list + bootstrap_actions
if len(result) > 0:
cluster['BootstrapActions'] = result
return cluster
def _build_enable_debugging(self, parsed_args, parsed_globals):
if parsed_args.release_label:
jar = constants.COMMAND_RUNNER
args = [constants.DEBUGGING_COMMAND]
else:
jar = emrutils.get_script_runner(self.region)
args = [emrutils.build_s3_link(
relative_path=constants.DEBUGGING_PATH,
region=self.region)]
return emrutils.build_step(
name=constants.DEBUGGING_NAME,
action_on_failure=constants.TERMINATE_CLUSTER,
jar=jar,
args=args)
def _update_cluster_dict(self, cluster, key, value):
if key in cluster.keys():
cluster[key] += value
elif value is not None and len(value) > 0:
cluster[key] = value
return cluster
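The merge rule used by `_update_cluster_dict` (extend a key that already exists, otherwise set it only for a non-empty value) can be sketched as a standalone function; the function name and sample dicts here are illustrative, not part of the CLI:

```python
# Standalone sketch of the _update_cluster_dict merge rule above:
# extend a key that already exists, otherwise set it only when the
# new value is non-empty.
def update_cluster_dict(cluster, key, value):
    if key in cluster:
        cluster[key] += value
    elif value is not None and len(value) > 0:
        cluster[key] = value
    return cluster

cluster = {'Steps': [{'Name': 'setup'}]}
update_cluster_dict(cluster, 'Steps', [{'Name': 'debugging'}])   # appends
update_cluster_dict(cluster, 'BootstrapActions', [])             # no-op: empty
print(sorted(cluster.keys()))
```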
def _validate_release_label_ami_version(self, parsed_args):
if parsed_args.ami_version is not None and \
parsed_args.release_label is not None:
raise exceptions.MutualExclusiveOptionError(
option1="--ami-version",
option2="--release-label")
if parsed_args.ami_version is None and \
parsed_args.release_label is None:
raise exceptions.RequiredOptionsError(
option1="--ami-version",
option2="--release-label")
# Checks if the applications required by steps are specified
# using the --applications option.
def _validate_required_applications(self, parsed_args):
specified_apps = set([])
if parsed_args.applications is not None:
specified_apps = \
set([app['Name'].lower() for app in parsed_args.applications])
missing_apps = self._get_missing_applications_for_steps(specified_apps,
parsed_args)
# Check for HBase.
if parsed_args.restore_from_hbase_backup is not None:
if constants.HBASE not in specified_apps:
missing_apps.add(constants.HBASE.title())
if len(missing_apps) != 0:
raise exceptions.MissingApplicationsError(
applications=missing_apps)
def _get_missing_applications_for_steps(self, specified_apps, parsed_args):
allowed_app_steps = set([constants.HIVE, constants.PIG,
constants.IMPALA])
missing_apps = set([])
if parsed_args.steps is not None:
for step in parsed_args.steps:
if len(missing_apps) == len(allowed_app_steps):
break
step_type = step.get('Type')
if step_type is not None:
step_type = step_type.lower()
if step_type in allowed_app_steps and \
step_type not in specified_apps:
missing_apps.add(step['Type'].title())
return missing_apps
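The step-to-application check above can be illustrated with a minimal standalone version; the function name and the hard-coded application set are assumptions for the sketch:

```python
# Minimal sketch of the missing-application check above: a Hive, Pig,
# or Impala step requires the matching application to be specified.
def get_missing_applications(specified_apps, steps,
                             allowed=('hive', 'pig', 'impala')):
    missing = set()
    for step in steps:
        step_type = (step.get('Type') or '').lower()
        if step_type in allowed and step_type not in specified_apps:
            missing.add(step_type.title())
    return missing

print(get_missing_applications({'hive'}, [{'Type': 'Hive'}, {'Type': 'Pig'}]))
```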
def _filter_configurations_in_special_cases(self, configurations,
parsed_args, parsed_configs):
if parsed_args.use_default_roles:
configurations = [x for x in configurations
if x.name != 'service_role'
and x.name != 'instance_profile']
return configurations
def _handle_emrfs_parameters(self, cluster, emrfs_args, release_label):
if release_label:
self.validate_no_emrfs_configuration(cluster)
emrfs_configuration = emrfsutils.build_emrfs_confiuration(
emrfs_args)
self._update_cluster_dict(
cluster=cluster, key='Configurations',
value=[emrfs_configuration])
else:
emrfs_ba_config_list = emrfsutils.build_bootstrap_action_configs(
self.region, emrfs_args)
self._update_cluster_dict(
cluster=cluster, key='BootstrapActions',
value=emrfs_ba_config_list)
def validate_no_emrfs_configuration(self, cluster):
if 'Configurations' in cluster:
for config in cluster['Configurations']:
if config is not None and \
config.get('Classification') == constants.EMRFS_SITE:
raise exceptions.DuplicateEmrFsConfigurationError
# ==== File: awscli-1.10.1/awscli/customizations/emr/steputils.py ====
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
def build_step_config_list(parsed_step_list, region, release_label):
step_config_list = []
for step in parsed_step_list:
step_type = step.get('Type')
if step_type is None:
step_type = constants.CUSTOM_JAR
step_type = step_type.lower()
step_config = {}
if step_type == constants.CUSTOM_JAR:
step_config = build_custom_jar_step(parsed_step=step)
elif step_type == constants.STREAMING:
step_config = build_streaming_step(
parsed_step=step, release_label=release_label)
elif step_type == constants.HIVE:
step_config = build_hive_step(
parsed_step=step, region=region,
release_label=release_label)
elif step_type == constants.PIG:
step_config = build_pig_step(
parsed_step=step, region=region,
release_label=release_label)
elif step_type == constants.IMPALA:
step_config = build_impala_step(
parsed_step=step, region=region,
release_label=release_label)
elif step_type == constants.SPARK:
step_config = build_spark_step(
parsed_step=step, region=region,
release_label=release_label)
else:
raise exceptions.UnknownStepTypeError(step_type=step_type)
step_config_list.append(step_config)
return step_config_list
def build_custom_jar_step(parsed_step):
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_CUSTOM_JAR_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
emrutils.check_required_field(
structure=constants.CUSTOM_JAR_STEP_CONFIG,
name='Jar',
value=parsed_step.get('Jar'))
return emrutils.build_step(
jar=parsed_step.get('Jar'),
args=parsed_step.get('Args'),
name=name,
action_on_failure=action_on_failure,
main_class=parsed_step.get('MainClass'),
properties=emrutils.parse_key_value_string(
parsed_step.get('Properties')))
def build_streaming_step(parsed_step, release_label):
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_STREAMING_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.STREAMING_STEP_CONFIG,
name='Args',
value=args)
emrutils.check_empty_string_list(name='Args', value=args)
args_list = []
if release_label:
jar = constants.COMMAND_RUNNER
args_list.append(constants.HADOOP_STREAMING_COMMAND)
else:
jar = constants.HADOOP_STREAMING_PATH
args_list += args
return emrutils.build_step(
jar=jar,
args=args_list,
name=name,
action_on_failure=action_on_failure)
def build_hive_step(parsed_step, release_label, region=None):
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.HIVE_STEP_CONFIG, name='Args', value=args)
emrutils.check_empty_string_list(name='Args', value=args)
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_HIVE_STEP_NAME)
action_on_failure = \
_apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
return emrutils.build_step(
jar=_get_runner_jar(release_label, region),
args=_build_hive_args(args, release_label, region),
name=name,
action_on_failure=action_on_failure)
def _build_hive_args(args, release_label, region):
args_list = []
if release_label:
args_list.append(constants.HIVE_SCRIPT_COMMAND)
else:
args_list.append(emrutils.build_s3_link(
relative_path=constants.HIVE_SCRIPT_PATH, region=region))
args_list.append(constants.RUN_HIVE_SCRIPT)
if not release_label:
args_list.append(constants.HIVE_VERSIONS)
args_list.append(constants.LATEST)
args_list.append(constants.ARGS)
args_list += args
return args_list
def build_pig_step(parsed_step, release_label, region=None):
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.PIG_STEP_CONFIG, name='Args', value=args)
emrutils.check_empty_string_list(name='Args', value=args)
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_PIG_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
return emrutils.build_step(
jar=_get_runner_jar(release_label, region),
args=_build_pig_args(args, release_label, region),
name=name,
action_on_failure=action_on_failure)
def _build_pig_args(args, release_label, region):
args_list = []
if release_label:
args_list.append(constants.PIG_SCRIPT_COMMAND)
else:
args_list.append(emrutils.build_s3_link(
relative_path=constants.PIG_SCRIPT_PATH, region=region))
args_list.append(constants.RUN_PIG_SCRIPT)
if not release_label:
args_list.append(constants.PIG_VERSIONS)
args_list.append(constants.LATEST)
args_list.append(constants.ARGS)
args_list += args
return args_list
def build_impala_step(parsed_step, release_label, region=None):
if release_label:
raise exceptions.UnknownStepTypeError(step_type=constants.IMPALA)
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_IMPALA_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
args_list = [
emrutils.build_s3_link(
relative_path=constants.IMPALA_INSTALL_PATH, region=region),
constants.RUN_IMPALA_SCRIPT]
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.IMPALA_STEP_CONFIG, name='Args', value=args)
args_list += args
return emrutils.build_step(
jar=emrutils.get_script_runner(region),
args=args_list,
name=name,
action_on_failure=action_on_failure)
def build_spark_step(parsed_step, release_label, region=None):
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_SPARK_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.SPARK_STEP_CONFIG, name='Args', value=args)
return emrutils.build_step(
jar=_get_runner_jar(release_label, region),
args=_build_spark_args(args, release_label, region),
name=name,
action_on_failure=action_on_failure)
def _build_spark_args(args, release_label, region):
args_list = []
if release_label:
args_list.append(constants.SPARK_SUBMIT_COMMAND)
else:
args_list.append(constants.SPARK_SUBMIT_PATH)
args_list += args
return args_list
def _apply_default_value(arg, value):
if arg is None:
arg = value
return arg
def _get_runner_jar(release_label, region):
return constants.COMMAND_RUNNER if release_label \
else emrutils.get_script_runner(region)
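The runner-jar selection in `_get_runner_jar` can be sketched standalone: release-label (emr-4.x and later) clusters use a local command runner, while AMI-version clusters use a regional script-runner jar in S3. The exact jar names below are assumptions mirroring the constants referenced in the code:

```python
# Sketch of the runner-jar choice above; the constant values are assumed.
COMMAND_RUNNER = 'command-runner.jar'
SCRIPT_RUNNER_PATH = '/libs/script-runner/script-runner.jar'

def get_runner_jar(release_label, region='us-east-1'):
    if release_label:
        return COMMAND_RUNNER
    return 's3://{0}.elasticmapreduce{1}'.format(region, SCRIPT_RUNNER_PATH)

print(get_runner_jar('emr-4.2.0'))
print(get_runner_jar(None, 'us-west-2'))
```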
# ==== File: awscli-1.10.1/awscli/customizations/emr/hbase.py ====
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import hbaseutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
class RestoreFromHBaseBackup(Command):
NAME = 'restore-from-hbase-backup'
DESCRIPTION = ('Restores HBase from S3. ' +
helptext.AVAILABLE_ONLY_FOR_AMI_VERSIONS)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'dir', 'required': True,
'help_text': helptext.HBASE_BACKUP_DIR},
{'name': 'backup-version',
'help_text': helptext.HBASE_BACKUP_VERSION}
]
def _run_main_command(self, parsed_args, parsed_globals):
steps = []
args = hbaseutils.build_hbase_restore_from_backup_args(
parsed_args.dir, parsed_args.backup_version)
step_config = emrutils.build_step(
jar=constants.HBASE_JAR_PATH,
name=constants.HBASE_RESTORE_STEP_NAME,
action_on_failure=constants.CANCEL_AND_WAIT,
args=args)
steps.append(step_config)
parameters = {'JobFlowId': parsed_args.cluster_id,
'Steps': steps}
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
class ScheduleHBaseBackup(Command):
NAME = 'schedule-hbase-backup'
DESCRIPTION = ('Adds a step to schedule automated HBase backup. ' +
helptext.AVAILABLE_ONLY_FOR_AMI_VERSIONS)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'type', 'required': True,
'help_text': "Backup type. You can specify 'incremental' or "
"'full'."},
{'name': 'dir', 'required': True,
'help_text': helptext.HBASE_BACKUP_DIR},
{'name': 'interval', 'required': True,
'help_text': 'The time between backups.'},
{'name': 'unit', 'required': True,
'help_text': "The time unit for the backup interval. "
"You can specify one of the following values: "
"'minutes', 'hours', or 'days'."},
{'name': 'start-time',
'help_text': 'The time of the first backup in ISO format,'
' e.g. 2014-04-21T05:26:10Z. Default is now.'},
{'name': 'consistent', 'action': 'store_true',
'help_text': 'Performs a consistent backup.'
' Pauses all write operations to the HBase cluster'
' during the backup process.'}
]
def _run_main_command(self, parsed_args, parsed_globals):
steps = []
self._check_type(parsed_args.type)
self._check_unit(parsed_args.unit)
args = self._build_hbase_schedule_backup_args(parsed_args)
step_config = emrutils.build_step(
jar=constants.HBASE_JAR_PATH,
name=constants.HBASE_SCHEDULE_BACKUP_STEP_NAME,
action_on_failure=constants.CANCEL_AND_WAIT,
args=args)
steps.append(step_config)
parameters = {'JobFlowId': parsed_args.cluster_id,
'Steps': steps}
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
def _check_type(self, type):
type = type.lower()
if type != constants.FULL and type != constants.INCREMENTAL:
raise ValueError('aws: error: invalid type. type should be either '
+ constants.FULL + ' or ' + constants.INCREMENTAL
+ '.')
def _check_unit(self, unit):
unit = unit.lower()
if (unit != constants.MINUTES and unit != constants.HOURS
and unit != constants.DAYS):
raise ValueError('aws: error: invalid unit. unit should be one of'
' the following values: ' + constants.MINUTES +
', ' + constants.HOURS + ' or ' + constants.DAYS +
'.')
def _build_hbase_schedule_backup_args(self, parsed_args):
args = [constants.HBASE_MAIN, constants.HBASE_SCHEDULED_BACKUP,
constants.TRUE, constants.HBASE_BACKUP_DIR, parsed_args.dir]
type = parsed_args.type.lower()
unit = parsed_args.unit.lower()
if parsed_args.consistent is True:
args.append(constants.HBASE_BACKUP_CONSISTENT)
if type == constants.FULL:
args.append(constants.HBASE_FULL_BACKUP_INTERVAL)
else:
args.append(constants.HBASE_INCREMENTAL_BACKUP_INTERVAL)
args.append(parsed_args.interval)
if type == constants.FULL:
args.append(constants.HBASE_FULL_BACKUP_INTERVAL_UNIT)
else:
args.append(constants.HBASE_INCREMENTAL_BACKUP_INTERVAL_UNIT)
args.append(unit)
args.append(constants.HBASE_BACKUP_STARTTIME)
if parsed_args.start_time is not None:
args.append(parsed_args.start_time)
else:
args.append(constants.NOW)
return args
class CreateHBaseBackup(Command):
NAME = 'create-hbase-backup'
DESCRIPTION = ('Creates an HBase backup in S3. ' +
helptext.AVAILABLE_ONLY_FOR_AMI_VERSIONS)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'dir', 'required': True,
'help_text': helptext.HBASE_BACKUP_DIR},
{'name': 'consistent', 'action': 'store_true',
'help_text': 'Performs a consistent backup. Pauses all write'
' operations to the HBase cluster during the backup'
' process.'}
]
def _run_main_command(self, parsed_args, parsed_globals):
steps = []
args = self._build_hbase_backup_args(parsed_args)
step_config = emrutils.build_step(
jar=constants.HBASE_JAR_PATH,
name=constants.HBASE_BACKUP_STEP_NAME,
action_on_failure=constants.CANCEL_AND_WAIT,
args=args)
steps.append(step_config)
parameters = {'JobFlowId': parsed_args.cluster_id,
'Steps': steps}
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
def _build_hbase_backup_args(self, parsed_args):
args = [constants.HBASE_MAIN,
constants.HBASE_BACKUP,
constants.HBASE_BACKUP_DIR, parsed_args.dir]
if parsed_args.consistent is True:
args.append(constants.HBASE_BACKUP_CONSISTENT)
return args
class DisableHBaseBackups(Command):
NAME = 'disable-hbase-backups'
DESCRIPTION = ('Adds a step to disable automated HBase backups. ' +
helptext.AVAILABLE_ONLY_FOR_AMI_VERSIONS)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'full', 'action': 'store_true',
'help_text': 'Disables full backup.'},
{'name': 'incremental', 'action': 'store_true',
'help_text': 'Disables incremental backup.'}
]
def _run_main_command(self, parsed_args, parsed_globals):
steps = []
args = self._build_hbase_disable_backups_args(parsed_args)
step_config = emrutils.build_step(
constants.HBASE_JAR_PATH,
constants.HBASE_SCHEDULE_BACKUP_STEP_NAME,
constants.CANCEL_AND_WAIT,
args)
steps.append(step_config)
parameters = {'JobFlowId': parsed_args.cluster_id,
'Steps': steps}
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
def _build_hbase_disable_backups_args(self, parsed_args):
args = [constants.HBASE_MAIN, constants.HBASE_SCHEDULED_BACKUP,
constants.FALSE]
if parsed_args.full is False and parsed_args.incremental is False:
error_message = 'Should specify at least one of --' +\
constants.FULL + ' and --' +\
constants.INCREMENTAL + '.'
raise ValueError(error_message)
if parsed_args.full is True:
args.append(constants.HBASE_DISABLE_FULL_BACKUP)
if parsed_args.incremental is True:
args.append(constants.HBASE_DISABLE_INCREMENTAL_BACKUP)
return args
# ==== File: awscli-1.10.1/awscli/customizations/emr/sshutils.py ====
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import constants
from botocore.exceptions import WaiterError
LOG = logging.getLogger(__name__)
def validate_and_find_master_dns(session, parsed_globals, cluster_id):
"""
Utility method for the ssh, socks, put, and get commands.
Check if the cluster to be connected to is
terminated or being terminated.
Check if the cluster is running.
Find master instance public dns of a given cluster.
Return the latest created master instance public dns name.
Throw MasterDNSNotAvailableError or ClusterTerminatedError.
"""
cluster_state = emrutils.get_cluster_state(
session, parsed_globals, cluster_id)
if cluster_state in constants.TERMINATED_STATES:
raise exceptions.ClusterTerminatedError
emr = emrutils.get_client(session, parsed_globals)
try:
cluster_running_waiter = emr.get_waiter('cluster_running')
if cluster_state in constants.STARTING_STATES:
print("Waiting for the cluster to start.")
cluster_running_waiter.wait(ClusterId=cluster_id)
except WaiterError:
raise exceptions.MasterDNSNotAvailableError
return emrutils.find_master_public_dns(
session=session, cluster_id=cluster_id,
parsed_globals=parsed_globals)
def validate_ssh_with_key_file(key_file):
if (emrutils.which('putty.exe') or emrutils.which('ssh') or
emrutils.which('ssh.exe')) is None:
raise exceptions.SSHNotFoundError
else:
check_ssh_key_format(key_file)
def validate_scp_with_key_file(key_file):
if (emrutils.which('pscp.exe') or emrutils.which('scp') or
emrutils.which('scp.exe')) is None:
raise exceptions.SCPNotFoundError
else:
check_scp_key_format(key_file)
def check_scp_key_format(key_file):
# If only pscp is present and the file format is incorrect
if (emrutils.which('pscp.exe') is not None and
(emrutils.which('scp.exe') or emrutils.which('scp')) is None):
if check_command_key_format(key_file, ['ppk']) is False:
raise exceptions.WrongPuttyKeyError
else:
pass
def check_ssh_key_format(key_file):
# If only putty is present and the file format is incorrect
if (emrutils.which('putty.exe') is not None and
(emrutils.which('ssh.exe') or emrutils.which('ssh')) is None):
if check_command_key_format(key_file, ['ppk']) is False:
raise exceptions.WrongPuttyKeyError
else:
pass
def check_command_key_format(key_file, accepted_file_format=()):
# Immutable default avoids the shared mutable-default pitfall; the key
# file is accepted when it ends with one of the given suffixes.
return any(key_file.endswith(i) for i in accepted_file_format)
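The suffix check above reduces to a single `any` expression; this sketch (with an illustrative function name) shows the PuTTY `.ppk` case the callers guard against:

```python
# Sketch of the key-format check above: a key file passes when it ends
# with one of the accepted suffixes (PuTTY tools require .ppk keys).
def command_key_format_ok(key_file, accepted_file_formats=()):
    return any(key_file.endswith(fmt) for fmt in accepted_file_formats)

print(command_key_format_ok('mykey.ppk', ['ppk']))
print(command_key_format_ok('mykey.pem', ['ppk']))
```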
# ==== File: awscli-1.10.1/awscli/customizations/emr/command.py ====
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.commands import BasicCommand
from awscli.customizations.emr import config
from awscli.customizations.emr import configutils
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
LOG = logging.getLogger(__name__)
class Command(BasicCommand):
region = None
UNSUPPORTED_COMMANDS_FOR_RELEASE_BASED_CLUSTERS = set([
'install-applications',
'restore-from-hbase-backup',
'schedule-hbase-backup',
'create-hbase-backup',
'disable-hbase-backups',
])
def supports_arg(self, name):
return any((x['name'] == name for x in self.ARG_TABLE))
def _run_main(self, parsed_args, parsed_globals):
self._apply_configs(parsed_args,
configutils.get_configs(self._session))
self.region = emrutils.get_region(self._session, parsed_globals)
self._validate_unsupported_commands_for_release_based_clusters(
parsed_args, parsed_globals)
return self._run_main_command(parsed_args, parsed_globals)
def _apply_configs(self, parsed_args, parsed_configs):
applicable_configurations = \
self._get_applicable_configurations(parsed_args, parsed_configs)
configs_added = {}
for configuration in applicable_configurations:
configuration.add(self, parsed_args,
parsed_configs[configuration.name])
configs_added[configuration.name] = \
parsed_configs[configuration.name]
if configs_added:
LOG.debug("Updated arguments with configs: %s" % configs_added)
else:
LOG.debug("No configs applied")
LOG.debug("Running command with args: %s" % parsed_args)
def _get_applicable_configurations(self, parsed_args, parsed_configs):
# We need to find the applicable configurations by applying
# following filters:
# 1. Configurations that are applicable to this command
# 2. Configurations that are present in parsed_configs
# 3. Configurations that are not present in parsed_args
configurations = \
config.get_applicable_configurations(self)
configurations = [x for x in configurations
if x.name in parsed_configs
and not x.is_present(parsed_args)]
configurations = self._filter_configurations_in_special_cases(
configurations, parsed_args, parsed_configs)
return configurations
def _filter_configurations_in_special_cases(self, configurations,
parsed_args, parsed_configs):
# Subclasses can override this method to filter the applicable
# configurations further based upon some custom logic
# Default behavior is to return the configurations list as is
return configurations
def _run_main_command(self, parsed_args, parsed_globals):
# Subclasses should implement this method.
# parsed_globals are the parsed global args (things like region,
# profile, output, etc.)
# parsed_args are any arguments you've defined in your ARG_TABLE
# that are parsed.
# parsed_args are updated to include any emr specific configuration
# from the config file if the corresponding argument is not
# explicitly specified on the CLI
raise NotImplementedError("_run_main_command")
def _validate_unsupported_commands_for_release_based_clusters(
self, parsed_args, parsed_globals):
command = self.NAME
if (command in self.UNSUPPORTED_COMMANDS_FOR_RELEASE_BASED_CLUSTERS
and hasattr(parsed_args, 'cluster_id')):
release_label = emrutils.get_release_label(
parsed_args.cluster_id, self._session, self.region,
parsed_globals.endpoint_url, parsed_globals.verify_ssl)
if release_label:
raise exceptions.UnsupportedCommandWithReleaseError(
command=command,
release_label=release_label)
def override_args_required_option(argument_table, args, session, **kwargs):
# This function overrides the 'required' property of an argument
# if a value corresponding to that argument is present in the config
# file
# We don't want to override when user is viewing the help so that we
# can show the required options correctly in the help
need_to_override = not (len(args) == 1 and args[0] == 'help')
if need_to_override:
parsed_configs = configutils.get_configs(session)
for arg_name in argument_table.keys():
if arg_name.replace('-', '_') in parsed_configs:
argument_table[arg_name].required = False
# ==== File: awscli-1.10.1/awscli/customizations/emr/emrutils.py ====
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import json
import logging
import os
from awscli.clidriver import CLIOperationCaller
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
from botocore.exceptions import WaiterError, NoCredentialsError
from botocore import xform_name
LOG = logging.getLogger(__name__)
def parse_tags(raw_tags_list):
tags_dict_list = []
if raw_tags_list:
for tag in raw_tags_list:
if tag.find('=') == -1:
key, value = tag, ''
else:
key, value = tag.split('=', 1)
tags_dict_list.append({'Key': key, 'Value': value})
return tags_dict_list
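The tag parsing above can be exercised with a self-contained sketch: each `key=value` string becomes a `{'Key': ..., 'Value': ...}` dict, a bare key gets an empty value, and only the first `=` splits:

```python
# Standalone sketch of the parse_tags behavior above.
def parse_tags(raw_tags_list):
    tags = []
    for tag in raw_tags_list or []:
        if '=' not in tag:
            key, value = tag, ''
        else:
            key, value = tag.split('=', 1)
        tags.append({'Key': key, 'Value': value})
    return tags

print(parse_tags(['team=data', 'expr=a=b', 'adhoc']))
```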
def parse_key_value_string(key_value_string):
# raw_key_value_string is a list of key value pairs separated by comma.
# Examples: "k1=v1,k2='v 2',k3,k4"
key_value_list = []
if key_value_string is not None:
raw_key_value_list = key_value_string.split(',')
for kv in raw_key_value_list:
if kv.find('=') == -1:
key, value = kv, ''
else:
key, value = kv.split('=', 1)
key_value_list.append({'Key': key, 'Value': value})
return key_value_list
else:
return None
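The key/value string parsing above follows the same convention: pairs are comma-separated, only the first `=` in a pair splits key from value, and a pair without `=` yields an empty value. A self-contained sketch:

```python
# Standalone sketch of parse_key_value_string above.
def parse_key_value_string(key_value_string):
    if key_value_string is None:
        return None
    pairs = []
    for kv in key_value_string.split(','):
        if '=' not in kv:
            key, value = kv, ''
        else:
            key, value = kv.split('=', 1)
        pairs.append({'Key': key, 'Value': value})
    return pairs

print(parse_key_value_string('k1=v1,k3'))
```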
def apply_boolean_options(
true_option, true_option_name, false_option, false_option_name):
if true_option and false_option:
error_message = \
'aws: error: cannot use both ' + true_option_name + \
' and ' + false_option_name + ' options together.'
raise ValueError(error_message)
elif true_option:
return True
else:
return False
# Deprecated: use apply_dict instead.
def apply(params, key, value):
if value:
params[key] = value
return params
def apply_dict(params, key, value):
if value:
params[key] = value
return params
def apply_params(src_params, src_key, dest_params, dest_key):
    if src_key in src_params and src_params[src_key]:
dest_params[dest_key] = src_params[src_key]
return dest_params
def build_step(
jar, name='Step',
action_on_failure=constants.DEFAULT_FAILURE_ACTION,
args=None,
main_class=None,
properties=None):
check_required_field(
structure='HadoopJarStep', name='Jar', value=jar)
step = {}
apply_dict(step, 'Name', name)
apply_dict(step, 'ActionOnFailure', action_on_failure)
jar_config = {}
jar_config['Jar'] = jar
apply_dict(jar_config, 'Args', args)
apply_dict(jar_config, 'MainClass', main_class)
apply_dict(jar_config, 'Properties', properties)
step['HadoopJarStep'] = jar_config
return step
def build_bootstrap_action(
path,
name='Bootstrap Action',
args=None):
if path is None:
raise exceptions.MissingParametersError(
object_name='ScriptBootstrapActionConfig', missing='Path')
ba_config = {}
apply_dict(ba_config, 'Name', name)
script_config = {}
apply_dict(script_config, 'Args', args)
script_config['Path'] = path
apply_dict(ba_config, 'ScriptBootstrapAction', script_config)
return ba_config
def build_s3_link(relative_path='', region='us-east-1'):
if region is None:
region = 'us-east-1'
return 's3://{0}.elasticmapreduce{1}'.format(region, relative_path)
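In the link format above, the region becomes the bucket prefix and the relative path is appended verbatim. A standalone sketch (`build_s3_link_sketch` is an illustrative name):

```python
# Sketch of the region-qualified S3 link format used above.
def build_s3_link_sketch(relative_path='', region='us-east-1'):
    if region is None:
        region = 'us-east-1'  # fall back to the default region
    return 's3://{0}.elasticmapreduce{1}'.format(region, relative_path)

print(build_s3_link_sketch('/libs/script-runner/script-runner.jar',
                           'us-west-2'))
```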
def get_script_runner(region='us-east-1'):
if region is None:
region = 'us-east-1'
return build_s3_link(
relative_path=constants.SCRIPT_RUNNER_PATH, region=region)
def check_required_field(structure, name, value):
if not value:
raise exceptions.MissingParametersError(
object_name=structure, missing=name)
def check_empty_string_list(name, value):
if not value or (len(value) == 1 and value[0].strip() == ""):
raise exceptions.EmptyListError(param=name)
def call(session, operation_name, parameters, region_name=None,
endpoint_url=None, verify=None):
# We could get an error from get_endpoint() about not having
# a region configured. Before this happens we want to check
# for credentials so we can give a good error message.
if session.get_credentials() is None:
raise NoCredentialsError()
client = session.create_client(
'emr', region_name=region_name, endpoint_url=endpoint_url,
verify=verify)
LOG.debug('Calling ' + str(operation_name))
return getattr(client, operation_name)(**parameters)
def get_example_file(command):
return open('awscli/examples/emr/' + command + '.rst')
def dict_to_string(data, indent=2):
    return json.dumps(data, indent=indent)
def get_client(session, parsed_globals):
return session.create_client(
'emr',
region_name=get_region(session, parsed_globals),
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
def _get_creation_date_time(instance):
return instance['Status']['Timeline']['CreationDateTime']
def _find_most_recently_created(pages):
""" Find instance which is most recently created. """
most_recently_created = None
for page in pages:
for instance in page['Instances']:
if (not most_recently_created or
_get_creation_date_time(most_recently_created) <
_get_creation_date_time(instance)):
most_recently_created = instance
return most_recently_created
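The scan above keeps a running maximum over `CreationDateTime` across all pages. A self-contained sketch with fabricated instance records shaped like the EMR `ListInstances` response (helper names are illustrative):

```python
import datetime

# Sketch of the running-maximum scan above.
def most_recently_created_sketch(pages):
    newest = None
    for page in pages:
        for instance in page['Instances']:
            created = instance['Status']['Timeline']['CreationDateTime']
            if (newest is None or
                    newest['Status']['Timeline']['CreationDateTime'] <
                    created):
                newest = instance
    return newest

def _record(instance_id, day):
    # Fabricated record mirroring the fields the scan reads.
    return {'Id': instance_id,
            'Status': {'Timeline': {
                'CreationDateTime': datetime.datetime(2016, 1, day)}}}

pages = [{'Instances': [_record('i-1', 1), _record('i-2', 3)]},
         {'Instances': [_record('i-3', 2)]}]
print(most_recently_created_sketch(pages)['Id'])  # i-2
```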
def get_cluster_state(session, parsed_globals, cluster_id):
client = get_client(session, parsed_globals)
data = client.describe_cluster(ClusterId=cluster_id)
return data['Cluster']['Status']['State']
def _find_master_instance(session, parsed_globals, cluster_id):
"""
Find the most recently created master instance.
If the master instance is not available yet,
the method will return None.
"""
client = get_client(session, parsed_globals)
paginator = client.get_paginator('list_instances')
pages = paginator.paginate(
ClusterId=cluster_id, InstanceGroupTypes=['MASTER'])
return _find_most_recently_created(pages)
def find_master_public_dns(session, parsed_globals, cluster_id):
"""
Returns the master_instance's 'PublicDnsName'.
"""
master_instance = _find_master_instance(
session, parsed_globals, cluster_id)
if master_instance is None:
return ""
else:
return master_instance.get('PublicDnsName')
def which(program):
for path in os.environ["PATH"].split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, program)
if os.path.isfile(exe_file) and os.access(exe_file, os.X_OK):
return exe_file
return None
def call_and_display_response(session, operation_name, parameters,
parsed_globals):
cli_operation_caller = CLIOperationCaller(session)
cli_operation_caller.invoke(
'emr', operation_name,
parameters, parsed_globals)
def display_response(session, operation_name, result, parsed_globals):
cli_operation_caller = CLIOperationCaller(session)
# Calling a private method. Should be changed after the functionality
# is moved outside CliOperationCaller.
cli_operation_caller._display_response(
operation_name, result, parsed_globals)
def get_region(session, parsed_globals):
region = parsed_globals.region
if region is None:
region = session.get_config_variable('region')
return region
def join(values, separator=',', lastSeparator='and'):
"""
Helper method to print a list of values
[1,2,3] -> '1, 2 and 3'
"""
values = [str(x) for x in values]
if len(values) < 1:
return ""
    elif len(values) == 1:
return values[0]
else:
separator = '%s ' % separator
return ' '.join([separator.join(values[:-1]),
lastSeparator, values[-1]])
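The helper above renders a Python list as an English list with a final conjunction. A standalone sketch showing all three cases (`join_sketch` is an illustrative name; `==` is used where the module compares with `is`):

```python
# Sketch of the list-joining helper above.
def join_sketch(values, separator=',', last_separator='and'):
    values = [str(x) for x in values]
    if len(values) < 1:
        return ''
    elif len(values) == 1:
        return values[0]
    separator = '%s ' % separator
    return ' '.join([separator.join(values[:-1]),
                     last_separator, values[-1]])

print(join_sketch([1, 2, 3]))  # 1, 2 and 3
```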
def split_to_key_value(string):
if string.find('=') == -1:
return string, ''
else:
return string.split('=', 1)
def get_cluster(cluster_id, session, region,
endpoint_url, verify_ssl):
describe_cluster_params = {'ClusterId': cluster_id}
describe_cluster_response = call(
session, 'describe_cluster', describe_cluster_params,
region, endpoint_url,
verify_ssl)
if describe_cluster_response is not None:
return describe_cluster_response.get('Cluster')
def get_release_label(cluster_id, session, region,
endpoint_url, verify_ssl):
cluster = get_cluster(cluster_id, session, region,
endpoint_url, verify_ssl)
if cluster is not None:
return cluster.get('ReleaseLabel')
# awscli-1.10.1/awscli/customizations/emr/ssh.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import subprocess
import tempfile
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import sshutils
from awscli.customizations.emr.command import Command
KEY_PAIR_FILE_HELP_TEXT = '\nA value for the variable Key Pair File ' \
'can be set in the AWS CLI config file using the "aws configure set" ' \
'command.\n'
class Socks(Command):
NAME = 'socks'
    DESCRIPTION = ('Create a SOCKS tunnel on port 8157 from your machine '
                   'to the master node.\n%s' % KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to ssh into'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
]
def _run_main_command(self, parsed_args, parsed_globals):
try:
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_ssh_with_key_file(key_file)
if (emrutils.which('ssh') or emrutils.which('ssh.exe')):
command = ['ssh', '-o', 'StrictHostKeyChecking=no', '-o',
'ServerAliveInterval=10', '-ND', '8157', '-i',
parsed_args.key_pair_file, constants.SSH_USER +
'@' + master_dns]
else:
command = ['putty', '-ssh', '-i', parsed_args.key_pair_file,
constants.SSH_USER + '@' + master_dns, '-N', '-D',
'8157']
print(' '.join(command))
rc = subprocess.call(command)
return rc
except KeyboardInterrupt:
print('Disabling Socks Tunnel.')
return 0
class SSH(Command):
NAME = 'ssh'
DESCRIPTION = ('SSH into master node of the cluster.\n%s' %
KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to ssh into'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
{'name': 'command', 'help_text': 'Command to execute on Master Node'}
]
def _run_main_command(self, parsed_args, parsed_globals):
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_ssh_with_key_file(key_file)
f = tempfile.NamedTemporaryFile(delete=False)
if (emrutils.which('ssh') or emrutils.which('ssh.exe')):
command = ['ssh', '-o', 'StrictHostKeyChecking=no', '-o',
'ServerAliveInterval=10', '-i',
parsed_args.key_pair_file, constants.SSH_USER +
'@' + master_dns]
if parsed_args.command:
command.append(parsed_args.command)
else:
command = ['putty', '-ssh', '-i', parsed_args.key_pair_file,
constants.SSH_USER + '@' + master_dns, '-t']
if parsed_args.command:
f.write(parsed_args.command)
f.write('\nread -n1 -r -p "Command completed. Press any key."')
command.append('-m')
command.append(f.name)
f.close()
print(' '.join(command))
rc = subprocess.call(command)
os.remove(f.name)
return rc
class Put(Command):
NAME = 'put'
DESCRIPTION = ('Put file onto the master node.\n%s' %
KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to put file onto'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
{'name': 'src', 'required': True,
'help_text': 'Source file path on local machine'},
{'name': 'dest', 'help_text': 'Destination file path on remote host'}
]
def _run_main_command(self, parsed_args, parsed_globals):
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_scp_with_key_file(key_file)
if (emrutils.which('scp') or emrutils.which('scp.exe')):
command = ['scp', '-r', '-o StrictHostKeyChecking=no',
'-i', parsed_args.key_pair_file, parsed_args.src,
constants.SSH_USER + '@' + master_dns]
else:
command = ['pscp', '-scp', '-r', '-i', parsed_args.key_pair_file,
parsed_args.src, constants.SSH_USER + '@' + master_dns]
# if the instance is not terminated
if parsed_args.dest:
command[-1] = command[-1] + ":" + parsed_args.dest
else:
command[-1] = command[-1] + ":" + parsed_args.src.split('/')[-1]
print(' '.join(command))
rc = subprocess.call(command)
return rc
class Get(Command):
NAME = 'get'
DESCRIPTION = ('Get file from master node.\n%s' % KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to get file from'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
{'name': 'src', 'required': True,
'help_text': 'Source file path on remote host'},
{'name': 'dest', 'help_text': 'Destination file path on your machine'}
]
def _run_main_command(self, parsed_args, parsed_globals):
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_scp_with_key_file(key_file)
if (emrutils.which('scp') or emrutils.which('scp.exe')):
command = ['scp', '-r', '-o StrictHostKeyChecking=no', '-i',
parsed_args.key_pair_file, constants.SSH_USER + '@' +
master_dns + ':' + parsed_args.src]
else:
command = ['pscp', '-scp', '-r', '-i', parsed_args.key_pair_file,
constants.SSH_USER + '@' + master_dns + ':' +
parsed_args.src]
if parsed_args.dest:
command.append(parsed_args.dest)
else:
command.append(parsed_args.src.split('/')[-1])
print(' '.join(command))
rc = subprocess.call(command)
return rc
# awscli-1.10.1/awscli/customizations/emr/createdefaultroles.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import re
import botocore.exceptions
from botocore import xform_name
from awscli.customizations.emr import configutils
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from awscli.customizations.emr.command import Command
from awscli.customizations.emr.constants import EC2
from awscli.customizations.emr.constants import EC2_ROLE_NAME
from awscli.customizations.emr.constants import EC2_ROLE_ARN_PATTERN
from awscli.customizations.emr.constants import EMR
from awscli.customizations.emr.constants import EMR_ROLE_NAME
from awscli.customizations.emr.constants import EMR_ROLE_ARN_PATTERN
from awscli.customizations.emr.exceptions import ResolveServicePrincipalError
LOG = logging.getLogger(__name__)
def assume_role_policy(serviceprincipal):
return {
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {"Service": serviceprincipal},
"Action": "sts:AssumeRole"
}
]
}
def get_service_role_policy_arn(region):
region_suffix = _get_policy_arn_suffix(region)
return EMR_ROLE_ARN_PATTERN.replace("{{region_suffix}}", region_suffix)
def get_ec2_role_policy_arn(region):
region_suffix = _get_policy_arn_suffix(region)
return EC2_ROLE_ARN_PATTERN.replace("{{region_suffix}}", region_suffix)
def _get_policy_arn_suffix(region):
region_string = region.lower()
if region_string.startswith("cn-"):
return "aws-cn"
elif region_string.startswith("us-gov"):
return "aws-us-gov"
else:
return "aws"
def get_service_principal(service, endpoint_host):
return service + '.' + _get_suffix(endpoint_host)
def _get_suffix(endpoint_host):
return _get_suffix_from_endpoint_host(endpoint_host)
def _get_suffix_from_endpoint_host(endpoint_host):
suffix_match = _get_regex_match_from_endpoint_host(endpoint_host)
if suffix_match is not None and suffix_match.lastindex >= 3:
suffix = suffix_match.group(3)
else:
raise ResolveServicePrincipalError
return suffix
def _get_regex_match_from_endpoint_host(endpoint_host):
if endpoint_host is None:
return None
regex_match = re.match("(https?://)([^.]+).elasticmapreduce.([^/]*)",
endpoint_host)
# Supports 'elasticmapreduce.{region}.' and '{region}.elasticmapreduce.'
if regex_match is None:
regex_match = re.match("(https?://elasticmapreduce).([^.]+).([^/]*)",
endpoint_host)
return regex_match
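The two patterns above accept both endpoint layouts, `{region}.elasticmapreduce.{suffix}` and `elasticmapreduce.{region}.{suffix}`; in each, group 3 is the DNS suffix used to build the service principal. A standalone sketch (`endpoint_suffix_sketch` is an illustrative name; the dots in the patterns are left unescaped exactly as in the module, so they match any character):

```python
import re

# Sketch of the two-pattern suffix extraction above.
def endpoint_suffix_sketch(endpoint_host):
    match = re.match('(https?://)([^.]+).elasticmapreduce.([^/]*)',
                     endpoint_host)
    if match is None:
        match = re.match('(https?://elasticmapreduce).([^.]+).([^/]*)',
                         endpoint_host)
    if match is not None and match.lastindex >= 3:
        return match.group(3)
    return None

print(endpoint_suffix_sketch(
    'https://us-east-1.elasticmapreduce.amazonaws.com'))
print(endpoint_suffix_sketch(
    'https://elasticmapreduce.cn-north-1.amazonaws.com.cn'))
```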
class CreateDefaultRoles(Command):
NAME = "create-default-roles"
    DESCRIPTION = ('Creates the default IAM roles ' +
                   EC2_ROLE_NAME + ' and ' +
                   EMR_ROLE_NAME + ', which can be used when creating the'
' cluster using the create-cluster command. The default'
' roles for EMR use managed policies, which are updated'
' automatically to support future EMR functionality.\n'
'\nIf you do not have a Service Role and Instance Profile '
'variable set for your create-cluster command in the AWS '
'CLI config file, create-default-roles will automatically '
'set the values for these variables with these default '
'roles. If you have already set a value for Service Role '
'or Instance Profile, create-default-roles will not '
'automatically set the defaults for these variables in the '
'AWS CLI config file. You can view settings for variables '
'in the config file using the "aws configure get" command.'
'\n')
ARG_TABLE = [
{'name': 'iam-endpoint',
'no_paramfile': True,
         'help_text': '<p>The IAM endpoint to call for creating the roles.'
                      ' This is optional and should only be specified when a'
                      ' custom endpoint should be called for IAM operations'
                      '.</p>'}
]
def _run_main_command(self, parsed_args, parsed_globals):
ec2_result = None
ec2_policy = None
emr_result = None
emr_policy = None
self.iam_endpoint_url = parsed_args.iam_endpoint
self._check_for_iam_endpoint(self.region, self.iam_endpoint_url)
self.emr_endpoint_url = \
self._session.create_client(
'emr',
region_name=self.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl).meta.endpoint_url
LOG.debug('elasticmapreduce endpoint used for resolving'
' service principal: ' + self.emr_endpoint_url)
# Check if the default EC2 Role for EMR exists.
role_name = EC2_ROLE_NAME
if self._check_if_role_exists(role_name, parsed_globals):
LOG.debug('Role ' + role_name + ' exists.')
else:
LOG.debug('Role ' + role_name + ' does not exist.'
' Creating default role for EC2: ' + role_name)
role_arn = get_ec2_role_policy_arn(self.region)
ec2_result = self._create_role_with_role_policy(
role_name, EC2, role_arn, parsed_globals)
ec2_policy = self._get_role_policy(role_arn, parsed_globals)
# Check if the default EC2 Instance Profile for EMR exists.
instance_profile_name = EC2_ROLE_NAME
if self._check_if_instance_profile_exists(instance_profile_name,
parsed_globals):
LOG.debug('Instance Profile ' + instance_profile_name + ' exists.')
else:
LOG.debug('Instance Profile ' + instance_profile_name +
                      ' does not exist. Creating default Instance Profile ' +
instance_profile_name)
self._create_instance_profile_with_role(instance_profile_name,
instance_profile_name,
parsed_globals)
# Check if the default EMR Role exists.
role_name = EMR_ROLE_NAME
if self._check_if_role_exists(role_name, parsed_globals):
LOG.debug('Role ' + role_name + ' exists.')
else:
LOG.debug('Role ' + role_name + ' does not exist.'
' Creating default role for EMR: ' + role_name)
role_arn = get_service_role_policy_arn(self.region)
emr_result = self._create_role_with_role_policy(
role_name, EMR, role_arn, parsed_globals)
emr_policy = self._get_role_policy(role_arn, parsed_globals)
configutils.update_roles(self._session)
emrutils.display_response(
self._session,
'create_role',
self._construct_result(ec2_result, ec2_policy,
emr_result, emr_policy),
parsed_globals)
return 0
def _check_for_iam_endpoint(self, region, iam_endpoint):
try:
self._session.create_client('emr', region)
except botocore.exceptions.UnknownEndpointError:
if iam_endpoint is None:
raise exceptions.UnknownIamEndpointError(region=region)
def _construct_result(self, ec2_response, ec2_policy,
emr_response, emr_policy):
result = []
self._construct_role_and_role_policy_structure(
result, ec2_response, ec2_policy)
self._construct_role_and_role_policy_structure(
result, emr_response, emr_policy)
return result
    def _construct_role_and_role_policy_structure(
            self, result_list, response, policy):
        if response is not None and response['Role'] is not None:
            result_list.append({'Role': response['Role'],
                                'RolePolicy': policy})
        return result_list
def _check_if_role_exists(self, role_name, parsed_globals):
parameters = {'RoleName': role_name}
try:
self._call_iam_operation('GetRole', parameters, parsed_globals)
except Exception as e:
role_not_found_msg = 'The role with name ' + role_name +\
' cannot be found'
if role_not_found_msg in e.error_message:
# No role error.
return False
else:
# Some other error. raise.
raise e
return True
def _check_if_instance_profile_exists(self, instance_profile_name,
parsed_globals):
parameters = {'InstanceProfileName': instance_profile_name}
try:
self._call_iam_operation('GetInstanceProfile', parameters,
parsed_globals)
except Exception as e:
profile_not_found_msg = 'Instance Profile ' +\
instance_profile_name +\
' cannot be found.'
if profile_not_found_msg in e.error_message:
# No instance profile error.
return False
else:
# Some other error. raise.
raise e
return True
def _get_role_policy(self, arn, parsed_globals):
parameters = {}
parameters['PolicyArn'] = arn
policy_details = self._call_iam_operation('GetPolicy', parameters,
parsed_globals)
parameters["VersionId"] = policy_details["Policy"]["DefaultVersionId"]
policy_version_details = self._call_iam_operation('GetPolicyVersion',
parameters,
parsed_globals)
return policy_version_details["PolicyVersion"]["Document"]
def _create_role_with_role_policy(
self, role_name, service_name, role_arn, parsed_globals):
service_principal = get_service_principal(service_name,
self.emr_endpoint_url)
LOG.debug(service_principal)
parameters = {'RoleName': role_name}
_assume_role_policy = \
emrutils.dict_to_string(assume_role_policy(service_principal))
parameters['AssumeRolePolicyDocument'] = _assume_role_policy
create_role_response = self._call_iam_operation('CreateRole',
parameters,
parsed_globals)
parameters = {}
parameters['PolicyArn'] = role_arn
parameters['RoleName'] = role_name
self._call_iam_operation('AttachRolePolicy',
parameters, parsed_globals)
return create_role_response
def _create_instance_profile_with_role(self, instance_profile_name,
role_name, parsed_globals):
# Creating an Instance Profile
parameters = {'InstanceProfileName': instance_profile_name}
self._call_iam_operation('CreateInstanceProfile', parameters,
parsed_globals)
# Adding the role to the Instance Profile
parameters = {}
parameters['InstanceProfileName'] = instance_profile_name
parameters['RoleName'] = role_name
self._call_iam_operation('AddRoleToInstanceProfile', parameters,
parsed_globals)
def _call_iam_operation(self, operation_name, parameters, parsed_globals):
client = self._session.create_client(
'iam', region_name=self.region, endpoint_url=self.iam_endpoint_url,
verify=parsed_globals.verify_ssl)
return getattr(client, xform_name(operation_name))(**parameters)
# awscli-1.10.1/awscli/customizations/emr/addinstancegroups.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr import instancegroupsutils
from awscli.customizations.emr.command import Command
class AddInstanceGroups(Command):
NAME = 'add-instance-groups'
DESCRIPTION = 'Adds an instance group to a running cluster.'
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'instance-groups', 'required': True,
'help_text': helptext.INSTANCE_GROUPS,
'schema': argumentschema.INSTANCE_GROUPS_SCHEMA}
]
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'JobFlowId': parsed_args.cluster_id}
parameters['InstanceGroups'] = \
instancegroupsutils.build_instance_groups(
parsed_args.instance_groups)
add_instance_groups_response = emrutils.call(
self._session, 'add_instance_groups', parameters,
self.region, parsed_globals.endpoint_url,
parsed_globals.verify_ssl)
constructed_result = self._construct_result(
add_instance_groups_response)
emrutils.display_response(self._session, 'add_instance_groups',
constructed_result, parsed_globals)
return 0
def _construct_result(self, add_instance_groups_result):
jobFlowId = None
instanceGroupIds = None
if add_instance_groups_result is not None:
jobFlowId = add_instance_groups_result.get('JobFlowId')
instanceGroupIds = add_instance_groups_result.get(
'InstanceGroupIds')
if jobFlowId is not None and instanceGroupIds is not None:
return {'ClusterId': jobFlowId,
'InstanceGroupIds': instanceGroupIds}
else:
return {}
# awscli-1.10.1/awscli/customizations/emr/installapplications.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import applicationutils
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
class InstallApplications(Command):
NAME = 'install-applications'
DESCRIPTION = ('Installs applications on a running cluster. Currently only'
' Hive and Pig can be installed using this command, and'
                   ' this command is only supported on AMI versions'
                   ' 2.x and 3.x.')
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'applications', 'required': True,
'help_text': helptext.INSTALL_APPLICATIONS,
'schema': argumentschema.APPLICATIONS_SCHEMA},
]
# Applications supported by the install-applications command.
supported_apps = ['HIVE', 'PIG']
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'JobFlowId': parsed_args.cluster_id}
self._check_for_supported_apps(parsed_args.applications)
parameters['Steps'] = applicationutils.build_applications(
self.region, parsed_args.applications)[2]
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
def _check_for_supported_apps(self, parsed_applications):
for app_config in parsed_applications:
app_name = app_config['Name'].upper()
if app_name in constants.APPLICATIONS:
if app_name not in self.supported_apps:
raise ValueError(
"aws: error: " + app_config['Name'] + " cannot be"
" installed on a running cluster. 'Name' should be one"
" of the following: " +
', '.join(self.supported_apps))
else:
raise ValueError(
"aws: error: Unknown application: " + app_config['Name'] +
". 'Name' should be one of the following: " +
', '.join(constants.APPLICATIONS))
# awscli-1.10.1/awscli/customizations/emr/describecluster.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.commands import BasicCommand
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
from botocore.exceptions import NoCredentialsError
class DescribeCluster(Command):
NAME = 'describe-cluster'
DESCRIPTION = ('Provides cluster-level details including status, hardware'
' and software configuration, VPC settings, bootstrap'
' actions, instance groups and so on. For information about'
' the cluster steps, see list-steps.')
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID}
]
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'ClusterId': parsed_args.cluster_id}
describe_cluster_result = self._call(
self._session, 'describe_cluster', parameters, parsed_globals)
list_instance_groups_result = self._call(
self._session, 'list_instance_groups', parameters, parsed_globals)
list_bootstrap_actions_result = self._call(
self._session, 'list_bootstrap_actions',
parameters, parsed_globals)
master_public_dns = self._find_master_public_dns(
cluster_id=parsed_args.cluster_id,
parsed_globals=parsed_globals)
constructed_result = self._construct_result(
describe_cluster_result,
list_instance_groups_result,
list_bootstrap_actions_result,
master_public_dns)
emrutils.display_response(self._session, 'describe_cluster',
constructed_result, parsed_globals)
return 0
def _find_master_public_dns(self, cluster_id, parsed_globals):
return emrutils.find_master_public_dns(
session=self._session, cluster_id=cluster_id,
parsed_globals=parsed_globals)
def _call(self, session, operation_name, parameters, parsed_globals):
return emrutils.call(
session, operation_name, parameters,
region_name=self.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
def _get_key_of_result(self, keys):
# Return the first key that is not "Marker"
for key in keys:
if key != "Marker":
return key
def _construct_result(
self, describe_cluster_result, list_instance_groups_result,
list_bootstrap_actions_result, master_public_dns):
result = describe_cluster_result
result['Cluster']['MasterPublicDnsName'] = master_public_dns
result['Cluster']['InstanceGroups'] = []
result['Cluster']['BootstrapActions'] = []
if (list_instance_groups_result is not None and
list_instance_groups_result.get('InstanceGroups') is not None):
result['Cluster']['InstanceGroups'] = \
list_instance_groups_result.get('InstanceGroups')
if (list_bootstrap_actions_result is not None and
list_bootstrap_actions_result.get('BootstrapActions')
is not None):
result['Cluster']['BootstrapActions'] = \
list_bootstrap_actions_result['BootstrapActions']
return result
# awscli-1.10.1/awscli/customizations/emr/configutils.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
from awscli.customizations.configure import ConfigFileWriter
from awscli.customizations.emr.constants import EC2_ROLE_NAME
from awscli.customizations.emr.constants import EMR_ROLE_NAME
LOG = logging.getLogger(__name__)
def get_configs(session):
return session.get_scoped_config().get('emr', {})
def get_current_profile_name(session):
profile_name = session.get_config_variable('profile')
return 'default' if profile_name is None else profile_name
def get_current_profile_var_name(session):
return _get_profile_str(session, '.')
def _get_profile_str(session, separator):
profile_name = session.get_config_variable('profile')
return 'default' if profile_name is None \
else 'profile%c%s' % (separator, profile_name)
def is_any_role_configured(session):
parsed_configs = get_configs(session)
    return ('instance_profile' in parsed_configs or
            'service_role' in parsed_configs)
def update_roles(session):
if is_any_role_configured(session):
LOG.debug("At least one of the roles is already associated with "
"your current profile ")
else:
config_writer = ConfigWriter(session)
config_writer.update_config('service_role', EMR_ROLE_NAME)
config_writer.update_config('instance_profile', EC2_ROLE_NAME)
LOG.debug("Associated default roles with your current profile")
class ConfigWriter(object):
def __init__(self, session):
self.session = session
self.section = _get_profile_str(session, ' ')
self.config_file_writer = ConfigFileWriter()
def update_config(self, key, value):
config_filename = \
os.path.expanduser(self.session.get_config_variable('config_file'))
updated_config = {'__section__': self.section,
'emr': {key: value}}
self.config_file_writer.update_config(updated_config, config_filename)
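The section-name logic in `_get_profile_str` can be sketched standalone; `profile_str` below is a hypothetical mirror that takes the profile name directly instead of reading it from the botocore session:

```python
def profile_str(profile_name, separator):
    # Mirrors _get_profile_str: the default profile maps to the plain
    # section name 'default'; named profiles become 'profile<sep><name>'.
    if profile_name is None:
        return 'default'
    return 'profile%c%s' % (separator, profile_name)

# ConfigWriter passes a space, yielding INI section headers such as
# [profile emr-dev]; get_current_profile_var_name passes '.'.
print(profile_str(None, ' '))       # default
print(profile_str('emr-dev', ' '))  # profile emr-dev
print(profile_str('emr-dev', '.'))  # profile.emr-dev
```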
awscli-1.10.1/awscli/customizations/emr/helptext.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr.createdefaultroles import EMR_ROLE_NAME
from awscli.customizations.emr.createdefaultroles import EC2_ROLE_NAME
TERMINATE_CLUSTERS = (
'Shuts down a list of clusters. When a cluster is shut'
' down, any step not yet completed is canceled and the '
'Amazon EC2 instances in the cluster are terminated. '
'Any log files not already saved are uploaded to'
' Amazon S3 if a LogUri was specified when the cluster was created.'
" 'terminate-clusters' is asynchronous. Depending on the"
' configuration of the cluster, it may take from 5 to 20 minutes for the'
' cluster to completely terminate and release allocated resources such as'
' Amazon EC2 instances.')
CLUSTER_ID = (
    'A unique string that identifies the cluster. This'
    ' identifier is returned by create-cluster and can also be'
    ' obtained from list-clusters.')

HBASE_BACKUP_DIR = (
    'The Amazon S3 location of the HBase backup. Example:'
    ' s3://mybucket/mybackup, where mybucket is the'
    ' specified Amazon S3 bucket and mybackup is the specified backup'
    ' location. The path argument must begin with s3:// in order to denote'
    ' that the path argument refers to an Amazon S3 folder.')

HBASE_BACKUP_VERSION = (
    'The backup version to restore from. If not specified, the latest'
    ' backup in the specified location will be used.')
# create-cluster options help text
CLUSTER_NAME = (
    'The name of the cluster. The default is "Development Cluster".')

LOG_URI = (
    'The location in Amazon S3 to write the log files '
    'of the cluster. If a value is not provided, '
    'logs are not created.')

SERVICE_ROLE = (
    'Allows EMR to call other AWS services, such as EC2, on your behalf.'
    ' To create the default service role ' + EMR_ROLE_NAME + ','
    ' use the aws emr create-default-roles command. '
    'This command will also create the default EC2 instance profile '
    + EC2_ROLE_NAME + '.')

USE_DEFAULT_ROLES = (
    'Uses --service-role=' + EMR_ROLE_NAME + ' and'
    ' --ec2-attributes InstanceProfile=' + EC2_ROLE_NAME + '.'
    ' To create the default service role and instance profile,'
    ' use the aws emr create-default-roles command.')
AMI_VERSION = (
    'The version number of the Amazon Machine Image (AMI) '
    'to use for Amazon EC2 instances in the cluster. '
    'For example, --ami-version 3.1.0. You cannot specify both a release'
    ' label (emr-4.0.0 and later) and an AMI version (3.x or 2.x) on a'
    ' cluster. '
    'For details about the AMIs currently supported by Amazon '
    'Elastic MapReduce, go to AMI Versions Supported in Amazon Elastic '
    'MapReduce in the Amazon Elastic MapReduce Developer\'s Guide.')

RELEASE_LABEL = (
    'The identifier for the EMR release, which includes a set of software,'
    ' to use with Amazon EC2 instances that are part of an Amazon EMR'
    ' cluster. For example, --release-label emr-4.0.0. You cannot specify'
    ' both a release label (emr-4.0.0 and later) and an AMI version'
    ' (3.x or 2.x) on a cluster. '
    'For details about the releases available in Amazon Elastic MapReduce,'
    ' go to Releases Available in Amazon Elastic MapReduce in the'
    ' Amazon Elastic MapReduce Documentation. '
    'Please use ami-version if you want to specify AMI'
    ' versions (3.x and 2.x) for your Amazon EMR cluster.')
CONFIGURATIONS = (
    'Specifies new configuration values for applications installed on your'
    ' cluster when using an EMR release (emr-4.0.0 and later). The'
    ' configuration files available for editing in each application (for'
    ' example, yarn-site for YARN) can be found in the Amazon EMR'
    ' Developer\'s Guide in the respective application\'s section. Currently'
    ' on the CLI, you can only specify these values in a JSON file stored'
    ' locally or in Amazon S3, and you supply the path to this file to this'
    ' parameter. '
    'For example: '
    'To specify configurations from a local file: --configurations'
    ' file://configurations.json. '
    'To specify configurations from a file in Amazon S3: '
    '--configurations https://s3.amazonaws.com/myBucket/configurations.json.'
    ' For more information about configuring applications in an EMR release,'
    ' go to the Amazon EMR Documentation.')

INSTANCE_GROUPS = (
    'A specification of the number and type'
    ' of Amazon EC2 instances to create instance groups in a cluster. '
    'Each instance group takes the following parameters: '
    '[Name], InstanceGroupType, InstanceType, InstanceCount,'
    ' [BidPrice].')
INSTANCE_TYPE = (
    'Shortcut option for --instance-groups. A specification of the '
    'type of Amazon EC2 instances used together with --instance-count '
    '(optional) to create instance groups in a cluster. '
    'Specifying the --instance-type argument without '
    'also specifying --instance-count launches a single-node cluster.')

INSTANCE_COUNT = (
    'Shortcut option for --instance-groups. '
    'A specification of the number of Amazon EC2 instances used together'
    ' with --instance-type to create instance groups in a cluster. EMR will'
    ' use one node as the cluster\'s master node and use the remainder of'
    ' the nodes as core nodes. Specifying the --instance-type argument'
    ' without also specifying --instance-count launches a single-node'
    ' cluster.')

ADDITIONAL_INFO = (
    'Specifies additional information during cluster creation.')
EC2_ATTRIBUTES = (
    'Specifies the following Amazon EC2 attributes: KeyName,'
    ' AvailabilityZone, SubnetId, InstanceProfile,'
    ' EmrManagedMasterSecurityGroup, EmrManagedSlaveSecurityGroup,'
    ' AdditionalMasterSecurityGroups and AdditionalSlaveSecurityGroups.'
    ' AvailabilityZone and SubnetId cannot be specified together.'
    ' To create the default instance profile ' + EC2_ROLE_NAME + ','
    ' use the aws emr create-default-roles command. '
    'This command will also create the default EMR service role '
    + EMR_ROLE_NAME + '. '
    'KeyName - the name of the Amazon EC2 key pair you are using '
    'to launch the cluster. '
    'AvailabilityZone - an isolated resource '
    'location within a region. '
    'SubnetId - assigns the EMR cluster to this Amazon VPC subnet. '
    'InstanceProfile - provides access to other AWS services, such as S3'
    ' and DynamoDB, from EC2 instances that are launched by EMR. '
    'EmrManagedMasterSecurityGroup - the identifier of the Amazon EC2'
    ' security group for the master node. '
    'EmrManagedSlaveSecurityGroup - the identifier of the Amazon EC2'
    ' security group for the slave nodes. '
    'ServiceAccessSecurityGroup - the identifier of the Amazon EC2 '
    'security group for the Amazon EMR service '
    'to access clusters in VPC private subnets. '
    'AdditionalMasterSecurityGroups - a list of additional Amazon EC2'
    ' security group IDs for the master node. '
    'AdditionalSlaveSecurityGroups - a list of additional Amazon EC2'
    ' security group IDs for the slave nodes.')
AUTO_TERMINATE = (
    'Specifies whether the cluster should terminate after'
    ' completing all the steps. Auto termination is off by default.')

TERMINATION_PROTECTED = (
    'Specifies whether to lock the cluster to prevent the'
    ' Amazon EC2 instances from being terminated by API call, '
    'user intervention, or in the event of an error. Termination protection '
    'is off by default.')

VISIBILITY = (
    'Specifies whether the cluster is visible to all IAM users of'
    ' the AWS account associated with the cluster. If set to '
    '--visible-to-all-users, all IAM users of that AWS account'
    ' can view and (if they have the proper policy permissions set) manage'
    ' the cluster. If it is set to --no-visible-to-all-users,'
    ' only the IAM user that created the cluster can view and manage it. '
    'Clusters are visible by default.')

DEBUGGING = (
    'Enables debugging for the cluster. The debugging tool is a'
    ' graphical user interface that you can use to browse the log files'
    ' from the console (https://console.aws.amazon.com/elasticmapreduce/).'
    ' When you enable debugging on a cluster, Amazon EMR archives'
    ' the log files to Amazon S3 and then indexes those files. You can then'
    ' use the graphical interface to browse the step, job, task, and task'
    ' attempt logs for the cluster in an intuitive way. Requires'
    ' --log-uri to be specified.')
TAGS = (
    'A list of tags to associate with a cluster and propagate to'
    ' each Amazon EC2 instance in the cluster. '
    'They are user-defined key/value pairs that'
    ' consist of a required key string with a maximum of 128 characters'
    ' and an optional value string with a maximum of 256 characters. '
    'You can specify tags in key=value format, or to add a'
    ' tag without a value, just write the key name, key. '
    'Syntax: multiple tags separated by a space. '
    'For example: --tags key1=value1 key2=value2')

BOOTSTRAP_ACTIONS = (
    'Specifies a list of bootstrap actions to run when creating a'
    ' cluster. You can use bootstrap actions to install additional software'
    ' and to change the configuration of applications on the cluster.'
    ' Bootstrap actions are scripts that are run on the cluster nodes when'
    ' Amazon EMR launches the cluster. They run before Hadoop starts and'
    ' before the node begins processing data. '
    'Each bootstrap action takes the following parameters: '
    'Path, [Name] and [Args]. '
    'Note: Args should either be a comma-separated list of values '
    '(e.g. Args=arg1,arg2,arg3) or a bracket-enclosed list of values '
    'and/or key-value pairs (e.g. Args=[arg1,arg2=arg3,arg4]).')

APPLICATIONS = (
    'Installs applications such as Hadoop, Spark, Hue, Hive, Pig, HBase,'
    ' Ganglia and Impala, or the MapR distribution, when creating a cluster.'
    ' Available applications vary by EMR release, and the set of components'
    ' installed when specifying an application name can be found in the'
    ' Amazon EMR Developer\'s Guide. Note: If you are using an AMI version'
    ' instead of an EMR release, some applications take optional Args for'
    ' configuration. Args should either be a comma-separated list of values'
    ' (e.g. Args=arg1,arg2,arg3) or a bracket-enclosed list of values'
    ' and/or key-value pairs (e.g. Args=[arg1,arg2=arg3,arg4]).')
EMR_FS = (
    'Configures certain features in EMRFS, such as consistent'
    ' view and Amazon S3 client-side and server-side encryption. '
    'Encryption - enables Amazon S3 server-side encryption or'
    ' Amazon S3 client-side encryption and takes the mutually exclusive'
    ' values ServerSide or ClientSide. '
    'ProviderType - the encryption ProviderType, which is either Custom'
    ' or KMS. '
    'KMSKeyId - the AWS KMS KeyId, the alias'
    ' you mapped to the KeyId, or the full ARN of the key, which'
    ' includes the region, account ID, and the KeyId. '
    'CustomProviderLocation - the S3 URI of'
    ' the custom EncryptionMaterialsProvider class. '
    'CustomProviderClass - the name of the'
    ' custom EncryptionMaterialsProvider class you are using. '
    'Consistent - setting to true enables consistent view. '
    'RetryCount - the number of times EMRFS consistent view will check'
    ' for list consistency before returning an error. '
    'RetryPeriod - the interval at which EMRFS consistent view will'
    ' recheck for consistency of objects it tracks. '
    'SSE - deprecated in favor of Encryption=ServerSide. '
    'Args - optional arguments you can supply when configuring EMRFS.')

RESTORE_FROM_HBASE = (
    'Launches a new HBase cluster and populates it with'
    ' data from a previous backup of an HBase cluster. You must install'
    ' HBase using the --applications option.'
    ' Note: this is only supported by AMI versions (3.x and 2.x).')

STEPS = (
    'A list of steps to be executed by the cluster. A step can be'
    ' specified either using the shorthand syntax, a JSON file, or as a'
    ' JSON string. Note: [Args] supplied with steps should either be a'
    ' comma-separated list of values (e.g. Args=arg1,arg2,arg3) or'
    ' a bracket-enclosed list of values and/or key-value pairs'
    ' (e.g. Args=[arg1,arg2=arg3,arg4]).')
INSTALL_APPLICATIONS = (
    'The applications to be installed.'
    ' Takes the following parameters: '
    'Name and Args.')

LIST_CLUSTERS_CLUSTER_STATES = (
    'The cluster state filters to apply when listing clusters. '
    'Syntax: "string" "string" ... '
    'Where valid values are: STARTING, BOOTSTRAPPING, RUNNING, WAITING,'
    ' TERMINATING, TERMINATED, TERMINATED_WITH_ERRORS.')
LIST_CLUSTERS_STATE_FILTERS = (
    'Shortcut options for --cluster-states. '
    '--active filters clusters in the STARTING, BOOTSTRAPPING, RUNNING,'
    ' WAITING, or TERMINATING states. '
    '--terminated filters clusters in the TERMINATED state. '
    '--failed filters clusters in the TERMINATED_WITH_ERRORS state.')

LIST_CLUSTERS_CREATED_AFTER = (
    'The creation date and time beginning value filter for '
    'listing clusters. For example, 2014-07-15T00:01:30.')

LIST_CLUSTERS_CREATED_BEFORE = (
    'The creation date and time end value filter for '
    'listing clusters. For example, 2014-07-15T00:01:30.')

EMR_MANAGED_MASTER_SECURITY_GROUP = (
    'The identifier of the Amazon EC2 security group '
    'for the master node.')

EMR_MANAGED_SLAVE_SECURITY_GROUP = (
    'The identifier of the Amazon EC2 security group '
    'for the slave nodes.')

SERVICE_ACCESS_SECURITY_GROUP = (
    'The identifier of the Amazon EC2 security group '
    'for the Amazon EMR service to access '
    'clusters in VPC private subnets.')

ADDITIONAL_MASTER_SECURITY_GROUPS = (
    'A list of additional Amazon EC2 security group IDs for '
    'the master node.')

ADDITIONAL_SLAVE_SECURITY_GROUPS = (
    'A list of additional Amazon EC2 security group IDs for '
    'the slave nodes.')
AVAILABLE_ONLY_FOR_AMI_VERSIONS = (
'This command is only available for AMI Versions (3.x and 2.x).')
CREATE_CLUSTER_DESCRIPTION = (
    'Creates an Amazon EMR cluster with the specified software.\n'
    '\nQuick start:\n\naws emr create-cluster --release-label'
    ' <release-label> --instance-type <instance-type>'
    ' [--instance-count <instance-count>]\n\n'
    'Values for the variables Instance Profile (under EC2 Attributes),'
    ' Service Role, Log URI, and Key Name (under EC2 Attributes) can be set'
    ' in the AWS CLI config file using the "aws configure set" command.\n')
awscli-1.10.1/awscli/customizations/emr/addsteps.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr import steputils
from awscli.customizations.emr.command import Command
class AddSteps(Command):
NAME = 'add-steps'
DESCRIPTION = ('Add a list of steps to a cluster.')
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID
},
{'name': 'steps',
'required': True,
'nargs': '+',
'schema': argumentschema.STEPS_SCHEMA,
'help_text': helptext.STEPS
}
]
def _run_main_command(self, parsed_args, parsed_globals):
parsed_steps = parsed_args.steps
release_label = emrutils.get_release_label(
parsed_args.cluster_id, self._session, self.region,
parsed_globals.endpoint_url, parsed_globals.verify_ssl)
step_list = steputils.build_step_config_list(
parsed_step_list=parsed_steps, region=self.region,
release_label=release_label)
parameters = {
'JobFlowId': parsed_args.cluster_id,
'Steps': step_list
}
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
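The request that `_run_main_command` sends to `AddJobFlowSteps` is a plain dictionary; the sketch below shows its shape with hypothetical values (the real `Steps` list is produced by `steputils.build_step_config_list`):

```python
# Hypothetical pre-built step configuration, shaped like what the
# EMR API expects for a custom JAR step.
step_list = [{
    'Name': 'Custom JAR step',
    'ActionOnFailure': 'CONTINUE',
    'HadoopJarStep': {
        'Jar': 's3://mybucket/mystep.jar',
        'Args': ['arg1', 'arg2'],
    },
}]

# AddSteps maps --cluster-id onto the JobFlowId field of the
# AddJobFlowSteps request.
parameters = {
    'JobFlowId': 'j-XXXXXXXXXXXXX',
    'Steps': step_list,
}
print(sorted(parameters))  # ['JobFlowId', 'Steps']
```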
awscli-1.10.1/awscli/customizations/emr/argumentschema.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import helptext
from awscli.customizations.emr.createdefaultroles import EC2_ROLE_NAME
INSTANCE_GROUPS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"description":
"Friendly name given to the instance group."
},
"InstanceGroupType": {
"type": "string",
"description":
"The type of the instance group in the cluster.",
"enum": ["MASTER", "CORE", "TASK"],
"required": True
},
"BidPrice": {
"type": "string",
"description":
"Bid price for each Amazon EC2 instance in the "
"instance group when launching nodes as Spot Instances, "
"expressed in USD."
},
"InstanceType": {
"type": "string",
"description":
"The Amazon EC2 instance type for all instances "
"in the instance group.",
"required": True
},
"InstanceCount": {
"type": "integer",
"description": "Target number of Amazon EC2 instances "
"for the instance group",
"required": True
}
}
}
}
EC2_ATTRIBUTES_SCHEMA = {
"type": "object",
"properties": {
"KeyName": {
"type": "string",
"description":
"The name of the Amazon EC2 key pair that can "
"be used to ssh to the master node as the user 'hadoop'."
},
"SubnetId": {
"type": "string",
"description":
"To launch the cluster in Amazon "
"Virtual Private Cloud (Amazon VPC), set this parameter to "
"the identifier of the Amazon VPC subnet where you want "
"the cluster to launch. If you do not specify this value, "
"the cluster is launched in the normal Amazon Web Services "
"cloud, outside of an Amazon VPC. "
},
"AvailabilityZone": {
"type": "string",
"description": "The Availability Zone the cluster will run in."
},
"InstanceProfile": {
"type": "string",
"description":
"An IAM role for the cluster. The EC2 instances of the cluster"
" assume this role. The default role is " +
EC2_ROLE_NAME + ". In order to use the default"
" role, you must have already created it using the "
"create-default-roles command. "
},
"EmrManagedMasterSecurityGroup": {
"type": "string",
"description": helptext.EMR_MANAGED_MASTER_SECURITY_GROUP
},
"EmrManagedSlaveSecurityGroup": {
"type": "string",
"description": helptext.EMR_MANAGED_SLAVE_SECURITY_GROUP
},
"ServiceAccessSecurityGroup": {
"type": "string",
"description": helptext.SERVICE_ACCESS_SECURITY_GROUP
},
"AdditionalMasterSecurityGroups": {
"type": "array",
"description": helptext.ADDITIONAL_MASTER_SECURITY_GROUPS,
"items": {
"type": "string"
}
},
"AdditionalSlaveSecurityGroups": {
"type": "array",
"description": helptext.ADDITIONAL_SLAVE_SECURITY_GROUPS,
"items": {
"type": "string"
}
}
}
}
APPLICATIONS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"description": "Application name.",
"enum": ["MapR", "HUE", "HIVE", "PIG", "HBASE",
"IMPALA", "GANGLIA", "HADOOP", "SPARK"],
"required": True
},
"Args": {
"type": "array",
"description":
"A list of arguments to pass to the application.",
"items": {
"type": "string"
}
}
}
}
}
BOOTSTRAP_ACTIONS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"default": "Bootstrap Action"
},
"Path": {
"type": "string",
"description":
"Location of the script to run during a bootstrap action. "
"Can be either a location in Amazon S3 or "
"on a local file system.",
"required": True
},
"Args": {
"type": "array",
"description":
"A list of command line arguments to pass to "
"the bootstrap action script",
"items": {
"type": "string"
}
}
}
}
}
STEPS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Type": {
"type": "string",
"description":
"The type of a step to be added to the cluster.",
"default": "custom_jar",
"enum": ["CUSTOM_JAR", "STREAMING", "HIVE", "PIG", "IMPALA"],
},
"Name": {
"type": "string",
"description": "The name of the step. ",
},
"ActionOnFailure": {
"type": "string",
"description": "The action to take if the cluster step fails.",
"enum": ["TERMINATE_CLUSTER", "CANCEL_AND_WAIT", "CONTINUE"],
"default": "CONTINUE"
},
"Jar": {
"type": "string",
"description": "A path to a JAR file run during the step.",
},
"Args": {
"type": "array",
"description":
"A list of command line arguments to pass to the step.",
"items": {
"type": "string"
}
},
"MainClass": {
"type": "string",
"description":
"The name of the main class in the specified "
"Java file. If not specified, the JAR file should "
"specify a Main-Class in its manifest file."
},
"Properties": {
"type": "string",
"description":
"A list of Java properties that are set when the step "
"runs. You can use these properties to pass key value "
"pairs to your main function."
}
}
}
}
HBASE_RESTORE_FROM_BACKUP_SCHEMA = {
"type": "object",
"properties": {
"Dir": {
"type": "string",
"description": helptext.HBASE_BACKUP_DIR
},
"BackupVersion": {
"type": "string",
"description": helptext.HBASE_BACKUP_VERSION
}
}
}
EMR_FS_SCHEMA = {
"type": "object",
"properties": {
"Consistent": {
"type": "boolean",
"description": "Enable EMRFS consistent view."
},
"SSE": {
"type": "boolean",
"description": "Enable Amazon S3 server-side encryption on files "
"written to S3 by EMRFS."
},
"RetryCount": {
"type": "integer",
"description":
"The maximum number of times to retry upon S3 inconsistency."
},
"RetryPeriod": {
"type": "integer",
"description": "The amount of time (in seconds) until the first "
"retry. Subsequent retries use an exponential "
"back-off."
},
"Args": {
"type": "array",
"description": "A list of arguments to pass for additional "
"EMRFS configuration.",
"items": {
"type": "string"
}
},
"Encryption": {
"type": "string",
"description": "EMRFS encryption type.",
"enum": ["SERVERSIDE", "CLIENTSIDE"]
},
"ProviderType": {
"type": "string",
"description": "EMRFS client-side encryption provider type.",
"enum": ["KMS", "CUSTOM"]
},
"KMSKeyId": {
"type": "string",
"description": "AWS KMS's customer master key identifier",
},
"CustomProviderLocation": {
"type": "string",
"description": "Custom encryption provider JAR location."
},
"CustomProviderClass": {
"type": "string",
"description": "Custom encryption provider full class name."
}
}
}
TAGS_SCHEMA = {
"type": "array",
"items": {
"type": "string"
}
}
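A value for `--instance-groups` that conforms to `INSTANCE_GROUPS_SCHEMA` is a list of dictionaries; a minimal required-key check can be sketched as follows (the helper `has_required_keys` is illustrative, not part of the CLI):

```python
def has_required_keys(group):
    # InstanceGroupType, InstanceType and InstanceCount carry
    # "required": True in INSTANCE_GROUPS_SCHEMA; Name and BidPrice
    # are optional.
    return all(key in group for key in
               ('InstanceGroupType', 'InstanceType', 'InstanceCount'))

instance_groups = [
    {'InstanceGroupType': 'MASTER', 'InstanceType': 'm3.xlarge',
     'InstanceCount': 1},
    {'InstanceGroupType': 'CORE', 'InstanceType': 'm3.xlarge',
     'InstanceCount': 2, 'BidPrice': '0.10'},
]
print(all(has_required_keys(group) for group in instance_groups))  # True
```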
awscli-1.10.1/awscli/customizations/emr/applicationutils.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
def build_applications(region,
parsed_applications, ami_version=None):
app_list = []
step_list = []
ba_list = []
for app_config in parsed_applications:
app_name = app_config['Name'].lower()
if app_name == constants.HIVE:
hive_version = constants.LATEST
step_list.append(
_build_install_hive_step(region=region))
args = app_config.get('Args')
if args is not None:
hive_site_path = _find_matching_arg(
key=constants.HIVE_SITE_KEY, args_list=args)
if hive_site_path is not None:
step_list.append(
_build_install_hive_site_step(
region=region,
hive_site_path=hive_site_path))
elif app_name == constants.PIG:
pig_version = constants.LATEST
step_list.append(
_build_pig_install_step(
region=region))
elif app_name == constants.GANGLIA:
ba_list.append(
_build_ganglia_install_bootstrap_action(
region=region))
elif app_name == constants.HBASE:
ba_list.append(
_build_hbase_install_bootstrap_action(
region=region))
if ami_version >= '3.0':
step_list.append(
_build_hbase_install_step(
constants.HBASE_PATH_HADOOP2_INSTALL_JAR))
elif ami_version >= '2.1':
step_list.append(
_build_hbase_install_step(
constants.HBASE_PATH_HADOOP1_INSTALL_JAR))
else:
                raise ValueError('aws: error: AMI version ' + ami_version +
                                 ' is not compatible with HBase.')
elif app_name == constants.IMPALA:
ba_list.append(
_build_impala_install_bootstrap_action(
region=region,
args=app_config.get('Args')))
else:
app_list.append(
_build_supported_product(
app_config['Name'], app_config.get('Args')))
return app_list, ba_list, step_list
def _build_supported_product(name, args):
if args is None:
args = []
config = {'Name': name.lower(), 'Args': args}
return config
def _build_ganglia_install_bootstrap_action(region):
return emrutils.build_bootstrap_action(
name=constants.INSTALL_GANGLIA_NAME,
path=emrutils.build_s3_link(
relative_path=constants.GANGLIA_INSTALL_BA_PATH,
region=region))
def _build_hbase_install_bootstrap_action(region):
return emrutils.build_bootstrap_action(
name=constants.INSTALL_HBASE_NAME,
path=emrutils.build_s3_link(
relative_path=constants.HBASE_INSTALL_BA_PATH,
region=region))
def _build_hbase_install_step(jar):
return emrutils.build_step(
jar=jar,
name=constants.START_HBASE_NAME,
action_on_failure=constants.TERMINATE_CLUSTER,
args=constants.HBASE_INSTALL_ARG)
def _build_impala_install_bootstrap_action(region, args=None):
args_list = [
constants.BASE_PATH_ARG,
emrutils.build_s3_link(region=region),
constants.IMPALA_VERSION,
constants.LATEST]
if args is not None:
args_list.append(constants.IMPALA_CONF)
args_list += args
return emrutils.build_bootstrap_action(
name=constants.INSTALL_IMPALA_NAME,
path=emrutils.build_s3_link(
relative_path=constants.IMPALA_INSTALL_PATH,
region=region),
args=args_list)
def _build_install_hive_step(region,
action_on_failure=constants.TERMINATE_CLUSTER):
step_args = [
emrutils.build_s3_link(constants.HIVE_SCRIPT_PATH, region),
constants.INSTALL_HIVE_ARG,
constants.BASE_PATH_ARG,
emrutils.build_s3_link(constants.HIVE_BASE_PATH, region),
constants.HIVE_VERSIONS,
constants.LATEST]
step = emrutils.build_step(
name=constants.INSTALL_HIVE_NAME,
action_on_failure=action_on_failure,
jar=emrutils.build_s3_link(constants.SCRIPT_RUNNER_PATH, region),
args=step_args)
return step
def _build_install_hive_site_step(region, hive_site_path,
action_on_failure=constants.CANCEL_AND_WAIT):
step_args = [
emrutils.build_s3_link(constants.HIVE_SCRIPT_PATH, region),
constants.BASE_PATH_ARG,
emrutils.build_s3_link(constants.HIVE_BASE_PATH),
constants.INSTALL_HIVE_SITE_ARG,
hive_site_path,
constants.HIVE_VERSIONS,
constants.LATEST]
step = emrutils.build_step(
name=constants.INSTALL_HIVE_SITE_NAME,
action_on_failure=action_on_failure,
jar=emrutils.build_s3_link(constants.SCRIPT_RUNNER_PATH, region),
args=step_args)
return step
def _build_pig_install_step(region,
action_on_failure=constants.TERMINATE_CLUSTER):
step_args = [
emrutils.build_s3_link(constants.PIG_SCRIPT_PATH, region),
constants.INSTALL_PIG_ARG,
constants.BASE_PATH_ARG,
emrutils.build_s3_link(constants.PIG_BASE_PATH, region),
constants.PIG_VERSIONS,
constants.LATEST]
step = emrutils.build_step(
name=constants.INSTALL_PIG_NAME,
action_on_failure=action_on_failure,
jar=emrutils.build_s3_link(constants.SCRIPT_RUNNER_PATH, region),
args=step_args)
return step
def _find_matching_arg(key, args_list):
for arg in args_list:
if key in arg:
return arg
return None
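`_find_matching_arg` does a plain substring match and returns the first hit; a standalone mirror (hypothetical `find_matching_arg`, with example arguments) behaves like this:

```python
def find_matching_arg(key, args_list):
    # Mirrors _find_matching_arg: a substring test, returning the
    # first matching argument or None when nothing matches.
    for arg in args_list:
        if key in arg:
            return arg
    return None

args = ['--hive-versions,latest', '--hive-site=s3://mybucket/hive-site.xml']
print(find_matching_arg('--hive-site', args))
# --hive-site=s3://mybucket/hive-site.xml
```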
awscli-1.10.1/awscli/customizations/emr/terminateclusters.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
class TerminateClusters(Command):
NAME = 'terminate-clusters'
DESCRIPTION = helptext.TERMINATE_CLUSTERS
ARG_TABLE = [{
'name': 'cluster-ids', 'nargs': '+', 'required': True,
        'help_text': 'A list of clusters to terminate.',
'schema': {'type': 'array', 'items': {'type': 'string'}},
}]
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'JobFlowIds': parsed_args.cluster_ids}
emrutils.call_and_display_response(self._session,
'TerminateJobFlows', parameters,
parsed_globals)
return 0
awscli-1.10.1/awscli/customizations/emr/hbaseutils.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
def build_hbase_restore_from_backup_args(dir, backup_version=None):
args = [constants.HBASE_MAIN,
constants.HBASE_RESTORE,
constants.HBASE_BACKUP_DIR, dir]
if backup_version is not None:
args.append(constants.HBASE_BACKUP_VERSION_FOR_RESTORE)
args.append(backup_version)
    return args
awscli-1.10.1/awscli/customizations/emr/emrfsutils.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from botocore.compat import OrderedDict
CONSISTENT_OPTIONAL_KEYS = ['RetryCount', 'RetryPeriod']
CSE_KMS_REQUIRED_KEYS = ['KMSKeyId']
CSE_CUSTOM_REQUIRED_KEYS = ['CustomProviderLocation', 'CustomProviderClass']
CSE_PROVIDER_TYPES = [constants.EMRFS_KMS, constants.EMRFS_CUSTOM]
ENCRYPTION_TYPES = [constants.EMRFS_CLIENT_SIDE, constants.EMRFS_SERVER_SIDE]
CONSISTENT_OPTION_NAME = "--emrfs Consistent=true/false"
CSE_OPTION_NAME = '--emrfs Encryption=ClientSide'
CSE_KMS_OPTION_NAME = '--emrfs Encryption=ClientSide,ProviderType=KMS'
CSE_CUSTOM_OPTION_NAME = '--emrfs Encryption=ClientSide,ProviderType=Custom'
def build_bootstrap_action_configs(region, emrfs_args):
bootstrap_actions = []
_verify_emrfs_args(emrfs_args)
if _need_to_configure_cse(emrfs_args, 'CUSTOM'):
# Download custom encryption provider from Amazon S3 to EMR Cluster
bootstrap_actions.append(
emrutils.build_bootstrap_action(
path=constants.EMRFS_CSE_CUSTOM_S3_GET_BA_PATH,
name=constants.S3_GET_BA_NAME,
args=[constants.S3_GET_BA_SRC,
emrfs_args.get('CustomProviderLocation'),
constants.S3_GET_BA_DEST,
constants.EMRFS_CUSTOM_DEST_PATH,
constants.S3_GET_BA_FORCE]))
emrfs_setup_ba_args = _build_ba_args_to_setup_emrfs(emrfs_args)
bootstrap_actions.append(
emrutils.build_bootstrap_action(
path=emrutils.build_s3_link(
relative_path=constants.CONFIG_HADOOP_PATH,
region=region),
name=constants.EMRFS_BA_NAME,
args=emrfs_setup_ba_args))
return bootstrap_actions
def build_emrfs_confiuration(emrfs_args):
_verify_emrfs_args(emrfs_args)
emrfs_properties = _build_emrfs_properties(emrfs_args)
if _need_to_configure_cse(emrfs_args, 'CUSTOM'):
emrfs_properties[constants.EMRFS_CSE_CUSTOM_PROVIDER_URI_KEY] = \
emrfs_args.get('CustomProviderLocation')
emrfs_configuration = {
'Classification': constants.EMRFS_SITE,
'Properties': emrfs_properties}
return emrfs_configuration
def _verify_emrfs_args(emrfs_args):
# Encryption should have a valid value
if 'Encryption' in emrfs_args \
and emrfs_args['Encryption'].upper() not in ENCRYPTION_TYPES:
raise exceptions.UnknownEncryptionTypeError(
encryption=emrfs_args['Encryption'])
# Only one of SSE and Encryption should be configured
if 'SSE' in emrfs_args and 'Encryption' in emrfs_args:
raise exceptions.BothSseAndEncryptionConfiguredError(
sse=emrfs_args['SSE'], encryption=emrfs_args['Encryption'])
# CSE should be configured correctly
# ProviderType should be present and should have valid value
# Given the type, the required parameters should be present
if ('Encryption' in emrfs_args and
emrfs_args['Encryption'].upper() == constants.EMRFS_CLIENT_SIDE):
if 'ProviderType' not in emrfs_args:
raise exceptions.MissingParametersError(
object_name=CSE_OPTION_NAME, missing='ProviderType')
elif emrfs_args['ProviderType'].upper() not in CSE_PROVIDER_TYPES:
raise exceptions.UnknownCseProviderTypeError(
provider_type=emrfs_args['ProviderType'])
elif emrfs_args['ProviderType'].upper() == 'KMS':
_verify_required_args(emrfs_args.keys(), CSE_KMS_REQUIRED_KEYS,
CSE_KMS_OPTION_NAME)
elif emrfs_args['ProviderType'].upper() == 'CUSTOM':
_verify_required_args(emrfs_args.keys(), CSE_CUSTOM_REQUIRED_KEYS,
CSE_CUSTOM_OPTION_NAME)
# No child attributes should be present if the parent feature is not
# configured
if 'Consistent' not in emrfs_args:
_verify_child_args(emrfs_args.keys(), CONSISTENT_OPTIONAL_KEYS,
CONSISTENT_OPTION_NAME)
if not _need_to_configure_cse(emrfs_args, 'KMS'):
_verify_child_args(emrfs_args.keys(), CSE_KMS_REQUIRED_KEYS,
CSE_KMS_OPTION_NAME)
if not _need_to_configure_cse(emrfs_args, 'CUSTOM'):
_verify_child_args(emrfs_args.keys(), CSE_CUSTOM_REQUIRED_KEYS,
CSE_CUSTOM_OPTION_NAME)
def _verify_required_args(actual_keys, required_keys, object_name):
if any(x not in actual_keys for x in required_keys):
missing_keys = list(
sorted(set(required_keys).difference(set(actual_keys))))
raise exceptions.MissingParametersError(
object_name=object_name, missing=emrutils.join(missing_keys))
def _verify_child_args(actual_keys, child_keys, parent_object_name):
if any(x in actual_keys for x in child_keys):
invalid_keys = list(
sorted(set(child_keys).intersection(set(actual_keys))))
raise exceptions.InvalidEmrFsArgumentsError(
invalid=emrutils.join(invalid_keys),
parent_object_name=parent_object_name)
def _build_ba_args_to_setup_emrfs(emrfs_args):
emrfs_properties = _build_emrfs_properties(emrfs_args)
return _create_ba_args(emrfs_properties)
def _build_emrfs_properties(emrfs_args):
"""
Assumption: emrfs_args is valid, i.e., all required attributes are present
"""
emrfs_properties = OrderedDict()
if _need_to_configure_consistent_view(emrfs_args):
_update_properties_for_consistent_view(emrfs_properties, emrfs_args)
if _need_to_configure_sse(emrfs_args):
_update_properties_for_sse(emrfs_properties, emrfs_args)
if _need_to_configure_cse(emrfs_args, 'KMS'):
_update_properties_for_cse(emrfs_properties, emrfs_args, 'KMS')
if _need_to_configure_cse(emrfs_args, 'CUSTOM'):
_update_properties_for_cse(emrfs_properties, emrfs_args, 'CUSTOM')
if 'Args' in emrfs_args:
for arg_value in emrfs_args.get('Args'):
key, value = emrutils.split_to_key_value(arg_value)
emrfs_properties[key] = value
return emrfs_properties
def _need_to_configure_consistent_view(emrfs_args):
return 'Consistent' in emrfs_args
def _need_to_configure_sse(emrfs_args):
return 'SSE' in emrfs_args \
or ('Encryption' in emrfs_args and
emrfs_args['Encryption'].upper() == constants.EMRFS_SERVER_SIDE)
def _need_to_configure_cse(emrfs_args, cse_type):
return ('Encryption' in emrfs_args
and emrfs_args['Encryption'].upper() == constants.EMRFS_CLIENT_SIDE
and 'ProviderType' in emrfs_args
and emrfs_args['ProviderType'].upper() == cse_type)
def _update_properties_for_consistent_view(emrfs_properties, emrfs_args):
emrfs_properties[constants.EMRFS_CONSISTENT_KEY] = \
str(emrfs_args['Consistent']).lower()
if 'RetryCount' in emrfs_args:
emrfs_properties[constants.EMRFS_RETRY_COUNT_KEY] = \
str(emrfs_args['RetryCount'])
if 'RetryPeriod' in emrfs_args:
emrfs_properties[constants.EMRFS_RETRY_PERIOD_KEY] = \
str(emrfs_args['RetryPeriod'])
def _update_properties_for_sse(emrfs_properties, emrfs_args):
sse_value = emrfs_args['SSE'] if 'SSE' in emrfs_args else True
# if 'SSE' is not in emrfs_args then 'Encryption' must be 'ServerSide'
emrfs_properties[constants.EMRFS_SSE_KEY] = str(sse_value).lower()
def _update_properties_for_cse(emrfs_properties, emrfs_args, cse_type):
emrfs_properties[constants.EMRFS_CSE_KEY] = 'true'
if cse_type == 'KMS':
emrfs_properties[
constants.EMRFS_CSE_ENCRYPTION_MATERIALS_PROVIDER_KEY] = \
constants.EMRFS_CSE_KMS_PROVIDER_FULL_CLASS_NAME
emrfs_properties[constants.EMRFS_CSE_KMS_KEY_ID_KEY] =\
emrfs_args['KMSKeyId']
elif cse_type == 'CUSTOM':
emrfs_properties[
constants.EMRFS_CSE_ENCRYPTION_MATERIALS_PROVIDER_KEY] = \
emrfs_args['CustomProviderClass']
def _update_emrfs_ba_args(ba_args, key_value):
ba_args.append(constants.EMRFS_BA_ARG_KEY)
ba_args.append(key_value)
def _create_ba_args(emrfs_properties):
ba_args = []
for key, value in emrfs_properties.items():
key_value = key
if value:
key_value = key_value + "=" + value
_update_emrfs_ba_args(ba_args, key_value)
return ba_args
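The helpers above flatten the emrfs-site properties into `-e key=value` bootstrap-action arguments. A minimal standalone sketch of that flattening (reimplemented here for illustration only, without importing awscli):

```python
from collections import OrderedDict

EMRFS_BA_ARG_KEY = '-e'  # mirrors constants.EMRFS_BA_ARG_KEY

def create_ba_args(emrfs_properties):
    """Flatten properties into ['-e', 'key=value', ...] arguments,
    emitting the bare key when the value is empty."""
    ba_args = []
    for key, value in emrfs_properties.items():
        key_value = key + '=' + value if value else key
        ba_args.extend([EMRFS_BA_ARG_KEY, key_value])
    return ba_args

props = OrderedDict([('fs.s3.consistent', 'true'),
                     ('fs.s3.consistent.retryCount', '5')])
print(create_ba_args(props))
# -> ['-e', 'fs.s3.consistent=true', '-e', 'fs.s3.consistent.retryCount=5']
```

An `OrderedDict` is used, as in `_build_emrfs_properties`, so the generated arguments keep a deterministic order.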
awscli-1.10.1/awscli/customizations/emr/config.py 0000666 4542626 0000144 00000011231 12652514124 023105 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.emr import configutils
from awscli.customizations.emr import exceptions
LOG = logging.getLogger(__name__)
SUPPORTED_CONFIG_LIST = [
{'name': 'service_role'},
{'name': 'log_uri'},
{'name': 'instance_profile', 'arg_name': 'ec2_attributes',
'arg_value_key': 'InstanceProfile'},
{'name': 'key_name', 'arg_name': 'ec2_attributes',
'arg_value_key': 'KeyName'},
{'name': 'enable_debugging', 'type': 'boolean'},
{'name': 'key_pair_file'}
]
TYPES = ['string', 'boolean']
def get_applicable_configurations(command):
supported_configurations = _create_supported_configurations()
return [x for x in supported_configurations if x.is_applicable(command)]
def _create_supported_configuration(config):
config_type = config['type'] if 'type' in config else 'string'
if (config_type == 'string'):
config_arg_name = config['arg_name'] \
if 'arg_name' in config else config['name']
config_arg_value_key = config['arg_value_key'] \
if 'arg_value_key' in config else None
configuration = StringConfiguration(config['name'],
config_arg_name,
config_arg_value_key)
elif (config_type == 'boolean'):
configuration = BooleanConfiguration(config['name'])
return configuration
def _create_supported_configurations():
return [_create_supported_configuration(config)
for config in SUPPORTED_CONFIG_LIST]
class Configuration(object):
def __init__(self, name, arg_name):
self.name = name
self.arg_name = arg_name
def is_applicable(self, command):
raise NotImplementedError("is_applicable")
def is_present(self, parsed_args):
raise NotImplementedError("is_present")
def add(self, command, parsed_args, value):
raise NotImplementedError("add")
def _check_arg(self, parsed_args, arg_name):
return getattr(parsed_args, arg_name, None)
class StringConfiguration(Configuration):
def __init__(self, name, arg_name, arg_value_key=None):
super(StringConfiguration, self).__init__(name, arg_name)
self.arg_value_key = arg_value_key
def is_applicable(self, command):
return command.supports_arg(self.arg_name.replace('_', '-'))
def is_present(self, parsed_args):
if (not self.arg_value_key):
return self._check_arg(parsed_args, self.arg_name)
else:
return self._check_arg(parsed_args, self.arg_name) \
and self.arg_value_key in getattr(parsed_args, self.arg_name)
def add(self, command, parsed_args, value):
if (not self.arg_value_key):
setattr(parsed_args, self.arg_name, value)
else:
if (not self._check_arg(parsed_args, self.arg_name)):
setattr(parsed_args, self.arg_name, {})
getattr(parsed_args, self.arg_name)[self.arg_value_key] = value
class BooleanConfiguration(Configuration):
def __init__(self, name):
super(BooleanConfiguration, self).__init__(name, name)
self.no_version_arg_name = "no_" + name
def is_applicable(self, command):
return command.supports_arg(self.arg_name.replace('_', '-')) and \
command.supports_arg(self.no_version_arg_name.replace('_', '-'))
def is_present(self, parsed_args):
return self._check_arg(parsed_args, self.arg_name) \
or self._check_arg(parsed_args, self.no_version_arg_name)
def add(self, command, parsed_args, value):
if (value.lower() == 'true'):
setattr(parsed_args, self.arg_name, True)
setattr(parsed_args, self.no_version_arg_name, False)
elif (value.lower() == 'false'):
setattr(parsed_args, self.arg_name, False)
setattr(parsed_args, self.no_version_arg_name, True)
else:
raise exceptions.InvalidBooleanConfigError(
config_value=value,
config_key=self.arg_name,
profile_var_name=configutils.get_current_profile_var_name(
command._session))
awscli-1.10.1/awscli/customizations/emr/instancegroupsutils.py 0000666 4542626 0000144 00000006077 12652514124 025771 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
def build_instance_groups(parsed_instance_groups):
"""
Helper method that converts the --instance-groups option value in
create-cluster and add-instance-groups to the Amazon Elastic MapReduce
InstanceGroupConfig data type.
"""
instance_groups = []
for instance_group in parsed_instance_groups:
ig_config = {}
keys = instance_group.keys()
if 'Name' in keys:
ig_config['Name'] = instance_group['Name']
else:
ig_config['Name'] = instance_group['InstanceGroupType']
ig_config['InstanceType'] = instance_group['InstanceType']
ig_config['InstanceCount'] = instance_group['InstanceCount']
ig_config['InstanceRole'] = instance_group['InstanceGroupType'].upper()
if 'BidPrice' in keys:
ig_config['BidPrice'] = instance_group['BidPrice']
ig_config['Market'] = constants.SPOT
else:
ig_config['Market'] = constants.ON_DEMAND
instance_groups.append(ig_config)
return instance_groups
def _build_instance_group(
instance_type, instance_count, instance_group_type):
ig_config = {}
ig_config['InstanceType'] = instance_type
ig_config['InstanceCount'] = instance_count
ig_config['InstanceRole'] = instance_group_type.upper()
ig_config['Name'] = ig_config['InstanceRole']
ig_config['Market'] = constants.ON_DEMAND
return ig_config
def validate_and_build_instance_groups(
instance_groups, instance_type, instance_count):
if (instance_groups is None and instance_type is None):
raise exceptions.MissingRequiredInstanceGroupsError
if (instance_groups is not None and
(instance_type is not None or
instance_count is not None)):
raise exceptions.InstanceGroupsValidationError
if instance_groups is not None:
return build_instance_groups(instance_groups)
else:
instance_groups = []
master_ig = _build_instance_group(
instance_type=instance_type,
instance_count=1,
instance_group_type="MASTER")
instance_groups.append(master_ig)
if instance_count is not None and int(instance_count) > 1:
core_ig = _build_instance_group(
instance_type=instance_type,
instance_count=int(instance_count)-1,
instance_group_type="CORE")
instance_groups.append(core_ig)
return instance_groups
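As the code above shows, when only the --instance-type/--instance-count shortcut is given, the CLI builds one MASTER group with a single node and, when more nodes were requested, a CORE group with the remainder. A condensed, illustrative sketch of that split (not awscli's actual implementation):

```python
def build_groups_from_shortcut(instance_type, instance_count=None):
    """Expand the --instance-type/--instance-count shortcut into
    MASTER (1 node) and CORE (count - 1 nodes) instance groups."""
    groups = [{'Name': 'MASTER', 'InstanceRole': 'MASTER',
               'InstanceType': instance_type, 'InstanceCount': 1,
               'Market': 'ON_DEMAND'}]
    if instance_count is not None and int(instance_count) > 1:
        groups.append({'Name': 'CORE', 'InstanceRole': 'CORE',
                       'InstanceType': instance_type,
                       'InstanceCount': int(instance_count) - 1,
                       'Market': 'ON_DEMAND'})
    return groups

for group in build_groups_from_shortcut('m4.large', 3):
    print(group['InstanceRole'], group['InstanceCount'])
# -> MASTER 1
#    CORE 2
```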
awscli-1.10.1/awscli/customizations/emr/__init__.py 0000666 4542626 0000144 00000001065 12652514124 023403 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
awscli-1.10.1/awscli/customizations/emr/modifyclusterattributes.py 0000666 4542626 0000144 00000007067 12652514124 026644 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
class ModifyClusterAttr(Command):
NAME = 'modify-cluster-attributes'
DESCRIPTION = ("Modifies the cluster attributes 'visible-to-all-users' and"
" 'termination-protected'.")
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'visible-to-all-users', 'required': False, 'action':
'store_true', 'group_name': 'visible',
'help_text': 'Change cluster visibility for IAM users'},
{'name': 'no-visible-to-all-users', 'required': False, 'action':
'store_true', 'group_name': 'visible',
'help_text': 'Change cluster visibility for IAM users'},
{'name': 'termination-protected', 'required': False, 'action':
'store_true', 'group_name': 'terminate',
'help_text': 'Set termination protection on or off'},
{'name': 'no-termination-protected', 'required': False, 'action':
'store_true', 'group_name': 'terminate',
'help_text': 'Set termination protection on or off'},
]
def _run_main_command(self, args, parsed_globals):
if (args.visible_to_all_users and args.no_visible_to_all_users):
raise exceptions.MutualExclusiveOptionError(
option1='--visible-to-all-users',
option2='--no-visible-to-all-users')
if (args.termination_protected and args.no_termination_protected):
raise exceptions.MutualExclusiveOptionError(
option1='--termination-protected',
option2='--no-termination-protected')
if not(args.termination_protected or args.no_termination_protected
or args.visible_to_all_users or args.no_visible_to_all_users):
raise exceptions.MissingClusterAttributesError()
if (args.visible_to_all_users or args.no_visible_to_all_users):
visible = (args.visible_to_all_users and
not args.no_visible_to_all_users)
parameters = {'JobFlowIds': [args.cluster_id],
'VisibleToAllUsers': visible}
emrutils.call_and_display_response(self._session,
'SetVisibleToAllUsers',
parameters, parsed_globals)
if (args.termination_protected or args.no_termination_protected):
protected = (args.termination_protected and
not args.no_termination_protected)
parameters = {'JobFlowIds': [args.cluster_id],
'TerminationProtected': protected}
emrutils.call_and_display_response(self._session,
'SetTerminationProtection',
parameters, parsed_globals)
return 0
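The command above validates each --flag/--no-flag pair before calling the service. A small standalone sketch of that resolution logic (the function name is illustrative, not part of awscli):

```python
def resolve_boolean_pair(on_flag, off_flag, option1, option2):
    """Resolve a --flag/--no-flag pair: both set is an error,
    neither set means the attribute was not requested (None)."""
    if on_flag and off_flag:
        raise ValueError('You cannot specify both %s and %s options '
                         'together.' % (option1, option2))
    if not (on_flag or off_flag):
        return None
    return on_flag and not off_flag

print(resolve_boolean_pair(True, False, '--termination-protected',
                           '--no-termination-protected'))
# -> True
```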
awscli-1.10.1/awscli/customizations/emr/exceptions.py 0000666 4542626 0000144 00000023275 12652514124 024032 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
class EmrError(Exception):
"""
The base exception class for Emr exceptions.
:ivar msg: The descriptive message associated with the error.
"""
fmt = 'An unspecified error occurred'
def __init__(self, **kwargs):
msg = self.fmt.format(**kwargs)
Exception.__init__(self, msg)
self.kwargs = kwargs
class MissingParametersError(EmrError):
"""
One or more required parameters were not supplied.
:ivar object_name: The object that has missing parameters.
This can be an operation or a parameter (in the
case of inner params). The str() of this object
will be used so it doesn't need to implement anything
other than str().
:ivar missing: The names of the missing parameters.
"""
fmt = ('aws: error: The following required parameters are missing for '
'{object_name}: {missing}.')
class EmptyListError(EmrError):
"""
The provided list is empty.
:ivar param: The provided list parameter
"""
fmt = ('aws: error: The parameter {param} cannot be an empty list.')
class MissingRequiredInstanceGroupsError(EmrError):
"""
In the create-cluster command, none of --instance-groups,
--instance-count, or --instance-type was supplied.
"""
fmt = ('aws: error: Must specify either --instance-groups or '
'--instance-type with --instance-count(optional) to '
'configure instance groups.')
class InstanceGroupsValidationError(EmrError):
"""
--instance-type and --instance-count are shortcut options
for --instance-groups and they cannot be specified
together with --instance-groups
"""
fmt = ('aws: error: You may not specify --instance-type '
'or --instance-count with --instance-groups, '
'because --instance-type and --instance-count are '
'shortcut options for --instance-groups.')
class InvalidAmiVersionError(EmrError):
"""
The supplied ami-version is invalid.
:ivar ami_version: The provided ami_version.
"""
fmt = ('aws: error: The supplied AMI version "{ami_version}" is invalid.'
' Please see AMI Versions Supported in Amazon EMR in '
'Amazon Elastic MapReduce Developer Guide: '
'http://docs.aws.amazon.com/ElasticMapReduce/'
'latest/DeveloperGuide/ami-versions-supported.html')
class MissingBooleanOptionsError(EmrError):
"""
Required boolean options are not supplied.
:ivar true_option
:ivar false_option
"""
fmt = ('aws: error: Must specify one of the following boolean options: '
'{true_option}|{false_option}.')
class UnknownStepTypeError(EmrError):
"""
The provided step type is not supported.
:ivar step_type: the step_type provided.
"""
fmt = ('aws: error: The step type {step_type} is not supported.')
class UnknownIamEndpointError(EmrError):
"""
The IAM endpoint is not known for the specified region.
:ivar region: The region specified.
"""
fmt = 'IAM endpoint not known for region: {region}.' +\
' Specify the iam-endpoint using the --iam-endpoint option.'
class ResolveServicePrincipalError(EmrError):
"""
The service principal could not be resolved from the region or the
endpoint.
"""
fmt = 'Could not resolve the service principal from' +\
' the region or the endpoint.'
class LogUriError(EmrError):
"""
The LogUri is not specified and debugging is enabled for the cluster.
"""
fmt = ('aws: error: LogUri not specified. You must specify a logUri '
'if you enable debugging when creating a cluster.')
class MasterDNSNotAvailableError(EmrError):
"""
Cannot get public dns of master node on the cluster.
"""
fmt = 'Cannot get the public DNS of the master node on the cluster. '\
'Please try again after some time.'
class WrongPuttyKeyError(EmrError):
"""
A wrong key has been used with a compatible program.
"""
fmt = 'The key file format is incorrect. Putty expects a ppk file. '\
'Please refer to documentation at http://docs.aws.amazon.com/'\
'ElasticMapReduce/latest/DeveloperGuide/EMR_SetUp_SSH.html. '
class SSHNotFoundError(EmrError):
"""
SSH or Putty not available.
"""
fmt = 'SSH or Putty not available. Please refer to the documentation '\
'at http://docs.aws.amazon.com/ElasticMapReduce/latest/'\
'DeveloperGuide/EMR_SetUp_SSH.html.'
class SCPNotFoundError(EmrError):
"""
SCP or Pscp not available.
"""
fmt = 'SCP or Pscp not available. Please refer to the documentation '\
'at http://docs.aws.amazon.com/ElasticMapReduce/latest/'\
'DeveloperGuide/EMR_SetUp_SSH.html. '
class SubnetAndAzValidationError(EmrError):
"""
SubnetId and AvailabilityZone are mutually exclusive in --ec2-attributes.
"""
fmt = ('aws: error: You may not specify both a SubnetId and an Availabili'
'tyZone (placement) because ec2SubnetId implies a placement.')
class RequiredOptionsError(EmrError):
"""
Either of option1 or option2 is required.
"""
fmt = ('aws: error: Either {option1} or {option2} is required.')
class MutualExclusiveOptionError(EmrError):
"""
The provided option1 and option2 are mutually exclusive.
:ivar option1
:ivar option2
:ivar message (optional)
"""
def __init__(self, **kwargs):
msg = ('aws: error: You cannot specify both ' +
kwargs.get('option1', '') + ' and ' +
kwargs.get('option2', '') + ' options together.' +
kwargs.get('message', ''))
Exception.__init__(self, msg)
class MissingApplicationsError(EmrError):
"""
The application required for a step is not installed when creating a
cluster.
:ivar applications
"""
def __init__(self, **kwargs):
msg = ('aws: error: Some of the steps require the following'
' applications to be installed: ' +
', '.join(kwargs['applications']) + '. Please install the'
' applications using --applications.')
Exception.__init__(self, msg)
class ClusterTerminatedError(EmrError):
"""
The cluster is terminating or has already terminated.
"""
fmt = 'aws: error: Cluster terminating or already terminated.'
class ClusterStatesFilterValidationError(EmrError):
"""
In the list-clusters command, customers can specify only one
of the following states filters:
--cluster-states, --active, --terminated, --failed
"""
fmt = ('aws: error: You can specify only one of the cluster state '
'filters: --cluster-states, --active, --terminated, --failed.')
class MissingClusterAttributesError(EmrError):
"""
In the modify-cluster-attributes command, customers need to provide
at least one of the following cluster attributes: --visible-to-all-users,
--no-visible-to-all-users, --termination-protected
and --no-termination-protected
"""
fmt = ('aws: error: Must specify one of the following boolean options: '
'--visible-to-all-users|--no-visible-to-all-users, '
'--termination-protected|--no-termination-protected.')
class InvalidEmrFsArgumentsError(EmrError):
"""
The provided EMRFS parameters are invalid because the parent feature,
e.g., Consistent View, CSE, or SSE, is not configured.
:ivar invalid: Invalid parameters
:ivar parent_object_name: Parent feature name
"""
fmt = ('aws: error: {parent_object_name} is not specified. Thus, the '
'following parameters are invalid: {invalid}.')
class DuplicateEmrFsConfigurationError(EmrError):
fmt = ('aws: error: EMRFS should be configured either using '
'--configuration or --emrfs but not both')
class UnknownCseProviderTypeError(EmrError):
"""
The provided EMRFS client-side encryption provider type is not supported.
:ivar provider_type: the provider_type provided.
"""
fmt = ('aws: error: The client side encryption type "{provider_type}" is '
'not supported. You must specify either KMS or Custom')
class UnknownEncryptionTypeError(EmrError):
"""
The provided encryption type is not supported.
:ivar encryption: the encryption type provided.
"""
fmt = ('aws: error: The encryption type "{encryption}" is invalid. '
'You must specify either ServerSide or ClientSide')
class BothSseAndEncryptionConfiguredError(EmrError):
"""
Only one of SSE or Encryption can be configured.
:ivar sse: Value for SSE
:ivar encryption: Value for encryption
"""
fmt = ('aws: error: Both SSE={sse} and Encryption={encryption} are '
'configured for --emrfs. You must specify only one of the two.')
class InvalidBooleanConfigError(EmrError):
fmt = ("aws: error: {config_value} for {config_key} in the config file is "
"invalid. The value should be either 'True' or 'False'. Use "
"'aws configure set {profile_var_name}.emr.{config_key} ' "
"command to set a valid value.")
class UnsupportedCommandWithReleaseError(EmrError):
fmt = ("aws: error: {command} is not supported with "
"'{release_label}' release.")
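All of the exceptions above rely on the base class formatting a class-level `fmt` template with the keyword arguments passed to the constructor. A condensed, runnable illustration of that pattern:

```python
class EmrError(Exception):
    """Base class: format the class-level 'fmt' template with kwargs."""
    fmt = 'An unspecified error occurred'

    def __init__(self, **kwargs):
        Exception.__init__(self, self.fmt.format(**kwargs))
        self.kwargs = kwargs

class EmptyListError(EmrError):
    fmt = 'aws: error: The parameter {param} cannot be an empty list.'

print(str(EmptyListError(param='--steps')))
# -> aws: error: The parameter --steps cannot be an empty list.
```

Subclasses only declare a format string; the base constructor produces the final message, so every error carries a uniform `aws: error:` prefix.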
awscli-1.10.1/awscli/customizations/emr/listclusters.py 0000666 4542626 0000144 00000007277 12652514124 024407 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.arguments import CustomArgument
from awscli.customizations.emr import helptext
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import constants
def modify_list_clusters_argument(argument_table, **kwargs):
argument_table['cluster-states'] = \
ClusterStatesArgument(
name='cluster-states',
help_text=helptext.LIST_CLUSTERS_CLUSTER_STATES,
nargs='+')
argument_table['active'] = \
ActiveStateArgument(
name='active', help_text=helptext.LIST_CLUSTERS_STATE_FILTERS,
action='store_true', group_name='states_filter')
argument_table['terminated'] = \
TerminatedStateArgument(
name='terminated',
action='store_true', group_name='states_filter')
argument_table['failed'] = \
FailedStateArgument(
name='failed', action='store_true', group_name='states_filter')
argument_table['created-before'] = CreatedBefore(
name='created-before', help_text=helptext.LIST_CLUSTERS_CREATED_BEFORE,
cli_type_name='timestamp')
argument_table['created-after'] = CreatedAfter(
name='created-after', help_text=helptext.LIST_CLUSTERS_CREATED_AFTER,
cli_type_name='timestamp')
class ClusterStatesArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is not None:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = value
class ActiveStateArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = constants.LIST_CLUSTERS_ACTIVE_STATES
class TerminatedStateArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = \
constants.LIST_CLUSTERS_TERMINATED_STATES
class FailedStateArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = constants.LIST_CLUSTERS_FAILED_STATES
class CreatedBefore(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
parameters['CreatedBefore'] = value
class CreatedAfter(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
parameters['CreatedAfter'] = value
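Each state-filter argument above refuses to run if another filter has already populated `ClusterStates`. A minimal sketch of that guard (the state list shown is an assumed example, not copied from awscli's constants):

```python
ACTIVE_STATES = ['STARTING', 'BOOTSTRAPPING', 'RUNNING',
                 'WAITING', 'TERMINATING']  # assumed example values

def apply_active_filter(parameters):
    """Mimic ActiveStateArgument.add_to_params: only one cluster-state
    filter may populate parameters['ClusterStates']."""
    if parameters.get('ClusterStates'):
        raise ValueError('You can specify only one of the cluster '
                         'state filters.')
    parameters['ClusterStates'] = ACTIVE_STATES

params = {}
apply_active_filter(params)
print(len(params['ClusterStates']))
# -> 5
```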
awscli-1.10.1/awscli/customizations/emr/constants.py 0000666 4542626 0000144 00000014472 12652514124 023656 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# Declare all the constants used by EMR in this file.
EC2_ROLE_NAME = "EMR_EC2_DefaultRole"
EMR_ROLE_NAME = "EMR_DefaultRole"
EC2_ROLE_ARN_PATTERN = ("arn:{{region_suffix}}:iam::aws:policy/service-role/"
"AmazonElasticMapReduceforEC2Role")
EMR_ROLE_ARN_PATTERN = ("arn:{{region_suffix}}:iam::aws:policy/service-role/"
"AmazonElasticMapReduceRole")
# Action on failure
CONTINUE = 'CONTINUE'
CANCEL_AND_WAIT = 'CANCEL_AND_WAIT'
TERMINATE_CLUSTER = 'TERMINATE_CLUSTER'
DEFAULT_FAILURE_ACTION = CONTINUE
# Market type
SPOT = 'SPOT'
ON_DEMAND = 'ON_DEMAND'
SCRIPT_RUNNER_PATH = '/libs/script-runner/script-runner.jar'
COMMAND_RUNNER = 'command-runner.jar'
DEBUGGING_PATH = '/libs/state-pusher/0.1/fetch'
DEBUGGING_COMMAND = 'state-pusher-script'
DEBUGGING_NAME = 'Setup Hadoop Debugging'
CONFIG_HADOOP_PATH = '/bootstrap-actions/configure-hadoop'
# S3 copy bootstrap action
S3_GET_BA_NAME = 'S3 get'
S3_GET_BA_SRC = '-s'
S3_GET_BA_DEST = '-d'
S3_GET_BA_FORCE = '-f'
# EMRFS
EMRFS_BA_NAME = 'Setup EMRFS'
EMRFS_BA_ARG_KEY = '-e'
EMRFS_CONSISTENT_KEY = 'fs.s3.consistent'
EMRFS_SSE_KEY = 'fs.s3.enableServerSideEncryption'
EMRFS_RETRY_COUNT_KEY = 'fs.s3.consistent.retryCount'
EMRFS_RETRY_PERIOD_KEY = 'fs.s3.consistent.retryPeriodSeconds'
EMRFS_CSE_KEY = 'fs.s3.cse.enabled'
EMRFS_CSE_KMS_KEY_ID_KEY = 'fs.s3.cse.kms.keyId'
EMRFS_CSE_ENCRYPTION_MATERIALS_PROVIDER_KEY = \
'fs.s3.cse.encryptionMaterialsProvider'
EMRFS_CSE_CUSTOM_PROVIDER_URI_KEY = 'fs.s3.cse.custom.provider.uri'
EMRFS_CSE_KMS_PROVIDER_FULL_CLASS_NAME = ('com.amazon.ws.emr.hadoop.fs.cse.'
'KMSEncryptionMaterialsProvider')
EMRFS_CSE_CUSTOM_S3_GET_BA_PATH = 'file:/usr/share/aws/emr/scripts/s3get'
EMRFS_CUSTOM_DEST_PATH = '/usr/share/aws/emr/auxlib'
EMRFS_SERVER_SIDE = 'SERVERSIDE'
EMRFS_CLIENT_SIDE = 'CLIENTSIDE'
EMRFS_KMS = 'KMS'
EMRFS_CUSTOM = 'CUSTOM'
EMRFS_SITE = 'emrfs-site'
MAX_BOOTSTRAP_ACTION_NUMBER = 16
BOOTSTRAP_ACTION_NAME = 'Bootstrap action'
HIVE_BASE_PATH = '/libs/hive'
HIVE_SCRIPT_PATH = '/libs/hive/hive-script'
HIVE_SCRIPT_COMMAND = 'hive-script'
PIG_BASE_PATH = '/libs/pig'
PIG_SCRIPT_PATH = '/libs/pig/pig-script'
PIG_SCRIPT_COMMAND = 'pig-script'
GANGLIA_INSTALL_BA_PATH = '/bootstrap-actions/install-ganglia'
# HBase
HBASE_INSTALL_BA_PATH = '/bootstrap-actions/setup-hbase'
HBASE_PATH_HADOOP1_INSTALL_JAR = '/home/hadoop/lib/hbase-0.92.0.jar'
HBASE_PATH_HADOOP2_INSTALL_JAR = '/home/hadoop/lib/hbase.jar'
HBASE_INSTALL_ARG = ['emr.hbase.backup.Main', '--start-master']
HBASE_JAR_PATH = '/home/hadoop/lib/hbase.jar'
HBASE_MAIN = 'emr.hbase.backup.Main'
# HBase commands
HBASE_RESTORE = '--restore'
HBASE_BACKUP_DIR_FOR_RESTORE = '--backup-dir-to-restore'
HBASE_BACKUP_VERSION_FOR_RESTORE = '--backup-version'
HBASE_BACKUP = '--backup'
HBASE_SCHEDULED_BACKUP = '--set-scheduled-backup'
HBASE_BACKUP_DIR = '--backup-dir'
HBASE_INCREMENTAL_BACKUP_INTERVAL = '--incremental-backup-time-interval'
HBASE_INCREMENTAL_BACKUP_INTERVAL_UNIT = '--incremental-backup-time-unit'
HBASE_FULL_BACKUP_INTERVAL = '--full-backup-time-interval'
HBASE_FULL_BACKUP_INTERVAL_UNIT = '--full-backup-time-unit'
HBASE_DISABLE_FULL_BACKUP = '--disable-full-backups'
HBASE_DISABLE_INCREMENTAL_BACKUP = '--disable-incremental-backups'
HBASE_BACKUP_STARTTIME = '--start-time'
HBASE_BACKUP_CONSISTENT = '--consistent'
HBASE_BACKUP_STEP_NAME = 'Backup HBase'
HBASE_RESTORE_STEP_NAME = 'Restore HBase'
HBASE_SCHEDULE_BACKUP_STEP_NAME = 'Modify Backup Schedule'
IMPALA_INSTALL_PATH = '/libs/impala/setup-impala'
# Step
HADOOP_STREAMING_PATH = '/home/hadoop/contrib/streaming/hadoop-streaming.jar'
HADOOP_STREAMING_COMMAND = 'hadoop-streaming'
CUSTOM_JAR = 'custom_jar'
HIVE = 'hive'
PIG = 'pig'
IMPALA = 'impala'
STREAMING = 'streaming'
GANGLIA = 'ganglia'
HBASE = 'hbase'
SPARK = 'spark'
DEFAULT_CUSTOM_JAR_STEP_NAME = 'Custom JAR'
DEFAULT_STREAMING_STEP_NAME = 'Streaming program'
DEFAULT_HIVE_STEP_NAME = 'Hive program'
DEFAULT_PIG_STEP_NAME = 'Pig program'
DEFAULT_IMPALA_STEP_NAME = 'Impala program'
DEFAULT_SPARK_STEP_NAME = 'Spark application'
ARGS = '--args'
RUN_HIVE_SCRIPT = '--run-hive-script'
HIVE_VERSIONS = '--hive-versions'
HIVE_STEP_CONFIG = 'HiveStepConfig'
RUN_PIG_SCRIPT = '--run-pig-script'
PIG_VERSIONS = '--pig-versions'
PIG_STEP_CONFIG = 'PigStepConfig'
RUN_IMPALA_SCRIPT = '--run-impala-script'
SPARK_SUBMIT_PATH = '/home/hadoop/spark/bin/spark-submit'
SPARK_SUBMIT_COMMAND = 'spark-submit'
IMPALA_STEP_CONFIG = 'ImpalaStepConfig'
SPARK_STEP_CONFIG = 'SparkStepConfig'
STREAMING_STEP_CONFIG = 'StreamingStepConfig'
CUSTOM_JAR_STEP_CONFIG = 'CustomJARStepConfig'
INSTALL_PIG_ARG = '--install-pig'
INSTALL_PIG_NAME = 'Install Pig'
INSTALL_HIVE_ARG = '--install-hive'
INSTALL_HIVE_NAME = 'Install Hive'
HIVE_SITE_KEY = '--hive-site'
INSTALL_HIVE_SITE_ARG = '--install-hive-site'
INSTALL_HIVE_SITE_NAME = 'Install Hive Site Configuration'
BASE_PATH_ARG = '--base-path'
INSTALL_GANGLIA_NAME = 'Install Ganglia'
INSTALL_HBASE_NAME = 'Install HBase'
START_HBASE_NAME = 'Start HBase'
INSTALL_IMPALA_NAME = 'Install Impala'
IMPALA_VERSION = '--impala-version'
IMPALA_CONF = '--impala-conf'
FULL = 'full'
INCREMENTAL = 'incremental'
MINUTES = 'minutes'
HOURS = 'hours'
DAYS = 'days'
NOW = 'now'
TRUE = 'true'
FALSE = 'false'
EC2 = 'ec2'
EMR = 'elasticmapreduce'
LATEST = 'latest'
APPLICATIONS = ["HIVE", "PIG", "HBASE", "GANGLIA", "IMPALA", "SPARK", "MAPR",
"MAPR_M3", "MAPR_M5", "MAPR_M7"]
SSH_USER = 'hadoop'
STARTING_STATES = ['STARTING', 'BOOTSTRAPPING']
TERMINATED_STATES = ['TERMINATED', 'TERMINATING', 'TERMINATED_WITH_ERRORS']
# list-clusters
LIST_CLUSTERS_ACTIVE_STATES = ['STARTING', 'BOOTSTRAPPING', 'RUNNING',
'WAITING', 'TERMINATING']
LIST_CLUSTERS_TERMINATED_STATES = ['TERMINATED']
LIST_CLUSTERS_FAILED_STATES = ['TERMINATED_WITH_ERRORS']
awscli-1.10.1/awscli/customizations/emr/addtags.py 0000666 4542626 0000144 00000002100 12652514124 023242 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.arguments import CustomArgument
from awscli.customizations.emr import helptext
from awscli.customizations.emr import emrutils
def modify_tags_argument(argument_table, **kwargs):
argument_table['tags'] = TagsArgument('tags', required=True,
help_text=helptext.TAGS, nargs='+')
class TagsArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
parameters['Tags'] = emrutils.parse_tags(value)
awscli-1.10.1/awscli/customizations/emr/emr.py 0000666 4542626 0000144 00000006271 12652514124 022433 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import hbase
from awscli.customizations.emr import ssh
from awscli.customizations.emr.addsteps import AddSteps
from awscli.customizations.emr.createcluster import CreateCluster
from awscli.customizations.emr.addinstancegroups import AddInstanceGroups
from awscli.customizations.emr.createdefaultroles import CreateDefaultRoles
from awscli.customizations.emr.modifyclusterattributes import ModifyClusterAttr
from awscli.customizations.emr.installapplications import InstallApplications
from awscli.customizations.emr.describecluster import DescribeCluster
from awscli.customizations.emr.terminateclusters import TerminateClusters
from awscli.customizations.emr.addtags import modify_tags_argument
from awscli.customizations.emr.listclusters \
import modify_list_clusters_argument
from awscli.customizations.emr.command import override_args_required_option
def emr_initialize(cli):
"""
The entry point for EMR high level commands.
"""
cli.register('building-command-table.emr', register_commands)
cli.register('building-argument-table.emr.add-tags', modify_tags_argument)
cli.register(
'building-argument-table.emr.list-clusters',
modify_list_clusters_argument)
cli.register('before-building-argument-table-parser.emr.*',
override_args_required_option)
def register_commands(command_table, session, **kwargs):
"""
Called when the EMR command table is being built. Used to inject new
high level commands into the command list. These high level commands
must not collide with existing low-level API call names.
"""
command_table['terminate-clusters'] = TerminateClusters(session)
command_table['describe-cluster'] = DescribeCluster(session)
command_table['modify-cluster-attributes'] = ModifyClusterAttr(session)
command_table['install-applications'] = InstallApplications(session)
command_table['create-cluster'] = CreateCluster(session)
command_table['add-steps'] = AddSteps(session)
command_table['restore-from-hbase-backup'] = \
hbase.RestoreFromHBaseBackup(session)
command_table['create-hbase-backup'] = hbase.CreateHBaseBackup(session)
command_table['schedule-hbase-backup'] = hbase.ScheduleHBaseBackup(session)
command_table['disable-hbase-backups'] = \
hbase.DisableHBaseBackups(session)
command_table['create-default-roles'] = CreateDefaultRoles(session)
command_table['add-instance-groups'] = AddInstanceGroups(session)
command_table['ssh'] = ssh.SSH(session)
command_table['socks'] = ssh.Socks(session)
command_table['get'] = ssh.Get(session)
command_table['put'] = ssh.Put(session)
awscli-1.10.1/awscli/customizations/argrename.py 0000666 4542626 0000144 00000007726 12652514124 023030 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
"""
from awscli.customizations import utils
ARGUMENT_RENAMES = {
# Mapping of original arg to renamed arg.
# The key is ..argname
# The first part of the key is used for event registration
# so if you wanted to rename something for an entire service you
# could say 'ec2.*.dry-run': 'renamed-arg-name', or if you wanted
# to rename across all services you could say '*.*.dry-run': 'new-name'.
'ec2.create-image.no-no-reboot': 'reboot',
'ec2.*.no-egress': 'ingress',
'ec2.*.no-disable-api-termination': 'enable-api-termination',
'opsworks.*.region': 'stack-region',
'elastictranscoder.*.output': 'job-output',
'swf.register-activity-type.version': 'activity-version',
'swf.register-workflow-type.version': 'workflow-version',
'datapipeline.*.query': 'objects-query',
'datapipeline.get-pipeline-definition.version': 'pipeline-version',
'emr.*.job-flow-ids': 'cluster-ids',
'emr.*.job-flow-id': 'cluster-id',
'cloudsearchdomain.search.query': 'search-query',
'cloudsearchdomain.suggest.query': 'suggest-query',
'sns.subscribe.endpoint': 'notification-endpoint',
'deploy.*.s-3-location': 's3-location',
'deploy.*.ec-2-tag-filters': 'ec2-tag-filters',
'codepipeline.get-pipeline.version': 'pipeline-version',
'codepipeline.create-custom-action-type.version': 'action-version',
'codepipeline.delete-custom-action-type.version': 'action-version',
'route53.delete-traffic-policy.version': 'traffic-policy-version',
'route53.get-traffic-policy.version': 'traffic-policy-version',
'route53.update-traffic-policy-comment.version': 'traffic-policy-version'
}
# Same format as ARGUMENT_RENAMES, but instead of renaming the arguments,
# an alias is created to the original argument and marked as undocumented.
# This is useful when you need to change the name of an argument but you
# still need to support the old argument.
HIDDEN_ALIASES = {
'cognito-identity.create-identity-pool.open-id-connect-provider-arns':
'open-id-connect-provider-ar-ns',
'storagegateway.describe-tapes.tape-arns': 'tape-ar-ns',
'storagegateway.describe-tape-archives.tape-arns': 'tape-ar-ns',
'storagegateway.describe-vtl-devices.vtl-device-arns': 'vtl-device-ar-ns',
'storagegateway.describe-cached-iscsi-volumes.volume-arns': 'volume-ar-ns',
'storagegateway.describe-stored-iscsi-volumes.volume-arns': 'volume-ar-ns',
}
def register_arg_renames(cli):
for original, new_name in ARGUMENT_RENAMES.items():
event_portion, original_arg_name = original.rsplit('.', 1)
cli.register('building-argument-table.%s' % event_portion,
rename_arg(original_arg_name, new_name))
for original, new_name in HIDDEN_ALIASES.items():
event_portion, original_arg_name = original.rsplit('.', 1)
cli.register('building-argument-table.%s' % event_portion,
hidden_alias(original_arg_name, new_name))
def rename_arg(original_arg_name, new_name):
def _rename_arg(argument_table, **kwargs):
if original_arg_name in argument_table:
utils.rename_argument(argument_table, original_arg_name, new_name)
return _rename_arg
def hidden_alias(original_arg_name, alias_name):
def _alias_arg(argument_table, **kwargs):
if original_arg_name in argument_table:
utils.make_hidden_alias(argument_table, original_arg_name, alias_name)
return _alias_arg
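A standalone sketch (not part of awscli) of how `register_arg_renames` above derives the event name from a rename key: the last dot-separated piece is the argument name, and the rest becomes the `building-argument-table.<event_portion>` event. The helper name `split_rename_key` is hypothetical, introduced only for illustration.

```python
def split_rename_key(key):
    # Mirrors the rsplit('.', 1) done in register_arg_renames: everything
    # before the last dot is the event portion, the remainder is the arg.
    event_portion, original_arg_name = key.rsplit('.', 1)
    return 'building-argument-table.%s' % event_portion, original_arg_name

event, arg = split_rename_key('emr.*.job-flow-ids')
print(event)  # building-argument-table.emr.*
print(arg)    # job-flow-ids
```

Because the event portion keeps its wildcard, a single key like `'emr.*.job-flow-ids'` registers one handler that fires for every emr command.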
awscli-1.10.1/awscli/customizations/putmetricdata.py 0000666 4542626 0000144 00000013670 12652514124 023730 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization adds the following scalar parameters to the
cloudwatch put-metric-data operation:
* --metric-name
* --dimensions
* --timestamp
* --value
* --statistic-values
* --unit
"""
import decimal
from awscli.arguments import CustomArgument
from awscli.utils import split_on_commas
from awscli.customizations.utils import validate_mutually_exclusive_handler
def register_put_metric_data(event_handler):
event_handler.register('building-argument-table.cloudwatch.put-metric-data',
_promote_args)
event_handler.register(
'operation-args-parsed.cloudwatch.put-metric-data',
validate_mutually_exclusive_handler(
['metric_data'], ['metric_name', 'timestamp', 'unit', 'value',
'dimensions', 'statistic_values']))
def _promote_args(argument_table, **kwargs):
# We're providing top level params for metric-data. This means
# that metric-data is no longer a required arg. We do need
# to check that either metric-data or the complex args we've added
# have been provided.
argument_table['metric-data'].required = False
argument_table['metric-name'] = PutMetricArgument(
'metric-name', help_text='The name of the metric.')
argument_table['timestamp'] = PutMetricArgument(
'timestamp', help_text='The time stamp used for the metric. '
'If not specified, the default value is '
'set to the time the metric data was '
'received.')
argument_table['unit'] = PutMetricArgument(
'unit', help_text='The unit of the metric.')
argument_table['value'] = PutMetricArgument(
'value', help_text='The value for the metric. Although the --value '
'parameter accepts numbers of type Double, '
'Amazon CloudWatch truncates values with very '
'large exponents. Values with base-10 exponents '
'greater than 126 (1 x 10^126) are truncated. '
'Likewise, values with base-10 exponents less '
'than -130 (1 x 10^-130) are also truncated.')
argument_table['dimensions'] = PutMetricArgument(
'dimensions', help_text=(
'The --dimension argument further expands '
'on the identity of a metric using a Name=Value '
'pair, separated by commas, for example: '
'--dimensions User=SomeUser,Stack=Test'))
argument_table['statistic-values'] = PutMetricArgument(
'statistic-values', help_text='A set of statistical values describing '
'the metric.')
def insert_first_element(name):
def _wrap_add_to_params(func):
def _add_to_params(self, parameters, value):
if value is None:
return
if name not in parameters:
# We're taking a shortcut here and assuming that the first
# element is a struct type, hence the default value of
# a dict. If this was going to be more general we'd need
# to have this parameterized, i.e. you pass in some sort of
# factory function that creates the initial starting value.
parameters[name] = [{}]
first_element = parameters[name][0]
return func(self, first_element, value)
return _add_to_params
return _wrap_add_to_params
class PutMetricArgument(CustomArgument):
def add_to_params(self, parameters, value):
method_name = '_add_param_%s' % self.name.replace('-', '_')
return getattr(self, method_name)(parameters, value)
@insert_first_element('MetricData')
def _add_param_metric_name(self, first_element, value):
first_element['MetricName'] = value
@insert_first_element('MetricData')
def _add_param_unit(self, first_element, value):
first_element['Unit'] = value
@insert_first_element('MetricData')
def _add_param_timestamp(self, first_element, value):
first_element['Timestamp'] = value
@insert_first_element('MetricData')
def _add_param_value(self, first_element, value):
# Use a Decimal to avoid loss in precision.
first_element['Value'] = decimal.Decimal(value)
@insert_first_element('MetricData')
def _add_param_dimensions(self, first_element, value):
# Dimensions needs a little more processing. We support
# the key=value,key2=value syntax so we need to parse
# that.
dimensions = []
for pair in split_on_commas(value):
key, value = pair.split('=')
dimensions.append({'Name': key, 'Value': value})
first_element['Dimensions'] = dimensions
@insert_first_element('MetricData')
def _add_param_statistic_values(self, first_element, value):
# StatisticValues is a struct type so we are parsing
# a csv keyval list into a dict.
statistics = {}
for pair in split_on_commas(value):
key, value = pair.split('=')
# There are four supported values: Maximum, Minimum, SampleCount,
# and Sum. All of them are documented as a type double so we can
# convert these to a decimal value to preserve precision.
statistics[key] = decimal.Decimal(value)
first_element['StatisticValues'] = statistics
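A minimal standalone sketch of the `Name=Value,Name2=Value2` parsing done in `_add_param_dimensions` above. Assumption: a plain comma split is used here for brevity; the real CLI calls `awscli.utils.split_on_commas`, which also handles quoted values containing commas.

```python
def parse_dimensions(value):
    # Split "Name=Value,Name2=Value2" into CloudWatch dimension dicts,
    # as _add_param_dimensions does (minus quoted-value handling).
    dimensions = []
    for pair in value.split(','):
        name, val = pair.split('=')
        dimensions.append({'Name': name, 'Value': val})
    return dimensions

print(parse_dimensions('User=SomeUser,Stack=Test'))
# [{'Name': 'User', 'Value': 'SomeUser'}, {'Name': 'Stack', 'Value': 'Test'}]
```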
awscli-1.10.1/awscli/customizations/iot.py 0000666 4542626 0000144 00000004537 12652514125 021664 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization makes it easier to save various pieces of data
returned from iot commands that would typically need to be saved to a
file. This customization adds the following options:
- aws iot create-certificate-from-csr
- ``--certificate-pem-outfile``: certificatePem
- aws iot create-keys-and-certificate
- ``--certificate-pem-outfile``: certificatePem
- ``--public-key-outfile``: keyPair.PublicKey
- ``--private-key-outfile``: keyPair.PrivateKey
"""
from awscli.customizations.arguments import QueryOutFileArgument
def register_create_keys_and_cert_arguments(session, argument_table, **kwargs):
"""Add outfile save arguments to create-keys-and-certificate
- ``--certificate-pem-outfile``
- ``--public-key-outfile``
- ``--private-key-outfile``
"""
after_event = 'after-call.iot.CreateKeysAndCertificate'
argument_table['certificate-pem-outfile'] = QueryOutFileArgument(
session=session, name='certificate-pem-outfile',
query='certificatePem', after_call_event=after_event, perm=0o600)
argument_table['public-key-outfile'] = QueryOutFileArgument(
session=session, name='public-key-outfile', query='keyPair.PublicKey',
after_call_event=after_event, perm=0o600)
argument_table['private-key-outfile'] = QueryOutFileArgument(
session=session, name='private-key-outfile',
query='keyPair.PrivateKey', after_call_event=after_event, perm=0o600)
def register_create_keys_from_csr_arguments(session, argument_table, **kwargs):
"""Add certificate-pem-outfile to create-certificate-from-csr"""
argument_table['certificate-pem-outfile'] = QueryOutFileArgument(
session=session, name='certificate-pem-outfile',
query='certificatePem',
after_call_event='after-call.iot.CreateCertificateFromCsr', perm=0o600)
awscli-1.10.1/awscli/customizations/codedeploy/ 0000777 4542626 0000144 00000000000 12652514126 022636 5 ustar pysdk-ci amazon 0000000 0000000
awscli-1.10.1/awscli/customizations/codedeploy/locationargs.py 0000666 4542626 0000144 00000013426 12652514124 025701 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.argprocess import unpack_cli_arg
from awscli.arguments import CustomArgument
from awscli.arguments import create_argument_model_from_schema
S3_LOCATION_ARG_DESCRIPTION = {
'name': 's3-location',
'required': False,
'help_text': (
'Information about the location of the application revision in Amazon '
'S3. You must specify the bucket, the key, and bundleType. '
'Optionally, you can also specify an eTag and version.'
)
}
S3_LOCATION_SCHEMA = {
"type": "object",
"properties": {
"bucket": {
"type": "string",
"description": "The Amazon S3 bucket name.",
"required": True
},
"key": {
"type": "string",
"description": "The Amazon S3 object key name.",
"required": True
},
"bundleType": {
"type": "string",
"description": "The format of the bundle stored in Amazon S3.",
"enum": ["tar", "tgz", "zip"],
"required": True
},
"eTag": {
"type": "string",
"description": "The Amazon S3 object eTag.",
"required": False
},
"version": {
"type": "string",
"description": "The Amazon S3 object version.",
"required": False
}
}
}
GITHUB_LOCATION_ARG_DESCRIPTION = {
'name': 'github-location',
'required': False,
'help_text': (
'Information about the location of the application revision in '
'GitHub. You must specify the repository and commit ID that '
'references the application revision. For the repository, use the '
'format GitHub-account/repository-name or GitHub-org/repository-name. '
'For the commit ID, use the SHA1 Git commit reference.'
)
}
GITHUB_LOCATION_SCHEMA = {
"type": "object",
"properties": {
"repository": {
"type": "string",
"description": (
"The GitHub account or organization and repository. Specify "
"as GitHub-account/repository or GitHub-org/repository."
),
"required": True
},
"commitId": {
"type": "string",
"description": "The SHA1 Git commit reference.",
"required": True
}
}
}
def modify_revision_arguments(argument_table, session, **kwargs):
s3_model = create_argument_model_from_schema(S3_LOCATION_SCHEMA)
argument_table[S3_LOCATION_ARG_DESCRIPTION['name']] = (
S3LocationArgument(
argument_model=s3_model,
session=session,
**S3_LOCATION_ARG_DESCRIPTION
)
)
github_model = create_argument_model_from_schema(GITHUB_LOCATION_SCHEMA)
argument_table[GITHUB_LOCATION_ARG_DESCRIPTION['name']] = (
GitHubLocationArgument(
argument_model=github_model,
session=session,
**GITHUB_LOCATION_ARG_DESCRIPTION
)
)
argument_table['revision'].required = False
class LocationArgument(CustomArgument):
def __init__(self, session, *args, **kwargs):
super(LocationArgument, self).__init__(*args, **kwargs)
self._session = session
def add_to_params(self, parameters, value):
if value is None:
return
parsed = self._session.emit_first_non_none_response(
'process-cli-arg.codedeploy.%s' % self.name,
param=self.argument_model,
cli_argument=self,
value=value,
operation=None
)
if parsed is None:
parsed = unpack_cli_arg(self, value)
parameters['revision'] = self.build_revision_location(parsed)
def build_revision_location(self, value_dict):
"""
Repack the input structure into a revisionLocation.
"""
raise NotImplementedError("build_revision_location")
class S3LocationArgument(LocationArgument):
def build_revision_location(self, value_dict):
required = ['bucket', 'key', 'bundleType']
valid = lambda k: value_dict.get(k, False)
if not all(map(valid, required)):
raise RuntimeError(
'--s3-location must specify bucket, key and bundleType.'
)
revision = {
"revisionType": "S3",
"s3Location": {
"bucket": value_dict['bucket'],
"key": value_dict['key'],
"bundleType": value_dict['bundleType']
}
}
if 'eTag' in value_dict:
revision['s3Location']['eTag'] = value_dict['eTag']
if 'version' in value_dict:
revision['s3Location']['version'] = value_dict['version']
return revision
class GitHubLocationArgument(LocationArgument):
def build_revision_location(self, value_dict):
required = ['repository', 'commitId']
valid = lambda k: value_dict.get(k, False)
if not all(map(valid, required)):
raise RuntimeError(
'--github-location must specify repository and commitId.'
)
return {
"revisionType": "GitHub",
"gitHubLocation": {
"repository": value_dict['repository'],
"commitId": value_dict['commitId']
}
}
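A standalone check (hypothetical values; the helper name `build_s3_revision` is introduced only for this sketch) of the revision structure that `S3LocationArgument.build_revision_location` above assembles from the parsed `--s3-location` arguments.

```python
def build_s3_revision(value_dict):
    # Same packing as S3LocationArgument.build_revision_location:
    # require bucket/key/bundleType, pass through eTag/version if given.
    required = ['bucket', 'key', 'bundleType']
    if not all(value_dict.get(k) for k in required):
        raise RuntimeError(
            '--s3-location must specify bucket, key and bundleType.')
    revision = {
        'revisionType': 'S3',
        's3Location': {k: value_dict[k] for k in required},
    }
    for optional in ('eTag', 'version'):
        if optional in value_dict:
            revision['s3Location'][optional] = value_dict[optional]
    return revision

rev = build_s3_revision({'bucket': 'my-bucket', 'key': 'app.zip',
                         'bundleType': 'zip'})
print(rev['revisionType'])  # S3
```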
awscli-1.10.1/awscli/customizations/codedeploy/register.py 0000666 4542626 0000144 00000016034 12652514124 025036 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
from awscli.customizations.commands import BasicCommand
from awscli.customizations.codedeploy.systems import DEFAULT_CONFIG_FILE
from awscli.customizations.codedeploy.utils import \
validate_region, validate_instance_name, validate_tags, \
validate_iam_user_arn, INSTANCE_NAME_ARG, IAM_USER_ARN_ARG
class Register(BasicCommand):
NAME = 'register'
DESCRIPTION = (
"Creates an IAM user for the on-premises instance, if not provided, "
"and saves the user's credentials to an on-premises instance "
"configuration file; registers the on-premises instance with AWS "
"CodeDeploy; and optionally adds tags to the on-premises instance."
)
TAGS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Key": {
"description": "The tag key.",
"type": "string",
"required": True
},
"Value": {
"description": "The tag value.",
"type": "string",
"required": True
}
}
}
}
ARG_TABLE = [
INSTANCE_NAME_ARG,
{
'name': 'tags',
'synopsis': '--tags ',
'required': False,
'nargs': '+',
'schema': TAGS_SCHEMA,
'help_text': (
'Optional. The list of key/value pairs to tag the on-premises '
'instance.'
)
},
IAM_USER_ARN_ARG
]
def _run_main(self, parsed_args, parsed_globals):
params = parsed_args
params.session = self._session
validate_region(params, parsed_globals)
validate_instance_name(params)
validate_tags(params)
validate_iam_user_arn(params)
self.codedeploy = self._session.create_client(
'codedeploy',
region_name=params.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl
)
self.iam = self._session.create_client(
'iam',
region_name=params.region
)
try:
if not params.iam_user_arn:
self._create_iam_user(params)
self._create_access_key(params)
self._create_user_policy(params)
self._create_config(params)
self._register_instance(params)
if params.tags:
self._add_tags(params)
sys.stdout.write(
'Copy the on-premises configuration file named {0} to the '
'on-premises instance, and run the following command on the '
'on-premises instance to install and configure the AWS '
'CodeDeploy Agent:\n'
'aws deploy install --config-file {0}\n'.format(
DEFAULT_CONFIG_FILE
)
)
except Exception as e:
sys.stdout.flush()
sys.stderr.write(
'ERROR\n'
'{0}\n'
'Register the on-premises instance by following the '
'instructions in "Configure Existing On-Premises Instances by '
'Using AWS CodeDeploy" in the AWS CodeDeploy User '
'Guide.\n'.format(e)
)
def _create_iam_user(self, params):
sys.stdout.write('Creating the IAM user... ')
params.user_name = params.instance_name
response = self.iam.create_user(
Path='/AWS/CodeDeploy/',
UserName=params.user_name
)
params.iam_user_arn = response['User']['Arn']
sys.stdout.write(
'DONE\n'
'IamUserArn: {0}\n'.format(
params.iam_user_arn
)
)
def _create_access_key(self, params):
sys.stdout.write('Creating the IAM user access key... ')
response = self.iam.create_access_key(
UserName=params.user_name
)
params.access_key_id = response['AccessKey']['AccessKeyId']
params.secret_access_key = response['AccessKey']['SecretAccessKey']
sys.stdout.write(
'DONE\n'
'AccessKeyId: {0}\n'
'SecretAccessKey: {1}\n'.format(
params.access_key_id,
params.secret_access_key
)
)
def _create_user_policy(self, params):
sys.stdout.write('Creating the IAM user policy... ')
params.policy_name = 'codedeploy-agent'
params.policy_document = (
'{\n'
' "Version": "2012-10-17",\n'
' "Statement": [ {\n'
' "Action": [ "s3:Get*", "s3:List*" ],\n'
' "Effect": "Allow",\n'
' "Resource": "*"\n'
' } ]\n'
'}'
)
self.iam.put_user_policy(
UserName=params.user_name,
PolicyName=params.policy_name,
PolicyDocument=params.policy_document
)
sys.stdout.write(
'DONE\n'
'PolicyName: {0}\n'
'PolicyDocument: {1}\n'.format(
params.policy_name,
params.policy_document
)
)
def _create_config(self, params):
sys.stdout.write(
'Creating the on-premises instance configuration file named {0}'
'...'.format(DEFAULT_CONFIG_FILE)
)
with open(DEFAULT_CONFIG_FILE, 'w') as f:
f.write(
'---\n'
'region: {0}\n'
'iam_user_arn: {1}\n'
'aws_access_key_id: {2}\n'
'aws_secret_access_key: {3}\n'.format(
params.region,
params.iam_user_arn,
params.access_key_id,
params.secret_access_key
)
)
sys.stdout.write('DONE\n')
def _register_instance(self, params):
sys.stdout.write('Registering the on-premises instance... ')
self.codedeploy.register_on_premises_instance(
instanceName=params.instance_name,
iamUserArn=params.iam_user_arn
)
sys.stdout.write('DONE\n')
def _add_tags(self, params):
sys.stdout.write('Adding tags to the on-premises instance... ')
self.codedeploy.add_tags_to_on_premises_instances(
tags=params.tags,
instanceNames=[params.instance_name]
)
sys.stdout.write('DONE\n')
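A standalone sanity check that the inline policy document assembled in `_create_user_policy` above is valid JSON granting read-only S3 access. The string below is copied from the method; parsing it with the standard library confirms the hand-built concatenation is well-formed.

```python
import json

# The policy document exactly as _create_user_policy concatenates it.
policy_document = (
    '{\n'
    '    "Version": "2012-10-17",\n'
    '    "Statement": [ {\n'
    '        "Action": [ "s3:Get*", "s3:List*" ],\n'
    '        "Effect": "Allow",\n'
    '        "Resource": "*"\n'
    '    } ]\n'
    '}'
)
policy = json.loads(policy_document)
print(policy['Statement'][0]['Action'])  # ['s3:Get*', 's3:List*']
```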
awscli-1.10.1/awscli/customizations/codedeploy/utils.py 0000666 4542626 0000144 00000010763 12652514124 024355 0 ustar pysdk-ci amazon 0000000 0000000
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import platform
import re
from awscli.compat import urlopen, URLError
from awscli.customizations.codedeploy.systems import System, Ubuntu, Windows, RHEL
from socket import timeout
MAX_INSTANCE_NAME_LENGTH = 100
MAX_TAGS_PER_INSTANCE = 10
MAX_TAG_KEY_LENGTH = 128
MAX_TAG_VALUE_LENGTH = 256
INSTANCE_NAME_PATTERN = r'^[A-Za-z0-9+=,.@_-]+$'
IAM_USER_ARN_PATTERN = r'^arn:aws:iam::[0-9]{12}:user/[A-Za-z0-9/+=,.@_-]+$'
INSTANCE_NAME_ARG = {
'name': 'instance-name',
    'synopsis': '--instance-name <instance-name>',
'required': True,
'help_text': (
'Required. The name of the on-premises instance.'
)
}
IAM_USER_ARN_ARG = {
'name': 'iam-user-arn',
    'synopsis': '--iam-user-arn <iam-user-arn>',
'required': False,
'help_text': (
'Optional. The IAM user associated with the on-premises instance.'
)
}
def validate_region(params, parsed_globals):
if parsed_globals.region:
params.region = parsed_globals.region
else:
params.region = params.session.get_config_variable('region')
if not params.region:
raise RuntimeError('Region not specified.')
def validate_instance_name(params):
if params.instance_name:
if not re.match(INSTANCE_NAME_PATTERN, params.instance_name):
raise ValueError('Instance name contains invalid characters.')
if params.instance_name.startswith('i-'):
raise ValueError('Instance name cannot start with \'i-\'.')
if len(params.instance_name) > MAX_INSTANCE_NAME_LENGTH:
raise ValueError(
'Instance name cannot be longer than {0} characters.'.format(
MAX_INSTANCE_NAME_LENGTH
)
)
def validate_tags(params):
if params.tags:
if len(params.tags) > MAX_TAGS_PER_INSTANCE:
raise ValueError(
'Instances can only have a maximum of {0} tags.'.format(
MAX_TAGS_PER_INSTANCE
)
)
for tag in params.tags:
if len(tag['Key']) > MAX_TAG_KEY_LENGTH:
raise ValueError(
'Tag Key cannot be longer than {0} characters.'.format(
MAX_TAG_KEY_LENGTH
)
)
            if len(tag['Value']) > MAX_TAG_VALUE_LENGTH:
raise ValueError(
'Tag Value cannot be longer than {0} characters.'.format(
MAX_TAG_VALUE_LENGTH
)
)
def validate_iam_user_arn(params):
if params.iam_user_arn and \
not re.match(IAM_USER_ARN_PATTERN, params.iam_user_arn):
raise ValueError('Invalid IAM user ARN.')
def validate_instance(params):
if platform.system() == 'Linux':
if 'Ubuntu' in platform.linux_distribution()[0]:
params.system = Ubuntu(params)
        elif 'Red Hat Enterprise Linux Server' in \
                platform.linux_distribution()[0]:
params.system = RHEL(params)
elif platform.system() == 'Windows':
params.system = Windows(params)
if 'system' not in params:
raise RuntimeError(
System.UNSUPPORTED_SYSTEM_MSG
)
try:
urlopen('http://169.254.169.254/latest/meta-data/', timeout=1)
raise RuntimeError('Amazon EC2 instances are not supported.')
except (URLError, timeout):
pass
def validate_s3_location(params, arg_name):
arg_name = arg_name.replace('-', '_')
if arg_name in params:
s3_location = getattr(params, arg_name)
if s3_location:
matcher = re.match('s3://(.+?)/(.+)', str(s3_location))
if matcher:
params.bucket = matcher.group(1)
params.key = matcher.group(2)
else:
raise ValueError(
                    '--{0} must specify the Amazon S3 URL format as '
                    's3://<bucket>/<key>.'.format(
arg_name.replace('_', '-')
)
)
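The `validate_s3_location` helper above splits an `s3://` URL into a bucket and a key with a single non-greedy regex. A standalone sketch of that parsing, outside the CLI plumbing (the function name is illustrative):

```python
import re

def parse_s3_url(s3_location):
    # Same non-greedy pattern as validate_s3_location: the first '/'
    # after the scheme separates the bucket from the rest of the key.
    matcher = re.match(r's3://(.+?)/(.+)', str(s3_location))
    if not matcher:
        raise ValueError('must use the format s3://<bucket>/<key>')
    return matcher.group(1), matcher.group(2)

print(parse_s3_url('s3://my-bucket/releases/bundle.zip'))
# ('my-bucket', 'releases/bundle.zip')
```

Because `(.+?)` is non-greedy, keys containing slashes stay intact: everything after the first `/` belongs to the key.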
# ===== awscli-1.10.1/awscli/customizations/codedeploy/install.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import errno
import os
import shutil
import sys
from awscli.customizations.commands import BasicCommand
from awscli.customizations.codedeploy.utils import \
validate_region, validate_s3_location, validate_instance
class Install(BasicCommand):
NAME = 'install'
DESCRIPTION = (
'Configures and installs the AWS CodeDeploy Agent on the on-premises '
'instance.'
)
ARG_TABLE = [
{
'name': 'config-file',
        'synopsis': '--config-file <path>',
'required': True,
'help_text': (
'Required. The path to the on-premises instance configuration '
'file.'
)
},
{
'name': 'override-config',
'action': 'store_true',
'default': False,
'help_text': (
'Optional. Overrides the on-premises instance configuration '
'file.'
)
},
{
'name': 'agent-installer',
        'synopsis': '--agent-installer <s3-location>',
'required': False,
'help_text': (
'Optional. The AWS CodeDeploy Agent installer file.'
)
}
]
def _run_main(self, parsed_args, parsed_globals):
params = parsed_args
params.session = self._session
validate_region(params, parsed_globals)
validate_instance(params)
params.system.validate_administrator()
self._validate_override_config(params)
self._validate_agent_installer(params)
try:
self._create_config(params)
self._install_agent(params)
except Exception as e:
sys.stdout.flush()
sys.stderr.write(
'ERROR\n'
'{0}\n'
'Install the AWS CodeDeploy Agent on the on-premises instance '
'by following the instructions in "Configure Existing '
'On-Premises Instances by Using AWS CodeDeploy" in the AWS '
'CodeDeploy User Guide.\n'.format(e)
)
def _validate_override_config(self, params):
if os.path.isfile(params.system.CONFIG_PATH) and \
not params.override_config:
raise RuntimeError(
'The on-premises instance configuration file already exists. '
'Specify --override-config to update the existing on-premises '
'instance configuration file.'
)
def _validate_agent_installer(self, params):
validate_s3_location(params, 'agent_installer')
if 'bucket' not in params:
params.bucket = 'aws-codedeploy-{0}'.format(params.region)
if 'key' not in params:
params.key = 'latest/{0}'.format(params.system.INSTALLER)
params.installer = params.system.INSTALLER
else:
start = params.key.rfind('/') + 1
params.installer = params.key[start:]
def _create_config(self, params):
sys.stdout.write(
'Creating the on-premises instance configuration file... '
)
try:
os.makedirs(params.system.CONFIG_DIR)
except OSError as e:
if e.errno != errno.EEXIST:
raise e
if params.config_file != params.system.CONFIG_PATH:
shutil.copyfile(params.config_file, params.system.CONFIG_PATH)
sys.stdout.write('DONE\n')
def _install_agent(self, params):
sys.stdout.write('Installing the AWS CodeDeploy Agent... ')
params.system.install(params)
sys.stdout.write('DONE\n')
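The `_validate_agent_installer` logic above reduces to: use the caller's S3 location if one was given, otherwise fall back to the regional public CodeDeploy bucket and the latest installer key for the platform. A minimal sketch of that decision (names are illustrative, not part of the CLI):

```python
def resolve_installer(region, installer_name, bucket=None, key=None):
    # No --agent-installer given: default to the regional CodeDeploy
    # bucket and the latest installer for this platform.
    if bucket is None:
        bucket = 'aws-codedeploy-{0}'.format(region)
        key = 'latest/{0}'.format(installer_name)
        return bucket, key, installer_name
    # Explicit s3://bucket/key: the installer file name is the last
    # path component of the key (rfind, so nested keys work too).
    return bucket, key, key[key.rfind('/') + 1:]

print(resolve_installer('us-west-2', 'install'))
# ('aws-codedeploy-us-west-2', 'latest/install', 'install')
```

Note that `key.rfind('/') + 1` is `0` when the key has no slash, so a bare key is used as the installer name unchanged, matching the slicing in `_validate_agent_installer`.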
# ===== awscli-1.10.1/awscli/customizations/codedeploy/uninstall.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import sys
import errno
from awscli.customizations.codedeploy.utils import validate_instance, \
validate_region
from awscli.customizations.commands import BasicCommand
class Uninstall(BasicCommand):
NAME = 'uninstall'
DESCRIPTION = (
'Uninstalls the AWS CodeDeploy Agent from the on-premises instance.'
)
def _run_main(self, parsed_args, parsed_globals):
params = parsed_args
params.session = self._session
validate_region(params, parsed_globals)
validate_instance(params)
params.system.validate_administrator()
try:
self._uninstall_agent(params)
self._delete_config_file(params)
except Exception as e:
sys.stdout.flush()
sys.stderr.write(
'ERROR\n'
'{0}\n'
'Uninstall the AWS CodeDeploy Agent on the on-premises '
'instance by following the instructions in "Configure '
'Existing On-Premises Instances by Using AWS CodeDeploy" in '
'the AWS CodeDeploy User Guide.\n'.format(e)
)
def _uninstall_agent(self, params):
sys.stdout.write('Uninstalling the AWS CodeDeploy Agent... ')
params.system.uninstall(params)
sys.stdout.write('DONE\n')
def _delete_config_file(self, params):
sys.stdout.write('Deleting the on-premises instance configuration... ')
try:
os.remove(params.system.CONFIG_PATH)
except OSError as e:
if e.errno != errno.ENOENT:
raise e
sys.stdout.write('DONE\n')
# ===== awscli-1.10.1/awscli/customizations/codedeploy/deregister.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
from awscli.customizations.commands import BasicCommand
from awscli.customizations.codedeploy.utils import \
validate_region, validate_instance_name, INSTANCE_NAME_ARG
from awscli.errorhandler import ClientError, ServerError
class Deregister(BasicCommand):
NAME = 'deregister'
DESCRIPTION = (
'Removes any tags from the on-premises instance; deregisters the '
'on-premises instance from AWS CodeDeploy; and, unless requested '
'otherwise, deletes the IAM user for the on-premises instance.'
)
ARG_TABLE = [
INSTANCE_NAME_ARG,
{
'name': 'no-delete-iam-user',
'action': 'store_true',
'default': False,
'help_text': (
'Optional. Do not delete the IAM user for the registered '
'on-premises instance.'
)
}
]
def _run_main(self, parsed_args, parsed_globals):
params = parsed_args
params.session = self._session
validate_region(params, parsed_globals)
validate_instance_name(params)
self.codedeploy = self._session.create_client(
'codedeploy',
region_name=params.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl
)
self.iam = self._session.create_client(
'iam',
region_name=params.region
)
try:
self._get_instance_info(params)
if params.tags:
self._remove_tags(params)
self._deregister_instance(params)
if not params.no_delete_iam_user:
self._delete_user_policy(params)
self._delete_access_key(params)
self._delete_iam_user(params)
sys.stdout.write(
'Run the following command on the on-premises instance to '
'uninstall the codedeploy-agent:\n'
'aws deploy uninstall\n'
)
except Exception as e:
sys.stdout.flush()
sys.stderr.write(
'ERROR\n'
'{0}\n'
'Deregister the on-premises instance by following the '
'instructions in "Configure Existing On-Premises Instances by '
'Using AWS CodeDeploy" in the AWS CodeDeploy User '
'Guide.\n'.format(e)
)
def _get_instance_info(self, params):
sys.stdout.write('Retrieving on-premises instance information... ')
response = self.codedeploy.get_on_premises_instance(
instanceName=params.instance_name
)
params.iam_user_arn = response['instanceInfo']['iamUserArn']
start = params.iam_user_arn.rfind('/') + 1
params.user_name = params.iam_user_arn[start:]
params.tags = response['instanceInfo']['tags']
sys.stdout.write(
'DONE\n'
'IamUserArn: {0}\n'.format(
params.iam_user_arn
)
)
if params.tags:
sys.stdout.write('Tags:')
for tag in params.tags:
sys.stdout.write(
' Key={0},Value={1}'.format(tag['Key'], tag['Value'])
)
sys.stdout.write('\n')
def _remove_tags(self, params):
sys.stdout.write('Removing tags from the on-premises instance... ')
self.codedeploy.remove_tags_from_on_premises_instances(
tags=params.tags,
instanceNames=[params.instance_name]
)
sys.stdout.write('DONE\n')
def _deregister_instance(self, params):
sys.stdout.write('Deregistering the on-premises instance... ')
self.codedeploy.deregister_on_premises_instance(
instanceName=params.instance_name
)
sys.stdout.write('DONE\n')
def _delete_user_policy(self, params):
sys.stdout.write('Deleting the IAM user policies... ')
list_user_policies = self.iam.get_paginator('list_user_policies')
try:
for response in list_user_policies.paginate(
UserName=params.user_name):
for policy_name in response['PolicyNames']:
self.iam.delete_user_policy(
UserName=params.user_name,
PolicyName=policy_name
)
except (ServerError, ClientError) as e:
if e.error_code != 'NoSuchEntity':
raise e
sys.stdout.write('DONE\n')
def _delete_access_key(self, params):
sys.stdout.write('Deleting the IAM user access keys... ')
list_access_keys = self.iam.get_paginator('list_access_keys')
try:
for response in list_access_keys.paginate(
UserName=params.user_name):
for access_key in response['AccessKeyMetadata']:
self.iam.delete_access_key(
UserName=params.user_name,
AccessKeyId=access_key['AccessKeyId']
)
except (ServerError, ClientError) as e:
if e.error_code != 'NoSuchEntity':
raise e
sys.stdout.write('DONE\n')
def _delete_iam_user(self, params):
sys.stdout.write('Deleting the IAM user ({0})... '.format(
params.user_name
))
try:
self.iam.delete_user(UserName=params.user_name)
except (ServerError, ClientError) as e:
if e.error_code != 'NoSuchEntity':
raise e
sys.stdout.write('DONE\n')
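`_get_instance_info` derives the IAM user name from the ARN by taking everything after the last `/`. In isolation (the ARN below is a made-up example with a path-style user name, which is why `rfind` rather than `find` is used):

```python
iam_user_arn = 'arn:aws:iam::123456789012:user/AWS/CodeDeploy/bob'
start = iam_user_arn.rfind('/') + 1
user_name = iam_user_arn[start:]
print(user_name)  # bob
```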
# ===== awscli-1.10.1/awscli/customizations/codedeploy/push.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import sys
import zipfile
import tempfile
import contextlib
from datetime import datetime
from awscli.compat import six
from awscli.customizations.codedeploy.utils import validate_s3_location
from awscli.customizations.commands import BasicCommand
from awscli.errorhandler import ServerError, ClientError
from awscli.compat import ZIP_COMPRESSION_MODE
ONE_MB = 1 << 20
MULTIPART_LIMIT = 6 * ONE_MB
class Push(BasicCommand):
NAME = 'push'
DESCRIPTION = (
'Bundles and uploads to Amazon Simple Storage Service (Amazon S3) an '
'application revision, which is an archive file that contains '
'deployable content and an accompanying Application Specification '
'file (AppSpec file). If the upload is successful, a message is '
'returned that describes how to call the create-deployment command to '
'deploy the application revision from Amazon S3 to target Amazon '
'Elastic Compute Cloud (Amazon EC2) instances.'
)
ARG_TABLE = [
{
'name': 'application-name',
        'synopsis': '--application-name <app-name>',
'required': True,
'help_text': (
'Required. The name of the AWS CodeDeploy application to be '
'associated with the application revision.'
)
},
{
'name': 's3-location',
        'synopsis': '--s3-location s3://<bucket>/<key>',
'required': True,
'help_text': (
'Required. Information about the location of the application '
'revision to be uploaded to Amazon S3. You must specify both '
'a bucket and a key that represent the Amazon S3 bucket name '
'and the object key name. Use the format '
            's3://\<bucket\>/\<key\>'
)
},
{
'name': 'ignore-hidden-files',
'action': 'store_true',
'default': False,
'group_name': 'ignore-hidden-files',
'help_text': (
'Optional. Set the --ignore-hidden-files flag to not bundle '
'and upload hidden files to Amazon S3; otherwise, set the '
'--no-ignore-hidden-files flag (the default) to bundle and '
'upload hidden files to Amazon S3.'
)
},
{
'name': 'no-ignore-hidden-files',
'action': 'store_true',
'default': False,
'group_name': 'ignore-hidden-files'
},
{
'name': 'source',
        'synopsis': '--source <path>',
'default': '.',
'help_text': (
'Optional. The location of the deployable content and the '
'accompanying AppSpec file on the development machine to be '
'bundled and uploaded to Amazon S3. If not specified, the '
'current directory is used.'
)
},
{
'name': 'description',
        'synopsis': '--description <description>',
'help_text': (
'Optional. A comment that summarizes the application '
'revision. If not specified, the default string "Uploaded by '
'AWS CLI \'time\' UTC" is used, where \'time\' is the current '
'system time in Coordinated Universal Time (UTC).'
)
}
]
def _run_main(self, parsed_args, parsed_globals):
self._validate_args(parsed_args)
self.codedeploy = self._session.create_client(
'codedeploy',
region_name=parsed_globals.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl
)
self.s3 = self._session.create_client(
's3',
region_name=parsed_globals.region
)
self._push(parsed_args)
def _validate_args(self, parsed_args):
validate_s3_location(parsed_args, 's3_location')
if parsed_args.ignore_hidden_files \
and parsed_args.no_ignore_hidden_files:
raise RuntimeError(
'You cannot specify both --ignore-hidden-files and '
'--no-ignore-hidden-files.'
)
if not parsed_args.description:
parsed_args.description = (
'Uploaded by AWS CLI {0} UTC'.format(
datetime.utcnow().isoformat()
)
)
def _push(self, params):
with self._compress(
params.source,
params.ignore_hidden_files
) as bundle:
try:
upload_response = self._upload_to_s3(params, bundle)
params.eTag = upload_response['ETag']
if 'VersionId' in upload_response:
params.version = upload_response['VersionId']
except Exception as e:
raise RuntimeError(
'Failed to upload \'%s\' to \'%s\': %s' %
(params.source,
params.s3_location,
str(e))
)
self._register_revision(params)
if 'version' in params:
version_string = ',version={0}'.format(params.version)
else:
version_string = ''
s3location_string = (
'--s3-location bucket={0},key={1},'
'bundleType=zip,eTag={2}{3}'.format(
params.bucket,
params.key,
params.eTag,
version_string
)
)
sys.stdout.write(
'To deploy with this revision, run:\n'
'aws deploy create-deployment '
'--application-name {0} {1} '
            '--deployment-group-name <deployment-group-name> '
            '--deployment-config-name <deployment-config-name> '
            '--description <description>\n'.format(
params.application_name,
s3location_string
)
)
@contextlib.contextmanager
def _compress(self, source, ignore_hidden_files=False):
source_path = os.path.abspath(source)
appspec_path = os.path.sep.join([source_path, 'appspec.yml'])
with tempfile.TemporaryFile('w+b') as tf:
zf = zipfile.ZipFile(tf, 'w', allowZip64=True)
            # Using 'try'/'finally' instead of a 'with' statement since
            # ZipFile does not support the context manager protocol in
            # Python 2.6.
try:
contains_appspec = False
for root, dirs, files in os.walk(source, topdown=True):
if ignore_hidden_files:
files = [fn for fn in files if not fn.startswith('.')]
dirs[:] = [dn for dn in dirs if not dn.startswith('.')]
for fn in files:
filename = os.path.join(root, fn)
filename = os.path.abspath(filename)
arcname = filename[len(source_path) + 1:]
if filename == appspec_path:
contains_appspec = True
zf.write(filename, arcname, ZIP_COMPRESSION_MODE)
if not contains_appspec:
raise RuntimeError(
'{0} was not found'.format(appspec_path)
)
finally:
zf.close()
yield tf
def _upload_to_s3(self, params, bundle):
size_remaining = self._bundle_size(bundle)
if size_remaining < MULTIPART_LIMIT:
return self.s3.put_object(
Bucket=params.bucket,
Key=params.key,
Body=bundle
)
else:
return self._multipart_upload_to_s3(
params,
bundle,
size_remaining
)
def _bundle_size(self, bundle):
bundle.seek(0, 2)
size = bundle.tell()
bundle.seek(0)
return size
def _multipart_upload_to_s3(self, params, bundle, size_remaining):
create_response = self.s3.create_multipart_upload(
Bucket=params.bucket,
Key=params.key
)
upload_id = create_response['UploadId']
try:
part_num = 1
multipart_list = []
bundle.seek(0)
while size_remaining > 0:
data = bundle.read(MULTIPART_LIMIT)
upload_response = self.s3.upload_part(
Bucket=params.bucket,
Key=params.key,
UploadId=upload_id,
PartNumber=part_num,
Body=six.BytesIO(data)
)
multipart_list.append({
'PartNumber': part_num,
'ETag': upload_response['ETag']
})
part_num += 1
size_remaining -= len(data)
return self.s3.complete_multipart_upload(
Bucket=params.bucket,
Key=params.key,
UploadId=upload_id,
MultipartUpload={'Parts': multipart_list}
)
except (ServerError, ClientError) as e:
self.s3.abort_multipart_upload(
Bucket=params.bucket,
Key=params.key,
UploadId=upload_id
)
raise e
def _register_revision(self, params):
revision = {
'revisionType': 'S3',
's3Location': {
'bucket': params.bucket,
'key': params.key,
'bundleType': 'zip',
'eTag': params.eTag
}
}
if 'version' in params:
revision['s3Location']['version'] = params.version
self.codedeploy.register_application_revision(
applicationName=params.application_name,
revision=revision,
description=params.description
)
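`_upload_to_s3` switches to a multipart upload once the bundle reaches `MULTIPART_LIMIT` (6 MiB), then reads fixed-size chunks until `size_remaining` hits zero. The part accounting can be sketched without touching S3 (the helper name is illustrative):

```python
ONE_MB = 1 << 20
MULTIPART_LIMIT = 6 * ONE_MB

def part_sizes(total_size, chunk=MULTIPART_LIMIT):
    # Mirrors the read loop in _multipart_upload_to_s3: every part is
    # chunk-sized except possibly the last.
    sizes = []
    remaining = total_size
    while remaining > 0:
        sizes.append(min(chunk, remaining))
        remaining -= sizes[-1]
    return sizes

print(part_sizes(13 * ONE_MB))  # three parts: 6 MiB, 6 MiB, 1 MiB
```

Each part number/ETag pair collected in the real loop corresponds to one entry of this list, which is why `multipart_list` and `part_num` advance in lockstep with the reads.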
# ===== awscli-1.10.1/awscli/customizations/codedeploy/codedeploy.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations import utils
from awscli.customizations.codedeploy.locationargs import \
modify_revision_arguments
from awscli.customizations.codedeploy.push import Push
from awscli.customizations.codedeploy.register import Register
from awscli.customizations.codedeploy.deregister import Deregister
from awscli.customizations.codedeploy.install import Install
from awscli.customizations.codedeploy.uninstall import Uninstall
def initialize(cli):
"""
The entry point for CodeDeploy high level commands.
"""
cli.register(
'building-command-table.main',
change_name
)
cli.register(
'building-command-table.deploy',
inject_commands
)
cli.register(
'building-argument-table.deploy.get-application-revision',
modify_revision_arguments
)
cli.register(
'building-argument-table.deploy.register-application-revision',
modify_revision_arguments
)
cli.register(
'building-argument-table.deploy.create-deployment',
modify_revision_arguments
)
def change_name(command_table, session, **kwargs):
"""
Change all existing 'aws codedeploy' commands to 'aws deploy' commands.
"""
utils.rename_command(command_table, 'codedeploy', 'deploy')
def inject_commands(command_table, session, **kwargs):
"""
Inject custom 'aws deploy' commands.
"""
command_table['push'] = Push(session)
command_table['register'] = Register(session)
command_table['deregister'] = Deregister(session)
command_table['install'] = Install(session)
command_table['uninstall'] = Uninstall(session)
# ===== awscli-1.10.1/awscli/customizations/codedeploy/__init__.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# ===== awscli-1.10.1/awscli/customizations/codedeploy/systems.py =====
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import ctypes
import os
import subprocess
DEFAULT_CONFIG_FILE = 'codedeploy.onpremises.yml'
class System:
UNSUPPORTED_SYSTEM_MSG = (
'Only Ubuntu Server, Red Hat Enterprise Linux Server and '
'Windows Server operating systems are supported.'
)
def __init__(self, params):
self.session = params.session
self.s3 = self.session.create_client(
's3',
region_name=params.region
)
def validate_administrator(self):
raise NotImplementedError('validate_administrator')
def install(self, params):
raise NotImplementedError('install')
def uninstall(self, params):
raise NotImplementedError('uninstall')
class Windows(System):
CONFIG_DIR = r'C:\ProgramData\Amazon\CodeDeploy'
CONFIG_FILE = 'conf.onpremises.yml'
CONFIG_PATH = r'{0}\{1}'.format(CONFIG_DIR, CONFIG_FILE)
INSTALLER = 'codedeploy-agent.msi'
def validate_administrator(self):
if not ctypes.windll.shell32.IsUserAnAdmin():
raise RuntimeError(
'You must run this command as an Administrator.'
)
def install(self, params):
if 'installer' in params:
self.INSTALLER = params.installer
process = subprocess.Popen(
[
'powershell.exe',
'-Command', 'Stop-Service',
'-Name', 'codedeployagent'
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
(output, error) = process.communicate()
not_found = (
"Cannot find any service with service name 'codedeployagent'"
)
if process.returncode != 0 and not_found not in error:
raise RuntimeError(
'Failed to stop the AWS CodeDeploy Agent:\n{0}'.format(error)
)
response = self.s3.get_object(Bucket=params.bucket, Key=params.key)
with open(self.INSTALLER, 'wb') as f:
f.write(response['Body'].read())
subprocess.check_call(
[
r'.\{0}'.format(self.INSTALLER),
'/quiet',
'/l', r'.\codedeploy-agent-install-log.txt'
],
shell=True
)
subprocess.check_call([
'powershell.exe',
'-Command', 'Restart-Service',
'-Name', 'codedeployagent'
])
process = subprocess.Popen(
[
'powershell.exe',
'-Command', 'Get-Service',
'-Name', 'codedeployagent'
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
(output, error) = process.communicate()
if "Running" not in output:
raise RuntimeError(
'The AWS CodeDeploy Agent did not start after installation.'
)
def uninstall(self, params):
process = subprocess.Popen(
[
'powershell.exe',
'-Command', 'Stop-Service',
'-Name', 'codedeployagent'
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
(output, error) = process.communicate()
not_found = (
"Cannot find any service with service name 'codedeployagent'"
)
if process.returncode == 0:
self._remove_agent()
elif not_found not in error:
raise RuntimeError(
'Failed to stop the AWS CodeDeploy Agent:\n{0}'.format(error)
)
def _remove_agent(self):
process = subprocess.Popen(
[
'wmic',
'product', 'where', 'name="CodeDeploy Host Agent"',
'call', 'uninstall', '/nointeractive'
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
(output, error) = process.communicate()
if process.returncode != 0:
raise RuntimeError(
'Failed to uninstall the AWS CodeDeploy Agent:\n{0}'.format(
error
)
)
class Linux(System):
CONFIG_DIR = '/etc/codedeploy-agent/conf'
CONFIG_FILE = DEFAULT_CONFIG_FILE
CONFIG_PATH = '{0}/{1}'.format(CONFIG_DIR, CONFIG_FILE)
INSTALLER = 'install'
def validate_administrator(self):
if os.geteuid() != 0:
raise RuntimeError('You must run this command as sudo.')
def install(self, params):
if 'installer' in params:
self.INSTALLER = params.installer
self._update_system(params)
self._stop_agent(params)
response = self.s3.get_object(Bucket=params.bucket, Key=params.key)
with open(self.INSTALLER, 'wb') as f:
f.write(response['Body'].read())
subprocess.check_call(
['chmod', '+x', './{0}'.format(self.INSTALLER)]
)
credentials = self.session.get_credentials()
environment = os.environ.copy()
environment['AWS_REGION'] = params.region
environment['AWS_ACCESS_KEY_ID'] = credentials.access_key
environment['AWS_SECRET_ACCESS_KEY'] = credentials.secret_key
if credentials.token is not None:
environment['AWS_SESSION_TOKEN'] = credentials.token
subprocess.check_call(
['./{0}'.format(self.INSTALLER), 'auto'],
env=environment
)
def uninstall(self, params):
process = self._stop_agent(params)
if process.returncode == 0:
self._remove_agent(params)
def _update_system(self, params):
        raise NotImplementedError('update_system')
def _remove_agent(self, params):
raise NotImplementedError('remove_agent')
def _stop_agent(self, params):
process = subprocess.Popen(
['service', 'codedeploy-agent', 'stop'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
(output, error) = process.communicate()
if process.returncode != 0 and params.not_found_msg not in error:
raise RuntimeError(
'Failed to stop the AWS CodeDeploy Agent:\n{0}'.format(error)
)
return process
class Ubuntu(Linux):
def _update_system(self, params):
subprocess.check_call(['apt-get', '-y', 'update'])
subprocess.check_call(['apt-get', '-y', 'install', 'ruby2.0'])
def _remove_agent(self, params):
subprocess.check_call(['dpkg', '-r', 'codedeploy-agent'])
def _stop_agent(self, params):
params.not_found_msg = 'codedeploy-agent: unrecognized service'
return Linux._stop_agent(self, params)
class RHEL(Linux):
def _update_system(self, params):
subprocess.check_call(['yum', '-y', 'install', 'ruby'])
def _remove_agent(self, params):
subprocess.check_call(['yum', '-y', 'erase', 'codedeploy-agent'])
def _stop_agent(self, params):
params.not_found_msg = 'Redirecting to /bin/systemctl stop codedeploy-agent.service'
return Linux._stop_agent(self, params)
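`_stop_agent` treats a non-zero exit from `service codedeploy-agent stop` as fatal only when stderr does not contain the distribution-specific "service not found" message, so uninstalling on a box without the agent is not an error. That decision, factored out as a pure function for clarity (the name is illustrative):

```python
def stop_failed(returncode, stderr, not_found_msg):
    # A non-zero exit is fine if the agent simply isn't installed;
    # anything else is a real failure.
    return returncode != 0 and not_found_msg not in stderr

# Service was running and stopped cleanly: not a failure.
print(stop_failed(0, '', 'unrecognized service'))  # False
# Service not installed: tolerated.
print(stop_failed(1, 'codedeploy-agent: unrecognized service',
                  'unrecognized service'))  # False
# Any other non-zero exit: failure.
print(stop_failed(1, 'permission denied', 'unrecognized service'))  # True
```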
# ===== awscli-1.10.1/awscli/customizations/scalarparse.py =====
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Change the scalar response parsing behavior for the AWS CLI.
The underlying library used by botocore has some response parsing
behavior that we'd like to modify in the AWS CLI. There are two such behaviors:
* Parsing binary content.
* Parsing timestamps (dates)
For the first option we can't print binary content to the terminal,
so this customization leaves the binary content base64 encoded. If the
user wants the binary content, they can then base64 decode the appropriate
fields as needed.
There's nothing currently done for timestamps, but this will change
in the future.
"""
def register_scalar_parser(event_handlers):
event_handlers.register_first(
'session-initialized', add_scalar_parsers)
def identity(x):
return x
def add_scalar_parsers(session, **kwargs):
factory = session.get_component('response_parser_factory')
# For backwards compatibility reasons, we replace botocore's timestamp
    # parser (which parses to a datetime.datetime object) with the identity
# function which prints the date exactly the same as it comes across the
# wire. We will eventually add a config option that allows for a user to
# have normalized datetime representation, but we can't change the default.
factory.set_parser_defaults(
blob_parser=identity,
timestamp_parser=identity)
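Because the blob parser is replaced with `identity`, binary fields reach the user still base64-encoded exactly as they came over the wire, and decoding is left to the caller, as the docstring describes. A small sketch of what that looks like from the user's side (the payload is a made-up example):

```python
import base64

# What botocore would normally have decoded for us; with the identity
# blob parser, the CLI leaves the field in this base64-encoded form.
blob_field = base64.b64encode(b'\x00\x01binary payload').decode('ascii')
print(blob_field)                    # base64 text, safe to print
print(base64.b64decode(blob_field))  # b'\x00\x01binary payload'
```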
# ===== awscli-1.10.1/awscli/customizations/ec2protocolarg.py =====
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization allows the user to specify the values "tcp", "udp",
or "icmp" as values for the --protocol parameter. The actual Protocol
parameter of the operation accepts only integer protocol numbers.
"""
def _fix_args(params, **kwargs):
key_name = 'Protocol'
if key_name in params:
if params[key_name] == 'tcp':
params[key_name] = '6'
elif params[key_name] == 'udp':
params[key_name] = '17'
elif params[key_name] == 'icmp':
params[key_name] = '1'
elif params[key_name] == 'all':
params[key_name] = '-1'
def register_protocol_args(cli):
cli.register('before-parameter-build.ec2.CreateNetworkAclEntry',
_fix_args)
cli.register('before-parameter-build.ec2.ReplaceNetworkAclEntry',
_fix_args)
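# A standalone sketch of the name-to-number substitution performed by
# _fix_args above (the helper and dict names here are illustrative, not
# part of the CLI; the numbers are the standard IANA protocol numbers,
# with -1 meaning "all protocols"):

```python
# Map protocol names to the integer strings the EC2 API expects.
PROTOCOL_NUMBERS = {'tcp': '6', 'udp': '17', 'icmp': '1', 'all': '-1'}

def fix_protocol(params, key_name='Protocol'):
    value = params.get(key_name)
    if value in PROTOCOL_NUMBERS:
        params[key_name] = PROTOCOL_NUMBERS[value]
    return params

params = fix_protocol({'Protocol': 'udp'})
```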
awscli-1.10.1/awscli/customizations/ec2secgroupsimplify.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization adds the following scalar parameters to the
authorize operations:
* --protocol: tcp | udp | icmp or any protocol number
* --port: A single integer or a range (min-max). You can specify ``all``
to mean all ports (for example, port range 0-65535)
* --source-group: Either the source security group ID or name.
* --cidr: The CIDR range. Cannot be used when specifying a source or
destination security group.
"""
from awscli.arguments import CustomArgument
def _add_params(argument_table, **kwargs):
arg = ProtocolArgument('protocol',
help_text=PROTOCOL_DOCS)
argument_table['protocol'] = arg
argument_table['ip-protocol']._UNDOCUMENTED = True
arg = PortArgument('port', help_text=PORT_DOCS)
argument_table['port'] = arg
# Port handles both the from-port and to-port,
# so we hide both underlying args from the docs.
argument_table['from-port']._UNDOCUMENTED = True
argument_table['to-port']._UNDOCUMENTED = True
arg = CidrArgument('cidr', help_text=CIDR_DOCS)
argument_table['cidr'] = arg
argument_table['cidr-ip']._UNDOCUMENTED = True
arg = SourceGroupArgument('source-group',
help_text=SOURCEGROUP_DOCS)
argument_table['source-group'] = arg
argument_table['source-security-group-name']._UNDOCUMENTED = True
arg = GroupOwnerArgument('group-owner',
help_text=GROUPOWNER_DOCS)
argument_table['group-owner'] = arg
argument_table['source-security-group-owner-id']._UNDOCUMENTED = True
def _check_args(parsed_args, **kwargs):
# This function checks the parsed args. If the user specified
# the --ip-permissions option with any of the scalar options we
# raise an error.
arg_dict = vars(parsed_args)
if arg_dict['ip_permissions']:
for key in ('protocol', 'port', 'cidr',
'source_group', 'group_owner'):
if arg_dict[key]:
msg = ('The --%s option is not compatible '
'with the --ip-permissions option ') % key
raise ValueError(msg)
def _add_docs(help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
doc.style.start_note()
msg = ('To specify multiple rules in a single command '
'use the --ip-permissions option')
doc.include_doc_string(msg)
doc.style.end_note()
EVENTS = [
('building-argument-table.ec2.authorize-security-group-ingress',
_add_params),
('building-argument-table.ec2.authorize-security-group-egress',
_add_params),
('building-argument-table.ec2.revoke-security-group-ingress', _add_params),
('building-argument-table.ec2.revoke-security-group-egress', _add_params),
('operation-args-parsed.ec2.authorize-security-group-ingress',
_check_args),
('operation-args-parsed.ec2.authorize-security-group-egress', _check_args),
('operation-args-parsed.ec2.revoke-security-group-ingress', _check_args),
('operation-args-parsed.ec2.revoke-security-group-egress', _check_args),
('doc-description.ec2.authorize-security-group-ingress', _add_docs),
('doc-description.ec2.authorize-security-group-egress', _add_docs),
('doc-description.ec2.revoke-security-group-ingress', _add_docs),
('doc-description.ec2.revoke-security-group-egress', _add_docs),
]
PROTOCOL_DOCS = ('<p>The IP protocol of this permission.</p>'
'<p>Valid protocol values: <code>tcp</code>, '
'<code>udp</code>, <code>icmp</code></p>')
PORT_DOCS = ('<p>For TCP or UDP: The range of ports to allow.'
' A single integer or a range (min-max).</p>'
'<p>For ICMP: A single integer or a range (type-code)'
' representing the ICMP type'
' number and the ICMP code number respectively.'
' A value of -1 indicates all ICMP codes for'
' all ICMP types. A value of -1 just for <code>type</code>'
' indicates all ICMP codes for the specified ICMP type.</p>')
CIDR_DOCS = '<p>The CIDR IP range.</p>'
SOURCEGROUP_DOCS = ('<p>The name or ID of the source security group. '
'Cannot be used when specifying a CIDR IP address.</p>')
GROUPOWNER_DOCS = ('<p>The AWS account ID that owns the source security '
'group. Cannot be used when specifying a CIDR IP '
'address.</p>')
def register_secgroup(event_handler):
for event, handler in EVENTS:
event_handler.register(event, handler)
def _build_ip_permissions(params, key, value):
if 'IpPermissions' not in params:
params['IpPermissions'] = [{}]
if key == 'CidrIp':
if 'IpRanges' not in params['IpPermissions'][0]:
params['IpPermissions'][0]['IpRanges'] = []
params['IpPermissions'][0]['IpRanges'].append(value)
elif key in ('GroupId', 'GroupName', 'UserId'):
if 'UserIdGroupPairs' not in params['IpPermissions'][0]:
params['IpPermissions'][0]['UserIdGroupPairs'] = [{}]
params['IpPermissions'][0]['UserIdGroupPairs'][0][key] = value
else:
params['IpPermissions'][0][key] = value
class ProtocolArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value:
try:
int_value = int(value)
if (int_value < 0 or int_value > 255) and int_value != -1:
msg = ('protocol numbers must be in the range 0-255 '
'or -1 to specify all protocols')
raise ValueError(msg)
except ValueError:
if value not in ('tcp', 'udp', 'icmp', 'all'):
msg = ('protocol parameter should be one of: '
'tcp|udp|icmp|all or any valid protocol number.')
raise ValueError(msg)
if value == 'all':
value = '-1'
_build_ip_permissions(parameters, 'IpProtocol', value)
class PortArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value:
try:
if value == '-1' or value == 'all':
fromstr = '-1'
tostr = '-1'
elif '-' in value:
# We can get away with simple logic here because
# argparse will not allow values such as
# "-1-8", and those aren't valid
# values for from/to ports anyway.
fromstr, tostr = value.split('-', 1)
else:
fromstr, tostr = (value, value)
_build_ip_permissions(parameters, 'FromPort', int(fromstr))
_build_ip_permissions(parameters, 'ToPort', int(tostr))
except ValueError:
msg = ('port parameter should be of the '
'form <from[-to]> (e.g. 22 or 22-25)')
raise ValueError(msg)
class CidrArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value:
value = [{'CidrIp': value}]
_build_ip_permissions(parameters, 'IpRanges', value)
class SourceGroupArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value:
if value.startswith('sg-'):
_build_ip_permissions(parameters, 'GroupId', value)
else:
_build_ip_permissions(parameters, 'GroupName', value)
class GroupOwnerArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value:
_build_ip_permissions(parameters, 'UserId', value)
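# An illustrative expansion (variable names are not from the CLI) of
# what the argument classes above build: the scalar options
# --protocol tcp --port 22-25 --cidr 10.0.0.0/8 are folded into a
# single entry of the operation's IpPermissions list.

```python
params = {}
# _build_ip_permissions always works on the first (only) permission dict.
params.setdefault('IpPermissions', [{}])
perm = params['IpPermissions'][0]
perm['IpProtocol'] = 'tcp'          # from --protocol
perm['FromPort'], perm['ToPort'] = 22, 25   # from --port 22-25
perm['IpRanges'] = [{'CidrIp': '10.0.0.0/8'}]  # from --cidr
```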
awscli-1.10.1/awscli/customizations/ecr.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.commands import BasicCommand
from awscli.customizations.utils import create_client_from_parsed_globals
from base64 import b64decode
import sys
def register_ecr_commands(cli):
cli.register('building-command-table.ecr', _inject_get_login)
def _inject_get_login(command_table, session, **kwargs):
command_table['get-login'] = ECRLogin(session)
class ECRLogin(BasicCommand):
"""Log in with docker login"""
NAME = 'get-login'
DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst')
ARG_TABLE = [
{
'name': 'registry-ids',
'help_text': 'A list of AWS account IDs that correspond to the '
'Amazon ECR registries that you want to log in to.',
'required': False,
'nargs': '+'
}
]
def _run_main(self, parsed_args, parsed_globals):
ecr_client = create_client_from_parsed_globals(
self._session, 'ecr', parsed_globals)
if not parsed_args.registry_ids:
result = ecr_client.get_authorization_token()
else:
result = ecr_client.get_authorization_token(
registryIds=parsed_args.registry_ids)
for auth in result['authorizationData']:
auth_token = b64decode(auth['authorizationToken']).decode()
username, password = auth_token.split(':')
sys.stdout.write('docker login -u %s -p %s -e none %s\n'
% (username, password, auth['proxyEndpoint']))
return 0
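# A sketch of the token handling in _run_main above: an ECR
# authorization token is a base64-encoded "user:password" string
# (the sample token below is fabricated for illustration).

```python
import base64

# Fabricated stand-in for result['authorizationData'][0]['authorizationToken'].
sample_token = base64.b64encode(b'AWS:example-password').decode()
# Decode and split once on the first colon, as get-login does.
username, password = base64.b64decode(sample_token).decode().split(':', 1)
```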
awscli-1.10.1/awscli/customizations/cloudtrail/utils.py
# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
def get_account_id_from_arn(trail_arn):
"""Gets the account ID portion of an ARN"""
return trail_arn.split(':')[4]
def get_account_id(iam_client):
"""Retrieve the AWS account ID for the authenticated user"""
response = iam_client.get_user()
return get_account_id_from_arn(response['User']['Arn'])
def get_trail_by_arn(cloudtrail_client, trail_arn):
"""Gets trail information based on the trail's ARN"""
trails = cloudtrail_client.describe_trails()['trailList']
for trail in trails:
if trail.get('TrailARN', None) == trail_arn:
return trail
raise ValueError('A trail could not be found for %s' % trail_arn)
def remove_cli_error_event(client):
"""This unregister call will go away once the client switchover
is done, but for now we're relying on S3 catching a ClientError
when we check if a bucket exists, so we need to ensure the
botocore ClientError is raised instead of the CLI's error handler.
"""
client.meta.events.unregister(
'after-call', unique_id='awscli-error-handler')
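# The account ID is simply the fifth colon-delimited field of an ARN,
# which is what get_account_id_from_arn above relies on (the sample ARN
# is illustrative):

```python
# ARN layout: arn:partition:service:region:account-id:resource
trail_arn = 'arn:aws:cloudtrail:us-east-1:123456789012:trail/example'
account_id = trail_arn.split(':')[4]
```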
awscli-1.10.1/awscli/customizations/cloudtrail/validation.py
# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import base64
import binascii
import json
import hashlib
import logging
import re
import sys
import zlib
from zlib import error as ZLibError
from datetime import datetime, timedelta
from dateutil import tz, parser
from pyasn1.error import PyAsn1Error
import rsa
from awscli.customizations.cloudtrail.utils import get_trail_by_arn, \
get_account_id_from_arn, remove_cli_error_event
from awscli.customizations.commands import BasicCommand
from botocore.exceptions import ClientError
LOG = logging.getLogger(__name__)
DATE_FORMAT = '%Y%m%dT%H%M%SZ'
DISPLAY_DATE_FORMAT = '%Y-%m-%dT%H:%M:%SZ'
def format_date(date):
"""Returns a formatted date string in a CloudTrail date format"""
return date.strftime(DATE_FORMAT)
def format_display_date(date):
"""Returns a formatted date string meant for CLI output"""
return date.strftime(DISPLAY_DATE_FORMAT)
def normalize_date(date):
"""Returns a normalized date using a UTC timezone"""
return date.replace(tzinfo=tz.tzutc())
def extract_digest_key_date(digest_s3_key):
"""Extract the timestamp portion of a manifest file.
Manifest file names take the following form:
AWSLogs/{account}/CloudTrail-Digest/{region}/{ymd}/{account}_CloudTrail \
-Digest_{region}_{name}_{region}_{date}.json.gz
"""
return digest_s3_key[-24:-8]
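# The [-24:-8] slice works because every digest key ends with a
# 16-character timestamp (%Y%m%dT%H%M%SZ) followed by the 8-character
# ".json.gz" suffix. A sketch with a fabricated key:

```python
key = ('AWSLogs/123456789012/CloudTrail-Digest/us-east-1/2015/08/01/'
       '123456789012_CloudTrail-Digest_us-east-1_example_us-east-1_'
       '20150801T235959Z.json.gz')
# Last 24 characters are timestamp (16) + ".json.gz" (8).
timestamp = key[-24:-8]
```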
def parse_date(date_string):
try:
return parser.parse(date_string)
except ValueError:
raise ValueError('Unable to parse date value: %s' % date_string)
def assert_cloudtrail_arn_is_valid(trail_arn):
"""Ensures that the arn looks correct.
ARNs look like: arn:aws:cloudtrail:us-east-1:123456789012:trail/foo"""
pattern = re.compile(r'arn:.+:cloudtrail:.+:\d{12}:trail/.+')
if not pattern.match(trail_arn):
raise ValueError('Invalid trail ARN provided: %s' % trail_arn)
def create_digest_traverser(cloudtrail_client, s3_client_provider, trail_arn,
trail_source_region=None, on_invalid=None,
on_gap=None, on_missing=None, bucket=None,
prefix=None):
"""Creates a CloudTrail DigestTraverser and its object graph.
:type cloudtrail_client: botocore.client.CloudTrail
:param cloudtrail_client: Client used to connect to CloudTrail
:type s3_client_provider: S3ClientProvider
:param s3_client_provider: Used to create Amazon S3 client per/region.
:param trail_arn: CloudTrail trail ARN
:param trail_source_region: The scanned region of a trail.
:param on_invalid: Callback that is invoked when validating a digest fails.
:param on_gap: Callback that is invoked when a digest has no link to the
previous digest, but there are more digests to validate. This can
happen when a trail is disabled for a period of time.
:param on_missing: Callback that is invoked when a digest file has been
deleted from Amazon S3 but is supposed to be present.
:param bucket: Amazon S3 bucket of the trail if it is different than the
bucket that is currently associated with the trail.
:param prefix: Key prefix prepended to each digest and log placed
in the Amazon S3 bucket if it is different than the prefix that is
currently associated with the trail.
``on_gap``, ``on_invalid``, and ``on_missing`` callbacks are invoked with
the following named arguments:
- ``bucket``: The next S3 bucket.
- ``next_key``: (optional) Next digest key that was found in the bucket.
- ``next_end_date``: (optional) End date of the next found digest.
- ``last_key``: The last digest key that was found.
- ``last_start_date``: (optional) Start date of last found digest.
- ``message``: (optional) Message string about the notification.
"""
assert_cloudtrail_arn_is_valid(trail_arn)
account_id = get_account_id_from_arn(trail_arn)
if bucket is None:
# Determine the bucket and prefix based on the trail arn.
trail_info = get_trail_by_arn(cloudtrail_client, trail_arn)
LOG.debug('Loaded trail info: %s', trail_info)
bucket = trail_info['S3BucketName']
prefix = trail_info.get('S3KeyPrefix', None)
# Determine the region from the ARN (e.g., arn:aws:cloudtrail:REGION:...)
trail_region = trail_arn.split(':')[3]
# Determine the name from the ARN (the last part after "/")
trail_name = trail_arn.split('/')[-1]
digest_provider = DigestProvider(
account_id=account_id, trail_name=trail_name,
s3_client_provider=s3_client_provider,
trail_source_region=trail_source_region,
trail_home_region=trail_region)
return DigestTraverser(
digest_provider=digest_provider, starting_bucket=bucket,
starting_prefix=prefix, on_invalid=on_invalid, on_gap=on_gap,
on_missing=on_missing,
public_key_provider=PublicKeyProvider(cloudtrail_client))
class S3ClientProvider(object):
"""Creates Amazon S3 clients and determines the region name of a client.
This class will cache the location constraints of previously requested
buckets and cache previously created clients for the same region.
"""
def __init__(self, session, get_bucket_location_region='us-east-1'):
self._session = session
self._get_bucket_location_region = get_bucket_location_region
self._client_cache = {}
self._region_cache = {}
def get_client(self, bucket_name):
"""Creates an S3 client that can work with the given bucket name"""
region_name = self._get_bucket_region(bucket_name)
return self._create_client(region_name)
def _get_bucket_region(self, bucket_name):
"""Returns the region of a bucket"""
if bucket_name not in self._region_cache:
client = self._create_client(self._get_bucket_location_region)
result = client.get_bucket_location(Bucket=bucket_name)
region = result['LocationConstraint'] or 'us-east-1'
self._region_cache[bucket_name] = region
return self._region_cache[bucket_name]
def _create_client(self, region_name):
"""Creates an Amazon S3 client for the given region name"""
if region_name not in self._client_cache:
client = self._session.create_client('s3', region_name)
# Remove the CLI error event that prevents exceptions.
remove_cli_error_event(client)
self._client_cache[region_name] = client
return self._client_cache[region_name]
class DigestError(ValueError):
"""Exception raised when a digest fails to validate"""
pass
class DigestSignatureError(DigestError):
"""Exception raised when a digest signature is invalid"""
def __init__(self, bucket, key):
message = ('Digest file\ts3://%s/%s\tINVALID: signature verification '
'failed') % (bucket, key)
super(DigestSignatureError, self).__init__(message)
class InvalidDigestFormat(DigestError):
"""Exception raised when a digest has an invalid format"""
def __init__(self, bucket, key):
message = 'Digest file\ts3://%s/%s\tINVALID: invalid format' % (bucket,
key)
super(InvalidDigestFormat, self).__init__(message)
class PublicKeyProvider(object):
"""Retrieves public keys from CloudTrail within a date range."""
def __init__(self, cloudtrail_client):
self._cloudtrail_client = cloudtrail_client
def get_public_keys(self, start_date, end_date):
"""Loads public keys in a date range into a returned dict.
:type start_date: datetime
:param start_date: Start date of a date range.
:type end_date: datetime
:param end_date: End date of a date range.
:rtype: dict
:return: Returns a dict where each key is the fingerprint of the
public key, and each value is a dict of public key data.
"""
public_keys = self._cloudtrail_client.list_public_keys(
StartTime=start_date, EndTime=end_date)
public_keys_in_range = public_keys['PublicKeyList']
LOG.debug('Loaded public keys in range: %s', public_keys_in_range)
return dict((key['Fingerprint'], key) for key in public_keys_in_range)
class DigestProvider(object):
"""
Retrieves digest keys and digests from Amazon S3.
This class is responsible for determining the full list of digest files
in a bucket and loading digests from the bucket into a JSON decoded
dict. This class is not responsible for validation or iterating from
one digest to the next.
"""
def __init__(self, s3_client_provider, account_id, trail_name,
trail_home_region, trail_source_region=None):
self._client_provider = s3_client_provider
self.trail_name = trail_name
self.account_id = account_id
self.trail_home_region = trail_home_region
self.trail_source_region = trail_source_region or trail_home_region
def load_digest_keys_in_range(self, bucket, prefix, start_date, end_date):
"""Returns a list of digest keys in the date range.
This method uses a list_objects API call and provides a Marker
parameter that is calculated based on the start_date provided.
Amazon S3 then returns all keys in the bucket that start after
the given key (non-inclusive). We then iterate over the keys
until the date extracted from the yielded keys is greater than
the given end_date.
"""
digests = []
marker = self._create_digest_key(start_date, prefix)
client = self._client_provider.get_client(bucket)
paginator = client.get_paginator('list_objects')
page_iterator = paginator.paginate(Bucket=bucket, Marker=marker)
key_filter = page_iterator.search('Contents[*].Key')
# Create a target start and end date
target_start_date = format_date(normalize_date(start_date))
# Add one hour to the end_date to get logs that spilled over to next.
target_end_date = format_date(
normalize_date(end_date + timedelta(hours=1)))
# Ensure digests are from the same trail.
digest_key_regex = re.compile(self._create_digest_key_regex(prefix))
for key in key_filter:
if digest_key_regex.match(key):
# Use a lexicographic comparison to know when to stop.
extracted_date = extract_digest_key_date(key)
if extracted_date > target_end_date:
break
# Only append digests after the start date.
if extracted_date >= target_start_date:
digests.append(key)
return digests
def fetch_digest(self, bucket, key):
"""Loads a digest by key from S3.
Returns the JSON decode data and GZIP inflated raw content.
"""
client = self._client_provider.get_client(bucket)
result = client.get_object(Bucket=bucket, Key=key)
try:
digest = zlib.decompress(result['Body'].read(),
zlib.MAX_WBITS | 16)
digest_data = json.loads(digest.decode())
except (ValueError, ZLibError):
# Cannot gzip decode or JSON parse.
raise InvalidDigestFormat(bucket, key)
# Add the expected digest signature and algorithm to the dict.
if 'signature' not in result['Metadata'] \
or 'signature-algorithm' not in result['Metadata']:
raise DigestSignatureError(bucket, key)
digest_data['_signature'] = result['Metadata']['signature']
digest_data['_signature_algorithm'] = \
result['Metadata']['signature-algorithm']
return digest_data, digest
def _create_digest_key(self, start_date, key_prefix):
"""Computes an Amazon S3 key based on the provided data.
The computed key is what would have been placed in the S3 bucket if
a log digest were created at a specific time. This computed key
does not have to actually exist, as it is only used as the
Marker parameter in a list_objects call.
:return: Returns a computed key as a string.
"""
# Subtract one minute to ensure the dates are inclusive.
date = start_date - timedelta(minutes=1)
template = ('AWSLogs/{account}/CloudTrail-Digest/{source_region}/'
'{ymd}/{account}_CloudTrail-Digest_{source_region}_{name}_'
'{home_region}_{date}.json.gz')
key = template.format(account=self.account_id, date=format_date(date),
ymd=date.strftime('%Y/%m/%d'),
source_region=self.trail_source_region,
home_region=self.trail_home_region,
name=self.trail_name)
if key_prefix:
key = key_prefix + '/' + key
return key
def _create_digest_key_regex(self, key_prefix):
"""Creates a regular expression used to match against S3 keys"""
template = ('AWSLogs/{account}/CloudTrail\\-Digest/{source_region}/'
'\\d+/\\d+/\\d+/{account}_CloudTrail\\-Digest_'
'{source_region}_{name}_{home_region}_.+\\.json\\.gz')
key = template.format(
account=re.escape(self.account_id),
source_region=re.escape(self.trail_source_region),
home_region=re.escape(self.trail_home_region),
name=re.escape(self.trail_name))
if key_prefix:
key = re.escape(key_prefix) + '/' + key
return '^' + key + '$'
class DigestTraverser(object):
"""Retrieves and validates digests within a date range."""
# These keys are required to be present before validating the contents
# of a digest.
required_digest_keys = ['digestPublicKeyFingerprint', 'digestS3Bucket',
'digestS3Object', 'previousDigestSignature',
'digestEndTime', 'digestStartTime']
def __init__(self, digest_provider, starting_bucket, starting_prefix,
public_key_provider, digest_validator=None,
on_invalid=None, on_gap=None, on_missing=None):
"""
:type digest_provider: DigestProvider
:param digest_provider: DigestProvider object
:param starting_bucket: S3 bucket where the digests are stored.
:param starting_prefix: An optional prefix applied to each S3 key.
:param public_key_provider: Provides public keys for a range.
:param digest_validator: Validates digest using a validate method.
:param on_invalid: Callback invoked when a digest is invalid.
:param on_gap: Callback invoked when a digest has no parent, but
there are still more digests to validate.
:param on_missing: Callback invoked when a digest file is missing.
"""
self.starting_bucket = starting_bucket
self.starting_prefix = starting_prefix
self.digest_provider = digest_provider
self._public_key_provider = public_key_provider
self._on_gap = on_gap
self._on_invalid = on_invalid
self._on_missing = on_missing
if digest_validator is None:
digest_validator = Sha256RSADigestValidator()
self._digest_validator = digest_validator
def traverse(self, start_date, end_date=None):
"""Creates and returns a generator that yields validated digest data.
Each yielded digest dictionary contains information about the digest
and the log file associated with the digest. Digest files are validated
before they are yielded. Whether or not the digest is successfully
validated is stated in the "isValid" key value pair of the yielded
dictionary.
:type start_date: datetime
:param start_date: Date to start validating from (inclusive).
:type end_date: datetime
:param end_date: Date to stop validating at (inclusive).
"""
if end_date is None:
end_date = datetime.utcnow()
end_date = normalize_date(end_date)
start_date = normalize_date(start_date)
bucket = self.starting_bucket
prefix = self.starting_prefix
digests = self._load_digests(bucket, prefix, start_date, end_date)
public_keys = self._load_public_keys(start_date, end_date)
key, end_date = self._get_last_digest(digests)
last_start_date = end_date
while key and start_date <= last_start_date:
try:
digest, end_date = self._load_and_validate_digest(
public_keys, bucket, key)
last_start_date = normalize_date(
parse_date(digest['digestStartTime']))
previous_bucket = digest.get('previousDigestS3Bucket', None)
yield digest
if previous_bucket is None:
# The chain is broken, so find next in digest store.
key, end_date = self._find_next_digest(
digests=digests, bucket=bucket, last_key=key,
last_start_date=last_start_date, cb=self._on_gap,
is_cb_conditional=True)
else:
key = digest['previousDigestS3Object']
if previous_bucket != bucket:
bucket = previous_bucket
# The bucket changed so reload the digest list.
digests = self._load_digests(
bucket, prefix, start_date, end_date)
except ClientError as e:
if e.response['Error']['Code'] != 'NoSuchKey':
raise e
key, end_date = self._find_next_digest(
digests=digests, bucket=bucket, last_key=key,
last_start_date=last_start_date, cb=self._on_missing,
message=str(e))
except DigestError as e:
key, end_date = self._find_next_digest(
digests=digests, bucket=bucket, last_key=key,
last_start_date=last_start_date, cb=self._on_invalid,
message=str(e))
except Exception as e:
# Any other unexpected errors.
key, end_date = self._find_next_digest(
digests=digests, bucket=bucket, last_key=key,
last_start_date=last_start_date, cb=self._on_invalid,
message='Digest file\ts3://%s/%s\tINVALID: %s'
% (bucket, key, str(e)))
def _load_digests(self, bucket, prefix, start_date, end_date):
return self.digest_provider.load_digest_keys_in_range(
bucket=bucket, prefix=prefix,
start_date=start_date, end_date=end_date)
def _find_next_digest(self, digests, bucket, last_key, last_start_date,
cb=None, is_cb_conditional=False, message=None):
"""Finds the next digest in the bucket and invokes any callback."""
next_key, next_end_date = self._get_last_digest(digests, last_key)
if cb and (not is_cb_conditional or next_key):
cb(bucket=bucket, next_key=next_key, last_key=last_key,
next_end_date=next_end_date, last_start_date=last_start_date,
message=message)
return next_key, next_end_date
def _get_last_digest(self, digests, before_key=None):
"""Finds the previous digest key (either the last or before before_key)
If no key is provided, the last digest is used. If a digest is found,
the end date of the provider is adjusted to match the found key's end
date.
"""
if not digests:
return None, None
elif before_key is None:
next_key = digests.pop()
next_key_date = normalize_date(
parse_date(extract_digest_key_date(next_key)))
return next_key, next_key_date
# find a key before the given key.
before_key_date = parse_date(extract_digest_key_date(before_key))
while digests:
next_key = digests.pop()
next_key_date = normalize_date(
parse_date(extract_digest_key_date(next_key)))
if next_key_date < before_key_date:
LOG.debug("Next found key: %s", next_key)
return next_key, next_key_date
return None, None
def _load_and_validate_digest(self, public_keys, bucket, key):
"""Loads and validates a digest from S3.
:param public_keys: Public key dictionary of fingerprint to dict.
:return: Returns a tuple of the digest data as a dict and end_date
:rtype: tuple
"""
digest_data, digest = self.digest_provider.fetch_digest(bucket, key)
for required_key in self.required_digest_keys:
if required_key not in digest_data:
raise InvalidDigestFormat(bucket, key)
# Ensure the bucket and key are the same as what's expected.
if digest_data['digestS3Bucket'] != bucket \
or digest_data['digestS3Object'] != key:
raise DigestError(
('Digest file\ts3://%s/%s\tINVALID: has been moved from its '
'original location') % (bucket, key))
# Get the public keys in the given time range.
fingerprint = digest_data['digestPublicKeyFingerprint']
if fingerprint not in public_keys:
raise DigestError(
('Digest file\ts3://%s/%s\tINVALID: public key not found for '
'fingerprint %s') % (bucket, key, fingerprint))
public_key_hex = public_keys[fingerprint]['Value']
self._digest_validator.validate(
bucket, key, public_key_hex, digest_data, digest)
end_date = normalize_date(parse_date(digest_data['digestEndTime']))
return digest_data, end_date
def _load_public_keys(self, start_date, end_date):
public_keys = self._public_key_provider.get_public_keys(
start_date, end_date)
if not public_keys:
raise RuntimeError(
'No public keys found between %s and %s' %
(format_display_date(start_date),
format_display_date(end_date)))
return public_keys
class Sha256RSADigestValidator(object):
"""
Validates SHA256withRSA signed digests.
The result of validating the digest is inserted into the digest_data
dictionary using the isValid key value pair.
"""
def validate(self, bucket, key, public_key, digest_data, inflated_digest):
"""Validates a digest file.
Throws a DigestError when the digest is invalid.
:param bucket: Bucket of the digest file
:param key: Key of the digest file
:param public_key: Public key bytes.
:param digest_data: Dict of digest data returned when JSON
decoding a manifest.
:param inflated_digest: Inflated digest file contents as bytes.
"""
try:
decoded_key = base64.b64decode(public_key)
public_key = rsa.PublicKey.load_pkcs1(decoded_key, format='DER')
to_sign = self._create_string_to_sign(digest_data, inflated_digest)
signature_bytes = binascii.unhexlify(digest_data['_signature'])
rsa.verify(to_sign, signature_bytes, public_key)
except PyAsn1Error:
raise DigestError(
('Digest file\ts3://%s/%s\tINVALID: Unable to load PKCS #1 key'
' with fingerprint %s')
% (bucket, key, digest_data['digestPublicKeyFingerprint']))
except rsa.pkcs1.VerificationError:
# Note from the Python-RSA docs: Never display the stack trace of
# a rsa.pkcs1.VerificationError exception. It shows where in the
# code the exception occurred, and thus leaks information about
# the key.
raise DigestSignatureError(bucket, key)
def _create_string_to_sign(self, digest_data, inflated_digest):
previous_signature = digest_data['previousDigestSignature']
if previous_signature is None:
# The value must be 'null' to match the Java implementation.
previous_signature = 'null'
string_to_sign = "%s\n%s/%s\n%s\n%s" % (
digest_data['digestEndTime'],
digest_data['digestS3Bucket'],
digest_data['digestS3Object'],
hashlib.sha256(inflated_digest).hexdigest(),
previous_signature)
LOG.debug('Digest string to sign: %s', string_to_sign)
return string_to_sign.encode()
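As an illustration of the string-to-sign layout produced above, the four-line format can be reproduced with only the standard library. All field values below are hypothetical, not taken from a real digest:

```python
import hashlib

# Hypothetical digest metadata mirroring the fields read by
# _create_string_to_sign; a real digest file supplies these values.
digest_data = {
    'digestEndTime': '2015-08-17T01:02:03Z',
    'digestS3Bucket': 'example-bucket',
    'digestS3Object': 'example-digest.json.gz',
    'previousDigestSignature': None,
}
inflated_digest = b'{"logFiles": []}'

previous_signature = digest_data['previousDigestSignature']
if previous_signature is None:
    # 'null' matches the Java implementation, as noted above.
    previous_signature = 'null'

# Four lines: end time, bucket/object, SHA256 of the inflated digest,
# and the previous digest's signature (or 'null').
string_to_sign = "%s\n%s/%s\n%s\n%s" % (
    digest_data['digestEndTime'],
    digest_data['digestS3Bucket'],
    digest_data['digestS3Object'],
    hashlib.sha256(inflated_digest).hexdigest(),
    previous_signature)
```

The resulting bytes are what the digest's RSA signature is verified against.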
class CloudTrailValidateLogs(BasicCommand):
"""
Validates log digests and log files, optionally saving them to disk.
"""
NAME = 'validate-logs'
DESCRIPTION = """
Validates CloudTrail logs for a given period of time.
This command uses the digest files delivered to your S3 bucket to perform
the validation.
The AWS CLI allows you to detect the following types of changes:
- Modification or deletion of CloudTrail log files.
- Modification or deletion of CloudTrail digest files.
To validate log files with the AWS CLI, the following preconditions must
be met:
- You must have online connectivity to AWS.
- You must have read access to the S3 bucket that contains the digest and
log files.
- The digest and log files must not have been moved from the original S3
location where CloudTrail delivered them.
When you disable Log File Validation, the chain of digest files is broken
after one hour. CloudTrail will not digest log files that were delivered
during a period in which the Log File Validation feature was disabled.
For example, if you enable Log File Validation on January 1, disable it
on January 2, and re-enable it on January 10, digest files will not be
created for the log files delivered from January 3 to January 9. The same
applies whenever you stop CloudTrail logging or delete a trail.
.. note::
Log files that have been downloaded to local disk cannot be validated
with the AWS CLI. The CLI will download all log files each time this
command is executed.
.. note::
This command requires that the role executing the command has
permission to call ListObjects, GetObject, and GetBucketLocation for
each bucket referenced by the trail.
"""
ARG_TABLE = [
{'name': 'trail-arn', 'required': True, 'cli_type_name': 'string',
'help_text': 'Specifies the ARN of the trail to be validated'},
{'name': 'start-time', 'required': True, 'cli_type_name': 'string',
'help_text': ('Specifies that log files delivered on or after the '
'specified UTC timestamp value will be validated. '
'Example: "2015-01-08T05:21:42Z".')},
{'name': 'end-time', 'cli_type_name': 'string',
'help_text': ('Optionally specifies that log files delivered on or '
'before the specified UTC timestamp value will be '
'validated. The default value is the current time. '
'Example: "2015-01-08T12:31:41Z".')},
{'name': 's3-bucket', 'cli_type_name': 'string',
'help_text': ('Optionally specifies the S3 bucket where the digest '
'files are stored. If a bucket name is not specified, '
'the CLI will retrieve it by calling describe_trails.')},
{'name': 's3-prefix', 'cli_type_name': 'string',
'help_text': ('Optionally specifies the S3 prefix where the '
'digest files are stored. If not specified, the CLI '
'will determine the prefix automatically by calling '
'describe_trails.')},
{'name': 'verbose', 'cli_type_name': 'boolean',
'action': 'store_true',
'help_text': 'Display verbose log validation information'}
]
def __init__(self, session):
super(CloudTrailValidateLogs, self).__init__(session)
self.trail_arn = None
self.is_verbose = False
self.start_time = None
self.end_time = None
self.s3_bucket = None
self.s3_prefix = None
self.s3_client_provider = None
self.cloudtrail_client = None
self._source_region = None
self._valid_digests = 0
self._invalid_digests = 0
self._valid_logs = 0
self._invalid_logs = 0
self._is_last_status_double_space = True
self._found_start_time = None
self._found_end_time = None
def _run_main(self, args, parsed_globals):
self.handle_args(args)
self.setup_services(args, parsed_globals)
self._call()
if self._invalid_digests > 0 or self._invalid_logs > 0:
return 1
return 0
def handle_args(self, args):
self.trail_arn = args.trail_arn
self.is_verbose = args.verbose
self.s3_bucket = args.s3_bucket
self.s3_prefix = args.s3_prefix
self.start_time = normalize_date(parse_date(args.start_time))
if args.end_time:
self.end_time = normalize_date(parse_date(args.end_time))
else:
self.end_time = normalize_date(datetime.utcnow())
if self.start_time > self.end_time:
raise ValueError(('Invalid time range specified: start-time must '
'occur before end-time'))
# Found start time always defaults to the given start time. This value
# may change if the earliest found digest is after the given start
# time. Note that the summary output report of what date ranges were
# actually found is only shown if a valid digest is encountered,
# thereby setting self._found_end_time to a value.
self._found_start_time = self.start_time
def setup_services(self, args, parsed_globals):
self._source_region = parsed_globals.region
# Use the same region as the CLI to get bucket locations.
self.s3_client_provider = S3ClientProvider(
self._session, self._source_region)
client_args = {'region_name': parsed_globals.region,
'verify': parsed_globals.verify_ssl}
if parsed_globals.endpoint_url is not None:
client_args['endpoint_url'] = parsed_globals.endpoint_url
self.cloudtrail_client = self._session.create_client(
'cloudtrail', **client_args)
def _call(self):
traverser = create_digest_traverser(
trail_arn=self.trail_arn, cloudtrail_client=self.cloudtrail_client,
trail_source_region=self._source_region,
s3_client_provider=self.s3_client_provider, bucket=self.s3_bucket,
prefix=self.s3_prefix, on_missing=self._on_missing_digest,
on_invalid=self._on_invalid_digest, on_gap=self._on_digest_gap)
self._write_startup_text()
digests = traverser.traverse(self.start_time, self.end_time)
for digest in digests:
# Only valid digests are yielded and only valid digests can adjust
# the found times that are reported in the CLI output summary.
self._track_found_times(digest)
self._valid_digests += 1
self._write_status(
'Digest file\ts3://%s/%s\tvalid'
% (digest['digestS3Bucket'], digest['digestS3Object']))
if not digest['logFiles']:
continue
for log in digest['logFiles']:
self._download_log(log, digest)
self._write_summary_text()
def _track_found_times(self, digest):
# Track the earliest found start time, but do not use a date before
# the user supplied start date.
digest_start_time = parse_date(digest['digestStartTime'])
if digest_start_time > self.start_time:
self._found_start_time = digest_start_time
# Only use the last found end time if it is less than the
# user supplied end time (or the current date).
if not self._found_end_time:
digest_end_time = parse_date(digest['digestEndTime'])
self._found_end_time = min(digest_end_time, self.end_time)
def _download_log(self, log, digest_data):
""" Download a log, decompress, and compare SHA256 checksums"""
try:
# Create a client that can work with this bucket.
client = self.s3_client_provider.get_client(log['s3Bucket'])
response = client.get_object(
Bucket=log['s3Bucket'], Key=log['s3Object'])
gzip_inflater = zlib.decompressobj(zlib.MAX_WBITS | 16)
rolling_hash = hashlib.sha256()
for chunk in iter(lambda: response['Body'].read(2048), b""):
data = gzip_inflater.decompress(chunk)
rolling_hash.update(data)
remaining_data = gzip_inflater.flush()
if remaining_data:
rolling_hash.update(remaining_data)
computed_hash = rolling_hash.hexdigest()
if computed_hash != log['hashValue']:
self._on_log_invalid(log)
else:
self._valid_logs += 1
self._write_status(('Log file\ts3://%s/%s\tvalid'
% (log['s3Bucket'], log['s3Object'])))
except ClientError as e:
if e.response['Error']['Code'] != 'NoSuchKey':
raise
self._on_missing_log(log)
except Exception:
self._on_invalid_log_format(log)
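The streaming decompress-and-hash pattern used by _download_log can be sketched in isolation with only the standard library. The payload below is hypothetical; in the command the chunks come from the S3 response body:

```python
import gzip
import hashlib
import io
import zlib

# Hypothetical gzipped log payload standing in for the S3 object body.
payload = b'{"Records": []}'
body = io.BytesIO(gzip.compress(payload))

# zlib.MAX_WBITS | 16 tells zlib to expect a gzip wrapper, which lets
# the stream be inflated chunk by chunk without buffering the whole file.
gzip_inflater = zlib.decompressobj(zlib.MAX_WBITS | 16)
rolling_hash = hashlib.sha256()
for chunk in iter(lambda: body.read(2048), b""):
    rolling_hash.update(gzip_inflater.decompress(chunk))
remaining_data = gzip_inflater.flush()
if remaining_data:
    rolling_hash.update(remaining_data)
computed_hash = rolling_hash.hexdigest()
```

The resulting hex digest is what gets compared against the digest file's recorded hashValue.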
def _write_status(self, message, is_error=False):
if is_error:
if self._is_last_status_double_space:
sys.stderr.write("%s\n\n" % message)
else:
sys.stderr.write("\n%s\n\n" % message)
self._is_last_status_double_space = True
elif self.is_verbose:
self._is_last_status_double_space = False
sys.stdout.write("%s\n" % message)
def _write_startup_text(self):
sys.stdout.write(
'Validating log files for trail %s between %s and %s\n\n'
% (self.trail_arn, format_display_date(self.start_time),
format_display_date(self.end_time)))
def _write_summary_text(self):
if not self._is_last_status_double_space:
sys.stdout.write('\n')
sys.stdout.write('Results requested for %s to %s\n'
% (format_display_date(self.start_time),
format_display_date(self.end_time)))
if not self._valid_digests and not self._invalid_digests:
sys.stdout.write('No digests found\n')
return
if not self._found_start_time or not self._found_end_time:
sys.stdout.write('No valid digests found in range\n')
else:
sys.stdout.write('Results found for %s to %s:\n'
% (format_display_date(self._found_start_time),
format_display_date(self._found_end_time)))
self._write_ratio(self._valid_digests, self._invalid_digests, 'digest')
self._write_ratio(self._valid_logs, self._invalid_logs, 'log')
sys.stdout.write('\n')
def _write_ratio(self, valid, invalid, name):
total = valid + invalid
if total > 0:
sys.stdout.write('\n%d/%d %s files valid' % (valid, total, name))
if invalid > 0:
sys.stdout.write(', %d/%d %s files INVALID' % (invalid, total,
name))
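For illustration, the ratio line built by _write_ratio renders like this (the counts below are hypothetical):

```python
# Hypothetical counts: 3 valid digest files, 1 invalid.
valid, invalid, name = 3, 1, 'digest'
total = valid + invalid
line = ''
if total > 0:
    line += '\n%d/%d %s files valid' % (valid, total, name)
    if invalid > 0:
        line += ', %d/%d %s files INVALID' % (invalid, total, name)
```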
def _on_missing_digest(self, bucket, last_key, **kwargs):
self._invalid_digests += 1
self._write_status('Digest file\ts3://%s/%s\tINVALID: not found'
% (bucket, last_key), True)
def _on_digest_gap(self, **kwargs):
self._write_status(
'No log files were delivered by CloudTrail between %s and %s'
% (format_display_date(kwargs['next_end_date']),
format_display_date(kwargs['last_start_date'])), True)
def _on_invalid_digest(self, message, **kwargs):
self._invalid_digests += 1
self._write_status(message, True)
def _on_invalid_log_format(self, log_data):
self._invalid_logs += 1
self._write_status(
('Log file\ts3://%s/%s\tINVALID: invalid format'
% (log_data['s3Bucket'], log_data['s3Object'])), True)
def _on_log_invalid(self, log_data):
self._invalid_logs += 1
self._write_status(
"Log file\ts3://%s/%s\tINVALID: hash value doesn't match"
% (log_data['s3Bucket'], log_data['s3Object']), True)
def _on_missing_log(self, log_data):
self._invalid_logs += 1
self._write_status(
'Log file\ts3://%s/%s\tINVALID: not found'
% (log_data['s3Bucket'], log_data['s3Object']), True)
awscli-1.10.1/awscli/customizations/cloudtrail/subscribe.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import json
import logging
import sys
from .utils import get_account_id, remove_cli_error_event
from awscli.customizations.commands import BasicCommand
from awscli.customizations.utils import s3_bucket_exists
from botocore.exceptions import ClientError
LOG = logging.getLogger(__name__)
S3_POLICY_TEMPLATE = 'policy/S3/AWSCloudTrail-S3BucketPolicy-2014-12-17.json'
SNS_POLICY_TEMPLATE = 'policy/SNS/AWSCloudTrail-SnsTopicPolicy-2014-12-17.json'
class CloudTrailError(Exception):
pass
class CloudTrailSubscribe(BasicCommand):
"""
Subscribe/update a user account to CloudTrail, creating the required S3 bucket,
the optional SNS topic, and starting the CloudTrail monitoring and logging.
"""
NAME = 'create-subscription'
DESCRIPTION = ('Creates and configures the AWS resources necessary to use'
' CloudTrail, creates a trail using those resources, and '
'turns on logging.')
SYNOPSIS = ('aws cloudtrail create-subscription'
' (--s3-use-bucket|--s3-new-bucket) bucket-name'
' [--sns-new-topic topic-name]\n')
ARG_TABLE = [
{'name': 'name', 'required': True, 'help_text': 'Cloudtrail name'},
{'name': 's3-new-bucket',
'help_text': 'Create a new S3 bucket with this name'},
{'name': 's3-use-bucket',
'help_text': 'Use an existing S3 bucket with this name'},
{'name': 's3-prefix', 'help_text': 'S3 object prefix'},
{'name': 'sns-new-topic',
'help_text': 'Create a new SNS topic with this name'},
{'name': 'include-global-service-events',
'help_text': 'Whether to include global service events'},
{'name': 's3-custom-policy',
'help_text': 'Custom S3 policy template or URL'},
{'name': 'sns-custom-policy',
'help_text': 'Custom SNS policy template or URL'}
]
UPDATE = False
def _run_main(self, args, parsed_globals):
self.setup_services(args, parsed_globals)
# Run the command and report success
self._call(args, parsed_globals)
return 0
def setup_services(self, args, parsed_globals):
client_args = {
'region_name': None,
'verify': None
}
if parsed_globals.region is not None:
client_args['region_name'] = parsed_globals.region
if parsed_globals.verify_ssl is not None:
client_args['verify'] = parsed_globals.verify_ssl
# Initialize services
LOG.debug('Initializing S3, SNS and CloudTrail...')
self.iam = self._session.create_client('iam', **client_args)
self.s3 = self._session.create_client('s3', **client_args)
self.sns = self._session.create_client('sns', **client_args)
remove_cli_error_event(self.s3)
self.region_name = self.s3.meta.region_name
# If an endpoint is specified, it applies only to the cloudtrail
# service; the other service clients do not use it.
if parsed_globals.endpoint_url is not None:
client_args['endpoint_url'] = parsed_globals.endpoint_url
self.cloudtrail = self._session.create_client('cloudtrail', **client_args)
def _call(self, options, parsed_globals):
"""
Run the command. Calls various services based on input options and
outputs the final CloudTrail configuration.
"""
gse = options.include_global_service_events
if gse:
if gse.lower() == 'true':
gse = True
elif gse.lower() == 'false':
gse = False
else:
raise ValueError('You must pass either true or false to'
' --include-global-service-events.')
bucket = options.s3_use_bucket
if options.s3_new_bucket:
bucket = options.s3_new_bucket
if self.UPDATE and options.s3_prefix is None:
# Prefix was not passed and this is updating the S3 bucket,
# so let's find the existing prefix and use that if possible
res = self.cloudtrail.describe_trails(
trailNameList=[options.name])
trail_info = res['trailList'][0]
if 'S3KeyPrefix' in trail_info:
LOG.debug('Setting S3 prefix to {0}'.format(
trail_info['S3KeyPrefix']))
options.s3_prefix = trail_info['S3KeyPrefix']
self.setup_new_bucket(bucket, options.s3_prefix,
options.s3_custom_policy)
elif not bucket and not self.UPDATE:
# No bucket was passed for creation.
raise ValueError('You must pass either --s3-use-bucket or'
' --s3-new-bucket to create.')
if options.sns_new_topic:
try:
topic_result = self.setup_new_topic(options.sns_new_topic,
options.sns_custom_policy)
except Exception:
# Roll back any S3 bucket creation
if options.s3_new_bucket:
self.s3.delete_bucket(Bucket=options.s3_new_bucket)
raise
try:
cloudtrail_config = self.upsert_cloudtrail_config(
options.name,
bucket,
options.s3_prefix,
options.sns_new_topic,
gse
)
except Exception:
# Roll back any S3 bucket / SNS topic creations
if options.s3_new_bucket:
self.s3.delete_bucket(Bucket=options.s3_new_bucket)
if options.sns_new_topic:
self.sns.delete_topic(TopicArn=topic_result['TopicArn'])
raise
sys.stdout.write('CloudTrail configuration:\n{config}\n'.format(
config=json.dumps(cloudtrail_config, indent=2)))
if not self.UPDATE:
# If the configuration call above completed, this call is very
# likely to complete as well.
self.start_cloudtrail(options.name)
sys.stdout.write(
'Logs will be delivered to {bucket}:{prefix}\n'.format(
bucket=bucket, prefix=options.s3_prefix or ''))
def _get_policy(self, key_name):
try:
data = self.s3.get_object(
Bucket='awscloudtrail-policy-' + self.region_name,
Key=key_name)
return data['Body'].read().decode('utf-8')
except Exception as e:
raise CloudTrailError(
'Unable to get regional policy template for'
' region %s: %s. Error: %s' % (self.region_name, key_name, e))
def setup_new_bucket(self, bucket, prefix, custom_policy=None):
"""
Creates a new S3 bucket with an appropriate policy to let CloudTrail
write to the prefix path.
"""
sys.stdout.write(
'Setting up new S3 bucket {bucket}...\n'.format(bucket=bucket))
account_id = get_account_id(self.iam)
# Clean up the prefix - it requires a trailing slash if set
if prefix and not prefix.endswith('/'):
prefix += '/'
# Fetch policy data from S3 or a custom URL
if custom_policy is not None:
policy = custom_policy
else:
policy = self._get_policy(S3_POLICY_TEMPLATE)
policy = policy.replace('<BucketName>', bucket)\
.replace('<CustomerAccountID>', account_id)
if '<Prefix>/' in policy:
policy = policy.replace('<Prefix>/', prefix or '')
else:
policy = policy.replace('<Prefix>', prefix or '')
LOG.debug('Bucket policy:\n{0}'.format(policy))
bucket_exists = s3_bucket_exists(self.s3, bucket)
if bucket_exists:
raise Exception('Bucket {bucket} already exists.'.format(
bucket=bucket))
# If we are not using the us-east-1 region, then we must set
# a location constraint on the new bucket.
params = {'Bucket': bucket}
if self.region_name != 'us-east-1':
bucket_config = {'LocationConstraint': self.region_name}
params['CreateBucketConfiguration'] = bucket_config
data = self.s3.create_bucket(**params)
try:
self.s3.put_bucket_policy(Bucket=bucket, Policy=policy)
except ClientError:
# Roll back bucket creation.
self.s3.delete_bucket(Bucket=bucket)
raise
return data
def setup_new_topic(self, topic, custom_policy=None):
"""
Creates a new SNS topic with an appropriate policy to let CloudTrail
post messages to the topic.
"""
sys.stdout.write(
'Setting up new SNS topic {topic}...\n'.format(topic=topic))
account_id = get_account_id(self.iam)
# Make sure topic doesn't already exist
# Warn but do not fail if the ListTopics permission is missing
# from the IAM role.
try:
topics = self.sns.list_topics()['Topics']
except Exception:
topics = []
LOG.warn('Unable to list topics, continuing...')
if [t for t in topics if t['TopicArn'].split(':')[-1] == topic]:
raise Exception('Topic {topic} already exists.'.format(
topic=topic))
region = self.sns.meta.region_name
# Get the SNS topic policy information to allow CloudTrail
# write-access.
if custom_policy is not None:
policy = custom_policy
else:
policy = self._get_policy(SNS_POLICY_TEMPLATE)
policy = policy.replace('<Region>', region)\
.replace('<SNSTopicOwnerAccountId>', account_id)\
.replace('<SNSTopicName>', topic)
topic_result = self.sns.create_topic(Name=topic)
try:
# Merge any existing topic policy with our new policy statements
topic_attr = self.sns.get_topic_attributes(
TopicArn=topic_result['TopicArn'])
policy = self.merge_sns_policy(topic_attr['Attributes']['Policy'],
policy)
LOG.debug('Topic policy:\n{0}'.format(policy))
# Set the topic policy
self.sns.set_topic_attributes(TopicArn=topic_result['TopicArn'],
AttributeName='Policy',
AttributeValue=policy)
except Exception:
# Roll back topic creation
self.sns.delete_topic(TopicArn=topic_result['TopicArn'])
raise
return topic_result
def merge_sns_policy(self, left, right):
"""
Merge two SNS topic policy documents. The id information from
``left`` is used in the final document, and the statements
from ``right`` are merged into ``left``.
http://docs.aws.amazon.com/sns/latest/dg/BasicStructure.html
:type left: string
:param left: First policy JSON document
:type right: string
:param right: Second policy JSON document
:rtype: string
:return: Merged policy JSON
"""
left_parsed = json.loads(left)
right_parsed = json.loads(right)
left_parsed['Statement'] += right_parsed['Statement']
return json.dumps(left_parsed)
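The merge performed by merge_sns_policy can be sketched with two minimal policy documents (the ids and statement sids below are hypothetical):

```python
import json

# Two minimal policy documents; only the fields merge_sns_policy touches.
left = json.dumps({'Id': 'existing-policy',
                   'Statement': [{'Sid': 'ExistingStatement'}]})
right = json.dumps({'Id': 'template-policy',
                    'Statement': [{'Sid': 'CloudTrailStatement'}]})

# As above: keep the id information from left, append right's statements.
left_parsed = json.loads(left)
right_parsed = json.loads(right)
left_parsed['Statement'] += right_parsed['Statement']
merged = json.dumps(left_parsed)
```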
def upsert_cloudtrail_config(self, name, bucket, prefix, topic, gse):
"""
Either create or update the CloudTrail configuration depending on
whether this command is a create or update command.
"""
sys.stdout.write('Creating/updating CloudTrail configuration...\n')
config = {
'Name': name
}
if bucket is not None:
config['S3BucketName'] = bucket
if prefix is not None:
config['S3KeyPrefix'] = prefix
if topic is not None:
config['SnsTopicName'] = topic
if gse is not None:
config['IncludeGlobalServiceEvents'] = gse
if not self.UPDATE:
self.cloudtrail.create_trail(**config)
else:
self.cloudtrail.update_trail(**config)
return self.cloudtrail.describe_trails()
def start_cloudtrail(self, name):
"""
Start the CloudTrail service, which begins logging.
"""
sys.stdout.write('Starting CloudTrail service...\n')
return self.cloudtrail.start_logging(Name=name)
class CloudTrailUpdate(CloudTrailSubscribe):
"""
Like subscribe above, but the update version of the command.
"""
NAME = 'update-subscription'
UPDATE = True
DESCRIPTION = ('Updates any of the trail configuration settings, and'
' creates and configures any new AWS resources specified.')
SYNOPSIS = ('aws cloudtrail update-subscription'
' [(--s3-use-bucket|--s3-new-bucket) bucket-name]'
' [--sns-new-topic topic-name]\n')
awscli-1.10.1/awscli/customizations/cloudtrail/__init__.py
# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from .subscribe import CloudTrailSubscribe, CloudTrailUpdate
from .validation import CloudTrailValidateLogs
def initialize(cli):
"""
The entry point for CloudTrail high level commands.
"""
cli.register('building-command-table.cloudtrail', inject_commands)
def inject_commands(command_table, session, **kwargs):
"""
Called when the CloudTrail command table is being built. Used to inject new
high level commands into the command list. These high level commands
must not collide with existing low-level API call names.
"""
command_table['create-subscription'] = CloudTrailSubscribe(session)
command_table['update-subscription'] = CloudTrailUpdate(session)
command_table['validate-logs'] = CloudTrailValidateLogs(session)
awscli-1.10.1/awscli/customizations/__init__.py
# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
Customizations
==============
As we start to accumulate more and more of these *built-in* customizations
we probably need to come up with some way to organize them and to make
it easy to add them and register them.
One idea I had was to place them all within a package like this. That
at least keeps them all in one place. Each module in this package
should contain a single customization (I think).
To take it a step further, we could have each module define a couple
of well-defined attributes:
* ``EVENT`` would be a string containing the event that this customization
needs to be registered with. Or, perhaps this should be a list of
events?
* ``handler`` is a callable that will be registered as the handler
for the event.
Using a convention like this, we could perhaps automatically discover
all customizations and register them without having to manually edit
``handlers.py`` each time.
"""
awscli-1.10.1/awscli/customizations/arguments.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
from awscli.arguments import CustomArgument
import jmespath
def resolve_given_outfile_path(path):
"""Asserts that a path is writable and returns the expanded path"""
if path is None:
return
outfile = os.path.expanduser(os.path.expandvars(path))
if not os.access(os.path.dirname(os.path.abspath(outfile)), os.W_OK):
raise ValueError('Unable to write to file: %s' % outfile)
return outfile
def is_parsed_result_successful(parsed_result):
"""Returns True if a parsed result is successful"""
return parsed_result['ResponseMetadata']['HTTPStatusCode'] < 300
class OverrideRequiredArgsArgument(CustomArgument):
"""An argument that if specified makes all other arguments not required
By not required, it refers to not having an error thrown when the
parser does not find an argument that is required on the command line.
To obtain this argument's property of ignoring required arguments,
subclass from this class and fill out the ``ARG_DATA`` parameter as
described below. Note this class is really only useful for subclassing.
"""
# ``ARG_DATA`` follows the same format as a member of ``ARG_TABLE`` in
# ``BasicCommand`` class as specified in
# ``awscli/customizations/commands.py``.
#
# For example, an ``ARG_DATA`` variable would be filled out as:
#
# ARG_DATA =
# {'name': 'my-argument',
# 'help_text': 'If this argument is specified, no other'
# ' arguments are required'}
ARG_DATA = {'name': 'no-required-args'}
def __init__(self, session):
self._session = session
self._register_argument_action()
super(OverrideRequiredArgsArgument, self).__init__(**self.ARG_DATA)
def _register_argument_action(self):
self._session.register('before-building-argument-table-parser',
self.override_required_args)
def override_required_args(self, argument_table, args, **kwargs):
name_in_cmdline = '--' + self.name
# Set all ``Argument`` objects in ``argument_table`` to not required
# if this argument's name is present in the command line.
if name_in_cmdline in args:
for arg_name in argument_table.keys():
argument_table[arg_name].required = False
class StatefulArgument(CustomArgument):
"""An argument that maintains a stateful value"""
def __init__(self, *args, **kwargs):
super(StatefulArgument, self).__init__(*args, **kwargs)
self._value = None
def add_to_params(self, parameters, value):
super(StatefulArgument, self).add_to_params(parameters, value)
self._value = value
@property
def value(self):
return self._value
class QueryOutFileArgument(StatefulArgument):
"""An argument that write a JMESPath query result to a file"""
def __init__(self, session, name, query, after_call_event, perm,
*args, **kwargs):
self._session = session
self._query = query
self._after_call_event = after_call_event
self._perm = perm
# Generate default help_text if text was not provided.
if 'help_text' not in kwargs:
kwargs['help_text'] = ('Saves the command output contents of %s '
'to the given filename' % self.query)
super(QueryOutFileArgument, self).__init__(name, *args, **kwargs)
@property
def query(self):
return self._query
@property
def perm(self):
return self._perm
def add_to_params(self, parameters, value):
value = resolve_given_outfile_path(value)
super(QueryOutFileArgument, self).add_to_params(parameters, value)
if self.value is not None:
# Only register the event to save the argument if it is set
self._session.register(self._after_call_event, self.save_query)
def save_query(self, parsed, **kwargs):
"""Saves the result of a JMESPath expression to a file.
This method only saves the query data if the response code of
the parsed result is < 300.
"""
if is_parsed_result_successful(parsed):
contents = jmespath.search(self.query, parsed)
with open(self.value, 'w') as fp:
# Don't write 'None' to a file -- write ''.
if contents is None:
fp.write('')
else:
fp.write(contents)
os.chmod(self.value, self.perm)
awscli-1.10.1/awscli/customizations/codecommit.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import re
import sys
import logging
import fileinput
import datetime
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.compat import urlsplit
from awscli.customizations.commands import BasicCommand
from awscli.compat import BinaryStdout
logger = logging.getLogger('botocore.credentials')
def initialize(cli):
"""
The entry point for the credential helper
"""
cli.register('building-command-table.codecommit', inject_commands)
def inject_commands(command_table, session, **kwargs):
"""
Injects new commands into the codecommit subcommand.
"""
command_table['credential-helper'] = CodeCommitCommand(session)
class CodeCommitNoOpStoreCommand(BasicCommand):
NAME = 'store'
DESCRIPTION = ('This operation does nothing, credentials'
' are calculated each time')
SYNOPSIS = ('aws codecommit credential-helper store')
EXAMPLES = ''
_UNDOCUMENTED = True
def _run_main(self, args, parsed_globals):
return 0
class CodeCommitNoOpEraseCommand(BasicCommand):
NAME = 'erase'
DESCRIPTION = ('This operation does nothing, no credentials'
' are ever stored')
SYNOPSIS = ('aws codecommit credential-helper erase')
EXAMPLES = ''
_UNDOCUMENTED = True
def _run_main(self, args, parsed_globals):
return 0
class CodeCommitGetCommand(BasicCommand):
NAME = 'get'
DESCRIPTION = ('get a username and SigV4 credential pair'
' based on the protocol, host, and path provided'
' on standard input. This is primarily'
' called by git to generate credentials for'
' authenticating against AWS CodeCommit')
SYNOPSIS = ('aws codecommit credential-helper get')
EXAMPLES = (r'echo -e "protocol=https\\n'
r'path=/v1/repos/myrepo\\n'
'host=git-codecommit.us-east-1.amazonaws.com"'
' | aws codecommit credential-helper get')
ARG_TABLE = [
{
'name': 'ignore-host-check',
'action': 'store_true',
'default': False,
'group_name': 'ignore-host-check',
'help_text': (
'Optional. Generate credentials regardless of whether'
' the domain is an Amazon domain.'
)
}
]
def __init__(self, session):
super(CodeCommitGetCommand, self).__init__(session)
def _run_main(self, args, parsed_globals):
git_parameters = self.read_git_parameters()
if ('amazon.com' in git_parameters['host'] or
'amazonaws.com' in git_parameters['host'] or
args.ignore_host_check):
theUrl = self.extract_url(git_parameters)
region = self.extract_region(git_parameters, parsed_globals)
signature = self.sign_request(region, theUrl)
self.write_git_parameters(signature)
return 0
def write_git_parameters(self, signature):
username = self._session.get_credentials().access_key
if self._session.get_credentials().token is not None:
username += "%" + self._session.get_credentials().token
# Python will add a \r to the line ending for a text stdout in Windows.
# Git does not like the \r, so switch to binary
with BinaryStdout() as binary_stdout:
binary_stdout.write('username={0}\n'.format(username))
logger.debug('username\n%s', username)
binary_stdout.write('password={0}\n'.format(signature))
# need to explicitly flush the buffer here,
# before we turn the stream back to text for windows
binary_stdout.flush()
logger.debug('signature\n%s', signature)
def read_git_parameters(self):
parsed = {}
for line in sys.stdin:
key, value = line.strip().split('=', 1)
parsed[key] = value
return parsed
def extract_url(self, parameters):
url = '{0}://{1}/{2}'.format(parameters['protocol'],
parameters['host'],
parameters['path'])
return url
def extract_region(self, parameters, parsed_globals):
match = re.match(r'git-codecommit\.([^.]+)\.amazonaws\.com',
parameters['host'])
if match is not None:
return match.group(1)
elif parsed_globals.region is not None:
return parsed_globals.region
else:
return self._session.get_config_variable('region')
def sign_request(self, region, url_to_sign):
credentials = self._session.get_credentials()
signer = SigV4Auth(credentials, 'codecommit', region)
request = AWSRequest()
request.url = url_to_sign
request.method = 'GIT'
now = datetime.datetime.utcnow()
request.context['timestamp'] = now.strftime('%Y%m%dT%H%M%S')
split = urlsplit(request.url)
# we don't want to include the port number in the signature
hostname = split.netloc.split(':')[0]
canonical_request = '{0}\n{1}\n\nhost:{2}\n\nhost\n'.format(
request.method,
split.path,
hostname)
logger.debug("Calculating signature using v4 auth.")
logger.debug('CanonicalRequest:\n%s', canonical_request)
string_to_sign = signer.string_to_sign(request, canonical_request)
logger.debug('StringToSign:\n%s', string_to_sign)
signature = signer.signature(string_to_sign, request)
logger.debug('Signature:\n%s', signature)
return '{0}Z{1}'.format(request.context['timestamp'], signature)
class CodeCommitCommand(BasicCommand):
NAME = 'credential-helper'
SYNOPSIS = ('aws codecommit credential-helper')
EXAMPLES = ''
SUBCOMMANDS = [
{'name': 'get', 'command_class': CodeCommitGetCommand},
{'name': 'store', 'command_class': CodeCommitNoOpStoreCommand},
{'name': 'erase', 'command_class': CodeCommitNoOpEraseCommand},
]
DESCRIPTION = ('Provide a SigV4-compatible user name and'
' password for git smart HTTP.'
' These commands are consumed by git and'
' should not be used directly. Erase and Store'
' are no-ops. Get generates'
' credentials to authenticate against AWS CodeCommit.'
' Run \"aws codecommit credential-helper help\"'
' for details.')
def _run_main(self, args, parsed_globals):
raise ValueError('usage: aws [options] codecommit'
' credential-helper '
'[parameters]\naws: error: too few arguments')
awscli-1.10.1/awscli/customizations/ec2runinstances.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization adds two new parameters to the ``ec2 run-instances``
command. The first, ``--secondary-private-ip-addresses``, allows a list
of IP addresses within the specified subnet to be associated with the
new instance. The second, ``--secondary-private-ip-address-count``, allows you
to specify how many additional IP addresses you want; the actual
addresses will be assigned for you.
This functionality (and much more) is also available using the
``--network-interfaces`` complex argument. This just makes two of
the most commonly used features available more easily.
"""
from awscli.arguments import CustomArgument
# --secondary-private-ip-address
SECONDARY_PRIVATE_IP_ADDRESSES_DOCS = (
'[EC2-VPC] A secondary private IP address for the network interface '
'or instance. You can specify this multiple times to assign multiple '
'secondary IP addresses. If you want additional private IP addresses '
'but do not need a specific address, use the '
'--secondary-private-ip-address-count option.')
# --secondary-private-ip-address-count
SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS = (
'[EC2-VPC] The number of secondary IP addresses to assign to '
'the network interface or instance.')
# --associate-public-ip-address
ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS = (
'[EC2-VPC] If specified, a public IP address will be assigned '
'to the new instance in a VPC.')
def _add_params(argument_table, **kwargs):
arg = SecondaryPrivateIpAddressesArgument(
name='secondary-private-ip-addresses',
help_text=SECONDARY_PRIVATE_IP_ADDRESSES_DOCS)
argument_table['secondary-private-ip-addresses'] = arg
arg = SecondaryPrivateIpAddressCountArgument(
name='secondary-private-ip-address-count',
help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)
argument_table['secondary-private-ip-address-count'] = arg
arg = AssociatePublicIpAddressArgument(
name='associate-public-ip-address',
help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,
action='store_true', group_name='associate_public_ip')
argument_table['associate-public-ip-address'] = arg
arg = NoAssociatePublicIpAddressArgument(
name='no-associate-public-ip-address',
help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,
action='store_false', group_name='associate_public_ip')
argument_table['no-associate-public-ip-address'] = arg
def _check_args(parsed_args, **kwargs):
# This function checks the parsed args. If the user specified
# the --network-interfaces option with any of the scalar options we
# raise an error.
arg_dict = vars(parsed_args)
if arg_dict['network_interfaces']:
for key in ('secondary_private_ip_addresses',
'secondary_private_ip_address_count',
'associate_public_ip_address'):
if arg_dict[key]:
msg = ('Mixing the --network-interfaces option '
'with the simple, scalar options is '
'not supported.')
raise ValueError(msg)
def _fix_args(params, **kwargs):
# The RunInstances request provides some parameters
# such as --subnet-id and --security-group-id that can be specified
# as separate options only if the request DOES NOT include a
# NetworkInterfaces structure. In those cases, the values for
# these parameters must be specified inside the NetworkInterfaces
# structure. This function checks for those parameters
# and fixes them if necessary.
# NOTE: If the user is a default VPC customer, RunInstances
# allows them to specify the security group by name or by id.
# However, in this scenario we can only support id because
# we can't place a group name in the NetworkInterfaces structure.
if 'NetworkInterfaces' in params:
ni = params['NetworkInterfaces']
if 'AssociatePublicIpAddress' in ni[0]:
if 'SubnetId' in params:
ni[0]['SubnetId'] = params['SubnetId']
del params['SubnetId']
if 'SecurityGroupIds' in params:
ni[0]['Groups'] = params['SecurityGroupIds']
del params['SecurityGroupIds']
if 'PrivateIpAddress' in params:
ip_addr = {'PrivateIpAddress': params['PrivateIpAddress'],
'Primary': True}
ni[0]['PrivateIpAddresses'] = [ip_addr]
del params['PrivateIpAddress']
EVENTS = [
('building-argument-table.ec2.run-instances', _add_params),
('operation-args-parsed.ec2.run-instances', _check_args),
('before-parameter-build.ec2.RunInstances', _fix_args),
]
def register_runinstances(event_handler):
# Register all of the events for customizing RunInstances
for event, handler in EVENTS:
event_handler.register(event, handler)
def _build_network_interfaces(params, key, value):
# Build up the NetworkInterfaces data structure
if 'NetworkInterfaces' not in params:
params['NetworkInterfaces'] = [{'DeviceIndex': 0}]
if key == 'PrivateIpAddresses':
if 'PrivateIpAddresses' not in params['NetworkInterfaces'][0]:
params['NetworkInterfaces'][0]['PrivateIpAddresses'] = value
else:
params['NetworkInterfaces'][0][key] = value
class SecondaryPrivateIpAddressesArgument(CustomArgument):
def add_to_parser(self, parser, cli_name=None):
parser.add_argument(self.cli_name, dest=self.py_name,
default=self._default, nargs='*')
def add_to_params(self, parameters, value):
if value:
value = [{'PrivateIpAddress': v, 'Primary': False} for
v in value]
_build_network_interfaces(parameters,
'PrivateIpAddresses',
value)
class SecondaryPrivateIpAddressCountArgument(CustomArgument):
def add_to_parser(self, parser, cli_name=None):
parser.add_argument(self.cli_name, dest=self.py_name,
default=self._default, type=int)
def add_to_params(self, parameters, value):
if value:
_build_network_interfaces(parameters,
'SecondaryPrivateIpAddressCount',
value)
class AssociatePublicIpAddressArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
_build_network_interfaces(parameters,
'AssociatePublicIpAddress',
value)
class NoAssociatePublicIpAddressArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is False:
_build_network_interfaces(parameters,
'AssociatePublicIpAddress',
value)
awscli-1.10.1/awscli/customizations/configservice/subscribe.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import json
import sys
from awscli.customizations.commands import BasicCommand
from awscli.customizations.utils import s3_bucket_exists
from awscli.customizations.s3.utils import find_bucket_key
S3_BUCKET = {'name': 's3-bucket', 'required': True,
'help_text': ('The S3 bucket that the AWS Config delivery channel'
' will use. If the bucket does not exist, it will '
'be automatically created. The value for this '
'argument should follow the form '
'bucket/prefix. Note that the prefix is optional.')}
SNS_TOPIC = {'name': 'sns-topic', 'required': True,
'help_text': ('The SNS topic that the AWS Config delivery channel'
' will use. If the SNS topic does not exist, it '
'will be automatically created. The value for this '
'argument should be a valid SNS topic name or the ARN of an '
'existing SNS topic.')}
IAM_ROLE = {'name': 'iam-role', 'required': True,
'help_text': ('The IAM role that the AWS Config configuration '
'recorder will use to record current resource '
'configurations. The value for this argument should be the '
'ARN of the desired IAM role.')}
def register_subscribe(cli):
cli.register('building-command-table.configservice', add_subscribe)
def add_subscribe(command_table, session, **kwargs):
command_table['subscribe'] = SubscribeCommand(session)
class SubscribeCommand(BasicCommand):
NAME = 'subscribe'
DESCRIPTION = ('Subscribes the user to AWS Config by creating an AWS Config '
'delivery channel and configuration recorder to track '
'AWS resource configurations. The delivery channel '
'and configuration recorder will both be named \"default\".')
ARG_TABLE = [S3_BUCKET, SNS_TOPIC, IAM_ROLE]
def __init__(self, session):
self._s3_client = None
self._sns_client = None
self._config_client = None
super(SubscribeCommand, self).__init__(session)
def _run_main(self, parsed_args, parsed_globals):
# Set up all of the necessary clients.
self._setup_clients(parsed_globals)
# Prepare a s3 bucket for use.
s3_bucket_helper = S3BucketHelper(self._s3_client)
bucket, prefix = s3_bucket_helper.prepare_bucket(parsed_args.s3_bucket)
# Prepare a sns topic for use.
sns_topic_helper = SNSTopicHelper(self._sns_client)
sns_topic_arn = sns_topic_helper.prepare_topic(parsed_args.sns_topic)
name = 'default'
# Create a configuration recorder.
self._config_client.put_configuration_recorder(
ConfigurationRecorder={
'name': name,
'roleARN': parsed_args.iam_role
}
)
# Create a delivery channel.
delivery_channel = {
'name': name,
's3BucketName': bucket,
'snsTopicARN': sns_topic_arn
}
if prefix:
delivery_channel['s3KeyPrefix'] = prefix
self._config_client.put_delivery_channel(
DeliveryChannel=delivery_channel)
# Start the configuration recorder.
self._config_client.start_configuration_recorder(
ConfigurationRecorderName=name
)
# Describe the configuration recorders
sys.stdout.write('Subscribe succeeded:\n\n')
sys.stdout.write('Configuration Recorders: ')
response = self._config_client.describe_configuration_recorders()
sys.stdout.write(
json.dumps(response['ConfigurationRecorders'], indent=4))
sys.stdout.write('\n\n')
# Describe the delivery channels
sys.stdout.write('Delivery Channels: ')
response = self._config_client.describe_delivery_channels()
sys.stdout.write(json.dumps(response['DeliveryChannels'], indent=4))
sys.stdout.write('\n')
return 0
def _setup_clients(self, parsed_globals):
client_args = {
'verify': parsed_globals.verify_ssl,
'region_name': parsed_globals.region
}
self._s3_client = self._session.create_client('s3', **client_args)
self._sns_client = self._session.create_client('sns', **client_args)
# Use the specified endpoint only for config related commands.
client_args['endpoint_url'] = parsed_globals.endpoint_url
self._config_client = self._session.create_client('config',
**client_args)
class S3BucketHelper(object):
def __init__(self, s3_client):
self._s3_client = s3_client
def prepare_bucket(self, s3_path):
bucket, key = find_bucket_key(s3_path)
bucket_exists = self._check_bucket_exists(bucket)
if not bucket_exists:
self._create_bucket(bucket)
sys.stdout.write('Using new S3 bucket: %s\n' % bucket)
else:
sys.stdout.write('Using existing S3 bucket: %s\n' % bucket)
return bucket, key
def _check_bucket_exists(self, bucket):
self._s3_client.meta.events.unregister(
'after-call',
unique_id='awscli-error-handler')
return s3_bucket_exists(self._s3_client, bucket)
def _create_bucket(self, bucket):
region_name = self._s3_client.meta.region_name
params = {
'Bucket': bucket
}
bucket_config = {'LocationConstraint': region_name}
if region_name != 'us-east-1':
params['CreateBucketConfiguration'] = bucket_config
self._s3_client.create_bucket(**params)
class SNSTopicHelper(object):
def __init__(self, sns_client):
self._sns_client = sns_client
def prepare_topic(self, sns_topic):
sns_topic_arn = sns_topic
# Create the topic if a name is given.
if not self._check_is_arn(sns_topic):
response = self._sns_client.create_topic(Name=sns_topic)
sns_topic_arn = response['TopicArn']
sys.stdout.write('Using new SNS topic: %s\n' % sns_topic_arn)
else:
sys.stdout.write('Using existing SNS topic: %s\n' % sns_topic_arn)
return sns_topic_arn
def _check_is_arn(self, sns_topic):
# A topic name cannot contain a colon; only ARNs have colons.
return ':' in sns_topic
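The `--s3-bucket` value takes the `bucket/prefix` form and is split by `find_bucket_key`. A rough standalone equivalent of that split, under the assumption that everything up to the first slash is the bucket (the function name and sample values are illustrative, not the actual awscli helper):

```python
def split_bucket_key(s3_path):
    # Everything before the first '/' is the bucket; the remainder is
    # the optional key prefix (empty when no prefix was given).
    bucket, _, key = s3_path.partition('/')
    return bucket, key

# split_bucket_key('my-config-bucket/logs/config')
#   -> ('my-config-bucket', 'logs/config')
# split_bucket_key('my-config-bucket') -> ('my-config-bucket', '')
```

`prepare_bucket` then only sets `s3KeyPrefix` on the delivery channel when the prefix part is non-empty.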
awscli-1.10.1/awscli/customizations/configservice/rename_cmd.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations import utils
def register_rename_config(cli):
cli.register('building-command-table.main', change_name)
def change_name(command_table, session, **kwargs):
"""
Change all existing ``aws config`` commands to ``aws configservice``
commands.
"""
utils.rename_command(command_table, 'config', 'configservice')
awscli-1.10.1/awscli/customizations/configservice/__init__.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
awscli-1.10.1/awscli/customizations/configservice/getstatus.py
# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
from awscli.customizations.commands import BasicCommand
def register_get_status(cli):
cli.register('building-command-table.configservice', add_get_status)
def add_get_status(command_table, session, **kwargs):
command_table['get-status'] = GetStatusCommand(session)
class GetStatusCommand(BasicCommand):
NAME = 'get-status'
DESCRIPTION = ('Reports the status of all configuration '
'recorders and delivery channels.')
def __init__(self, session):
self._config_client = None
super(GetStatusCommand, self).__init__(session)
def _run_main(self, parsed_args, parsed_globals):
self._setup_client(parsed_globals)
self._check_configuration_recorders()
self._check_delivery_channels()
return 0
def _setup_client(self, parsed_globals):
client_args = {
'verify': parsed_globals.verify_ssl,
'region_name': parsed_globals.region,
'endpoint_url': parsed_globals.endpoint_url
}
self._config_client = self._session.create_client('config',
**client_args)
def _check_configuration_recorders(self):
status = self._config_client.describe_configuration_recorder_status()
sys.stdout.write('Configuration Recorders:\n\n')
for configuration_recorder in status['ConfigurationRecordersStatus']:
self._check_configure_recorder_status(configuration_recorder)
sys.stdout.write('\n')
def _check_configure_recorder_status(self, configuration_recorder):
# Get the name of the recorder and print it out.
name = configuration_recorder['name']
sys.stdout.write('name: %s\n' % name)
# Get the recording status and print it out.
recording = configuration_recorder['recording']
recording_map = {False: 'OFF', True: 'ON'}
sys.stdout.write('recorder: %s\n' % recording_map[recording])
# If the recorder is on, get the last status and print it out.
if recording:
self._check_last_status(configuration_recorder)
def _check_delivery_channels(self):
status = self._config_client.describe_delivery_channel_status()
sys.stdout.write('Delivery Channels:\n\n')
for delivery_channel in status['DeliveryChannelsStatus']:
self._check_delivery_channel_status(delivery_channel)
sys.stdout.write('\n')
def _check_delivery_channel_status(self, delivery_channel):
# Get the name of the delivery channel and print it out.
name = delivery_channel['name']
sys.stdout.write('name: %s\n' % name)
# Obtain the various delivery statuses.
stream_delivery = delivery_channel['configStreamDeliveryInfo']
history_delivery = delivery_channel['configHistoryDeliveryInfo']
snapshot_delivery = delivery_channel['configSnapshotDeliveryInfo']
# Print the statuses out if they exist.
if stream_delivery:
self._check_last_status(stream_delivery, 'stream delivery ')
if history_delivery:
self._check_last_status(history_delivery, 'history delivery ')
if snapshot_delivery:
self._check_last_status(snapshot_delivery, 'snapshot delivery ')
def _check_last_status(self, status, status_name=''):
last_status = status['lastStatus']
sys.stdout.write('last %sstatus: %s\n' % (status_name, last_status))
if last_status == "FAILURE":
sys.stdout.write('error code: %s\n' % status['lastErrorCode'])
sys.stdout.write('message: %s\n' % status['lastErrorMessage'])
awscli-1.10.1/awscli/customizations/configservice/putconfigurationrecorder.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import copy
from awscli.arguments import CLIArgument
def register_modify_put_configuration_recorder(cli):
cli.register(
'building-argument-table.configservice.put-configuration-recorder',
extract_recording_group)
def extract_recording_group(session, argument_table, **kwargs):
# The purpose of this customization is to extract the recordingGroup
# member from ConfigurationRecorder into its own argument.
# This customization is needed because the recordingGroup member
# breaks the shorthand syntax as it is a structure and not a scalar value.
configuration_recorder_argument = argument_table['configuration-recorder']
configuration_recorder_model = copy.deepcopy(
configuration_recorder_argument.argument_model)
recording_group_model = copy.deepcopy(
configuration_recorder_argument.argument_model.
members['recordingGroup'])
del configuration_recorder_model.members['recordingGroup']
argument_table['configuration-recorder'] = ConfigurationRecorderArgument(
name='configuration-recorder',
argument_model=configuration_recorder_model,
operation_model=configuration_recorder_argument._operation_model,
is_required=True,
event_emitter=session.get_component('event_emitter'),
serialized_name='ConfigurationRecorder'
)
argument_table['recording-group'] = RecordingGroupArgument(
name='recording-group',
argument_model=recording_group_model,
operation_model=configuration_recorder_argument._operation_model,
is_required=False,
event_emitter=session.get_component('event_emitter'),
serialized_name='recordingGroup'
)
class ConfigurationRecorderArgument(CLIArgument):
def add_to_params(self, parameters, value):
if value is None:
return
unpacked = self._unpack_argument(value)
if 'ConfigurationRecorder' in parameters:
current_value = parameters['ConfigurationRecorder']
current_value.update(unpacked)
else:
parameters['ConfigurationRecorder'] = unpacked
class RecordingGroupArgument(CLIArgument):
def add_to_params(self, parameters, value):
if value is None:
return
unpacked = self._unpack_argument(value)
if 'ConfigurationRecorder' in parameters:
parameters['ConfigurationRecorder']['recordingGroup'] = unpacked
else:
parameters['ConfigurationRecorder'] = {}
parameters['ConfigurationRecorder']['recordingGroup'] = unpacked
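The two argument classes above cooperate by merging into the same `ConfigurationRecorder` parameter regardless of which one is unpacked first. A sketch of the `recording-group` side of that merge with plain dicts (the role ARN and recording-group contents are placeholders):

```python
def add_recording_group(parameters, recording_group):
    # Same merge as RecordingGroupArgument.add_to_params: create the
    # ConfigurationRecorder entry if the other argument has not yet.
    if 'ConfigurationRecorder' not in parameters:
        parameters['ConfigurationRecorder'] = {}
    parameters['ConfigurationRecorder']['recordingGroup'] = recording_group

parameters = {'ConfigurationRecorder': {
    'name': 'default',
    'roleARN': 'arn:aws:iam::123456789012:role/config-role'}}
add_recording_group(parameters, {'allSupported': True})
# The recorder keeps its name/roleARN and gains a recordingGroup member.
```

Splitting `recordingGroup` out this way is what lets the top-level `--configuration-recorder` argument keep working with shorthand syntax, since only scalar members remain on it.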
awscli-1.10.1/awscli/customizations/configure/addmodel.py
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import json
import os
from botocore.model import ServiceModel
from awscli.customizations.commands import BasicCommand
def _get_endpoint_prefix_to_name_mappings(session):
# Get the mappings of endpoint prefixes to service names from the
# available service models.
prefixes_to_services = {}
for service_name in session.get_available_services():
service_model = session.get_service_model(service_name)
prefixes_to_services[service_model.endpoint_prefix] = service_name
return prefixes_to_services
def _get_service_name(session, endpoint_prefix):
if endpoint_prefix in session.get_available_services():
# Check if the endpoint prefix is a pre-existing service.
# If it is, use that endpoint prefix as the service name.
return endpoint_prefix
else:
# The service may have a different endpoint prefix than its name
# So we need to determine what the correct mapping may be.
# Figure out the mappings of endpoint prefix to service names.
name_mappings = _get_endpoint_prefix_to_name_mappings(session)
# Determine the service name from the mapping.
# If it does not exist in the mapping, return the original endpoint
# prefix.
return name_mappings.get(endpoint_prefix, endpoint_prefix)
def get_model_location(session, service_definition, service_name=None):
"""Gets the path of where a service-2.json file should go in ~/.aws/models
:type session: botocore.session.Session
:param session: A session object
:type service_definition: dict
:param service_definition: The json loaded service definition
:type service_name: str
:param service_name: The service name to use. If this is not provided,
this will be determined from a combination of available services
and the service definition.
:returns: The path where the model should be placed, based on
the service definition and the current services in botocore.
"""
# Add the ServiceModel abstraction over the service json definition to
# make it easier to work with.
service_model = ServiceModel(service_definition)
# Determine the service_name if not provided
if service_name is None:
endpoint_prefix = service_model.endpoint_prefix
service_name = _get_service_name(session, endpoint_prefix)
api_version = service_model.api_version
# For the model location we only want the custom data path (~/.aws/models
# not the one set by AWS_DATA_PATH)
data_path = session.get_component('data_loader').CUSTOMER_DATA_PATH
# Use the version of the model to determine the file's naming convention.
service_model_name = (
'service-%d.json' % int(
float(service_definition.get('version', '2.0'))))
return os.path.join(data_path, service_name, api_version,
service_model_name)
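The file name is derived from the model's `version` field, so the default of `'2.0'` yields the familiar `service-2.json` convention. A sketch of just the path construction, with the data path and service details passed in explicitly as stand-ins for the session-derived values:

```python
import os

def model_location(data_path, service_name, api_version, service_definition):
    # 'version' defaults to '2.0', matching the service-2.json convention.
    name = 'service-%d.json' % int(
        float(service_definition.get('version', '2.0')))
    return os.path.join(data_path, service_name, api_version, name)

path = model_location(os.path.join('~', '.aws', 'models'),
                      'myservice', '2015-01-01', {'version': '2.0'})
# path ends with myservice/2015-01-01/service-2.json
# (using the platform's path separators)
```

The service name and API version here are illustrative; in the real command they come from the uploaded definition and the session's known services.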
class AddModelCommand(BasicCommand):
NAME = 'add-model'
DESCRITPION = (
'Adds a service JSON model to the appropriate location in '
'~/.aws/models. Once the model gets added, CLI commands and Boto3 '
'clients will be immediately available for the service JSON model '
'provided.'
)
ARG_TABLE = [
{'name': 'service-model', 'required': True, 'help_text': (
'The contents of the service JSON model.')},
{'name': 'service-name', 'help_text': (
'Overrides the default name used by the service JSON '
'model to generate CLI service commands and Boto3 clients.')}
]
def _run_main(self, parsed_args, parsed_globals):
service_definition = json.loads(parsed_args.service_model)
# Get the path to where the model should be written
model_location = get_model_location(
self._session, service_definition, parsed_args.service_name
)
# If the service_name/api_version directories do not exist,
# then create them.
model_directory = os.path.dirname(model_location)
if not os.path.exists(model_directory):
os.makedirs(model_directory)
# Write the model to the specified location
with open(model_location, 'w') as f:
f.write(parsed_args.service_model)
return 0
awscli-1.10.1/awscli/customizations/configure/__init__.py
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import re
import sys
import logging
from botocore.exceptions import ProfileNotFound
from awscli.compat import raw_input
from awscli.customizations.commands import BasicCommand
from awscli.customizations.configure.addmodel import AddModelCommand
logger = logging.getLogger(__name__)
NOT_SET = ''
PREDEFINED_SECTION_NAMES = ('preview', 'plugins')
def register_configure_cmd(cli):
cli.register('building-command-table.main',
ConfigureCommand.add_command)
class ConfigValue(object):
def __init__(self, value, config_type, config_variable):
self.value = value
self.config_type = config_type
self.config_variable = config_variable
def mask_value(self):
if self.value is NOT_SET:
return
self.value = _mask_value(self.value)
class SectionNotFoundError(Exception):
pass
def _mask_value(current_value):
if current_value is None:
return 'None'
else:
return ('*' * 16) + current_value[-4:]
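`_mask_value` keeps only the last four characters of a secret visible when redisplaying it. For illustration, using AWS's documented example access key ID rather than a real credential:

```python
def mask_value(current_value):
    # Show a fixed-width mask plus the last four characters, as the
    # configure prompt does for existing access keys.
    if current_value is None:
        return 'None'
    return ('*' * 16) + current_value[-4:]

# mask_value('AKIAIOSFODNN7EXAMPLE') -> '****************MPLE'
# mask_value(None) -> 'None'
```

The fixed 16-character mask means the displayed length reveals nothing about the length of the stored key.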
class InteractivePrompter(object):
def get_value(self, current_value, config_name, prompt_text=''):
if config_name in ('aws_access_key_id', 'aws_secret_access_key'):
current_value = _mask_value(current_value)
response = raw_input("%s [%s]: " % (prompt_text, current_value))
if not response:
# If the user hits enter, we return a value of None
# instead of an empty string. That way we can determine
# whether or not a value has changed.
response = None
return response
class ConfigFileWriter(object):
SECTION_REGEX = re.compile(r'\[(?P<header>[^]]+)\]')
OPTION_REGEX = re.compile(
r'(?P