awscli-1.17.14/0000755000000000000000000000000013620325757013117 5ustar rootroot00000000000000awscli-1.17.14/README.rst0000644000000000000000000004347513620325630014611 0ustar rootroot00000000000000======= aws-cli ======= .. image:: https://travis-ci.org/aws/aws-cli.svg?branch=develop :target: https://travis-ci.org/aws/aws-cli :alt: Build Status .. image:: https://badges.gitter.im/aws/aws-cli.svg :target: https://gitter.im/aws/aws-cli :alt: Gitter This package provides a unified command line interface to Amazon Web Services. The aws-cli package works on Python versions: * 2.7.x and greater * 3.4.x and greater * 3.5.x and greater * 3.6.x and greater * 3.7.x and greater * 3.8.x and greater On 10/09/2019 support for Python 2.6 and Python 3.3 was deprecated and support was dropped on 01/10/2020. To avoid disruption, customers using the AWS CLI on Python 2.6 or 3.3 will need to upgrade their version of Python or pin the version of the AWS CLI in use prior to 01/10/2020. For more information, see this `blog post `__. .. attention:: We recommend that all customers regularly monitor the `Amazon Web Services Security Bulletins website`_ for any important security bulletins related to aws-cli. ------------ Installation ------------ The easiest way to install aws-cli is to use `pip`_ in a ``virtualenv``:: $ pip install awscli or, if you are not installing in a ``virtualenv``, to install globally:: $ sudo pip install awscli or for your user:: $ pip install --user awscli If you have the aws-cli installed and want to upgrade to the latest version you can run:: $ pip install --upgrade awscli .. note:: On macOS, if you see an error regarding the version of six that came with distutils in El Capitan, use the ``--ignore-installed`` option:: $ sudo pip install awscli --ignore-installed six This will install the aws-cli package as well as all dependencies. You can also just `download the tarball`_. Once you have the awscli directory structure on your workstation, you can just run:: $ cd $ python setup.py install If you want to run the ``develop`` branch of the CLI, see the "CLI Dev Version" section below. ------------ CLI Releases ------------ The release notes for the AWS CLI can be found `here `__. ------------------ Command Completion ------------------ The aws-cli package includes a very useful command completion feature. This feature is not automatically installed so you need to configure it manually. To enable tab completion for bash either use the built-in command ``complete``:: $ complete -C aws_completer aws Or add ``bin/aws_bash_completer`` file under ``/etc/bash_completion.d``, ``/usr/local/etc/bash_completion.d`` or any other ``bash_completion.d`` location. For tcsh:: $ complete aws 'p/*/`aws_completer`/' You should add this to your startup scripts to enable it for future sessions. For zsh please refer to ``bin/aws_zsh_completer.sh``. Source that file, e.g. from your ``~/.zshrc``, and make sure you run ``compinit`` before:: $ source bin/aws_zsh_completer.sh For now the bash compatibility auto completion (``bashcompinit``) is used. For further details please refer to the top of ``bin/aws_zsh_completer.sh``. --------------- Getting Started --------------- Before using aws-cli, you need to tell it about your AWS credentials. 
You can do this in several ways:

* Environment variables
* Shared credentials file
* Config file
* IAM Role

The quickest way to get started is to run the ``aws configure`` command::

    $ aws configure
    AWS Access Key ID: foo
    AWS Secret Access Key: bar
    Default region name [us-west-2]: us-west-2
    Default output format [None]: json

To use environment variables, do the following::

    $ export AWS_ACCESS_KEY_ID=
    $ export AWS_SECRET_ACCESS_KEY=

To use the shared credentials file, create an INI formatted file like
this::

    [default]
    aws_access_key_id=foo
    aws_secret_access_key=bar

    [testing]
    aws_access_key_id=foo
    aws_secret_access_key=bar

and place it in ``~/.aws/credentials`` (or in
``%UserProfile%\.aws/credentials`` on Windows). If you wish to place the
shared credentials file in a different location than the one specified
above, you need to tell aws-cli where to find it. Do this by setting
the appropriate environment variable::

    $ export AWS_SHARED_CREDENTIALS_FILE=/path/to/shared_credentials_file

To use a config file, create a configuration file like this::

    [default]
    aws_access_key_id=
    aws_secret_access_key=
    # Optional, to define default region for this profile.
    region=us-west-1

    [profile testing]
    aws_access_key_id=
    aws_secret_access_key=
    region=us-west-2

and place it in ``~/.aws/config`` (or in ``%UserProfile%\.aws\config``
on Windows). If you wish to place the config file in a different location
than the one specified above, you need to tell aws-cli where to find it.
Do this by setting the appropriate environment variable::

    $ export AWS_CONFIG_FILE=/path/to/config_file

As you can see, you can have multiple ``profiles`` defined in both the
shared credentials file and the configuration file. You can then specify
which profile to use by using the ``--profile`` option. If no profile is
specified, the ``default`` profile is used.

In the config file, except for the default profile, you **must** prefix
each config section of a profile group with ``profile``. For example,
if you have a profile named "testing" the section header would be
``[profile testing]``.

The final option for credentials is highly recommended if you are using
aws-cli on an EC2 instance. IAM Roles are a great way to have credentials
installed automatically on your instance. If you are using IAM Roles,
aws-cli will find them and use them automatically.

----------------------------
Other Configurable Variables
----------------------------

In addition to credentials, a number of other variables can be
configured either with environment variables, configuration file
entries or both. The following table documents these.

============================= =========== ============================= ================================= ==================================
Variable                      Option      Config Entry                  Environment Variable              Description
============================= =========== ============================= ================================= ==================================
profile                       --profile   profile                       AWS_PROFILE                       Default profile name
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
region                        --region    region                        AWS_DEFAULT_REGION                Default AWS Region
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
config_file                                                             AWS_CONFIG_FILE                   Alternate location of config
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
credentials_file                                                        AWS_SHARED_CREDENTIALS_FILE       Alternate location of credentials
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
output                        --output    output                        AWS_DEFAULT_OUTPUT                Default output style
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
ca_bundle                     --ca-bundle ca_bundle                     AWS_CA_BUNDLE                     CA Certificate Bundle
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
access_key                                aws_access_key_id             AWS_ACCESS_KEY_ID                 AWS Access Key
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
secret_key                                aws_secret_access_key         AWS_SECRET_ACCESS_KEY             AWS Secret Key
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
token                                     aws_session_token             AWS_SESSION_TOKEN                 AWS Token (temp credentials)
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
cli_timestamp_format                      cli_timestamp_format                                            Output format of timestamps
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
metadata_service_timeout                  metadata_service_timeout      AWS_METADATA_SERVICE_TIMEOUT      EC2 metadata timeout
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
metadata_service_num_attempts             metadata_service_num_attempts AWS_METADATA_SERVICE_NUM_ATTEMPTS EC2 metadata retry count
----------------------------- ----------- ----------------------------- --------------------------------- ----------------------------------
parameter_validation                      parameter_validation                                            Toggles local parameter validation
============================= =========== ============================= ================================= ==================================

^^^^^^^^
Examples
^^^^^^^^

If you get tired of specifying a ``--region`` option on the command line
all of the time, you can specify a default region to use whenever no
explicit ``--region`` option is included using the ``region`` variable.
To specify this using an environment variable:: $ export AWS_DEFAULT_REGION=us-west-2 To include it in your config file:: [default] aws_access_key_id= aws_secret_access_key= region=us-west-1 Similarly, the ``profile`` variable can be used to specify which profile to use if one is not explicitly specified on the command line via the ``--profile`` option. To set this via environment variable:: $ export AWS_PROFILE=testing The ``profile`` variable can not be specified in the configuration file since it would have to be associated with a profile and would defeat the purpose. ^^^^^^^^^^^^^^^^^^^ Further Information ^^^^^^^^^^^^^^^^^^^ For more information about configuration options, please refer the `AWS CLI Configuration Variables topic `_. You can access this topic from the CLI as well by running ``aws help config-vars``. ---------------------------------------- Accessing Services With Global Endpoints ---------------------------------------- Some services, such as *AWS Identity and Access Management* (IAM) have a single, global endpoint rather than different endpoints for each region. To make access to these services simpler, aws-cli will automatically use the global endpoint unless you explicitly supply a region (using the ``--region`` option) or a profile (using the ``--profile`` option). Therefore, the following:: $ aws iam list-users will automatically use the global endpoint for the IAM service regardless of the value of the ``AWS_DEFAULT_REGION`` environment variable or the ``region`` variable specified in your profile. -------------------- JSON Parameter Input -------------------- Many options that need to be provided are simple string or numeric values. However, some operations require JSON data structures as input parameters either on the command line or in files. For example, consider the command to authorize access to an EC2 security group. In this case, we will add ingress access to port 22 for all IP addresses:: $ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \ --ip-permissions '{"FromPort":22,"ToPort":22,"IpProtocol":"tcp","IpRanges":[{"CidrIp": "0.0.0.0/0"}]}' -------------------------- File-based Parameter Input -------------------------- Some parameter values are so large or so complex that it would be easier to place the parameter value in a file and refer to that file rather than entering the value directly on the command line. Let's use the ``authorize-security-group-ingress`` command shown above. Rather than provide the value of the ``--ip-permissions`` parameter directly in the command, you could first store the values in a file. Let's call the file ``ip_perms.json``:: {"FromPort":22, "ToPort":22, "IpProtocol":"tcp", "IpRanges":[{"CidrIp":"0.0.0.0/0"}]} Then, we could make the same call as above like this:: $ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \ --ip-permissions file://ip_perms.json The ``file://`` prefix on the parameter value signals that the parameter value is actually a reference to a file that contains the actual parameter value. aws-cli will open the file, read the value and use that value as the parameter value. This is also useful when the parameter is really referring to file-based data. For example, the ``--user-data`` option of the ``aws ec2 run-instances`` command or the ``--public-key-material`` parameter of the ``aws ec2 import-key-pair`` command. 
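For instance, a minimal, hypothetical invocation that passes a local
startup script as user data might look like the following (the AMI ID
and file name are placeholders used only for illustration)::

    $ aws ec2 run-instances --image-id ami-12345678 \
        --user-data file://my_user_data.txt
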
------------------------- URI-based Parameter Input ------------------------- Similar to the file-based input described above, aws-cli also includes a way to use data from a URI as the value of a parameter. The idea is exactly the same except the prefix used is ``https://`` or ``http://``:: $ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \ --ip-permissions http://mybucket.s3.amazonaws.com/ip_perms.json -------------- Command Output -------------- The default output for commands is currently JSON. You can use the ``--query`` option to extract the output elements from this JSON document. For more information on the expression language used for the ``--query`` argument, you can read the `JMESPath Tutorial `__. ^^^^^^^^ Examples ^^^^^^^^ Get a list of IAM user names:: $ aws iam list-users --query Users[].UserName Get a list of key names and their sizes in an S3 bucket:: $ aws s3api list-objects --bucket b --query Contents[].[Key,Size] Get a list of all EC2 instances and include their Instance ID, State Name, and their Name (if they've been tagged with a Name):: $ aws ec2 describe-instances --query \ 'Reservations[].Instances[].[InstanceId,State.Name,Tags[?Key==`Name`] | [0].Value]' You may also find the `jq `_ tool useful in processing the JSON output for other uses. There is also an ASCII table format available. You can select this style with the ``--output table`` option or you can make this style your default output style via environment variable or config file entry as described above. Try adding ``--output table`` to the above commands. --------------- CLI Dev Version --------------- If you are just interested in using the latest released version of the AWS CLI, please see the Installation_ section above. This section is for anyone who wants to install the development version of the CLI. You normally would not need to do this unless: * You are developing a feature for the CLI and plan on submitting a Pull Request. * You want to test the latest changes of the CLI before they make it into an official release. The latest changes to the CLI are in the ``develop`` branch on github. This is the default branch when you clone the git repository. Additionally, there are several other packages that are developed in lockstep with the CLI. This includes: * `botocore `__ * `jmespath `__ If you just want to install a snapshot of the latest development version of the CLI, you can use the ``requirements.txt`` file included in this repo. This file points to the development version of the above packages:: $ cd $ pip install -r requirements.txt $ pip install -e . However, to keep up to date, you will continually have to run the ``pip install -r requirements.txt`` file to pull in the latest changes from the develop branches of botocore, jmespath, etc. You can optionally clone each of those repositories and run "pip install -e ." for each repository:: $ git clone && cd jmespath/ $ pip install -e . && cd .. $ git clone && cd botocore/ $ pip install -e . && cd .. $ git clone && cd aws-cli/ $ pip install -e . ------------ Getting Help ------------ We use GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them. Please use these community resources for getting help: * Ask a question on `Stack Overflow `__ and tag it with `aws-cli `__ * Come join the AWS CLI community chat on `gitter `__ * Open a support ticket with `AWS Support `__ * If it turns out that you may have found a bug, please `open an issue `__ .. 
_`Amazon Web Services Security Bulletins website`: https://aws.amazon.com/security/security-bulletins .. _pip: http://www.pip-installer.org/en/latest/ .. _`download the tarball`: https://pypi.org/project/awscli/ awscli-1.17.14/bin/0000755000000000000000000000000013620325757013667 5ustar rootroot00000000000000awscli-1.17.14/bin/aws.cmd0000644000000000000000000000263013620325556015144 0ustar rootroot00000000000000@echo OFF REM=""" setlocal set PythonExe="" set PythonExeFlags= for %%i in (cmd bat exe) do ( for %%j in (python.%%i) do ( call :SetPythonExe "%%~$PATH:j" ) ) for /f "tokens=2 delims==" %%i in ('assoc .py') do ( for /f "tokens=2 delims==" %%j in ('ftype %%i') do ( for /f "tokens=1" %%k in ("%%j") do ( call :SetPythonExe %%k ) ) ) %PythonExe% -x %PythonExeFlags% "%~f0" %* exit /B %ERRORLEVEL% goto :EOF :SetPythonExe if not ["%~1"]==[""] ( if [%PythonExe%]==[""] ( set PythonExe="%~1" ) ) goto :EOF """ # =================================================== # Python script starts here # =================================================== #!/usr/bin/env python # Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import awscli.clidriver import sys def main(): return awscli.clidriver.main() if __name__ == '__main__': sys.exit(main()) awscli-1.17.14/bin/aws_completer0000755000000000000000000000216313620325556016460 0ustar rootroot00000000000000#!/usr/bin/env python # Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os if os.environ.get('LC_CTYPE', '') == 'UTF-8': os.environ['LC_CTYPE'] = 'en_US.UTF-8' import awscli.completer if __name__ == '__main__': # bash exports COMP_LINE and COMP_POINT, tcsh COMMAND_LINE only cline = os.environ.get('COMP_LINE') or os.environ.get('COMMAND_LINE') or '' cpoint = int(os.environ.get('COMP_POINT') or len(cline)) try: awscli.completer.complete(cline, cpoint) except KeyboardInterrupt: # If the user hits Ctrl+C, we don't want to print # a traceback to the user. pass awscli-1.17.14/bin/aws0000755000000000000000000000146213620325556014407 0ustar rootroot00000000000000#!/usr/bin/env python # Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
See the License for the specific # language governing permissions and limitations under the License. import sys import os if os.environ.get('LC_CTYPE', '') == 'UTF-8': os.environ['LC_CTYPE'] = 'en_US.UTF-8' import awscli.clidriver def main(): return awscli.clidriver.main() if __name__ == '__main__': sys.exit(main()) awscli-1.17.14/bin/aws_bash_completer0000644000000000000000000000031413620325556017446 0ustar rootroot00000000000000# Typically that would be added under one of the following paths: # - /etc/bash_completion.d # - /usr/local/etc/bash_completion.d # - /usr/share/bash-completion/completions complete -C aws_completer aws awscli-1.17.14/bin/aws_zsh_completer.sh0000644000000000000000000000341713620325556017755 0ustar rootroot00000000000000# Source this file to activate auto completion for zsh using the bash # compatibility helper. Make sure to run `compinit` before, which should be # given usually. # # % source /path/to/zsh_complete.sh # # Typically that would be called somewhere in your .zshrc. # # Note, the overwrite of _bash_complete() is to export COMP_LINE and COMP_POINT # That is only required for zsh <= edab1d3dbe61da7efe5f1ac0e40444b2ec9b9570 # # https://github.com/zsh-users/zsh/commit/edab1d3dbe61da7efe5f1ac0e40444b2ec9b9570 # # zsh relases prior to that version do not export the required env variables! autoload -Uz bashcompinit bashcompinit -i _bash_complete() { local ret=1 local -a suf matches local -x COMP_POINT COMP_CWORD local -a COMP_WORDS COMPREPLY BASH_VERSINFO local -x COMP_LINE="$words" local -A savejobstates savejobtexts (( COMP_POINT = 1 + ${#${(j. .)words[1,CURRENT]}} + $#QIPREFIX + $#IPREFIX + $#PREFIX )) (( COMP_CWORD = CURRENT - 1)) COMP_WORDS=( $words ) BASH_VERSINFO=( 2 05b 0 1 release ) savejobstates=( ${(kv)jobstates} ) savejobtexts=( ${(kv)jobtexts} ) [[ ${argv[${argv[(I)nospace]:-0}-1]} = -o ]] && suf=( -S '' ) matches=( ${(f)"$(compgen $@ -- ${words[CURRENT]})"} ) if [[ -n $matches ]]; then if [[ ${argv[${argv[(I)filenames]:-0}-1]} = -o ]]; then compset -P '*/' && matches=( ${matches##*/} ) compset -S '/*' && matches=( ${matches%%/*} ) compadd -Q -f "${suf[@]}" -a matches && ret=0 else compadd -Q "${suf[@]}" -a matches && ret=0 fi fi if (( ret )); then if [[ ${argv[${argv[(I)default]:-0}-1]} = -o ]]; then _default "${suf[@]}" && ret=0 elif [[ ${argv[${argv[(I)dirnames]:-0}-1]} = -o ]]; then _directories "${suf[@]}" && ret=0 fi fi return ret } complete -C aws_completer aws awscli-1.17.14/LICENSE.txt0000644000000000000000000000104513620325630014730 0ustar rootroot00000000000000Copyright 2012-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
awscli-1.17.14/MANIFEST.in0000644000000000000000000000022313620325554014645 0ustar rootroot00000000000000include README.rst include LICENSE.txt include requirements.txt recursive-include awscli/examples *.rst *.txt recursive-include awscli/data *.json awscli-1.17.14/requirements.txt0000644000000000000000000000100713620325630016367 0ustar rootroot00000000000000tox>=2.3.1,<3.0.0 # botocore and the awscli packages are typically developed # in tandem, so we're requiring the latest develop # branch of botocore and s3transfer when working on the awscli. -e git://github.com/boto/botocore.git@develop#egg=botocore -e git://github.com/boto/s3transfer.git@develop#egg=s3transfer nose==1.3.7 mock==1.3.0 # TODO: this can now be bumped # 0.30.0 dropped support for python2.6 # remove this upper bound on the wheel version once 2.6 support # is dropped from aws-cli wheel>0.24.0,<0.30.0 awscli-1.17.14/setup.cfg0000644000000000000000000000100713620325757014736 0ustar rootroot00000000000000[wheel] universal = 1 [metadata] requires-dist = botocore==1.14.14 docutils>=0.10,<0.16 rsa>=3.1.2,<=3.5.0 PyYAML>=3.10,<5.3 s3transfer>=0.3.0,<0.4.0 colorama>=0.2.5,<0.4.2; python_version=='3.4' colorama>=0.2.5,<0.4.4; python_version!='3.4' [check-manifest] ignore = .github .github/* .dependabot .dependabot/* .coveragerc CHANGELOG.rst CONTRIBUTING.rst .travis.yml requirements* tox.ini .changes .changes/* tests tests/* scripts scripts/* doc doc/* [egg_info] tag_build = tag_date = 0 awscli-1.17.14/setup.py0000644000000000000000000000554613620325757014643 0ustar rootroot00000000000000#!/usr/bin/env python import codecs import os.path import re import sys from setuptools import setup, find_packages here = os.path.abspath(os.path.dirname(__file__)) def read(*parts): return codecs.open(os.path.join(here, *parts), 'r').read() def find_version(*file_paths): version_file = read(*file_paths) version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M) if version_match: return version_match.group(1) raise RuntimeError("Unable to find version string.") install_requires = [ 'botocore==1.14.14', 'docutils>=0.10,<0.16', 'rsa>=3.1.2,<=3.5.0', 's3transfer>=0.3.0,<0.4.0', 'PyYAML>=3.10,<5.3', ] if sys.version_info[:2] == (3, 4): install_requires.append('colorama>=0.2.5,<0.4.2') else: install_requires.append('colorama>=0.2.5,<0.4.4') setup_options = dict( name='awscli', version=find_version("awscli", "__init__.py"), description='Universal Command Line Environment for AWS.', long_description=read('README.rst'), author='Amazon Web Services', url='http://aws.amazon.com/cli/', scripts=['bin/aws', 'bin/aws.cmd', 'bin/aws_completer', 'bin/aws_zsh_completer.sh', 'bin/aws_bash_completer'], packages=find_packages(exclude=['tests*']), package_data={'awscli': ['data/*.json', 'examples/*/*.rst', 'examples/*/*.txt', 'examples/*/*/*.txt', 'examples/*/*/*.rst', 'topics/*.rst', 'topics/*.json']}, install_requires=install_requires, extras_require={}, license="Apache License 2.0", classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'Intended Audience :: System Administrators', 'Natural Language :: English', 'License :: OSI Approved :: Apache Software License', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language 
:: Python :: 3.8', ], ) if 'py2exe' in sys.argv: # This will actually give us a py2exe command. import py2exe # And we have some py2exe specific options. setup_options['options'] = { 'py2exe': { 'optimize': 0, 'skip_archive': True, 'dll_excludes': ['crypt32.dll'], 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser', 'awscli', 'ConfigParser', 'xml.etree', 'pipes'], } } setup_options['console'] = ['bin/aws'] setup(**setup_options) awscli-1.17.14/awscli/0000755000000000000000000000000013620325757014401 5ustar rootroot00000000000000awscli-1.17.14/awscli/table.py0000644000000000000000000003601013620325556016037 0ustar rootroot00000000000000# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys import struct import unicodedata import colorama from awscli.utils import is_a_tty from awscli.compat import six # `autoreset` allows us to not have to sent reset sequences for every # string. `strip` lets us preserve color when redirecting. COLORAMA_KWARGS = { 'autoreset': True, 'strip': False, } def get_text_length(text): # `len(unichar)` measures the number of characters, so we use # `unicodedata.east_asian_width` to measure the length of characters. # Following responses are considered to be full-width length. # * A(Ambiguous) # * F(Fullwidth) # * W(Wide) text = six.text_type(text) return sum(2 if unicodedata.east_asian_width(char) in 'WFA' else 1 for char in text) def determine_terminal_width(default_width=80): # If we can't detect the terminal width, the default_width is returned. try: from termios import TIOCGWINSZ from fcntl import ioctl except ImportError: return default_width try: height, width = struct.unpack('hhhh', ioctl(sys.stdout, TIOCGWINSZ, '\000' * 8))[0:2] except Exception: return default_width else: return width def center_text(text, length=80, left_edge='|', right_edge='|', text_length=None): """Center text with specified edge chars. You can pass in the length of the text as an arg, otherwise it is computed automatically for you. This can allow you to center a string not based on it's literal length (useful if you're using ANSI codes). 
""" # postcondition: get_text_length(returned_text) == length if text_length is None: text_length = get_text_length(text) output = [] char_start = (length // 2) - (text_length // 2) - 1 output.append(left_edge + ' ' * char_start + text) length_so_far = get_text_length(left_edge) + char_start + text_length right_side_spaces = length - get_text_length(right_edge) - length_so_far output.append(' ' * right_side_spaces) output.append(right_edge) final = ''.join(output) return final def align_left(text, length, left_edge='|', right_edge='|', text_length=None, left_padding=2): """Left align text.""" # postcondition: get_text_length(returned_text) == length if text_length is None: text_length = get_text_length(text) computed_length = ( text_length + left_padding + \ get_text_length(left_edge) + get_text_length(right_edge)) if length - computed_length >= 0: padding = left_padding else: padding = 0 output = [] length_so_far = 0 output.append(left_edge) length_so_far += len(left_edge) output.append(' ' * padding) length_so_far += padding output.append(text) length_so_far += text_length output.append(' ' * (length - length_so_far - len(right_edge))) output.append(right_edge) return ''.join(output) def convert_to_vertical_table(sections): # Any section that only has a single row is # inverted, so: # header1 | header2 | header3 # val1 | val2 | val2 # # becomes: # # header1 | val1 # header2 | val2 # header3 | val3 for i, section in enumerate(sections): if len(section.rows) == 1 and section.headers: headers = section.headers new_section = Section() new_section.title = section.title new_section.indent_level = section.indent_level for header, element in zip(headers, section.rows[0]): new_section.add_row([header, element]) sections[i] = new_section class IndentedStream(object): def __init__(self, stream, indent_level, left_indent_char='|', right_indent_char='|'): self._stream = stream self._indent_level = indent_level self._left_indent_char = left_indent_char self._right_indent_char = right_indent_char def write(self, text): self._stream.write(self._left_indent_char * self._indent_level) if text.endswith('\n'): self._stream.write(text[:-1]) self._stream.write(self._right_indent_char * self._indent_level) self._stream.write('\n') else: self._stream.write(text) def __getattr__(self, attr): return getattr(self._stream, attr) class Styler(object): def style_title(self, text): return text def style_header_column(self, text): return text def style_row_element(self, text): return text def style_indentation_char(self, text): return text class ColorizedStyler(Styler): def __init__(self): colorama.init(**COLORAMA_KWARGS) def style_title(self, text): # Originally bold + underline return text #return colorama.Style.BOLD + text + colorama.Style.RESET_ALL def style_header_column(self, text): # Originally underline return text def style_row_element(self, text): return (colorama.Style.BRIGHT + colorama.Fore.BLUE + text + colorama.Style.RESET_ALL) def style_indentation_char(self, text): return (colorama.Style.DIM + colorama.Fore.YELLOW + text + colorama.Style.RESET_ALL) class MultiTable(object): def __init__(self, terminal_width=None, initial_section=True, column_separator='|', terminal=None, styler=None, auto_reformat=True): self._auto_reformat = auto_reformat if initial_section: self._current_section = Section() self._sections = [self._current_section] else: self._current_section = None self._sections = [] if styler is None: # Move out to factory. 
if is_a_tty(): self._styler = ColorizedStyler() else: self._styler = Styler() else: self._styler = styler self._rendering_index = 0 self._column_separator = column_separator if terminal_width is None: self._terminal_width = determine_terminal_width() def add_title(self, title): self._current_section.add_title(title) def add_row_header(self, headers): self._current_section.add_header(headers) def add_row(self, row_elements): self._current_section.add_row(row_elements) def new_section(self, title, indent_level=0): self._current_section = Section() self._sections.append(self._current_section) self._current_section.add_title(title) self._current_section.indent_level = indent_level def render(self, stream): max_width = self._calculate_max_width() should_convert_table = self._determine_conversion_needed(max_width) if should_convert_table: convert_to_vertical_table(self._sections) max_width = self._calculate_max_width() stream.write('-' * max_width + '\n') for section in self._sections: self._render_section(section, max_width, stream) def _determine_conversion_needed(self, max_width): # If we don't know the width of the controlling terminal, # then we don't try to resize the table. if max_width > self._terminal_width: return self._auto_reformat def _calculate_max_width(self): max_width = max(s.total_width(padding=4, with_border=True, outer_padding=s.indent_level) for s in self._sections) return max_width def _render_section(self, section, max_width, stream): stream = IndentedStream(stream, section.indent_level, self._styler.style_indentation_char('|'), self._styler.style_indentation_char('|')) max_width -= (section.indent_level * 2) self._render_title(section, max_width, stream) self._render_column_titles(section, max_width, stream) self._render_rows(section, max_width, stream) def _render_title(self, section, max_width, stream): # The title consists of: # title : | This is the title | # bottom_border: ---------------------------- if section.title: title = self._styler.style_title(section.title) stream.write(center_text(title, max_width, '|', '|', get_text_length(section.title)) + '\n') if not section.headers and not section.rows: stream.write('+%s+' % ('-' * (max_width - 2)) + '\n') def _render_column_titles(self, section, max_width, stream): if not section.headers: return # In order to render the column titles we need to know # the width of each of the columns. widths = section.calculate_column_widths(padding=4, max_width=max_width) # TODO: Built a list instead of +=, it's more efficient. current = '' length_so_far = 0 # The first cell needs both left and right edges '| foo |' # while subsequent cells only need right edges ' foo |'. 
first = True for width, header in zip(widths, section.headers): stylized_header = self._styler.style_header_column(header) if first: left_edge = '|' first = False else: left_edge = '' current += center_text(text=stylized_header, length=width, left_edge=left_edge, right_edge='|', text_length=get_text_length(header)) length_so_far += width self._write_line_break(stream, widths) stream.write(current + '\n') def _write_line_break(self, stream, widths): # Write out something like: # +-------+---------+---------+ parts = [] first = True for width in widths: if first: parts.append('+%s+' % ('-' * (width - 2))) first = False else: parts.append('%s+' % ('-' * (width - 1))) parts.append('\n') stream.write(''.join(parts)) def _render_rows(self, section, max_width, stream): if not section.rows: return widths = section.calculate_column_widths(padding=4, max_width=max_width) if not widths: return self._write_line_break(stream, widths) for row in section.rows: # TODO: Built the string in a list then join instead of using +=, # it's more efficient. current = '' length_so_far = 0 first = True for width, element in zip(widths, row): if first: left_edge = '|' first = False else: left_edge = '' stylized = self._styler.style_row_element(element) current += align_left(text=stylized, length=width, left_edge=left_edge, right_edge=self._column_separator, text_length=get_text_length(element)) length_so_far += width stream.write(current + '\n') self._write_line_break(stream, widths) class Section(object): def __init__(self): self.title = '' self.headers = [] self.rows = [] self.indent_level = 0 self._num_cols = None self._max_widths = [] def __repr__(self): return ("Section(title=%s, headers=%s, indent_level=%s, num_rows=%s)" % (self.title, self.headers, self.indent_level, len(self.rows))) def calculate_column_widths(self, padding=0, max_width=None): # postcondition: sum(widths) == max_width unscaled_widths = [w + padding for w in self._max_widths] if max_width is None: return unscaled_widths if not unscaled_widths: return unscaled_widths else: # Compute scale factor for max_width. scale_factor = max_width / float(sum(unscaled_widths)) scaled = [int(round(scale_factor * w)) for w in unscaled_widths] # Once we've scaled the columns, we may be slightly over/under # the amount we need so we have to adjust the columns. off_by = sum(scaled) - max_width while off_by != 0: iter_order = range(len(scaled)) if off_by < 0: iter_order = reversed(iter_order) for i in iter_order: if off_by > 0: scaled[i] -= 1 off_by -= 1 else: scaled[i] += 1 off_by += 1 if off_by == 0: break return scaled def total_width(self, padding=0, with_border=False, outer_padding=0): total = 0 # One char on each side == 2 chars total to the width. 
border_padding = 2 for w in self.calculate_column_widths(): total += w + padding if with_border: total += border_padding total += outer_padding + outer_padding return max(get_text_length(self.title) + border_padding + outer_padding + outer_padding, total) def add_title(self, title): self.title = title def add_header(self, headers): self._update_max_widths(headers) if self._num_cols is None: self._num_cols = len(headers) self.headers = self._format_headers(headers) def _format_headers(self, headers): return headers def add_row(self, row): if self._num_cols is None: self._num_cols = len(row) if len(row) != self._num_cols: raise ValueError("Row should have %s elements, instead " "it has %s" % (self._num_cols, len(row))) row = self._format_row(row) self.rows.append(row) self._update_max_widths(row) def _format_row(self, row): return [six.text_type(r) for r in row] def _update_max_widths(self, row): if not self._max_widths: self._max_widths = [get_text_length(el) for el in row] else: for i, el in enumerate(row): self._max_widths[i] = max(get_text_length(el), self._max_widths[i]) awscli-1.17.14/awscli/help.py0000644000000000000000000003244513620325556015710 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging import os import sys import platform import shlex from subprocess import Popen, PIPE from docutils.core import publish_string from docutils.writers import manpage from botocore.docs.bcdoc import docevents from botocore.docs.bcdoc.restdoc import ReSTDocument from botocore.docs.bcdoc.textwriter import TextWriter from awscli.clidocs import ProviderDocumentEventHandler from awscli.clidocs import ServiceDocumentEventHandler from awscli.clidocs import OperationDocumentEventHandler from awscli.clidocs import TopicListerDocumentEventHandler from awscli.clidocs import TopicDocumentEventHandler from awscli.argprocess import ParamShorthandParser from awscli.argparser import ArgTableArgParser from awscli.topictags import TopicTagDB from awscli.utils import ignore_ctrl_c LOG = logging.getLogger('awscli.help') class ExecutableNotFoundError(Exception): def __init__(self, executable_name): super(ExecutableNotFoundError, self).__init__( 'Could not find executable named "%s"' % executable_name) def get_renderer(): """ Return the appropriate HelpRenderer implementation for the current platform. """ if platform.system() == 'Windows': return WindowsHelpRenderer() else: return PosixHelpRenderer() class PagingHelpRenderer(object): """ Interface for a help renderer. The renderer is responsible for displaying the help content on a particular platform. """ def __init__(self, output_stream=sys.stdout): self.output_stream = output_stream PAGER = None def get_pager_cmdline(self): pager = self.PAGER if 'MANPAGER' in os.environ: pager = os.environ['MANPAGER'] elif 'PAGER' in os.environ: pager = os.environ['PAGER'] return shlex.split(pager) def render(self, contents): """ Each implementation of HelpRenderer must implement this render method. 
""" converted_content = self._convert_doc_content(contents) self._send_output_to_pager(converted_content) def _send_output_to_pager(self, output): cmdline = self.get_pager_cmdline() LOG.debug("Running command: %s", cmdline) p = self._popen(cmdline, stdin=PIPE) p.communicate(input=output) def _popen(self, *args, **kwargs): return Popen(*args, **kwargs) def _convert_doc_content(self, contents): return contents class PosixHelpRenderer(PagingHelpRenderer): """ Render help content on a Posix-like system. This includes Linux and MacOS X. """ PAGER = 'less -R' def _convert_doc_content(self, contents): man_contents = publish_string(contents, writer=manpage.Writer()) if not self._exists_on_path('groff'): raise ExecutableNotFoundError('groff') cmdline = ['groff', '-m', 'man', '-T', 'ascii'] LOG.debug("Running command: %s", cmdline) p3 = self._popen(cmdline, stdin=PIPE, stdout=PIPE, stderr=PIPE) groff_output = p3.communicate(input=man_contents)[0] return groff_output def _send_output_to_pager(self, output): cmdline = self.get_pager_cmdline() if not self._exists_on_path(cmdline[0]): LOG.debug("Pager '%s' not found in PATH, printing raw help." % cmdline[0]) self.output_stream.write(output.decode('utf-8') + "\n") self.output_stream.flush() return LOG.debug("Running command: %s", cmdline) with ignore_ctrl_c(): # We can't rely on the KeyboardInterrupt from # the CLIDriver being caught because when we # send the output to a pager it will use various # control characters that need to be cleaned # up gracefully. Otherwise if we simply catch # the Ctrl-C and exit, it will likely leave the # users terminals in a bad state and they'll need # to manually run ``reset`` to fix this issue. # Ignoring Ctrl-C solves this issue. It's also # the default behavior of less (you can't ctrl-c # out of a manpage). p = self._popen(cmdline, stdin=PIPE) p.communicate(input=output) def _exists_on_path(self, name): # Since we're only dealing with POSIX systems, we can # ignore things like PATHEXT. return any([os.path.exists(os.path.join(p, name)) for p in os.environ.get('PATH', '').split(os.pathsep)]) class WindowsHelpRenderer(PagingHelpRenderer): """Render help content on a Windows platform.""" PAGER = 'more' def _convert_doc_content(self, contents): text_output = publish_string(contents, writer=TextWriter()) return text_output def _popen(self, *args, **kwargs): # Also set the shell value to True. To get any of the # piping to a pager to work, we need to use shell=True. kwargs['shell'] = True return Popen(*args, **kwargs) class HelpCommand(object): """ HelpCommand Interface --------------------- A HelpCommand object acts as the interface between objects in the CLI (e.g. Providers, Services, Operations, etc.) and the documentation system (bcdoc). A HelpCommand object wraps the object from the CLI space and provides a consistent interface to critical information needed by the documentation pipeline such as the object's name, description, etc. The HelpCommand object is passed to the component of the documentation pipeline that fires documentation events. It is then passed on to each document event handler that has registered for the events. All HelpCommand objects contain the following attributes: + ``session`` - A ``botocore`` ``Session`` object. + ``obj`` - The object that is being documented. + ``command_table`` - A dict mapping command names to callable objects. + ``arg_table`` - A dict mapping argument names to callable objects. + ``doc`` - A ``Document`` object that is used to collect the generated documentation. 
In addition, please note the `properties` defined below which are required to allow the object to be used in the document pipeline. Implementations of HelpCommand are provided here for Provider, Service and Operation objects. Other implementations for other types of objects might be needed for customization in plugins. As long as the implementations conform to this basic interface it should be possible to pass them to the documentation system and generate interactive and static help files. """ EventHandlerClass = None """ Each subclass should define this class variable to point to the EventHandler class used by this HelpCommand. """ def __init__(self, session, obj, command_table, arg_table): self.session = session self.obj = obj if command_table is None: command_table = {} self.command_table = command_table if arg_table is None: arg_table = {} self.arg_table = arg_table self._subcommand_table = {} self._related_items = [] self.renderer = get_renderer() self.doc = ReSTDocument(target='man') @property def event_class(self): """ Return the ``event_class`` for this object. The ``event_class`` is used by the documentation pipeline when generating documentation events. For the event below:: doc-title.. The document pipeline would use this property to determine the ``event_class`` value. """ pass @property def name(self): """ Return the name of the wrapped object. This would be called by the document pipeline to determine the ``name`` to be inserted into the event, as shown above. """ pass @property def subcommand_table(self): """These are the commands that may follow after the help command""" return self._subcommand_table @property def related_items(self): """This is list of items that are related to the help command""" return self._related_items def __call__(self, args, parsed_globals): if args: subcommand_parser = ArgTableArgParser({}, self.subcommand_table) parsed, remaining = subcommand_parser.parse_known_args(args) if getattr(parsed, 'subcommand', None) is not None: return self.subcommand_table[parsed.subcommand](remaining, parsed_globals) # Create an event handler for a Provider Document instance = self.EventHandlerClass(self) # Now generate all of the events for a Provider document. # We pass ourselves along so that we can, in turn, get passed # to all event handlers. docevents.generate_events(self.session, self) self.renderer.render(self.doc.getvalue()) instance.unregister() class ProviderHelpCommand(HelpCommand): """Implements top level help command. This is what is called when ``aws help`` is run. 
""" EventHandlerClass = ProviderDocumentEventHandler def __init__(self, session, command_table, arg_table, description, synopsis, usage): HelpCommand.__init__(self, session, None, command_table, arg_table) self.description = description self.synopsis = synopsis self.help_usage = usage self._subcommand_table = None self._topic_tag_db = None self._related_items = ['aws help topics'] @property def event_class(self): return 'aws' @property def name(self): return 'aws' @property def subcommand_table(self): if self._subcommand_table is None: if self._topic_tag_db is None: self._topic_tag_db = TopicTagDB() self._topic_tag_db.load_json_index() self._subcommand_table = self._create_subcommand_table() return self._subcommand_table def _create_subcommand_table(self): subcommand_table = {} # Add the ``aws help topics`` command to the ``topic_table`` topic_lister_command = TopicListerCommand(self.session) subcommand_table['topics'] = topic_lister_command topic_names = self._topic_tag_db.get_all_topic_names() # Add all of the possible topics to the ``topic_table`` for topic_name in topic_names: topic_help_command = TopicHelpCommand(self.session, topic_name) subcommand_table[topic_name] = topic_help_command return subcommand_table class ServiceHelpCommand(HelpCommand): """Implements service level help. This is the object invoked whenever a service command help is implemented, e.g. ``aws ec2 help``. """ EventHandlerClass = ServiceDocumentEventHandler def __init__(self, session, obj, command_table, arg_table, name, event_class): super(ServiceHelpCommand, self).__init__(session, obj, command_table, arg_table) self._name = name self._event_class = event_class @property def event_class(self): return self._event_class @property def name(self): return self._name class OperationHelpCommand(HelpCommand): """Implements operation level help. This is the object invoked whenever help for a service is requested, e.g. ``aws ec2 describe-instances help``. """ EventHandlerClass = OperationDocumentEventHandler def __init__(self, session, operation_model, arg_table, name, event_class): HelpCommand.__init__(self, session, operation_model, None, arg_table) self.param_shorthand = ParamShorthandParser() self._name = name self._event_class = event_class @property def event_class(self): return self._event_class @property def name(self): return self._name class TopicListerCommand(HelpCommand): EventHandlerClass = TopicListerDocumentEventHandler def __init__(self, session): super(TopicListerCommand, self).__init__(session, None, {}, {}) @property def event_class(self): return 'topics' @property def name(self): return 'topics' class TopicHelpCommand(HelpCommand): EventHandlerClass = TopicDocumentEventHandler def __init__(self, session, topic_name): super(TopicHelpCommand, self).__init__(session, None, {}, {}) self._topic_name = topic_name @property def event_class(self): return 'topics.' + self.name @property def name(self): return self._topic_name awscli-1.17.14/awscli/__main__.py0000644000000000000000000000122713620325554016470 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
See the License for the specific # language governing permissions and limitations under the License. import sys from awscli.clidriver import main if __name__ == "__main__": sys.exit(main()) awscli-1.17.14/awscli/utils.py0000644000000000000000000001423013620325556016110 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import csv import signal import datetime import contextlib import os import sys import subprocess from awscli.compat import six from awscli.compat import get_binary_stdout from awscli.compat import get_popen_kwargs_for_pager_cmd def split_on_commas(value): if not any(char in value for char in ['"', '\\', "'", ']', '[']): # No quotes or escaping, just use a simple split. return value.split(',') elif not any(char in value for char in ['"', "'", '[', ']']): # Simple escaping, let the csv module handle it. return list(csv.reader(six.StringIO(value), escapechar='\\'))[0] else: # If there's quotes for the values, we have to handle this # ourselves. return _split_with_quotes(value) def _split_with_quotes(value): try: parts = list(csv.reader(six.StringIO(value), escapechar='\\'))[0] except csv.Error: raise ValueError("Bad csv value: %s" % value) iter_parts = iter(parts) new_parts = [] for part in iter_parts: # Find the first quote quote_char = _find_quote_char_in_part(part) # Find an opening list bracket list_start = part.find('=[') if list_start >= 0 and value.find(']') != -1 and \ (quote_char is None or part.find(quote_char) > list_start): # This is a list, eat all the items until the end if ']' in part: # Short circuit for only one item new_chunk = part else: new_chunk = _eat_items(value, iter_parts, part, ']') list_items = _split_with_quotes(new_chunk[list_start + 2:-1]) new_chunk = new_chunk[:list_start + 1] + ','.join(list_items) new_parts.append(new_chunk) continue elif quote_char is None: new_parts.append(part) continue elif part.count(quote_char) == 2: # Starting and ending quote are in this part. # While it's not needed right now, this will # break down if we ever need to escape quotes while # quoting a value. new_parts.append(part.replace(quote_char, '')) continue # Now that we've found a starting quote char, we # need to combine the parts until we encounter an end quote. new_chunk = _eat_items(value, iter_parts, part, quote_char, quote_char) new_parts.append(new_chunk) return new_parts def _eat_items(value, iter_parts, part, end_char, replace_char=''): """ Eat items from an iterator, optionally replacing characters with a blank and stopping when the end_char has been reached. 
""" current = part chunks = [current.replace(replace_char, '')] while True: try: current = six.advance_iterator(iter_parts) except StopIteration: raise ValueError(value) chunks.append(current.replace(replace_char, '')) if current.endswith(end_char): break return ','.join(chunks) def _find_quote_char_in_part(part): if '"' not in part and "'" not in part: return quote_char = None double_quote = part.find('"') single_quote = part.find("'") if double_quote >= 0 and single_quote == -1: quote_char = '"' elif single_quote >= 0 and double_quote == -1: quote_char = "'" elif double_quote < single_quote: quote_char = '"' elif single_quote < double_quote: quote_char = "'" return quote_char def find_service_and_method_in_event_name(event_name): """ Grabs the service id and the operation name from an event name. This is making the assumption that the event name is in the form event.service.operation. """ split_event = event_name.split('.')[1:] service_name = None if len(split_event) > 0: service_name = split_event[0] operation_name = None if len(split_event) > 1: operation_name = split_event[1] return service_name, operation_name def json_encoder(obj): """JSON encoder that formats datetimes as ISO8601 format.""" if isinstance(obj, datetime.datetime): return obj.isoformat() else: return obj @contextlib.contextmanager def ignore_ctrl_c(): original = signal.signal(signal.SIGINT, signal.SIG_IGN) try: yield finally: signal.signal(signal.SIGINT, original) def emit_top_level_args_parsed_event(session, args): session.emit( 'top-level-args-parsed', parsed_args=args, session=session) def is_a_tty(): try: return os.isatty(sys.stdout.fileno()) except Exception as e: return False class OutputStreamFactory(object): def __init__(self, popen=None): self._popen = popen if popen is None: self._popen = subprocess.Popen @contextlib.contextmanager def get_pager_stream(self, preferred_pager=None): popen_kwargs = self._get_process_pager_kwargs(preferred_pager) try: process = self._popen(**popen_kwargs) yield process.stdin except IOError: # Ignore IOError since this can commonly be raised when a pager # is closed abruptly and causes a broken pipe. pass finally: process.communicate() @contextlib.contextmanager def get_stdout_stream(self): yield get_binary_stdout() def _get_process_pager_kwargs(self, pager_cmd): kwargs = get_popen_kwargs_for_pager_cmd(pager_cmd) kwargs['stdin'] = subprocess.PIPE return kwargs def write_exception(ex, outfile): outfile.write("\n") outfile.write(six.text_type(ex)) outfile.write("\n") awscli-1.17.14/awscli/commands.py0000644000000000000000000000407213620325554016552 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. class CLICommand(object): """Interface for a CLI command. This class represents a top level CLI command (``aws ec2``, ``aws s3``, ``aws config``). """ @property def name(self): # Subclasses must implement a name. raise NotImplementedError("name") @name.setter def name(self, value): # Subclasses must implement setting/changing the cmd name. 
raise NotImplementedError("name") @property def lineage(self): # Represents how to get to a specific command using the CLI. # It includes all commands that came before it and itself in # a list. return [self] @property def lineage_names(self): # Represents the lineage of a command in terms of command ``name`` return [cmd.name for cmd in self.lineage] def __call__(self, args, parsed_globals): """Invoke CLI operation. :type args: str :param args: The remaining command line args. :type parsed_globals: ``argparse.Namespace`` :param parsed_globals: The parsed arguments so far. :rtype: int :return: The return code of the operation. This will be used as the RC code for the ``aws`` process. """ # Subclasses are expected to implement this method. pass def create_help_command(self): # Subclasses are expected to implement this method if they want # help docs. return None @property def arg_table(self): return {} awscli-1.17.14/awscli/plugin.py0000644000000000000000000000433113620325556016247 0ustar rootroot00000000000000# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging from botocore.hooks import HierarchicalEmitter log = logging.getLogger('awscli.plugin') BUILTIN_PLUGINS = {'__builtin__': 'awscli.handlers'} def load_plugins(plugin_mapping, event_hooks=None, include_builtins=True): """ :type plugin_mapping: dict :param plugin_mapping: A dict of plugin name to import path, e.g. ``{"plugingName": "package.modulefoo"}``. :type event_hooks: ``EventHooks`` :param event_hooks: Event hook emitter. If one if not provided, an emitter will be created and returned. Otherwise, the passed in ``event_hooks`` will be used to initialize plugins. :type include_builtins: bool :param include_builtins: If True, the builtin awscli plugins (specified in ``BUILTIN_PLUGINS``) will be included in the list of plugins to load. :rtype: HierarchicalEmitter :return: An event emitter object. """ if include_builtins: plugin_mapping.update(BUILTIN_PLUGINS) modules = _import_plugins(plugin_mapping) if event_hooks is None: event_hooks = HierarchicalEmitter() for name, plugin in zip(plugin_mapping.keys(), modules): log.debug("Initializing plugin %s: %s", name, plugin) plugin.awscli_initialize(event_hooks) return event_hooks def _import_plugins(plugin_names): plugins = [] for name, path in plugin_names.items(): log.debug("Importing plugin %s: %s", name, path) if '.' not in path: plugins.append(__import__(path)) else: package, module = path.rsplit('.', 1) module = __import__(path, fromlist=[module]) plugins.append(module) return plugins awscli-1.17.14/awscli/formatter.py0000644000000000000000000002577313620325556016771 0ustar rootroot00000000000000# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. 
This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging from botocore.compat import json from botocore.utils import set_value_from_jmespath from botocore.paginate import PageIterator from awscli.table import MultiTable, Styler, ColorizedStyler from awscli import text from awscli import compat from awscli.utils import json_encoder LOG = logging.getLogger(__name__) def is_response_paginated(response): return isinstance(response, PageIterator) class Formatter(object): def __init__(self, args): self._args = args def _remove_request_id(self, response_data): # We only want to display the ResponseMetadata (which includes # the request id) if there is an error in the response. # Since all errors have been unified under the Errors key, # this should be a reasonable way to filter. if 'Errors' not in response_data: if 'ResponseMetadata' in response_data: if 'RequestId' in response_data['ResponseMetadata']: request_id = response_data['ResponseMetadata']['RequestId'] LOG.debug('RequestId: %s', request_id) del response_data['ResponseMetadata'] def _get_default_stream(self): return compat.get_stdout_text_writer() def _flush_stream(self, stream): try: stream.flush() except IOError: pass class FullyBufferedFormatter(Formatter): def __call__(self, command_name, response, stream=None): if stream is None: # Retrieve stdout on invocation instead of at import time # so that if anything wraps stdout we'll pick up those changes # (specifically colorama on windows wraps stdout). stream = self._get_default_stream() # I think the interfaces between non-paginated # and paginated responses can still be cleaned up. if is_response_paginated(response): response_data = response.build_full_result() else: response_data = response self._remove_request_id(response_data) if self._args.query is not None: response_data = self._args.query.search(response_data) try: self._format_response(command_name, response_data, stream) except IOError as e: # If the reading end of our stdout stream has closed the file # we can just exit. pass finally: # flush is needed to avoid the "close failed in file object # destructor" in python2.x (see http://bugs.python.org/issue11380). self._flush_stream(stream) class JSONFormatter(FullyBufferedFormatter): def _format_response(self, command_name, response, stream): # For operations that have no response body (e.g. s3 put-object) # the response will be an empty string. We don't want to print # that out to the user but other "falsey" values like an empty # dictionary should be printed. if response != {}: json.dump(response, stream, indent=4, default=json_encoder, ensure_ascii=False) stream.write('\n') class TableFormatter(FullyBufferedFormatter): """Pretty print a table from a given response. The table formatter is able to take any generic response and generate a pretty printed table. It does this without using the output definition from the model. 
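    Scalar keys are rendered as column headers with their values as rows,
    while nested lists and dicts are rendered recursively as indented
    sub-sections.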
""" def __init__(self, args, table=None): super(TableFormatter, self).__init__(args) if args.color == 'auto': self.table = MultiTable(initial_section=False, column_separator='|') elif args.color == 'off': styler = Styler() self.table = MultiTable(initial_section=False, column_separator='|', styler=styler) elif args.color == 'on': styler = ColorizedStyler() self.table = MultiTable(initial_section=False, column_separator='|', styler=styler) else: raise ValueError("Unknown color option: %s" % args.color) def _format_response(self, command_name, response, stream): if self._build_table(command_name, response): try: self.table.render(stream) except IOError: # If they're piping stdout to another process which exits before # we're done writing all of our output, we'll get an error about a # closed pipe which we can safely ignore. pass def _build_table(self, title, current, indent_level=0): if not current: return False if title is not None: self.table.new_section(title, indent_level=indent_level) if isinstance(current, list): if isinstance(current[0], dict): self._build_sub_table_from_list(current, indent_level, title) else: for item in current: if self._scalar_type(item): self.table.add_row([item]) elif all(self._scalar_type(el) for el in item): self.table.add_row(item) else: self._build_table(title=None, current=item) if isinstance(current, dict): # Render a single row section with keys as header # and the row as the values, unless the value # is a list. self._build_sub_table_from_dict(current, indent_level) return True def _build_sub_table_from_dict(self, current, indent_level): # Render a single row section with keys as header # and the row as the values, unless the value # is a list. headers, more = self._group_scalar_keys(current) if len(headers) == 1: # Special casing if a dict has a single scalar key/value pair. self.table.add_row([headers[0], current[headers[0]]]) elif headers: self.table.add_row_header(headers) self.table.add_row([current[k] for k in headers]) for remaining in more: self._build_table(remaining, current[remaining], indent_level=indent_level + 1) def _build_sub_table_from_list(self, current, indent_level, title): headers, more = self._group_scalar_keys_from_list(current) self.table.add_row_header(headers) first = True for element in current: if not first and more: self.table.new_section(title, indent_level=indent_level) self.table.add_row_header(headers) first = False # Use .get() to account for the fact that sometimes an element # may not have all the keys from the header. self.table.add_row([element.get(header, '') for header in headers]) for remaining in more: # Some of the non scalar attributes may not necessarily # be in every single element of the list, so we need to # check this condition before recursing. if remaining in element: self._build_table(remaining, element[remaining], indent_level=indent_level + 1) def _scalar_type(self, element): return not isinstance(element, (list, dict)) def _group_scalar_keys_from_list(self, list_of_dicts): # We want to make sure we catch all the keys in the list of dicts. # Most of the time each list element has the same keys, but sometimes # a list element will have keys not defined in other elements. 
headers = set() more = set() for item in list_of_dicts: current_headers, current_more = self._group_scalar_keys(item) headers.update(current_headers) more.update(current_more) headers = list(sorted(headers)) more = list(sorted(more)) return headers, more def _group_scalar_keys(self, current): # Given a dict, separate the keys into those whose values are # scalar, and those whose values aren't. Return two lists, # one is the scalar value keys, the second is the remaining keys. more = [] headers = [] for element in current: if self._scalar_type(current[element]): headers.append(element) else: more.append(element) headers.sort() more.sort() return headers, more class TextFormatter(Formatter): def __call__(self, command_name, response, stream=None): if stream is None: stream = self._get_default_stream() try: if is_response_paginated(response): result_keys = response.result_keys for i, page in enumerate(response): if i > 0: current = {} else: current = response.non_aggregate_part for result_key in result_keys: data = result_key.search(page) set_value_from_jmespath( current, result_key.expression, data ) self._format_response(current, stream) if response.resume_token: # Tell the user about the next token so they can continue # if they want. self._format_response( {'NextToken': {'NextToken': response.resume_token}}, stream) else: self._remove_request_id(response) self._format_response(response, stream) finally: # flush is needed to avoid the "close failed in file object # destructor" in python2.x (see http://bugs.python.org/issue11380). self._flush_stream(stream) def _format_response(self, response, stream): if self._args.query is not None: expression = self._args.query response = expression.search(response) text.format_text(response, stream) def get_formatter(format_type, args): if format_type == 'json': return JSONFormatter(args) elif format_type == 'text': return TextFormatter(args) elif format_type == 'table': return TableFormatter(args) raise ValueError("Unknown output type: %s" % format_type) awscli-1.17.14/awscli/text.py0000644000000000000000000001027413620325556015740 0ustar rootroot00000000000000# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.compat import six def format_text(data, stream): _format_text(data, stream) def _format_text(item, stream, identifier=None, scalar_keys=None): if isinstance(item, dict): _format_dict(scalar_keys, item, identifier, stream) elif isinstance(item, list): _format_list(item, identifier, stream) else: # If it's not a list or a dict, we just write the scalar # value out directly. 
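        # e.g. a bare string or number becomes a single line of output.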
stream.write(six.text_type(item)) stream.write('\n') def _format_list(item, identifier, stream): if not item: return if any(isinstance(el, dict) for el in item): all_keys = _all_scalar_keys(item) for element in item: _format_text(element, stream=stream, identifier=identifier, scalar_keys=all_keys) elif any(isinstance(el, list) for el in item): scalar_elements, non_scalars = _partition_list(item) if scalar_elements: _format_scalar_list(scalar_elements, identifier, stream) for non_scalar in non_scalars: _format_text(non_scalar, stream=stream, identifier=identifier) else: _format_scalar_list(item, identifier, stream) def _partition_list(item): scalars = [] non_scalars = [] for element in item: if isinstance(element, (list, dict)): non_scalars.append(element) else: scalars.append(element) return scalars, non_scalars def _format_scalar_list(elements, identifier, stream): if identifier is not None: for item in elements: stream.write('%s\t%s\n' % (identifier.upper(), item)) else: # For a bare list, just print the contents. stream.write('\t'.join([six.text_type(item) for item in elements])) stream.write('\n') def _format_dict(scalar_keys, item, identifier, stream): scalars, non_scalars = _partition_dict(item, scalar_keys=scalar_keys) if scalars: if identifier is not None: scalars.insert(0, identifier.upper()) stream.write('\t'.join(scalars)) stream.write('\n') for new_identifier, non_scalar in non_scalars: _format_text(item=non_scalar, stream=stream, identifier=new_identifier) def _all_scalar_keys(list_of_dicts): keys_seen = set() for item_dict in list_of_dicts: for key, value in item_dict.items(): if not isinstance(value, (dict, list)): keys_seen.add(key) return list(sorted(keys_seen)) def _partition_dict(item_dict, scalar_keys): # Given a dictionary, partition it into two list based on the # values associated with the keys. # {'foo': 'scalar', 'bar': 'scalar', 'baz': ['not, 'scalar']} # scalar = [('foo', 'scalar'), ('bar', 'scalar')] # non_scalar = [('baz', ['not', 'scalar'])] scalar = [] non_scalar = [] if scalar_keys is None: # scalar_keys can have more than just the keys in the item_dict, # but if user does not provide scalar_keys, we'll grab the keys # from the current item_dict for key, value in sorted(item_dict.items()): if isinstance(value, (dict, list)): non_scalar.append((key, value)) else: scalar.append(six.text_type(value)) else: for key in scalar_keys: scalar.append(six.text_type(item_dict.get(key, ''))) remaining_keys = sorted(set(item_dict.keys()) - set(scalar_keys)) for remaining_key in remaining_keys: non_scalar.append((remaining_key, item_dict[remaining_key])) return scalar, non_scalar awscli-1.17.14/awscli/compat.py0000644000000000000000000004460413620325630016234 0ustar rootroot00000000000000# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # http://aws.amazon.com/apache2.0/ # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
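# This module collects the Python 2/3 compatibility shims used across the
# CLI (binary stdin/stdout access, text writers, shell quoting, pager
# invocation), so other modules can import from awscli.compat instead of
# branching on the interpreter version themselves.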
import sys import re import shlex import os import os.path import platform import zipfile import signal import contextlib from botocore.compat import six #import botocore.compat from botocore.compat import OrderedDict # If you ever want to import from the vendored six. Add it here and then # import from awscli.compat. Also try to keep it in alphabetical order. # This may get large. advance_iterator = six.advance_iterator PY3 = six.PY3 queue = six.moves.queue shlex_quote = six.moves.shlex_quote StringIO = six.StringIO BytesIO = six.BytesIO urlopen = six.moves.urllib.request.urlopen binary_type = six.binary_type # Most, but not all, python installations will have zlib. This is required to # compress any files we send via a push. If we can't compress, we can still # package the files in a zip container. try: import zlib ZIP_COMPRESSION_MODE = zipfile.ZIP_DEFLATED except ImportError: ZIP_COMPRESSION_MODE = zipfile.ZIP_STORED try: import sqlite3 except ImportError: sqlite3 = None is_windows = sys.platform == 'win32' if is_windows: default_pager = 'more' else: default_pager = 'less -R' class StdinMissingError(Exception): def __init__(self): message = ( 'stdin is required for this operation, but is not available.' ) super(StdinMissingError, self).__init__(message) class NonTranslatedStdout(object): """ This context manager sets the line-end translation mode for stdout. It is deliberately set to binary mode so that `\r` does not get added to the line ending. This can be useful when printing commands where a windows style line ending would casuse errors. """ def __enter__(self): if sys.platform == "win32": import msvcrt self.previous_mode = msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) return sys.stdout def __exit__(self, type, value, traceback): if sys.platform == "win32": import msvcrt msvcrt.setmode(sys.stdout.fileno(), self.previous_mode) def ensure_text_type(s): if isinstance(s, six.text_type): return s if isinstance(s, six.binary_type): return s.decode('utf-8') raise ValueError("Expected str, unicode or bytes, received %s." % type(s)) if six.PY3: import collections.abc as collections_abc import locale import urllib.parse as urlparse from urllib.error import URLError raw_input = input def get_binary_stdin(): if sys.stdin is None: raise StdinMissingError() return sys.stdin.buffer def get_binary_stdout(): return sys.stdout.buffer def _get_text_writer(stream, errors): return stream def compat_open(filename, mode='r', encoding=None): """Back-port open() that accepts an encoding argument. In python3 this uses the built in open() and in python2 this uses the io.open() function. If the file is not being opened in binary mode, then we'll use locale.getpreferredencoding() to find the preferred encoding. """ if 'b' not in mode: encoding = locale.getpreferredencoding() return open(filename, mode, encoding=encoding) def bytes_print(statement, stdout=None): """ This function is used to write raw bytes to stdout. """ if stdout is None: stdout = sys.stdout if getattr(stdout, 'buffer', None): stdout.buffer.write(statement) else: # If it is not possible to write to the standard out buffer. # The next best option is to decode and write to standard out. 
stdout.write(statement.decode('utf-8')) else: import codecs import collections as collections_abc import locale import io import urlparse from urllib2 import URLError raw_input = raw_input def get_binary_stdin(): if sys.stdin is None: raise StdinMissingError() return sys.stdin def get_binary_stdout(): return sys.stdout def _get_text_writer(stream, errors): # In python3, all the sys.stdout/sys.stderr streams are in text # mode. This means they expect unicode, and will encode the # unicode automatically before actually writing to stdout/stderr. # In python2, that's not the case. In order to provide a consistent # interface, we can create a wrapper around sys.stdout that will take # unicode, and automatically encode it to the preferred encoding. # That way consumers can just call get_text_writer(stream) and write # unicode to the returned stream. Note that get_text_writer # just returns the stream in the PY3 section above because python3 # handles this. # We're going to use the preferred encoding, but in cases that there is # no preferred encoding we're going to fall back to assuming ASCII is # what we should use. This will currently break the use of # PYTHONIOENCODING, which would require checking stream.encoding first, # however, the existing behavior is to only use # locale.getpreferredencoding() and so in the hope of not breaking what # is currently working, we will continue to only use that. encoding = locale.getpreferredencoding() if encoding is None: encoding = "ascii" return codecs.getwriter(encoding)(stream, errors) def compat_open(filename, mode='r', encoding=None): # See docstring for compat_open in the PY3 section above. if 'b' not in mode: encoding = locale.getpreferredencoding() return io.open(filename, mode, encoding=encoding) def bytes_print(statement, stdout=None): if stdout is None: stdout = sys.stdout stdout.write(statement) def get_stdout_text_writer(): return _get_text_writer(sys.stdout, errors="strict") def get_stderr_text_writer(): return _get_text_writer(sys.stderr, errors="replace") def compat_input(prompt): """ Cygwin's pty's are based on pipes. Therefore, when it interacts with a Win32 program (such as Win32 python), what that program sees is a pipe instead of a console. This is important because python buffers pipes, and so on a pty-based terminal, text will not necessarily appear immediately. In most cases, this isn't a big deal. But when we're doing an interactive prompt, the result is that the prompts won't display until we fill the buffer. Since raw_input does not flush the prompt, we need to manually write and flush it. See https://github.com/mintty/mintty/issues/56 for more details. """ sys.stdout.write(prompt) sys.stdout.flush() return raw_input() def compat_shell_quote(s, platform=None): """Return a shell-escaped version of the string *s* Unfortunately `shlex.quote` doesn't support Windows, so this method provides that functionality. """ if platform is None: platform = sys.platform if platform == "win32": return _windows_shell_quote(s) else: return shlex_quote(s) def _windows_shell_quote(s): """Return a Windows shell-escaped version of the string *s* Windows has potentially bizarre rules depending on where you look. 
When spawning a process via the Windows C runtime the rules are as follows: https://docs.microsoft.com/en-us/cpp/cpp/parsing-cpp-command-line-arguments To summarize the relevant bits: * Only space and tab are valid delimiters * Double quotes are the only valid quotes * Backslash is interpreted literally unless it is part of a chain that leads up to a double quote. Then the backslashes escape the backslashes, and if there is an odd number the final backslash escapes the quote. :param s: A string to escape :return: An escaped string """ if not s: return '""' buff = [] num_backspaces = 0 for character in s: if character == '\\': # We can't simply append backslashes because we don't know if # they will need to be escaped. Instead we separately keep track # of how many we've seen. num_backspaces += 1 elif character == '"': if num_backspaces > 0: # The backslashes are part of a chain that lead up to a # double quote, so they need to be escaped. buff.append('\\' * (num_backspaces * 2)) num_backspaces = 0 # The double quote also needs to be escaped. The fact that we're # seeing it at all means that it must have been escaped in the # original source. buff.append('\\"') else: if num_backspaces > 0: # The backslashes aren't part of a chain leading up to a # double quote, so they can be inserted directly without # being escaped. buff.append('\\' * num_backspaces) num_backspaces = 0 buff.append(character) # There may be some leftover backspaces if they were on the trailing # end, so they're added back in here. if num_backspaces > 0: buff.append('\\' * num_backspaces) new_s = ''.join(buff) if ' ' in new_s or '\t' in new_s: # If there are any spaces or tabs then the string needs to be double # quoted. return '"%s"' % new_s return new_s def get_popen_kwargs_for_pager_cmd(pager_cmd=None): """Returns the default pager to use dependent on platform :rtype: str :returns: A string represent the paging command to run based on the platform being used. """ popen_kwargs = {} if pager_cmd is None: pager_cmd = default_pager # Similar to what we do with the help command, we need to specify # shell as True to make it work in the pager for Windows if is_windows: popen_kwargs = {'shell': True} else: pager_cmd = shlex.split(pager_cmd) popen_kwargs['args'] = pager_cmd return popen_kwargs @contextlib.contextmanager def ignore_user_entered_signals(): """ Ignores user entered signals to avoid process getting killed. """ if is_windows: signal_list = [signal.SIGINT] else: signal_list = [signal.SIGINT, signal.SIGQUIT, signal.SIGTSTP] actual_signals = [] for user_signal in signal_list: actual_signals.append(signal.signal(user_signal, signal.SIG_IGN)) try: yield finally: for sig, user_signal in enumerate(signal_list): signal.signal(user_signal, actual_signals[sig]) # linux_distribution is used by the CodeDeploy customization. Python 3.8 # removed it from the stdlib, so it is vendored here in the case where the # import fails. try: from platform import linux_distribution except ImportError: _UNIXCONFDIR = '/etc' def _dist_try_harder(distname, version, id): """ Tries some special tricks to get the distribution information in case the default method fails. Currently supports older SuSE Linux, Caldera OpenLinux and Slackware Linux distributions. 
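        Returns a (distname, version, id) tuple; when no distribution can be
        detected, the values passed in by the caller are returned unchanged.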
""" if os.path.exists('/var/adm/inst-log/info'): # SuSE Linux stores distribution information in that file distname = 'SuSE' with open('/var/adm/inst-log/info') as f: for line in f: tv = line.split() if len(tv) == 2: tag, value = tv else: continue if tag == 'MIN_DIST_VERSION': version = value.strip() elif tag == 'DIST_IDENT': values = value.split('-') id = values[2] return distname, version, id if os.path.exists('/etc/.installed'): # Caldera OpenLinux has some infos in that file (thanks to Colin Kong) with open('/etc/.installed') as f: for line in f: pkg = line.split('-') if len(pkg) >= 2 and pkg[0] == 'OpenLinux': # XXX does Caldera support non Intel platforms ? If yes, # where can we find the needed id ? return 'OpenLinux', pkg[1], id if os.path.isdir('/usr/lib/setup'): # Check for slackware version tag file (thanks to Greg Andruk) verfiles = os.listdir('/usr/lib/setup') for n in range(len(verfiles)-1, -1, -1): if verfiles[n][:14] != 'slack-version-': del verfiles[n] if verfiles: verfiles.sort() distname = 'slackware' version = verfiles[-1][14:] return distname, version, id return distname, version, id _release_filename = re.compile(r'(\w+)[-_](release|version)', re.ASCII) _lsb_release_version = re.compile(r'(.+)' r' release ' r'([\d.]+)' r'[^(]*(?:\((.+)\))?', re.ASCII) _release_version = re.compile(r'([^0-9]+)' r'(?: release )?' r'([\d.]+)' r'[^(]*(?:\((.+)\))?', re.ASCII) # See also http://www.novell.com/coolsolutions/feature/11251.html # and http://linuxmafia.com/faq/Admin/release-files.html # and http://data.linux-ntfs.org/rpm/whichrpm # and http://www.die.net/doc/linux/man/man1/lsb_release.1.html _supported_dists = ( 'SuSE', 'debian', 'fedora', 'redhat', 'centos', 'mandrake', 'mandriva', 'rocks', 'slackware', 'yellowdog', 'gentoo', 'UnitedLinux', 'turbolinux', 'arch', 'mageia') def _parse_release_file(firstline): # Default to empty 'version' and 'id' strings. Both defaults are used # when 'firstline' is empty. 'id' defaults to empty when an id can not # be deduced. version = '' id = '' # Parse the first line m = _lsb_release_version.match(firstline) if m is not None: # LSB format: "distro release x.x (codename)" return tuple(m.groups()) # Pre-LSB format: "distro x.x (codename)" m = _release_version.match(firstline) if m is not None: return tuple(m.groups()) # Unknown format... take the first two words l = firstline.strip().split() if l: version = l[0] if len(l) > 1: id = l[1] return '', version, id _distributor_id_file_re = re.compile("(?:DISTRIB_ID\s*=)\s*(.*)", re.I) _release_file_re = re.compile("(?:DISTRIB_RELEASE\s*=)\s*(.*)", re.I) _codename_file_re = re.compile("(?:DISTRIB_CODENAME\s*=)\s*(.*)", re.I) def linux_distribution(distname='', version='', id='', supported_dists=_supported_dists, full_distribution_name=1): return _linux_distribution(distname, version, id, supported_dists, full_distribution_name) def _linux_distribution(distname, version, id, supported_dists, full_distribution_name): """ Tries to determine the name of the Linux OS distribution name. The function first looks for a distribution release file in /etc and then reverts to _dist_try_harder() in case no suitable files are found. supported_dists may be given to define the set of Linux distributions to look for. It defaults to a list of currently supported Linux distributions identified by their release file name. If full_distribution_name is true (default), the full distribution read from the OS is returned. Otherwise the short name taken from supported_dists is used. 
Returns a tuple (distname, version, id) which default to the args given as parameters. """ # check for the Debian/Ubuntu /etc/lsb-release file first, needed so # that the distribution doesn't get identified as Debian. # https://bugs.python.org/issue9514 try: with open("/etc/lsb-release", "r") as etclsbrel: for line in etclsbrel: m = _distributor_id_file_re.search(line) if m: _u_distname = m.group(1).strip() m = _release_file_re.search(line) if m: _u_version = m.group(1).strip() m = _codename_file_re.search(line) if m: _u_id = m.group(1).strip() if _u_distname and _u_version: return (_u_distname, _u_version, _u_id) except (EnvironmentError, UnboundLocalError): pass try: etc = os.listdir(_UNIXCONFDIR) except OSError: # Probably not a Unix system return distname, version, id etc.sort() for file in etc: m = _release_filename.match(file) if m is not None: _distname, dummy = m.groups() if _distname in supported_dists: distname = _distname break else: return _dist_try_harder(distname, version, id) # Read the first line with open(os.path.join(_UNIXCONFDIR, file), 'r', encoding='utf-8', errors='surrogateescape') as f: firstline = f.readline() _distname, _version, _id = _parse_release_file(firstline) if _distname and full_distribution_name: distname = _distname if _version: version = _version if _id: id = _id return distname, version, id awscli-1.17.14/awscli/customizations/0000755000000000000000000000000013620325757017474 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/dynamodb.py0000644000000000000000000000363613620325630021641 0ustar rootroot00000000000000# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import base64 import binascii import logging from awscli.compat import six logger = logging.getLogger(__name__) def register_dynamodb_paginator_fix(event_emitter): DynamoDBPaginatorFix(event_emitter).register_events() def parse_last_evaluated_key_binary(parsed, **kwargs): # Because we disable parsing blobs into a binary type and leave them as # a base64 string if a binary field is present in the continuation token # as is the case with dynamodb the binary will be double encoded. This # ensures that the continuation token is properly converted to binary to # avoid double encoding the contination token. last_evaluated_key = parsed.get('LastEvaluatedKey', None) if last_evaluated_key is None: return for key, val in last_evaluated_key.items(): if 'B' in val: val['B'] = base64.b64decode(val['B']) class DynamoDBPaginatorFix(object): def __init__(self, event_emitter): self._event_emitter = event_emitter def register_events(self): self._event_emitter.register( 'calling-command.dynamodb.*', self._maybe_register_pagination_fix ) def _maybe_register_pagination_fix(self, parsed_globals, **kwargs): if parsed_globals.paginate: self._event_emitter.register( 'after-call.dynamodb.*', parse_last_evaluated_key_binary ) awscli-1.17.14/awscli/customizations/streamingoutputarg.py0000644000000000000000000000747413620325554024021 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. 
or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from botocore.model import Shape from awscli.arguments import BaseCLIArgument def add_streaming_output_arg(argument_table, operation_model, session, **kwargs): # Implementation detail: hooked up to 'building-argument-table' # event. if _has_streaming_output(operation_model): streaming_argument_name = _get_streaming_argument_name(operation_model) argument_table['outfile'] = StreamingOutputArgument( response_key=streaming_argument_name, operation_model=operation_model, session=session, name='outfile') def _has_streaming_output(model): return model.has_streaming_output def _get_streaming_argument_name(model): return model.output_shape.serialization['payload'] class StreamingOutputArgument(BaseCLIArgument): BUFFER_SIZE = 32768 HELP = 'Filename where the content will be saved' def __init__(self, response_key, operation_model, name, session, buffer_size=None): self._name = name self.argument_model = Shape('StreamingOutputArgument', {'type': 'string'}) if buffer_size is None: buffer_size = self.BUFFER_SIZE self._buffer_size = buffer_size # This is the key in the response body where we can find the # streamed contents. self._response_key = response_key self._output_file = None self._name = name self._required = True self._operation_model = operation_model self._session = session @property def cli_name(self): # Because this is a parameter, not an option, it shouldn't have the # '--' prefix. We want to use the self.py_name to indicate that it's an # argument. return self._name @property def cli_type_name(self): return 'string' @property def required(self): return self._required @required.setter def required(self, value): self._required = value @property def documentation(self): return self.HELP def add_to_parser(self, parser): parser.add_argument(self._name, metavar=self.py_name, help=self.HELP) def add_to_params(self, parameters, value): self._output_file = value service_id = self._operation_model.service_model.service_id.hyphenize() operation_name = self._operation_model.name self._session.register('after-call.%s.%s' % ( service_id, operation_name), self.save_file) def save_file(self, parsed, **kwargs): if self._response_key not in parsed: # If the response key is not in parsed, then # we've received an error message and we'll let the AWS CLI # error handler print out an error message. We have no # file to save in this situation. return body = parsed[self._response_key] buffer_size = self._buffer_size with open(self._output_file, 'wb') as fp: data = body.read(buffer_size) while data: fp.write(data) data = body.read(buffer_size) # We don't want to include the streaming param in # the returned response. del parsed[self._response_key] awscli-1.17.14/awscli/customizations/argrename.py0000644000000000000000000001745413620325630022010 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. 
A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ """ from awscli.customizations import utils ARGUMENT_RENAMES = { # Mapping of original arg to renamed arg. # The key is ..argname # The first part of the key is used for event registration # so if you wanted to rename something for an entire service you # could say 'ec2.*.dry-run': 'renamed-arg-name', or if you wanted # to rename across all services you could say '*.*.dry-run': 'new-name'. 'ec2.create-image.no-no-reboot': 'reboot', 'ec2.*.no-egress': 'ingress', 'ec2.*.no-disable-api-termination': 'enable-api-termination', 'opsworks.*.region': 'stack-region', 'elastictranscoder.*.output': 'job-output', 'swf.register-activity-type.version': 'activity-version', 'swf.register-workflow-type.version': 'workflow-version', 'datapipeline.*.query': 'objects-query', 'datapipeline.get-pipeline-definition.version': 'pipeline-version', 'emr.*.job-flow-ids': 'cluster-ids', 'emr.*.job-flow-id': 'cluster-id', 'cloudsearchdomain.search.query': 'search-query', 'cloudsearchdomain.suggest.query': 'suggest-query', 'sns.subscribe.endpoint': 'notification-endpoint', 'deploy.*.s-3-location': 's3-location', 'deploy.*.ec-2-tag-filters': 'ec2-tag-filters', 'codepipeline.get-pipeline.version': 'pipeline-version', 'codepipeline.create-custom-action-type.version': 'action-version', 'codepipeline.delete-custom-action-type.version': 'action-version', 'kinesisanalytics.add-application-output.output': 'application-output', 'kinesisanalyticsv2.add-application-output.output': 'application-output', 'route53.delete-traffic-policy.version': 'traffic-policy-version', 'route53.get-traffic-policy.version': 'traffic-policy-version', 'route53.update-traffic-policy-comment.version': 'traffic-policy-version', 'gamelift.create-build.version': 'build-version', 'gamelift.update-build.version': 'build-version', 'gamelift.create-script.version': 'script-version', 'gamelift.update-script.version': 'script-version', 'route53domains.view-billing.start': 'start-time', 'route53domains.view-billing.end': 'end-time', 'apigateway.create-rest-api.version': 'api-version', 'apigatewayv2.create-api.version': 'api-version', 'apigatewayv2.update-api.version': 'api-version', 'pinpoint.get-campaign-version.version': 'campaign-version', 'pinpoint.get-segment-version.version': 'segment-version', 'pinpoint.delete-email-template.version': 'template-version', 'pinpoint.delete-push-template.version': 'template-version', 'pinpoint.delete-sms-template.version': 'template-version', 'pinpoint.delete-voice-template.version': 'template-version', 'pinpoint.get-email-template.version': 'template-version', 'pinpoint.get-push-template.version': 'template-version', 'pinpoint.get-sms-template.version': 'template-version', 'pinpoint.get-voice-template.version': 'template-version', 'pinpoint.update-email-template.version': 'template-version', 'pinpoint.update-push-template.version': 'template-version', 'pinpoint.update-sms-template.version': 'template-version', 'pinpoint.update-voice-template.version': 'template-version', 'stepfunctions.send-task-success.output': 'task-output', 'clouddirectory.publish-schema.version': 'schema-version', 'mturk.list-qualification-types.query': 'types-query', 
'workdocs.create-notification-subscription.endpoint': 'notification-endpoint', 'workdocs.describe-users.query': 'user-query', 'lex-models.delete-bot.version': 'bot-version', 'lex-models.delete-intent.version': 'intent-version', 'lex-models.delete-slot-type.version': 'slot-type-version', 'lex-models.get-intent.version': 'intent-version', 'lex-models.get-slot-type.version': 'slot-type-version', 'lex-models.delete-bot-version.version': 'bot-version', 'lex-models.delete-intent-version.version': 'intent-version', 'lex-models.delete-slot-type-version.version': 'slot-type-version', 'lex-models.get-export.version': 'resource-version', 'mobile.create-project.region': 'project-region', 'rekognition.create-stream-processor.output': 'stream-processor-output', 'eks.create-cluster.version': 'kubernetes-version', 'eks.update-cluster-version.version': 'kubernetes-version', 'eks.create-nodegroup.version': 'kubernetes-version', 'eks.update-nodegroup-version.version': 'kubernetes-version', 'schemas.*.version': 'schema-version', } # Same format as ARGUMENT_RENAMES, but instead of renaming the arguments, # an alias is created to the original arugment and marked as undocumented. # This is useful when you need to change the name of an argument but you # still need to support the old argument. HIDDEN_ALIASES = { 'cognito-identity.create-identity-pool.open-id-connect-provider-arns': 'open-id-connect-provider-ar-ns', 'storagegateway.describe-tapes.tape-arns': 'tape-ar-ns', 'storagegateway.describe-tape-archives.tape-arns': 'tape-ar-ns', 'storagegateway.describe-vtl-devices.vtl-device-arns': 'vtl-device-ar-ns', 'storagegateway.describe-cached-iscsi-volumes.volume-arns': 'volume-ar-ns', 'storagegateway.describe-stored-iscsi-volumes.volume-arns': 'volume-ar-ns', 'route53domains.view-billing.start-time': 'start', # These come from the xform_name() changes that no longer separates words # by numbers. 
'deploy.create-deployment-group.ec2-tag-set': 'ec-2-tag-set', 'deploy.list-application-revisions.s3-bucket': 's-3-bucket', 'deploy.list-application-revisions.s3-key-prefix': 's-3-key-prefix', 'deploy.update-deployment-group.ec2-tag-set': 'ec-2-tag-set', 'iam.enable-mfa-device.authentication-code1': 'authentication-code-1', 'iam.enable-mfa-device.authentication-code2': 'authentication-code-2', 'iam.resync-mfa-device.authentication-code1': 'authentication-code-1', 'iam.resync-mfa-device.authentication-code2': 'authentication-code-2', 'importexport.get-shipping-label.street1': 'street-1', 'importexport.get-shipping-label.street2': 'street-2', 'importexport.get-shipping-label.street3': 'street-3', 'lambda.publish-version.code-sha256': 'code-sha-256', 'lightsail.import-key-pair.public-key-base64': 'public-key-base-64', 'opsworks.register-volume.ec2-volume-id': 'ec-2-volume-id', } def register_arg_renames(cli): for original, new_name in ARGUMENT_RENAMES.items(): event_portion, original_arg_name = original.rsplit('.', 1) cli.register('building-argument-table.%s' % event_portion, rename_arg(original_arg_name, new_name)) for original, new_name in HIDDEN_ALIASES.items(): event_portion, original_arg_name = original.rsplit('.', 1) cli.register('building-argument-table.%s' % event_portion, hidden_alias(original_arg_name, new_name)) def rename_arg(original_arg_name, new_name): def _rename_arg(argument_table, **kwargs): if original_arg_name in argument_table: utils.rename_argument(argument_table, original_arg_name, new_name) return _rename_arg def hidden_alias(original_arg_name, alias_name): def _alias_arg(argument_table, **kwargs): if original_arg_name in argument_table: utils.make_hidden_alias(argument_table, original_arg_name, alias_name) return _alias_arg awscli-1.17.14/awscli/customizations/ecr.py0000644000000000000000000000775113620325630020617 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.commands import BasicCommand from awscli.customizations.utils import create_client_from_parsed_globals from base64 import b64decode import sys def register_ecr_commands(cli): cli.register('building-command-table.ecr', _inject_commands) def _inject_commands(command_table, session, **kwargs): command_table['get-login'] = ECRLogin(session) command_table['get-login-password'] = ECRGetLoginPassword(session) class ECRLogin(BasicCommand): """Log in with 'docker login'""" NAME = 'get-login' DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst') ARG_TABLE = [ { 'name': 'registry-ids', 'help_text': 'A list of AWS account IDs that correspond to the ' 'Amazon ECR registries that you want to log in to.', 'required': False, 'nargs': '+' }, { 'name': 'include-email', 'action': 'store_true', 'group_name': 'include-email', 'dest': 'include_email', 'default': True, 'required': False, 'help_text': ( "Specify if the '-e' flag should be included in the " "'docker login' command. 
The '-e' option has been deprecated " "and is removed in Docker version 17.06 and later. You must " "specify --no-include-email if you're using Docker version " "17.06 or later. The default behavior is to include the " "'-e' flag in the 'docker login' output."), }, { 'name': 'no-include-email', 'help_text': 'Include email arg', 'action': 'store_false', 'default': True, 'group_name': 'include-email', 'dest': 'include_email', 'required': False, }, ] def _run_main(self, parsed_args, parsed_globals): ecr_client = create_client_from_parsed_globals( self._session, 'ecr', parsed_globals) if not parsed_args.registry_ids: result = ecr_client.get_authorization_token() else: result = ecr_client.get_authorization_token( registryIds=parsed_args.registry_ids) for auth in result['authorizationData']: auth_token = b64decode(auth['authorizationToken']).decode() username, password = auth_token.split(':') command = ['docker', 'login', '-u', username, '-p', password] if parsed_args.include_email: command.extend(['-e', 'none']) command.append(auth['proxyEndpoint']) sys.stdout.write(' '.join(command)) sys.stdout.write('\n') return 0 class ECRGetLoginPassword(BasicCommand): """Get a password to be used with container clients such as Docker""" NAME = 'get-login-password' DESCRIPTION = BasicCommand.FROM_FILE( 'ecr/get-login-password_description.rst') def _run_main(self, parsed_args, parsed_globals): ecr_client = create_client_from_parsed_globals( self._session, 'ecr', parsed_globals) result = ecr_client.get_authorization_token() auth = result['authorizationData'][0] auth_token = b64decode(auth['authorizationToken']).decode() _, password = auth_token.split(':') sys.stdout.write(password) sys.stdout.write('\n') return 0 awscli-1.17.14/awscli/customizations/utils.py0000644000000000000000000002041613620325554021204 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ Utility functions to make it easier to work with customizations. """ import copy import sys from botocore.exceptions import ClientError def rename_argument(argument_table, existing_name, new_name): current = argument_table[existing_name] argument_table[new_name] = current current.name = new_name del argument_table[existing_name] def _copy_argument(argument_table, current_name, copy_name): current = argument_table[current_name] copy_arg = copy.copy(current) copy_arg.name = copy_name argument_table[copy_name] = copy_arg return copy_arg def make_hidden_alias(argument_table, existing_name, alias_name): """Create a hidden alias for an existing argument. This will copy an existing argument object in an arg table, and add a new entry to the arg table with a different name. The new argument will also be undocumented. This is needed if you want to check an existing argument, but you still need the other one to work for backwards compatibility reasons. 
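    Illustrative example (mirroring how the argrename customization uses
    this helper)::

        make_hidden_alias(argument_table, 'tape-arns', 'tape-ar-ns')

    keeps the legacy ``--tape-ar-ns`` spelling working while only
    ``--tape-arns`` remains documented.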
""" current = argument_table[existing_name] copy_arg = _copy_argument(argument_table, existing_name, alias_name) copy_arg._UNDOCUMENTED = True if current.required: # If the current argument is required, then # we'll mark both as not required, but # flag _DOCUMENT_AS_REQUIRED so our doc gen # knows to still document this argument as required. copy_arg.required = False current.required = False current._DOCUMENT_AS_REQUIRED = True def rename_command(command_table, existing_name, new_name): current = command_table[existing_name] command_table[new_name] = current current.name = new_name del command_table[existing_name] def alias_command(command_table, existing_name, new_name): """Moves an argument to a new name, keeping the old as a hidden alias. :type command_table: dict :param command_table: The full command table for the CLI or a service. :type existing_name: str :param existing_name: The current name of the command. :type new_name: str :param new_name: The new name for the command. """ current = command_table[existing_name] _copy_argument(command_table, existing_name, new_name) current._UNDOCUMENTED = True def make_hidden_command_alias(command_table, existing_name, alias_name): """Create a hidden alias for an exiting command. This will copy an existing command object in a command table and add a new entry to the command table with a different name. The new command will be undocumented. This is needed if you want to change an existing command, but you still need the old name to work for backwards compatibility reasons. :type command_table: dict :param command_table: The full command table for the CLI or a service. :type existing_name: str :param existing_name: The current name of the command. :type alias_name: str :param alias_name: The new name for the command. """ new = _copy_argument(command_table, existing_name, alias_name) new._UNDOCUMENTED = True def validate_mutually_exclusive_handler(*groups): def _handler(parsed_args, **kwargs): return validate_mutually_exclusive(parsed_args, *groups) return _handler def validate_mutually_exclusive(parsed_args, *groups): """Validate mututally exclusive groups in the parsed args.""" args_dict = vars(parsed_args) all_args = set(arg for group in groups for arg in group) if not any(k in all_args for k in args_dict if args_dict[k] is not None): # If none of the specified args are in a mutually exclusive group # there is nothing left to validate. return current_group = None for key in [k for k in args_dict if args_dict[k] is not None]: key_group = _get_group_for_key(key, groups) if key_group is None: # If they key is not part of a mutex group, we can move on. continue if current_group is None: current_group = key_group elif not key_group == current_group: raise ValueError('The key "%s" cannot be specified when one ' 'of the following keys are also specified: ' '%s' % (key, ', '.join(current_group))) def _get_group_for_key(key, groups): for group in groups: if key in group: return group def s3_bucket_exists(s3_client, bucket_name): bucket_exists = True try: # See if the bucket exists by running a head bucket s3_client.head_bucket(Bucket=bucket_name) except ClientError as e: # If a client error is thrown. Check that it was a 404 error. # If it was a 404 error, than the bucket does not exist. 
error_code = int(e.response['Error']['Code']) if error_code == 404: bucket_exists = False return bucket_exists def create_client_from_parsed_globals(session, service_name, parsed_globals, overrides=None): """Creates a service client, taking parsed_globals into account Any values specified in overrides will override the returned dict. Note that this override occurs after 'region' from parsed_globals has been translated into 'region_name' in the resulting dict. """ client_args = {} if 'region' in parsed_globals: client_args['region_name'] = parsed_globals.region if 'endpoint_url' in parsed_globals: client_args['endpoint_url'] = parsed_globals.endpoint_url if 'verify_ssl' in parsed_globals: client_args['verify'] = parsed_globals.verify_ssl if overrides: client_args.update(overrides) return session.create_client(service_name, **client_args) def uni_print(statement, out_file=None): """ This function is used to properly write unicode to a file, usually stdout or stdderr. It ensures that the proper encoding is used if the statement is not a string type. """ if out_file is None: out_file = sys.stdout try: # Otherwise we assume that out_file is a # text writer type that accepts str/unicode instead # of bytes. out_file.write(statement) except UnicodeEncodeError: # Some file like objects like cStringIO will # try to decode as ascii on python2. # # This can also fail if our encoding associated # with the text writer cannot encode the unicode # ``statement`` we've been given. This commonly # happens on windows where we have some S3 key # previously encoded with utf-8 that can't be # encoded using whatever codepage the user has # configured in their console. # # At this point we've already failed to do what's # been requested. We now try to make a best effort # attempt at printing the statement to the outfile. # We're using 'ascii' as the default because if the # stream doesn't give us any encoding information # we want to pick an encoding that has the highest # chance of printing successfully. new_encoding = getattr(out_file, 'encoding', 'ascii') # When the output of the aws command is being piped, # ``sys.stdout.encoding`` is ``None``. if new_encoding is None: new_encoding = 'ascii' new_statement = statement.encode( new_encoding, 'replace').decode(new_encoding) out_file.write(new_statement) out_file.flush() def get_policy_arn_suffix(region): """Method to return region value as expected by policy arn""" region_string = region.lower() if region_string.startswith("cn-"): return "aws-cn" elif region_string.startswith("us-gov"): return "aws-us-gov" else: return "aws" awscli-1.17.14/awscli/customizations/commands.py0000644000000000000000000004131713620325554021650 0ustar rootroot00000000000000import logging import os from botocore import model from botocore.compat import OrderedDict from botocore.validate import validate_parameters from botocore.docs.bcdoc import docevents import awscli from awscli.argparser import ArgTableArgParser from awscli.argprocess import unpack_argument, unpack_cli_arg from awscli.arguments import CustomArgument, create_argument_model_from_schema from awscli.clidocs import OperationDocumentEventHandler from awscli.clidriver import CLICommand from awscli.help import HelpCommand from awscli.schema import SchemaTransformer LOG = logging.getLogger(__name__) _open = open class _FromFile(object): def __init__(self, *paths, **kwargs): """ ``**kwargs`` can contain a ``root_module`` argument that contains the root module where the file contents should be searched. 
This is an optional argument, and if no value is provided, will default to ``awscli``. This means that by default we look for examples in the ``awscli`` module. """ self.filename = None if paths: self.filename = os.path.join(*paths) if 'root_module' in kwargs: self.root_module = kwargs['root_module'] else: self.root_module = awscli class BasicCommand(CLICommand): """Basic top level command with no subcommands. If you want to create a new command, subclass this and provide the values documented below. """ # This is the name of your command, so if you want to # create an 'aws mycommand ...' command, the NAME would be # 'mycommand' NAME = 'commandname' # This is the description that will be used for the 'help' # command. DESCRIPTION = 'describe the command' # This is optional, if you are fine with the default synopsis # (the way all the built in operations are documented) then you # can leave this empty. SYNOPSIS = '' # If you want to provide some hand written examples, you can do # so here. This is written in RST format. This is optional, # you don't have to provide any examples, though highly encouraged! EXAMPLES = '' # If your command has arguments, you can specify them here. This is # somewhat of an implementation detail, but this is a list of dicts # where the dicts match the kwargs of the CustomArgument's __init__. # For example, if I want to add a '--argument-one' and an # '--argument-two' command, I'd say: # # ARG_TABLE = [ # {'name': 'argument-one', 'help_text': 'This argument does foo bar.', # 'action': 'store', 'required': False, 'cli_type_name': 'string',}, # {'name': 'argument-two', 'help_text': 'This argument does some other thing.', # 'action': 'store', 'choices': ['a', 'b', 'c']}, # ] # # A `schema` parameter option is available to accept a custom JSON # structure as input. See the file `awscli/schema.py` for more info. ARG_TABLE = [] # If you want the command to have subcommands, you can provide a list of # dicts. We use a list here because we want to allow a user to provide # the order they want to use for subcommands. # SUBCOMMANDS = [ # {'name': 'subcommand1', 'command_class': SubcommandClass}, # {'name': 'subcommand2', 'command_class': SubcommandClass2}, # ] # The command_class must subclass from ``BasicCommand``. SUBCOMMANDS = [] FROM_FILE = _FromFile # You can set the DESCRIPTION, SYNOPSIS, and EXAMPLES to FROM_FILE # and we'll automatically read in that data from the file. # This is useful if you have a lot of content and would prefer to keep # the docs out of the class definition. For example: # # DESCRIPTION = FROM_FILE # # will set the DESCRIPTION value to the contents of # awscli/examples//_description.rst # The naming conventions for these attributes are: # # DESCRIPTION = awscli/examples//_description.rst # SYNOPSIS = awscli/examples//_synopsis.rst # EXAMPLES = awscli/examples//_examples.rst # # You can also provide a relative path and we'll load the file # from the specified location: # # DESCRIPTION = awscli/examples/ # # For example: # # DESCRIPTION = FROM_FILE('command, 'subcommand, '_description.rst') # DESCRIPTION = 'awscli/examples/command/subcommand/_description.rst' # # At this point, the only other thing you have to implement is a _run_main # method (see the method for more information). def __init__(self, session): self._session = session self._arg_table = None self._subcommand_table = None self._lineage = [self] def __call__(self, args, parsed_globals): # args is the remaining unparsed args. 
# We might be able to parse these args so we need to create # an arg parser and parse them. self._subcommand_table = self._build_subcommand_table() self._arg_table = self._build_arg_table() event = 'before-building-argument-table-parser.%s' % \ ".".join(self.lineage_names) self._session.emit(event, argument_table=self._arg_table, args=args, session=self._session) parser = ArgTableArgParser(self.arg_table, self.subcommand_table) parsed_args, remaining = parser.parse_known_args(args) # Unpack arguments for key, value in vars(parsed_args).items(): cli_argument = None # Convert the name to use dashes instead of underscore # as these are how the parameters are stored in the # `arg_table`. xformed = key.replace('_', '-') if xformed in self.arg_table: cli_argument = self.arg_table[xformed] value = unpack_argument( self._session, 'custom', self.name, cli_argument, value ) # If this parameter has a schema defined, then allow plugins # a chance to process and override its value. if self._should_allow_plugins_override(cli_argument, value): override = self._session\ .emit_first_non_none_response( 'process-cli-arg.%s.%s' % ('custom', self.name), cli_argument=cli_argument, value=value, operation=None) if override is not None: # A plugin supplied a conversion value = override else: # Unpack the argument, which is a string, into the # correct Python type (dict, list, etc) value = unpack_cli_arg(cli_argument, value) self._validate_value_against_schema( cli_argument.argument_model, value) setattr(parsed_args, key, value) if hasattr(parsed_args, 'help'): self._display_help(parsed_args, parsed_globals) elif getattr(parsed_args, 'subcommand', None) is None: # No subcommand was specified so call the main # function for this top level command. if remaining: raise ValueError("Unknown options: %s" % ','.join(remaining)) return self._run_main(parsed_args, parsed_globals) else: return self.subcommand_table[parsed_args.subcommand](remaining, parsed_globals) def _validate_value_against_schema(self, model, value): validate_parameters(value, model) def _should_allow_plugins_override(self, param, value): if (param and param.argument_model is not None and value is not None): return True return False def _run_main(self, parsed_args, parsed_globals): # Subclasses should implement this method. # parsed_globals are the parsed global args (things like region, # profile, output, etc.) # parsed_args are any arguments you've defined in your ARG_TABLE # that are parsed. These will come through as whatever you've # provided as the 'dest' key. Otherwise they default to the # 'name' key. For example: ARG_TABLE[0] = {"name": "foo-arg", ...} # can be accessed by ``parsed_args.foo_arg``. 
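#
# As an illustration only (the command name, argument, and greeting below are
# hypothetical, not part of the AWS CLI), a minimal subclass that overrides
# this method might look like:
#
#     class HelloCommand(BasicCommand):
#         NAME = 'hello'
#         DESCRIPTION = 'Print a greeting.'
#         ARG_TABLE = [
#             {'name': 'who', 'help_text': 'Who to greet.',
#              'cli_type_name': 'string', 'required': False},
#         ]
#
#         def _run_main(self, parsed_args, parsed_globals):
#             # uni_print is the helper shown earlier in this package;
#             # its import is omitted here for brevity.
#             uni_print('Hello, %s\n' % (parsed_args.who or 'world'))
#             return 0
#
# The value returned from _run_main is propagated as the command's exit
# status, so returning 0 signals success (the no-op codecommit subcommands
# later in this package do exactly that).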
raise NotImplementedError("_run_main") def _build_subcommand_table(self): subcommand_table = OrderedDict() for subcommand in self.SUBCOMMANDS: subcommand_name = subcommand['name'] subcommand_class = subcommand['command_class'] subcommand_table[subcommand_name] = subcommand_class(self._session) self._session.emit('building-command-table.%s' % self.NAME, command_table=subcommand_table, session=self._session, command_object=self) self._add_lineage(subcommand_table) return subcommand_table def _display_help(self, parsed_args, parsed_globals): help_command = self.create_help_command() help_command(parsed_args, parsed_globals) def create_help_command(self): command_help_table = {} if self.SUBCOMMANDS: command_help_table = self.create_help_command_table() return BasicHelp(self._session, self, command_table=command_help_table, arg_table=self.arg_table) def create_help_command_table(self): """ Create the command table into a form that can be handled by the BasicDocHandler. """ commands = {} for command in self.SUBCOMMANDS: commands[command['name']] = command['command_class'](self._session) self._add_lineage(commands) return commands def _build_arg_table(self): arg_table = OrderedDict() self._session.emit('building-arg-table.%s' % self.NAME, arg_table=self.ARG_TABLE) for arg_data in self.ARG_TABLE: # If a custom schema was passed in, create the argument_model # so that it can be validated and docs can be generated. if 'schema' in arg_data: argument_model = create_argument_model_from_schema( arg_data.pop('schema')) arg_data['argument_model'] = argument_model custom_argument = CustomArgument(**arg_data) arg_table[arg_data['name']] = custom_argument return arg_table def _add_lineage(self, command_table): for command in command_table: command_obj = command_table[command] command_obj.lineage = self.lineage + [command_obj] @property def arg_table(self): if self._arg_table is None: self._arg_table = self._build_arg_table() return self._arg_table @property def subcommand_table(self): if self._subcommand_table is None: self._subcommand_table = self._build_subcommand_table() return self._subcommand_table @classmethod def add_command(cls, command_table, session, **kwargs): command_table[cls.NAME] = cls(session) @property def name(self): return self.NAME @property def lineage(self): return self._lineage @lineage.setter def lineage(self, value): self._lineage = value class BasicHelp(HelpCommand): def __init__(self, session, command_object, command_table, arg_table, event_handler_class=None): super(BasicHelp, self).__init__(session, command_object, command_table, arg_table) # This is defined in HelpCommand so we're matching the # casing here. if event_handler_class is None: event_handler_class = BasicDocHandler self.EventHandlerClass = event_handler_class # These are public attributes that are mapped from the command # object. These are used by the BasicDocHandler below. 
self._description = command_object.DESCRIPTION self._synopsis = command_object.SYNOPSIS self._examples = command_object.EXAMPLES @property def name(self): return self.obj.NAME @property def description(self): return self._get_doc_contents('_description') @property def synopsis(self): return self._get_doc_contents('_synopsis') @property def examples(self): return self._get_doc_contents('_examples') @property def event_class(self): return '.'.join(self.obj.lineage_names) def _get_doc_contents(self, attr_name): value = getattr(self, attr_name) if isinstance(value, BasicCommand.FROM_FILE): if value.filename is not None: trailing_path = value.filename else: trailing_path = os.path.join(self.name, attr_name + '.rst') root_module = value.root_module doc_path = os.path.join( os.path.abspath(os.path.dirname(root_module.__file__)), 'examples', trailing_path) with _open(doc_path) as f: return f.read() else: return value def __call__(self, args, parsed_globals): # Create an event handler for a Provider Document instance = self.EventHandlerClass(self) # Now generate all of the events for a Provider document. # We pass ourselves along so that we can, in turn, get passed # to all event handlers. docevents.generate_events(self.session, self) self.renderer.render(self.doc.getvalue()) instance.unregister() class BasicDocHandler(OperationDocumentEventHandler): def __init__(self, help_command): super(BasicDocHandler, self).__init__(help_command) self.doc = help_command.doc def doc_description(self, help_command, **kwargs): self.doc.style.h2('Description') self.doc.write(help_command.description) self.doc.style.new_paragraph() self._add_top_level_args_reference(help_command) def doc_synopsis_start(self, help_command, **kwargs): if not help_command.synopsis: super(BasicDocHandler, self).doc_synopsis_start( help_command=help_command, **kwargs) else: self.doc.style.h2('Synopsis') self.doc.style.start_codeblock() self.doc.writeln(help_command.synopsis) def doc_synopsis_option(self, arg_name, help_command, **kwargs): if not help_command.synopsis: doc = help_command.doc argument = help_command.arg_table[arg_name] if argument.synopsis: option_str = argument.synopsis elif argument.group_name in self._arg_groups: if argument.group_name in self._documented_arg_groups: # This arg is already documented so we can move on. return option_str = ' | '.join( [a.cli_name for a in self._arg_groups[argument.group_name]]) self._documented_arg_groups.append(argument.group_name) elif argument.cli_type_name == 'boolean': option_str = '%s' % argument.cli_name elif argument.nargs == '+': option_str = "%s [...]" % argument.cli_name else: option_str = '%s ' % argument.cli_name if not (argument.required or argument.positional_arg): option_str = '[%s]' % option_str doc.writeln('%s' % option_str) else: # A synopsis has been provided so we don't need to write # anything here. 
pass def doc_synopsis_end(self, help_command, **kwargs): if not help_command.synopsis: super(BasicDocHandler, self).doc_synopsis_end( help_command=help_command, **kwargs) else: self.doc.style.end_codeblock() def doc_examples(self, help_command, **kwargs): if help_command.examples: self.doc.style.h2('Examples') self.doc.write(help_command.examples) def doc_subitems_start(self, help_command, **kwargs): if help_command.command_table: doc = help_command.doc doc.style.h2('Available Commands') doc.style.toctree() def doc_subitem(self, command_name, help_command, **kwargs): if help_command.command_table: doc = help_command.doc doc.style.tocitem(command_name) def doc_subitems_end(self, help_command, **kwargs): pass def doc_output(self, help_command, event_name, **kwargs): pass def doc_options_end(self, help_command, **kwargs): self._add_top_level_args_reference(help_command) awscli-1.17.14/awscli/customizations/codecommit.py0000644000000000000000000001641213620325554022170 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import re import sys import logging import fileinput import datetime from botocore.auth import SigV4Auth from botocore.awsrequest import AWSRequest from botocore.compat import urlsplit from awscli.customizations.commands import BasicCommand from awscli.compat import NonTranslatedStdout logger = logging.getLogger('botocore.credentials') def initialize(cli): """ The entry point for the credential helper """ cli.register('building-command-table.codecommit', inject_commands) def inject_commands(command_table, session, **kwargs): """ Injects new commands into the codecommit subcommand. """ command_table['credential-helper'] = CodeCommitCommand(session) class CodeCommitNoOpStoreCommand(BasicCommand): NAME = 'store' DESCRIPTION = ('This operation does nothing, credentials' ' are calculated each time') SYNOPSIS = ('aws codecommit credential-helper store') EXAMPLES = '' _UNDOCUMENTED = True def _run_main(self, args, parsed_globals): return 0 class CodeCommitNoOpEraseCommand(BasicCommand): NAME = 'erase' DESCRIPTION = ('This operation does nothing, no credentials' ' are ever stored') SYNOPSIS = ('aws codecommit credential-helper erase') EXAMPLES = '' _UNDOCUMENTED = True def _run_main(self, args, parsed_globals): return 0 class CodeCommitGetCommand(BasicCommand): NAME = 'get' DESCRIPTION = ('get a username SigV4 credential pair' ' based on protocol, host and path provided' ' from standard in. This is primarily' ' called by git to generate credentials to' ' authenticate against AWS CodeCommit') SYNOPSIS = ('aws codecommit credential-helper get') EXAMPLES = (r'echo -e "protocol=https\\n' r'path=/v1/repos/myrepo\\n' 'host=git-codecommit.us-east-1.amazonaws.com"' ' | aws codecommit credential-helper get') ARG_TABLE = [ { 'name': 'ignore-host-check', 'action': 'store_true', 'default': False, 'group_name': 'ignore-host-check', 'help_text': ( 'Optional. Generate credentials regardless of whether' ' the domain is an Amazon domain.' 
) } ] def __init__(self, session): super(CodeCommitGetCommand, self).__init__(session) def _run_main(self, args, parsed_globals): git_parameters = self.read_git_parameters() if ('amazon.com' in git_parameters['host'] or 'amazonaws.com' in git_parameters['host'] or args.ignore_host_check): theUrl = self.extract_url(git_parameters) region = self.extract_region(git_parameters, parsed_globals) signature = self.sign_request(region, theUrl) self.write_git_parameters(signature) return 0 def write_git_parameters(self, signature): username = self._session.get_credentials().access_key if self._session.get_credentials().token is not None: username += "%" + self._session.get_credentials().token # Python will add a \r to the line ending for a text stdout in Windows. # Git does not like the \r, so switch to binary with NonTranslatedStdout() as binary_stdout: binary_stdout.write('username={0}\n'.format(username)) logger.debug('username\n%s', username) binary_stdout.write('password={0}\n'.format(signature)) # need to explicitly flush the buffer here, # before we turn the stream back to text for windows binary_stdout.flush() logger.debug('signature\n%s', signature) def read_git_parameters(self): parsed = {} for line in sys.stdin: key, value = line.strip().split('=', 1) parsed[key] = value return parsed def extract_url(self, parameters): url = '{0}://{1}/{2}'.format(parameters['protocol'], parameters['host'], parameters['path']) return url def extract_region(self, parameters, parsed_globals): match = re.match(r'(vpce-.+\.)?git-codecommit(-fips)?\.([^.]+)\.(vpce\.)?amazonaws\.com', parameters['host']) if match is not None: return match.group(3) elif parsed_globals.region is not None: return parsed_globals.region else: return self._session.get_config_variable('region') def sign_request(self, region, url_to_sign): credentials = self._session.get_credentials() signer = SigV4Auth(credentials, 'codecommit', region) request = AWSRequest() request.url = url_to_sign request.method = 'GIT' now = datetime.datetime.utcnow() request.context['timestamp'] = now.strftime('%Y%m%dT%H%M%S') split = urlsplit(request.url) # we don't want to include the port number in the signature hostname = split.netloc.split(':')[0] canonical_request = '{0}\n{1}\n\nhost:{2}\n\nhost\n'.format( request.method, split.path, hostname) logger.debug("Calculating signature using v4 auth.") logger.debug('CanonicalRequest:\n%s', canonical_request) string_to_sign = signer.string_to_sign(request, canonical_request) logger.debug('StringToSign:\n%s', string_to_sign) signature = signer.signature(string_to_sign, request) logger.debug('Signature:\n%s', signature) return '{0}Z{1}'.format(request.context['timestamp'], signature) class CodeCommitCommand(BasicCommand): NAME = 'credential-helper' SYNOPSIS = ('aws codecommit credential-helper') EXAMPLES = '' SUBCOMMANDS = [ {'name': 'get', 'command_class': CodeCommitGetCommand}, {'name': 'store', 'command_class': CodeCommitNoOpStoreCommand}, {'name': 'erase', 'command_class': CodeCommitNoOpEraseCommand}, ] DESCRIPTION = ('Provide a SigV4 compatible user name and' ' password for git smart HTTP ' ' These commands are consumed by git and' ' should not used directly. Erase and Store' ' are no-ops. Get is operation to generate' ' credentials to authenticate AWS CodeCommit.' 
' Run \"aws codecommit credential-helper help\"' ' for details') def _run_main(self, args, parsed_globals): raise ValueError('usage: aws [options] codecommit' ' credential-helper ' '[parameters]\naws: error: too few arguments') awscli-1.17.14/awscli/customizations/ec2/0000755000000000000000000000000013620325757020145 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/ec2/secgroupsimplify.py0000644000000000000000000002041713620325554024122 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ This customization adds the following scalar parameters to the authorize operations: * --protocol: tcp | udp | icmp or any protocol number * --port: A single integer or a range (min-max). You can specify ``all`` to mean all ports (for example, port range 0-65535) * --source-group: Either the source security group ID or name. * --cidr - The CIDR range. Cannot be used when specifying a source or destination security group. """ from awscli.arguments import CustomArgument def _add_params(argument_table, **kwargs): arg = ProtocolArgument('protocol', help_text=PROTOCOL_DOCS) argument_table['protocol'] = arg argument_table['ip-protocol']._UNDOCUMENTED = True arg = PortArgument('port', help_text=PORT_DOCS) argument_table['port'] = arg # Port handles both the from-port and to-port, # we need to not document both args. argument_table['from-port']._UNDOCUMENTED = True argument_table['to-port']._UNDOCUMENTED = True arg = CidrArgument('cidr', help_text=CIDR_DOCS) argument_table['cidr'] = arg argument_table['cidr-ip']._UNDOCUMENTED = True arg = SourceGroupArgument('source-group', help_text=SOURCEGROUP_DOCS) argument_table['source-group'] = arg argument_table['source-security-group-name']._UNDOCUMENTED = True arg = GroupOwnerArgument('group-owner', help_text=GROUPOWNER_DOCS) argument_table['group-owner'] = arg argument_table['source-security-group-owner-id']._UNDOCUMENTED = True def _check_args(parsed_args, **kwargs): # This function checks the parsed args. If the user specified # the --ip-permissions option with any of the scalar options we # raise an error. 
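# For example (hypothetical IDs and values), the following invocation
# would be rejected, because --port repeats information that the
# --ip-permissions structure already carries:
#
#     aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
#         --port 22 \
#         --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22}]'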
arg_dict = vars(parsed_args) if arg_dict['ip_permissions']: for key in ('protocol', 'port', 'cidr', 'source_group', 'group_owner'): if arg_dict[key]: msg = ('The --%s option is not compatible ' 'with the --ip-permissions option ') % key raise ValueError(msg) def _add_docs(help_command, **kwargs): doc = help_command.doc doc.style.new_paragraph() doc.style.start_note() msg = ('To specify multiple rules in a single command ' 'use the --ip-permissions option') doc.include_doc_string(msg) doc.style.end_note() EVENTS = [ ('building-argument-table.ec2.authorize-security-group-ingress', _add_params), ('building-argument-table.ec2.authorize-security-group-egress', _add_params), ('building-argument-table.ec2.revoke-security-group-ingress', _add_params), ('building-argument-table.ec2.revoke-security-group-egress', _add_params), ('operation-args-parsed.ec2.authorize-security-group-ingress', _check_args), ('operation-args-parsed.ec2.authorize-security-group-egress', _check_args), ('operation-args-parsed.ec2.revoke-security-group-ingress', _check_args), ('operation-args-parsed.ec2.revoke-security-group-egress', _check_args), ('doc-description.ec2.authorize-security-group-ingress', _add_docs), ('doc-description.ec2.authorize-security-group-egress', _add_docs), ('doc-description.ec2.revoke-security-group-ingress', _add_docs), ('doc-description.ec2.revoke-security-groupdoc-ingress', _add_docs), ] PROTOCOL_DOCS = ('

The IP protocol: tcp | udp | icmp. '
                 '(VPC only) Use all to specify all protocols. '
                 'If this argument is provided without also providing the '
                 'port argument, then it will be applied to all '
                 'ports for the specified protocol.')
PORT_DOCS = ('For TCP or UDP: The range of ports to allow. '
             'A single integer or a range (min-max). '
             'For ICMP: A single integer or a range (type-code) '
             'representing the ICMP type number and the ICMP code number '
             'respectively. A value of -1 indicates all ICMP codes for '
             'all ICMP types. A value of -1 just for type '
             'indicates all ICMP codes for the specified ICMP type.')
CIDR_DOCS = 'The CIDR IP range.'
SOURCEGROUP_DOCS = ('The name or ID of the source security group.')
GROUPOWNER_DOCS = ('The AWS account ID that owns the source security '
                   'group. Cannot be used when specifying a CIDR IP '
                   'address.

') def register_secgroup(event_handler): for event, handler in EVENTS: event_handler.register(event, handler) def _build_ip_permissions(params, key, value): if 'IpPermissions' not in params: params['IpPermissions'] = [{}] if key == 'CidrIp': if 'IpRanges' not in params['ip_permissions'][0]: params['IpPermissions'][0]['IpRanges'] = [] params['IpPermissions'][0]['IpRanges'].append(value) elif key in ('GroupId', 'GroupName', 'UserId'): if 'UserIdGroupPairs' not in params['IpPermissions'][0]: params['IpPermissions'][0]['UserIdGroupPairs'] = [{}] params['IpPermissions'][0]['UserIdGroupPairs'][0][key] = value else: params['IpPermissions'][0][key] = value class ProtocolArgument(CustomArgument): def add_to_params(self, parameters, value): if value: try: int_value = int(value) if (int_value < 0 or int_value > 255) and int_value != -1: msg = ('protocol numbers must be in the range 0-255 ' 'or -1 to specify all protocols') raise ValueError(msg) except ValueError: if value not in ('tcp', 'udp', 'icmp', 'all'): msg = ('protocol parameter should be one of: ' 'tcp|udp|icmp|all or any valid protocol number.') raise ValueError(msg) if value == 'all': value = '-1' _build_ip_permissions(parameters, 'IpProtocol', value) class PortArgument(CustomArgument): def add_to_params(self, parameters, value): if value: try: if value == '-1' or value == 'all': fromstr = '-1' tostr = '-1' elif '-' in value: # We can get away with simple logic here because # argparse will not allow values such as # "-1-8", and these aren't actually valid # values any from from/to ports. fromstr, tostr = value.split('-', 1) else: fromstr, tostr = (value, value) _build_ip_permissions(parameters, 'FromPort', int(fromstr)) _build_ip_permissions(parameters, 'ToPort', int(tostr)) except ValueError: msg = ('port parameter should be of the ' 'form (e.g. 22 or 22-25)') raise ValueError(msg) class CidrArgument(CustomArgument): def add_to_params(self, parameters, value): if value: value = [{'CidrIp': value}] _build_ip_permissions(parameters, 'IpRanges', value) class SourceGroupArgument(CustomArgument): def add_to_params(self, parameters, value): if value: if value.startswith('sg-'): _build_ip_permissions(parameters, 'GroupId', value) else: _build_ip_permissions(parameters, 'GroupName', value) class GroupOwnerArgument(CustomArgument): def add_to_params(self, parameters, value): if value: _build_ip_permissions(parameters, 'UserId', value) awscli-1.17.14/awscli/customizations/ec2/protocolarg.py0000644000000000000000000000257013620325554023051 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ This customization allows the user to specify the values "tcp", "udp", or "icmp" as values for the --protocol parameter. The actual Protocol parameter of the operation accepts only integer protocol numbers. 
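For example, a --protocol value of tcp is rewritten to the protocol
number 6 (udp to 17, icmp to 1) before the request parameters are
built; see _fix_args below.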
""" def _fix_args(params, **kwargs): key_name = 'Protocol' if key_name in params: if params[key_name] == 'tcp': params[key_name] = '6' elif params[key_name] == 'udp': params[key_name] = '17' elif params[key_name] == 'icmp': params[key_name] = '1' elif params[key_name] == 'all': params[key_name] = '-1' def register_protocol_args(cli): cli.register('before-parameter-build.ec2.CreateNetworkAclEntry', _fix_args) cli.register('before-parameter-build.ec2.ReplaceNetworkAclEntry', _fix_args) awscli-1.17.14/awscli/customizations/ec2/addcount.py0000644000000000000000000000566013620325554022322 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging from botocore import model from awscli.arguments import BaseCLIArgument logger = logging.getLogger(__name__) DEFAULT = 1 HELP = """

Number of instances to launch. If a single number is provided, it is assumed to be the minimum to launch (defaults to %d). If a range is provided in the form min:max then the first number is interpreted as the minimum number of instances to launch and the second is interpreted as the maximum number of instances to launch.

""" % DEFAULT def register_count_events(event_handler): event_handler.register( 'building-argument-table.ec2.run-instances', ec2_add_count) event_handler.register( 'before-parameter-build.ec2.RunInstances', set_default_count) def ec2_add_count(argument_table, **kwargs): argument_table['count'] = CountArgument('count') del argument_table['min-count'] del argument_table['max-count'] def set_default_count(params, **kwargs): params.setdefault('MaxCount', DEFAULT) params.setdefault('MinCount', DEFAULT) class CountArgument(BaseCLIArgument): def __init__(self, name): self.argument_model = model.Shape('CountArgument', {'type': 'string'}) self._name = name self._required = False @property def cli_name(self): return '--' + self._name @property def cli_type_name(self): return 'string' @property def required(self): return self._required @required.setter def required(self, value): self._required = value @property def documentation(self): return HELP def add_to_parser(self, parser): # We do NOT set default value here. It will be set later by event hook. parser.add_argument(self.cli_name, metavar=self.py_name, help='Number of instances to launch') def add_to_params(self, parameters, value): if value is None: # NO-OP if value is not explicitly set by user return try: if ':' in value: minstr, maxstr = value.split(':') else: minstr, maxstr = (value, value) parameters['MinCount'] = int(minstr) parameters['MaxCount'] = int(maxstr) except: msg = ('count parameter should be of ' 'form min[:max] (e.g. 1 or 1:10)') raise ValueError(msg) awscli-1.17.14/awscli/customizations/ec2/bundleinstance.py0000644000000000000000000001513213620325630023505 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging from hashlib import sha1 import hmac import base64 import datetime from awscli.compat import six from awscli.arguments import CustomArgument logger = logging.getLogger('ec2bundleinstance') # This customization adds the following scalar parameters to the # bundle-instance operation: # --bucket: BUCKET_DOCS = ('The bucket in which to store the AMI. ' 'You can specify a bucket that you already own or ' 'a new bucket that Amazon EC2 creates on your behalf. ' 'If you specify a bucket that belongs to someone else, ' 'Amazon EC2 returns an error.') # --prefix: PREFIX_DOCS = ('The prefix for the image component names being stored ' 'in Amazon S3.') # --owner-akid OWNER_AKID_DOCS = 'The access key ID of the owner of the Amazon S3 bucket.' # --policy POLICY_DOCS = ( "An Amazon S3 upload policy that gives " "Amazon EC2 permission to upload items into Amazon S3 " "on the user's behalf. If you provide this parameter, " "you must also provide " "your secret access key, so we can create a policy " "signature for you (the secret access key is not passed " "to Amazon EC2). If you do not provide this parameter, " "we generate an upload policy for you automatically. 
" "For more information about upload policies see the " "sections about policy construction and signatures in the " '' 'Amazon Simple Storage Service Developer Guide.') # --owner-sak OWNER_SAK_DOCS = ('The AWS secret access key for the owner of the ' 'Amazon S3 bucket specified in the --bucket ' 'parameter. This parameter is required so that a ' 'signature can be computed for the policy.') def _add_params(argument_table, **kwargs): # Add the scalar parameters and also change the complex storage # param to not be required so the user doesn't get an error from # argparse if they only supply scalar params. storage_arg = argument_table['storage'] storage_arg.required = False arg = BundleArgument(storage_param='Bucket', name='bucket', help_text=BUCKET_DOCS) argument_table['bucket'] = arg arg = BundleArgument(storage_param='Prefix', name='prefix', help_text=PREFIX_DOCS) argument_table['prefix'] = arg arg = BundleArgument(storage_param='AWSAccessKeyId', name='owner-akid', help_text=OWNER_AKID_DOCS) argument_table['owner-akid'] = arg arg = BundleArgument(storage_param='_SAK', name='owner-sak', help_text=OWNER_SAK_DOCS) argument_table['owner-sak'] = arg arg = BundleArgument(storage_param='UploadPolicy', name='policy', help_text=POLICY_DOCS) argument_table['policy'] = arg def _check_args(parsed_args, **kwargs): # This function checks the parsed args. If the user specified # the --ip-permissions option with any of the scalar options we # raise an error. logger.debug(parsed_args) arg_dict = vars(parsed_args) if arg_dict['storage']: for key in ('bucket', 'prefix', 'owner_akid', 'owner_sak', 'policy'): if arg_dict[key]: msg = ('Mixing the --storage option ' 'with the simple, scalar options is ' 'not recommended.') raise ValueError(msg) POLICY = ('{{"expiration": "{expires}",' '"conditions": [' '{{"bucket": "{bucket}"}},' '{{"acl": "ec2-bundle-read"}},' '["starts-with", "$key", "{prefix}"]' ']}}' ) def _generate_policy(params): # Called if there is no policy supplied by the user. # Creates a policy that provides access for 24 hours. delta = datetime.timedelta(hours=24) expires = datetime.datetime.utcnow() + delta expires_iso = expires.strftime("%Y-%m-%dT%H:%M:%S.%fZ") policy = POLICY.format(expires=expires_iso, bucket=params['Bucket'], prefix=params['Prefix']) params['UploadPolicy'] = policy def _generate_signature(params): # If we have a policy and a sak, create the signature. policy = params.get('UploadPolicy') sak = params.get('_SAK') if policy and sak: policy = base64.b64encode(six.b(policy)).decode('utf-8') new_hmac = hmac.new(sak.encode('utf-8'), digestmod=sha1) new_hmac.update(six.b(policy)) ps = base64.encodestring(new_hmac.digest()).strip().decode('utf-8') params['UploadPolicySignature'] = ps del params['_SAK'] def _check_params(params, **kwargs): # Called just before call but prior to building the params. # Adds information not supplied by the user. 
storage = params['Storage']['S3'] if 'UploadPolicy' not in storage: _generate_policy(storage) if 'UploadPolicySignature' not in storage: _generate_signature(storage) EVENTS = [ ('building-argument-table.ec2.bundle-instance', _add_params), ('operation-args-parsed.ec2.bundle-instance', _check_args), ('before-parameter-build.ec2.BundleInstance', _check_params), ] def register_bundleinstance(event_handler): # Register all of the events for customizing BundleInstance for event, handler in EVENTS: event_handler.register(event, handler) class BundleArgument(CustomArgument): def __init__(self, storage_param, *args, **kwargs): super(BundleArgument, self).__init__(*args, **kwargs) self._storage_param = storage_param def _build_storage(self, params, value): # Build up the Storage data structure if 'Storage' not in params: params['Storage'] = {'S3': {}} params['Storage']['S3'][self._storage_param] = value def add_to_params(self, parameters, value): if value: self._build_storage(parameters, value) awscli-1.17.14/awscli/customizations/ec2/paginate.py0000644000000000000000000000440013620325554022300 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. def register_ec2_page_size_injector(event_emitter): EC2PageSizeInjector().register(event_emitter) class EC2PageSizeInjector(object): # Operations to auto-paginate and their specific whitelists. # Format: # Key: Operation # Value: List of parameters to add to whitelist for that operation. TARGET_OPERATIONS = { "describe-volumes": [], "describe-snapshots": ['OwnerIds', 'RestorableByUserIds'] } # Parameters which should be whitelisted for every operation. UNIVERSAL_WHITELIST = ['NextToken', 'DryRun', 'PaginationConfig'] DEFAULT_PAGE_SIZE = 1000 def register(self, event_emitter): """Register `inject` for each target operation.""" event_template = "calling-command.ec2.%s" for operation in self.TARGET_OPERATIONS: event = event_template % operation event_emitter.register_last(event, self.inject) def inject(self, event_name, parsed_globals, call_parameters, **kwargs): """Conditionally inject PageSize.""" if not parsed_globals.paginate: return pagination_config = call_parameters.get('PaginationConfig', {}) if 'PageSize' in pagination_config: return operation_name = event_name.split('.')[-1] whitelisted_params = self.TARGET_OPERATIONS.get(operation_name) if whitelisted_params is None: return whitelisted_params = whitelisted_params + self.UNIVERSAL_WHITELIST for param in call_parameters: if param not in whitelisted_params: return pagination_config['PageSize'] = self.DEFAULT_PAGE_SIZE call_parameters['PaginationConfig'] = pagination_config awscli-1.17.14/awscli/customizations/ec2/runinstances.py0000644000000000000000000001711713620325554023235 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. 
A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ This customization adds two new parameters to the ``ec2 run-instance`` command. The first, ``--secondary-private-ip-addresses`` allows a list of IP addresses within the specified subnet to be associated with the new instance. The second, ``--secondary-ip-address-count`` allows you to specify how many additional IP addresses you want but the actual address will be assigned for you. This functionality (and much more) is also available using the ``--network-interfaces`` complex argument. This just makes two of the most commonly used features available more easily. """ from awscli.arguments import CustomArgument # --secondary-private-ip-address SECONDARY_PRIVATE_IP_ADDRESSES_DOCS = ( '[EC2-VPC] A secondary private IP address for the network interface ' 'or instance. You can specify this multiple times to assign multiple ' 'secondary IP addresses. If you want additional private IP addresses ' 'but do not need a specific address, use the ' '--secondary-private-ip-address-count option.') # --secondary-private-ip-address-count SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS = ( '[EC2-VPC] The number of secondary IP addresses to assign to ' 'the network interface or instance.') # --associate-public-ip-address ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS = ( '[EC2-VPC] If specified a public IP address will be assigned ' 'to the new instance in a VPC.') def _add_params(argument_table, **kwargs): arg = SecondaryPrivateIpAddressesArgument( name='secondary-private-ip-addresses', help_text=SECONDARY_PRIVATE_IP_ADDRESSES_DOCS) argument_table['secondary-private-ip-addresses'] = arg arg = SecondaryPrivateIpAddressCountArgument( name='secondary-private-ip-address-count', help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS) argument_table['secondary-private-ip-address-count'] = arg arg = AssociatePublicIpAddressArgument( name='associate-public-ip-address', help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS, action='store_true', group_name='associate_public_ip') argument_table['associate-public-ip-address'] = arg arg = NoAssociatePublicIpAddressArgument( name='no-associate-public-ip-address', help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS, action='store_false', group_name='associate_public_ip') argument_table['no-associate-public-ip-address'] = arg def _check_args(parsed_args, **kwargs): # This function checks the parsed args. If the user specified # the --network-interfaces option with any of the scalar options we # raise an error. arg_dict = vars(parsed_args) if arg_dict['network_interfaces']: for key in ('secondary_private_ip_addresses', 'secondary_private_ip_address_count', 'associate_public_ip_address'): if arg_dict[key]: msg = ('Mixing the --network-interfaces option ' 'with the simple, scalar options is ' 'not supported.') raise ValueError(msg) def _fix_args(params, **kwargs): # The RunInstances request provides some parameters # such as --subnet-id and --security-group-id that can be specified # as separate options only if the request DOES NOT include a # NetworkInterfaces structure. In those cases, the values for # these parameters must be specified inside the NetworkInterfaces # structure. This function checks for those parameters # and fixes them if necessary. 
# NOTE: If the user is a default VPC customer, RunInstances # allows them to specify the security group by name or by id. # However, in this scenario we can only support id because # we can't place a group name in the NetworkInterfaces structure. network_interface_params = [ 'PrivateIpAddresses', 'SecondaryPrivateIpAddressCount', 'AssociatePublicIpAddress' ] if 'NetworkInterfaces' in params: interface = params['NetworkInterfaces'][0] if any(param in interface for param in network_interface_params): if 'SubnetId' in params: interface['SubnetId'] = params['SubnetId'] del params['SubnetId'] if 'SecurityGroupIds' in params: interface['Groups'] = params['SecurityGroupIds'] del params['SecurityGroupIds'] if 'PrivateIpAddress' in params: ip_addr = {'PrivateIpAddress': params['PrivateIpAddress'], 'Primary': True} interface['PrivateIpAddresses'] = [ip_addr] del params['PrivateIpAddress'] if 'Ipv6AddressCount' in params: interface['Ipv6AddressCount'] = params['Ipv6AddressCount'] del params['Ipv6AddressCount'] if 'Ipv6Addresses' in params: interface['Ipv6Addresses'] = params['Ipv6Addresses'] del params['Ipv6Addresses'] EVENTS = [ ('building-argument-table.ec2.run-instances', _add_params), ('operation-args-parsed.ec2.run-instances', _check_args), ('before-parameter-build.ec2.RunInstances', _fix_args), ] def register_runinstances(event_handler): # Register all of the events for customizing BundleInstance for event, handler in EVENTS: event_handler.register(event, handler) def _build_network_interfaces(params, key, value): # Build up the NetworkInterfaces data structure if 'NetworkInterfaces' not in params: params['NetworkInterfaces'] = [{'DeviceIndex': 0}] if key == 'PrivateIpAddresses': if 'PrivateIpAddresses' not in params['NetworkInterfaces'][0]: params['NetworkInterfaces'][0]['PrivateIpAddresses'] = value else: params['NetworkInterfaces'][0][key] = value class SecondaryPrivateIpAddressesArgument(CustomArgument): def add_to_parser(self, parser, cli_name=None): parser.add_argument(self.cli_name, dest=self.py_name, default=self._default, nargs='*') def add_to_params(self, parameters, value): if value: value = [{'PrivateIpAddress': v, 'Primary': False} for v in value] _build_network_interfaces( parameters, 'PrivateIpAddresses', value) class SecondaryPrivateIpAddressCountArgument(CustomArgument): def add_to_parser(self, parser, cli_name=None): parser.add_argument(self.cli_name, dest=self.py_name, default=self._default, type=int) def add_to_params(self, parameters, value): if value: _build_network_interfaces( parameters, 'SecondaryPrivateIpAddressCount', value) class AssociatePublicIpAddressArgument(CustomArgument): def add_to_params(self, parameters, value): if value is True: _build_network_interfaces( parameters, 'AssociatePublicIpAddress', value) class NoAssociatePublicIpAddressArgument(CustomArgument): def add_to_params(self, parameters, value): if value is False: _build_network_interfaces( parameters, 'AssociatePublicIpAddress', value) awscli-1.17.14/awscli/customizations/ec2/decryptpassword.py0000644000000000000000000001076513620325554023760 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. 
This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging import os import base64 import rsa from awscli.compat import six from botocore import model from awscli.arguments import BaseCLIArgument logger = logging.getLogger(__name__) HELP = """

The file that contains the private key used to launch the instance (e.g. windows-keypair.pem). If this is supplied, the password data sent from EC2 will be decrypted before display.

""" def ec2_add_priv_launch_key(argument_table, operation_model, session, **kwargs): """ This handler gets called after the argument table for the operation has been created. It's job is to add the ``priv-launch-key`` parameter. """ argument_table['priv-launch-key'] = LaunchKeyArgument( session, operation_model, 'priv-launch-key') class LaunchKeyArgument(BaseCLIArgument): def __init__(self, session, operation_model, name): self._session = session self.argument_model = model.Shape('LaunchKeyArgument', {'type': 'string'}) self._operation_model = operation_model self._name = name self._key_path = None self._required = False @property def cli_type_name(self): return 'string' @property def required(self): return self._required @required.setter def required(self, value): self._required = value @property def documentation(self): return HELP def add_to_parser(self, parser): parser.add_argument(self.cli_name, dest=self.py_name, help='SSH Private Key file') def add_to_params(self, parameters, value): """ This gets called with the value of our ``--priv-launch-key`` if it is specified. It needs to determine if the path provided is valid and, if it is, it stores it in the instance variable ``_key_path`` for use by the decrypt routine. """ if value: path = os.path.expandvars(value) path = os.path.expanduser(path) if os.path.isfile(path): self._key_path = path endpoint_prefix = \ self._operation_model.service_model.endpoint_prefix event = 'after-call.%s.%s' % (endpoint_prefix, self._operation_model.name) self._session.register(event, self._decrypt_password_data) else: msg = ('priv-launch-key should be a path to the ' 'local SSH private key file used to launch ' 'the instance.') raise ValueError(msg) def _decrypt_password_data(self, parsed, **kwargs): """ This handler gets called after the GetPasswordData command has been executed. It is called with the and the ``parsed`` data. It checks to see if a private launch key was specified on the command. If it was, it tries to use that private key to decrypt the password data and replace it in the returned data dictionary. """ if self._key_path is not None: logger.debug("Decrypting password data using: %s", self._key_path) value = parsed.get('PasswordData') if not value: return try: with open(self._key_path) as pk_file: pk_contents = pk_file.read() private_key = rsa.PrivateKey.load_pkcs1(six.b(pk_contents)) value = base64.b64decode(value) value = rsa.decrypt(value, private_key) logger.debug(parsed) parsed['PasswordData'] = value.decode('utf-8') logger.debug(parsed) except Exception: logger.debug('Unable to decrypt PasswordData', exc_info=True) msg = ('Unable to decrypt password data using ' 'provided private key file.') raise ValueError(msg) awscli-1.17.14/awscli/customizations/ec2/__init__.py0000644000000000000000000000106513620325554022253 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
awscli-1.17.14/awscli/customizations/addexamples.py0000644000000000000000000000342513620325554022334 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ Add authored examples to MAN and HTML documentation --------------------------------------------------- This customization allows authored examples in ReST format to be inserted into the generated help for an Operation. To get this to work you need to: * Register the ``add_examples`` function below with the ``doc-examples.*.*`` event. * Create a file containing ReST format fragment with the examples. The file needs to be created in the ``examples/`` directory and needs to be named ``-.rst``. For example, ``examples/ec2/ec2-create-key-pair.rst``. """ import os import logging LOG = logging.getLogger(__name__) def add_examples(help_command, **kwargs): doc_path = os.path.join( os.path.dirname( os.path.dirname( os.path.abspath(__file__))), 'examples') doc_path = os.path.join(doc_path, help_command.event_class.replace('.', os.path.sep)) doc_path = doc_path + '.rst' LOG.debug("Looking for example file at: %s", doc_path) if os.path.isfile(doc_path): help_command.doc.style.h2('Examples') fp = open(doc_path) for line in fp.readlines(): help_command.doc.write(line) awscli-1.17.14/awscli/customizations/rekognition.py0000644000000000000000000000674513620325554022405 0ustar rootroot00000000000000# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import re from awscli.arguments import CustomArgument IMAGE_FILE_DOCSTRING = ('

The content of the image to be uploaded. '
                        'To specify the content of a local file use the '
                        'fileb:// prefix. '
                        'Example: fileb://image.png')
IMAGE_DOCSTRING_ADDENDUM = ('To specify a local file use --%s '
                            'instead.

') FILE_PARAMETER_UPDATES = { 'compare-faces.source-image': 'source-image-bytes', 'compare-faces.target-image': 'target-image-bytes', '*.image': 'image-bytes', } def register_rekognition_detect_labels(cli): for target, new_param in FILE_PARAMETER_UPDATES.items(): operation, old_param = target.rsplit('.', 1) cli.register('building-argument-table.rekognition.%s' % operation, ImageArgUpdater(old_param, new_param)) class ImageArgUpdater(object): def __init__(self, source_param, new_param): self._source_param = source_param self._new_param = new_param def __call__(self, session, argument_table, **kwargs): if not self._valid_target(argument_table): return self._update_param( argument_table, self._source_param, self._new_param) def _valid_target(self, argument_table): # We need to ensure that the target parameter is a shape that # looks like it is the Image shape. This means checking that it # has a member named Bytes of the blob type. if self._source_param in argument_table: param = argument_table[self._source_param] input_model = param.argument_model bytes_member = input_model.members.get('Bytes') if bytes_member is not None and bytes_member.type_name == 'blob': return True return False def _update_param(self, argument_table, source_param, new_param): argument_table[new_param] = ImageArgument( new_param, source_param, help_text=IMAGE_FILE_DOCSTRING, cli_type_name='blob') argument_table[source_param].required = False doc_addendum = IMAGE_DOCSTRING_ADDENDUM % new_param argument_table[source_param].documentation += doc_addendum class ImageArgument(CustomArgument): def __init__(self, name, source_param, **kwargs): super(ImageArgument, self).__init__(name, **kwargs) self._parameter_to_overwrite = reverse_xform_name(source_param) def add_to_params(self, parameters, value): if value is None: return image_file_param = {'Bytes': value} if parameters.get(self._parameter_to_overwrite): parameters[self._parameter_to_overwrite].update(image_file_param) else: parameters[self._parameter_to_overwrite] = image_file_param def _upper(match): return match.group(1).lstrip('-').upper() def reverse_xform_name(name): return re.sub(r'(^.|-.)', _upper, name) awscli-1.17.14/awscli/customizations/s3endpoint.py0000644000000000000000000000345713620325554022140 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """Disable endpoint url customizations for s3. There's a customization in botocore such that for S3 operations we try to fix the S3 endpoint url based on whether a bucket is dns compatible. We also try to map the endpoint url to the standard S3 region (s3.amazonaws.com). This normally happens even if a user provides an --endpoint-url (if the bucket is DNS compatible). This customization ensures that if a user specifies an --endpoint-url, then we turn off the botocore customization that messes with endpoint url. 
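For example, if a user runs an s3 or s3api command with
--endpoint-url http://localhost:9000 (a hypothetical local endpoint),
the fix_s3_host handler is unregistered and requests are sent to that
URL as given instead of being redirected to s3.amazonaws.com.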
""" from functools import partial from botocore.utils import fix_s3_host def register_s3_endpoint(cli): handler = partial(on_top_level_args_parsed, event_handler=cli) cli.register( 'top-level-args-parsed', handler, unique_id='s3-endpoint') def on_top_level_args_parsed(parsed_args, event_handler, **kwargs): # The fix_s3_host has logic to set the endpoint to the # standard region endpoint for s3 (s3.amazonaws.com) under # certain conditions. We're making sure that if # the user provides an --endpoint-url, that entire handler # is disabled. if parsed_args.command in ['s3', 's3api'] and \ parsed_args.endpoint_url is not None: event_handler.unregister('before-sign.s3', fix_s3_host) awscli-1.17.14/awscli/customizations/sessendemail.py0000644000000000000000000001055313620325554022521 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ This customization provides a simpler interface for the ``ses send-email`` command. This simplified form is based on the legacy CLI. The simple format will be:: aws ses send-email --subject SUBJECT --from FROM_EMAIL --to-addresses addr ... --cc-addresses addr ... --bcc-addresses addr ... --reply-to-addresses addr ... --return-path addr --text TEXTBODY --html HTMLBODY """ from awscli.customizations import utils from awscli.arguments import CustomArgument from awscli.customizations.utils import validate_mutually_exclusive_handler TO_HELP = ('The email addresses of the primary recipients. ' 'You can specify multiple recipients as space-separated values') CC_HELP = ('The email addresses of copy recipients (Cc). ' 'You can specify multiple recipients as space-separated values') BCC_HELP = ('The email addresses of blind-carbon-copy recipients (Bcc). 
' 'You can specify multiple recipients as space-separated values') SUBJECT_HELP = 'The subject of the message' TEXT_HELP = 'The raw text body of the message' HTML_HELP = 'The HTML body of the message' def register_ses_send_email(event_handler): event_handler.register('building-argument-table.ses.send-email', _promote_args) event_handler.register( 'operation-args-parsed.ses.send-email', validate_mutually_exclusive_handler( ['destination'], ['to', 'cc', 'bcc'])) event_handler.register( 'operation-args-parsed.ses.send-email', validate_mutually_exclusive_handler( ['message'], ['text', 'html'])) def _promote_args(argument_table, **kwargs): argument_table['message'].required = False argument_table['destination'].required = False utils.rename_argument(argument_table, 'source', new_name='from') argument_table['to'] = AddressesArgument( 'to', 'ToAddresses', help_text=TO_HELP) argument_table['cc'] = AddressesArgument( 'cc', 'CcAddresses', help_text=CC_HELP) argument_table['bcc'] = AddressesArgument( 'bcc', 'BccAddresses', help_text=BCC_HELP) argument_table['subject'] = BodyArgument( 'subject', 'Subject', help_text=SUBJECT_HELP) argument_table['text'] = BodyArgument( 'text', 'Text', help_text=TEXT_HELP) argument_table['html'] = BodyArgument( 'html', 'Html', help_text=HTML_HELP) def _build_destination(params, key, value): # Build up the Destination data structure if 'Destination' not in params: params['Destination'] = {} params['Destination'][key] = value def _build_message(params, key, value): # Build up the Message data structure if 'Message' not in params: params['Message'] = {'Subject': {}, 'Body': {}} if key in ('Text', 'Html'): params['Message']['Body'][key] = {'Data': value} elif key == 'Subject': params['Message']['Subject'] = {'Data': value} class AddressesArgument(CustomArgument): def __init__(self, name, json_key, help_text='', dest=None, default=None, action=None, required=None, choices=None, cli_type_name=None): super(AddressesArgument, self).__init__(name=name, help_text=help_text, required=required, nargs='+') self._json_key = json_key def add_to_params(self, parameters, value): if value: _build_destination(parameters, self._json_key, value) class BodyArgument(CustomArgument): def __init__(self, name, json_key, help_text='', required=None): super(BodyArgument, self).__init__(name=name, help_text=help_text, required=required) self._json_key = json_key def add_to_params(self, parameters, value): if value: _build_message(parameters, self._json_key, value) awscli-1.17.14/awscli/customizations/assumerole.py0000644000000000000000000000347313620325554022227 0ustar rootroot00000000000000import os import logging from botocore.exceptions import ProfileNotFound from botocore.credentials import JSONFileCache LOG = logging.getLogger(__name__) CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache')) def register_assume_role_provider(event_handlers): event_handlers.register('session-initialized', inject_assume_role_provider_cache, unique_id='inject_assume_role_cred_provider_cache') def inject_assume_role_provider_cache(session, **kwargs): try: cred_chain = session.get_component('credential_provider') except ProfileNotFound: # If a user has provided a profile that does not exist, # trying to retrieve components/config on the session # will raise ProfileNotFound. 
Sometimes this is invalid: # # "ec2 describe-instances --profile unknown" # # and sometimes this is perfectly valid: # # "configure set region us-west-2 --profile brand-new-profile" # # Because we can't know (and don't want to know) whether # the customer is trying to do something valid, we just # immediately return. If it's invalid something else # up the stack will raise ProfileNotFound, otherwise # the configure (and other) commands will work as expected. LOG.debug("ProfileNotFound caught when trying to inject " "assume-role cred provider cache. Not configuring " "JSONFileCache for assume-role.") return assume_role_provider = cred_chain.get_provider('assume-role') assume_role_provider.cache = JSONFileCache(CACHE_DIR) web_identity_provider = cred_chain.get_provider( 'assume-role-with-web-identity' ) web_identity_provider.cache = JSONFileCache(CACHE_DIR) awscli-1.17.14/awscli/customizations/opsworks.py0000644000000000000000000005115513620325554021737 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import datetime import json import logging import os import platform import re import shlex import socket import subprocess import tempfile import textwrap from botocore.exceptions import ClientError from awscli.compat import shlex_quote, urlopen, ensure_text_type from awscli.customizations.commands import BasicCommand from awscli.customizations.utils import create_client_from_parsed_globals LOG = logging.getLogger(__name__) IAM_USER_POLICY_NAME = "OpsWorks-Instance" IAM_USER_POLICY_TIMEOUT = datetime.timedelta(minutes=15) IAM_PATH = '/AWS/OpsWorks/' IAM_POLICY_ARN = 'arn:aws:iam::aws:policy/AWSOpsWorksInstanceRegistration' HOSTNAME_RE = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?$AGENT_TMP_DIR/opsworks-agent-installer/preconfig <]', 'help_text': """Either the EC2 instance ID or the hostname of the instance or machine to be registered with OpsWorks. Cannot be used together with `--local`."""}, ] def __init__(self, session): super(OpsWorksRegister, self).__init__(session) self._stack = None self._ec2_instance = None self._prov_params = None self._use_address = None self._use_hostname = None self._name_for_iam = None self.access_key = None def _create_clients(self, args, parsed_globals): self.iam = self._session.create_client('iam') self.opsworks = create_client_from_parsed_globals( self._session, 'opsworks', parsed_globals) def _run_main(self, args, parsed_globals): self._create_clients(args, parsed_globals) self.prevalidate_arguments(args) self.retrieve_stack(args) self.validate_arguments(args) self.determine_details(args) self.create_iam_entities(args) self.setup_target_machine(args) def prevalidate_arguments(self, args): """ Validates command line arguments before doing anything else. 
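Raises ValueError when neither or both of a target and --local are given, when --local is used on a non-Linux machine, when mutually exclusive SSH or infrastructure options are combined, or when the hostname is not a valid OpsWorks hostname.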
""" if not args.target and not args.local: raise ValueError("One of target or --local is required.") elif args.target and args.local: raise ValueError( "Arguments target and --local are mutually exclusive.") if args.local and platform.system() != 'Linux': raise ValueError( "Non-Linux instances are not supported by AWS OpsWorks.") if args.ssh and (args.username or args.private_key): raise ValueError( "Argument --override-ssh cannot be used together with " "--ssh-username or --ssh-private-key.") if args.infrastructure_class == 'ec2': if args.private_ip: raise ValueError( "--override-private-ip is not supported for EC2.") if args.public_ip: raise ValueError( "--override-public-ip is not supported for EC2.") if args.infrastructure_class == 'on-premises' and \ args.use_instance_profile: raise ValueError( "--use-instance-profile is only supported for EC2.") if args.hostname: if not HOSTNAME_RE.match(args.hostname): raise ValueError( "Invalid hostname: '%s'. Hostnames must consist of " "letters, digits and dashes only and must not start or " "end with a dash." % args.hostname) def retrieve_stack(self, args): """ Retrieves the stack from the API, thereby ensures that it exists. Provides `self._stack`, `self._prov_params`, `self._use_address`, and `self._ec2_instance`. """ LOG.debug("Retrieving stack and provisioning parameters") self._stack = self.opsworks.describe_stacks( StackIds=[args.stack_id] )['Stacks'][0] self._prov_params = \ self.opsworks.describe_stack_provisioning_parameters( StackId=self._stack['StackId'] ) if args.infrastructure_class == 'ec2' and not args.local: LOG.debug("Retrieving EC2 instance information") ec2 = self._session.create_client( 'ec2', region_name=self._stack['Region']) # `desc_args` are arguments for the describe_instances call, # whereas `conditions` is a list of lambdas for further filtering # on the results of the call. desc_args = {'Filters': []} conditions = [] # make sure that the platforms (EC2/VPC) and VPC IDs of the stack # and the instance match if 'VpcId' in self._stack: desc_args['Filters'].append( {'Name': 'vpc-id', 'Values': [self._stack['VpcId']]} ) else: # Cannot search for non-VPC instances directly, thus filter # afterwards conditions.append(lambda instance: 'VpcId' not in instance) # target may be an instance ID, an IP address, or a name if INSTANCE_ID_RE.match(args.target): desc_args['InstanceIds'] = [args.target] elif IP_ADDRESS_RE.match(args.target): # Cannot search for either private or public IP at the same # time, thus filter afterwards conditions.append( lambda instance: instance.get('PrivateIpAddress') == args.target or instance.get('PublicIpAddress') == args.target) # also use the given address to connect self._use_address = args.target else: # names are tags desc_args['Filters'].append( {'Name': 'tag:Name', 'Values': [args.target]} ) # find all matching instances instances = [ i for r in ec2.describe_instances(**desc_args)['Reservations'] for i in r['Instances'] if all(c(i) for c in conditions) ] if not instances: raise ValueError( "Did not find any instance matching %s." % args.target) elif len(instances) > 1: raise ValueError( "Found multiple instances matching %s: %s." % ( args.target, ", ".join(i['InstanceId'] for i in instances))) self._ec2_instance = instances[0] def validate_arguments(self, args): """ Validates command line arguments using the retrieved information. 
""" if args.hostname: instances = self.opsworks.describe_instances( StackId=self._stack['StackId'] )['Instances'] if any(args.hostname.lower() == instance['Hostname'] for instance in instances): raise ValueError( "Invalid hostname: '%s'. Hostnames must be unique within " "a stack." % args.hostname) if args.infrastructure_class == 'ec2' and args.local: # make sure the regions match region = json.loads( ensure_text_type(urlopen(IDENTITY_URL).read()))['region'] if region != self._stack['Region']: raise ValueError( "The stack's and the instance's region must match.") def determine_details(self, args): """ Determine details (like the address to connect to and the hostname to use) from the given arguments and the retrieved data. Provides `self._use_address` (if not provided already), `self._use_hostname` and `self._name_for_iam`. """ # determine the address to connect to if not self._use_address: if args.local: pass elif args.infrastructure_class == 'ec2': if 'PublicIpAddress' in self._ec2_instance: self._use_address = self._ec2_instance['PublicIpAddress'] elif 'PrivateIpAddress' in self._ec2_instance: LOG.warn( "Instance does not have a public IP address. Trying " "to use the private address to connect.") self._use_address = self._ec2_instance['PrivateIpAddress'] else: # Should never happen raise ValueError( "The instance does not seem to have an IP address.") elif args.infrastructure_class == 'on-premises': self._use_address = args.target # determine the names to use if args.hostname: self._use_hostname = args.hostname self._name_for_iam = args.hostname elif args.local: self._use_hostname = None self._name_for_iam = socket.gethostname() else: self._use_hostname = None self._name_for_iam = args.target def create_iam_entities(self, args): """ Creates an IAM group, user and corresponding credentials. Provides `self.access_key`. """ if args.use_instance_profile: LOG.debug("Skipping IAM entity creation") self.access_key = None return LOG.debug("Creating the IAM group if necessary") group_name = "OpsWorks-%s" % clean_for_iam(self._stack['StackId']) try: self.iam.create_group(GroupName=group_name, Path=IAM_PATH) LOG.debug("Created IAM group %s", group_name) except ClientError as e: if e.response.get('Error', {}).get('Code') == 'EntityAlreadyExists': LOG.debug("IAM group %s exists, continuing", group_name) # group already exists, good pass else: raise # create the IAM user, trying alternatives if it already exists LOG.debug("Creating an IAM user") base_username = "OpsWorks-%s-%s" % ( shorten_name(clean_for_iam(self._stack['Name']), 25), shorten_name(clean_for_iam(self._name_for_iam), 25) ) for try_ in range(20): username = base_username + ("+%s" % try_ if try_ else "") try: self.iam.create_user(UserName=username, Path=IAM_PATH) except ClientError as e: if e.response.get('Error', {}).get('Code') == 'EntityAlreadyExists': LOG.debug( "IAM user %s already exists, trying another name", username ) # user already exists, try the next one pass else: raise else: LOG.debug("Created IAM user %s", username) break else: raise ValueError("Couldn't find an unused IAM user name.") LOG.debug("Adding the user to the group and attaching a policy") self.iam.add_user_to_group(GroupName=group_name, UserName=username) try: self.iam.attach_user_policy( PolicyArn=IAM_POLICY_ARN, UserName=username ) except ClientError as e: if e.response.get('Error', {}).get('Code') == 'AccessDenied': LOG.debug( "Unauthorized to attach policy %s to user %s. 
Trying " "to put user policy", IAM_POLICY_ARN, username ) self.iam.put_user_policy( PolicyName=IAM_USER_POLICY_NAME, PolicyDocument=self._iam_policy_document( self._stack['Arn'], IAM_USER_POLICY_TIMEOUT), UserName=username ) LOG.debug( "Put policy %s to user %s", IAM_USER_POLICY_NAME, username ) else: raise else: LOG.debug( "Attached policy %s to user %s", IAM_POLICY_ARN, username ) LOG.debug("Creating an access key") self.access_key = self.iam.create_access_key( UserName=username )['AccessKey'] def setup_target_machine(self, args): """ Setups the target machine by copying over the credentials and starting the installation process. """ remote_script = REMOTE_SCRIPT % { 'agent_installer_url': self._prov_params['AgentInstallerUrl'], 'preconfig': self._to_ruby_yaml(self._pre_config_document(args)), 'assets_download_bucket': self._prov_params['Parameters']['assets_download_bucket'] } if args.local: LOG.debug("Running the installer locally") subprocess.check_call(["/bin/sh", "-c", remote_script]) else: LOG.debug("Connecting to the target machine to run the installer.") self.ssh(args, remote_script) def ssh(self, args, remote_script): """ Runs a (sh) script on a remote machine via SSH. """ if platform.system() == 'Windows': try: script_file = tempfile.NamedTemporaryFile("wt", delete=False) script_file.write(remote_script) script_file.close() if args.ssh: call = args.ssh else: call = 'plink' if args.username: call += ' -l "%s"' % args.username if args.private_key: call += ' -i "%s"' % args.private_key call += ' "%s"' % self._use_address call += ' -m' call += ' "%s"' % script_file.name subprocess.check_call(call, shell=True) finally: os.remove(script_file.name) else: if args.ssh: call = shlex.split(str(args.ssh)) else: call = ['ssh', '-tt'] if args.username: call.extend(['-l', args.username]) if args.private_key: call.extend(['-i', args.private_key]) call.append(self._use_address) remote_call = ["/bin/sh", "-c", remote_script] call.append(" ".join(shlex_quote(word) for word in remote_call)) subprocess.check_call(call) def _pre_config_document(self, args): parameters = dict( stack_id=self._stack['StackId'], **self._prov_params["Parameters"] ) if self.access_key: parameters['access_key_id'] = self.access_key['AccessKeyId'] parameters['secret_access_key'] = \ self.access_key['SecretAccessKey'] if self._use_hostname: parameters['hostname'] = self._use_hostname if args.private_ip: parameters['private_ip'] = args.private_ip if args.public_ip: parameters['public_ip'] = args.public_ip parameters['import'] = args.infrastructure_class == 'ec2' LOG.debug("Using pre-config: %r", parameters) return parameters @staticmethod def _iam_policy_document(arn, timeout=None): statement = { "Action": "opsworks:RegisterInstance", "Effect": "Allow", "Resource": arn, } if timeout is not None: valid_until = datetime.datetime.utcnow() + timeout statement["Condition"] = { "DateLessThan": { "aws:CurrentTime": valid_until.strftime("%Y-%m-%dT%H:%M:%SZ") } } policy_document = { "Statement": [statement], "Version": "2012-10-17" } return json.dumps(policy_document) @staticmethod def _to_ruby_yaml(parameters): return "\n".join(":%s: %s" % (k, json.dumps(v)) for k, v in sorted(parameters.items())) def clean_for_iam(name): """ Cleans a name to fit IAM's naming requirements. """ return re.sub(r'[^A-Za-z0-9+=,.@_-]+', '-', name) def shorten_name(name, max_length): """ Shortens a name to the given number of characters. """ if len(name) <= max_length: return name q, r = divmod(max_length - 3, 2) return name[:q + r] + "..." 
+ name[-q:] awscli-1.17.14/awscli/customizations/codedeploy/0000755000000000000000000000000013620325757021623 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/codedeploy/utils.py0000644000000000000000000001104013620325630023317 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import platform import re import awscli.compat from awscli.compat import urlopen, URLError from awscli.customizations.codedeploy.systems import System, Ubuntu, Windows, RHEL from socket import timeout MAX_INSTANCE_NAME_LENGTH = 100 MAX_TAGS_PER_INSTANCE = 10 MAX_TAG_KEY_LENGTH = 128 MAX_TAG_VALUE_LENGTH = 256 INSTANCE_NAME_PATTERN = r'^[A-Za-z0-9+=,.@_-]+$' IAM_USER_ARN_PATTERN = r'^arn:aws:iam::[0-9]{12}:user/[A-Za-z0-9/+=,.@_-]+$' INSTANCE_NAME_ARG = { 'name': 'instance-name', 'synopsis': '--instance-name ', 'required': True, 'help_text': ( 'Required. The name of the on-premises instance.' ) } IAM_USER_ARN_ARG = { 'name': 'iam-user-arn', 'synopsis': '--iam-user-arn ', 'required': False, 'help_text': ( 'Optional. The IAM user associated with the on-premises instance.' ) } def validate_region(params, parsed_globals): if parsed_globals.region: params.region = parsed_globals.region else: params.region = params.session.get_config_variable('region') if not params.region: raise RuntimeError('Region not specified.') def validate_instance_name(params): if params.instance_name: if not re.match(INSTANCE_NAME_PATTERN, params.instance_name): raise ValueError('Instance name contains invalid characters.') if params.instance_name.startswith('i-'): raise ValueError('Instance name cannot start with \'i-\'.') if len(params.instance_name) > MAX_INSTANCE_NAME_LENGTH: raise ValueError( 'Instance name cannot be longer than {0} characters.'.format( MAX_INSTANCE_NAME_LENGTH ) ) def validate_tags(params): if params.tags: if len(params.tags) > MAX_TAGS_PER_INSTANCE: raise ValueError( 'Instances can only have a maximum of {0} tags.'.format( MAX_TAGS_PER_INSTANCE ) ) for tag in params.tags: if len(tag['Key']) > MAX_TAG_KEY_LENGTH: raise ValueError( 'Tag Key cannot be longer than {0} characters.'.format( MAX_TAG_KEY_LENGTH ) ) if len(tag['Value']) > MAX_TAG_VALUE_LENGTH: raise ValueError( 'Tag Value cannot be longer than {0} characters.'.format( MAX_TAG_VALUE_LENGTH ) ) def validate_iam_user_arn(params): if params.iam_user_arn and \ not re.match(IAM_USER_ARN_PATTERN, params.iam_user_arn): raise ValueError('Invalid IAM user ARN.') def validate_instance(params): if platform.system() == 'Linux': distribution = awscli.compat.linux_distribution()[0] if 'Ubuntu' in distribution: params.system = Ubuntu(params) if 'Red Hat Enterprise Linux Server' in distribution: params.system = RHEL(params) elif platform.system() == 'Windows': params.system = Windows(params) if 'system' not in params: raise RuntimeError( System.UNSUPPORTED_SYSTEM_MSG ) try: urlopen('http://169.254.169.254/latest/meta-data/', timeout=1) raise RuntimeError('Amazon EC2 instances are not supported.') except (URLError, timeout): pass def 
validate_s3_location(params, arg_name): arg_name = arg_name.replace('-', '_') if arg_name in params: s3_location = getattr(params, arg_name) if s3_location: matcher = re.match('s3://(.+?)/(.+)', str(s3_location)) if matcher: params.bucket = matcher.group(1) params.key = matcher.group(2) else: raise ValueError( '--{0} must specify the Amazon S3 URL format as ' 's3:///.'.format( arg_name.replace('_', '-') ) ) awscli-1.17.14/awscli/customizations/codedeploy/install.py0000644000000000000000000001015513620325554023640 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import errno import os import shutil import sys from awscli.customizations.commands import BasicCommand from awscli.customizations.codedeploy.utils import \ validate_region, validate_s3_location, validate_instance class Install(BasicCommand): NAME = 'install' DESCRIPTION = ( 'Configures and installs the AWS CodeDeploy Agent on the on-premises ' 'instance.' ) ARG_TABLE = [ { 'name': 'config-file', 'synopsis': '--config-file ', 'required': True, 'help_text': ( 'Required. The path to the on-premises instance configuration ' 'file.' ) }, { 'name': 'override-config', 'action': 'store_true', 'default': False, 'help_text': ( 'Optional. Overrides the on-premises instance configuration ' 'file.' ) }, { 'name': 'agent-installer', 'synopsis': '--agent-installer ', 'required': False, 'help_text': ( 'Optional. The AWS CodeDeploy Agent installer file.' ) } ] def _run_main(self, parsed_args, parsed_globals): params = parsed_args params.session = self._session validate_region(params, parsed_globals) validate_instance(params) params.system.validate_administrator() self._validate_override_config(params) self._validate_agent_installer(params) try: self._create_config(params) self._install_agent(params) except Exception as e: sys.stdout.flush() sys.stderr.write( 'ERROR\n' '{0}\n' 'Install the AWS CodeDeploy Agent on the on-premises instance ' 'by following the instructions in "Configure Existing ' 'On-Premises Instances by Using AWS CodeDeploy" in the AWS ' 'CodeDeploy User Guide.\n'.format(e) ) def _validate_override_config(self, params): if os.path.isfile(params.system.CONFIG_PATH) and \ not params.override_config: raise RuntimeError( 'The on-premises instance configuration file already exists. ' 'Specify --override-config to update the existing on-premises ' 'instance configuration file.' ) def _validate_agent_installer(self, params): validate_s3_location(params, 'agent_installer') if 'bucket' not in params: params.bucket = 'aws-codedeploy-{0}'.format(params.region) if 'key' not in params: params.key = 'latest/{0}'.format(params.system.INSTALLER) params.installer = params.system.INSTALLER else: start = params.key.rfind('/') + 1 params.installer = params.key[start:] def _create_config(self, params): sys.stdout.write( 'Creating the on-premises instance configuration file... 
' ) try: os.makedirs(params.system.CONFIG_DIR) except OSError as e: if e.errno != errno.EEXIST: raise e if params.config_file != params.system.CONFIG_PATH: shutil.copyfile(params.config_file, params.system.CONFIG_PATH) sys.stdout.write('DONE\n') def _install_agent(self, params): sys.stdout.write('Installing the AWS CodeDeploy Agent... ') params.system.install(params) sys.stdout.write('DONE\n') awscli-1.17.14/awscli/customizations/codedeploy/push.py0000644000000000000000000002466313620325554023162 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import sys import zipfile import tempfile import contextlib from datetime import datetime from botocore.exceptions import ClientError from awscli.compat import six from awscli.customizations.codedeploy.utils import validate_s3_location from awscli.customizations.commands import BasicCommand from awscli.compat import ZIP_COMPRESSION_MODE ONE_MB = 1 << 20 MULTIPART_LIMIT = 6 * ONE_MB class Push(BasicCommand): NAME = 'push' DESCRIPTION = ( 'Bundles and uploads to Amazon Simple Storage Service (Amazon S3) an ' 'application revision, which is a zip archive file that contains ' 'deployable content and an accompanying Application Specification ' 'file (AppSpec file). If the upload is successful, a message is ' 'returned that describes how to call the create-deployment command to ' 'deploy the application revision from Amazon S3 to target Amazon ' 'Elastic Compute Cloud (Amazon EC2) instances.' ) ARG_TABLE = [ { 'name': 'application-name', 'synopsis': '--application-name ', 'required': True, 'help_text': ( 'Required. The name of the AWS CodeDeploy application to be ' 'associated with the application revision.' ) }, { 'name': 's3-location', 'synopsis': '--s3-location s3:///', 'required': True, 'help_text': ( 'Required. Information about the location of the application ' 'revision to be uploaded to Amazon S3. You must specify both ' 'a bucket and a key that represent the Amazon S3 bucket name ' 'and the object key name. Content will be zipped before ' 'uploading. Use the format s3://\/\' ) }, { 'name': 'ignore-hidden-files', 'action': 'store_true', 'default': False, 'group_name': 'ignore-hidden-files', 'help_text': ( 'Optional. Set the --ignore-hidden-files flag to not bundle ' 'and upload hidden files to Amazon S3; otherwise, set the ' '--no-ignore-hidden-files flag (the default) to bundle and ' 'upload hidden files to Amazon S3.' ) }, { 'name': 'no-ignore-hidden-files', 'action': 'store_true', 'default': False, 'group_name': 'ignore-hidden-files' }, { 'name': 'source', 'synopsis': '--source ', 'default': '.', 'help_text': ( 'Optional. The location of the deployable content and the ' 'accompanying AppSpec file on the development machine to be ' 'zipped and uploaded to Amazon S3. If not specified, the ' 'current directory is used.' ) }, { 'name': 'description', 'synopsis': '--description ', 'help_text': ( 'Optional. A comment that summarizes the application ' 'revision. 
If not specified, the default string "Uploaded by ' 'AWS CLI \'time\' UTC" is used, where \'time\' is the current ' 'system time in Coordinated Universal Time (UTC).' ) } ] def _run_main(self, parsed_args, parsed_globals): self._validate_args(parsed_args) self.codedeploy = self._session.create_client( 'codedeploy', region_name=parsed_globals.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) self.s3 = self._session.create_client( 's3', region_name=parsed_globals.region ) self._push(parsed_args) def _validate_args(self, parsed_args): validate_s3_location(parsed_args, 's3_location') if parsed_args.ignore_hidden_files \ and parsed_args.no_ignore_hidden_files: raise RuntimeError( 'You cannot specify both --ignore-hidden-files and ' '--no-ignore-hidden-files.' ) if not parsed_args.description: parsed_args.description = ( 'Uploaded by AWS CLI {0} UTC'.format( datetime.utcnow().isoformat() ) ) def _push(self, params): with self._compress( params.source, params.ignore_hidden_files ) as bundle: try: upload_response = self._upload_to_s3(params, bundle) params.eTag = upload_response['ETag'].replace('"', "") if 'VersionId' in upload_response: params.version = upload_response['VersionId'] except Exception as e: raise RuntimeError( 'Failed to upload \'%s\' to \'%s\': %s' % (params.source, params.s3_location, str(e)) ) self._register_revision(params) if 'version' in params: version_string = ',version={0}'.format(params.version) else: version_string = '' s3location_string = ( '--s3-location bucket={0},key={1},' 'bundleType=zip,eTag={2}{3}'.format( params.bucket, params.key, params.eTag, version_string ) ) sys.stdout.write( 'To deploy with this revision, run:\n' 'aws deploy create-deployment ' '--application-name {0} {1} ' '--deployment-group-name ' '--deployment-config-name ' '--description \n'.format( params.application_name, s3location_string ) ) @contextlib.contextmanager def _compress(self, source, ignore_hidden_files=False): source_path = os.path.abspath(source) appspec_path = os.path.sep.join([source_path, 'appspec.yml']) with tempfile.TemporaryFile('w+b') as tf: zf = zipfile.ZipFile(tf, 'w', allowZip64=True) # Using 'try'/'finally' instead of 'with' statement since ZipFile # does not have support context manager in Python 2.6. 
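# Walk the source tree (skipping hidden files and directories when
# --ignore-hidden-files is set), add each file to the archive under a
# path relative to the source root, and require an appspec.yml at the
# bundle root.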
try: contains_appspec = False for root, dirs, files in os.walk(source, topdown=True): if ignore_hidden_files: files = [fn for fn in files if not fn.startswith('.')] dirs[:] = [dn for dn in dirs if not dn.startswith('.')] for fn in files: filename = os.path.join(root, fn) filename = os.path.abspath(filename) arcname = filename[len(source_path) + 1:] if filename == appspec_path: contains_appspec = True zf.write(filename, arcname, ZIP_COMPRESSION_MODE) if not contains_appspec: raise RuntimeError( '{0} was not found'.format(appspec_path) ) finally: zf.close() yield tf def _upload_to_s3(self, params, bundle): size_remaining = self._bundle_size(bundle) if size_remaining < MULTIPART_LIMIT: return self.s3.put_object( Bucket=params.bucket, Key=params.key, Body=bundle ) else: return self._multipart_upload_to_s3( params, bundle, size_remaining ) def _bundle_size(self, bundle): bundle.seek(0, 2) size = bundle.tell() bundle.seek(0) return size def _multipart_upload_to_s3(self, params, bundle, size_remaining): create_response = self.s3.create_multipart_upload( Bucket=params.bucket, Key=params.key ) upload_id = create_response['UploadId'] try: part_num = 1 multipart_list = [] bundle.seek(0) while size_remaining > 0: data = bundle.read(MULTIPART_LIMIT) upload_response = self.s3.upload_part( Bucket=params.bucket, Key=params.key, UploadId=upload_id, PartNumber=part_num, Body=six.BytesIO(data) ) multipart_list.append({ 'PartNumber': part_num, 'ETag': upload_response['ETag'] }) part_num += 1 size_remaining -= len(data) return self.s3.complete_multipart_upload( Bucket=params.bucket, Key=params.key, UploadId=upload_id, MultipartUpload={'Parts': multipart_list} ) except ClientError as e: self.s3.abort_multipart_upload( Bucket=params.bucket, Key=params.key, UploadId=upload_id ) raise e def _register_revision(self, params): revision = { 'revisionType': 'S3', 's3Location': { 'bucket': params.bucket, 'key': params.key, 'bundleType': 'zip', 'eTag': params.eTag } } if 'version' in params: revision['s3Location']['version'] = params.version self.codedeploy.register_application_revision( applicationName=params.application_name, revision=revision, description=params.description ) awscli-1.17.14/awscli/customizations/codedeploy/register.py0000644000000000000000000001603413620325554024020 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys from awscli.customizations.commands import BasicCommand from awscli.customizations.codedeploy.systems import DEFAULT_CONFIG_FILE from awscli.customizations.codedeploy.utils import \ validate_region, validate_instance_name, validate_tags, \ validate_iam_user_arn, INSTANCE_NAME_ARG, IAM_USER_ARN_ARG class Register(BasicCommand): NAME = 'register' DESCRIPTION = ( "Creates an IAM user for the on-premises instance, if not provided, " "and saves the user's credentials to an on-premises instance " "configuration file; registers the on-premises instance with AWS " "CodeDeploy; and optionally adds tags to the on-premises instance." 
) TAGS_SCHEMA = { "type": "array", "items": { "type": "object", "properties": { "Key": { "description": "The tag key.", "type": "string", "required": True }, "Value": { "description": "The tag value.", "type": "string", "required": True } } } } ARG_TABLE = [ INSTANCE_NAME_ARG, { 'name': 'tags', 'synopsis': '--tags ', 'required': False, 'nargs': '+', 'schema': TAGS_SCHEMA, 'help_text': ( 'Optional. The list of key/value pairs to tag the on-premises ' 'instance.' ) }, IAM_USER_ARN_ARG ] def _run_main(self, parsed_args, parsed_globals): params = parsed_args params.session = self._session validate_region(params, parsed_globals) validate_instance_name(params) validate_tags(params) validate_iam_user_arn(params) self.codedeploy = self._session.create_client( 'codedeploy', region_name=params.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) self.iam = self._session.create_client( 'iam', region_name=params.region ) try: if not params.iam_user_arn: self._create_iam_user(params) self._create_access_key(params) self._create_user_policy(params) self._create_config(params) self._register_instance(params) if params.tags: self._add_tags(params) sys.stdout.write( 'Copy the on-premises configuration file named {0} to the ' 'on-premises instance, and run the following command on the ' 'on-premises instance to install and configure the AWS ' 'CodeDeploy Agent:\n' 'aws deploy install --config-file {0}\n'.format( DEFAULT_CONFIG_FILE ) ) except Exception as e: sys.stdout.flush() sys.stderr.write( 'ERROR\n' '{0}\n' 'Register the on-premises instance by following the ' 'instructions in "Configure Existing On-Premises Instances by ' 'Using AWS CodeDeploy" in the AWS CodeDeploy User ' 'Guide.\n'.format(e) ) def _create_iam_user(self, params): sys.stdout.write('Creating the IAM user... ') params.user_name = params.instance_name response = self.iam.create_user( Path='/AWS/CodeDeploy/', UserName=params.user_name ) params.iam_user_arn = response['User']['Arn'] sys.stdout.write( 'DONE\n' 'IamUserArn: {0}\n'.format( params.iam_user_arn ) ) def _create_access_key(self, params): sys.stdout.write('Creating the IAM user access key... ') response = self.iam.create_access_key( UserName=params.user_name ) params.access_key_id = response['AccessKey']['AccessKeyId'] params.secret_access_key = response['AccessKey']['SecretAccessKey'] sys.stdout.write( 'DONE\n' 'AccessKeyId: {0}\n' 'SecretAccessKey: {1}\n'.format( params.access_key_id, params.secret_access_key ) ) def _create_user_policy(self, params): sys.stdout.write('Creating the IAM user policy... 
') params.policy_name = 'codedeploy-agent' params.policy_document = ( '{\n' ' "Version": "2012-10-17",\n' ' "Statement": [ {\n' ' "Action": [ "s3:Get*", "s3:List*" ],\n' ' "Effect": "Allow",\n' ' "Resource": "*"\n' ' } ]\n' '}' ) self.iam.put_user_policy( UserName=params.user_name, PolicyName=params.policy_name, PolicyDocument=params.policy_document ) sys.stdout.write( 'DONE\n' 'PolicyName: {0}\n' 'PolicyDocument: {1}\n'.format( params.policy_name, params.policy_document ) ) def _create_config(self, params): sys.stdout.write( 'Creating the on-premises instance configuration file named {0}' '...'.format(DEFAULT_CONFIG_FILE) ) with open(DEFAULT_CONFIG_FILE, 'w') as f: f.write( '---\n' 'region: {0}\n' 'iam_user_arn: {1}\n' 'aws_access_key_id: {2}\n' 'aws_secret_access_key: {3}\n'.format( params.region, params.iam_user_arn, params.access_key_id, params.secret_access_key ) ) sys.stdout.write('DONE\n') def _register_instance(self, params): sys.stdout.write('Registering the on-premises instance... ') self.codedeploy.register_on_premises_instance( instanceName=params.instance_name, iamUserArn=params.iam_user_arn ) sys.stdout.write('DONE\n') def _add_tags(self, params): sys.stdout.write('Adding tags to the on-premises instance... ') self.codedeploy.add_tags_to_on_premises_instances( tags=params.tags, instanceNames=[params.instance_name] ) sys.stdout.write('DONE\n') awscli-1.17.14/awscli/customizations/codedeploy/uninstall.py0000644000000000000000000000423213620325554024202 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import sys import errno from awscli.customizations.codedeploy.utils import validate_instance, \ validate_region from awscli.customizations.commands import BasicCommand class Uninstall(BasicCommand): NAME = 'uninstall' DESCRIPTION = ( 'Uninstalls the AWS CodeDeploy Agent from the on-premises instance.' ) def _run_main(self, parsed_args, parsed_globals): params = parsed_args params.session = self._session validate_region(params, parsed_globals) validate_instance(params) params.system.validate_administrator() try: self._uninstall_agent(params) self._delete_config_file(params) except Exception as e: sys.stdout.flush() sys.stderr.write( 'ERROR\n' '{0}\n' 'Uninstall the AWS CodeDeploy Agent on the on-premises ' 'instance by following the instructions in "Configure ' 'Existing On-Premises Instances by Using AWS CodeDeploy" in ' 'the AWS CodeDeploy User Guide.\n'.format(e) ) def _uninstall_agent(self, params): sys.stdout.write('Uninstalling the AWS CodeDeploy Agent... ') params.system.uninstall(params) sys.stdout.write('DONE\n') def _delete_config_file(self, params): sys.stdout.write('Deleting the on-premises instance configuration... ') try: os.remove(params.system.CONFIG_PATH) except OSError as e: if e.errno != errno.ENOENT: raise e sys.stdout.write('DONE\n') awscli-1.17.14/awscli/customizations/codedeploy/deregister.py0000644000000000000000000001407513620325554024334 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. 
or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys from botocore.exceptions import ClientError from awscli.customizations.commands import BasicCommand from awscli.customizations.codedeploy.utils import \ validate_region, validate_instance_name, INSTANCE_NAME_ARG class Deregister(BasicCommand): NAME = 'deregister' DESCRIPTION = ( 'Removes any tags from the on-premises instance; deregisters the ' 'on-premises instance from AWS CodeDeploy; and, unless requested ' 'otherwise, deletes the IAM user for the on-premises instance.' ) ARG_TABLE = [ INSTANCE_NAME_ARG, { 'name': 'no-delete-iam-user', 'action': 'store_true', 'default': False, 'help_text': ( 'Optional. Do not delete the IAM user for the registered ' 'on-premises instance.' ) } ] def _run_main(self, parsed_args, parsed_globals): params = parsed_args params.session = self._session validate_region(params, parsed_globals) validate_instance_name(params) self.codedeploy = self._session.create_client( 'codedeploy', region_name=params.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) self.iam = self._session.create_client( 'iam', region_name=params.region ) try: self._get_instance_info(params) if params.tags: self._remove_tags(params) self._deregister_instance(params) if not params.no_delete_iam_user: self._delete_user_policy(params) self._delete_access_key(params) self._delete_iam_user(params) sys.stdout.write( 'Run the following command on the on-premises instance to ' 'uninstall the codedeploy-agent:\n' 'aws deploy uninstall\n' ) except Exception as e: sys.stdout.flush() sys.stderr.write( 'ERROR\n' '{0}\n' 'Deregister the on-premises instance by following the ' 'instructions in "Configure Existing On-Premises Instances by ' 'Using AWS CodeDeploy" in the AWS CodeDeploy User ' 'Guide.\n'.format(e) ) def _get_instance_info(self, params): sys.stdout.write('Retrieving on-premises instance information... ') response = self.codedeploy.get_on_premises_instance( instanceName=params.instance_name ) params.iam_user_arn = response['instanceInfo']['iamUserArn'] start = params.iam_user_arn.rfind('/') + 1 params.user_name = params.iam_user_arn[start:] params.tags = response['instanceInfo']['tags'] sys.stdout.write( 'DONE\n' 'IamUserArn: {0}\n'.format( params.iam_user_arn ) ) if params.tags: sys.stdout.write('Tags:') for tag in params.tags: sys.stdout.write( ' Key={0},Value={1}'.format(tag['Key'], tag['Value']) ) sys.stdout.write('\n') def _remove_tags(self, params): sys.stdout.write('Removing tags from the on-premises instance... ') self.codedeploy.remove_tags_from_on_premises_instances( tags=params.tags, instanceNames=[params.instance_name] ) sys.stdout.write('DONE\n') def _deregister_instance(self, params): sys.stdout.write('Deregistering the on-premises instance... ') self.codedeploy.deregister_on_premises_instance( instanceName=params.instance_name ) sys.stdout.write('DONE\n') def _delete_user_policy(self, params): sys.stdout.write('Deleting the IAM user policies... 
') list_user_policies = self.iam.get_paginator('list_user_policies') try: for response in list_user_policies.paginate( UserName=params.user_name): for policy_name in response['PolicyNames']: self.iam.delete_user_policy( UserName=params.user_name, PolicyName=policy_name ) except ClientError as e: if e.response.get('Error', {}).get('Code') != 'NoSuchEntity': raise e sys.stdout.write('DONE\n') def _delete_access_key(self, params): sys.stdout.write('Deleting the IAM user access keys... ') list_access_keys = self.iam.get_paginator('list_access_keys') try: for response in list_access_keys.paginate( UserName=params.user_name): for access_key in response['AccessKeyMetadata']: self.iam.delete_access_key( UserName=params.user_name, AccessKeyId=access_key['AccessKeyId'] ) except ClientError as e: if e.response.get('Error', {}).get('Code') != 'NoSuchEntity': raise e sys.stdout.write('DONE\n') def _delete_iam_user(self, params): sys.stdout.write('Deleting the IAM user ({0})... '.format( params.user_name )) try: self.iam.delete_user(UserName=params.user_name) except ClientError as e: if e.response.get('Error', {}).get('Code') != 'NoSuchEntity': raise e sys.stdout.write('DONE\n') awscli-1.17.14/awscli/customizations/codedeploy/__init__.py0000644000000000000000000000106513620325554023731 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. awscli-1.17.14/awscli/customizations/codedeploy/systems.py0000644000000000000000000001675513620325554023715 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import ctypes import os import subprocess DEFAULT_CONFIG_FILE = 'codedeploy.onpremises.yml' class System: UNSUPPORTED_SYSTEM_MSG = ( 'Only Ubuntu Server, Red Hat Enterprise Linux Server and ' 'Windows Server operating systems are supported.' ) def __init__(self, params): self.session = params.session self.s3 = self.session.create_client( 's3', region_name=params.region ) def validate_administrator(self): raise NotImplementedError('validate_administrator') def install(self, params): raise NotImplementedError('install') def uninstall(self, params): raise NotImplementedError('uninstall') class Windows(System): CONFIG_DIR = r'C:\ProgramData\Amazon\CodeDeploy' CONFIG_FILE = 'conf.onpremises.yml' CONFIG_PATH = r'{0}\{1}'.format(CONFIG_DIR, CONFIG_FILE) INSTALLER = 'codedeploy-agent.msi' def validate_administrator(self): if not ctypes.windll.shell32.IsUserAnAdmin(): raise RuntimeError( 'You must run this command as an Administrator.' 
) def install(self, params): if 'installer' in params: self.INSTALLER = params.installer process = subprocess.Popen( [ 'powershell.exe', '-Command', 'Stop-Service', '-Name', 'codedeployagent' ], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) (output, error) = process.communicate() not_found = ( "Cannot find any service with service name 'codedeployagent'" ) if process.returncode != 0 and not_found not in error: raise RuntimeError( 'Failed to stop the AWS CodeDeploy Agent:\n{0}'.format(error) ) response = self.s3.get_object(Bucket=params.bucket, Key=params.key) with open(self.INSTALLER, 'wb') as f: f.write(response['Body'].read()) subprocess.check_call( [ r'.\{0}'.format(self.INSTALLER), '/quiet', '/l', r'.\codedeploy-agent-install-log.txt' ], shell=True ) subprocess.check_call([ 'powershell.exe', '-Command', 'Restart-Service', '-Name', 'codedeployagent' ]) process = subprocess.Popen( [ 'powershell.exe', '-Command', 'Get-Service', '-Name', 'codedeployagent' ], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) (output, error) = process.communicate() if "Running" not in output: raise RuntimeError( 'The AWS CodeDeploy Agent did not start after installation.' ) def uninstall(self, params): process = subprocess.Popen( [ 'powershell.exe', '-Command', 'Stop-Service', '-Name', 'codedeployagent' ], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) (output, error) = process.communicate() not_found = ( "Cannot find any service with service name 'codedeployagent'" ) if process.returncode == 0: self._remove_agent() elif not_found not in error: raise RuntimeError( 'Failed to stop the AWS CodeDeploy Agent:\n{0}'.format(error) ) def _remove_agent(self): process = subprocess.Popen( [ 'wmic', 'product', 'where', 'name="CodeDeploy Host Agent"', 'call', 'uninstall', '/nointeractive' ], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) (output, error) = process.communicate() if process.returncode != 0: raise RuntimeError( 'Failed to uninstall the AWS CodeDeploy Agent:\n{0}'.format( error ) ) class Linux(System): CONFIG_DIR = '/etc/codedeploy-agent/conf' CONFIG_FILE = DEFAULT_CONFIG_FILE CONFIG_PATH = '{0}/{1}'.format(CONFIG_DIR, CONFIG_FILE) INSTALLER = 'install' def validate_administrator(self): if os.geteuid() != 0: raise RuntimeError('You must run this command as sudo.') def install(self, params): if 'installer' in params: self.INSTALLER = params.installer self._update_system(params) self._stop_agent(params) response = self.s3.get_object(Bucket=params.bucket, Key=params.key) with open(self.INSTALLER, 'wb') as f: f.write(response['Body'].read()) subprocess.check_call( ['chmod', '+x', './{0}'.format(self.INSTALLER)] ) credentials = self.session.get_credentials() environment = os.environ.copy() environment['AWS_REGION'] = params.region environment['AWS_ACCESS_KEY_ID'] = credentials.access_key environment['AWS_SECRET_ACCESS_KEY'] = credentials.secret_key if credentials.token is not None: environment['AWS_SESSION_TOKEN'] = credentials.token subprocess.check_call( ['./{0}'.format(self.INSTALLER), 'auto'], env=environment ) def uninstall(self, params): process = self._stop_agent(params) if process.returncode == 0: self._remove_agent(params) def _update_system(self, params): raise NotImplementedError('preinstall') def _remove_agent(self, params): raise NotImplementedError('remove_agent') def _stop_agent(self, params): process = subprocess.Popen( ['service', 'codedeploy-agent', 'stop'], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) (output, error) = process.communicate() if process.returncode != 0 and 
params.not_found_msg not in error: raise RuntimeError( 'Failed to stop the AWS CodeDeploy Agent:\n{0}'.format(error) ) return process class Ubuntu(Linux): def _update_system(self, params): subprocess.check_call(['apt-get', '-y', 'update']) subprocess.check_call(['apt-get', '-y', 'install', 'ruby2.0']) def _remove_agent(self, params): subprocess.check_call(['dpkg', '-r', 'codedeploy-agent']) def _stop_agent(self, params): params.not_found_msg = 'codedeploy-agent: unrecognized service' return Linux._stop_agent(self, params) class RHEL(Linux): def _update_system(self, params): subprocess.check_call(['yum', '-y', 'install', 'ruby']) def _remove_agent(self, params): subprocess.check_call(['yum', '-y', 'erase', 'codedeploy-agent']) def _stop_agent(self, params): params.not_found_msg = 'Redirecting to /bin/systemctl stop codedeploy-agent.service' return Linux._stop_agent(self, params) awscli-1.17.14/awscli/customizations/codedeploy/codedeploy.py0000644000000000000000000000424413620325554024323 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations import utils from awscli.customizations.codedeploy.locationargs import \ modify_revision_arguments from awscli.customizations.codedeploy.push import Push from awscli.customizations.codedeploy.register import Register from awscli.customizations.codedeploy.deregister import Deregister from awscli.customizations.codedeploy.install import Install from awscli.customizations.codedeploy.uninstall import Uninstall def initialize(cli): """ The entry point for CodeDeploy high level commands. """ cli.register( 'building-command-table.main', change_name ) cli.register( 'building-command-table.deploy', inject_commands ) cli.register( 'building-argument-table.deploy.get-application-revision', modify_revision_arguments ) cli.register( 'building-argument-table.deploy.register-application-revision', modify_revision_arguments ) cli.register( 'building-argument-table.deploy.create-deployment', modify_revision_arguments ) def change_name(command_table, session, **kwargs): """ Change all existing 'aws codedeploy' commands to 'aws deploy' commands. """ utils.rename_command(command_table, 'codedeploy', 'deploy') def inject_commands(command_table, session, **kwargs): """ Inject custom 'aws deploy' commands. """ command_table['push'] = Push(session) command_table['register'] = Register(session) command_table['deregister'] = Deregister(session) command_table['install'] = Install(session) command_table['uninstall'] = Uninstall(session) awscli-1.17.14/awscli/customizations/codedeploy/locationargs.py0000644000000000000000000001342613620325554024663 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. 
This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.argprocess import unpack_cli_arg from awscli.arguments import CustomArgument from awscli.arguments import create_argument_model_from_schema S3_LOCATION_ARG_DESCRIPTION = { 'name': 's3-location', 'required': False, 'help_text': ( 'Information about the location of the application revision in Amazon ' 'S3. You must specify the bucket, the key, and bundleType. ' 'Optionally, you can also specify an eTag and version.' ) } S3_LOCATION_SCHEMA = { "type": "object", "properties": { "bucket": { "type": "string", "description": "The Amazon S3 bucket name.", "required": True }, "key": { "type": "string", "description": "The Amazon S3 object key name.", "required": True }, "bundleType": { "type": "string", "description": "The format of the bundle stored in Amazon S3.", "enum": ["tar", "tgz", "zip"], "required": True }, "eTag": { "type": "string", "description": "The Amazon S3 object eTag.", "required": False }, "version": { "type": "string", "description": "The Amazon S3 object version.", "required": False } } } GITHUB_LOCATION_ARG_DESCRIPTION = { 'name': 'github-location', 'required': False, 'help_text': ( 'Information about the location of the application revision in ' 'GitHub. You must specify the repository and commit ID that ' 'references the application revision. For the repository, use the ' 'format GitHub-account/repository-name or GitHub-org/repository-name. ' 'For the commit ID, use the SHA1 Git commit reference.' ) } GITHUB_LOCATION_SCHEMA = { "type": "object", "properties": { "repository": { "type": "string", "description": ( "The GitHub account or organization and repository. Specify " "as GitHub-account/repository or GitHub-org/repository." ), "required": True }, "commitId": { "type": "string", "description": "The SHA1 Git commit reference.", "required": True } } } def modify_revision_arguments(argument_table, session, **kwargs): s3_model = create_argument_model_from_schema(S3_LOCATION_SCHEMA) argument_table[S3_LOCATION_ARG_DESCRIPTION['name']] = ( S3LocationArgument( argument_model=s3_model, session=session, **S3_LOCATION_ARG_DESCRIPTION ) ) github_model = create_argument_model_from_schema(GITHUB_LOCATION_SCHEMA) argument_table[GITHUB_LOCATION_ARG_DESCRIPTION['name']] = ( GitHubLocationArgument( argument_model=github_model, session=session, **GITHUB_LOCATION_ARG_DESCRIPTION ) ) argument_table['revision'].required = False class LocationArgument(CustomArgument): def __init__(self, session, *args, **kwargs): super(LocationArgument, self).__init__(*args, **kwargs) self._session = session def add_to_params(self, parameters, value): if value is None: return parsed = self._session.emit_first_non_none_response( 'process-cli-arg.codedeploy.%s' % self.name, param=self.argument_model, cli_argument=self, value=value, operation=None ) if parsed is None: parsed = unpack_cli_arg(self, value) parameters['revision'] = self.build_revision_location(parsed) def build_revision_location(self, value_dict): """ Repack the input structure into a revisionLocation. 
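Subclasses convert the parsed --s3-location or --github-location value into the revision structure expected by the CodeDeploy API.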
""" raise NotImplementedError("build_revision_location") class S3LocationArgument(LocationArgument): def build_revision_location(self, value_dict): required = ['bucket', 'key', 'bundleType'] valid = lambda k: value_dict.get(k, False) if not all(map(valid, required)): raise RuntimeError( '--s3-location must specify bucket, key and bundleType.' ) revision = { "revisionType": "S3", "s3Location": { "bucket": value_dict['bucket'], "key": value_dict['key'], "bundleType": value_dict['bundleType'] } } if 'eTag' in value_dict: revision['s3Location']['eTag'] = value_dict['eTag'] if 'version' in value_dict: revision['s3Location']['version'] = value_dict['version'] return revision class GitHubLocationArgument(LocationArgument): def build_revision_location(self, value_dict): required = ['repository', 'commitId'] valid = lambda k: value_dict.get(k, False) if not all(map(valid, required)): raise RuntimeError( '--github-location must specify repository and commitId.' ) return { "revisionType": "GitHub", "gitHubLocation": { "repository": value_dict['repository'], "commitId": value_dict['commitId'] } } awscli-1.17.14/awscli/customizations/cloudformation/0000755000000000000000000000000013620325757022521 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/cloudformation/deployer.py0000644000000000000000000002272313620325554024717 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys import time import logging import botocore import collections from awscli.customizations.cloudformation import exceptions from awscli.customizations.cloudformation.artifact_exporter import mktempfile, parse_s3_url from datetime import datetime LOG = logging.getLogger(__name__) ChangeSetResult = collections.namedtuple( "ChangeSetResult", ["changeset_id", "changeset_type"]) class Deployer(object): def __init__(self, cloudformation_client, changeset_prefix="awscli-cloudformation-package-deploy-"): self._client = cloudformation_client self.changeset_prefix = changeset_prefix def has_stack(self, stack_name): """ Checks if a CloudFormation stack with given name exists :param stack_name: Name or ID of the stack :return: True if stack exists. False otherwise """ try: resp = self._client.describe_stacks(StackName=stack_name) if len(resp["Stacks"]) != 1: return False # When you run CreateChangeSet on a a stack that does not exist, # CloudFormation will create a stack and set it's status # REVIEW_IN_PROGRESS. However this stack is cannot be manipulated # by "update" commands. Under this circumstances, we treat like # this stack does not exist and call CreateChangeSet will # ChangeSetType set to CREATE and not UPDATE. stack = resp["Stacks"][0] return stack["StackStatus"] != "REVIEW_IN_PROGRESS" except botocore.exceptions.ClientError as e: # If a stack does not exist, describe_stacks will throw an # exception. Unfortunately we don't have a better way than parsing # the exception msg to understand the nature of this exception. 
msg = str(e) if "Stack with id {0} does not exist".format(stack_name) in msg: LOG.debug("Stack with id {0} does not exist".format( stack_name)) return False else: # We don't know anything about this exception. Don't handle LOG.debug("Unable to get stack details.", exc_info=e) raise e def create_changeset(self, stack_name, cfn_template, parameter_values, capabilities, role_arn, notification_arns, s3_uploader, tags): """ Call Cloudformation to create a changeset and wait for it to complete :param stack_name: Name or ID of stack :param cfn_template: CloudFormation template string :param parameter_values: Template parameters object :param capabilities: Array of capabilities passed to CloudFormation :param tags: Array of tags passed to CloudFormation :return: """ now = datetime.utcnow().isoformat() description = "Created by AWS CLI at {0} UTC".format(now) # Each changeset will get a unique name based on time changeset_name = self.changeset_prefix + str(int(time.time())) if not self.has_stack(stack_name): changeset_type = "CREATE" # When creating a new stack, UsePreviousValue=True is invalid. # For such parameters, users should either override with new value, # or set a Default value in template to successfully create a stack. parameter_values = [x for x in parameter_values if not x.get("UsePreviousValue", False)] else: changeset_type = "UPDATE" # UsePreviousValue not valid if parameter is new summary = self._client.get_template_summary(StackName=stack_name) existing_parameters = [parameter['ParameterKey'] for parameter in \ summary['Parameters']] parameter_values = [x for x in parameter_values if not (x.get("UsePreviousValue", False) and \ x["ParameterKey"] not in existing_parameters)] kwargs = { 'ChangeSetName': changeset_name, 'StackName': stack_name, 'TemplateBody': cfn_template, 'ChangeSetType': changeset_type, 'Parameters': parameter_values, 'Capabilities': capabilities, 'Description': description, 'Tags': tags, } # If an S3 uploader is available, use TemplateURL to deploy rather than # TemplateBody. This is required for large templates. if s3_uploader: with mktempfile() as temporary_file: temporary_file.write(kwargs.pop('TemplateBody')) temporary_file.flush() url = s3_uploader.upload_with_dedup( temporary_file.name, "template") # TemplateUrl property requires S3 URL to be in path-style format parts = parse_s3_url(url, version_property="Version") kwargs['TemplateURL'] = s3_uploader.to_path_style_s3_url(parts["Key"], parts.get("Version", None)) # don't set these arguments if not specified to use existing values if role_arn is not None: kwargs['RoleARN'] = role_arn if notification_arns is not None: kwargs['NotificationARNs'] = notification_arns try: resp = self._client.create_change_set(**kwargs) return ChangeSetResult(resp["Id"], changeset_type) except Exception as ex: LOG.debug("Unable to create changeset", exc_info=ex) raise ex def wait_for_changeset(self, changeset_id, stack_name): """ Waits until the changeset creation completes :param changeset_id: ID or name of the changeset :param stack_name: Stack name :return: Latest status of the create-change-set operation """ sys.stdout.write("\nWaiting for changeset to be created..\n") sys.stdout.flush() # Wait for changeset to be created waiter = self._client.get_waiter("change_set_create_complete") # Poll every 5 seconds. 
Changeset creation should be fast waiter_config = {'Delay': 5} try: waiter.wait(ChangeSetName=changeset_id, StackName=stack_name, WaiterConfig=waiter_config) except botocore.exceptions.WaiterError as ex: LOG.debug("Create changeset waiter exception", exc_info=ex) resp = ex.last_response status = resp["Status"] reason = resp["StatusReason"] if status == "FAILED" and \ "The submitted information didn't contain changes." in reason or \ "No updates are to be performed" in reason: raise exceptions.ChangeEmptyError(stack_name=stack_name) raise RuntimeError("Failed to create the changeset: {0} " "Status: {1}. Reason: {2}" .format(ex, status, reason)) def execute_changeset(self, changeset_id, stack_name): """ Calls CloudFormation to execute changeset :param changeset_id: ID of the changeset :param stack_name: Name or ID of the stack :return: Response from execute-change-set call """ return self._client.execute_change_set( ChangeSetName=changeset_id, StackName=stack_name) def wait_for_execute(self, stack_name, changeset_type): sys.stdout.write("Waiting for stack create/update to complete\n") sys.stdout.flush() # Pick the right waiter if changeset_type == "CREATE": waiter = self._client.get_waiter("stack_create_complete") elif changeset_type == "UPDATE": waiter = self._client.get_waiter("stack_update_complete") else: raise RuntimeError("Invalid changeset type {0}" .format(changeset_type)) # Poll every 5 seconds. Optimizing for the case when the stack has only # minimal changes, such the Code for Lambda Function waiter_config = { 'Delay': 5, 'MaxAttempts': 720, } try: waiter.wait(StackName=stack_name, WaiterConfig=waiter_config) except botocore.exceptions.WaiterError as ex: LOG.debug("Execute changeset waiter exception", exc_info=ex) raise exceptions.DeployFailedError(stack_name=stack_name) def create_and_wait_for_changeset(self, stack_name, cfn_template, parameter_values, capabilities, role_arn, notification_arns, s3_uploader, tags): result = self.create_changeset( stack_name, cfn_template, parameter_values, capabilities, role_arn, notification_arns, s3_uploader, tags) self.wait_for_changeset(result.changeset_id, stack_name) return result awscli-1.17.14/awscli/customizations/cloudformation/deploy.py0000644000000000000000000003376513620325554024400 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import sys import logging from botocore.client import Config from awscli.customizations.cloudformation import exceptions from awscli.customizations.cloudformation.deployer import Deployer from awscli.customizations.s3uploader import S3Uploader from awscli.customizations.cloudformation.yamlhelper import yaml_parse from awscli.customizations.commands import BasicCommand from awscli.compat import get_stdout_text_writer from awscli.utils import write_exception LOG = logging.getLogger(__name__) class DeployCommand(BasicCommand): MSG_NO_EXECUTE_CHANGESET = \ ("Changeset created successfully. 
Run the following command to " "review changes:" "\n" "aws cloudformation describe-change-set --change-set-name " "{changeset_id}" "\n") MSG_EXECUTE_SUCCESS = "Successfully created/updated stack - {stack_name}\n" PARAMETER_OVERRIDE_CMD = "parameter-overrides" TAGS_CMD = "tags" NAME = 'deploy' DESCRIPTION = BasicCommand.FROM_FILE("cloudformation", "_deploy_description.rst") ARG_TABLE = [ { 'name': 'template-file', 'required': True, 'help_text': ( 'The path where your AWS CloudFormation' ' template is located.' ) }, { 'name': 'stack-name', 'action': 'store', 'required': True, 'help_text': ( 'The name of the AWS CloudFormation stack you\'re deploying to.' ' If you specify an existing stack, the command updates the' ' stack. If you specify a new stack, the command creates it.' ) }, { 'name': 's3-bucket', 'required': False, 'help_text': ( 'The name of the S3 bucket where this command uploads your ' 'CloudFormation template. This is required the deployments of ' 'templates sized greater than 51,200 bytes' ) }, { "name": "force-upload", "action": "store_true", "help_text": ( 'Indicates whether to override existing files in the S3 bucket.' ' Specify this flag to upload artifacts even if they ' ' match existing artifacts in the S3 bucket.' ) }, { 'name': 's3-prefix', 'help_text': ( 'A prefix name that the command adds to the' ' artifacts\' name when it uploads them to the S3 bucket.' ' The prefix name is a path name (folder name) for' ' the S3 bucket.' ) }, { 'name': 'kms-key-id', 'help_text': ( 'The ID of an AWS KMS key that the command uses' ' to encrypt artifacts that are at rest in the S3 bucket.' ) }, { 'name': PARAMETER_OVERRIDE_CMD, 'action': 'store', 'required': False, 'schema': { 'type': 'array', 'items': { 'type': 'string' } }, 'default': [], 'help_text': ( 'A list of parameter structures that specify input parameters' ' for your stack template. If you\'re updating a stack and you' ' don\'t specify a parameter, the command uses the stack\'s' ' existing value. For new stacks, you must specify' ' parameters that don\'t have a default value.' ' Syntax: ParameterKey1=ParameterValue1' ' ParameterKey2=ParameterValue2 ...' ) }, { 'name': 'capabilities', 'action': 'store', 'required': False, 'schema': { 'type': 'array', 'items': { 'type': 'string', 'enum': [ 'CAPABILITY_IAM', 'CAPABILITY_NAMED_IAM' ] } }, 'default': [], 'help_text': ( 'A list of capabilities that you must specify before AWS' ' Cloudformation can create certain stacks. Some stack' ' templates might include resources that can affect' ' permissions in your AWS account, for example, by creating' ' new AWS Identity and Access Management (IAM) users. For' ' those stacks, you must explicitly acknowledge their' ' capabilities by specifying this parameter. ' ' The only valid values are CAPABILITY_IAM and' ' CAPABILITY_NAMED_IAM. If you have IAM resources, you can' ' specify either capability. If you have IAM resources with' ' custom names, you must specify CAPABILITY_NAMED_IAM. If you' ' don\'t specify this parameter, this action returns an' ' InsufficientCapabilities error.' ) }, { 'name': 'no-execute-changeset', 'action': 'store_false', 'dest': 'execute_changeset', 'required': False, 'help_text': ( 'Indicates whether to execute the change set. Specify this' ' flag if you want to view your stack changes before' ' executing the change set. The command creates an' ' AWS CloudFormation change set and then exits without' ' executing the change set. After you view the change set,' ' execute it to implement your changes.' 
) }, { 'name': 'role-arn', 'required': False, 'help_text': ( 'The Amazon Resource Name (ARN) of an AWS Identity and Access ' 'Management (IAM) role that AWS CloudFormation assumes when ' 'executing the change set.' ) }, { 'name': 'notification-arns', 'required': False, 'schema': { 'type': 'array', 'items': { 'type': 'string' } }, 'help_text': ( 'Amazon Simple Notification Service topic Amazon Resource Names' ' (ARNs) that AWS CloudFormation associates with the stack.' ) }, { 'name': 'fail-on-empty-changeset', 'required': False, 'action': 'store_true', 'group_name': 'fail-on-empty-changeset', 'dest': 'fail_on_empty_changeset', 'default': True, 'help_text': ( 'Specify if the CLI should return a non-zero exit code if ' 'there are no changes to be made to the stack. The default ' 'behavior is to return a non-zero exit code.' ) }, { 'name': 'no-fail-on-empty-changeset', 'required': False, 'action': 'store_false', 'group_name': 'fail-on-empty-changeset', 'dest': 'fail_on_empty_changeset', 'default': True, 'help_text': ( 'Causes the CLI to return an exit code of 0 if there are no ' 'changes to be made to the stack.' ) }, { 'name': TAGS_CMD, 'action': 'store', 'required': False, 'schema': { 'type': 'array', 'items': { 'type': 'string' } }, 'default': [], 'help_text': ( 'A list of tags to associate with the stack that is created' ' or updated. AWS CloudFormation also propagates these tags' ' to resources in the stack if the resource supports it.' ' Syntax: TagKey1=TagValue1 TagKey2=TagValue2 ...' ) } ] def _run_main(self, parsed_args, parsed_globals): cloudformation_client = \ self._session.create_client( 'cloudformation', region_name=parsed_globals.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl) template_path = parsed_args.template_file if not os.path.isfile(template_path): raise exceptions.InvalidTemplatePathError( template_path=template_path) # Parse parameters with open(template_path, "r") as handle: template_str = handle.read() stack_name = parsed_args.stack_name parameter_overrides = self.parse_key_value_arg( parsed_args.parameter_overrides, self.PARAMETER_OVERRIDE_CMD) tags_dict = self.parse_key_value_arg(parsed_args.tags, self.TAGS_CMD) tags = [{"Key": key, "Value": value} for key, value in tags_dict.items()] template_dict = yaml_parse(template_str) parameters = self.merge_parameters(template_dict, parameter_overrides) template_size = os.path.getsize(parsed_args.template_file) if template_size > 51200 and not parsed_args.s3_bucket: raise exceptions.DeployBucketRequiredError() bucket = parsed_args.s3_bucket if bucket: s3_client = self._session.create_client( "s3", config=Config(signature_version='s3v4'), region_name=parsed_globals.region, verify=parsed_globals.verify_ssl) s3_uploader = S3Uploader(s3_client, bucket, parsed_args.s3_prefix, parsed_args.kms_key_id, parsed_args.force_upload) else: s3_uploader = None deployer = Deployer(cloudformation_client) return self.deploy(deployer, stack_name, template_str, parameters, parsed_args.capabilities, parsed_args.execute_changeset, parsed_args.role_arn, parsed_args.notification_arns, s3_uploader, tags, parsed_args.fail_on_empty_changeset) def deploy(self, deployer, stack_name, template_str, parameters, capabilities, execute_changeset, role_arn, notification_arns, s3_uploader, tags, fail_on_empty_changeset=True): try: result = deployer.create_and_wait_for_changeset( stack_name=stack_name, cfn_template=template_str, parameter_values=parameters, capabilities=capabilities, role_arn=role_arn, 
notification_arns=notification_arns, s3_uploader=s3_uploader, tags=tags ) except exceptions.ChangeEmptyError as ex: if fail_on_empty_changeset: raise write_exception(ex, outfile=get_stdout_text_writer()) return 0 if execute_changeset: deployer.execute_changeset(result.changeset_id, stack_name) deployer.wait_for_execute(stack_name, result.changeset_type) sys.stdout.write(self.MSG_EXECUTE_SUCCESS.format( stack_name=stack_name)) else: sys.stdout.write(self.MSG_NO_EXECUTE_CHANGESET.format( changeset_id=result.changeset_id)) sys.stdout.flush() return 0 def merge_parameters(self, template_dict, parameter_overrides): """ CloudFormation CreateChangeset requires a value for every parameter from the template, either specifying a new value or use previous value. For convenience, this method will accept new parameter values and generates a dict of all parameters in a format that ChangeSet API will accept :param parameter_overrides: :return: """ parameter_values = [] if not isinstance(template_dict.get("Parameters", None), dict): return parameter_values for key, value in template_dict["Parameters"].items(): obj = { "ParameterKey": key } if key in parameter_overrides: obj["ParameterValue"] = parameter_overrides[key] else: obj["UsePreviousValue"] = True parameter_values.append(obj) return parameter_values def parse_key_value_arg(self, arg_value, argname): """ Converts arguments that are passed as list of "Key=Value" strings into a real dictionary. :param arg_value list: Array of strings, where each string is of form Key=Value :param argname string: Name of the argument that contains the value :return dict: Dictionary representing the key/value pairs """ result = {} for data in arg_value: # Split at first '=' from left key_value_pair = data.split("=", 1) if len(key_value_pair) != 2: raise exceptions.InvalidKeyValuePairArgumentError( argname=argname, value=key_value_pair) result[key_value_pair[0]] = key_value_pair[1] return result awscli-1.17.14/awscli/customizations/cloudformation/exceptions.py0000644000000000000000000000364113620325554025253 0ustar rootroot00000000000000 class CloudFormationCommandError(Exception): fmt = 'An unspecified error occurred' def __init__(self, **kwargs): msg = self.fmt.format(**kwargs) Exception.__init__(self, msg) self.kwargs = kwargs class InvalidTemplatePathError(CloudFormationCommandError): fmt = "Invalid template path {template_path}" class ChangeEmptyError(CloudFormationCommandError): fmt = "No changes to deploy. Stack {stack_name} is up to date" class InvalidLocalPathError(CloudFormationCommandError): fmt = ("Parameter {property_name} of resource {resource_id} refers " "to a file or folder that does not exist {local_path}") class InvalidTemplateUrlParameterError(CloudFormationCommandError): fmt = ("{property_name} parameter of {resource_id} resource is invalid. " "It must be a S3 URL or path to CloudFormation " "template file. Actual: {template_path}") class ExportFailedError(CloudFormationCommandError): fmt = ("Unable to upload artifact {property_value} referenced " "by {property_name} parameter of {resource_id} resource." "\n" "{ex}") class InvalidKeyValuePairArgumentError(CloudFormationCommandError): fmt = ("{value} value passed to --{argname} must be of format " "Key=Value") class DeployFailedError(CloudFormationCommandError): fmt = \ ("Failed to create/update the stack. 
Run the following command" "\n" "to fetch the list of events leading up to the failure" "\n" "aws cloudformation describe-stack-events --stack-name {stack_name}") class DeployBucketRequiredError(CloudFormationCommandError): fmt = \ ("Templates with a size greater than 51,200 bytes must be deployed " "via an S3 Bucket. Please add the --s3-bucket parameter to your " "command. The local template will be copied to that S3 bucket and " "then deployed.") awscli-1.17.14/awscli/customizations/cloudformation/yamlhelper.py0000644000000000000000000000614213620325554025233 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from botocore.compat import json from botocore.compat import OrderedDict import yaml from yaml.resolver import ScalarNode, SequenceNode from awscli.compat import six def intrinsics_multi_constructor(loader, tag_prefix, node): """ YAML constructor to parse CloudFormation intrinsics. This will return a dictionary with key being the instrinsic name """ # Get the actual tag name excluding the first exclamation tag = node.tag[1:] # Some intrinsic functions doesn't support prefix "Fn::" prefix = "Fn::" if tag in ["Ref", "Condition"]: prefix = "" cfntag = prefix + tag if tag == "GetAtt" and isinstance(node.value, six.string_types): # ShortHand notation for !GetAtt accepts Resource.Attribute format # while the standard notation is to use an array # [Resource, Attribute]. Convert shorthand to standard format value = node.value.split(".", 1) elif isinstance(node, ScalarNode): # Value of this node is scalar value = loader.construct_scalar(node) elif isinstance(node, SequenceNode): # Value of this node is an array (Ex: [1,2]) value = loader.construct_sequence(node) else: # Value of this node is an mapping (ex: {foo: bar}) value = loader.construct_mapping(node) return {cfntag: value} def _dict_representer(dumper, data): return dumper.represent_dict(data.items()) def yaml_dump(dict_to_dump): """ Dumps the dictionary as a YAML document :param dict_to_dump: :return: """ FlattenAliasDumper.add_representer(OrderedDict, _dict_representer) return yaml.dump( dict_to_dump, default_flow_style=False, Dumper=FlattenAliasDumper, ) def _dict_constructor(loader, node): # Necessary in order to make yaml merge tags work loader.flatten_mapping(node) return OrderedDict(loader.construct_pairs(node)) def yaml_parse(yamlstr): """Parse a yaml string""" try: # PyYAML doesn't support json as well as it should, so if the input # is actually just json it is better to parse it with the standard # json parser. 
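# --- Illustrative example, not part of the original source -----------------
# What the intrinsics constructor defined above produces for CloudFormation
# short-hand tags once yaml_parse falls through to the YAML branch; the
# template snippet below is hypothetical.
#
#     parsed = yaml_parse("Arn: !GetAtt MyBucket.Arn\nName: !Ref MyBucket\n")
#     parsed == {"Arn": {"Fn::GetAtt": ["MyBucket", "Arn"]},
#                "Name": {"Ref": "MyBucket"}}
#     # => True (the mapping itself is returned as an OrderedDict)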
return json.loads(yamlstr, object_pairs_hook=OrderedDict) except ValueError: yaml.SafeLoader.add_constructor(yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _dict_constructor) yaml.SafeLoader.add_multi_constructor( "!", intrinsics_multi_constructor) return yaml.safe_load(yamlstr) class FlattenAliasDumper(yaml.SafeDumper): def ignore_aliases(self, data): return True awscli-1.17.14/awscli/customizations/cloudformation/package.py0000644000000000000000000001365613620325554024474 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import logging import sys import json from botocore.client import Config from awscli.customizations.cloudformation.artifact_exporter import Template from awscli.customizations.cloudformation.yamlhelper import yaml_dump from awscli.customizations.cloudformation import exceptions from awscli.customizations.commands import BasicCommand from awscli.customizations.s3uploader import S3Uploader LOG = logging.getLogger(__name__) class PackageCommand(BasicCommand): MSG_PACKAGED_TEMPLATE_WRITTEN = ( "Successfully packaged artifacts and wrote output template " "to file {output_file_name}." "\n" "Execute the following command to deploy the packaged template" "\n" "aws cloudformation deploy --template-file {output_file_path} " "--stack-name " "\n") NAME = "package" DESCRIPTION = BasicCommand.FROM_FILE("cloudformation", "_package_description.rst") ARG_TABLE = [ { 'name': 'template-file', 'required': True, 'help_text': ( 'The path where your AWS CloudFormation' ' template is located.' ) }, { 'name': 's3-bucket', 'required': True, 'help_text': ( 'The name of the S3 bucket where this command uploads' ' the artifacts that are referenced in your template.' ) }, { 'name': 's3-prefix', 'help_text': ( 'A prefix name that the command adds to the' ' artifacts\' name when it uploads them to the S3 bucket.' ' The prefix name is a path name (folder name) for' ' the S3 bucket.' ) }, { 'name': 'kms-key-id', 'help_text': ( 'The ID of an AWS KMS key that the command uses' ' to encrypt artifacts that are at rest in the S3 bucket.' ) }, { "name": "output-template-file", "help_text": ( "The path to the file where the command writes the" " output AWS CloudFormation template. If you don't specify" " a path, the command writes the template to the standard" " output." ) }, { "name": "use-json", "action": "store_true", "help_text": ( "Indicates whether to use JSON as the format for the output AWS" " CloudFormation template. YAML is used by default." ) }, { "name": "force-upload", "action": "store_true", "help_text": ( 'Indicates whether to override existing files in the S3 bucket.' ' Specify this flag to upload artifacts even if they ' ' match existing artifacts in the S3 bucket.' ) }, { "name": "metadata", "cli_type_name": "map", "schema": { "type": "map", "key": {"type": "string"}, "value": {"type": "string"} }, "help_text": "A map of metadata to attach to *ALL* the artifacts that" " are referenced in your template." 
} ] def _run_main(self, parsed_args, parsed_globals): s3_client = self._session.create_client( "s3", config=Config(signature_version='s3v4'), region_name=parsed_globals.region, verify=parsed_globals.verify_ssl) template_path = parsed_args.template_file if not os.path.isfile(template_path): raise exceptions.InvalidTemplatePathError( template_path=template_path) bucket = parsed_args.s3_bucket self.s3_uploader = S3Uploader(s3_client, bucket, parsed_args.s3_prefix, parsed_args.kms_key_id, parsed_args.force_upload) # attach the given metadata to the artifacts to be uploaded self.s3_uploader.artifact_metadata = parsed_args.metadata output_file = parsed_args.output_template_file use_json = parsed_args.use_json exported_str = self._export(template_path, use_json) sys.stdout.write("\n") self.write_output(output_file, exported_str) if output_file: msg = self.MSG_PACKAGED_TEMPLATE_WRITTEN.format( output_file_name=output_file, output_file_path=os.path.abspath(output_file)) sys.stdout.write(msg) sys.stdout.flush() return 0 def _export(self, template_path, use_json): template = Template(template_path, os.getcwd(), self.s3_uploader) exported_template = template.export() if use_json: exported_str = json.dumps(exported_template, indent=4, ensure_ascii=False) else: exported_str = yaml_dump(exported_template) return exported_str def write_output(self, output_file_name, data): if output_file_name is None: sys.stdout.write(data) return with open(output_file_name, "w") as fp: fp.write(data) awscli-1.17.14/awscli/customizations/cloudformation/__init__.py0000644000000000000000000000240113620325554024622 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.cloudformation.package import PackageCommand from awscli.customizations.cloudformation.deploy import DeployCommand def initialize(cli): """ The entry point for CloudFormation high level commands. """ cli.register('building-command-table.cloudformation', inject_commands) def inject_commands(command_table, session, **kwargs): """ Called when the CloudFormation command table is being built. Used to inject new high level commands into the command list. These high level commands must not collide with existing low-level API call names. """ command_table['package'] = PackageCommand(session) command_table['deploy'] = DeployCommand(session) awscli-1.17.14/awscli/customizations/cloudformation/artifact_exporter.py0000644000000000000000000005322513620325554026622 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
See the License for the specific # language governing permissions and limitations under the License. import logging import os import tempfile import zipfile import contextlib import uuid import shutil from awscli.compat import six from botocore.utils import set_value_from_jmespath from awscli.compat import urlparse from contextlib import contextmanager from awscli.customizations.cloudformation import exceptions from awscli.customizations.cloudformation.yamlhelper import yaml_dump, \ yaml_parse import jmespath LOG = logging.getLogger(__name__) def is_path_value_valid(path): return isinstance(path, six.string_types) def make_abs_path(directory, path): if is_path_value_valid(path) and not os.path.isabs(path): return os.path.normpath(os.path.join(directory, path)) else: return path def is_s3_url(url): try: parse_s3_url(url) return True except ValueError: return False def is_local_folder(path): return is_path_value_valid(path) and os.path.isdir(path) def is_local_file(path): return is_path_value_valid(path) and os.path.isfile(path) def is_zip_file(path): return ( is_path_value_valid(path) and zipfile.is_zipfile(path)) def parse_s3_url(url, bucket_name_property="Bucket", object_key_property="Key", version_property=None): if isinstance(url, six.string_types) \ and url.startswith("s3://"): # Python < 2.7.10 don't parse query parameters from URI with custom # scheme such as s3://blah/blah. As a workaround, remove scheme # altogether to trigger the parser "s3://foo/bar?v=1" =>"//foo/bar?v=1" parsed = urlparse.urlparse(url[3:]) query = urlparse.parse_qs(parsed.query) if parsed.netloc and parsed.path: result = dict() result[bucket_name_property] = parsed.netloc result[object_key_property] = parsed.path.lstrip('/') # If there is a query string that has a single versionId field, # set the object version and return if version_property is not None \ and 'versionId' in query \ and len(query['versionId']) == 1: result[version_property] = query['versionId'][0] return result raise ValueError("URL given to the parse method is not a valid S3 url " "{0}".format(url)) def upload_local_artifacts(resource_id, resource_dict, property_name, parent_dir, uploader): """ Upload local artifacts referenced by the property at given resource and return S3 URL of the uploaded object. It is the responsibility of callers to ensure property value is a valid string If path refers to a file, this method will upload the file. If path refers to a folder, this method will zip the folder and upload the zip to S3. If path is omitted, this method will zip the current working folder and upload. If path is already a path to S3 object, this method does nothing. :param resource_id: Id of the CloudFormation resource :param resource_dict: Dictionary containing resource definition :param property_name: Property name of CloudFormation resource where this local path is present :param parent_dir: Resolve all relative paths with respect to this directory :param uploader: Method to upload files to S3 :return: S3 URL of the uploaded object :raise: ValueError if path is not a S3 URL or a local path """ local_path = jmespath.search(property_name, resource_dict) if local_path is None: # Build the root directory and upload to S3 local_path = parent_dir if is_s3_url(local_path): # A valid CloudFormation template will specify artifacts as S3 URLs. 
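# --- Illustrative example, not part of the original source -----------------
# What parse_s3_url (defined above) returns for a versioned S3 URI; the
# bucket, key and version are hypothetical.
#
#     parse_s3_url("s3://my-bucket/templates/app.yaml?versionId=abc123",
#                  version_property="Version")
#     # => {"Bucket": "my-bucket", "Key": "templates/app.yaml",
#     #     "Version": "abc123"}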
# This check is supporting the case where your resource does not # refer to local artifacts # Nothing to do if property value is an S3 URL LOG.debug("Property {0} of {1} is already a S3 URL" .format(property_name, resource_id)) return local_path local_path = make_abs_path(parent_dir, local_path) # Or, pointing to a folder. Zip the folder and upload if is_local_folder(local_path): return zip_and_upload(local_path, uploader) # Path could be pointing to a file. Upload the file elif is_local_file(local_path): return uploader.upload_with_dedup(local_path) raise exceptions.InvalidLocalPathError( resource_id=resource_id, property_name=property_name, local_path=local_path) def zip_and_upload(local_path, uploader): with zip_folder(local_path) as zipfile: return uploader.upload_with_dedup(zipfile) @contextmanager def zip_folder(folder_path): """ Zip the entire folder and return a file to the zip. Use this inside a "with" statement to cleanup the zipfile after it is used. :param folder_path: :return: Name of the zipfile """ filename = os.path.join( tempfile.gettempdir(), "data-" + uuid.uuid4().hex) zipfile_name = make_zip(filename, folder_path) try: yield zipfile_name finally: if os.path.exists(zipfile_name): os.remove(zipfile_name) def make_zip(filename, source_root): zipfile_name = "{0}.zip".format(filename) source_root = os.path.abspath(source_root) with open(zipfile_name, 'wb') as f: zip_file = zipfile.ZipFile(f, 'w', zipfile.ZIP_DEFLATED) with contextlib.closing(zip_file) as zf: for root, dirs, files in os.walk(source_root, followlinks=True): for filename in files: full_path = os.path.join(root, filename) relative_path = os.path.relpath( full_path, source_root) zf.write(full_path, relative_path) return zipfile_name @contextmanager def mktempfile(): directory = tempfile.gettempdir() filename = os.path.join(directory, uuid.uuid4().hex) try: with open(filename, "w+") as handle: yield handle finally: if os.path.exists(filename): os.remove(filename) def copy_to_temp_dir(filepath): tmp_dir = tempfile.mkdtemp() dst = os.path.join(tmp_dir, os.path.basename(filepath)) shutil.copyfile(filepath, dst) return tmp_dir class Resource(object): """ Base class representing a CloudFormation resource that can be exported """ RESOURCE_TYPE = None PROPERTY_NAME = None PACKAGE_NULL_PROPERTY = True # Set this property to True in base class if you want the exporter to zip # up the file before uploading This is useful for Lambda functions. 
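# --- Illustrative example, not part of the original source -----------------
# Net effect of the export flow described above for a zip-style resource such
# as AWS::Serverless::Function; the path, bucket and object key are
# hypothetical. A local folder reference such as
#
#     {"CodeUri": "./src"}
#
# is zipped, uploaded via upload_with_dedup(), and rewritten in place to
#
#     {"CodeUri": "s3://my-deployment-bucket/prefix/1a2b3c4d5e6f"}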
FORCE_ZIP = False def __init__(self, uploader): self.uploader = uploader def export(self, resource_id, resource_dict, parent_dir): if resource_dict is None: return property_value = jmespath.search(self.PROPERTY_NAME, resource_dict) if not property_value and not self.PACKAGE_NULL_PROPERTY: return if isinstance(property_value, dict): LOG.debug("Property {0} of {1} resource is not a URL" .format(self.PROPERTY_NAME, resource_id)) return # If property is a file but not a zip file, place file in temp # folder and send the temp folder to be zipped temp_dir = None if is_local_file(property_value) and not \ is_zip_file(property_value) and self.FORCE_ZIP: temp_dir = copy_to_temp_dir(property_value) set_value_from_jmespath(resource_dict, self.PROPERTY_NAME, temp_dir) try: self.do_export(resource_id, resource_dict, parent_dir) except Exception as ex: LOG.debug("Unable to export", exc_info=ex) raise exceptions.ExportFailedError( resource_id=resource_id, property_name=self.PROPERTY_NAME, property_value=property_value, ex=ex) finally: if temp_dir: shutil.rmtree(temp_dir) def do_export(self, resource_id, resource_dict, parent_dir): """ Default export action is to upload artifacts and set the property to S3 URL of the uploaded object """ uploaded_url = upload_local_artifacts(resource_id, resource_dict, self.PROPERTY_NAME, parent_dir, self.uploader) set_value_from_jmespath(resource_dict, self.PROPERTY_NAME, uploaded_url) class ResourceWithS3UrlDict(Resource): """ Represents CloudFormation resources that need the S3 URL to be specified as an dict like {Bucket: "", Key: "", Version: ""} """ BUCKET_NAME_PROPERTY = None OBJECT_KEY_PROPERTY = None VERSION_PROPERTY = None def __init__(self, uploader): super(ResourceWithS3UrlDict, self).__init__(uploader) def do_export(self, resource_id, resource_dict, parent_dir): """ Upload to S3 and set property to an dict representing the S3 url of the uploaded object """ artifact_s3_url = \ upload_local_artifacts(resource_id, resource_dict, self.PROPERTY_NAME, parent_dir, self.uploader) parsed_url = parse_s3_url( artifact_s3_url, bucket_name_property=self.BUCKET_NAME_PROPERTY, object_key_property=self.OBJECT_KEY_PROPERTY, version_property=self.VERSION_PROPERTY) set_value_from_jmespath(resource_dict, self.PROPERTY_NAME, parsed_url) class ServerlessFunctionResource(Resource): RESOURCE_TYPE = "AWS::Serverless::Function" PROPERTY_NAME = "CodeUri" FORCE_ZIP = True class ServerlessApiResource(Resource): RESOURCE_TYPE = "AWS::Serverless::Api" PROPERTY_NAME = "DefinitionUri" # Don't package the directory if DefinitionUri is omitted. # Necessary to support DefinitionBody PACKAGE_NULL_PROPERTY = False class GraphQLSchemaResource(Resource): RESOURCE_TYPE = "AWS::AppSync::GraphQLSchema" PROPERTY_NAME = "DefinitionS3Location" # Don't package the directory if DefinitionS3Location is omitted. # Necessary to support Definition PACKAGE_NULL_PROPERTY = False class AppSyncResolverRequestTemplateResource(Resource): RESOURCE_TYPE = "AWS::AppSync::Resolver" PROPERTY_NAME = "RequestMappingTemplateS3Location" # Don't package the directory if RequestMappingTemplateS3Location is omitted. # Necessary to support RequestMappingTemplate PACKAGE_NULL_PROPERTY = False class AppSyncResolverResponseTemplateResource(Resource): RESOURCE_TYPE = "AWS::AppSync::Resolver" PROPERTY_NAME = "ResponseMappingTemplateS3Location" # Don't package the directory if ResponseMappingTemplateS3Location is omitted. 
# Necessary to support ResponseMappingTemplate PACKAGE_NULL_PROPERTY = False class AppSyncFunctionConfigurationRequestTemplateResource(Resource): RESOURCE_TYPE = "AWS::AppSync::FunctionConfiguration" PROPERTY_NAME = "RequestMappingTemplateS3Location" # Don't package the directory if RequestMappingTemplateS3Location is omitted. # Necessary to support RequestMappingTemplate PACKAGE_NULL_PROPERTY = False class AppSyncFunctionConfigurationResponseTemplateResource(Resource): RESOURCE_TYPE = "AWS::AppSync::FunctionConfiguration" PROPERTY_NAME = "ResponseMappingTemplateS3Location" # Don't package the directory if ResponseMappingTemplateS3Location is omitted. # Necessary to support ResponseMappingTemplate PACKAGE_NULL_PROPERTY = False class LambdaFunctionResource(ResourceWithS3UrlDict): RESOURCE_TYPE = "AWS::Lambda::Function" PROPERTY_NAME = "Code" BUCKET_NAME_PROPERTY = "S3Bucket" OBJECT_KEY_PROPERTY = "S3Key" VERSION_PROPERTY = "S3ObjectVersion" FORCE_ZIP = True class ApiGatewayRestApiResource(ResourceWithS3UrlDict): RESOURCE_TYPE = "AWS::ApiGateway::RestApi" PROPERTY_NAME = "BodyS3Location" PACKAGE_NULL_PROPERTY = False BUCKET_NAME_PROPERTY = "Bucket" OBJECT_KEY_PROPERTY = "Key" VERSION_PROPERTY = "Version" class ElasticBeanstalkApplicationVersion(ResourceWithS3UrlDict): RESOURCE_TYPE = "AWS::ElasticBeanstalk::ApplicationVersion" PROPERTY_NAME = "SourceBundle" BUCKET_NAME_PROPERTY = "S3Bucket" OBJECT_KEY_PROPERTY = "S3Key" VERSION_PROPERTY = None class LambdaLayerVersionResource(ResourceWithS3UrlDict): RESOURCE_TYPE = "AWS::Lambda::LayerVersion" PROPERTY_NAME = "Content" BUCKET_NAME_PROPERTY = "S3Bucket" OBJECT_KEY_PROPERTY = "S3Key" VERSION_PROPERTY = "S3ObjectVersion" FORCE_ZIP = True class ServerlessLayerVersionResource(Resource): RESOURCE_TYPE = "AWS::Serverless::LayerVersion" PROPERTY_NAME = "ContentUri" FORCE_ZIP = True class ServerlessRepoApplicationReadme(Resource): RESOURCE_TYPE = "AWS::ServerlessRepo::Application" PROPERTY_NAME = "ReadmeUrl" PACKAGE_NULL_PROPERTY = False class ServerlessRepoApplicationLicense(Resource): RESOURCE_TYPE = "AWS::ServerlessRepo::Application" PROPERTY_NAME = "LicenseUrl" PACKAGE_NULL_PROPERTY = False class CloudFormationStackResource(Resource): """ Represents CloudFormation::Stack resource that can refer to a nested stack template via TemplateURL property. 
""" RESOURCE_TYPE = "AWS::CloudFormation::Stack" PROPERTY_NAME = "TemplateURL" def __init__(self, uploader): super(CloudFormationStackResource, self).__init__(uploader) def do_export(self, resource_id, resource_dict, parent_dir): """ If the nested stack template is valid, this method will export on the nested template, upload the exported template to S3 and set property to URL of the uploaded S3 template """ template_path = resource_dict.get(self.PROPERTY_NAME, None) if template_path is None or is_s3_url(template_path) or \ template_path.startswith(self.uploader.s3.meta.endpoint_url) or \ template_path.startswith("https://s3.amazonaws.com/"): # Nothing to do return abs_template_path = make_abs_path(parent_dir, template_path) if not is_local_file(abs_template_path): raise exceptions.InvalidTemplateUrlParameterError( property_name=self.PROPERTY_NAME, resource_id=resource_id, template_path=abs_template_path) exported_template_dict = \ Template(template_path, parent_dir, self.uploader).export() exported_template_str = yaml_dump(exported_template_dict) with mktempfile() as temporary_file: temporary_file.write(exported_template_str) temporary_file.flush() url = self.uploader.upload_with_dedup( temporary_file.name, "template") # TemplateUrl property requires S3 URL to be in path-style format parts = parse_s3_url(url, version_property="Version") s3_path_url = self.uploader.to_path_style_s3_url( parts["Key"], parts.get("Version", None)) set_value_from_jmespath(resource_dict, self.PROPERTY_NAME, s3_path_url) class ServerlessApplicationResource(CloudFormationStackResource): """ Represents Serverless::Application resource that can refer to a nested app template via Location property. """ RESOURCE_TYPE = "AWS::Serverless::Application" PROPERTY_NAME = "Location" class GlueJobCommandScriptLocationResource(Resource): """ Represents Glue::Job resource. """ RESOURCE_TYPE = "AWS::Glue::Job" # Note the PROPERTY_NAME includes a '.' implying it's nested. 
PROPERTY_NAME = "Command.ScriptLocation" RESOURCES_EXPORT_LIST = [ ServerlessFunctionResource, ServerlessApiResource, GraphQLSchemaResource, AppSyncResolverRequestTemplateResource, AppSyncResolverResponseTemplateResource, AppSyncFunctionConfigurationRequestTemplateResource, AppSyncFunctionConfigurationResponseTemplateResource, ApiGatewayRestApiResource, LambdaFunctionResource, ElasticBeanstalkApplicationVersion, CloudFormationStackResource, ServerlessApplicationResource, ServerlessLayerVersionResource, LambdaLayerVersionResource, GlueJobCommandScriptLocationResource, ] METADATA_EXPORT_LIST = [ ServerlessRepoApplicationReadme, ServerlessRepoApplicationLicense ] def include_transform_export_handler(template_dict, uploader, parent_dir): if template_dict.get("Name", None) != "AWS::Include": return template_dict include_location = template_dict.get("Parameters", {}).get("Location", None) if not include_location \ or not is_path_value_valid(include_location) \ or is_s3_url(include_location): # `include_location` is either empty, or not a string, or an S3 URI return template_dict # We are confident at this point that `include_location` is a string containing the local path abs_include_location = os.path.join(parent_dir, include_location) if is_local_file(abs_include_location): template_dict["Parameters"]["Location"] = uploader.upload_with_dedup(abs_include_location) else: raise exceptions.InvalidLocalPathError( resource_id="AWS::Include", property_name="Location", local_path=abs_include_location) return template_dict GLOBAL_EXPORT_DICT = { "Fn::Transform": include_transform_export_handler } class Template(object): """ Class to export a CloudFormation template """ def __init__(self, template_path, parent_dir, uploader, resources_to_export=RESOURCES_EXPORT_LIST, metadata_to_export=METADATA_EXPORT_LIST): """ Reads the template and makes it ready for export """ if not (is_local_folder(parent_dir) and os.path.isabs(parent_dir)): raise ValueError("parent_dir parameter must be " "an absolute path to a folder {0}" .format(parent_dir)) abs_template_path = make_abs_path(parent_dir, template_path) template_dir = os.path.dirname(abs_template_path) with open(abs_template_path, "r") as handle: template_str = handle.read() self.template_dict = yaml_parse(template_str) self.template_dir = template_dir self.resources_to_export = resources_to_export self.metadata_to_export = metadata_to_export self.uploader = uploader def export_global_artifacts(self, template_dict): """ Template params such as AWS::Include transforms are not specific to any resource type but contain artifacts that should be exported, here we iterate through the template dict and export params with a handler defined in GLOBAL_EXPORT_DICT """ for key, val in template_dict.items(): if key in GLOBAL_EXPORT_DICT: template_dict[key] = GLOBAL_EXPORT_DICT[key](val, self.uploader, self.template_dir) elif isinstance(val, dict): self.export_global_artifacts(val) elif isinstance(val, list): for item in val: if isinstance(item, dict): self.export_global_artifacts(item) return template_dict def export_metadata(self, template_dict): """ Exports the local artifacts referenced by the metadata section in the given template to an s3 bucket. :return: The template with references to artifacts that have been exported to s3. 
""" if "Metadata" not in template_dict: return template_dict for metadata_type, metadata_dict in template_dict["Metadata"].items(): for exporter_class in self.metadata_to_export: if exporter_class.RESOURCE_TYPE != metadata_type: continue exporter = exporter_class(self.uploader) exporter.export(metadata_type, metadata_dict, self.template_dir) return template_dict def export(self): """ Exports the local artifacts referenced by the given template to an s3 bucket. :return: The template with references to artifacts that have been exported to s3. """ self.template_dict = self.export_metadata(self.template_dict) if "Resources" not in self.template_dict: return self.template_dict self.template_dict = self.export_global_artifacts(self.template_dict) for resource_id, resource in self.template_dict["Resources"].items(): resource_type = resource.get("Type", None) resource_dict = resource.get("Properties", None) for exporter_class in self.resources_to_export: if exporter_class.RESOURCE_TYPE != resource_type: continue # Export code resources exporter = exporter_class(self.uploader) exporter.export(resource_id, resource_dict, self.template_dir) return self.template_dict awscli-1.17.14/awscli/customizations/datapipeline/0000755000000000000000000000000013620325757022133 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/datapipeline/translator.py0000644000000000000000000001605413620325554024677 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import json from awscli.clidriver import CLIOperationCaller class PipelineDefinitionError(Exception): def __init__(self, msg, definition): full_msg = ( "Error in pipeline definition: %s\n" % msg) super(PipelineDefinitionError, self).__init__(full_msg) self.msg = msg self.definition = definition # Method to convert the dictionary input to a string # This is required for escaping def dict_to_string(dictionary, indent=2): return json.dumps(dictionary, indent=indent) # Method to parse the arguments to get the region value def get_region(session, parsed_globals): region = parsed_globals.region if region is None: region = session.get_config_variable('region') return region # Method to display the response for a particular CLI operation def display_response(session, operation_name, result, parsed_globals): cli_operation_caller = CLIOperationCaller(session) # Calling a private method. Should be changed after the functionality # is moved outside CliOperationCaller. cli_operation_caller._display_response( operation_name, result, parsed_globals) def api_to_definition(definition): # When we're translating from api_response -> definition # we have to be careful *not* to mutate the existing # response as other code might need to the original # api_response. 
if 'pipelineObjects' in definition: definition['objects'] = _api_to_objects_definition( definition.pop('pipelineObjects')) if 'parameterObjects' in definition: definition['parameters'] = _api_to_parameters_definition( definition.pop('parameterObjects')) if 'parameterValues' in definition: definition['values'] = _api_to_values_definition( definition.pop('parameterValues')) return definition def definition_to_api_objects(definition): if 'objects' not in definition: raise PipelineDefinitionError('Missing "objects" key', definition) api_elements = [] # To convert to the structure expected by the service, # we convert the existing structure to a list of dictionaries. # Each dictionary has a 'fields', 'id', and 'name' key. for element in definition['objects']: try: element_id = element.pop('id') except KeyError: raise PipelineDefinitionError('Missing "id" key of element: %s' % json.dumps(element), definition) api_object = {'id': element_id} # If a name is provided, then we use that for the name, # otherwise the id is used for the name. name = element.pop('name', element_id) api_object['name'] = name # Now we need the field list. Each element in the field list is a dict # with a 'key', 'stringValue'|'refValue' fields = [] for key, value in sorted(element.items()): fields.extend(_parse_each_field(key, value)) api_object['fields'] = fields api_elements.append(api_object) return api_elements def definition_to_api_parameters(definition): if 'parameters' not in definition: return None parameter_objects = [] for element in definition['parameters']: try: parameter_id = element.pop('id') except KeyError: raise PipelineDefinitionError('Missing "id" key of parameter: %s' % json.dumps(element), definition) parameter_object = {'id': parameter_id} # Now we need the attribute list. 
Each element in the attribute list # is a dict with a 'key', 'stringValue' attributes = [] for key, value in sorted(element.items()): attributes.extend(_parse_each_field(key, value)) parameter_object['attributes'] = attributes parameter_objects.append(parameter_object) return parameter_objects def definition_to_parameter_values(definition): if 'values' not in definition: return None parameter_values = [] for key in definition['values']: parameter_values.extend( _convert_single_parameter_value(key, definition['values'][key])) return parameter_values def _parse_each_field(key, value): values = [] if isinstance(value, list): for item in value: values.append(_convert_single_field(key, item)) else: values.append(_convert_single_field(key, value)) return values def _convert_single_field(key, value): field = {'key': key} if isinstance(value, dict) and list(value.keys()) == ['ref']: field['refValue'] = value['ref'] else: field['stringValue'] = value return field def _convert_single_parameter_value(key, values): parameter_values = [] if isinstance(values, list): for each_value in values: parameter_value = {'id': key, 'stringValue': each_value} parameter_values.append(parameter_value) else: parameter_value = {'id': key, 'stringValue': values} parameter_values.append(parameter_value) return parameter_values def _api_to_objects_definition(api_response): pipeline_objects = [] for element in api_response: current = { 'id': element['id'], 'name': element['name'] } for field in element['fields']: key = field['key'] if 'stringValue' in field: value = field['stringValue'] else: value = {'ref': field['refValue']} _add_value(key, value, current) pipeline_objects.append(current) return pipeline_objects def _api_to_parameters_definition(api_response): parameter_objects = [] for element in api_response: current = { 'id': element['id'] } for attribute in element['attributes']: _add_value(attribute['key'], attribute['stringValue'], current) parameter_objects.append(current) return parameter_objects def _api_to_values_definition(api_response): pipeline_values = {} for element in api_response: _add_value(element['id'], element['stringValue'], pipeline_values) return pipeline_values def _add_value(key, value, current_map): if key not in current_map: current_map[key] = value elif isinstance(current_map[key], list): # Dupe keys result in values aggregating # into a list. current_map[key].append(value) else: converted_list = [current_map[key], value] current_map[key] = converted_list awscli-1.17.14/awscli/customizations/datapipeline/listrunsformatter.py0000644000000000000000000000412313620325554026307 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.formatter import FullyBufferedFormatter class ListRunsFormatter(FullyBufferedFormatter): TITLE_ROW_FORMAT_STRING = " %-50.50s %-19.19s %-23.23s" FIRST_ROW_FORMAT_STRING = "%4d. 
%-50.50s %-19.19s %-23.23s" SECOND_ROW_FORMAT_STRING = " %-50.50s %-19.19s %-19.19s" def _format_response(self, command_name, response, stream): self._print_headers(stream) for i, obj in enumerate(response): self._print_row(i, obj, stream) def _print_headers(self, stream): stream.write(self.TITLE_ROW_FORMAT_STRING % ( "Name", "Scheduled Start", "Status")) stream.write('\n') second_row = (self.SECOND_ROW_FORMAT_STRING % ( "ID", "Started", "Ended")) stream.write(second_row) stream.write('\n') stream.write('-' * len(second_row)) stream.write('\n') def _print_row(self, index, obj, stream): logical_name = obj['@componentParent'] object_id = obj['@id'] scheduled_start_date = obj.get('@scheduledStartTime', '') status = obj.get('@status', '') start_date = obj.get('@actualStartTime', '') end_date = obj.get('@actualEndTime', '') first_row = self.FIRST_ROW_FORMAT_STRING % ( index + 1, logical_name, scheduled_start_date, status) second_row = self.SECOND_ROW_FORMAT_STRING % ( object_id, start_date, end_date) stream.write(first_row) stream.write('\n') stream.write(second_row) stream.write('\n\n') awscli-1.17.14/awscli/customizations/datapipeline/constants.py0000644000000000000000000000354713620325554024525 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. # Declare all the constants used by DataPipeline in this file # DataPipeline role names DATAPIPELINE_DEFAULT_SERVICE_ROLE_NAME = "DataPipelineDefaultRole" DATAPIPELINE_DEFAULT_RESOURCE_ROLE_NAME = "DataPipelineDefaultResourceRole" # DataPipeline role arn names DATAPIPELINE_DEFAULT_SERVICE_ROLE_ARN = ("arn:aws:iam::aws:policy/" "service-role/AWSDataPipelineRole") DATAPIPELINE_DEFAULT_RESOURCE_ROLE_ARN = ("arn:aws:iam::aws:policy/" "service-role/" "AmazonEC2RoleforDataPipelineRole") # Assume Role Policy definitions for roles DATAPIPELINE_DEFAULT_RESOURCE_ROLE_ASSUME_POLICY = { "Version": "2008-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole" } ] } DATAPIPELINE_DEFAULT_SERVICE_ROLE_ASSUME_POLICY = { "Version": "2008-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": {"Service": ["datapipeline.amazonaws.com", "elasticmapreduce.amazonaws.com"] }, "Action": "sts:AssumeRole" } ] } awscli-1.17.14/awscli/customizations/datapipeline/createdefaultroles.py0000644000000000000000000002251013620325554026355 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
# Class to create default roles for datapipeline import logging from awscli.customizations.datapipeline.constants \ import DATAPIPELINE_DEFAULT_SERVICE_ROLE_NAME, \ DATAPIPELINE_DEFAULT_RESOURCE_ROLE_NAME, \ DATAPIPELINE_DEFAULT_SERVICE_ROLE_ARN, \ DATAPIPELINE_DEFAULT_RESOURCE_ROLE_ARN, \ DATAPIPELINE_DEFAULT_SERVICE_ROLE_ASSUME_POLICY, \ DATAPIPELINE_DEFAULT_RESOURCE_ROLE_ASSUME_POLICY from awscli.customizations.commands import BasicCommand from awscli.customizations.datapipeline.translator \ import display_response, dict_to_string, get_region from botocore.exceptions import ClientError LOG = logging.getLogger(__name__) class CreateDefaultRoles(BasicCommand): NAME = "create-default-roles" DESCRIPTION = ('Creates the default IAM role ' + DATAPIPELINE_DEFAULT_SERVICE_ROLE_NAME + ' and ' + DATAPIPELINE_DEFAULT_RESOURCE_ROLE_NAME + ' which are used while creating an EMR cluster.\n' 'If the roles do not exist, create-default-roles ' 'will automatically create them and set their policies.' ' If these roles are already ' 'created create-default-roles' ' will not update their policies.' '\n') def __init__(self, session, formatter=None): super(CreateDefaultRoles, self).__init__(session) def _run_main(self, parsed_args, parsed_globals, **kwargs): """Call to run the commands""" self._region = get_region(self._session, parsed_globals) self._endpoint_url = parsed_globals.endpoint_url self._iam_client = self._session.create_client( 'iam', region_name=self._region, endpoint_url=self._endpoint_url, verify=parsed_globals.verify_ssl ) return self._create_default_roles(parsed_args, parsed_globals) def _create_role(self, role_name, role_arn, role_policy): """Method to create a role for a given role name and arn if it does not exist """ role_result = None role_policy_result = None # Check if the role with the name exists if self._check_if_role_exists(role_name): LOG.debug('Role ' + role_name + ' exists.') else: LOG.debug('Role ' + role_name + ' does not exist.' ' Creating default role for EC2: ' + role_name) # Create a create using the IAM Client with a particular triplet # (role_name, role_arn, assume_role_policy) role_result = self._create_role_with_role_policy(role_name, role_policy, role_arn) role_policy_result = self._get_role_policy(role_arn) return role_result, role_policy_result def _construct_result(self, dpl_default_result, dpl_default_policy, dpl_default_res_result, dpl_default_res_policy): """Method to create a resultant list of responses for create roles for service and resource role """ result = [] self._construct_role_and_role_policy_structure(result, dpl_default_result, dpl_default_policy) self._construct_role_and_role_policy_structure(result, dpl_default_res_result, dpl_default_res_policy) return result def _create_default_roles(self, parsed_args, parsed_globals): # Setting the role name and arn value (datapipline_default_result, datapipline_default_policy) = self._create_role( DATAPIPELINE_DEFAULT_SERVICE_ROLE_NAME, DATAPIPELINE_DEFAULT_SERVICE_ROLE_ARN, DATAPIPELINE_DEFAULT_SERVICE_ROLE_ASSUME_POLICY) (datapipline_default_resource_result, datapipline_default_resource_policy) = self._create_role( DATAPIPELINE_DEFAULT_RESOURCE_ROLE_NAME, DATAPIPELINE_DEFAULT_RESOURCE_ROLE_ARN, DATAPIPELINE_DEFAULT_RESOURCE_ROLE_ASSUME_POLICY) # Check if the default EC2 Instance Profile for DataPipeline exists. 
instance_profile_name = DATAPIPELINE_DEFAULT_RESOURCE_ROLE_NAME if self._check_if_instance_profile_exists(instance_profile_name): LOG.debug('Instance Profile ' + instance_profile_name + ' exists.') else: LOG.debug('Instance Profile ' + instance_profile_name + 'does not exist. Creating default Instance Profile ' + instance_profile_name) self._create_instance_profile_with_role(instance_profile_name, instance_profile_name) result = self._construct_result(datapipline_default_result, datapipline_default_policy, datapipline_default_resource_result, datapipline_default_resource_policy) display_response(self._session, 'create_role', result, parsed_globals) return 0 def _get_role_policy(self, arn): """Method to get the Policy for a particular ARN This is used to display the policy contents to the user """ pol_det = self._iam_client.get_policy(PolicyArn=arn) policy_version_details = self._iam_client.get_policy_version( PolicyArn=arn, VersionId=pol_det["Policy"]["DefaultVersionId"]) return policy_version_details["PolicyVersion"]["Document"] def _create_role_with_role_policy( self, role_name, assume_role_policy, role_arn): """Method to create role with a given rolename, assume_role_policy and role_arn """ # Create a role using IAM client CreateRole API create_role_response = self._iam_client.create_role( RoleName=role_name, AssumeRolePolicyDocument=dict_to_string( assume_role_policy)) # Create a role using IAM client AttachRolePolicy API self._iam_client.attach_role_policy(PolicyArn=role_arn, RoleName=role_name) return create_role_response def _construct_role_and_role_policy_structure( self, list_val, response, policy): """Method to construct the message to be displayed to the user""" # If the response is not none they we get the role name # from the response and # append the policy information to the response if response is not None and response['Role'] is not None: list_val.append({'Role': response['Role'], 'RolePolicy': policy}) return list_val def _check_if_instance_profile_exists(self, instance_profile_name): """Method to verify if a particular role exists""" try: # Client call to get the instance profile with that name self._iam_client.get_instance_profile( InstanceProfileName=instance_profile_name) except ClientError as e: # If the instance profile does not exist then the error message # would contain the required message if e.response['Error']['Code'] == 'NoSuchEntity': # No instance profile error. return False else: # Some other error. raise. raise e return True def _check_if_role_exists(self, role_name): """Method to verify if a particular role exists""" try: # Client call to get the role self._iam_client.get_role(RoleName=role_name) except ClientError as e: # If the role does not exist then the error message # would contain the required message. if e.response['Error']['Code'] == 'NoSuchEntity': # No role error. return False else: # Some other error. raise. 
raise e return True def _create_instance_profile_with_role(self, instance_profile_name, role_name): """Method to create the instance profile with the role""" # Setting the value for instance profile name # Client call to create an instance profile self._iam_client.create_instance_profile( InstanceProfileName=instance_profile_name) # Adding the role to the Instance Profile self._iam_client.add_role_to_instance_profile( InstanceProfileName=instance_profile_name, RoleName=role_name) awscli-1.17.14/awscli/customizations/datapipeline/__init__.py0000644000000000000000000004106413620325554024244 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import json from datetime import datetime, timedelta from awscli.formatter import get_formatter from awscli.arguments import CustomArgument from awscli.customizations.commands import BasicCommand from awscli.customizations.datapipeline import translator from awscli.customizations.datapipeline.createdefaultroles \ import CreateDefaultRoles from awscli.customizations.datapipeline.listrunsformatter \ import ListRunsFormatter DEFINITION_HELP_TEXT = """\ The JSON pipeline definition. If the pipeline definition is in a file you can use the file:// syntax to specify a filename. """ PARAMETER_OBJECTS_HELP_TEXT = """\ The JSON parameter objects. If the parameter objects are in a file you can use the file:// syntax to specify a filename. You can optionally provide these in pipeline definition as well. Parameter objects provided on command line would replace the one in definition. """ PARAMETER_VALUES_HELP_TEXT = """\ The JSON parameter values. If the parameter values are in a file you can use the file:// syntax to specify a filename. You can optionally provide these in pipeline definition as well. Parameter values provided on command line would replace the one in definition. """ INLINE_PARAMETER_VALUES_HELP_TEXT = """\ The JSON parameter values. You can specify these as key-value pairs in the key=value format. Multiple parameters are separated by a space. For list type parameter values you can use the same key name and specify each value as a key value pair. e.g. 
arrayValue=value1 arrayValue=value2 """ MAX_ITEMS_PER_DESCRIBE = 100 class DocSectionNotFoundError(Exception): pass class ParameterDefinitionError(Exception): def __init__(self, msg): full_msg = ("Error in parameter: %s\n" % msg) super(ParameterDefinitionError, self).__init__(full_msg) self.msg = msg def register_customizations(cli): cli.register( 'building-argument-table.datapipeline.put-pipeline-definition', add_pipeline_definition) cli.register( 'building-argument-table.datapipeline.activate-pipeline', activate_pipeline_definition) cli.register( 'after-call.datapipeline.GetPipelineDefinition', translate_definition) cli.register( 'building-command-table.datapipeline', register_commands) cli.register_last( 'doc-output.datapipeline.get-pipeline-definition', document_translation) def register_commands(command_table, session, **kwargs): command_table['list-runs'] = ListRunsCommand(session) command_table['create-default-roles'] = CreateDefaultRoles(session) def document_translation(help_command, **kwargs): # Remove all the writes until we get to the output. # I don't think this is the ideal way to do this, we should # improve our plugin/doc system to make this easier. doc = help_command.doc current = '' while current != '======\nOutput\n======': try: current = doc.pop_write() except IndexError: # This should never happen, but in the rare case that it does # we should be raising something with a helpful error message. raise DocSectionNotFoundError( 'Could not find the "output" section for the command: %s' % help_command) doc.write('======\nOutput\n======') doc.write( '\nThe output of this command is the pipeline definition, which' ' is documented in the ' '`Pipeline Definition File Syntax ' '`__') def add_pipeline_definition(argument_table, **kwargs): argument_table['pipeline-definition'] = PipelineDefinitionArgument( 'pipeline-definition', required=True, help_text=DEFINITION_HELP_TEXT) argument_table['parameter-objects'] = ParameterObjectsArgument( 'parameter-objects', required=False, help_text=PARAMETER_OBJECTS_HELP_TEXT) argument_table['parameter-values-uri'] = ParameterValuesArgument( 'parameter-values-uri', required=False, help_text=PARAMETER_VALUES_HELP_TEXT) # Need to use an argument model for inline parameters to accept a list argument_table['parameter-values'] = ParameterValuesInlineArgument( 'parameter-values', required=False, nargs='+', help_text=INLINE_PARAMETER_VALUES_HELP_TEXT) # The pipeline-objects is no longer needed required because # a user can provide a pipeline-definition instead. # get-pipeline-definition also displays the output in the # translated format. del argument_table['pipeline-objects'] def activate_pipeline_definition(argument_table, **kwargs): argument_table['parameter-values-uri'] = ParameterValuesArgument( 'parameter-values-uri', required=False, help_text=PARAMETER_VALUES_HELP_TEXT) # Need to use an argument model for inline parameters to accept a list argument_table['parameter-values'] = ParameterValuesInlineArgument( 'parameter-values', required=False, nargs='+', help_text=INLINE_PARAMETER_VALUES_HELP_TEXT, ) def translate_definition(parsed, **kwargs): translator.api_to_definition(parsed) def convert_described_objects(api_describe_objects, sort_key_func=None): # We need to take a field list that looks like this: # {u'key': u'@sphere', u'stringValue': u'INSTANCE'}, # into {"@sphere": "INSTANCE}. # We convert the fields list into a field dict. 
converted = [] for obj in api_describe_objects: new_fields = { '@id': obj['id'], 'name': obj['name'], } for field in obj['fields']: new_fields[field['key']] = field.get('stringValue', field.get('refValue')) converted.append(new_fields) if sort_key_func is not None: converted.sort(key=sort_key_func) return converted class QueryArgBuilder(object): """ Convert CLI arguments to Query arguments used by QueryObject. """ def __init__(self, current_time=None): if current_time is None: current_time = datetime.utcnow() self.current_time = current_time def build_query(self, parsed_args): selectors = [] if parsed_args.start_interval is None and \ parsed_args.schedule_interval is None: # If no intervals are specified, default # to a start time of 4 days ago and an end time # of right now. end_datetime = self.current_time start_datetime = end_datetime - timedelta(days=4) start_time_str = start_datetime.strftime('%Y-%m-%dT%H:%M:%S') end_time_str = end_datetime.strftime('%Y-%m-%dT%H:%M:%S') selectors.append({ 'fieldName': '@actualStartTime', 'operator': { 'type': 'BETWEEN', 'values': [start_time_str, end_time_str] } }) else: self._build_schedule_times(selectors, parsed_args) if parsed_args.status is not None: self._build_status(selectors, parsed_args) query = {'selectors': selectors} return query def _build_schedule_times(self, selectors, parsed_args): if parsed_args.start_interval is not None: start_time_str = parsed_args.start_interval[0] end_time_str = parsed_args.start_interval[1] selectors.append({ 'fieldName': '@actualStartTime', 'operator': { 'type': 'BETWEEN', 'values': [start_time_str, end_time_str] } }) if parsed_args.schedule_interval is not None: start_time_str = parsed_args.schedule_interval[0] end_time_str = parsed_args.schedule_interval[1] selectors.append({ 'fieldName': '@scheduledStartTime', 'operator': { 'type': 'BETWEEN', 'values': [start_time_str, end_time_str] } }) def _build_status(self, selectors, parsed_args): selectors.append({ 'fieldName': '@status', 'operator': { 'type': 'EQ', 'values': [status.upper() for status in parsed_args.status] } }) class PipelineDefinitionArgument(CustomArgument): def add_to_params(self, parameters, value): if value is None: return parsed = json.loads(value) api_objects = translator.definition_to_api_objects(parsed) parameter_objects = translator.definition_to_api_parameters(parsed) parameter_values = translator.definition_to_parameter_values(parsed) parameters['pipelineObjects'] = api_objects # Use Parameter objects and values from def if not already provided if 'parameterObjects' not in parameters \ and parameter_objects is not None: parameters['parameterObjects'] = parameter_objects if 'parameterValues' not in parameters \ and parameter_values is not None: parameters['parameterValues'] = parameter_values class ParameterObjectsArgument(CustomArgument): def add_to_params(self, parameters, value): if value is None: return parsed = json.loads(value) parameter_objects = translator.definition_to_api_parameters(parsed) parameters['parameterObjects'] = parameter_objects class ParameterValuesArgument(CustomArgument): def add_to_params(self, parameters, value): if value is None: return if parameters.get('parameterValues', None) is not None: raise Exception( "Only parameter-values or parameter-values-uri is allowed" ) parsed = json.loads(value) parameter_values = translator.definition_to_parameter_values(parsed) parameters['parameterValues'] = parameter_values class ParameterValuesInlineArgument(CustomArgument): def add_to_params(self, parameters, value): if 
value is None: return if parameters.get('parameterValues', None) is not None: raise Exception( "Only parameter-values or parameter-values-uri is allowed" ) parameter_object = {} # break string into = point for argument in value: try: argument_components = argument.split('=', 1) key = argument_components[0] value = argument_components[1] if key in parameter_object: if isinstance(parameter_object[key], list): parameter_object[key].append(value) else: parameter_object[key] = [parameter_object[key], value] else: parameter_object[key] = value except IndexError: raise ParameterDefinitionError( "Invalid inline parameter format: %s" % argument ) parsed = {'values': parameter_object} parameter_values = translator.definition_to_parameter_values(parsed) parameters['parameterValues'] = parameter_values class ListRunsCommand(BasicCommand): NAME = 'list-runs' DESCRIPTION = ( 'Lists the times the specified pipeline has run. ' 'You can optionally filter the complete list of ' 'results to include only the runs you are interested in.') ARG_TABLE = [ {'name': 'pipeline-id', 'help_text': 'The identifier of the pipeline.', 'action': 'store', 'required': True, 'cli_type_name': 'string', }, {'name': 'status', 'help_text': ( 'Filters the list to include only runs in the ' 'specified statuses. ' 'The valid statuses are as follows: waiting, pending, cancelled, ' 'running, finished, failed, waiting_for_runner, ' 'and waiting_on_dependencies.'), 'action': 'store'}, {'name': 'start-interval', 'help_text': ( 'Filters the list to include only runs that started ' 'within the specified interval.'), 'action': 'store', 'required': False, 'cli_type_name': 'string', }, {'name': 'schedule-interval', 'help_text': ( 'Filters the list to include only runs that are scheduled to ' 'start within the specified interval.'), 'action': 'store', 'required': False, 'cli_type_name': 'string', }, ] VALID_STATUS = ['waiting', 'pending', 'cancelled', 'running', 'finished', 'failed', 'waiting_for_runner', 'waiting_on_dependencies', 'shutting_down'] def _run_main(self, parsed_args, parsed_globals, **kwargs): self._set_client(parsed_globals) self._parse_type_args(parsed_args) self._list_runs(parsed_args, parsed_globals) def _set_client(self, parsed_globals): # This is called from _run_main and is used to ensure that we have # a service/endpoint object to work with. self.client = self._session.create_client( 'datapipeline', region_name=parsed_globals.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl) def _parse_type_args(self, parsed_args): # TODO: give good error messages! # Parse the start/schedule times. # Parse the status csv. 
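# Example (values are illustrative, not taken from the original source): the
# interval and status filters arrive as comma-separated strings, so an
# invocation such as
#   aws datapipeline list-runs --pipeline-id <pipeline-id> \
#       --status running,finished \
#       --start-interval 2015-01-01T00:00:00,2015-01-02T00:00:00
# is split below into ['running', 'finished'] and
# ['2015-01-01T00:00:00', '2015-01-02T00:00:00'] respectively.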
if parsed_args.start_interval is not None: parsed_args.start_interval = [ arg.strip() for arg in parsed_args.start_interval.split(',')] if parsed_args.schedule_interval is not None: parsed_args.schedule_interval = [ arg.strip() for arg in parsed_args.schedule_interval.split(',')] if parsed_args.status is not None: parsed_args.status = [ arg.strip() for arg in parsed_args.status.split(',')] self._validate_status_choices(parsed_args.status) def _validate_status_choices(self, statuses): for status in statuses: if status not in self.VALID_STATUS: raise ValueError("Invalid status: %s, must be one of: %s" % (status, ', '.join(self.VALID_STATUS))) def _list_runs(self, parsed_args, parsed_globals): query = QueryArgBuilder().build_query(parsed_args) object_ids = self._query_objects(parsed_args.pipeline_id, query) objects = self._describe_objects(parsed_args.pipeline_id, object_ids) converted = convert_described_objects( objects, sort_key_func=lambda x: (x.get('@scheduledStartTime'), x.get('name'))) formatter = self._get_formatter(parsed_globals) formatter(self.NAME, converted) def _describe_objects(self, pipeline_id, object_ids): # DescribeObjects will only accept 100 objectIds at a time, # so we need to break up the list passed in into chunks that are at # most that size. We then aggregate the results to return. objects = [] for i in range(0, len(object_ids), MAX_ITEMS_PER_DESCRIBE): current_object_ids = object_ids[i:i + MAX_ITEMS_PER_DESCRIBE] result = self.client.describe_objects( pipelineId=pipeline_id, objectIds=current_object_ids) objects.extend(result['pipelineObjects']) return objects def _query_objects(self, pipeline_id, query): paginator = self.client.get_paginator('query_objects').paginate( pipelineId=pipeline_id, sphere='INSTANCE', query=query) parsed = paginator.build_full_result() return parsed['ids'] def _get_formatter(self, parsed_globals): output = parsed_globals.output if output is None: return ListRunsFormatter(parsed_globals) else: return get_formatter(output, parsed_globals) awscli-1.17.14/awscli/customizations/globalargs.py0000644000000000000000000001060613620325554022161 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
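# The handlers registered in this module post-process global command line
# options after they are parsed: ``--query`` is compiled into a JMESPath
# expression, ``--endpoint-url`` is validated to include a scheme (an
# illustrative value would be ``http://localhost:8000``), ``--no-sign-request``
# disables request signing, and the CLI connect/read timeouts are normalized
# (a value of ``0`` is translated to ``None``, which disables that timeout).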
import sys import os from botocore.client import Config from botocore.endpoint import DEFAULT_TIMEOUT from botocore.handlers import disable_signing import jmespath from awscli.compat import urlparse def register_parse_global_args(cli): cli.register('top-level-args-parsed', resolve_types, unique_id='resolve-types') cli.register('top-level-args-parsed', no_sign_request, unique_id='no-sign') cli.register('top-level-args-parsed', resolve_verify_ssl, unique_id='resolve-verify-ssl') cli.register('top-level-args-parsed', resolve_cli_read_timeout, unique_id='resolve-cli-read-timeout') cli.register('top-level-args-parsed', resolve_cli_connect_timeout, unique_id='resolve-cli-connect-timeout') def resolve_types(parsed_args, **kwargs): # This emulates the "type" arg from argparse, but does so in a way # that plugins can also hook into this process. _resolve_arg(parsed_args, 'query') _resolve_arg(parsed_args, 'endpoint_url') def _resolve_arg(parsed_args, name): value = getattr(parsed_args, name, None) if value is not None: new_value = getattr(sys.modules[__name__], '_resolve_%s' % name)(value) setattr(parsed_args, name, new_value) def _resolve_query(value): try: return jmespath.compile(value) except Exception as e: raise ValueError("Bad value for --query %s: %s" % (value, str(e))) def _resolve_endpoint_url(value): parsed = urlparse.urlparse(value) # Our http library requires you specify an endpoint url # that contains a scheme, so we'll verify that up front. if not parsed.scheme: raise ValueError('Bad value for --endpoint-url "%s": scheme is ' 'missing. Must be of the form ' 'http:/// or https:///' % value) return value def resolve_verify_ssl(parsed_args, session, **kwargs): arg_name = 'verify_ssl' arg_value = getattr(parsed_args, arg_name, None) if arg_value is not None: verify = None # Only consider setting a custom ca_bundle if they # haven't provided --no-verify-ssl. if not arg_value: verify = False else: verify = getattr(parsed_args, 'ca_bundle', None) or \ session.get_config_variable('ca_bundle') setattr(parsed_args, arg_name, verify) def no_sign_request(parsed_args, session, **kwargs): if not parsed_args.sign_request: # In order to make signing disabled for all requests # we need to use botocore's ``disable_signing()`` handler. session.register( 'choose-signer', disable_signing, unique_id='disable-signing') def resolve_cli_connect_timeout(parsed_args, session, **kwargs): arg_name = 'connect_timeout' _resolve_timeout(session, parsed_args, arg_name) def resolve_cli_read_timeout(parsed_args, session, **kwargs): arg_name = 'read_timeout' _resolve_timeout(session, parsed_args, arg_name) def _resolve_timeout(session, parsed_args, arg_name): arg_value = getattr(parsed_args, arg_name, None) if arg_value is None: arg_value = DEFAULT_TIMEOUT arg_value = int(arg_value) if arg_value == 0: arg_value = None setattr(parsed_args, arg_name, arg_value) # Update in the default client config so that the timeout will be used # by all clients created from then on. 
_update_default_client_config(session, arg_name, arg_value) def _update_default_client_config(session, arg_name, arg_value): current_default_config = session.get_default_client_config() new_default_config = Config(**{arg_name: arg_value}) if current_default_config is not None: new_default_config = current_default_config.merge(new_default_config) session.set_default_client_config(new_default_config) awscli-1.17.14/awscli/customizations/s3uploader.py0000644000000000000000000001715213620325630022123 0ustar rootroot00000000000000# Copyright 2012-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import hashlib import logging import threading import os import sys import botocore import botocore.exceptions from s3transfer.manager import TransferManager from s3transfer.subscribers import BaseSubscriber from awscli.compat import collections_abc LOG = logging.getLogger(__name__) class NoSuchBucketError(Exception): def __init__(self, **kwargs): msg = self.fmt.format(**kwargs) Exception.__init__(self, msg) self.kwargs = kwargs fmt = ("S3 Bucket does not exist. " "Execute the command to create a new bucket" "\n" "aws s3 mb s3://{bucket_name}") class S3Uploader(object): """ Class to upload objects to S3 bucket that use versioning. If bucket does not already use versioning, this class will turn on versioning. """ @property def artifact_metadata(self): """ Metadata to attach to the object(s) uploaded by the uploader. """ return self._artifact_metadata @artifact_metadata.setter def artifact_metadata(self, val): if val is not None and not isinstance(val, collections_abc.Mapping): raise TypeError("Artifact metadata should be in dict type") self._artifact_metadata = val def __init__(self, s3_client, bucket_name, prefix=None, kms_key_id=None, force_upload=False, transfer_manager=None): self.bucket_name = bucket_name self.prefix = prefix self.kms_key_id = kms_key_id or None self.force_upload = force_upload self.s3 = s3_client self.transfer_manager = transfer_manager if not transfer_manager: self.transfer_manager = TransferManager(self.s3) self._artifact_metadata = None def upload(self, file_name, remote_path): """ Uploads given file to S3 :param file_name: Path to the file that will be uploaded :param remote_path: be uploaded :return: VersionId of the latest upload """ if self.prefix and len(self.prefix) > 0: remote_path = "{0}/{1}".format(self.prefix, remote_path) # Check if a file with same data exists if not self.force_upload and self.file_exists(remote_path): LOG.debug("File with same data already exists at {0}. 
" "Skipping upload".format(remote_path)) return self.make_url(remote_path) try: # Default to regular server-side encryption unless customer has # specified their own KMS keys additional_args = { "ServerSideEncryption": "AES256" } if self.kms_key_id: additional_args["ServerSideEncryption"] = "aws:kms" additional_args["SSEKMSKeyId"] = self.kms_key_id if self.artifact_metadata: additional_args["Metadata"] = self.artifact_metadata print_progress_callback = \ ProgressPercentage(file_name, remote_path) future = self.transfer_manager.upload(file_name, self.bucket_name, remote_path, additional_args, [print_progress_callback]) future.result() return self.make_url(remote_path) except botocore.exceptions.ClientError as ex: error_code = ex.response["Error"]["Code"] if error_code == "NoSuchBucket": raise NoSuchBucketError(bucket_name=self.bucket_name) raise ex def upload_with_dedup(self, file_name, extension=None): """ Makes and returns name of the S3 object based on the file's MD5 sum :param file_name: file to upload :param extension: String of file extension to append to the object :return: S3 URL of the uploaded object """ # This construction of remote_path is critical to preventing duplicate # uploads of same object. Uploader will check if the file exists in S3 # and re-upload only if necessary. So the template points to same file # in multiple places, this will upload only once filemd5 = self.file_checksum(file_name) remote_path = filemd5 if extension: remote_path = remote_path + "." + extension return self.upload(file_name, remote_path) def file_exists(self, remote_path): """ Check if the file we are trying to upload already exists in S3 :param remote_path: :return: True, if file exists. False, otherwise """ try: # Find the object that matches this ETag self.s3.head_object( Bucket=self.bucket_name, Key=remote_path) return True except botocore.exceptions.ClientError: # Either File does not exist or we are unable to get # this information. return False def make_url(self, obj_path): return "s3://{0}/{1}".format( self.bucket_name, obj_path) def file_checksum(self, file_name): with open(file_name, "rb") as file_handle: md5 = hashlib.md5() # Read file in chunks of 4096 bytes block_size = 4096 # Save current cursor position and reset cursor to start of file curpos = file_handle.tell() file_handle.seek(0) buf = file_handle.read(block_size) while len(buf) > 0: md5.update(buf) buf = file_handle.read(block_size) # Restore file cursor's position file_handle.seek(curpos) return md5.hexdigest() def to_path_style_s3_url(self, key, version=None): """ This link describes the format of Path Style URLs http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro """ base = self.s3.meta.endpoint_url result = "{0}/{1}/{2}".format(base, self.bucket_name, key) if version: result = "{0}?versionId={1}".format(result, version) return result class ProgressPercentage(BaseSubscriber): # This class was copied directly from S3Transfer docs def __init__(self, filename, remote_path): self._filename = filename self._remote_path = remote_path self._size = float(os.path.getsize(filename)) self._seen_so_far = 0 self._lock = threading.Lock() def on_progress(self, future, bytes_transferred, **kwargs): # To simplify we'll assume this is hooked up # to a single filename. 
with self._lock: self._seen_so_far += bytes_transferred percentage = (self._seen_so_far / self._size) * 100 sys.stdout.write( "\rUploading to %s %s / %s (%.2f%%)" % (self._remote_path, self._seen_so_far, self._size, percentage)) sys.stdout.flush() awscli-1.17.14/awscli/customizations/arguments.py0000644000000000000000000001200613620325554022045 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os from awscli.arguments import CustomArgument import jmespath def resolve_given_outfile_path(path): """Asserts that a path is writable and returns the expanded path""" if path is None: return outfile = os.path.expanduser(os.path.expandvars(path)) if not os.access(os.path.dirname(os.path.abspath(outfile)), os.W_OK): raise ValueError('Unable to write to file: %s' % outfile) return outfile def is_parsed_result_successful(parsed_result): """Returns True if a parsed result is successful""" return parsed_result['ResponseMetadata']['HTTPStatusCode'] < 300 class OverrideRequiredArgsArgument(CustomArgument): """An argument that if specified makes all other arguments not required By not required, it refers to not having an error thrown when the parser does not find an argument that is required on the command line. To obtain this argument's property of ignoring required arguments, subclass from this class and fill out the ``ARG_DATA`` parameter as described below. Note this class is really only useful for subclassing. """ # ``ARG_DATA`` follows the same format as a member of ``ARG_TABLE`` in # ``BasicCommand`` class as specified in # ``awscli/customizations/commands.py``. # # For example, an ``ARG_DATA`` variable would be filled out as: # # ARG_DATA = # {'name': 'my-argument', # 'help_text': 'This is argument ensures the argument is specified' # 'no other arguments are required'} ARG_DATA = {'name': 'no-required-args'} def __init__(self, session): self._session = session self._register_argument_action() super(OverrideRequiredArgsArgument, self).__init__(**self.ARG_DATA) def _register_argument_action(self): self._session.register('before-building-argument-table-parser', self.override_required_args) def override_required_args(self, argument_table, args, **kwargs): name_in_cmdline = '--' + self.name # Set all ``Argument`` objects in ``argument_table`` to not required # if this argument's name is present in the command line. 
if name_in_cmdline in args: for arg_name in argument_table.keys(): argument_table[arg_name].required = False class StatefulArgument(CustomArgument): """An argument that maintains a stateful value""" def __init__(self, *args, **kwargs): super(StatefulArgument, self).__init__(*args, **kwargs) self._value = None def add_to_params(self, parameters, value): super(StatefulArgument, self).add_to_params(parameters, value) self._value = value @property def value(self): return self._value class QueryOutFileArgument(StatefulArgument): """An argument that write a JMESPath query result to a file""" def __init__(self, session, name, query, after_call_event, perm, *args, **kwargs): self._session = session self._query = query self._after_call_event = after_call_event self._perm = perm # Generate default help_text if text was not provided. if 'help_text' not in kwargs: kwargs['help_text'] = ('Saves the command output contents of %s ' 'to the given filename' % self.query) super(QueryOutFileArgument, self).__init__(name, *args, **kwargs) @property def query(self): return self._query @property def perm(self): return self._perm def add_to_params(self, parameters, value): value = resolve_given_outfile_path(value) super(QueryOutFileArgument, self).add_to_params(parameters, value) if self.value is not None: # Only register the event to save the argument if it is set self._session.register(self._after_call_event, self.save_query) def save_query(self, parsed, **kwargs): """Saves the result of a JMESPath expression to a file. This method only saves the query data if the response code of the parsed result is < 300. """ if is_parsed_result_successful(parsed): contents = jmespath.search(self.query, parsed) with open(self.value, 'w') as fp: # Don't write 'None' to a file -- write ''. if contents is None: fp.write('') else: fp.write(contents) os.chmod(self.value, self.perm) awscli-1.17.14/awscli/customizations/scalarparse.py0000644000000000000000000000574113620325554022350 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """Change the scalar response parsing behavior for the AWS CLI. The underlying library used by botocore has some response parsing behavior that we'd like to modify in the AWS CLI. There are two: * Parsing binary content. * Parsing timestamps (dates) For the first option we can't print binary content to the terminal, so this customization leaves the binary content base64 encoded. If the user wants the binary content, they can then base64 decode the appropriate fields as needed. There's nothing currently done for timestamps, but this will change in the future. 
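As a first step in that direction, the code below already honors the
``cli_timestamp_format`` configuration setting: ``none`` (the default) keeps
timestamps exactly as they come across the wire, while ``iso8601`` re-renders
them in ISO 8601 format. For example (illustrative), a profile could opt in
with an entry such as ``cli_timestamp_format = iso8601`` in the shared config
file.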
""" from botocore.utils import parse_timestamp from botocore.exceptions import ProfileNotFound def register_scalar_parser(event_handlers): event_handlers.register_first( 'session-initialized', add_scalar_parsers) def identity(x): return x def iso_format(value): return parse_timestamp(value).isoformat() def add_timestamp_parser(session): factory = session.get_component('response_parser_factory') try: timestamp_format = session.get_scoped_config().get( 'cli_timestamp_format', 'none') except ProfileNotFound: # If a --profile is provided that does not exist, loading # a value from get_scoped_config will crash the CLI. # This function can be called as the first handler for # the session-initialized event, which happens before a # profile can be created, even if the command would have # successfully created a profile. Instead of crashing here # on a ProfileNotFound the CLI should just use 'none'. timestamp_format = 'none' if timestamp_format == 'none': # For backwards compatibility reasons, we replace botocore's timestamp # parser (which parses to a datetime.datetime object) with the # identity function which prints the date exactly the same as it comes # across the wire. timestamp_parser = identity elif timestamp_format == 'iso8601': timestamp_parser = iso_format else: raise ValueError('Unknown cli_timestamp_format value: %s, valid values' ' are "none" or "iso8601"' % timestamp_format) factory.set_parser_defaults(timestamp_parser=timestamp_parser) def add_scalar_parsers(session, **kwargs): factory = session.get_component('response_parser_factory') factory.set_parser_defaults(blob_parser=identity) add_timestamp_parser(session) awscli-1.17.14/awscli/customizations/sessionmanager.py0000644000000000000000000001026613620325554023064 0ustar rootroot00000000000000# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging import json import errno from subprocess import check_call from awscli.compat import ignore_user_entered_signals from awscli.clidriver import ServiceOperation, CLIOperationCaller logger = logging.getLogger(__name__) ERROR_MESSAGE = ( 'SessionManagerPlugin is not found. ', 'Please refer to SessionManager Documentation here: ', 'http://docs.aws.amazon.com/console/systems-manager/', 'session-manager-plugin-not-found' ) def register_ssm_session(event_handlers): event_handlers.register('building-command-table.ssm', add_custom_start_session) def add_custom_start_session(session, command_table, **kwargs): command_table['start-session'] = StartSessionCommand( name='start-session', parent_name='ssm', session=session, operation_model=session.get_service_model( 'ssm').operation_model('StartSession'), operation_caller=StartSessionCaller(session), ) class StartSessionCommand(ServiceOperation): def create_help_command(self): help_command = super( StartSessionCommand, self).create_help_command() # Change the output shape because the command provides no output. 
self._operation_model.output_shape = None return help_command class StartSessionCaller(CLIOperationCaller): def invoke(self, service_name, operation_name, parameters, parsed_globals): client = self._session.create_client( service_name, region_name=parsed_globals.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl) response = client.start_session(**parameters) session_id = response['SessionId'] region_name = client.meta.region_name # profile_name is used to passed on to session manager plugin # to fetch same profile credentials to make an api call in the plugin. # If no profile is passed then pass on empty string profile_name = self._session.profile \ if self._session.profile is not None else '' endpoint_url = client.meta.endpoint_url try: # ignore_user_entered_signals ignores these signals # because if signals which kills the process are not # captured would kill the foreground process but not the # background one. Capturing these would prevents process # from getting killed and these signals are input to plugin # and handling in there with ignore_user_entered_signals(): # call executable with necessary input check_call(["session-manager-plugin", json.dumps(response), region_name, "StartSession", profile_name, json.dumps(parameters), endpoint_url]) return 0 except OSError as ex: if ex.errno == errno.ENOENT: logger.debug('SessionManagerPlugin is not present', exc_info=True) # start-session api call returns response and starts the # session on ssm-agent and response is forwarded to # session-manager-plugin. If plugin is not present, terminate # is called so that service and ssm-agent terminates the # session to avoid zombie session active on ssm-agent for # default self terminate time client.terminate_session(SessionId=session_id) raise ValueError(''.join(ERROR_MESSAGE)) awscli-1.17.14/awscli/customizations/gamelift/0000755000000000000000000000000013620325757021264 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/gamelift/uploadbuild.py0000644000000000000000000001375113620325554024144 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import threading import contextlib import os import tempfile import sys import zipfile from s3transfer import S3Transfer from awscli.customizations.commands import BasicCommand from awscli.customizations.s3.utils import human_readable_size class UploadBuildCommand(BasicCommand): NAME = 'upload-build' DESCRIPTION = 'Upload a new build to AWS GameLift.' 
ARG_TABLE = [ {'name': 'name', 'required': True, 'help_text': 'The name of the build'}, {'name': 'build-version', 'required': True, 'help_text': 'The version of the build'}, {'name': 'build-root', 'required': True, 'help_text': 'The path to the directory containing the build to upload'}, {'name': 'operating-system', 'required': False, 'help_text': 'The operating system the build runs on'} ] def _run_main(self, args, parsed_globals): gamelift_client = self._session.create_client( 'gamelift', region_name=parsed_globals.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) # Validate the build directory if not validate_directory(args.build_root): sys.stderr.write( 'Failed to upload %s. ' 'The build root directory is empty or does not exist.\n' % (args.build_root) ) return 255 # Create a build based on the operating system given. create_build_kwargs = { 'Name': args.name, 'Version': args.build_version } if args.operating_system: create_build_kwargs['OperatingSystem'] = args.operating_system response = gamelift_client.create_build(**create_build_kwargs) build_id = response['Build']['BuildId'] # Retrieve a set of credentials and the s3 bucket and key. response = gamelift_client.request_upload_credentials( BuildId=build_id) upload_credentials = response['UploadCredentials'] bucket = response['StorageLocation']['Bucket'] key = response['StorageLocation']['Key'] # Create the S3 Client for uploading the build based on the # credentials returned from creating the build. access_key = upload_credentials['AccessKeyId'] secret_key = upload_credentials['SecretAccessKey'] session_token = upload_credentials['SessionToken'] s3_client = self._session.create_client( 's3', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, region_name=parsed_globals.region, verify=parsed_globals.verify_ssl ) s3_transfer_mgr = S3Transfer(s3_client) try: fd, temporary_zipfile = tempfile.mkstemp('%s.zip' % build_id) zip_directory(temporary_zipfile, args.build_root) s3_transfer_mgr.upload_file( temporary_zipfile, bucket, key, callback=ProgressPercentage( temporary_zipfile, label='Uploading ' + args.build_root + ':' ) ) finally: os.close(fd) os.remove(temporary_zipfile) sys.stdout.write( 'Successfully uploaded %s to AWS GameLift\n' 'Build ID: %s\n' % (args.build_root, build_id)) return 0 def zip_directory(zipfile_name, source_root): source_root = os.path.abspath(source_root) with open(zipfile_name, 'wb') as f: zip_file = zipfile.ZipFile(f, 'w', zipfile.ZIP_DEFLATED, True) with contextlib.closing(zip_file) as zf: for root, dirs, files in os.walk(source_root): for filename in files: full_path = os.path.join(root, filename) relative_path = os.path.relpath( full_path, source_root) zf.write(full_path, relative_path) def validate_directory(source_root): # For Python 2.6 on Windows, passing an empty string equates to the # current directory, which is not intended behavior. if not source_root: return False # We walk the root because we want to validate there's at least one file # that exists recursively from the root directory for path, dirs, files in os.walk(source_root): if files: return True return False # TODO: Remove this class once available to CLI from s3transfer # docstring.
class ProgressPercentage(object): def __init__(self, filename, label=None): self._filename = filename self._label = label if self._label is None: self._label = self._filename self._size = float(os.path.getsize(filename)) self._seen_so_far = 0 self._lock = threading.Lock() def __call__(self, bytes_amount): with self._lock: self._seen_so_far += bytes_amount if self._size > 0: percentage = (self._seen_so_far / self._size) * 100 sys.stdout.write( "\r%s %s / %s (%.2f%%)" % ( self._label, human_readable_size(self._seen_so_far), human_readable_size(self._size), percentage ) ) sys.stdout.flush() awscli-1.17.14/awscli/customizations/gamelift/__init__.py0000644000000000000000000000201513620325554023366 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.gamelift.uploadbuild import UploadBuildCommand from awscli.customizations.gamelift.getlog import GetGameSessionLogCommand def register_gamelift_commands(event_emitter): event_emitter.register('building-command-table.gamelift', inject_commands) def inject_commands(command_table, session, **kwargs): command_table['upload-build'] = UploadBuildCommand(session) command_table['get-game-session-log'] = GetGameSessionLogCommand(session) awscli-1.17.14/awscli/customizations/gamelift/getlog.py0000644000000000000000000000405513620325554023116 0ustar rootroot00000000000000# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys from functools import partial from awscli.compat import urlopen from awscli.customizations.commands import BasicCommand class GetGameSessionLogCommand(BasicCommand): NAME = 'get-game-session-log' DESCRIPTION = 'Download a compressed log file for a game session.' ARG_TABLE = [ {'name': 'game-session-id', 'required': True, 'help_text': 'The game session ID'}, {'name': 'save-as', 'required': True, 'help_text': 'The filename to which the file should be saved (.zip)'} ] def _run_main(self, args, parsed_globals): client = self._session.create_client( 'gamelift', region_name=parsed_globals.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) # Retrieve a signed url. response = client.get_game_session_log_url( GameSessionId=args.game_session_id) url = response['PreSignedUrl'] # Retrieve the content from the presigned url and save it locally. 
contents = urlopen(url) sys.stdout.write( 'Downloading log archive for game session %s...\r' % args.game_session_id ) with open(args.save_as, 'wb') as f: for chunk in iter(partial(contents.read, 1024), b''): f.write(chunk) sys.stdout.write( 'Successfully downloaded log archive for game ' 'session %s to %s\n' % (args.game_session_id, args.save_as)) return 0 awscli-1.17.14/awscli/customizations/paginate.py0000644000000000000000000002731213620325554021636 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """This module has customizations to unify paging paramters. For any operation that can be paginated, we will: * Hide the service specific pagination params. This can vary across services and we're going to replace them with a consistent set of arguments. The arguments will still work, but they are not documented. This allows us to add a pagination config after the fact and still remain backwards compatible with users that were manually doing pagination. * Add a ``--starting-token`` and a ``--max-items`` argument. """ import logging from functools import partial from botocore import xform_name from botocore.exceptions import DataNotFoundError, PaginationError from botocore import model from awscli.arguments import BaseCLIArgument logger = logging.getLogger(__name__) STARTING_TOKEN_HELP = """

A token to specify where to start paginating. This is the NextToken from a previously truncated response.

For usage examples, see Pagination in the AWS Command Line Interface User Guide.

""" MAX_ITEMS_HELP = """

The total number of items to return in the command's output. If the total number of items available is more than the value specified, a NextToken is provided in the command's output. To resume pagination, provide the NextToken value in the starting-token argument of a subsequent command. Do not use the NextToken response element directly outside of the AWS CLI.

For usage examples, see Pagination in the AWS Command Line Interface User Guide.

""" PAGE_SIZE_HELP = """

The size of each page to get in the AWS service call. This does not affect the number of items returned in the command's output. Setting a smaller page size results in more calls to the AWS service, retrieving fewer items in each call. This can help prevent the AWS service calls from timing out.

For usage examples, see Pagination in the AWS Command Line Interface User Guide.

""" def register_pagination(event_handlers): event_handlers.register('building-argument-table', unify_paging_params) event_handlers.register_last('doc-description', add_paging_description) def get_paginator_config(session, service_name, operation_name): try: paginator_model = session.get_paginator_model(service_name) except DataNotFoundError: return None try: operation_paginator_config = paginator_model.get_paginator( operation_name) except ValueError: return None return operation_paginator_config def add_paging_description(help_command, **kwargs): # This customization is only applied to the description of # Operations, so we must filter out all other events. if not isinstance(help_command.obj, model.OperationModel): return service_name = help_command.obj.service_model.service_name paginator_config = get_paginator_config( help_command.session, service_name, help_command.obj.name) if not paginator_config: return help_command.doc.style.new_paragraph() help_command.doc.writeln( ('``%s`` is a paginated operation. Multiple API calls may be issued ' 'in order to retrieve the entire data set of results. You can ' 'disable pagination by providing the ``--no-paginate`` argument.') % help_command.name) # Only include result key information if it is present. if paginator_config.get('result_key'): queries = paginator_config['result_key'] if type(queries) is not list: queries = [queries] queries = ", ".join([('``%s``' % s) for s in queries]) help_command.doc.writeln( ('When using ``--output text`` and the ``--query`` argument on a ' 'paginated response, the ``--query`` argument must extract data ' 'from the results of the following query expressions: %s') % queries) def unify_paging_params(argument_table, operation_model, event_name, session, **kwargs): paginator_config = get_paginator_config( session, operation_model.service_model.service_name, operation_model.name) if paginator_config is None: # We only apply these customizations to paginated responses. return logger.debug("Modifying paging parameters for operation: %s", operation_model.name) _remove_existing_paging_arguments(argument_table, paginator_config) parsed_args_event = event_name.replace('building-argument-table.', 'operation-args-parsed.') shadowed_args = {} add_paging_argument(argument_table, 'starting-token', PageArgument('starting-token', STARTING_TOKEN_HELP, parse_type='string', serialized_name='StartingToken'), shadowed_args) input_members = operation_model.input_shape.members type_name = 'integer' if 'limit_key' in paginator_config: limit_key_shape = input_members[paginator_config['limit_key']] type_name = limit_key_shape.type_name if type_name not in PageArgument.type_map: raise TypeError( ('Unsupported pagination type {0} for operation {1}' ' and parameter {2}').format( type_name, operation_model.name, paginator_config['limit_key'])) add_paging_argument(argument_table, 'page-size', PageArgument('page-size', PAGE_SIZE_HELP, parse_type=type_name, serialized_name='PageSize'), shadowed_args) add_paging_argument(argument_table, 'max-items', PageArgument('max-items', MAX_ITEMS_HELP, parse_type=type_name, serialized_name='MaxItems'), shadowed_args) session.register( parsed_args_event, partial(check_should_enable_pagination, list(_get_all_cli_input_tokens(paginator_config)), shadowed_args, argument_table)) def add_paging_argument(argument_table, arg_name, argument, shadowed_args): if arg_name in argument_table: # If there's already an entry in the arg table for this argument, # this means we're shadowing an argument for this operation. 
We # need to store this later in case pagination is turned off because # we put these arguments back. # See the comment in check_should_enable_pagination() for more info. shadowed_args[arg_name] = argument_table[arg_name] argument_table[arg_name] = argument def check_should_enable_pagination(input_tokens, shadowed_args, argument_table, parsed_args, parsed_globals, **kwargs): normalized_paging_args = ['start_token', 'max_items'] for token in input_tokens: py_name = token.replace('-', '_') if getattr(parsed_args, py_name) is not None and \ py_name not in normalized_paging_args: # The user has specified a manual (undocumented) pagination arg. # We need to automatically turn pagination off. logger.debug("User has specified a manual pagination arg. " "Automatically setting --no-paginate.") parsed_globals.paginate = False if not parsed_globals.paginate: ensure_paging_params_not_set(parsed_args, shadowed_args) # Because pagination is now disabled, there's a chance that # we were shadowing arguments. For example, we inject a # --max-items argument in unify_paging_params(). If the # the operation also provides its own MaxItems (which we # expose as --max-items) then our custom pagination arg # was shadowing the customers arg. When we turn pagination # off we need to put back the original argument which is # what we're doing here. for key, value in shadowed_args.items(): argument_table[key] = value def ensure_paging_params_not_set(parsed_args, shadowed_args): paging_params = ['starting_token', 'page_size', 'max_items'] shadowed_params = [p.replace('-', '_') for p in shadowed_args.keys()] params_used = [p for p in paging_params if p not in shadowed_params and getattr(parsed_args, p, None)] if len(params_used) > 0: converted_params = ', '.join( ["--" + p.replace('_', '-') for p in params_used]) raise PaginationError( message="Cannot specify --no-paginate along with pagination " "arguments: %s" % converted_params) def _remove_existing_paging_arguments(argument_table, pagination_config): for cli_name in _get_all_cli_input_tokens(pagination_config): argument_table[cli_name]._UNDOCUMENTED = True def _get_all_cli_input_tokens(pagination_config): # Get all input tokens including the limit_key # if it exists. 
tokens = _get_input_tokens(pagination_config) for token_name in tokens: cli_name = xform_name(token_name, '-') yield cli_name if 'limit_key' in pagination_config: key_name = pagination_config['limit_key'] cli_name = xform_name(key_name, '-') yield cli_name def _get_input_tokens(pagination_config): tokens = pagination_config['input_token'] if not isinstance(tokens, list): return [tokens] return tokens def _get_cli_name(param_objects, token_name): for param in param_objects: if param.name == token_name: return param.cli_name.lstrip('-') class PageArgument(BaseCLIArgument): type_map = { 'string': str, 'integer': int, 'long': int, } def __init__(self, name, documentation, parse_type, serialized_name): self.argument_model = model.Shape('PageArgument', {'type': 'string'}) self._name = name self._serialized_name = serialized_name self._documentation = documentation self._parse_type = parse_type self._required = False @property def cli_name(self): return '--' + self._name @property def cli_type_name(self): return self._parse_type @property def required(self): return self._required @required.setter def required(self, value): self._required = value @property def documentation(self): return self._documentation def add_to_parser(self, parser): parser.add_argument(self.cli_name, dest=self.py_name, type=self.type_map[self._parse_type]) def add_to_params(self, parameters, value): if value is not None: pagination_config = parameters.get('PaginationConfig', {}) pagination_config[self._serialized_name] = value parameters['PaginationConfig'] = pagination_config awscli-1.17.14/awscli/customizations/servicecatalog/0000755000000000000000000000000013620325757022467 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/servicecatalog/utils.py0000644000000000000000000000222513620325554024175 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os def make_url(region, bucket_name, obj_path, version=None): """ This link describes the format of Path Style URLs http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro """ base = "https://s3.amazonaws.com" if region and region != "us-east-1": base = "https://s3-{0}.amazonaws.com".format(region) result = "{0}/{1}/{2}".format(base, bucket_name, obj_path) if version: result = "{0}?versionId={1}".format(result, version) return result def get_s3_path(file_path): return os.path.basename(file_path) awscli-1.17.14/awscli/customizations/servicecatalog/helptext.py0000644000000000000000000000363113620325554024674 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
See the License for the specific # language governing permissions and limitations under the License. TAGS = "Tags to associate with the new product." BUCKET_NAME = ("Name of the S3 bucket where the CloudFormation " "template will be uploaded") SUPPORT_DESCRIPTION = "Support information about the product" SUPPORT_EMAIL = "Contact email for product support" PA_NAME = "The name assigned to the provisioning artifact" PA_DESCRIPTION = "The text description of the provisioning artifact" PA_TYPE = "The type of the provisioning artifact" DISTRIBUTOR = "The distributor of the product" PRODUCT_ID = "The product identifier" PRODUCT_NAME = "The name of the product" OWNER = "The owner of the product" PRODUCT_TYPE = "The type of the product to create" PRODUCT_DESCRIPTION = "The text description of the product" PRODUCT_COMMAND_DESCRIPTION = ("Create a new product using a CloudFormation " "template specified as a local file path") PA_COMMAND_DESCRIPTION = ("Create a new provisioning artifact for the " "specified product using a CloudFormation template " "specified as a local file path") GENERATE_COMMAND = ("Generate a Service Catalog product or provisioning " "artifact using a CloudFormation template specified " "as a local file path") FILE_PATH = "A local file path that references the CloudFormation template" awscli-1.17.14/awscli/customizations/servicecatalog/generate.py0000644000000000000000000000266413620325554024636 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.commands import BasicCommand from awscli.customizations.servicecatalog import helptext from awscli.customizations.servicecatalog.generateproduct \ import GenerateProductCommand from awscli.customizations.servicecatalog.generateprovisioningartifact \ import GenerateProvisioningArtifactCommand class GenerateCommand(BasicCommand): NAME = "generate" DESCRIPTION = helptext.GENERATE_COMMAND SUBCOMMANDS = [ {'name': 'product', 'command_class': GenerateProductCommand}, {'name': 'provisioning-artifact', 'command_class': GenerateProvisioningArtifactCommand} ] def _run_main(self, parsed_args, parsed_globals): if parsed_args.subcommand is None: raise ValueError("usage: aws [options] <command> <subcommand> " "[parameters]\naws: error: too few arguments") awscli-1.17.14/awscli/customizations/servicecatalog/exceptions.py0000644000000000000000000000164413620325554025222 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License.
class ServiceCatalogCommandError(Exception): fmt = 'An unspecified error occurred' def __init__(self, **kwargs): msg = self.fmt.format(**kwargs) Exception.__init__(self, msg) self.kwargs = kwargs class InvalidParametersException(ServiceCatalogCommandError): fmt = "An error occurred (InvalidParametersException) : {message}" awscli-1.17.14/awscli/customizations/servicecatalog/generateprovisioningartifact.py0000644000000000000000000000644713620325554031026 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys from awscli.customizations.servicecatalog import helptext from awscli.customizations.servicecatalog.generatebase \ import GenerateBaseCommand from botocore.compat import json class GenerateProvisioningArtifactCommand(GenerateBaseCommand): NAME = 'provisioning-artifact' DESCRIPTION = helptext.PA_COMMAND_DESCRIPTION ARG_TABLE = [ { 'name': 'file-path', 'required': True, 'help_text': helptext.FILE_PATH }, { 'name': 'bucket-name', 'required': True, 'help_text': helptext.BUCKET_NAME }, { 'name': 'provisioning-artifact-name', 'required': True, 'help_text': helptext.PA_NAME }, { 'name': 'provisioning-artifact-description', 'required': True, 'help_text': helptext.PA_DESCRIPTION }, { 'name': 'provisioning-artifact-type', 'required': True, 'help_text': helptext.PA_TYPE, 'choices': [ 'CLOUD_FORMATION_TEMPLATE', 'MARKETPLACE_AMI', 'MARKETPLACE_CAR' ] }, { 'name': 'product-id', 'required': True, 'help_text': helptext.PRODUCT_ID } ] def _run_main(self, parsed_args, parsed_globals): super(GenerateProvisioningArtifactCommand, self)._run_main( parsed_args, parsed_globals) self.region = self.get_and_validate_region(parsed_globals) self.s3_url = self.create_s3_url(parsed_args.bucket_name, parsed_args.file_path) self.scs_client = self._session.create_client( 'servicecatalog', region_name=self.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) response = self.create_provisioning_artifact(parsed_args, self.s3_url) sys.stdout.write(json.dumps(response, indent=2, ensure_ascii=False)) return 0 def create_provisioning_artifact(self, parsed_args, s3_url): response = self.scs_client.create_provisioning_artifact( ProductId=parsed_args.product_id, Parameters={ 'Name': parsed_args.provisioning_artifact_name, 'Description': parsed_args.provisioning_artifact_description, 'Info': { 'LoadTemplateFromURL': s3_url }, 'Type': parsed_args.provisioning_artifact_type } ) if 'ResponseMetadata' in response: del response['ResponseMetadata'] return response awscli-1.17.14/awscli/customizations/servicecatalog/generatebase.py0000644000000000000000000000431613620325554025465 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. 
This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.commands import BasicCommand from awscli.customizations.servicecatalog.utils \ import make_url, get_s3_path from awscli.customizations.s3uploader import S3Uploader from awscli.customizations.servicecatalog import exceptions class GenerateBaseCommand(BasicCommand): def _run_main(self, parsed_args, parsed_globals): self.region = self.get_and_validate_region(parsed_globals) self.s3_client = self._session.create_client( 's3', region_name=self.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) self.s3_uploader = S3Uploader(self.s3_client, parsed_args.bucket_name, force_upload=True) try: self.s3_uploader.upload(parsed_args.file_path, get_s3_path(parsed_args.file_path)) except OSError as ex: raise RuntimeError("%s cannot be found" % parsed_args.file_path) def get_and_validate_region(self, parsed_globals): region = parsed_globals.region if region is None: region = self._session.get_config_variable('region') if region not in self._session.get_available_regions('servicecatalog'): raise exceptions.InvalidParametersException( message="Region {0} is not supported".format( parsed_globals.region)) return region def create_s3_url(self, bucket_name, file_path): return make_url(self.region, bucket_name, get_s3_path(file_path)) awscli-1.17.14/awscli/customizations/servicecatalog/__init__.py0000644000000000000000000000164013620325554024574 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.servicecatalog.generate \ import GenerateCommand def register_servicecatalog_commands(event_emitter): event_emitter.register('building-command-table.servicecatalog', inject_commands) def inject_commands(command_table, session, **kwargs): command_table['generate'] = GenerateCommand(session) awscli-1.17.14/awscli/customizations/servicecatalog/generateproduct.py0000644000000000000000000001275113620325554026235 0ustar rootroot00000000000000# Copyright 2012-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
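# A hypothetical invocation of the ``generate product`` subcommand defined in
# this module (all values are placeholders, not taken from the source):
#
#     aws servicecatalog generate product \
#         --product-name example-product \
#         --product-owner example-team \
#         --product-type CLOUD_FORMATION_TEMPLATE \
#         --provisioning-artifact-name v1 \
#         --provisioning-artifact-description "Initial version" \
#         --provisioning-artifact-type CLOUD_FORMATION_TEMPLATE \
#         --file-path ./template.yaml \
#         --bucket-name example-bucket
#
# The template at --file-path is uploaded to the named S3 bucket (see
# GenerateBaseCommand), and the resulting URL is passed to create_product as
# ProvisioningArtifactParameters.Info.LoadTemplateFromURL.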
import sys from awscli.customizations.servicecatalog import helptext from awscli.customizations.servicecatalog.generatebase \ import GenerateBaseCommand from botocore.compat import json class GenerateProductCommand(GenerateBaseCommand): NAME = "product" DESCRIPTION = helptext.PRODUCT_COMMAND_DESCRIPTION ARG_TABLE = [ { 'name': 'product-name', 'required': True, 'help_text': helptext.PRODUCT_NAME }, { 'name': 'product-owner', 'required': True, 'help_text': helptext.OWNER }, { 'name': 'product-type', 'required': True, 'help_text': helptext.PRODUCT_TYPE, 'choices': ['CLOUD_FORMATION_TEMPLATE', 'MARKETPLACE'] }, { 'name': 'product-description', 'required': False, 'help_text': helptext.PRODUCT_DESCRIPTION }, { 'name': 'product-distributor', 'required': False, 'help_text': helptext.DISTRIBUTOR }, { 'name': 'tags', 'required': False, 'schema': { 'type': 'array', 'items': { 'type': 'string' } }, 'default': [], 'synopsis': '--tags Key=key1,Value=value1 Key=key2,Value=value2', 'help_text': helptext.TAGS }, { 'name': 'file-path', 'required': True, 'help_text': helptext.FILE_PATH }, { 'name': 'bucket-name', 'required': True, 'help_text': helptext.BUCKET_NAME }, { 'name': 'support-description', 'required': False, 'help_text': helptext.SUPPORT_DESCRIPTION }, { 'name': 'support-email', 'required': False, 'help_text': helptext.SUPPORT_EMAIL }, { 'name': 'provisioning-artifact-name', 'required': True, 'help_text': helptext.PA_NAME }, { 'name': 'provisioning-artifact-description', 'required': True, 'help_text': helptext.PA_DESCRIPTION }, { 'name': 'provisioning-artifact-type', 'required': True, 'help_text': helptext.PA_TYPE, 'choices': [ 'CLOUD_FORMATION_TEMPLATE', 'MARKETPLACE_AMI', 'MARKETPLACE_CAR' ] } ] def _run_main(self, parsed_args, parsed_globals): super(GenerateProductCommand, self)._run_main(parsed_args, parsed_globals) self.region = self.get_and_validate_region(parsed_globals) self.s3_url = self.create_s3_url(parsed_args.bucket_name, parsed_args.file_path) self.scs_client = self._session.create_client( 'servicecatalog', region_name=self.region, endpoint_url=parsed_globals.endpoint_url, verify=parsed_globals.verify_ssl ) response = self.create_product(self.build_args(parsed_args, self.s3_url), parsed_globals) sys.stdout.write(json.dumps(response, indent=2, ensure_ascii=False)) return 0 def create_product(self, args, parsed_globals): response = self.scs_client.create_product(**args) if 'ResponseMetadata' in response: del response['ResponseMetadata'] return response def _extract_tags(self, args_tags): tags = [] for tag in args_tags: tags.append(dict(t.split('=') for t in tag.split(','))) return tags def build_args(self, parsed_args, s3_url): args = { "Name": parsed_args.product_name, "Owner": parsed_args.product_owner, "ProductType": parsed_args.product_type, "Tags": self._extract_tags(parsed_args.tags), "ProvisioningArtifactParameters": { 'Name': parsed_args.provisioning_artifact_name, 'Description': parsed_args.provisioning_artifact_description, 'Info': { 'LoadTemplateFromURL': s3_url }, 'Type': parsed_args.provisioning_artifact_type } } # Non-required args if parsed_args.support_description: args["SupportDescription"] = parsed_args.support_description if parsed_args.product_description: args["Description"] = parsed_args.product_description if parsed_args.support_email: args["SupportEmail"] = parsed_args.support_email if parsed_args.product_distributor: args["Distributor"] = parsed_args.product_distributor return args 
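The following is a minimal, self-contained sketch (not part of the awscli source itself) that mirrors the ``_extract_tags`` helper above, showing how the strings accepted by ``--tags`` (for example ``Key=key1,Value=value1``) are converted into the ``Tags`` list that ``build_args`` passes to ``create_product``:

    def extract_tags(args_tags):
        # Each element looks like "Key=key1,Value=value1"; split the pairs on
        # ',' and each "name=value" piece on '=' to build one dict per tag.
        tags = []
        for tag in args_tags:
            tags.append(dict(t.split('=') for t in tag.split(',')))
        return tags

    if __name__ == '__main__':
        sample = ['Key=team,Value=platform', 'Key=env,Value=prod']
        print(extract_tags(sample))
        # [{'Key': 'team', 'Value': 'platform'}, {'Key': 'env', 'Value': 'prod'}]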
awscli-1.17.14/awscli/customizations/rds.py0000644000000000000000000001017113620325554020631 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ This customization splits the modify-option-group into two separate commands: * ``add-option-group`` * ``remove-option-group`` In both commands the ``--options-to-remove`` and ``--options-to-add`` args will be renamed to just ``--options``. All the remaining args will be available in both commands (which proxy modify-option-group). """ from awscli.clidriver import ServiceOperation from awscli.clidriver import CLIOperationCaller from awscli.customizations import utils from awscli.customizations.commands import BasicCommand from awscli.customizations.utils import uni_print def register_rds_modify_split(cli): cli.register('building-command-table.rds', _building_command_table) cli.register('building-argument-table.rds.add-option-to-option-group', _rename_add_option) cli.register('building-argument-table.rds.remove-option-from-option-group', _rename_remove_option) def register_add_generate_db_auth_token(cli): cli.register('building-command-table.rds', _add_generate_db_auth_token) def _add_generate_db_auth_token(command_table, session, **kwargs): command = GenerateDBAuthTokenCommand(session) command_table['generate-db-auth-token'] = command def _rename_add_option(argument_table, **kwargs): utils.rename_argument(argument_table, 'options-to-include', new_name='options') del argument_table['options-to-remove'] def _rename_remove_option(argument_table, **kwargs): utils.rename_argument(argument_table, 'options-to-remove', new_name='options') del argument_table['options-to-include'] def _building_command_table(command_table, session, **kwargs): # Hooked up to building-command-table.rds # We don't need the modify-option-group operation. del command_table['modify-option-group'] # We're going to replace modify-option-group with two commands: # add-option-group and remove-option-group rds_model = session.get_service_model('rds') modify_operation_model = rds_model.operation_model('ModifyOptionGroup') command_table['add-option-to-option-group'] = ServiceOperation( parent_name='rds', name='add-option-to-option-group', operation_caller=CLIOperationCaller(session), session=session, operation_model=modify_operation_model) command_table['remove-option-from-option-group'] = ServiceOperation( parent_name='rds', name='remove-option-from-option-group', session=session, operation_model=modify_operation_model, operation_caller=CLIOperationCaller(session)) class GenerateDBAuthTokenCommand(BasicCommand): NAME = 'generate-db-auth-token' DESCRIPTION = ( 'Generates an auth token used to connect to a db with IAM credentials.' 
) ARG_TABLE = [ {'name': 'hostname', 'required': True, 'help_text': 'The hostname of the database to connect to.'}, {'name': 'port', 'cli_type_name': 'integer', 'required': True, 'help_text': 'The port number the database is listening on.'}, {'name': 'username', 'required': True, 'help_text': 'The username to log in as.'} ] def _run_main(self, parsed_args, parsed_globals): rds = self._session.create_client( 'rds', parsed_globals.region, parsed_globals.endpoint_url, parsed_globals.verify_ssl ) token = rds.generate_db_auth_token( DBHostname=parsed_args.hostname, Port=parsed_args.port, DBUsername=parsed_args.username ) uni_print(token) uni_print('\n') return 0 awscli-1.17.14/awscli/customizations/s3/0000755000000000000000000000000013620325757020021 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/s3/fileinfo.py0000644000000000000000000001013013620325554022154 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. class FileInfo(object): """This class contains important details related to performing a task. It can perform operations such as ``upload``, ``download``, ``copy``, ``delete``, ``move``. Similarly to ``TaskInfo`` objects attributes like ``session`` need to be set in order to perform operations. :param dest: the destination path :type dest: string :param compare_key: the name of the file relative to the specified directory/prefix. This variable is used when performing synching or if the destination file is adopting the source file's name. :type compare_key: string :param size: The size of the file in bytes. :type size: integer :param last_update: the local time of last modification. :type last_update: datetime object :param dest_type: if the destination is s3 or local. :param dest_type: string :param parameters: a dictionary of important values this is assigned in the ``BasicTask`` object. :param associated_response_data: The response data used by the ``FileGenerator`` to create this task. It is either an dictionary from the list of a ListObjects or the response from a HeadObject. It will only be filled if the task was generated from an S3 bucket. """ def __init__(self, src, dest=None, compare_key=None, size=None, last_update=None, src_type=None, dest_type=None, operation_name=None, client=None, parameters=None, source_client=None, is_stream=False, associated_response_data=None): self.src = src self.src_type = src_type self.operation_name = operation_name self.client = client self.dest = dest self.dest_type = dest_type self.compare_key = compare_key self.size = size self.last_update = last_update # Usually inject ``parameters`` from ``BasicTask`` class. 
self.parameters = {} if parameters is not None: self.parameters = parameters self.source_client = source_client self.is_stream = is_stream self.associated_response_data = associated_response_data def is_glacier_compatible(self): """Determines if a file info object is glacier compatible Operations will fail if the S3 object has a storage class of GLACIER and it involves copying from S3 to S3, downloading from S3, or moving where S3 is the source (the delete will actually succeed, but we do not want fail to transfer the file and then successfully delete it). :returns: True if the FileInfo's operation will not fail because the operation is on a glacier object. False if it will fail. """ if self._is_glacier_object(self.associated_response_data): if self.operation_name in ['copy', 'download']: return False elif self.operation_name == 'move': if self.src_type == 's3': return False return True def _is_glacier_object(self, response_data): if response_data: if response_data.get('StorageClass') == 'GLACIER' and \ not self._is_restored(response_data): return True return False def _is_restored(self, response_data): # Returns True is this is a glacier object that has been # restored back to S3. # 'Restore' looks like: 'ongoing-request="false", expiry-date="..."' return 'ongoing-request="false"' in response_data.get('Restore', '') awscli-1.17.14/awscli/customizations/s3/utils.py0000644000000000000000000006602113620325554021533 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import argparse import logging from datetime import datetime import mimetypes import errno import os import re import time from collections import namedtuple, deque from dateutil.parser import parse from dateutil.tz import tzlocal, tzutc from s3transfer.subscribers import BaseSubscriber from awscli.compat import bytes_print from awscli.compat import queue LOGGER = logging.getLogger(__name__) HUMANIZE_SUFFIXES = ('KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB') EPOCH_TIME = datetime(1970, 1, 1, tzinfo=tzutc()) # Maximum object size allowed in S3. # See: http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html MAX_UPLOAD_SIZE = 5 * (1024 ** 4) SIZE_SUFFIX = { 'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3, 'tb': 1024 ** 4, 'kib': 1024, 'mib': 1024 ** 2, 'gib': 1024 ** 3, 'tib': 1024 ** 4, } _S3_ACCESSPOINT_TO_BUCKET_KEY_REGEX = re.compile( r'^(?Parn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[:/][^/]+)/?' r'(?P.*)$' ) def human_readable_size(value): """Convert an size in bytes into a human readable format. For example:: >>> human_readable_size(1) '1 Byte' >>> human_readable_size(10) '10 Bytes' >>> human_readable_size(1024) '1.0 KiB' >>> human_readable_size(1024 * 1024) '1.0 MiB' :param value: The size in bytes :return: The size in a human readable format based on base-2 units. 
""" one_decimal_point = '%.1f' base = 1024 bytes_int = float(value) if bytes_int == 1: return '1 Byte' elif bytes_int < base: return '%d Bytes' % bytes_int for i, suffix in enumerate(HUMANIZE_SUFFIXES): unit = base ** (i+2) if round((bytes_int / unit) * base) < base: return '%.1f %s' % ((base * bytes_int / unit), suffix) def human_readable_to_bytes(value): """Converts a human readable size to bytes. :param value: A string such as "10MB". If a suffix is not included, then the value is assumed to be an integer representing the size in bytes. :returns: The converted value in bytes as an integer """ value = value.lower() if value[-2:] == 'ib': # Assume IEC suffix. suffix = value[-3:].lower() else: suffix = value[-2:].lower() has_size_identifier = ( len(value) >= 2 and suffix in SIZE_SUFFIX) if not has_size_identifier: try: return int(value) except ValueError: raise ValueError("Invalid size value: %s" % value) else: multiplier = SIZE_SUFFIX[suffix] return int(value[:-len(suffix)]) * multiplier class AppendFilter(argparse.Action): """ This class is used as an action when parsing the parameters. Specifically it is used for actions corresponding to exclude and include filters. What it does is that it appends a list consisting of the name of the parameter and its value onto a list containing these [parameter, value] lists. In this case, the name of the parameter will either be --include or --exclude and the value will be the rule to apply. This will format all of the rules inputted into the command line in a way compatible with the Filter class. Note that rules that appear later in the command line take preferance over rulers that appear earlier. """ def __call__(self, parser, namespace, values, option_string=None): filter_list = getattr(namespace, self.dest) if filter_list: filter_list.append([option_string, values[0]]) else: filter_list = [[option_string, values[0]]] setattr(namespace, self.dest, filter_list) class CreateDirectoryError(Exception): pass class StablePriorityQueue(queue.Queue): """Priority queue that maintains FIFO order for same priority items. This class was written to handle the tasks created in awscli.customizations.s3.tasks, but it's possible to use this class outside of that context. In order for this to be the case, the following conditions should be met: * Objects that are queued should have a PRIORITY attribute. This should be an integer value not to exceed the max_priority value passed into the ``__init__``. Objects with lower priority numbers are retrieved before objects with higher priority numbers. * A relatively small max_priority should be chosen. ``get()`` calls are O(max_priority). Any object that does not have a ``PRIORITY`` attribute or whose priority exceeds ``max_priority`` will be queued at the highest (least important) priority available. 
""" def __init__(self, maxsize=0, max_priority=20): queue.Queue.__init__(self, maxsize=maxsize) self.priorities = [deque([]) for i in range(max_priority + 1)] self.default_priority = max_priority def _qsize(self): size = 0 for bucket in self.priorities: size += len(bucket) return size def _put(self, item): priority = min(getattr(item, 'PRIORITY', self.default_priority), self.default_priority) self.priorities[priority].append(item) def _get(self): for bucket in self.priorities: if not bucket: continue return bucket.popleft() def find_bucket_key(s3_path): """ This is a helper function that given an s3 path such that the path is of the form: bucket/key It will return the bucket and the key represented by the s3 path """ match = _S3_ACCESSPOINT_TO_BUCKET_KEY_REGEX.match(s3_path) if match: return match.group('bucket'), match.group('key') s3_components = s3_path.split('/', 1) bucket = s3_components[0] s3_key = '' if len(s3_components) > 1: s3_key = s3_components[1] return bucket, s3_key def split_s3_bucket_key(s3_path): """Split s3 path into bucket and key prefix. This will also handle the s3:// prefix. :return: Tuple of ('bucketname', 'keyname') """ if s3_path.startswith('s3://'): s3_path = s3_path[5:] return find_bucket_key(s3_path) def get_file_stat(path): """ This is a helper function that given a local path return the size of the file in bytes and time of last modification. """ try: stats = os.stat(path) except IOError as e: raise ValueError('Could not retrieve file stat of "%s": %s' % ( path, e)) try: update_time = datetime.fromtimestamp(stats.st_mtime, tzlocal()) except (ValueError, OSError, OverflowError): # Python's fromtimestamp raises value errors when the timestamp is out # of range of the platform's C localtime() function. This can cause # issues when syncing from systems with a wide range of valid # timestamps to systems with a lower range. Some systems support # 64-bit timestamps, for instance, while others only support 32-bit. # We don't want to fail in these cases, so instead we pass along none. update_time = None return stats.st_size, update_time def find_dest_path_comp_key(files, src_path=None): """ This is a helper function that determines the destination path and compare key given parameters received from the ``FileFormat`` class. """ src = files['src'] dest = files['dest'] src_type = src['type'] dest_type = dest['type'] if src_path is None: src_path = src['path'] sep_table = {'s3': '/', 'local': os.sep} if files['dir_op']: rel_path = src_path[len(src['path']):] else: rel_path = src_path.split(sep_table[src_type])[-1] compare_key = rel_path.replace(sep_table[src_type], '/') if files['use_src_name']: dest_path = dest['path'] dest_path += rel_path.replace(sep_table[src_type], sep_table[dest_type]) else: dest_path = dest['path'] return dest_path, compare_key def create_warning(path, error_message, skip_file=True): """ This creates a ``PrintTask`` for whenever a warning is to be thrown. """ print_string = "warning: " if skip_file: print_string = print_string + "Skipping file " + path + ". " print_string = print_string + error_message warning_message = WarningResult(message=print_string, error=False, warning=True) return warning_message class StdoutBytesWriter(object): """ This class acts as a file-like object that performs the bytes_print function on write. """ def __init__(self, stdout=None): self._stdout = stdout def write(self, b): """ Writes data to stdout as bytes. 
:param b: data to write """ bytes_print(b, self._stdout) def guess_content_type(filename): """Given a filename, guess it's content type. If the type cannot be guessed, a value of None is returned. """ try: return mimetypes.guess_type(filename)[0] # This catches a bug in the mimetype libary where some MIME types # specifically on windows machines cause a UnicodeDecodeError # because the MIME type in the Windows registery has an encoding # that cannot be properly encoded using the default system encoding. # https://bugs.python.org/issue9291 # # So instead of hard failing, just log the issue and fall back to the # default guessed content type of None. except UnicodeDecodeError: LOGGER.debug( 'Unable to guess content type for %s due to ' 'UnicodeDecodeError: ', filename, exc_info=True ) def relative_path(filename, start=os.path.curdir): """Cross platform relative path of a filename. If no relative path can be calculated (i.e different drives on Windows), then instead of raising a ValueError, the absolute path is returned. """ try: dirname, basename = os.path.split(filename) relative_dir = os.path.relpath(dirname, start) return os.path.join(relative_dir, basename) except ValueError: return os.path.abspath(filename) def set_file_utime(filename, desired_time): """ Set the utime of a file, and if it fails, raise a more explicit error. :param filename: the file to modify :param desired_time: the epoch timestamp to set for atime and mtime. :raises: SetFileUtimeError: if you do not have permission (errno 1) :raises: OSError: for all errors other than errno 1 """ try: os.utime(filename, (desired_time, desired_time)) except OSError as e: # Only raise a more explicit exception when it is a permission issue. if e.errno != errno.EPERM: raise e raise SetFileUtimeError( ("The file was downloaded, but attempting to modify the " "utime of the file failed. Is the file owned by another user?")) class SetFileUtimeError(Exception): pass def _date_parser(date_string): return parse(date_string).astimezone(tzlocal()) class BucketLister(object): """List keys in a bucket.""" def __init__(self, client, date_parser=_date_parser): self._client = client self._date_parser = date_parser def list_objects(self, bucket, prefix=None, page_size=None, extra_args=None): kwargs = {'Bucket': bucket, 'PaginationConfig': {'PageSize': page_size}} if prefix is not None: kwargs['Prefix'] = prefix if extra_args is not None: kwargs.update(extra_args) paginator = self._client.get_paginator('list_objects_v2') pages = paginator.paginate(**kwargs) for page in pages: contents = page.get('Contents', []) for content in contents: source_path = bucket + '/' + content['Key'] content['LastModified'] = self._date_parser( content['LastModified']) yield source_path, content class PrintTask(namedtuple('PrintTask', ['message', 'error', 'total_parts', 'warning'])): def __new__(cls, message, error=False, total_parts=None, warning=None): """ :param message: An arbitrary string associated with the entry. This can be used to communicate the result of the task. :param error: Boolean indicating a failure. :param total_parts: The total number of parts for multipart transfers. 
:param warning: Boolean indicating a warning """ return super(PrintTask, cls).__new__(cls, message, error, total_parts, warning) WarningResult = PrintTask class RequestParamsMapper(object): """A utility class that maps CLI params to request params Each method in the class maps to a particular operation and will set the request parameters depending on the operation and CLI parameters provided. For each of the class's methods the parameters are as follows: :type request_params: dict :param request_params: A dictionary to be filled out with the appropriate parameters for the specified client operation using the current CLI parameters :type cli_params: dict :param cli_params: A dictionary of the current CLI params that will be used to generate the request parameters for the specified operation For example, take the mapping of request parameters for PutObject:: >>> cli_request_params = {'sse': 'AES256', 'storage_class': 'GLACIER'} >>> request_params = {} >>> RequestParamsMapper.map_put_object_params( request_params, cli_request_params) >>> print(request_params) {'StorageClass': 'GLACIER', 'ServerSideEncryption': 'AES256'} Note that existing parameters in ``request_params`` will be overriden if a parameter in ``cli_params`` maps to the existing parameter. """ @classmethod def map_put_object_params(cls, request_params, cli_params): """Map CLI params to PutObject request params""" cls._set_general_object_params(request_params, cli_params) cls._set_metadata_params(request_params, cli_params) cls._set_sse_request_params(request_params, cli_params) cls._set_sse_c_request_params(request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_get_object_params(cls, request_params, cli_params): """Map CLI params to GetObject request params""" cls._set_sse_c_request_params(request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_copy_object_params(cls, request_params, cli_params): """Map CLI params to CopyObject request params""" cls._set_general_object_params(request_params, cli_params) cls._set_metadata_directive_param(request_params, cli_params) cls._set_metadata_params(request_params, cli_params) cls._auto_populate_metadata_directive(request_params) cls._set_sse_request_params(request_params, cli_params) cls._set_sse_c_and_copy_source_request_params( request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_head_object_params(cls, request_params, cli_params): """Map CLI params to HeadObject request params""" cls._set_sse_c_request_params(request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_create_multipart_upload_params(cls, request_params, cli_params): """Map CLI params to CreateMultipartUpload request params""" cls._set_general_object_params(request_params, cli_params) cls._set_sse_request_params(request_params, cli_params) cls._set_sse_c_request_params(request_params, cli_params) cls._set_metadata_params(request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_upload_part_params(cls, request_params, cli_params): """Map CLI params to UploadPart request params""" cls._set_sse_c_request_params(request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_upload_part_copy_params(cls, request_params, cli_params): """Map CLI params to UploadPartCopy request params""" cls._set_sse_c_and_copy_source_request_params( 
request_params, cli_params) cls._set_request_payer_param(request_params, cli_params) @classmethod def map_delete_object_params(cls, request_params, cli_params): cls._set_request_payer_param(request_params, cli_params) @classmethod def map_list_objects_v2_params(cls, request_params, cli_params): cls._set_request_payer_param(request_params, cli_params) @classmethod def _set_request_payer_param(cls, request_params, cli_params): if cli_params.get('request_payer'): request_params['RequestPayer'] = cli_params['request_payer'] @classmethod def _set_general_object_params(cls, request_params, cli_params): # Parameters set in this method should be applicable to the following # operations involving objects: PutObject, CopyObject, and # CreateMultipartUpload. general_param_translation = { 'acl': 'ACL', 'storage_class': 'StorageClass', 'website_redirect': 'WebsiteRedirectLocation', 'content_type': 'ContentType', 'cache_control': 'CacheControl', 'content_disposition': 'ContentDisposition', 'content_encoding': 'ContentEncoding', 'content_language': 'ContentLanguage', 'expires': 'Expires' } for cli_param_name in general_param_translation: if cli_params.get(cli_param_name): request_param_name = general_param_translation[cli_param_name] request_params[request_param_name] = cli_params[cli_param_name] cls._set_grant_params(request_params, cli_params) @classmethod def _set_grant_params(cls, request_params, cli_params): if cli_params.get('grants'): for grant in cli_params['grants']: try: permission, grantee = grant.split('=', 1) except ValueError: raise ValueError('grants should be of the form ' 'permission=principal') request_params[cls._permission_to_param(permission)] = grantee @classmethod def _permission_to_param(cls, permission): if permission == 'read': return 'GrantRead' if permission == 'full': return 'GrantFullControl' if permission == 'readacl': return 'GrantReadACP' if permission == 'writeacl': return 'GrantWriteACP' raise ValueError('permission must be one of: ' 'read|readacl|writeacl|full') @classmethod def _set_metadata_params(cls, request_params, cli_params): if cli_params.get('metadata'): request_params['Metadata'] = cli_params['metadata'] @classmethod def _auto_populate_metadata_directive(cls, request_params): if request_params.get('Metadata') and \ not request_params.get('MetadataDirective'): request_params['MetadataDirective'] = 'REPLACE' @classmethod def _set_metadata_directive_param(cls, request_params, cli_params): if cli_params.get('metadata_directive'): request_params['MetadataDirective'] = cli_params[ 'metadata_directive'] @classmethod def _set_sse_request_params(cls, request_params, cli_params): if cli_params.get('sse'): request_params['ServerSideEncryption'] = cli_params['sse'] if cli_params.get('sse_kms_key_id'): request_params['SSEKMSKeyId'] = cli_params['sse_kms_key_id'] @classmethod def _set_sse_c_request_params(cls, request_params, cli_params): if cli_params.get('sse_c'): request_params['SSECustomerAlgorithm'] = cli_params['sse_c'] request_params['SSECustomerKey'] = cli_params['sse_c_key'] @classmethod def _set_sse_c_copy_source_request_params(cls, request_params, cli_params): if cli_params.get('sse_c_copy_source'): request_params['CopySourceSSECustomerAlgorithm'] = cli_params[ 'sse_c_copy_source'] request_params['CopySourceSSECustomerKey'] = cli_params[ 'sse_c_copy_source_key'] @classmethod def _set_sse_c_and_copy_source_request_params(cls, request_params, cli_params): cls._set_sse_c_request_params(request_params, cli_params) 
cls._set_sse_c_copy_source_request_params(request_params, cli_params) class ProvideSizeSubscriber(BaseSubscriber): """ A subscriber which provides the transfer size before it's queued. """ def __init__(self, size): self.size = size def on_queued(self, future, **kwargs): future.meta.provide_transfer_size(self.size) # TODO: Eventually port this down to the BaseSubscriber or a new subscriber # class in s3transfer. The functionality is very convenient but may need # some further design decisions to make it a feature in s3transfer. class OnDoneFilteredSubscriber(BaseSubscriber): """Subscriber that differentiates between successes and failures It is really a convenience class so developers do not have to have to constantly remember to have a general try/except around future.result() """ def on_done(self, future, **kwargs): future_exception = None try: future.result() except Exception as e: future_exception = e # If the result propogates an error, call the on_failure # method instead. if future_exception: self._on_failure(future, future_exception) else: self._on_success(future) def _on_success(self, future): pass def _on_failure(self, future, e): pass class DeleteSourceSubscriber(OnDoneFilteredSubscriber): """A subscriber which deletes the source of the transfer.""" def _on_success(self, future): try: self._delete_source(future) except Exception as e: future.set_exception(e) def _delete_source(self, future): raise NotImplementedError('_delete_source()') class DeleteSourceObjectSubscriber(DeleteSourceSubscriber): """A subscriber which deletes an object.""" def __init__(self, client): self._client = client def _get_bucket(self, call_args): return call_args.bucket def _get_key(self, call_args): return call_args.key def _delete_source(self, future): call_args = future.meta.call_args delete_object_kwargs = { 'Bucket': self._get_bucket(call_args), 'Key': self._get_key(call_args) } if call_args.extra_args.get('RequestPayer'): delete_object_kwargs['RequestPayer'] = call_args.extra_args[ 'RequestPayer'] self._client.delete_object(**delete_object_kwargs) class DeleteCopySourceObjectSubscriber(DeleteSourceObjectSubscriber): """A subscriber which deletes the copy source.""" def _get_bucket(self, call_args): return call_args.copy_source['Bucket'] def _get_key(self, call_args): return call_args.copy_source['Key'] class DeleteSourceFileSubscriber(DeleteSourceSubscriber): """A subscriber which deletes a file.""" def _delete_source(self, future): os.remove(future.meta.call_args.fileobj) class BaseProvideContentTypeSubscriber(BaseSubscriber): """A subscriber that provides content type when creating s3 objects""" def on_queued(self, future, **kwargs): guessed_type = guess_content_type(self._get_filename(future)) if guessed_type is not None: future.meta.call_args.extra_args['ContentType'] = guessed_type def _get_filename(self, future): raise NotImplementedError('_get_filename()') class ProvideUploadContentTypeSubscriber(BaseProvideContentTypeSubscriber): def _get_filename(self, future): return future.meta.call_args.fileobj class ProvideCopyContentTypeSubscriber(BaseProvideContentTypeSubscriber): def _get_filename(self, future): return future.meta.call_args.copy_source['Key'] class ProvideLastModifiedTimeSubscriber(OnDoneFilteredSubscriber): """Sets utime for a downloaded file""" def __init__(self, last_modified_time, result_queue): self._last_modified_time = last_modified_time self._result_queue = result_queue def _on_success(self, future, **kwargs): filename = future.meta.call_args.fileobj try: last_update_tuple = 
self._last_modified_time.timetuple() mod_timestamp = time.mktime(last_update_tuple) set_file_utime(filename, int(mod_timestamp)) except Exception as e: warning_message = ( 'Successfully Downloaded %s but was unable to update the ' 'last modified time. %s' % (filename, e)) self._result_queue.put(create_warning(filename, warning_message)) class DirectoryCreatorSubscriber(BaseSubscriber): """Creates a directory to download if it does not exist""" def on_queued(self, future, **kwargs): d = os.path.dirname(future.meta.call_args.fileobj) try: if not os.path.exists(d): os.makedirs(d) except OSError as e: if not e.errno == errno.EEXIST: raise CreateDirectoryError( "Could not create directory %s: %s" % (d, e)) class NonSeekableStream(object): """Wrap a file like object as a non seekable stream. This class is used to wrap an existing file like object such that it only has a ``.read()`` method. There are some file like objects that aren't truly seekable but appear to be. For example, on windows, sys.stdin has a ``seek()`` method, and calling ``seek(0)`` even appears to work. However, subsequent ``.read()`` calls will just return an empty string. Consumers of these file like object have no way of knowing if these files are truly seekable or not, so this class can be used to force non-seekable behavior when you know for certain that a fileobj is non seekable. """ def __init__(self, fileobj): self._fileobj = fileobj def read(self, amt=None): if amt is None: return self._fileobj.read() else: return self._fileobj.read(amt) awscli-1.17.14/awscli/customizations/s3/fileinfobuilder.py0000644000000000000000000000617313620325554023537 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.s3.fileinfo import FileInfo class FileInfoBuilder(object): """ This class takes a ``FileBase`` object's attributes and generates a ``FileInfo`` object so that the operation can be performed. """ def __init__(self, client, source_client=None, parameters = None, is_stream=False): self._client = client self._source_client = client if source_client is not None: self._source_client = source_client self._parameters = parameters self._is_stream = is_stream def call(self, files): for file_base in files: file_info = self._inject_info(file_base) yield file_info def _inject_info(self, file_base): file_info_attr = {} file_info_attr['src'] = file_base.src file_info_attr['dest'] = file_base.dest file_info_attr['compare_key'] = file_base.compare_key file_info_attr['size'] = file_base.size file_info_attr['last_update'] = file_base.last_update file_info_attr['src_type'] = file_base.src_type file_info_attr['dest_type'] = file_base.dest_type file_info_attr['operation_name'] = file_base.operation_name file_info_attr['parameters'] = self._parameters file_info_attr['is_stream'] = self._is_stream file_info_attr['associated_response_data'] = file_base.response_data # This is a bit quirky. The below conditional hinges on the --delete # flag being set, which only occurs during a sync command. 
The source # client in a sync delete refers to the source of the sync rather than # the source of the delete. What this means is that the client that # gets called during the delete process would point to the wrong region. # Normally this doesn't matter because DNS will re-route the request # to the correct region. In the case of s3v4 signing, however, this # would result in a failed delete. The conditional below fixes this # issue by swapping clients only in the case of a sync delete since # swapping which client is used in the delete function would then break # moving under s3v4. if (file_base.operation_name == 'delete' and self._parameters.get('delete')): file_info_attr['client'] = self._source_client file_info_attr['source_client'] = self._client else: file_info_attr['client'] = self._client file_info_attr['source_client'] = self._source_client return FileInfo(**file_info_attr) awscli-1.17.14/awscli/customizations/s3/filegenerator.py0000644000000000000000000003731013620325554023220 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import sys import stat from dateutil.parser import parse from dateutil.tz import tzlocal from botocore.exceptions import ClientError from awscli.customizations.s3.utils import find_bucket_key, get_file_stat from awscli.customizations.s3.utils import BucketLister, create_warning, \ find_dest_path_comp_key, EPOCH_TIME from awscli.compat import six from awscli.compat import queue _open = open def is_special_file(path): """ This function checks to see if a special file. It checks if the file is a character special device, block special device, FIFO, or socket. """ mode = os.stat(path).st_mode # Character special device. if stat.S_ISCHR(mode): return True # Block special device if stat.S_ISBLK(mode): return True # FIFO. if stat.S_ISFIFO(mode): return True # Socket. if stat.S_ISSOCK(mode): return True return False def is_readable(path): """ This function checks to see if a file or a directory can be read. This is tested by performing an operation that requires read access on the file or the directory. """ if os.path.isdir(path): try: os.listdir(path) except (OSError, IOError): return False else: try: with _open(path, 'r') as fd: pass except (OSError, IOError): return False return True # This class is provided primarily to provide a detailed error message. class FileDecodingError(Exception): """Raised when there was an issue decoding the file.""" ADVICE = ( "Please check your locale settings. The filename was decoded as: %s\n" "On posix platforms, check the LC_CTYPE environment variable." % (sys.getfilesystemencoding()) ) def __init__(self, directory, filename): self.directory = directory self.file_name = filename self.error_message = ( 'There was an error trying to decode the the file %s in ' 'directory "%s". 
\n%s' % (repr(self.file_name), self.directory, self.ADVICE) ) super(FileDecodingError, self).__init__(self.error_message) class FileStat(object): def __init__(self, src, dest=None, compare_key=None, size=None, last_update=None, src_type=None, dest_type=None, operation_name=None, response_data=None): self.src = src self.dest = dest self.compare_key = compare_key self.size = size self.last_update = last_update self.src_type = src_type self.dest_type = dest_type self.operation_name = operation_name self.response_data = response_data class FileGenerator(object): """ This is a class the creates a generator to yield files based on information returned from the ``FileFormat`` class. It is universal in the sense that it will handle s3 files, local files, local directories, and s3 objects under the same common prefix. The generator yields corresponding ``FileInfo`` objects to send to a ``Comparator`` or ``S3Handler``. """ def __init__(self, client, operation_name, follow_symlinks=True, page_size=None, result_queue=None, request_parameters=None): self._client = client self.operation_name = operation_name self.follow_symlinks = follow_symlinks self.page_size = page_size self.result_queue = result_queue if not result_queue: self.result_queue = queue.Queue() self.request_parameters = {} if request_parameters is not None: self.request_parameters = request_parameters def call(self, files): """ This is the generalized function to yield the ``FileInfo`` objects. ``dir_op`` and ``use_src_name`` flags affect which files are used and ensure the proper destination paths and compare keys are formed. """ function_table = {'s3': self.list_objects, 'local': self.list_files} source = files['src']['path'] src_type = files['src']['type'] dest_type = files['dest']['type'] file_iterator = function_table[src_type](source, files['dir_op']) for src_path, extra_information in file_iterator: dest_path, compare_key = find_dest_path_comp_key(files, src_path) file_stat_kwargs = { 'src': src_path, 'dest': dest_path, 'compare_key': compare_key, 'src_type': src_type, 'dest_type': dest_type, 'operation_name': self.operation_name } self._inject_extra_information(file_stat_kwargs, extra_information) yield FileStat(**file_stat_kwargs) def _inject_extra_information(self, file_stat_kwargs, extra_information): src_type = file_stat_kwargs['src_type'] file_stat_kwargs['size'] = extra_information['Size'] file_stat_kwargs['last_update'] = extra_information['LastModified'] # S3 objects require the response data retrieved from HeadObject # and ListObject if src_type == 's3': file_stat_kwargs['response_data'] = extra_information def list_files(self, path, dir_op): """ This function yields the appropriate local file or local files under a directory depending on if the operation is on a directory. For directories a depth first search is implemented in order to follow the same sorted pattern as a s3 list objects operation outputs. It yields the file's source path, size, and last update """ join, isdir, isfile = os.path.join, os.path.isdir, os.path.isfile error, listdir = os.error, os.listdir if not self.should_ignore_file(path): if not dir_op: stats = self._safely_get_file_stats(path) if stats: yield stats else: # We need to list files in byte order based on the full # expanded path of the key: 'test/1/2/3.txt' However, # listdir() will only give us contents a single directory # at a time, so we'll get 'test'. At the same time we don't # want to load the entire list of files into memory. 
This # is handled by first going through the current directory # contents and adding the directory separator to any # directories. We can then sort the contents, # and ensure byte order. listdir_names = listdir(path) names = [] for name in listdir_names: if not self.should_ignore_file_with_decoding_warnings( path, name): file_path = join(path, name) if isdir(file_path): name = name + os.path.sep names.append(name) self.normalize_sort(names, os.sep, '/') for name in names: file_path = join(path, name) if isdir(file_path): # Anything in a directory will have a prefix of # this current directory and will come before the # remaining contents in this directory. This # means we need to recurse into this sub directory # before yielding the rest of this directory's # contents. for x in self.list_files(file_path, dir_op): yield x else: stats = self._safely_get_file_stats(file_path) if stats: yield stats def _safely_get_file_stats(self, file_path): try: size, last_update = get_file_stat(file_path) except (OSError, ValueError): self.triggers_warning(file_path) else: last_update = self._validate_update_time(last_update, file_path) return file_path, {'Size': size, 'LastModified': last_update} def _validate_update_time(self, update_time, path): # If the update time is None we know we ran into an invalid tiemstamp. if update_time is None: warning = create_warning( path=path, error_message="File has an invalid timestamp. Passing epoch " "time as timestamp.", skip_file=False) self.result_queue.put(warning) return EPOCH_TIME return update_time def normalize_sort(self, names, os_sep, character): """ The purpose of this function is to ensure that the same path seperator is used when sorting. In windows, the path operator is a backslash as opposed to a forward slash which can lead to differences in sorting between s3 and a windows machine. """ names.sort(key=lambda item: item.replace(os_sep, character)) def should_ignore_file_with_decoding_warnings(self, dirname, filename): """ We can get a UnicodeDecodeError if we try to listdir() and can't decode the contents with sys.getfilesystemencoding(). In this case listdir() returns the bytestring, which means that join(, ) could raise a UnicodeDecodeError. When this happens we warn using a FileDecodingError that provides more information into what's going on. """ if not isinstance(filename, six.text_type): decoding_error = FileDecodingError(dirname, filename) warning = create_warning(repr(filename), decoding_error.error_message) self.result_queue.put(warning) return True path = os.path.join(dirname, filename) return self.should_ignore_file(path) def should_ignore_file(self, path): """ This function checks whether a file should be ignored in the file generation process. This includes symlinks that are not to be followed and files that generate warnings. """ if not self.follow_symlinks: if os.path.isdir(path) and path.endswith(os.sep): # Trailing slash must be removed to check if it is a symlink. path = path[:-1] if os.path.islink(path): return True warning_triggered = self.triggers_warning(path) if warning_triggered: return True return False def triggers_warning(self, path): """ This function checks the specific types and properties of a file. If the file would cause trouble, the function adds a warning to the result queue to be printed out and returns a boolean value notify whether the file caused a warning to be generated. Files that generate warnings are skipped. 
Currently, this function checks for files that do not exist and files that the user does not have read access. """ if not os.path.exists(path): warning = create_warning(path, "File does not exist.") self.result_queue.put(warning) return True if is_special_file(path): warning = create_warning(path, ("File is character special device, " "block special device, FIFO, or " "socket.")) self.result_queue.put(warning) return True if not is_readable(path): warning = create_warning(path, "File/Directory is not readable.") self.result_queue.put(warning) return True return False def list_objects(self, s3_path, dir_op): """ This function yields the appropriate object or objects under a common prefix depending if the operation is on objects under a common prefix. It yields the file's source path, size, and last update. """ # Short circuit path: if we are not recursing into the s3 # bucket and a specific path was given, we can just yield # that path and not have to call any operation in s3. bucket, prefix = find_bucket_key(s3_path) if not dir_op and prefix: yield self._list_single_object(s3_path) else: lister = BucketLister(self._client) extra_args = self.request_parameters.get('ListObjectsV2', {}) for key in lister.list_objects(bucket=bucket, prefix=prefix, page_size=self.page_size, extra_args=extra_args): source_path, response_data = key if response_data['Size'] == 0 and source_path.endswith('/'): if self.operation_name == 'delete': # This is to filter out manually created folders # in S3. They have a size zero and would be # undesirably downloaded. Local directories # are automatically created when they do not # exist locally. But user should be able to # delete them. yield source_path, response_data elif not dir_op and s3_path != source_path: pass else: yield source_path, response_data def _list_single_object(self, s3_path): # When we know we're dealing with a single object, we can avoid # a ListObjects operation (which causes concern for anyone setting # IAM policies with the smallest set of permissions needed) and # instead use a HeadObject request. if self.operation_name == 'delete': # If the operation is just a single remote delete, there is # no need to run HeadObject on the S3 object as none of the # information gained from HeadObject is required to delete the # object. return s3_path, {'Size': None, 'LastModified': None} bucket, key = find_bucket_key(s3_path) try: params = {'Bucket': bucket, 'Key': key} params.update(self.request_parameters.get('HeadObject', {})) response = self._client.head_object(**params) except ClientError as e: # We want to try to give a more helpful error message. # This is what the customer is going to see so we want to # give as much detail as we have. if not e.response['Error']['Code'] == '404': raise # The key does not exist so we'll raise a more specific # error message here. response = e.response.copy() response['Error']['Message'] = 'Key "%s" does not exist' % key raise ClientError(response, 'HeadObject') response['Size'] = int(response.pop('ContentLength')) last_update = parse(response['LastModified']) response['LastModified'] = last_update.astimezone(tzlocal()) return s3_path, response awscli-1.17.14/awscli/customizations/s3/results.py0000644000000000000000000006400013620325554022067 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. 
A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from __future__ import division import logging import sys import threading import time from collections import namedtuple from collections import defaultdict from s3transfer.exceptions import CancelledError from s3transfer.exceptions import FatalError from s3transfer.subscribers import BaseSubscriber from awscli.compat import queue, ensure_text_type from awscli.customizations.s3.utils import relative_path from awscli.customizations.s3.utils import human_readable_size from awscli.customizations.utils import uni_print from awscli.customizations.s3.utils import WarningResult from awscli.customizations.s3.utils import OnDoneFilteredSubscriber LOGGER = logging.getLogger(__name__) BaseResult = namedtuple('BaseResult', ['transfer_type', 'src', 'dest']) def _create_new_result_cls(name, extra_fields=None, base_cls=BaseResult): # Creates a new namedtuple class that subclasses from BaseResult for the # benefit of filtering by type and ensuring particular base attrs. # NOTE: _fields is a public attribute that has an underscore to avoid # naming collisions for namedtuples: # https://docs.python.org/2/library/collections.html#collections.somenamedtuple._fields fields = list(base_cls._fields) if extra_fields: fields += extra_fields return type(name, (namedtuple(name, fields), base_cls), {}) QueuedResult = _create_new_result_cls('QueuedResult', ['total_transfer_size']) ProgressResult = _create_new_result_cls( 'ProgressResult', ['bytes_transferred', 'total_transfer_size', 'timestamp']) SuccessResult = _create_new_result_cls('SuccessResult') FailureResult = _create_new_result_cls('FailureResult', ['exception']) DryRunResult = _create_new_result_cls('DryRunResult') ErrorResult = namedtuple('ErrorResult', ['exception']) CtrlCResult = _create_new_result_cls('CtrlCResult', base_cls=ErrorResult) CommandResult = namedtuple( 'CommandResult', ['num_tasks_failed', 'num_tasks_warned']) FinalTotalSubmissionsResult = namedtuple( 'FinalTotalSubmissionsResult', ['total_submissions']) class ShutdownThreadRequest(object): pass class BaseResultSubscriber(OnDoneFilteredSubscriber): TRANSFER_TYPE = None def __init__(self, result_queue, transfer_type=None): """Subscriber to send result notifications during transfer process :param result_queue: The queue to place results to be processed later on. 
""" self._result_queue = result_queue self._result_kwargs_cache = {} self._transfer_type = transfer_type if transfer_type is None: self._transfer_type = self.TRANSFER_TYPE def on_queued(self, future, **kwargs): self._add_to_result_kwargs_cache(future) result_kwargs = self._result_kwargs_cache[future.meta.transfer_id] queued_result = QueuedResult(**result_kwargs) self._result_queue.put(queued_result) def on_progress(self, future, bytes_transferred, **kwargs): result_kwargs = self._result_kwargs_cache[future.meta.transfer_id] progress_result = ProgressResult( bytes_transferred=bytes_transferred, timestamp=time.time(), **result_kwargs) self._result_queue.put(progress_result) def _on_success(self, future): result_kwargs = self._on_done_pop_from_result_kwargs_cache(future) self._result_queue.put(SuccessResult(**result_kwargs)) def _on_failure(self, future, e): result_kwargs = self._on_done_pop_from_result_kwargs_cache(future) if isinstance(e, CancelledError): error_result_cls = CtrlCResult if isinstance(e, FatalError): error_result_cls = ErrorResult self._result_queue.put(error_result_cls(exception=e)) else: self._result_queue.put(FailureResult(exception=e, **result_kwargs)) def _add_to_result_kwargs_cache(self, future): src, dest = self._get_src_dest(future) result_kwargs = { 'transfer_type': self._transfer_type, 'src': src, 'dest': dest, 'total_transfer_size': future.meta.size } self._result_kwargs_cache[future.meta.transfer_id] = result_kwargs def _on_done_pop_from_result_kwargs_cache(self, future): result_kwargs = self._result_kwargs_cache.pop(future.meta.transfer_id) result_kwargs.pop('total_transfer_size') return result_kwargs def _get_src_dest(self, future): raise NotImplementedError('_get_src_dest()') class UploadResultSubscriber(BaseResultSubscriber): TRANSFER_TYPE = 'upload' def _get_src_dest(self, future): call_args = future.meta.call_args src = self._get_src(call_args.fileobj) dest = 's3://' + call_args.bucket + '/' + call_args.key return src, dest def _get_src(self, fileobj): return relative_path(fileobj) class UploadStreamResultSubscriber(UploadResultSubscriber): def _get_src(self, fileobj): return '-' class DownloadResultSubscriber(BaseResultSubscriber): TRANSFER_TYPE = 'download' def _get_src_dest(self, future): call_args = future.meta.call_args src = 's3://' + call_args.bucket + '/' + call_args.key dest = self._get_dest(call_args.fileobj) return src, dest def _get_dest(self, fileobj): return relative_path(fileobj) class DownloadStreamResultSubscriber(DownloadResultSubscriber): def _get_dest(self, fileobj): return '-' class CopyResultSubscriber(BaseResultSubscriber): TRANSFER_TYPE = 'copy' def _get_src_dest(self, future): call_args = future.meta.call_args copy_source = call_args.copy_source src = 's3://' + copy_source['Bucket'] + '/' + copy_source['Key'] dest = 's3://' + call_args.bucket + '/' + call_args.key return src, dest class DeleteResultSubscriber(BaseResultSubscriber): TRANSFER_TYPE = 'delete' def _get_src_dest(self, future): call_args = future.meta.call_args src = 's3://' + call_args.bucket + '/' + call_args.key return src, None class BaseResultHandler(object): """Base handler class to be called in the ResultProcessor""" def __call__(self, result): raise NotImplementedError('__call__()') class ResultRecorder(BaseResultHandler): """Records and track transfer statistics based on results receieved""" def __init__(self): self.bytes_transferred = 0 self.bytes_failed_to_transfer = 0 self.files_transferred = 0 self.files_failed = 0 self.files_warned = 0 self.errors = 0 
self.expected_bytes_transferred = 0 self.expected_files_transferred = 0 self.final_expected_files_transferred = None self.start_time = None self.bytes_transfer_speed = 0 self._ongoing_progress = defaultdict(int) self._ongoing_total_sizes = {} self._result_handler_map = { QueuedResult: self._record_queued_result, ProgressResult: self._record_progress_result, SuccessResult: self._record_success_result, FailureResult: self._record_failure_result, WarningResult: self._record_warning_result, ErrorResult: self._record_error_result, CtrlCResult: self._record_error_result, FinalTotalSubmissionsResult: self._record_final_expected_files, } def expected_totals_are_final(self): return ( self.final_expected_files_transferred == self.expected_files_transferred ) def __call__(self, result): """Record the result of an individual Result object""" self._result_handler_map.get(type(result), self._record_noop)( result=result) def _get_ongoing_dict_key(self, result): if not isinstance(result, BaseResult): raise ValueError( 'Any result using _get_ongoing_dict_key must subclass from ' 'BaseResult. Provided result is of type: %s' % type(result) ) key_parts = [] for result_property in [result.transfer_type, result.src, result.dest]: if result_property is not None: key_parts.append(ensure_text_type(result_property)) return u':'.join(key_parts) def _pop_result_from_ongoing_dicts(self, result): ongoing_key = self._get_ongoing_dict_key(result) total_progress = self._ongoing_progress.pop(ongoing_key, 0) total_file_size = self._ongoing_total_sizes.pop(ongoing_key, None) return total_progress, total_file_size def _record_noop(self, **kwargs): # If the result does not have a handler, then do nothing with it. pass def _record_queued_result(self, result, **kwargs): if self.start_time is None: self.start_time = time.time() total_transfer_size = result.total_transfer_size self._ongoing_total_sizes[ self._get_ongoing_dict_key(result)] = total_transfer_size # The total transfer size can be None if we do not know the size # immediately so do not add to the total right away. if total_transfer_size: self.expected_bytes_transferred += total_transfer_size self.expected_files_transferred += 1 def _record_progress_result(self, result, **kwargs): bytes_transferred = result.bytes_transferred self._update_ongoing_transfer_size_if_unknown(result) self._ongoing_progress[ self._get_ongoing_dict_key(result)] += bytes_transferred self.bytes_transferred += bytes_transferred # Since the start time is captured in the result recorder and # capture timestamps in the subscriber, there is a chance that if # a progress result gets created right after the queued result # gets created that the timestamp on the progress result is less # than the timestamp of when the result processor actually # processes that initial queued result. So this will avoid # negative progress being displayed or zero divison occuring. if result.timestamp > self.start_time: self.bytes_transfer_speed = self.bytes_transferred / ( result.timestamp - self.start_time) def _update_ongoing_transfer_size_if_unknown(self, result): # This is a special case when the transfer size was previous not # known but was provided in a progress result. ongoing_key = self._get_ongoing_dict_key(result) # First, check if the total size is None, meaning its size is # currently unknown. 
if self._ongoing_total_sizes[ongoing_key] is None: total_transfer_size = result.total_transfer_size # If the total size is no longer None that means we just learned # of the size so let's update the appropriate places with this # knowledge if result.total_transfer_size is not None: self._ongoing_total_sizes[ongoing_key] = total_transfer_size # Figure out how many bytes have been unaccounted for as # the recorder has been keeping track of how many bytes # it has seen so far and add it to the total expected amount. ongoing_progress = self._ongoing_progress[ongoing_key] unaccounted_bytes = total_transfer_size - ongoing_progress self.expected_bytes_transferred += unaccounted_bytes # If we still do not know what the total transfer size is # just update the expected bytes with the know bytes transferred # as we know at the very least, those bytes are expected. else: self.expected_bytes_transferred += result.bytes_transferred def _record_success_result(self, result, **kwargs): self._pop_result_from_ongoing_dicts(result) self.files_transferred += 1 def _record_failure_result(self, result, **kwargs): # If there was a failure, we want to account for the failure in # the count for bytes transferred by just adding on the remaining bytes # that did not get transferred. total_progress, total_file_size = self._pop_result_from_ongoing_dicts( result) if total_file_size is not None: progress_left = total_file_size - total_progress self.bytes_failed_to_transfer += progress_left self.files_failed += 1 self.files_transferred += 1 def _record_warning_result(self, **kwargs): self.files_warned += 1 def _record_error_result(self, **kwargs): self.errors += 1 def _record_final_expected_files(self, result, **kwargs): self.final_expected_files_transferred = result.total_submissions class ResultPrinter(BaseResultHandler): _FILES_REMAINING = "{remaining_files} file(s) remaining" _ESTIMATED_EXPECTED_TOTAL = "~{expected_total}" _STILL_CALCULATING_TOTALS = " (calculating...)" BYTE_PROGRESS_FORMAT = ( 'Completed {bytes_completed}/{expected_bytes_completed} ' '({transfer_speed}) with ' + _FILES_REMAINING ) FILE_PROGRESS_FORMAT = ( 'Completed {files_completed} file(s) with ' + _FILES_REMAINING ) SUCCESS_FORMAT = ( u'{transfer_type}: {transfer_location}' ) DRY_RUN_FORMAT = u'(dryrun) ' + SUCCESS_FORMAT FAILURE_FORMAT = ( u'{transfer_type} failed: {transfer_location} {exception}' ) # TODO: Add "warning: " prefix once all commands are converted to using # result printer and remove "warning: " prefix from ``create_warning``. WARNING_FORMAT = ( u'{message}' ) ERROR_FORMAT = ( u'fatal error: {exception}' ) CTRL_C_MSG = 'cancelled: ctrl-c received' SRC_DEST_TRANSFER_LOCATION_FORMAT = u'{src} to {dest}' SRC_TRANSFER_LOCATION_FORMAT = u'{src}' def __init__(self, result_recorder, out_file=None, error_file=None): """Prints status of ongoing transfer :type result_recorder: ResultRecorder :param result_recorder: The associated result recorder :type out_file: file-like obj :param out_file: Location to write progress and success statements. By default, the location is sys.stdout. :type error_file: file-like obj :param error_file: Location to write warnings and errors. By default, the location is sys.stderr. 
""" self._result_recorder = result_recorder self._out_file = out_file if self._out_file is None: self._out_file = sys.stdout self._error_file = error_file if self._error_file is None: self._error_file = sys.stderr self._progress_length = 0 self._result_handler_map = { ProgressResult: self._print_progress, SuccessResult: self._print_success, FailureResult: self._print_failure, WarningResult: self._print_warning, ErrorResult: self._print_error, CtrlCResult: self._print_ctrl_c, DryRunResult: self._print_dry_run, FinalTotalSubmissionsResult: self._clear_progress_if_no_more_expected_transfers, } def __call__(self, result): """Print the progress of the ongoing transfer based on a result""" self._result_handler_map.get(type(result), self._print_noop)( result=result) def _print_noop(self, **kwargs): # If the result does not have a handler, then do nothing with it. pass def _print_dry_run(self, result, **kwargs): statement = self.DRY_RUN_FORMAT.format( transfer_type=result.transfer_type, transfer_location=self._get_transfer_location(result) ) statement = self._adjust_statement_padding(statement) self._print_to_out_file(statement) def _print_success(self, result, **kwargs): success_statement = self.SUCCESS_FORMAT.format( transfer_type=result.transfer_type, transfer_location=self._get_transfer_location(result) ) success_statement = self._adjust_statement_padding(success_statement) self._print_to_out_file(success_statement) self._redisplay_progress() def _print_failure(self, result, **kwargs): failure_statement = self.FAILURE_FORMAT.format( transfer_type=result.transfer_type, transfer_location=self._get_transfer_location(result), exception=result.exception ) failure_statement = self._adjust_statement_padding(failure_statement) self._print_to_error_file(failure_statement) self._redisplay_progress() def _print_warning(self, result, **kwargs): warning_statement = self.WARNING_FORMAT.format(message=result.message) warning_statement = self._adjust_statement_padding(warning_statement) self._print_to_error_file(warning_statement) self._redisplay_progress() def _print_error(self, result, **kwargs): self._flush_error_statement( self.ERROR_FORMAT.format(exception=result.exception)) def _print_ctrl_c(self, result, **kwargs): self._flush_error_statement(self.CTRL_C_MSG) def _flush_error_statement(self, error_statement): error_statement = self._adjust_statement_padding(error_statement) self._print_to_error_file(error_statement) def _get_transfer_location(self, result): if result.dest is None: return self.SRC_TRANSFER_LOCATION_FORMAT.format(src=result.src) return self.SRC_DEST_TRANSFER_LOCATION_FORMAT.format( src=result.src, dest=result.dest) def _redisplay_progress(self): # Reset to zero because done statements are printed with new lines # meaning there are no carriage returns to take into account when # printing the next line. self._progress_length = 0 self._add_progress_if_needed() def _add_progress_if_needed(self): if self._has_remaining_progress(): self._print_progress() def _print_progress(self, **kwargs): # Get all of the statistics in the correct form. remaining_files = self._get_expected_total( str(self._result_recorder.expected_files_transferred - self._result_recorder.files_transferred) ) # Create the display statement. 
if self._result_recorder.expected_bytes_transferred > 0: bytes_completed = human_readable_size( self._result_recorder.bytes_transferred + self._result_recorder.bytes_failed_to_transfer ) expected_bytes_completed = self._get_expected_total( human_readable_size( self._result_recorder.expected_bytes_transferred)) transfer_speed = human_readable_size( self._result_recorder.bytes_transfer_speed) + '/s' progress_statement = self.BYTE_PROGRESS_FORMAT.format( bytes_completed=bytes_completed, expected_bytes_completed=expected_bytes_completed, transfer_speed=transfer_speed, remaining_files=remaining_files ) else: # We're not expecting any bytes to be transferred, so we should # only print of information about number of files transferred. progress_statement = self.FILE_PROGRESS_FORMAT.format( files_completed=self._result_recorder.files_transferred, remaining_files=remaining_files ) if not self._result_recorder.expected_totals_are_final(): progress_statement += self._STILL_CALCULATING_TOTALS # Make sure that it overrides any previous progress bar. progress_statement = self._adjust_statement_padding( progress_statement, ending_char='\r') # We do not want to include the carriage return in this calculation # as progress length is used for determining whitespace padding. # So we subtract one off of the length. self._progress_length = len(progress_statement) - 1 # Print the progress out. self._print_to_out_file(progress_statement) def _get_expected_total(self, expected_total): if not self._result_recorder.expected_totals_are_final(): return self._ESTIMATED_EXPECTED_TOTAL.format( expected_total=expected_total) return expected_total def _adjust_statement_padding(self, print_statement, ending_char='\n'): print_statement = print_statement.ljust(self._progress_length, ' ') return print_statement + ending_char def _has_remaining_progress(self): if not self._result_recorder.expected_totals_are_final(): return True actual = self._result_recorder.files_transferred expected = self._result_recorder.expected_files_transferred return actual != expected def _print_to_out_file(self, statement): uni_print(statement, self._out_file) def _print_to_error_file(self, statement): uni_print(statement, self._error_file) def _clear_progress_if_no_more_expected_transfers(self, **kwargs): if self._progress_length and not self._has_remaining_progress(): uni_print(self._adjust_statement_padding(''), self._out_file) class NoProgressResultPrinter(ResultPrinter): """A result printer that doesn't print progress""" def _print_progress(self, **kwargs): pass class OnlyShowErrorsResultPrinter(ResultPrinter): """A result printer that only prints out errors""" def _print_progress(self, **kwargs): pass def _print_success(self, result, **kwargs): pass class ResultProcessor(threading.Thread): def __init__(self, result_queue, result_handlers=None): """Thread to process results from result queue This includes recording statistics and printing transfer status :param result_queue: The result queue to process results from :param result_handlers: A list of callables that take a result in as a parameter to process the result for that handler. 
""" threading.Thread.__init__(self) self._result_queue = result_queue self._result_handlers = result_handlers if self._result_handlers is None: self._result_handlers = [] self._result_handlers_enabled = True def run(self): while True: try: result = self._result_queue.get(True) if isinstance(result, ShutdownThreadRequest): LOGGER.debug( 'Shutdown request received in result processing ' 'thread, shutting down result thread.') break if self._result_handlers_enabled: self._process_result(result) # ErrorResults are fatal to the command. If a fatal error # is seen, we know that the command is trying to shutdown # so disable all of the handlers and quickly consume all # of the results in the result queue in order to get to # the shutdown request to clean up the process. if isinstance(result, ErrorResult): self._result_handlers_enabled = False except queue.Empty: pass def _process_result(self, result): for result_handler in self._result_handlers: try: result_handler(result) except Exception as e: LOGGER.debug( 'Error processing result %s with handler %s: %s', result, result_handler, e, exc_info=True) class CommandResultRecorder(object): def __init__(self, result_queue, result_recorder, result_processor): """Records the result for an entire command It will fully process all results in a result queue and determine a CommandResult representing the entire command. :type result_queue: queue.Queue :param result_queue: The result queue in which results are placed on and processed from :type result_recorder: ResultRecorder :param result_recorder: The result recorder to track the various results sent through the result queue :type result_processor: ResultProcessor :param result_processor: The result processor to process results placed on the queue """ self.result_queue = result_queue self._result_recorder = result_recorder self._result_processor = result_processor def start(self): self._result_processor.start() def shutdown(self): self.result_queue.put(ShutdownThreadRequest()) self._result_processor.join() def get_command_result(self): """Get the CommandResult representing the result of a command :rtype: CommandResult :returns: The CommandResult representing the total result from running a particular command """ return CommandResult( self._result_recorder.files_failed + self._result_recorder.errors, self._result_recorder.files_warned ) def notify_total_submissions(self, total): self.result_queue.put(FinalTotalSubmissionsResult(total)) def __enter__(self): self.start() return self def __exit__(self, exc_type, exc_value, *args): if exc_type: LOGGER.debug('Exception caught during command execution: %s', exc_value, exc_info=True) self.result_queue.put(ErrorResult(exception=exc_value)) self.shutdown() return True self.shutdown() awscli-1.17.14/awscli/customizations/s3/comparator.py0000644000000000000000000001407413620325554022543 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
import logging from awscli.compat import advance_iterator LOG = logging.getLogger(__name__) class Comparator(object): """ This class performs all of the comparisons behind the sync operation """ def __init__(self, file_at_src_and_dest_sync_strategy, file_not_at_dest_sync_strategy, file_not_at_src_sync_strategy): self._sync_strategy = file_at_src_and_dest_sync_strategy self._not_at_dest_sync_strategy = file_not_at_dest_sync_strategy self._not_at_src_sync_strategy = file_not_at_src_sync_strategy def call(self, src_files, dest_files): """ This function performs the actual comparisons. The parameters it takes are the generated files for both the source and the destination. The key concept in this function is that no matter where the files are coming from, they are listed in the same order, least to greatest in collation order. This allows for easy comparisons to determine if a file needs to be added or deleted. Comparison keys are used to determine if two files are the same and each file has a unique comparison key. If they are the same compare the size and last modified times to see if a file needs to be updated. Ultimately, it will yield a sequence of file info objects that will be sent to the ``S3Handler``. :param src_files: The generated FileInfo objects from the source. :param dest_files: The generated FileInfo objects from the dest. :returns: Yields the FileInfo objects of the files that need to be operated on Algorithm: Try to take next from both files. If either is empty, signal the corresponding done flag. If both generated lists are not done compare compare_keys. If equal, compare size and time to see if it needs to be updated. If source compare_key is less than dest compare_key, the file needs to be added to the destination. Take the next source file but not the next destination file. If the source compare_key is greater than dest compare_key, that destination file needs to be deleted from the destination. Take the next dest file but not the source file. If the source list is empty delete the rest of the files in the dest list from the destination. If the dest list is empty add the rest of the files in the source list to the destination. """ # :var src_done: True if there are no more files from the source left. src_done = False # :var dest_done: True if there are no more files from the dest left.
dest_done = False # :var src_take: Take the next source file from the generated files if # true src_take = True # :var dest_take: Take the next dest file from the generated files if # true dest_take = True while True: try: if (not src_done) and src_take: src_file = advance_iterator(src_files) except StopIteration: src_file = None src_done = True try: if (not dest_done) and dest_take: dest_file = advance_iterator(dest_files) except StopIteration: dest_file = None dest_done = True if (not src_done) and (not dest_done): src_take = True dest_take = True compare_keys = self.compare_comp_key(src_file, dest_file) if compare_keys == 'equal': should_sync = self._sync_strategy.determine_should_sync( src_file, dest_file ) if should_sync: yield src_file elif compare_keys == 'less_than': src_take = True dest_take = False should_sync = self._not_at_dest_sync_strategy.determine_should_sync(src_file, None) if should_sync: yield src_file elif compare_keys == 'greater_than': src_take = False dest_take = True should_sync = self._not_at_src_sync_strategy.determine_should_sync(None, dest_file) if should_sync: yield dest_file elif (not src_done) and dest_done: src_take = True should_sync = self._not_at_dest_sync_strategy.determine_should_sync(src_file, None) if should_sync: yield src_file elif src_done and (not dest_done): dest_take = True should_sync = self._not_at_src_sync_strategy.determine_should_sync(None, dest_file) if should_sync: yield dest_file else: break def compare_comp_key(self, src_file, dest_file): """ Determines if the source compare_key is less than, equal to, or greater than the destination compare_key """ src_comp_key = src_file.compare_key dest_comp_key = dest_file.compare_key if (src_comp_key == dest_comp_key): return 'equal' elif (src_comp_key < dest_comp_key): return 'less_than' else: return 'greater_than' awscli-1.17.14/awscli/customizations/s3/subcommands.py0000644000000000000000000015400113620325554022702 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
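# --- Illustrative sketch (not part of the original module) ----------------
# results.py above decouples the transfer workers from bookkeeping and
# printing: workers push result objects onto a queue, and a single
# ResultProcessor thread drains that queue, handing every result to each
# registered handler (recorder, printer, ...).  The stand-alone example
# below mimics that flow with made-up result tuples; `_result_queue_demo`
# and `demo_handler` are hypothetical names used only for illustration.
def _result_queue_demo():
    import threading
    from awscli.compat import queue

    result_queue = queue.Queue()
    seen = []

    def demo_handler(result):
        # A handler is just a callable taking one result, in the spirit of
        # ResultRecorder/ResultPrinter above.
        seen.append(result)

    def drain():
        while True:
            result = result_queue.get()
            if result is None:  # stand-in for ShutdownThreadRequest
                break
            demo_handler(result)

    worker = threading.Thread(target=drain)
    worker.start()
    result_queue.put(('success', 'upload', 'file.txt'))
    result_queue.put(None)
    worker.join()
    return seen
# ---------------------------------------------------------------------------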
import os import logging import sys from botocore.client import Config from dateutil.parser import parse from dateutil.tz import tzlocal from awscli.compat import six from awscli.compat import queue from awscli.customizations.commands import BasicCommand from awscli.customizations.s3.comparator import Comparator from awscli.customizations.s3.fileinfobuilder import FileInfoBuilder from awscli.customizations.s3.fileformat import FileFormat from awscli.customizations.s3.filegenerator import FileGenerator from awscli.customizations.s3.fileinfo import FileInfo from awscli.customizations.s3.filters import create_filter from awscli.customizations.s3.s3handler import S3TransferHandlerFactory from awscli.customizations.s3.utils import find_bucket_key, AppendFilter, \ find_dest_path_comp_key, human_readable_size, \ RequestParamsMapper, split_s3_bucket_key from awscli.customizations.utils import uni_print from awscli.customizations.s3.syncstrategy.base import MissingFileSync, \ SizeAndLastModifiedSync, NeverSync from awscli.customizations.s3 import transferconfig LOGGER = logging.getLogger(__name__) RECURSIVE = {'name': 'recursive', 'action': 'store_true', 'dest': 'dir_op', 'help_text': ( "Command is performed on all files or objects " "under the specified directory or prefix.")} HUMAN_READABLE = {'name': 'human-readable', 'action': 'store_true', 'help_text': "Displays file sizes in human readable format."} SUMMARIZE = {'name': 'summarize', 'action': 'store_true', 'help_text': ( "Displays summary information " "(number of objects, total size).")} DRYRUN = {'name': 'dryrun', 'action': 'store_true', 'help_text': ( "Displays the operations that would be performed using the " "specified command without actually running them.")} QUIET = {'name': 'quiet', 'action': 'store_true', 'help_text': ( "Does not display the operations performed from the specified " "command.")} FORCE = {'name': 'force', 'action': 'store_true', 'help_text': ( "Deletes all objects in the bucket including the bucket itself. " "Note that versioned objects will not be deleted in this " "process which would cause the bucket deletion to fail because " "the bucket would not be empty. To delete versioned " "objects use the ``s3api delete-object`` command with " "the ``--version-id`` parameter.")} FOLLOW_SYMLINKS = {'name': 'follow-symlinks', 'action': 'store_true', 'default': True, 'group_name': 'follow_symlinks', 'help_text': ( "Symbolic links are followed " "only when uploading to S3 from the local filesystem. " "Note that S3 does not support symbolic links, so the " "contents of the link target are uploaded under the " "name of the link. When neither ``--follow-symlinks`` " "nor ``--no-follow-symlinks`` is specified, the default " "is to follow symlinks.")} NO_FOLLOW_SYMLINKS = {'name': 'no-follow-symlinks', 'action': 'store_false', 'dest': 'follow_symlinks', 'default': True, 'group_name': 'follow_symlinks'} NO_GUESS_MIME_TYPE = {'name': 'no-guess-mime-type', 'action': 'store_false', 'dest': 'guess_mime_type', 'default': True, 'help_text': ( "Do not try to guess the mime type for " "uploaded files. By default the mime type of a " "file is guessed when it is uploaded.")} CONTENT_TYPE = {'name': 'content-type', 'help_text': ( "Specify an explicit content type for this operation. 
" "This value overrides any guessed mime types.")} EXCLUDE = {'name': 'exclude', 'action': AppendFilter, 'nargs': 1, 'dest': 'filters', 'help_text': ( "Exclude all files or objects from the command that matches " "the specified pattern.")} INCLUDE = {'name': 'include', 'action': AppendFilter, 'nargs': 1, 'dest': 'filters', 'help_text': ( "Don't exclude files or objects " "in the command that match the specified pattern. " 'See Use of ' 'Exclude and Include Filters for details.')} ACL = {'name': 'acl', 'choices': ['private', 'public-read', 'public-read-write', 'authenticated-read', 'aws-exec-read', 'bucket-owner-read', 'bucket-owner-full-control', 'log-delivery-write'], 'help_text': ( "Sets the ACL for the object when the command is " "performed. If you use this parameter you must have the " '"s3:PutObjectAcl" permission included in the list of actions ' "for your IAM policy. " "Only accepts values of ``private``, ``public-read``, " "``public-read-write``, ``authenticated-read``, ``aws-exec-read``, " "``bucket-owner-read``, ``bucket-owner-full-control`` and " "``log-delivery-write``. " 'See Canned ACL for details')} GRANTS = { 'name': 'grants', 'nargs': '+', 'help_text': ( '

Grant specific permissions to individual users or groups. You '
'can supply a list of grants of the form: --grants '
'Permission=Grantee_Type=Grantee_ID [Permission=Grantee_Type='
'Grantee_ID ...] To specify the same permission type '
'for multiple grantees, specify the permission as such as --grants '
'Permission=Grantee_Type=Grantee_ID,Grantee_Type=Grantee_ID,... '
'Each value contains the following elements: '
'Permission - Specifies the granted permissions, and can be set '
'to read, readacl, writeacl, or full. '
'Grantee_Type - Specifies how the grantee is to be identified, '
'and can be set to uri or id. '
'Grantee_ID - Specifies the grantee based on Grantee_Type. The '
'Grantee_ID value can be one of: uri - The group\'s URI (for more '
'information, see Who Is a Grantee?), or id - The account\'s '
'canonical ID.
' 'For more information on Amazon S3 access control, see ' 'Access Control')} SSE = { 'name': 'sse', 'nargs': '?', 'const': 'AES256', 'choices': ['AES256', 'aws:kms'], 'help_text': ( 'Specifies server-side encryption of the object in S3. ' 'Valid values are ``AES256`` and ``aws:kms``. If the parameter is ' 'specified but no value is provided, ``AES256`` is used.' ) } SSE_C = { 'name': 'sse-c', 'nargs': '?', 'const': 'AES256', 'choices': ['AES256'], 'help_text': ( 'Specifies server-side encryption using customer provided keys ' 'of the the object in S3. ``AES256`` is the only valid value. ' 'If the parameter is specified but no value is provided, ' '``AES256`` is used. If you provide this value, ``--sse-c-key`` ' 'must be specified as well.' ) } SSE_C_KEY = { 'name': 'sse-c-key', 'cli_type_name': 'blob', 'help_text': ( 'The customer-provided encryption key to use to server-side ' 'encrypt the object in S3. If you provide this value, ' '``--sse-c`` must be specified as well. The key provided should ' '**not** be base64 encoded.' ) } SSE_KMS_KEY_ID = { 'name': 'sse-kms-key-id', 'help_text': ( 'The customer-managed AWS Key Management Service (KMS) key ID that ' 'should be used to server-side encrypt the object in S3. You should ' 'only provide this parameter if you are using a customer managed ' 'customer master key (CMK) and not the AWS managed KMS CMK.' ) } SSE_C_COPY_SOURCE = { 'name': 'sse-c-copy-source', 'nargs': '?', 'const': 'AES256', 'choices': ['AES256'], 'help_text': ( 'This parameter should only be specified when copying an S3 object ' 'that was encrypted server-side with a customer-provided ' 'key. It specifies the algorithm to use when decrypting the source ' 'object. ``AES256`` is the only valid ' 'value. If the parameter is specified but no value is provided, ' '``AES256`` is used. If you provide this value, ' '``--sse-c-copy-source-key`` must be specified as well. ' ) } SSE_C_COPY_SOURCE_KEY = { 'name': 'sse-c-copy-source-key', 'cli_type_name': 'blob', 'help_text': ( 'This parameter should only be specified when copying an S3 object ' 'that was encrypted server-side with a customer-provided ' 'key. Specifies the customer-provided encryption key for Amazon S3 ' 'to use to decrypt the source object. The encryption key provided ' 'must be one that was used when the source object was created. ' 'If you provide this value, ``--sse-c-copy-source`` be specified as ' 'well. The key provided should **not** be base64 encoded.' ) } STORAGE_CLASS = {'name': 'storage-class', 'choices': ['STANDARD', 'REDUCED_REDUNDANCY', 'STANDARD_IA', 'ONEZONE_IA', 'INTELLIGENT_TIERING', 'GLACIER', 'DEEP_ARCHIVE'], 'help_text': ( "The type of storage to use for the object. " "Valid choices are: STANDARD | REDUCED_REDUNDANCY " "| STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING " "| GLACIER | DEEP_ARCHIVE. " "Defaults to 'STANDARD'")} WEBSITE_REDIRECT = {'name': 'website-redirect', 'help_text': ( "If the bucket is configured as a website, " "redirects requests for this object to another object " "in the same bucket or to an external URL. 
Amazon S3 " "stores the value of this header in the object " "metadata.")} CACHE_CONTROL = {'name': 'cache-control', 'help_text': ( "Specifies caching behavior along the " "request/reply chain.")} CONTENT_DISPOSITION = {'name': 'content-disposition', 'help_text': ( "Specifies presentational information " "for the object.")} CONTENT_ENCODING = {'name': 'content-encoding', 'help_text': ( "Specifies what content encodings have been " "applied to the object and thus what decoding " "mechanisms must be applied to obtain the media-type " "referenced by the Content-Type header field.")} CONTENT_LANGUAGE = {'name': 'content-language', 'help_text': ("The language the content is in.")} SOURCE_REGION = {'name': 'source-region', 'help_text': ( "When transferring objects from an s3 bucket to an s3 " "bucket, this specifies the region of the source bucket." " Note the region specified by ``--region`` or through " "configuration of the CLI refers to the region of the " "destination bucket. If ``--source-region`` is not " "specified the region of the source will be the same " "as the region of the destination bucket.")} EXPIRES = { 'name': 'expires', 'help_text': ( "The date and time at which the object is no longer cacheable.") } METADATA = { 'name': 'metadata', 'cli_type_name': 'map', 'schema': { 'type': 'map', 'key': {'type': 'string'}, 'value': {'type': 'string'} }, 'help_text': ( "A map of metadata to store with the objects in S3. This will be " "applied to every object which is part of this request. In a sync, this " "means that files which haven't changed won't receive the new metadata. " "When copying between two s3 locations, the metadata-directive " "argument will default to 'REPLACE' unless otherwise specified." ) } METADATA_DIRECTIVE = { 'name': 'metadata-directive', 'choices': ['COPY', 'REPLACE'], 'help_text': ( 'Specifies whether the metadata is copied from the source object ' 'or replaced with metadata provided when copying S3 objects. ' 'Note that if the object is copied over in parts, the source ' 'object\'s metadata will not be copied over, no matter the value for ' '``--metadata-directive``, and instead the desired metadata values ' 'must be specified as parameters on the command line. ' 'Valid values are ``COPY`` and ``REPLACE``. If this parameter is not ' 'specified, ``COPY`` will be used by default. If ``REPLACE`` is used, ' 'the copied object will only have the metadata values that were' ' specified by the CLI command. Note that if you are ' 'using any of the following parameters: ``--content-type``, ' '``content-language``, ``--content-encoding``, ' '``--content-disposition``, ``--cache-control``, or ``--expires``, you ' 'will need to specify ``--metadata-directive REPLACE`` for ' 'non-multipart copies if you want the copied objects to have the ' 'specified metadata values.') } INDEX_DOCUMENT = {'name': 'index-document', 'help_text': ( 'A suffix that is appended to a request that is for ' 'a directory on the website endpoint (e.g. if the ' 'suffix is index.html and you make a request to ' 'samplebucket/images/ the data that is returned ' 'will be for the object with the key name ' 'images/index.html) The suffix must not be empty and ' 'must not include a slash character.')} ERROR_DOCUMENT = {'name': 'error-document', 'help_text': ( 'The object key name to use when ' 'a 4XX class error occurs.')} ONLY_SHOW_ERRORS = {'name': 'only-show-errors', 'action': 'store_true', 'help_text': ( 'Only errors and warnings are displayed. 
All other ' 'output is suppressed.')} NO_PROGRESS = {'name': 'no-progress', 'action': 'store_false', 'dest': 'progress', 'help_text': ( 'File transfer progress is not displayed. This flag ' 'is only applied when the quiet and only-show-errors ' 'flags are not provided.')} EXPECTED_SIZE = {'name': 'expected-size', 'help_text': ( 'This argument specifies the expected size of a stream ' 'in terms of bytes. Note that this argument is needed ' 'only when a stream is being uploaded to s3 and the size ' 'is larger than 50GB. Failure to include this argument ' 'under these conditions may result in a failed upload ' 'due to too many parts in upload.')} PAGE_SIZE = {'name': 'page-size', 'cli_type_name': 'integer', 'help_text': ( 'The number of results to return in each response to a list ' 'operation. The default value is 1000 (the maximum allowed). ' 'Using a lower value may help if an operation times out.')} IGNORE_GLACIER_WARNINGS = { 'name': 'ignore-glacier-warnings', 'action': 'store_true', 'help_text': ( 'Turns off glacier warnings. Warnings about an operation that cannot ' 'be performed because it involves copying, downloading, or moving ' 'a glacier object will no longer be printed to standard error and ' 'will no longer cause the return code of the command to be ``2``.' ) } FORCE_GLACIER_TRANSFER = { 'name': 'force-glacier-transfer', 'action': 'store_true', 'help_text': ( 'Forces a transfer request on all Glacier objects in a sync or ' 'recursive copy.' ) } REQUEST_PAYER = { 'name': 'request-payer', 'choices': ['requester'], 'nargs': '?', 'const': 'requester', 'help_text': ( 'Confirms that the requester knows that she or he will be charged ' 'for the request. Bucket owners need not specify this parameter in ' 'their requests. Documentation on downloading objects from requester ' 'pays buckets can be found at ' 'http://docs.aws.amazon.com/AmazonS3/latest/dev/' 'ObjectsinRequesterPaysBuckets.html' ) } TRANSFER_ARGS = [DRYRUN, QUIET, INCLUDE, EXCLUDE, ACL, FOLLOW_SYMLINKS, NO_FOLLOW_SYMLINKS, NO_GUESS_MIME_TYPE, SSE, SSE_C, SSE_C_KEY, SSE_KMS_KEY_ID, SSE_C_COPY_SOURCE, SSE_C_COPY_SOURCE_KEY, STORAGE_CLASS, GRANTS, WEBSITE_REDIRECT, CONTENT_TYPE, CACHE_CONTROL, CONTENT_DISPOSITION, CONTENT_ENCODING, CONTENT_LANGUAGE, EXPIRES, SOURCE_REGION, ONLY_SHOW_ERRORS, NO_PROGRESS, PAGE_SIZE, IGNORE_GLACIER_WARNINGS, FORCE_GLACIER_TRANSFER, REQUEST_PAYER] def get_client(session, region, endpoint_url, verify, config=None): return session.create_client('s3', region_name=region, endpoint_url=endpoint_url, verify=verify, config=config) class S3Command(BasicCommand): def _run_main(self, parsed_args, parsed_globals): self.client = get_client(self._session, parsed_globals.region, parsed_globals.endpoint_url, parsed_globals.verify_ssl) class ListCommand(S3Command): NAME = 'ls' DESCRIPTION = ("List S3 objects and common prefixes under a prefix or " "all S3 buckets. 
Note that the --output and --no-paginate " "arguments are ignored for this command.") USAGE = " or NONE" ARG_TABLE = [{'name': 'paths', 'nargs': '?', 'default': 's3://', 'positional_arg': True, 'synopsis': USAGE}, RECURSIVE, PAGE_SIZE, HUMAN_READABLE, SUMMARIZE, REQUEST_PAYER] def _run_main(self, parsed_args, parsed_globals): super(ListCommand, self)._run_main(parsed_args, parsed_globals) self._empty_result = False self._at_first_page = True self._size_accumulator = 0 self._total_objects = 0 self._human_readable = parsed_args.human_readable path = parsed_args.paths if path.startswith('s3://'): path = path[5:] bucket, key = find_bucket_key(path) if not bucket: self._list_all_buckets() elif parsed_args.dir_op: # Then --recursive was specified. self._list_all_objects_recursive( bucket, key, parsed_args.page_size, parsed_args.request_payer) else: self._list_all_objects( bucket, key, parsed_args.page_size, parsed_args.request_payer) if parsed_args.summarize: self._print_summary() if key: # User specified a key to look for. We should return an rc of one # if there are no matching keys and/or prefixes or return an rc # of zero if there are matching keys or prefixes. return self._check_no_objects() else: # This covers the case when user is trying to list all of of # the buckets or is trying to list the objects of a bucket # (without specifying a key). For both situations, a rc of 0 # should be returned because applicable errors are supplied by # the server (i.e. bucket not existing). These errors will be # thrown before reaching the automatic return of rc of zero. return 0 def _list_all_objects(self, bucket, key, page_size=None, request_payer=None): paginator = self.client.get_paginator('list_objects_v2') paging_args = { 'Bucket': bucket, 'Prefix': key, 'Delimiter': '/', 'PaginationConfig': {'PageSize': page_size} } if request_payer is not None: paging_args['RequestPayer'] = request_payer iterator = paginator.paginate(**paging_args) for response_data in iterator: self._display_page(response_data) def _display_page(self, response_data, use_basename=True): common_prefixes = response_data.get('CommonPrefixes', []) contents = response_data.get('Contents', []) if not contents and not common_prefixes: self._empty_result = True return for common_prefix in common_prefixes: prefix_components = common_prefix['Prefix'].split('/') prefix = prefix_components[-2] pre_string = "PRE".rjust(30, " ") print_str = pre_string + ' ' + prefix + '/\n' uni_print(print_str) for content in contents: last_mod_str = self._make_last_mod_str(content['LastModified']) self._size_accumulator += int(content['Size']) self._total_objects += 1 size_str = self._make_size_str(content['Size']) if use_basename: filename_components = content['Key'].split('/') filename = filename_components[-1] else: filename = content['Key'] print_str = last_mod_str + ' ' + size_str + ' ' + \ filename + '\n' uni_print(print_str) self._at_first_page = False def _list_all_buckets(self): response_data = self.client.list_buckets() buckets = response_data['Buckets'] for bucket in buckets: last_mod_str = self._make_last_mod_str(bucket['CreationDate']) print_str = last_mod_str + ' ' + bucket['Name'] + '\n' uni_print(print_str) def _list_all_objects_recursive(self, bucket, key, page_size=None, request_payer=None): paginator = self.client.get_paginator('list_objects_v2') paging_args = { 'Bucket': bucket, 'Prefix': key, 'PaginationConfig': {'PageSize': page_size} } if request_payer is not None: paging_args['RequestPayer'] = request_payer iterator = 
paginator.paginate(**paging_args) for response_data in iterator: self._display_page(response_data, use_basename=False) def _check_no_objects(self): if self._empty_result and self._at_first_page: # Nothing was returned in the first page of results when listing # the objects. return 1 return 0 def _make_last_mod_str(self, last_mod): """ This function creates the last modified time string whenever objects or buckets are being listed """ last_mod = parse(last_mod) last_mod = last_mod.astimezone(tzlocal()) last_mod_tup = (str(last_mod.year), str(last_mod.month).zfill(2), str(last_mod.day).zfill(2), str(last_mod.hour).zfill(2), str(last_mod.minute).zfill(2), str(last_mod.second).zfill(2)) last_mod_str = "%s-%s-%s %s:%s:%s" % last_mod_tup return last_mod_str.ljust(19, ' ') def _make_size_str(self, size): """ This function creates the size string when objects are being listed. """ if self._human_readable: size_str = human_readable_size(size) else: size_str = str(size) return size_str.rjust(10, ' ') def _print_summary(self): """ This function prints a summary of total objects and total bytes """ print_str = str(self._total_objects) uni_print("\nTotal Objects: ".rjust(15, ' ') + print_str + "\n") if self._human_readable: print_str = human_readable_size(self._size_accumulator) else: print_str = str(self._size_accumulator) uni_print("Total Size: ".rjust(15, ' ') + print_str + "\n") class WebsiteCommand(S3Command): NAME = 'website' DESCRIPTION = 'Set the website configuration for a bucket.' USAGE = '' ARG_TABLE = [{'name': 'paths', 'nargs': 1, 'positional_arg': True, 'synopsis': USAGE}, INDEX_DOCUMENT, ERROR_DOCUMENT] def _run_main(self, parsed_args, parsed_globals): super(WebsiteCommand, self)._run_main(parsed_args, parsed_globals) bucket = self._get_bucket_name(parsed_args.paths[0]) website_configuration = self._build_website_configuration(parsed_args) self.client.put_bucket_website( Bucket=bucket, WebsiteConfiguration=website_configuration) return 0 def _build_website_configuration(self, parsed_args): website_config = {} if parsed_args.index_document is not None: website_config['IndexDocument'] = \ {'Suffix': parsed_args.index_document} if parsed_args.error_document is not None: website_config['ErrorDocument'] = \ {'Key': parsed_args.error_document} return website_config def _get_bucket_name(self, path): # We support either: # s3://bucketname # bucketname # # We also strip off the trailing slash if a user # accidently appends a slash. if path.startswith('s3://'): path = path[5:] if path.endswith('/'): path = path[:-1] return path class PresignCommand(S3Command): NAME = 'presign' DESCRIPTION = ( "Generate a pre-signed URL for an Amazon S3 object. This allows " "anyone who receives the pre-signed URL to retrieve the S3 object " "with an HTTP GET request. For sigv4 requests the region needs to be " "configured explicitly." ) USAGE = "" ARG_TABLE = [{'name': 'path', 'positional_arg': True, 'synopsis': USAGE}, {'name': 'expires-in', 'default': 3600, 'cli_type_name': 'integer', 'help_text': ( 'Number of seconds until the pre-signed ' 'URL expires. 
Default is 3600 seconds.')}] def _run_main(self, parsed_args, parsed_globals): super(PresignCommand, self)._run_main(parsed_args, parsed_globals) path = parsed_args.path if path.startswith('s3://'): path = path[5:] bucket, key = find_bucket_key(path) url = self.client.generate_presigned_url( 'get_object', {'Bucket': bucket, 'Key': key}, ExpiresIn=parsed_args.expires_in ) uni_print(url) uni_print('\n') return 0 class S3TransferCommand(S3Command): def _run_main(self, parsed_args, parsed_globals): super(S3TransferCommand, self)._run_main(parsed_args, parsed_globals) self._convert_path_args(parsed_args) params = self._build_call_parameters(parsed_args, {}) cmd_params = CommandParameters(self.NAME, params, self.USAGE) cmd_params.add_region(parsed_globals) cmd_params.add_endpoint_url(parsed_globals) cmd_params.add_verify_ssl(parsed_globals) cmd_params.add_page_size(parsed_args) cmd_params.add_paths(parsed_args.paths) runtime_config = transferconfig.RuntimeConfig().build_config( **self._session.get_scoped_config().get('s3', {})) cmd = CommandArchitecture(self._session, self.NAME, cmd_params.parameters, runtime_config) cmd.set_clients() cmd.create_instructions() return cmd.run() def _build_call_parameters(self, args, command_params): """ This takes all of the commands in the name space and puts them into a dictionary """ for name, value in vars(args).items(): command_params[name] = value return command_params def _convert_path_args(self, parsed_args): if not isinstance(parsed_args.paths, list): parsed_args.paths = [parsed_args.paths] for i in range(len(parsed_args.paths)): path = parsed_args.paths[i] if isinstance(path, six.binary_type): dec_path = path.decode(sys.getfilesystemencoding()) enc_path = dec_path.encode('utf-8') new_path = enc_path.decode('utf-8') parsed_args.paths[i] = new_path class CpCommand(S3TransferCommand): NAME = 'cp' DESCRIPTION = "Copies a local file or S3 object to another location " \ "locally or in S3." USAGE = " or " \ "or " ARG_TABLE = [{'name': 'paths', 'nargs': 2, 'positional_arg': True, 'synopsis': USAGE}] + TRANSFER_ARGS + \ [METADATA, METADATA_DIRECTIVE, EXPECTED_SIZE, RECURSIVE] class MvCommand(S3TransferCommand): NAME = 'mv' DESCRIPTION = "Moves a local file or S3 object to " \ "another location locally or in S3." USAGE = " or " \ "or " ARG_TABLE = [{'name': 'paths', 'nargs': 2, 'positional_arg': True, 'synopsis': USAGE}] + TRANSFER_ARGS +\ [METADATA, METADATA_DIRECTIVE, RECURSIVE] class RmCommand(S3TransferCommand): NAME = 'rm' DESCRIPTION = "Deletes an S3 object." USAGE = "" ARG_TABLE = [{'name': 'paths', 'nargs': 1, 'positional_arg': True, 'synopsis': USAGE}, DRYRUN, QUIET, RECURSIVE, REQUEST_PAYER, INCLUDE, EXCLUDE, ONLY_SHOW_ERRORS, PAGE_SIZE] class SyncCommand(S3TransferCommand): NAME = 'sync' DESCRIPTION = "Syncs directories and S3 prefixes. Recursively copies " \ "new and updated files from the source directory to " \ "the destination. Only creates folders in the destination " \ "if they contain one or more files." USAGE = " or " \ " or " ARG_TABLE = [{'name': 'paths', 'nargs': 2, 'positional_arg': True, 'synopsis': USAGE}] + TRANSFER_ARGS + \ [METADATA, METADATA_DIRECTIVE] class MbCommand(S3Command): NAME = 'mb' DESCRIPTION = "Creates an S3 bucket." 
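    # Illustrative note (not part of the original source): for a region
    # other than us-east-1, _run_main below adds a LocationConstraint, e.g.
    #   aws s3 mb s3://my-example-bucket --region us-west-2
    #   -> client.create_bucket(
    #          Bucket='my-example-bucket',
    #          CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})
    # "my-example-bucket" is a hypothetical name used only for illustration.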
USAGE = "" ARG_TABLE = [{'name': 'path', 'positional_arg': True, 'synopsis': USAGE}] def _run_main(self, parsed_args, parsed_globals): super(MbCommand, self)._run_main(parsed_args, parsed_globals) if not parsed_args.path.startswith('s3://'): raise TypeError("%s\nError: Invalid argument type" % self.USAGE) bucket, _ = split_s3_bucket_key(parsed_args.path) bucket_config = {'LocationConstraint': self.client.meta.region_name} params = {'Bucket': bucket} if self.client.meta.region_name != 'us-east-1': params['CreateBucketConfiguration'] = bucket_config # TODO: Consolidate how we handle return codes and errors try: self.client.create_bucket(**params) uni_print("make_bucket: %s\n" % bucket) return 0 except Exception as e: uni_print( "make_bucket failed: %s %s\n" % (parsed_args.path, e), sys.stderr ) return 1 class RbCommand(S3Command): NAME = 'rb' DESCRIPTION = ( "Deletes an empty S3 bucket. A bucket must be completely empty " "of objects and versioned objects before it can be deleted. " "However, the ``--force`` parameter can be used to delete " "the non-versioned objects in the bucket before the bucket is " "deleted." ) USAGE = "" ARG_TABLE = [{'name': 'path', 'positional_arg': True, 'synopsis': USAGE}, FORCE] def _run_main(self, parsed_args, parsed_globals): super(RbCommand, self)._run_main(parsed_args, parsed_globals) if not parsed_args.path.startswith('s3://'): raise TypeError("%s\nError: Invalid argument type" % self.USAGE) bucket, key = split_s3_bucket_key(parsed_args.path) if key: raise ValueError('Please specify a valid bucket name only.' ' E.g. s3://%s' % bucket) if parsed_args.force: self._force(parsed_args.path, parsed_globals) try: self.client.delete_bucket(Bucket=bucket) uni_print("remove_bucket: %s\n" % bucket) return 0 except Exception as e: uni_print( "remove_bucket failed: %s %s\n" % (parsed_args.path, e), sys.stderr ) return 1 def _force(self, path, parsed_globals): """Calls rm --recursive on the given path.""" rm = RmCommand(self._session) rc = rm([path, '--recursive'], parsed_globals) if rc != 0: raise RuntimeError( "remove_bucket failed: Unable to delete all objects in the " "bucket, bucket will not be deleted.") class CommandArchitecture(object): """ This class drives the actual command. A command is performed in two steps. First a list of instructions is generated. This list of instructions identifies which type of components are required based on the name of the command and the parameters passed to the command line. After the instructions are generated the second step involves using the list of instructions to wire together an assortment of generators to perform the command. 
""" def __init__(self, session, cmd, parameters, runtime_config=None): self.session = session self.cmd = cmd self.parameters = parameters self.instructions = [] self._runtime_config = runtime_config self._endpoint = None self._source_endpoint = None self._client = None self._source_client = None def set_clients(self): client_config = None if self.parameters.get('sse') == 'aws:kms': client_config = Config(signature_version='s3v4') self._client = get_client( self.session, region=self.parameters['region'], endpoint_url=self.parameters['endpoint_url'], verify=self.parameters['verify_ssl'], config=client_config ) self._source_client = get_client( self.session, region=self.parameters['region'], endpoint_url=self.parameters['endpoint_url'], verify=self.parameters['verify_ssl'], config=client_config ) if self.parameters['source_region']: if self.parameters['paths_type'] == 's3s3': self._source_client = get_client( self.session, region=self.parameters['source_region'], endpoint_url=None, verify=self.parameters['verify_ssl'], config=client_config ) def create_instructions(self): """ This function creates the instructions based on the command name and extra parameters. Note that all commands must have an s3_handler instruction in the instructions and must be at the end of the instruction list because it sends the request to S3 and does not yield anything. """ if self.needs_filegenerator(): self.instructions.append('file_generator') if self.parameters.get('filters'): self.instructions.append('filters') if self.cmd == 'sync': self.instructions.append('comparator') self.instructions.append('file_info_builder') self.instructions.append('s3_handler') def needs_filegenerator(self): return not self.parameters['is_stream'] def choose_sync_strategies(self): """Determines the sync strategy for the command. It defaults to the default sync strategies but a customizable sync strategy can override the default strategy if it returns the instance of its self when the event is emitted. """ sync_strategies = {} # Set the default strategies. sync_strategies['file_at_src_and_dest_sync_strategy'] = \ SizeAndLastModifiedSync() sync_strategies['file_not_at_dest_sync_strategy'] = MissingFileSync() sync_strategies['file_not_at_src_sync_strategy'] = NeverSync() # Determine what strategies to override if any. responses = self.session.emit( 'choosing-s3-sync-strategy', params=self.parameters) if responses is not None: for response in responses: override_sync_strategy = response[1] if override_sync_strategy is not None: sync_type = override_sync_strategy.sync_type sync_type += '_sync_strategy' sync_strategies[sync_type] = override_sync_strategy return sync_strategies def run(self): """ This function wires together all of the generators and completes the command. First a dictionary is created that is indexed first by the command name. Then using the instruction, another dictionary can be indexed to obtain the objects corresponding to the particular instruction for that command. To begin the wiring, either a ``FileFormat`` or ``TaskInfo`` object, depending on the command, is put into a list. Then the function enters a while loop that pops off an instruction. It then determines the object needed and calls the call function of the object using the list as the input. Depending on the number of objects in the input list and the number of components in the list corresponding to the instruction, the call method of the component can be called two different ways. 
If the number of inputs is equal to the number of components a 1:1 mapping of inputs to components is used when calling the call function. If the there are more inputs than components, then a 2:1 mapping of inputs to components is used where the component call method takes two inputs instead of one. Whatever files are yielded from the call function is appended to a list and used as the input for the next repetition of the while loop until there are no more instructions. """ src = self.parameters['src'] dest = self.parameters['dest'] paths_type = self.parameters['paths_type'] files = FileFormat().format(src, dest, self.parameters) rev_files = FileFormat().format(dest, src, self.parameters) cmd_translation = { 'locals3': 'upload', 's3s3': 'copy', 's3local': 'download', 's3': 'delete' } result_queue = queue.Queue() operation_name = cmd_translation[paths_type] fgen_kwargs = { 'client': self._source_client, 'operation_name': operation_name, 'follow_symlinks': self.parameters['follow_symlinks'], 'page_size': self.parameters['page_size'], 'result_queue': result_queue, } rgen_kwargs = { 'client': self._client, 'operation_name': '', 'follow_symlinks': self.parameters['follow_symlinks'], 'page_size': self.parameters['page_size'], 'result_queue': result_queue, } fgen_request_parameters = \ self._get_file_generator_request_parameters_skeleton() self._map_request_payer_params(fgen_request_parameters) self._map_sse_c_params(fgen_request_parameters, paths_type) fgen_kwargs['request_parameters'] = fgen_request_parameters rgen_request_parameters = \ self._get_file_generator_request_parameters_skeleton() self._map_request_payer_params(rgen_request_parameters) rgen_kwargs['request_parameters'] = rgen_request_parameters file_generator = FileGenerator(**fgen_kwargs) rev_generator = FileGenerator(**rgen_kwargs) stream_dest_path, stream_compare_key = find_dest_path_comp_key(files) stream_file_info = [FileInfo(src=files['src']['path'], dest=stream_dest_path, compare_key=stream_compare_key, src_type=files['src']['type'], dest_type=files['dest']['type'], operation_name=operation_name, client=self._client, is_stream=True)] file_info_builder = FileInfoBuilder( self._client, self._source_client, self.parameters) s3_transfer_handler = S3TransferHandlerFactory( self.parameters, self._runtime_config)( self._client, result_queue) sync_strategies = self.choose_sync_strategies() command_dict = {} if self.cmd == 'sync': command_dict = {'setup': [files, rev_files], 'file_generator': [file_generator, rev_generator], 'filters': [create_filter(self.parameters), create_filter(self.parameters)], 'comparator': [Comparator(**sync_strategies)], 'file_info_builder': [file_info_builder], 's3_handler': [s3_transfer_handler]} elif self.cmd == 'cp' and self.parameters['is_stream']: command_dict = {'setup': [stream_file_info], 's3_handler': [s3_transfer_handler]} elif self.cmd == 'cp': command_dict = {'setup': [files], 'file_generator': [file_generator], 'filters': [create_filter(self.parameters)], 'file_info_builder': [file_info_builder], 's3_handler': [s3_transfer_handler]} elif self.cmd == 'rm': command_dict = {'setup': [files], 'file_generator': [file_generator], 'filters': [create_filter(self.parameters)], 'file_info_builder': [file_info_builder], 's3_handler': [s3_transfer_handler]} elif self.cmd == 'mv': command_dict = {'setup': [files], 'file_generator': [file_generator], 'filters': [create_filter(self.parameters)], 'file_info_builder': [file_info_builder], 's3_handler': [s3_transfer_handler]} files = command_dict['setup'] while 
self.instructions: instruction = self.instructions.pop(0) file_list = [] components = command_dict[instruction] for i in range(len(components)): if len(files) > len(components): file_list.append(components[i].call(*files)) else: file_list.append(components[i].call(files[i])) files = file_list # This is kinda quirky, but each call through the instructions # will replaces the files attr with the return value of the # file_list. The very last call is a single list of # [s3_handler], and the s3_handler returns the number of # tasks failed and the number of tasks warned. # This means that files[0] now contains a namedtuple with # the number of failed tasks and the number of warned tasks. # In terms of the RC, we're keeping it simple and saying # that > 0 failed tasks will give a 1 RC and > 0 warned # tasks will give a 2 RC. Otherwise a RC of zero is returned. rc = 0 if files[0].num_tasks_failed > 0: rc = 1 elif files[0].num_tasks_warned > 0: rc = 2 return rc def _get_file_generator_request_parameters_skeleton(self): return { 'HeadObject': {}, 'ListObjects': {}, 'ListObjectsV2': {} } def _map_request_payer_params(self, request_parameters): RequestParamsMapper.map_head_object_params( request_parameters['HeadObject'], { 'request_payer': self.parameters.get('request_payer') } ) RequestParamsMapper.map_list_objects_v2_params( request_parameters['ListObjectsV2'], { 'request_payer': self.parameters.get('request_payer') } ) def _map_sse_c_params(self, request_parameters, paths_type): # SSE-C may be neaded for HeadObject for copies/downloads/deletes # If the operation is s3 to s3, the FileGenerator should use the # copy source key and algorithm. Otherwise, use the regular # SSE-C key and algorithm. Note the reverse FileGenerator does # not need any of these because it is used only for sync operations # which only use ListObjects which does not require HeadObject. RequestParamsMapper.map_head_object_params( request_parameters['HeadObject'], self.parameters) if paths_type == 's3s3': RequestParamsMapper.map_head_object_params( request_parameters['HeadObject'], { 'sse_c': self.parameters.get('sse_c_copy_source'), 'sse_c_key': self.parameters.get('sse_c_copy_source_key') } ) class CommandParameters(object): """ This class is used to do some initial error based on the parameters and arguments passed to the command line. """ def __init__(self, cmd, parameters, usage): """ Stores command name and parameters. Ensures that the ``dir_op`` flag is true if a certain command is being used. :param cmd: The name of the command, e.g. "rm". :param parameters: A dictionary of parameters. :param usage: A usage string """ self.cmd = cmd self.parameters = parameters self.usage = usage if 'dir_op' not in parameters: self.parameters['dir_op'] = False if 'follow_symlinks' not in parameters: self.parameters['follow_symlinks'] = True if 'source_region' not in parameters: self.parameters['source_region'] = None if self.cmd in ['sync', 'mb', 'rb']: self.parameters['dir_op'] = True if self.cmd == 'mv': self.parameters['is_move'] = True else: self.parameters['is_move'] = False def add_paths(self, paths): """ Reformats the parameters dictionary by including a key and value for the source and the destination. If a destination is not used the destination is the same as the source to ensure the destination always have some value. 
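For example, ``aws s3 rm s3://bucket/key`` supplies a single path, so both
``src`` and ``dest`` are set to ``s3://bucket/key``, whereas
``aws s3 cp local.txt s3://bucket/`` sets ``src`` to the local path and
``dest`` to the S3 path.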
""" self.check_path_type(paths) self._normalize_s3_trailing_slash(paths) src_path = paths[0] self.parameters['src'] = src_path if len(paths) == 2: self.parameters['dest'] = paths[1] elif len(paths) == 1: self.parameters['dest'] = paths[0] self._validate_streaming_paths() self._validate_path_args() self._validate_sse_c_args() def _validate_streaming_paths(self): self.parameters['is_stream'] = False if self.parameters['src'] == '-' or self.parameters['dest'] == '-': if self.cmd != 'cp' or self.parameters.get('dir_op'): raise ValueError( "Streaming currently is only compatible with " "non-recursive cp commands" ) self.parameters['is_stream'] = True self.parameters['dir_op'] = False self.parameters['only_show_errors'] = True def _validate_path_args(self): # If we're using a mv command, you can't copy the object onto itself. params = self.parameters if self.cmd == 'mv' and self._same_path(params['src'], params['dest']): raise ValueError("Cannot mv a file onto itself: '%s' - '%s'" % ( params['src'], params['dest'])) # If the user provided local path does not exist, hard fail because # we know that we will not be able to upload the file. if 'locals3' == params['paths_type'] and not params['is_stream']: if not os.path.exists(params['src']): raise RuntimeError( 'The user-provided path %s does not exist.' % params['src']) # If the operation is downloading to a directory that does not exist, # create the directories so no warnings are thrown during the syncing # process. elif 's3local' == params['paths_type'] and params['dir_op']: if not os.path.exists(params['dest']): os.makedirs(params['dest']) def _same_path(self, src, dest): if not self.parameters['paths_type'] == 's3s3': return False elif src == dest: return True elif dest.endswith('/'): src_base = os.path.basename(src) return src == os.path.join(dest, src_base) def _normalize_s3_trailing_slash(self, paths): for i, path in enumerate(paths): if path.startswith('s3://'): bucket, key = find_bucket_key(path[5:]) if not key and not path.endswith('/'): # If only a bucket was specified, we need # to normalize the path and ensure it ends # with a '/', s3://bucket -> s3://bucket/ path += '/' paths[i] = path def check_path_type(self, paths): """ This initial check ensures that the path types for the specified command is correct. """ template_type = {'s3s3': ['cp', 'sync', 'mv'], 's3local': ['cp', 'sync', 'mv'], 'locals3': ['cp', 'sync', 'mv'], 's3': ['mb', 'rb', 'rm'], 'local': [], 'locallocal': []} paths_type = '' usage = "usage: aws s3 %s %s" % (self.cmd, self.usage) for i in range(len(paths)): if paths[i].startswith('s3://'): paths_type = paths_type + 's3' else: paths_type = paths_type + 'local' if self.cmd in template_type[paths_type]: self.parameters['paths_type'] = paths_type else: raise TypeError("%s\nError: Invalid argument type" % usage) def add_region(self, parsed_globals): self.parameters['region'] = parsed_globals.region def add_endpoint_url(self, parsed_globals): """ Adds endpoint_url to the parameters. 
""" if 'endpoint_url' in parsed_globals: self.parameters['endpoint_url'] = getattr(parsed_globals, 'endpoint_url') else: self.parameters['endpoint_url'] = None def add_verify_ssl(self, parsed_globals): self.parameters['verify_ssl'] = parsed_globals.verify_ssl def add_page_size(self, parsed_args): self.parameters['page_size'] = getattr(parsed_args, 'page_size', None) def _validate_sse_c_args(self): self._validate_sse_c_arg() self._validate_sse_c_arg('sse_c_copy_source') self._validate_sse_c_copy_source_for_paths() def _validate_sse_c_arg(self, sse_c_type='sse_c'): sse_c_key_type = sse_c_type + '_key' sse_c_type_param = '--' + sse_c_type.replace('_', '-') sse_c_key_type_param = '--' + sse_c_key_type.replace('_', '-') if self.parameters.get(sse_c_type): if not self.parameters.get(sse_c_key_type): raise ValueError( 'It %s is specified, %s must be specified ' 'as well.' % (sse_c_type_param, sse_c_key_type_param) ) if self.parameters.get(sse_c_key_type): if not self.parameters.get(sse_c_type): raise ValueError( 'It %s is specified, %s must be specified ' 'as well.' % (sse_c_key_type_param, sse_c_type_param) ) def _validate_sse_c_copy_source_for_paths(self): if self.parameters.get('sse_c_copy_source'): if self.parameters['paths_type'] != 's3s3': raise ValueError( '--sse-c-copy-source is only supported for ' 'copy operations.' ) awscli-1.17.14/awscli/customizations/s3/s3handler.py0000644000000000000000000005540313620325554022260 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
import logging import os from s3transfer.manager import TransferManager from awscli.customizations.s3.utils import ( human_readable_size, MAX_UPLOAD_SIZE, find_bucket_key, relative_path, create_warning, NonSeekableStream) from awscli.customizations.s3.transferconfig import \ create_transfer_config_from_runtime_config from awscli.customizations.s3.results import UploadResultSubscriber from awscli.customizations.s3.results import DownloadResultSubscriber from awscli.customizations.s3.results import CopyResultSubscriber from awscli.customizations.s3.results import UploadStreamResultSubscriber from awscli.customizations.s3.results import DownloadStreamResultSubscriber from awscli.customizations.s3.results import DeleteResultSubscriber from awscli.customizations.s3.results import QueuedResult from awscli.customizations.s3.results import SuccessResult from awscli.customizations.s3.results import FailureResult from awscli.customizations.s3.results import DryRunResult from awscli.customizations.s3.results import ResultRecorder from awscli.customizations.s3.results import ResultPrinter from awscli.customizations.s3.results import OnlyShowErrorsResultPrinter from awscli.customizations.s3.results import NoProgressResultPrinter from awscli.customizations.s3.results import ResultProcessor from awscli.customizations.s3.results import CommandResultRecorder from awscli.customizations.s3.utils import RequestParamsMapper from awscli.customizations.s3.utils import StdoutBytesWriter from awscli.customizations.s3.utils import ProvideSizeSubscriber from awscli.customizations.s3.utils import ProvideUploadContentTypeSubscriber from awscli.customizations.s3.utils import ProvideCopyContentTypeSubscriber from awscli.customizations.s3.utils import ProvideLastModifiedTimeSubscriber from awscli.customizations.s3.utils import DirectoryCreatorSubscriber from awscli.customizations.s3.utils import DeleteSourceFileSubscriber from awscli.customizations.s3.utils import DeleteSourceObjectSubscriber from awscli.customizations.s3.utils import DeleteCopySourceObjectSubscriber from awscli.compat import get_binary_stdin LOGGER = logging.getLogger(__name__) class S3TransferHandlerFactory(object): MAX_IN_MEMORY_CHUNKS = 6 def __init__(self, cli_params, runtime_config): """Factory for S3TransferHandlers :type cli_params: dict :param cli_params: The parameters provide to the CLI command :type runtime_config: RuntimeConfig :param runtime_config: The runtime config for the CLI command being run """ self._cli_params = cli_params self._runtime_config = runtime_config def __call__(self, client, result_queue): """Creates a S3TransferHandler instance :type client: botocore.client.Client :param client: The client to power the S3TransferHandler :type result_queue: queue.Queue :param result_queue: The result queue to be used to process results for the S3TransferHandler :returns: A S3TransferHandler instance """ transfer_config = create_transfer_config_from_runtime_config( self._runtime_config) transfer_config.max_in_memory_upload_chunks = self.MAX_IN_MEMORY_CHUNKS transfer_config.max_in_memory_download_chunks = \ self.MAX_IN_MEMORY_CHUNKS transfer_manager = TransferManager(client, transfer_config) LOGGER.debug( "Using a multipart threshold of %s and a part size of %s", transfer_config.multipart_threshold, transfer_config.multipart_chunksize ) result_recorder = ResultRecorder() result_processor_handlers = [result_recorder] self._add_result_printer(result_recorder, result_processor_handlers) result_processor = ResultProcessor( result_queue, 
result_processor_handlers) command_result_recorder = CommandResultRecorder( result_queue, result_recorder, result_processor) return S3TransferHandler( transfer_manager, self._cli_params, command_result_recorder) def _add_result_printer(self, result_recorder, result_processor_handlers): if self._cli_params.get('quiet'): return elif self._cli_params.get('only_show_errors'): result_printer = OnlyShowErrorsResultPrinter(result_recorder) elif self._cli_params.get('is_stream'): result_printer = OnlyShowErrorsResultPrinter(result_recorder) elif not self._cli_params.get('progress'): result_printer = NoProgressResultPrinter(result_recorder) else: result_printer = ResultPrinter(result_recorder) result_processor_handlers.append(result_printer) class S3TransferHandler(object): def __init__(self, transfer_manager, cli_params, result_command_recorder): """Backend for performing S3 transfers :type transfer_manager: s3transfer.manager.TransferManager :param transfer_manager: Transfer manager to use for transfers :type cli_params: dict :param cli_params: The parameters passed to the CLI command in the form of a dictionary :type result_command_recorder: ResultCommandRecorder :param result_command_recorder: The result command recorder to be used to get the final result of the transfer """ self._transfer_manager = transfer_manager # TODO: Ideally the s3 transfer handler should not need to know # about the result command recorder. It really only needs an interface # for adding results to the queue. When all of the commands have # converted to use this transfer handler, an effort should be made # to replace the passing of a result command recorder with an # abstraction to enqueue results. self._result_command_recorder = result_command_recorder submitter_args = ( self._transfer_manager, self._result_command_recorder.result_queue, cli_params ) self._submitters = [ UploadStreamRequestSubmitter(*submitter_args), DownloadStreamRequestSubmitter(*submitter_args), UploadRequestSubmitter(*submitter_args), DownloadRequestSubmitter(*submitter_args), CopyRequestSubmitter(*submitter_args), DeleteRequestSubmitter(*submitter_args), LocalDeleteRequestSubmitter(*submitter_args) ] def call(self, fileinfos): """Process iterable of FileInfos for transfer :type fileinfos: iterable of FileInfos param fileinfos: Set of FileInfos to submit to underlying transfer request submitters to make transfer API calls to S3 :rtype: CommandResult :returns: The result of the command that specifies the number of failures and warnings encountered. """ with self._result_command_recorder: with self._transfer_manager: total_submissions = 0 for fileinfo in fileinfos: for submitter in self._submitters: if submitter.can_submit(fileinfo): if submitter.submit(fileinfo): total_submissions += 1 break self._result_command_recorder.notify_total_submissions( total_submissions) return self._result_command_recorder.get_command_result() class BaseTransferRequestSubmitter(object): REQUEST_MAPPER_METHOD = None RESULT_SUBSCRIBER_CLASS = None def __init__(self, transfer_manager, result_queue, cli_params): """Submits transfer requests to the TransferManager Given a FileInfo object and provided CLI parameters, it will add the necessary extra arguments and subscribers in making a call to the TransferManager. 
:type transfer_manager: s3transfer.manager.TransferManager :param transfer_manager: The underlying transfer manager :type result_queue: queue.Queue :param result_queue: The result queue to use :type cli_params: dict :param cli_params: The associated CLI parameters passed in to the command as a dictionary. """ self._transfer_manager = transfer_manager self._result_queue = result_queue self._cli_params = cli_params def submit(self, fileinfo): """Submits a transfer request based on the FileInfo provided There is no guarantee that the transfer request will be made on behalf of the fileinfo as a fileinfo may be skipped based on circumstances in which the transfer is not possible. :type fileinfo: awscli.customizations.s3.fileinfo.FileInfo :param fileinfo: The FileInfo to be used to submit a transfer request to the underlying transfer manager. :rtype: s3transfer.futures.TransferFuture :returns: A TransferFuture representing the transfer if it the transfer was submitted. If it was not submitted nothing is returned. """ should_skip = self._warn_and_signal_if_skip(fileinfo) if not should_skip: return self._do_submit(fileinfo) def can_submit(self, fileinfo): """Checks whether it can submit a particular FileInfo :type fileinfo: awscli.customizations.s3.fileinfo.FileInfo :param fileinfo: The FileInfo to check if the transfer request submitter can handle. :returns: True if it can use the provided FileInfo to make a transfer request to the underlying transfer manager. False, otherwise. """ raise NotImplementedError('can_submit()') def _do_submit(self, fileinfo): extra_args = {} if self.REQUEST_MAPPER_METHOD: self.REQUEST_MAPPER_METHOD(extra_args, self._cli_params) subscribers = [] self._add_additional_subscribers(subscribers, fileinfo) # The result subscriber class should always be the last registered # subscriber to ensure it is not missing any information that # may have been added in a different subscriber such as size. if self.RESULT_SUBSCRIBER_CLASS: result_kwargs = {'result_queue': self._result_queue} if self._cli_params.get('is_move', False): result_kwargs['transfer_type'] = 'move' subscribers.append(self.RESULT_SUBSCRIBER_CLASS(**result_kwargs)) if not self._cli_params.get('dryrun'): return self._submit_transfer_request( fileinfo, extra_args, subscribers) else: self._submit_dryrun(fileinfo) def _submit_dryrun(self, fileinfo): transfer_type = fileinfo.operation_name if self._cli_params.get('is_move', False): transfer_type = 'move' src, dest = self._format_src_dest(fileinfo) self._result_queue.put(DryRunResult( transfer_type=transfer_type, src=src, dest=dest)) def _add_additional_subscribers(self, subscribers, fileinfo): pass def _submit_transfer_request(self, fileinfo, extra_args, subscribers): raise NotImplementedError('_submit_transfer_request()') def _warn_and_signal_if_skip(self, fileinfo): for warning_handler in self._get_warning_handlers(): if warning_handler(fileinfo): # On the first warning handler that returns a signal to skip # immediately propogate this signal and no longer check # the other warning handlers as no matter what the file will # be skipped. return True def _get_warning_handlers(self): # Returns a list of warning handlers, which are callables that # take in a single parameter representing a FileInfo. It will then # add a warning to result_queue if needed and return True if # that FileInfo should be skipped. 
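        # For illustration only -- a hypothetical handler that follows this
        # contract (it is not part of the CLI) could look like:
        #
        #     def _warn_if_hidden(self, fileinfo):
        #         if os.path.basename(fileinfo.src).startswith('.'):
        #             self._result_queue.put(
        #                 create_warning(fileinfo.src, "File is hidden."))
        #             return True
        #         return False
        #
        # Subclasses below return their real handlers (for example
        # ``_warn_if_too_large`` or ``_warn_glacier``) from this method.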
return [] def _should_inject_content_type(self): return ( self._cli_params.get('guess_mime_type') and not self._cli_params.get('content_type') ) def _warn_glacier(self, fileinfo): if not self._cli_params.get('force_glacier_transfer'): if not fileinfo.is_glacier_compatible(): LOGGER.debug( 'Encountered glacier object s3://%s. Not performing ' '%s on object.' % (fileinfo.src, fileinfo.operation_name)) if not self._cli_params.get('ignore_glacier_warnings'): warning = create_warning( 's3://'+fileinfo.src, 'Object is of storage class GLACIER. Unable to ' 'perform %s operations on GLACIER objects. You must ' 'restore the object to be able to perform the ' 'operation. See aws s3 %s help for additional ' 'parameter options to ignore or force these ' 'transfers.' % (fileinfo.operation_name, fileinfo.operation_name) ) self._result_queue.put(warning) return True return False def _warn_parent_reference(self, fileinfo): # normpath() will use the OS path separator so we # need to take that into account when checking for a parent prefix. parent_prefix = '..' + os.path.sep escapes_cwd = os.path.normpath(fileinfo.compare_key).startswith( parent_prefix) if escapes_cwd: warning = create_warning( fileinfo.compare_key, "File references a parent directory.") self._result_queue.put(warning) return True return False def _format_src_dest(self, fileinfo): """Returns formatted versions of a fileinfos source and destination.""" raise NotImplementedError('_format_src_dest') def _format_local_path(self, path): return relative_path(path) def _format_s3_path(self, path): if path.startswith('s3://'): return path return 's3://' + path class UploadRequestSubmitter(BaseTransferRequestSubmitter): REQUEST_MAPPER_METHOD = RequestParamsMapper.map_put_object_params RESULT_SUBSCRIBER_CLASS = UploadResultSubscriber def can_submit(self, fileinfo): return fileinfo.operation_name == 'upload' def _add_additional_subscribers(self, subscribers, fileinfo): subscribers.append(ProvideSizeSubscriber(fileinfo.size)) if self._should_inject_content_type(): subscribers.append(ProvideUploadContentTypeSubscriber()) if self._cli_params.get('is_move', False): subscribers.append(DeleteSourceFileSubscriber()) def _submit_transfer_request(self, fileinfo, extra_args, subscribers): bucket, key = find_bucket_key(fileinfo.dest) filein = self._get_filein(fileinfo) return self._transfer_manager.upload( fileobj=filein, bucket=bucket, key=key, extra_args=extra_args, subscribers=subscribers ) def _get_filein(self, fileinfo): return fileinfo.src def _get_warning_handlers(self): return [self._warn_if_too_large] def _warn_if_too_large(self, fileinfo): if getattr(fileinfo, 'size') and fileinfo.size > MAX_UPLOAD_SIZE: file_path = relative_path(fileinfo.src) warning_message = ( "File %s exceeds s3 upload limit of %s." 
% ( file_path, human_readable_size(MAX_UPLOAD_SIZE))) warning = create_warning( file_path, warning_message, skip_file=False) self._result_queue.put(warning) def _format_src_dest(self, fileinfo): src = self._format_local_path(fileinfo.src) dest = self._format_s3_path(fileinfo.dest) return src, dest class DownloadRequestSubmitter(BaseTransferRequestSubmitter): REQUEST_MAPPER_METHOD = RequestParamsMapper.map_get_object_params RESULT_SUBSCRIBER_CLASS = DownloadResultSubscriber def can_submit(self, fileinfo): return fileinfo.operation_name == 'download' def _add_additional_subscribers(self, subscribers, fileinfo): subscribers.append(ProvideSizeSubscriber(fileinfo.size)) subscribers.append(DirectoryCreatorSubscriber()) subscribers.append(ProvideLastModifiedTimeSubscriber( fileinfo.last_update, self._result_queue)) if self._cli_params.get('is_move', False): subscribers.append(DeleteSourceObjectSubscriber( fileinfo.source_client)) def _submit_transfer_request(self, fileinfo, extra_args, subscribers): bucket, key = find_bucket_key(fileinfo.src) fileout = self._get_fileout(fileinfo) return self._transfer_manager.download( fileobj=fileout, bucket=bucket, key=key, extra_args=extra_args, subscribers=subscribers ) def _get_fileout(self, fileinfo): return fileinfo.dest def _get_warning_handlers(self): return [self._warn_glacier, self._warn_parent_reference] def _format_src_dest(self, fileinfo): src = self._format_s3_path(fileinfo.src) dest = self._format_local_path(fileinfo.dest) return src, dest class CopyRequestSubmitter(BaseTransferRequestSubmitter): REQUEST_MAPPER_METHOD = RequestParamsMapper.map_copy_object_params RESULT_SUBSCRIBER_CLASS = CopyResultSubscriber def can_submit(self, fileinfo): return fileinfo.operation_name == 'copy' def _add_additional_subscribers(self, subscribers, fileinfo): subscribers.append(ProvideSizeSubscriber(fileinfo.size)) if self._should_inject_content_type(): subscribers.append(ProvideCopyContentTypeSubscriber()) if self._cli_params.get('is_move', False): subscribers.append(DeleteCopySourceObjectSubscriber( fileinfo.source_client)) def _submit_transfer_request(self, fileinfo, extra_args, subscribers): bucket, key = find_bucket_key(fileinfo.dest) source_bucket, source_key = find_bucket_key(fileinfo.src) copy_source = {'Bucket': source_bucket, 'Key': source_key} return self._transfer_manager.copy( bucket=bucket, key=key, copy_source=copy_source, extra_args=extra_args, subscribers=subscribers, source_client=fileinfo.source_client ) def _get_warning_handlers(self): return [self._warn_glacier] def _format_src_dest(self, fileinfo): src = self._format_s3_path(fileinfo.src) dest = self._format_s3_path(fileinfo.dest) return src, dest class UploadStreamRequestSubmitter(UploadRequestSubmitter): RESULT_SUBSCRIBER_CLASS = UploadStreamResultSubscriber def can_submit(self, fileinfo): return ( fileinfo.operation_name == 'upload' and self._cli_params.get('is_stream') ) def _add_additional_subscribers(self, subscribers, fileinfo): expected_size = self._cli_params.get('expected_size', None) if expected_size is not None: subscribers.append(ProvideSizeSubscriber(int(expected_size))) def _get_filein(self, fileinfo): binary_stdin = get_binary_stdin() return NonSeekableStream(binary_stdin) def _format_local_path(self, path): return '-' class DownloadStreamRequestSubmitter(DownloadRequestSubmitter): RESULT_SUBSCRIBER_CLASS = DownloadStreamResultSubscriber def can_submit(self, fileinfo): return ( fileinfo.operation_name == 'download' and self._cli_params.get('is_stream') ) def 
_add_additional_subscribers(self, subscribers, fileinfo): pass def _get_fileout(self, fileinfo): return StdoutBytesWriter() def _format_local_path(self, path): return '-' class DeleteRequestSubmitter(BaseTransferRequestSubmitter): REQUEST_MAPPER_METHOD = RequestParamsMapper.map_delete_object_params RESULT_SUBSCRIBER_CLASS = DeleteResultSubscriber def can_submit(self, fileinfo): return fileinfo.operation_name == 'delete' and \ fileinfo.src_type == 's3' def _submit_transfer_request(self, fileinfo, extra_args, subscribers): bucket, key = find_bucket_key(fileinfo.src) return self._transfer_manager.delete( bucket=bucket, key=key, extra_args=extra_args, subscribers=subscribers) def _format_src_dest(self, fileinfo): return self._format_s3_path(fileinfo.src), None class LocalDeleteRequestSubmitter(BaseTransferRequestSubmitter): REQUEST_MAPPER_METHOD = None RESULT_SUBSCRIBER_CLASS = None def can_submit(self, fileinfo): return fileinfo.operation_name == 'delete' and \ fileinfo.src_type == 'local' def _submit_transfer_request(self, fileinfo, extra_args, subscribers): # This is quirky but essentially instead of relying on a built-in # method of s3 transfer, the logic lives directly in the submitter. # The reason a explicit delete local file does not # live in s3transfer is because it is outside the scope of s3transfer; # it should only have interfaces for interacting with S3. Therefore, # the burden of this functionality should live in the CLI. # The main downsides in doing this is that delete and the result # creation happens in the main thread as opposed to a separate thread # in s3transfer. However, this is not too big of a downside because # deleting a local file only happens for sync --delete downloads and # is very fast compared to all of the other types of transfers. src, dest = self._format_src_dest(fileinfo) result_kwargs = { 'transfer_type': 'delete', 'src': src, 'dest': dest } try: self._result_queue.put(QueuedResult( total_transfer_size=0, **result_kwargs)) os.remove(fileinfo.src) self._result_queue.put(SuccessResult(**result_kwargs)) except Exception as e: self._result_queue.put( FailureResult(exception=e, **result_kwargs)) finally: # Return True to indicate that the transfer was submitted return True def _format_src_dest(self, fileinfo): return self._format_local_path(fileinfo.src), None awscli-1.17.14/awscli/customizations/s3/filters.py0000644000000000000000000001453013620325554022041 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging import fnmatch import os from awscli.customizations.s3.utils import split_s3_bucket_key LOG = logging.getLogger(__name__) def create_filter(parameters): """Given the CLI parameters dict, create a Filter object.""" # We need to evaluate all the filters based on the source # directory. 
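    # For example (illustrative values), a command line such as
    #
    #     aws s3 sync . s3://bucket --exclude "*" --include "*.txt"
    #
    # arrives here with parameters['filters'] roughly equal to
    # [['--exclude', '*'], ['--include', '*.txt']]; the lstrip('-') below
    # reduces each filter type to 'exclude'/'include' before the patterns
    # are rooted at the source directory.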
if parameters['filters']: cli_filters = parameters['filters'] real_filters = [] for filter_type, filter_pattern in cli_filters: real_filters.append((filter_type.lstrip('-'), filter_pattern)) source_location = parameters['src'] if source_location.startswith('s3://'): # This gives us (bucket, keyname) and we want # the bucket to be the root dir. src_rootdir = _get_s3_root(source_location, parameters['dir_op']) else: src_rootdir = _get_local_root(parameters['src'], parameters['dir_op']) destination_location = parameters['dest'] if destination_location.startswith('s3://'): dst_rootdir = _get_s3_root(parameters['dest'], parameters['dir_op']) else: dst_rootdir = _get_local_root(parameters['dest'], parameters['dir_op']) return Filter(real_filters, src_rootdir, dst_rootdir) else: return Filter({}, None, None) def _get_s3_root(source_location, dir_op): # Obtain the bucket and the key. bucket, key = split_s3_bucket_key(source_location) if not dir_op and not key.endswith('/'): # If we are not performing an operation on a directory and the key # is of the form: ``prefix/key``. We only want ``prefix`` included in # the the s3 root and not ``key``. key = '/'.join(key.split('/')[:-1]) # Rejoin the bucket and key back together. s3_path = '/'.join([bucket, key]) return s3_path def _get_local_root(source_location, dir_op): if dir_op: rootdir = os.path.abspath(source_location) else: rootdir = os.path.abspath(os.path.dirname(source_location)) return rootdir class Filter(object): """ This is a universal exclude/include filter. """ def __init__(self, patterns, rootdir, dst_rootdir): """ :var patterns: A list of patterns. A pattern consits of a list whose first member is a string 'exclude' or 'include'. The second member is the actual rule. :var rootdir: The root directory where the patterns are evaluated. This will generally be the directory of the source location. :var dst_rootdir: The destination root directory where the patterns are evaluated. This is only useful when the --delete option is also specified. """ self._original_patterns = patterns self.patterns = self._full_path_patterns(patterns, rootdir) self.dst_patterns = self._full_path_patterns(patterns, dst_rootdir) def _full_path_patterns(self, original_patterns, rootdir): # We need to transform the patterns into patterns that have # the root dir prefixed, so things like ``--exclude "*"`` # will actually be ['exclude', '/path/to/root/*'] full_patterns = [] for pattern in original_patterns: full_patterns.append( (pattern[0], os.path.join(rootdir, pattern[1]))) return full_patterns def call(self, file_infos): """ This function iterates over through the yielded file_info objects. It determines the type of the file and applies pattern matching to determine if the rule applies. While iterating though the patterns the file is assigned a boolean flag to determine if a file should be yielded on past the filer. Anything identified by the exclude filter has its flag set to false. Anything identified by the include filter has its flag set to True. All files begin with the flag set to true. Rules listed at the end will overwrite flags thrown by rules listed before it. 
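For example, with ``--exclude "*" --include "*.py"`` the exclude rule first
flags every file as False and the later include rule flags ``.py`` files back
to True, so only ``.py`` files are yielded; reversing the two rules would
exclude everything because the exclude rule would then be applied last.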
""" for file_info in file_infos: file_path = file_info.src file_status = (file_info, True) for pattern, dst_pattern in zip(self.patterns, self.dst_patterns): current_file_status = self._match_pattern(pattern, file_info) if current_file_status is not None: file_status = current_file_status dst_current_file_status = self._match_pattern(dst_pattern, file_info) if dst_current_file_status is not None: file_status = dst_current_file_status LOG.debug("=%s final filtered status, should_include: %s", file_path, file_status[1]) if file_status[1]: yield file_info def _match_pattern(self, pattern, file_info): file_status = None file_path = file_info.src pattern_type = pattern[0] if file_info.src_type == 'local': path_pattern = pattern[1].replace('/', os.sep) else: path_pattern = pattern[1].replace(os.sep, '/') is_match = fnmatch.fnmatch(file_path, path_pattern) if is_match and pattern_type == 'include': file_status = (file_info, True) LOG.debug("%s matched include filter: %s", file_path, path_pattern) elif is_match and pattern_type == 'exclude': file_status = (file_info, False) LOG.debug("%s matched exclude filter: %s", file_path, path_pattern) else: LOG.debug("%s did not match %s filter: %s", file_path, pattern_type, path_pattern) return file_status awscli-1.17.14/awscli/customizations/s3/s3.py0000644000000000000000000000526513620325554020723 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations import utils from awscli.customizations.commands import BasicCommand from awscli.customizations.s3.subcommands import ListCommand, WebsiteCommand, \ CpCommand, MvCommand, RmCommand, SyncCommand, MbCommand, RbCommand, \ PresignCommand from awscli.customizations.s3.syncstrategy.register import \ register_sync_strategies def awscli_initialize(cli): """ This function is require to use the plugin. It calls the functions required to add all neccessary commands and parameters to the CLI. This function is necessary to install the plugin using a configuration file """ cli.register("building-command-table.main", add_s3) cli.register('building-command-table.sync', register_sync_strategies) def s3_plugin_initialize(event_handlers): """ This is a wrapper to make the plugin built-in to the cli as opposed to specifiying it in the configuration file. """ awscli_initialize(event_handlers) def add_s3(command_table, session, **kwargs): """ This creates a new service object for the s3 plugin. It sends the old s3 commands to the namespace ``s3api``. 
""" utils.rename_command(command_table, 's3', 's3api') command_table['s3'] = S3(session) class S3(BasicCommand): NAME = 's3' DESCRIPTION = BasicCommand.FROM_FILE('s3/_concepts.rst') SYNOPSIS = "aws s3 [ ...]" SUBCOMMANDS = [ {'name': 'ls', 'command_class': ListCommand}, {'name': 'website', 'command_class': WebsiteCommand}, {'name': 'cp', 'command_class': CpCommand}, {'name': 'mv', 'command_class': MvCommand}, {'name': 'rm', 'command_class': RmCommand}, {'name': 'sync', 'command_class': SyncCommand}, {'name': 'mb', 'command_class': MbCommand}, {'name': 'rb', 'command_class': RbCommand}, {'name': 'presign', 'command_class': PresignCommand}, ] def _run_main(self, parsed_args, parsed_globals): if parsed_args.subcommand is None: raise ValueError("usage: aws [options] " "[parameters]\naws: error: too few arguments") awscli-1.17.14/awscli/customizations/s3/fileformat.py0000644000000000000000000001361513620325554022524 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os class FileFormat(object): def format(self, src, dest, parameters): """ This function formats the source and destination path to the proper form for a file generator. Note that a file is designated as an s3 file if it begins with s3:// :param src: The path of the source :type src: string :param dest: The path of the dest :type dest: string :param parameters: A dictionary that will be formed when the arguments of the command line have been parsed. For this function the dictionary should have the key 'dir_op' which is a boolean value that is true when the operation is being performed on a local directory/ all objects under a common prefix in s3 or false when it is on a single file/object. :returns: A dictionary that will be passed to a file generator. The dictionary contains the keys src, dest, dir_op, and use_src_name. src is a dictionary containing the source path and whether its located locally or in s3. dest is a dictionary containing the destination path and whether its located locally or in s3. """ src_type, src_path = self.identify_type(src) dest_type, dest_path = self.identify_type(dest) format_table = {'s3': self.s3_format, 'local': self.local_format} # :var dir_op: True when the operation being performed is on a # directory/objects under a common prefix or false when it # is a single file dir_op = parameters['dir_op'] src_path = format_table[src_type](src_path, dir_op)[0] # :var use_src_name: True when the destination file/object will take on # the name of the source file/object. False when it # will take on the name the user specified in the # command line. dest_path, use_src_name = format_table[dest_type](dest_path, dir_op) files = {'src': {'path': src_path, 'type': src_type}, 'dest': {'path': dest_path, 'type': dest_type}, 'dir_op': dir_op, 'use_src_name': use_src_name} return files def local_format(self, path, dir_op): """ This function formats the path of local files and returns whether the destination will keep its own name or take the source's name along with the editted path. 
Formatting Rules: 1) If a destination file is taking on a source name, it must end with the appropriate operating system separator. General Options: 1) If the operation is on a directory, the destination file will always use the name of the corresponding source file. 2) If the path of the destination exists and is a directory it will always use the name of the source file. 3) If the destination path ends with the appropriate operating system separator but is not an existing directory, the appropriate directories will be made and the file will use the source's name. 4) If the destination path does not end with the appropriate operating system separator and is not an existing directory, the appropriate directories will be created and the file name will be the one provided. """ full_path = os.path.abspath(path) if (os.path.exists(full_path) and os.path.isdir(full_path)) or dir_op: full_path += os.sep return full_path, True else: if path.endswith(os.sep): full_path += os.sep return full_path, True else: return full_path, False def s3_format(self, path, dir_op): """ This function formats the path of source files and returns whether the destination will keep its own name or take the source's name along with the edited path. Formatting Rules: 1) If a destination file is taking on a source name, it must end with a forward slash. General Options: 1) If the operation is on objects under a common prefix, the destination file will always use the name of the corresponding source file. 2) If the path ends with a forward slash, the appropriate prefixes will be formed and will use the name of the source. 3) If the path does not end with a forward slash, the appropriate prefix will be formed but use the name provided as opposed to the source name. """ if dir_op: if not path.endswith('/'): path += '/' return path, True else: if not path.endswith('/'): return path, False else: return path, True def identify_type(self, path): """ It identifies whether the path is from local or s3. Returns the adjusted pathname and a string stating whether the file is from local or s3. If from s3 it strips off the s3:// from the beginning of the path """ if path.startswith('s3://'): return 's3', path[5:] else: return 'local', path awscli-1.17.14/awscli/customizations/s3/transferconfig.py0000644000000000000000000001066213620325554023405 0ustar rootroot00000000000000# Copyright 2013-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from s3transfer.manager import TransferConfig from awscli.customizations.s3.utils import human_readable_to_bytes from awscli.compat import six # If the user does not specify any overrides, # these are the default values we use for the s3 transfer # commands. 
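# These defaults can be overridden per profile in ~/.aws/config; the values
# below are illustrative only:
#
#     [profile development]
#     s3 =
#       max_concurrent_requests = 20
#       multipart_chunksize = 16MB
#       max_bandwidth = 50MB/s
#
# or, equivalently, with a command such as
# ``aws configure set default.s3.max_concurrent_requests 20``.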
DEFAULTS = { 'multipart_threshold': 8 * (1024 ** 2), 'multipart_chunksize': 8 * (1024 ** 2), 'max_concurrent_requests': 10, 'max_queue_size': 1000, 'max_bandwidth': None } class InvalidConfigError(Exception): pass class RuntimeConfig(object): POSITIVE_INTEGERS = ['multipart_chunksize', 'multipart_threshold', 'max_concurrent_requests', 'max_queue_size', 'max_bandwidth'] HUMAN_READABLE_SIZES = ['multipart_chunksize', 'multipart_threshold'] HUMAN_READABLE_RATES = ['max_bandwidth'] @staticmethod def defaults(): return DEFAULTS.copy() def build_config(self, **kwargs): """Create and convert a runtime config dictionary. This method will merge and convert S3 runtime configuration data into a single dictionary that can then be passed to classes that use this runtime config. :param kwargs: Any key in the ``DEFAULTS`` dict. :return: A dictionary of the merged and converted values. """ runtime_config = DEFAULTS.copy() if kwargs: runtime_config.update(kwargs) self._convert_human_readable_sizes(runtime_config) self._convert_human_readable_rates(runtime_config) self._validate_config(runtime_config) return runtime_config def _convert_human_readable_sizes(self, runtime_config): for attr in self.HUMAN_READABLE_SIZES: value = runtime_config.get(attr) if value is not None and not isinstance(value, six.integer_types): runtime_config[attr] = human_readable_to_bytes(value) def _convert_human_readable_rates(self, runtime_config): for attr in self.HUMAN_READABLE_RATES: value = runtime_config.get(attr) if value is not None and not isinstance(value, six.integer_types): if not value.endswith('B/s'): raise InvalidConfigError( 'Invalid rate: %s. The value must be expressed ' 'as a rate in terms of bytes per seconds ' '(e.g. 10MB/s or 800KB/s)' % value) runtime_config[attr] = human_readable_to_bytes(value[:-2]) def _validate_config(self, runtime_config): for attr in self.POSITIVE_INTEGERS: value = runtime_config.get(attr) if value is not None: try: runtime_config[attr] = int(value) if not runtime_config[attr] > 0: self._error_positive_value(attr, value) except ValueError: self._error_positive_value(attr, value) def _error_positive_value(self, name, value): raise InvalidConfigError( "Value for %s must be a positive integer: %s" % (name, value)) def create_transfer_config_from_runtime_config(runtime_config): """ Creates an equivalent s3transfer TransferConfig :type runtime_config: dict :argument runtime_config: A valid RuntimeConfig-generated dict. :returns: A TransferConfig with the same configuration as the runtime config. """ translation_map = { 'max_concurrent_requests': 'max_request_concurrency', 'max_queue_size': 'max_request_queue_size', 'multipart_threshold': 'multipart_threshold', 'multipart_chunksize': 'multipart_chunksize', 'max_bandwidth': 'max_bandwidth', } kwargs = {} for key, value in runtime_config.items(): if key not in translation_map: continue kwargs[translation_map[key]] = value return TransferConfig(**kwargs) awscli-1.17.14/awscli/customizations/s3/__init__.py0000644000000000000000000000106513620325554022127 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
See the License for the specific # language governing permissions and limitations under the License. awscli-1.17.14/awscli/customizations/s3/syncstrategy/0000755000000000000000000000000013620325757022560 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/s3/syncstrategy/base.py0000644000000000000000000002360013620325554024040 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging LOG = logging.getLogger(__name__) VALID_SYNC_TYPES = ['file_at_src_and_dest', 'file_not_at_dest', 'file_not_at_src'] class BaseSync(object): """Base sync strategy To create a new sync strategy, subclass from this class. """ # This is the argument that will be added to the ``SyncCommand`` arg table. # This argument will represent the sync strategy when the arguments for # the sync command are parsed. ``ARGUMENT`` follows the same format as # a member of ``ARG_TABLE`` in ``BasicCommand`` class as specified in # ``awscli/customizations/commands.py``. # # For example, if I wanted to perform the sync strategy whenever I type # ``--my-sync-strategy``, I would say: # # ARGUMENT = # {'name': 'my-sync-strategy', 'action': 'store-true', # 'help_text': 'Performs my sync strategy'} # # Typically, the argument's ``action`` should ``store_true`` to # minimize amount of extra code in making a custom sync strategy. ARGUMENT = None # At this point all that need to be done is implement # ``determine_should_sync`` method (see method for more information). def __init__(self, sync_type='file_at_src_and_dest'): """ :type sync_type: string :param sync_type: This determines where the sync strategy will be used. There are three strings to choose from: 'file_at_src_and_dest': apply sync strategy on a file that exists both at the source and the destination. 'file_not_at_dest': apply sync strategy on a file that exists at the source but not the destination. 'file_not_at_src': apply sync strategy on a file that exists at the destination but not the source. """ self._check_sync_type(sync_type) self._sync_type = sync_type def _check_sync_type(self, sync_type): if sync_type not in VALID_SYNC_TYPES: raise ValueError("Unknown sync_type: %s.\n" "Valid options are %s." % (sync_type, VALID_SYNC_TYPES)) @property def sync_type(self): return self._sync_type def register_strategy(self, session): """Registers the sync strategy class to the given session.""" session.register('building-arg-table.sync', self.add_sync_argument) session.register('choosing-s3-sync-strategy', self.use_sync_strategy) def determine_should_sync(self, src_file, dest_file): """Subclasses should implement this method. This function takes two ``FileStat`` objects (one from the source and one from the destination). Then makes a decision on whether a given operation (e.g. a upload, copy, download) should be allowed to take place. The function currently raises a ``NotImplementedError``. So this method must be overwritten when this class is subclassed. Note that this method must return a Boolean as documented below. 
:type src_file: ``FileStat`` object :param src_file: A representation of the opertaion that is to be performed on a specfic file existing in the source. Note if the file does not exist at the source, ``src_file`` is None. :type dest_file: ``FileStat`` object :param dest_file: A representation of the operation that is to be performed on a specific file existing in the destination. Note if the file does not exist at the destination, ``dest_file`` is None. :rtype: Boolean :return: True if an operation based on the ``FileStat`` should be allowed to occur. False if if an operation based on the ``FileStat`` should not be allowed to occur. Note the operation being referred to depends on the ``sync_type`` of the sync strategy: 'file_at_src_and_dest': refers to ``src_file`` 'file_not_at_dest': refers to ``src_file`` 'file_not_at_src': refers to ``dest_file`` """ raise NotImplementedError("determine_should_sync") @property def arg_name(self): # Retrieves the ``name`` of the sync strategy's ``ARGUMENT``. name = None if self.ARGUMENT is not None: name = self.ARGUMENT.get('name', None) return name @property def arg_dest(self): # Retrieves the ``dest`` of the sync strategy's ``ARGUMENT``. dest = None if self.ARGUMENT is not None: dest = self.ARGUMENT.get('dest', None) return dest def add_sync_argument(self, arg_table, **kwargs): # This function adds sync strategy's argument to the ``SyncCommand`` # argument table. if self.ARGUMENT is not None: arg_table.append(self.ARGUMENT) def use_sync_strategy(self, params, **kwargs): # This function determines which sync strategy the ``SyncCommand`` will # use. The sync strategy object must be returned by this method # if it is to be chosen as the sync strategy to use. # # ``params`` is a dictionary that specifies all of the arguments # the sync command is able to process as well as their values. # # Since ``ARGUMENT`` was added to the ``SyncCommand`` arg table, # the argument will be present in ``params``. # # If the argument was included in the actual ``aws s3 sync`` command # its value will show up as ``True`` in ``params`` otherwise its value # will be ``False`` in ``params`` assuming the argument's ``action`` # is ``store_true``. # # Note: If the ``action`` of ``ARGUMENT`` was not set to # ``store_true``, this method will need to be overwritten. # name_in_params = None # Check if a ``dest`` was specified in ``ARGUMENT`` as if it is # specified, the boolean value will be located at the argument's # ``dest`` value in the ``params`` dictionary. if self.arg_dest is not None: name_in_params = self.arg_dest # Then check ``name`` of ``ARGUMENT``, the boolean value will be # located at the argument's ``name`` value in the ``params`` # dictionary. elif self.arg_name is not None: # ``name`` has all ``-`` replaced with ``_`` in ``params``. name_in_params = self.arg_name.replace('-', '_') if name_in_params is not None: if params.get(name_in_params): # Return the sync strategy object to be used for syncing. return self return None def total_seconds(self, td): """ timedelta's time_seconds() function for python 2.6 users :param td: The difference between two datetime objects. """ return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6 def compare_size(self, src_file, dest_file): """ :returns: True if the sizes are the same. False otherwise. """ return src_file.size == dest_file.size def compare_time(self, src_file, dest_file): """ :returns: True if the file does not need updating based on time of last modification and type of operation. 
False if the file does need updating based on the time of last modification and type of operation. """ src_time = src_file.last_update dest_time = dest_file.last_update delta = dest_time - src_time cmd = src_file.operation_name if cmd == "upload" or cmd == "copy": if self.total_seconds(delta) >= 0: # Destination is newer than source. return True else: # Destination is older than source, so # we have a more recently updated file # at the source location. return False elif cmd == "download": if self.total_seconds(delta) <= 0: return True else: # delta is positive, so the destination # is newer than the source. return False class SizeAndLastModifiedSync(BaseSync): def determine_should_sync(self, src_file, dest_file): same_size = self.compare_size(src_file, dest_file) same_last_modified_time = self.compare_time(src_file, dest_file) should_sync = (not same_size) or (not same_last_modified_time) if should_sync: LOG.debug( "syncing: %s -> %s, size: %s -> %s, modified time: %s -> %s", src_file.src, src_file.dest, src_file.size, dest_file.size, src_file.last_update, dest_file.last_update) return should_sync class NeverSync(BaseSync): def __init__(self, sync_type='file_not_at_src'): super(NeverSync, self).__init__(sync_type) def determine_should_sync(self, src_file, dest_file): return False class MissingFileSync(BaseSync): def __init__(self, sync_type='file_not_at_dest'): super(MissingFileSync, self).__init__(sync_type) def determine_should_sync(self, src_file, dest_file): LOG.debug("syncing: %s -> %s, file does not exist at destination", src_file.src, src_file.dest) return True awscli-1.17.14/awscli/customizations/s3/syncstrategy/delete.py0000644000000000000000000000232013620325554024364 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging from awscli.customizations.s3.syncstrategy.base import BaseSync LOG = logging.getLogger(__name__) DELETE = {'name': 'delete', 'action': 'store_true', 'help_text': ( "Files that exist in the destination but not in the source are " "deleted during sync.")} class DeleteSync(BaseSync): ARGUMENT = DELETE def determine_should_sync(self, src_file, dest_file): dest_file.operation_name = 'delete' LOG.debug("syncing: (None) -> %s (remove), file does not " "exist at source (%s) and delete mode enabled", dest_file.src, dest_file.dest) return True awscli-1.17.14/awscli/customizations/s3/syncstrategy/register.py0000644000000000000000000000372113620325554024754 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.s3.syncstrategy.sizeonly import SizeOnlySync from awscli.customizations.s3.syncstrategy.exacttimestamps import \ ExactTimestampsSync from awscli.customizations.s3.syncstrategy.delete import DeleteSync def register_sync_strategy(session, strategy_cls, sync_type='file_at_src_and_dest'): """Registers a single sync strategy. :param session: The session that the sync strategy is being registered to. :param strategy_cls: The class of the sync strategy to be registered. :param sync_type: A string representing when to perform the sync strategy. See ``__init__`` method of ``BaseSync`` for possible options. """ strategy = strategy_cls(sync_type) strategy.register_strategy(session) def register_sync_strategies(command_table, session, **kwargs): """Registers the different sync strategies. To register a sync strategy add ``register_sync_strategy(session, YourSyncStrategyClass, sync_type)`` to the list of registered strategies in this function. """ # Register the size only sync strategy. register_sync_strategy(session, SizeOnlySync) # Register the exact timestamps sync strategy. register_sync_strategy(session, ExactTimestampsSync) # Register the delete sync strategy. register_sync_strategy(session, DeleteSync, 'file_not_at_src') # Register additional sync strategies here... awscli-1.17.14/awscli/customizations/s3/syncstrategy/__init__.py0000644000000000000000000000106513620325554024666 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. awscli-1.17.14/awscli/customizations/s3/syncstrategy/sizeonly.py0000644000000000000000000000242413620325554025003 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. 
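As the ``register_sync_strategies`` docstring above notes, adding another strategy only requires a ``BaseSync`` subclass and a ``register_sync_strategy`` call. The sketch below is hypothetical (the ``--ignore-larger`` flag and ``IgnoreLargerSync`` class do not exist in the CLI), but it follows the same pattern as ``DeleteSync`` and ``SizeOnlySync``::

    from awscli.customizations.s3.syncstrategy.base import BaseSync
    from awscli.customizations.s3.syncstrategy.register import \
        register_sync_strategy

    IGNORE_LARGER = {'name': 'ignore-larger', 'action': 'store_true',
                     'help_text': (
                         'Hypothetical flag: skip files whose destination '
                         'copy is already larger than the source copy.')}


    class IgnoreLargerSync(BaseSync):
        ARGUMENT = IGNORE_LARGER

        def determine_should_sync(self, src_file, dest_file):
            # Sync only while the destination copy is not larger than
            # the source copy.
            return dest_file.size <= src_file.size


    def register_ignore_larger(session):
        # The same call used for the built-in strategies above; the sync
        # type defaults to 'file_at_src_and_dest'.
        register_sync_strategy(session, IgnoreLargerSync)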
import logging from awscli.customizations.s3.syncstrategy.base import BaseSync LOG = logging.getLogger(__name__) SIZE_ONLY = {'name': 'size-only', 'action': 'store_true', 'help_text': ( 'Makes the size of each key the only criteria used to ' 'decide whether to sync from source to destination.')} class SizeOnlySync(BaseSync): ARGUMENT = SIZE_ONLY def determine_should_sync(self, src_file, dest_file): same_size = self.compare_size(src_file, dest_file) should_sync = not same_size if should_sync: LOG.debug("syncing: %s -> %s, size_changed: %s", src_file.src, src_file.dest, not same_size) return should_sync awscli-1.17.14/awscli/customizations/s3/syncstrategy/exacttimestamps.py0000644000000000000000000000322613620325554026343 0ustar rootroot00000000000000# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import logging from awscli.customizations.s3.syncstrategy.base import SizeAndLastModifiedSync LOG = logging.getLogger(__name__) EXACT_TIMESTAMPS = {'name': 'exact-timestamps', 'action': 'store_true', 'help_text': ( 'When syncing from S3 to local, same-sized ' 'items will be ignored only when the timestamps ' 'match exactly. The default behavior is to ignore ' 'same-sized items unless the local version is newer ' 'than the S3 version.')} class ExactTimestampsSync(SizeAndLastModifiedSync): ARGUMENT = EXACT_TIMESTAMPS def compare_time(self, src_file, dest_file): src_time = src_file.last_update dest_time = dest_file.last_update delta = dest_time - src_time cmd = src_file.operation_name if cmd == 'download': return self.total_seconds(delta) == 0 else: return super(ExactTimestampsSync, self).compare_time(src_file, dest_file) awscli-1.17.14/awscli/customizations/sagemaker.py0000644000000000000000000000176613620325554022012 0ustar rootroot00000000000000# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from awscli.customizations.utils import make_hidden_command_alias def register_alias_sagemaker_runtime_command(event_emitter): event_emitter.register( 'building-command-table.main', alias_sagemaker_runtime_command ) def alias_sagemaker_runtime_command(command_table, **kwargs): make_hidden_command_alias( command_table, existing_name='sagemaker-runtime', alias_name='runtime.sagemaker', ) awscli-1.17.14/awscli/customizations/iamvirtmfa.py0000644000000000000000000000631613620325554022206 0ustar rootroot00000000000000# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). 
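Each of the sync strategies above surfaces as a flag on ``aws s3 sync``. Typical invocations (bucket names and paths are placeholders)::

    # Compare by size only and ignore timestamps entirely.
    $ aws s3 sync . s3://mybucket --size-only

    # When downloading, skip same-sized files only if the timestamps match exactly.
    $ aws s3 sync s3://mybucket . --exact-timestamps

    # Remove files from the destination that no longer exist at the source.
    $ aws s3 sync . s3://mybucket --delete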
You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. """ This customization makes it easier to deal with the bootstrapping data returned by the ``iam create-virtual-mfa-device`` command. You can choose to bootstrap via a QRCode or via a Base32String. You specify your choice via the ``--bootstrap-method`` option which should be either "QRCodePNG" or "Base32StringSeed". You then specify the path to where you would like your bootstrapping data saved using the ``--outfile`` option. The command will pull the appropriate data field out of the response and write it to the specified file. It will also remove the two bootstrap data fields from the response. """ import base64 from awscli.customizations.arguments import StatefulArgument from awscli.customizations.arguments import resolve_given_outfile_path from awscli.customizations.arguments import is_parsed_result_successful CHOICES = ('QRCodePNG', 'Base32StringSeed') OUTPUT_HELP = ('The output path and file name where the bootstrap ' 'information will be stored.') BOOTSTRAP_HELP = ('Method to use to seed the virtual MFA. ' 'Valid values are: %s | %s' % CHOICES) class FileArgument(StatefulArgument): def add_to_params(self, parameters, value): # Validate the file here so we can raise an error prior # calling the service. value = resolve_given_outfile_path(value) super(FileArgument, self).add_to_params(parameters, value) class IAMVMFAWrapper(object): def __init__(self, event_handler): self._event_handler = event_handler self._outfile = FileArgument( 'outfile', help_text=OUTPUT_HELP, required=True) self._method = StatefulArgument( 'bootstrap-method', help_text=BOOTSTRAP_HELP, choices=CHOICES, required=True) self._event_handler.register( 'building-argument-table.iam.create-virtual-mfa-device', self._add_options) self._event_handler.register( 'after-call.iam.CreateVirtualMFADevice', self._save_file) def _add_options(self, argument_table, **kwargs): argument_table['outfile'] = self._outfile argument_table['bootstrap-method'] = self._method def _save_file(self, parsed, **kwargs): if not is_parsed_result_successful(parsed): return method = self._method.value outfile = self._outfile.value if method in parsed['VirtualMFADevice']: body = parsed['VirtualMFADevice'][method] with open(outfile, 'wb') as fp: fp.write(base64.b64decode(body)) for choice in CHOICES: if choice in parsed['VirtualMFADevice']: del parsed['VirtualMFADevice'][choice] awscli-1.17.14/awscli/customizations/configure/0000755000000000000000000000000013620325757021455 5ustar rootroot00000000000000awscli-1.17.14/awscli/customizations/configure/set.py0000644000000000000000000001104513620325554022616 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. 
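A typical invocation of the virtual MFA customization described above looks like the following; the device name and output path are placeholders::

    $ aws iam create-virtual-mfa-device \
        --virtual-mfa-device-name MyMFADevice \
        --bootstrap-method QRCodePNG \
        --outfile ./MyMFADevice-qrcode.png

The command decodes the ``QRCodePNG`` field into the given file and removes both bootstrap fields from the printed response, as implemented in ``_save_file`` above.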
See the License for the specific # language governing permissions and limitations under the License. import os from awscli.customizations.commands import BasicCommand from awscli.customizations.configure.writer import ConfigFileWriter from . import PREDEFINED_SECTION_NAMES, profile_to_section class ConfigureSetCommand(BasicCommand): NAME = 'set' DESCRIPTION = BasicCommand.FROM_FILE('configure', 'set', '_description.rst') SYNOPSIS = 'aws configure set varname value [--profile profile-name]' EXAMPLES = BasicCommand.FROM_FILE('configure', 'set', '_examples.rst') ARG_TABLE = [ {'name': 'varname', 'help_text': 'The name of the config value to set.', 'action': 'store', 'cli_type_name': 'string', 'positional_arg': True}, {'name': 'value', 'help_text': 'The value to set.', 'action': 'store', 'no_paramfile': True, # To disable the default paramfile behavior 'cli_type_name': 'string', 'positional_arg': True}, ] # Any variables specified in this list will be written to # the ~/.aws/credentials file instead of ~/.aws/config. _WRITE_TO_CREDS_FILE = ['aws_access_key_id', 'aws_secret_access_key', 'aws_session_token'] def __init__(self, session, config_writer=None): super(ConfigureSetCommand, self).__init__(session) if config_writer is None: config_writer = ConfigFileWriter() self._config_writer = config_writer def _run_main(self, args, parsed_globals): varname = args.varname value = args.value section = 'default' # Before handing things off to the config writer, # we need to find out three things: # 1. What section we're writing to (section). # 2. The name of the config key (varname) # 3. The actual value (value). if '.' not in varname: # unqualified name, scope it to the current # profile (or leave it as the 'default' section if # no profile is set). if self._session.profile is not None: section = profile_to_section(self._session.profile) else: # First figure out if it's been scoped to a profile. parts = varname.split('.') if parts[0] in ('default', 'profile'): # Then we know we're scoped to a profile. if parts[0] == 'default': section = 'default' remaining = parts[1:] else: # [profile, profile_name, ...] section = profile_to_section(parts[1]) remaining = parts[2:] varname = remaining[0] if len(remaining) == 2: value = {remaining[1]: value} elif parts[0] not in PREDEFINED_SECTION_NAMES: if self._session.profile is not None: section = profile_to_section(self._session.profile) else: profile_name = self._session.get_config_variable('profile') if profile_name is not None: section = profile_name varname = parts[0] if len(parts) == 2: value = {parts[1]: value} elif len(parts) == 2: # Otherwise it's something like "set preview.service true" # of something in the [plugin] section. section, varname = parts config_filename = os.path.expanduser( self._session.get_config_variable('config_file')) updated_config = {'__section__': section, varname: value} if varname in self._WRITE_TO_CREDS_FILE: config_filename = os.path.expanduser( self._session.get_config_variable('credentials_file')) section_name = updated_config['__section__'] if section_name.startswith('profile '): updated_config['__section__'] = section_name[8:] self._config_writer.update_config(updated_config, config_filename) awscli-1.17.14/awscli/customizations/configure/get.py0000644000000000000000000001023713620325554022604 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. 
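The section and variable-name parsing above determines where ``aws configure set`` writes each value. A few representative invocations (profile names and values are placeholders)::

    # Unqualified name: written under the current (or default) profile.
    $ aws configure set region us-west-2

    # Explicit profile scope: written under [profile testing] in ~/.aws/config.
    $ aws configure set profile.testing.region eu-west-1

    # Credential values are routed to ~/.aws/credentials instead.
    $ aws configure set aws_access_key_id foo --profile testing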
A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import sys import logging from awscli.customizations.commands import BasicCommand from awscli.compat import six from . import PREDEFINED_SECTION_NAMES LOG = logging.getLogger(__name__) class ConfigureGetCommand(BasicCommand): NAME = 'get' DESCRIPTION = BasicCommand.FROM_FILE('configure', 'get', '_description.rst') SYNOPSIS = 'aws configure get varname [--profile profile-name]' EXAMPLES = BasicCommand.FROM_FILE('configure', 'get', '_examples.rst') ARG_TABLE = [ {'name': 'varname', 'help_text': 'The name of the config value to retrieve.', 'action': 'store', 'cli_type_name': 'string', 'positional_arg': True}, ] def __init__(self, session, stream=sys.stdout, error_stream=sys.stderr): super(ConfigureGetCommand, self).__init__(session) self._stream = stream self._error_stream = error_stream def _run_main(self, args, parsed_globals): varname = args.varname if '.' not in varname: # get_scoped_config() returns the config variables in the config # file (not the logical_var names), which is what we want. config = self._session.get_scoped_config() value = config.get(varname) else: value = self._get_dotted_config_value(varname) LOG.debug(u'Config value retrieved: %s' % value) if isinstance(value, six.string_types): self._stream.write(value) self._stream.write('\n') return 0 elif isinstance(value, dict): # TODO: add support for this. We would need to print it off in # the same format as the config file. self._error_stream.write( 'varname (%s) must reference a value, not a section or ' 'sub-section.' % varname ) return 1 else: return 1 def _get_dotted_config_value(self, varname): parts = varname.split('.') num_dots = varname.count('.') # Logic to deal with predefined sections like [preview], [plugin] and # etc. if num_dots == 1 and parts[0] in PREDEFINED_SECTION_NAMES: full_config = self._session.full_config section, config_name = varname.split('.') value = full_config.get(section, {}).get(config_name) if value is None: # Try to retrieve it from the profile config. value = full_config['profiles'].get( section, {}).get(config_name) return value if parts[0] == 'profile': profile_name = parts[1] config_name = parts[2] remaining = parts[3:] # Check if varname starts with 'default' profile (e.g. # default.emr-dev.emr.instance_profile) If not, go further to check # if varname starts with a known profile name elif parts[0] == 'default' or ( parts[0] in self._session.full_config['profiles']): profile_name = parts[0] config_name = parts[1] remaining = parts[2:] else: profile_name = self._session.get_config_variable('profile') if profile_name is None: profile_name = 'default' config_name = parts[0] remaining = parts[1:] value = self._session.full_config['profiles'].get( profile_name, {}).get(config_name) if len(remaining) == 1: try: value = value.get(remaining[-1]) except AttributeError: value = None return value awscli-1.17.14/awscli/customizations/configure/writer.py0000644000000000000000000002123213620325554023336 0ustar rootroot00000000000000# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. 
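``configure get`` resolves dotted names symmetrically to ``configure set``; for example (profile names are placeholders)::

    # Value from the current (or default) profile.
    $ aws configure get region

    # Value from a specific profile, using either form.
    $ aws configure get profile.testing.region
    $ aws configure get region --profile testing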
A copy of # the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. import os import re from . import SectionNotFoundError class ConfigFileWriter(object): SECTION_REGEX = re.compile(r'\[(?P
<header>[^]]+)\]') OPTION_REGEX = re.compile( r'(?P