\n'
'\nValues for the following can be set in the AWS CLI'
' config file using the "aws configure set" command: --service-role, --log-uri,'
' and InstanceProfile and KeyName arguments under --ec2-attributes.')
CLUSTER_NAME = (
'The name of the cluster. If not provided, the default is "Development Cluster".
')
LOG_URI = (
'Specifies the location in Amazon S3 to which log files '
'are periodically written. If a value is not provided, '
'logs files are not written to Amazon S3 from the master node '
'and are lost if the master node terminates.
')
SERVICE_ROLE = (
'Specifies an IAM service role, which Amazon EMR requires to call other AWS services '
'on your behalf during cluster operation. This parameter '
'is usually specified when a customized service role is used. '
'To specify the default service role, as well as the default instance '
'profile, use the --use-default-roles
parameter. '
'If the role and instance profile do not already exist, use the '
'aws emr create-default-roles
command to create them.
')
AUTOSCALING_ROLE = (
'Specify --auto-scaling-role EMR_AutoScaling_DefaultRole
'
' if an automatic scaling policy is specified for an instance group'
' using the --instance-groups
parameter. This default'
' IAM role allows the automatic scaling feature'
' to launch and terminate Amazon EC2 instances during scaling operations.
')
USE_DEFAULT_ROLES = (
'Specifies that the cluster should use the default'
' service role (EMR_DefaultRole) and instance profile (EMR_EC2_DefaultRole)'
' for permissions to access other AWS services.
'
'Make sure that the role and instance profile exist first. To create them,'
' use the create-default-roles
command.
')
AMI_VERSION = (
'Applies only to Amazon EMR release versions earlier than 4.0. Use'
' --release-label
for 4.0 and later. Specifies'
' the version of Amazon Linux Amazon Machine Image (AMI)'
' to use when launching Amazon EC2 instances in the cluster.'
' For example, --ami-version 3.1.0
.')
RELEASE_LABEL = (
'
Specifies the Amazon EMR release version, which determines'
' the versions of application software that are installed on the cluster.'
' For example, --release-label emr-5.15.0
installs'
' the application versions and features available in that version.'
' For details about application versions and features available'
' in each release, see the Amazon EMR Release Guide:
'
'https://docs.aws.amazon.com/emr/ReleaseGuide
'
'Use --release-label
only for Amazon EMR release version 4.0'
' and later. Use --ami-version
for earlier versions.'
' You cannot specify both a release label and AMI version.
')
CONFIGURATIONS = (
'Specifies a JSON file that contains configuration classifications,'
' which you can use to customize applications that Amazon EMR installs'
' when cluster instances launch. Applies only to Amazon EMR 4.0 and later.'
' The file referenced can either be stored locally (for example,'
' --configurations file://configurations.json
)'
' or stored in Amazon S3 (for example, --configurations'
' https://s3.amazonaws.com/myBucket/configurations.json
).'
' Each classification usually corresponds to the xml configuration'
' file for an application, such as yarn-site
for YARN. For a list of'
' available configuration classifications and example JSON, see'
' the following topic in the Amazon EMR Release Guide:
'
'https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-configure-apps.html
')
INSTANCE_GROUPS = (
'Specifies the number and type of Amazon EC2 instances'
' to create for each node type in a cluster, using uniform instance groups.'
' You can specify either --instance-groups
or'
' --instance-fleets
but not both.'
' For more information, see the following topic in the EMR Management Guide:
'
'https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-group-configuration.html
'
'You can specify arguments individually using multiple'
' InstanceGroupType
argument blocks, one for the MASTER
'
' instance group, one for a CORE
instance group,'
' and optional, multiple TASK
instance groups.
'
'If you specify inline JSON structures, enclose the entire'
' InstanceGroupType
argument block in single quotation marks.'
'
Each InstanceGroupType
block takes the following inline arguments.'
' Optional arguments are shown in [square brackets].
'
'[Name]
- An optional friendly name for the instance group.'
'InstanceGroupType
- MASTER
, CORE
, or TASK
.'
'InstanceType
- The type of EC2 instance, for'
' example m4.large
,'
' to use for all nodes in the instance group.'
'InstanceCount
- The number of EC2 instances to provision in the instance group.'
'[BidPrice]
- If specified, indicates that the instance group uses Spot Instances.'
' This is the maximum price you are willing to pay for Spot Instances. Specify OnDemandPrice'
' to set the amount equal to the On-Demand price, or specify an amount in USD.'
'[EbsConfiguration]
- Specifies additional Amazon EBS storage volumes attached'
' to EC2 instances using an inline JSON structure.'
'[AutoScalingPolicy]
- Specifies an automatic scaling policy for the'
' instance group using an inline JSON structure.')
INSTANCE_FLEETS = (
'Applies only to Amazon EMR release version 5.0 and later. Specifies'
' the number and type of Amazon EC2 instances to create'
' for each node type in a cluster, using instance fleets.'
' You can specify either --instance-fleets
or'
' --instance-groups
but not both.'
' For more information and examples, see the following topic in the Amazon EMR Management Guide:
'
'https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-fleet.html
'
'You can specify arguments individually using multiple'
' InstanceFleetType
argument blocks, one for the MASTER
'
' instance fleet, one for a CORE
instance fleet,'
' and an optional TASK
instance fleet.
'
'The following arguments can be specified for each instance fleet. Optional arguments are shown in [square brackets].
'
'[Name]
- An optional friendly name for the instance fleet.'
'InstanceFleetType
- MASTER
, CORE
, or TASK
.'
'TargetOnDemandCapacity
- The target capacity of On-Demand units'
' for the instance fleet, which determines how many On-Demand Instances to provision.'
' The WeightedCapacity
specified for an instance type within'
' InstanceTypeConfigs
counts toward this total when an instance type'
' with the On-Demand purchasing option launches.'
'TargetSpotCapacity
- The target capacity of Spot units'
' for the instance fleet, which determines how many Spot Instances to provision.'
' The WeightedCapacity
specified for an instance type within'
' InstanceTypeConfigs
counts toward this total when an instance'
' type with the Spot purchasing option launches.'
'[LaunchSpecifications]
- When TargetSpotCapacity
is specified,'
' specifies the block duration and timeout action for Spot Instances.'
'InstanceTypeConfigs
- Specifies up to five EC2 instance types to'
' use in the instance fleet, including details such as Spot price and Amazon EBS configuration.')
INSTANCE_TYPE = (
'Shortcut parameter as an alternative to --instance-groups
.'
' Specifies the type of Amazon EC2 instance to use in a cluster.'
' If used without the --instance-count
parameter,'
' the cluster consists of a single master node running on the EC2 instance type'
' specified. When used together with --instance-count
,'
' one instance is used for the master node, and the remainder'
' are used for the core node type.
')
INSTANCE_COUNT = (
'Shortcut parameter as an alternative to --instance-groups
'
' when used together with --instance-type
. Specifies the'
' number of Amazon EC2 instances to create for a cluster.'
' One instance is used for the master node, and the remainder'
' are used for the core node type.
')
ADDITIONAL_INFO = (
'Specifies additional information during cluster creation.
')
EC2_ATTRIBUTES = (
'Configures cluster and Amazon EC2 instance configurations. Accepts'
' the following arguments:
'
'KeyName
- Specifies the name of the AWS EC2 key pair that will be used for'
' SSH connections to the master node and other instances on the cluster.'
'AvailabilityZone
- Specifies the availability zone in which to launch'
' the cluster. For example, us-west-1b
.'
'SubnetId
- Specifies the VPC subnet in which to create the cluster.'
'InstanceProfile
- An IAM role that allows EC2 instances to'
' access other AWS services, such as Amazon S3, that'
' are required for operations.'
'EmrManagedMasterSecurityGroup
- The security group ID of the Amazon EC2'
' security group for the master node.'
'EmrManagedSlaveSecurityGroup
- The security group ID of the Amazon EC2'
' security group for the slave nodes.'
'ServiceAccessSecurityGroup
- The security group ID of the Amazon EC2 '
'security group for Amazon EMR access to clusters in VPC private subnets.'
'AdditionalMasterSecurityGroups
- A list of additional Amazon EC2'
' security group IDs for the master node.'
'AdditionalSlaveSecurityGroups
- A list of additional Amazon EC2'
' security group IDs for the slave nodes.')
AUTO_TERMINATE = (
'Specifies whether the cluster should terminate after'
' completing all the steps. Auto termination is off by default.
')
TERMINATION_PROTECTED = (
'Specifies whether to lock the cluster to prevent the'
' Amazon EC2 instances from being terminated by API call,'
' user intervention, or an error.
')
SCALE_DOWN_BEHAVIOR = (
'Specifies the way that individual Amazon EC2 instances terminate'
' when an automatic scale-in activity occurs or an instance group is resized.
'
'Accepted values:
'
'TERMINATE_AT_TASK_COMPLETION
- Specifies that Amazon EMR'
' blacklists and drains tasks from nodes before terminating the instance.'
'TERMINATE_AT_INSTANCE_HOUR
- Specifies that Amazon EMR'
' terminate EC2 instances at the instance-hour boundary, regardless of when'
' the request to terminate was submitted.'
)
VISIBILITY = (
'Specifies whether the cluster is visible to all IAM users of'
' the AWS account associated with the cluster. If set to'
' --visible-to-all-users
, all IAM users of that AWS account'
' can view it. If they have the proper policy permissions set, they can '
' also manage the cluster. If it is set to --no-visible-to-all-users
,'
' only the IAM user that created the cluster can view and manage it. '
' Clusters are visible by default.
')
DEBUGGING = (
'Specifies that the debugging tool is enabled for the cluster,'
' which allows you to browse log files using the Amazon EMR console.'
' Turning debugging on requires that you specify --log-uri
'
' because log files must be stored in Amazon S3 so that'
' Amazon EMR can index them for viewing in the console.
')
TAGS = (
'A list of tags to associate with a cluster, which apply to'
' each Amazon EC2 instance in the cluster. Tags are key-value pairs that'
' consist of a required key string'
' with a maximum of 128 characters, and an optional value string'
' with a maximum of 256 characters.
'
'You can specify tags in key=value
format or you can add a'
' tag without a value using only the key name, for example key
.'
' Use a space to separate multiple tags.
')
BOOTSTRAP_ACTIONS = (
'Specifies a list of bootstrap actions to run on each EC2 instance when'
' a cluster is created. Bootstrap actions run on each instance'
' immediately after Amazon EMR provisions the EC2 instance and'
' before Amazon EMR installs specified applications.
'
'You can specify a bootstrap action as an inline JSON structure'
' enclosed in single quotation marks, or you can use a shorthand'
' syntax, specifying multiple bootstrap actions, each separated'
' by a space. When using the shorthand syntax, each bootstrap'
' action takes the following parameters, separated by'
' commas with no trailing space. Optional parameters'
' are shown in [square brackets].
'
'Path
- The path and file name of the script'
' to run, which must be accessible to each instance in the cluster.'
' For example, Path=s3://mybucket/myscript.sh
.'
'[Name]
- A friendly name to help you identify'
' the bootstrap action. For example, Name=BootstrapAction1
'
'[Args]
- A comma-separated list of arguments'
' to pass to the bootstrap action script. Arguments can be'
' either a list of values (Args=arg1,arg2,arg3
)'
' or a list of key-value pairs, as well as optional values,'
' enclosed in square brackets (Args=[arg1,arg2=arg2value,arg3])
.')
APPLICATIONS = (
'Specifies the applications to install on the cluster.'
' Available applications and their respective versions vary'
' by Amazon EMR release. For more information, see the'
' Amazon EMR Release Guide:
'
'https://docs.aws.amazon.com/emr/latest/ReleaseGuide/
'
'When using versions of Amazon EMR earlier than 4.0,'
' some applications take optional arguments for configuration.'
' Arguments should either be a comma-separated list of values'
' (Args=arg1,arg2,arg3
) or a bracket-enclosed list of values'
' and key-value pairs (Args=[arg1,arg2=arg3,arg4]
).
')
EMR_FS = (
'Specifies EMRFS configuration options, such as consistent view'
' and Amazon S3 encryption parameters.
'
'When you use Amazon EMR release version 4.8.0 or later, we recommend'
' that you use the --configurations
option together'
' with the emrfs-site
configuration classification'
' to configure EMRFS, and use security configurations'
' to configure encryption for EMRFS data in Amazon S3 instead.'
' For more information, see the following topic in the Amazon EMR Management Guide:
'
'https://docs.aws.amazon.com/emr/latest/ManagementGuide/emrfs-configure-consistent-view.html
')
RESTORE_FROM_HBASE = (
'Applies only when using Amazon EMR release versions earlier than 4.0.'
' Launches a new HBase cluster and populates it with'
' data from a previous backup of an HBase cluster. HBase'
' must be installed using the --applications
option.
')
STEPS = (
'Specifies a list of steps to be executed by the cluster. Steps run'
' only on the master node after applications are installed'
' and are used to submit work to a cluster. A step can be'
' specified using the shorthand syntax, by referencing a JSON file'
' or by specifying an inline JSON structure. Args
supplied with steps'
' should be a comma-separated list of values (Args=arg1,arg2,arg3
) or'
' a bracket-enclosed list of values and key-value'
' pairs (Args=[arg1,arg2=value,arg4
).
')
INSTALL_APPLICATIONS = (
'The applications to be installed.'
' Takes the following parameters: '
'Name
and Args
.
')
EBS_ROOT_VOLUME_SIZE = (
'Applies only to Amazon EMR release version 4.0 and earlier. Specifies the size,'
' in GiB, of the EBS root device volume of the Amazon Linux AMI'
' that is used for each EC2 instance in the cluster.
')
SECURITY_CONFIG = (
'Specifies the name of a security configuration to use for the cluster.'
' A security configuration defines data encryption settings and'
' other security options. For more information, see'
' the following topic in the Amazon EMR Management Guide:
'
'https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-encryption-enable-security-configuration.html
'
'Use list-security-configurations
to get a list of available'
' security configurations in the active account.
')
CUSTOM_AMI_ID = (
'Applies only to Amazon EMR release version 5.7.0 and later.'
' Specifies the AMI ID of a custom AMI to use'
' when Amazon EMR provisions EC2 instances. A custom'
' AMI can be used to encrypt the Amazon EBS root volume. It'
' can also be used instead of bootstrap actions to customize'
' cluster node configurations. For more information, see'
' the following topic in the Amazon EMR Management Guide:
'
'https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-custom-ami.html
')
REPO_UPGRADE_ON_BOOT = (
'Applies only when a --custom-ami-id
is'
' specified. On first boot, by default, Amazon Linux AMIs'
' connect to package repositories to install security updates'
' before other services start. You can set this parameter'
' using --rep-upgrade-on-boot NONE
to'
' disable these updates. CAUTION: This creates additional'
' security risks.
')
KERBEROS_ATTRIBUTES = (
'Specifies required cluster attributes for Kerberos when Kerberos authentication'
' is enabled in the specified --security-configuration
.'
' Takes the following arguments:
'
' Realm
- Specifies the name of the Kerberos'
' realm to which all nodes in a cluster belong. For example,'
' Realm=EC2.INTERNAL
.'
' KdcAdminPassword
- Specifies the password used within the cluster'
' for the kadmin service, which maintains Kerberos principals, password'
' policies, and keytabs for the cluster.'
' CrossRealmTrustPrincipalPassword
- Required when establishing a cross-realm trust'
' with a KDC in a different realm. This is the cross-realm principal password,'
' which must be identical across realms.'
' ADDomainJoinUser
- Required when establishing trust with an Active Directory'
' domain. This is the User logon name of an AD account with sufficient privileges to join resouces to the domain.'
' ADDomainJoinPassword
- The AD password for ADDomainJoinUser
.')
# end create-cluster options help descriptions
LIST_CLUSTERS_CLUSTER_STATES = (
'Specifies that only clusters in the states specified are'
' listed. Alternatively, you can use the shorthand'
' form for single states or a group of states.
'
'Takes the following state values:
'
'STARTING
'
'BOOTSTRAPPING
'
'RUNNING
'
'WAITING
'
'TERMINATING
'
'TERMINATED
'
'TERMINATED_WITH_ERRORS
')
LIST_CLUSTERS_STATE_FILTERS = (
'Shortcut options for --cluster-states. The'
' following shortcut options can be specified:
'
'--active
- list only clusters that'
' are STARTING
,BOOTSTRAPPING
,'
' RUNNING
, WAITING
, or TERMINATING
. '
'--terminated
- list only clusters that are TERMINATED
. '
'--failed
- list only clusters that are TERMINATED_WITH_ERRORS
.')
LIST_CLUSTERS_CREATED_AFTER = (
'List only those clusters created after the date and time'
' specified in the format yyyy-mm-ddThh:mm:ss. For example,'
' --created-after 2017-07-04T00:01:30.
')
LIST_CLUSTERS_CREATED_BEFORE = (
'List only those clusters created after the date and time'
' specified in the format yyyy-mm-ddThh:mm:ss. For example,'
' --created-after 2017-07-04T00:01:30.
')
EMR_MANAGED_MASTER_SECURITY_GROUP = (
'The identifier of the Amazon EC2 security group '
'for the master node.
')
EMR_MANAGED_SLAVE_SECURITY_GROUP = (
'The identifier of the Amazon EC2 security group '
'for the slave nodes.
')
SERVICE_ACCESS_SECURITY_GROUP = (
'The identifier of the Amazon EC2 security group '
'for Amazon EMR to access clusters in VPC private subnets.
')
ADDITIONAL_MASTER_SECURITY_GROUPS = (
' A list of additional Amazon EC2 security group IDs for '
'the master node
')
ADDITIONAL_SLAVE_SECURITY_GROUPS = (
'A list of additional Amazon EC2 security group IDs for '
'the slave nodes.
')
AVAILABLE_ONLY_FOR_AMI_VERSIONS = (
'This command is only available when using Amazon EMR versions'
'earlier than 4.0.')
STEP_CONCURRENCY_LEVEL = (
'This command specifies the step concurrency level of the cluster.'
'Default is 1 which is non-concurrent.'
)
awscli-1.17.14/awscli/customizations/emr/command.py 0000644 0000000 0000000 00000012502 13620325554 022242 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.commands import BasicCommand
from awscli.customizations.emr import config
from awscli.customizations.emr import configutils
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
LOG = logging.getLogger(__name__)
class Command(BasicCommand):
region = None
UNSUPPORTED_COMMANDS_FOR_RELEASE_BASED_CLUSTERS = set([
'install-applications',
'restore-from-hbase-backup',
'schedule-hbase-backup',
'create-hbase-backup',
'disable-hbase-backups',
])
def supports_arg(self, name):
return any((x['name'] == name for x in self.ARG_TABLE))
def _run_main(self, parsed_args, parsed_globals):
self._apply_configs(parsed_args,
configutils.get_configs(self._session))
self.region = emrutils.get_region(self._session, parsed_globals)
self._validate_unsupported_commands_for_release_based_clusters(
parsed_args, parsed_globals)
return self._run_main_command(parsed_args, parsed_globals)
def _apply_configs(self, parsed_args, parsed_configs):
applicable_configurations = \
self._get_applicable_configurations(parsed_args, parsed_configs)
configs_added = {}
for configuration in applicable_configurations:
configuration.add(self, parsed_args,
parsed_configs[configuration.name])
configs_added[configuration.name] = \
parsed_configs[configuration.name]
if configs_added:
LOG.debug("Updated arguments with configs: %s" % configs_added)
else:
LOG.debug("No configs applied")
LOG.debug("Running command with args: %s" % parsed_args)
def _get_applicable_configurations(self, parsed_args, parsed_configs):
# We need to find the applicable configurations by applying
# following filters:
# 1. Configurations that are applicable to this command
# 3. Configurations that are present in parsed_configs
# 2. Configurations that are not present in parsed_args
configurations = \
config.get_applicable_configurations(self)
configurations = [x for x in configurations
if x.name in parsed_configs and
not x.is_present(parsed_args)]
configurations = self._filter_configurations_in_special_cases(
configurations, parsed_args, parsed_configs)
return configurations
def _filter_configurations_in_special_cases(self, configurations,
parsed_args, parsed_configs):
# Subclasses can override this method to filter the applicable
# configurations further based upon some custom logic
# Default behavior is to return the configurations list as is
return configurations
def _run_main_command(self, parsed_args, parsed_globals):
# Subclasses should implement this method.
# parsed_globals are the parsed global args (things like region,
# profile, output, etc.)
# parsed_args are any arguments you've defined in your ARG_TABLE
# that are parsed.
# parsed_args are updated to include any emr specific configuration
# from the config file if the corresponding argument is not
# explicitly specified on the CLI
raise NotImplementedError("_run_main_command")
def _validate_unsupported_commands_for_release_based_clusters(
self, parsed_args, parsed_globals):
command = self.NAME
if (command in self.UNSUPPORTED_COMMANDS_FOR_RELEASE_BASED_CLUSTERS and
hasattr(parsed_args, 'cluster_id')):
release_label = emrutils.get_release_label(
parsed_args.cluster_id, self._session, self.region,
parsed_globals.endpoint_url, parsed_globals.verify_ssl)
if release_label:
raise exceptions.UnsupportedCommandWithReleaseError(
command=command,
release_label=release_label)
def override_args_required_option(argument_table, args, session, **kwargs):
# This function overrides the 'required' property of an argument
# if a value corresponding to that argument is present in the config
# file
# We don't want to override when user is viewing the help so that we
# can show the required options correctly in the help
need_to_override = False if len(args) == 1 and args[0] == 'help' \
else True
if need_to_override:
parsed_configs = configutils.get_configs(session)
for arg_name in argument_table.keys():
if arg_name.replace('-', '_') in parsed_configs:
argument_table[arg_name].required = False
awscli-1.17.14/awscli/customizations/emr/config.py 0000644 0000000 0000000 00000011231 13620325554 022067 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.emr import configutils
from awscli.customizations.emr import exceptions
LOG = logging.getLogger(__name__)
SUPPORTED_CONFIG_LIST = [
{'name': 'service_role'},
{'name': 'log_uri'},
{'name': 'instance_profile', 'arg_name': 'ec2_attributes',
'arg_value_key': 'InstanceProfile'},
{'name': 'key_name', 'arg_name': 'ec2_attributes',
'arg_value_key': 'KeyName'},
{'name': 'enable_debugging', 'type': 'boolean'},
{'name': 'key_pair_file'}
]
TYPES = ['string', 'boolean']
def get_applicable_configurations(command):
supported_configurations = _create_supported_configurations()
return [x for x in supported_configurations if x.is_applicable(command)]
def _create_supported_configuration(config):
config_type = config['type'] if 'type' in config else 'string'
if (config_type == 'string'):
config_arg_name = config['arg_name'] \
if 'arg_name' in config else config['name']
config_arg_value_key = config['arg_value_key'] \
if 'arg_value_key' in config else None
configuration = StringConfiguration(config['name'],
config_arg_name,
config_arg_value_key)
elif (config_type == 'boolean'):
configuration = BooleanConfiguration(config['name'])
return configuration
def _create_supported_configurations():
return [_create_supported_configuration(config)
for config in SUPPORTED_CONFIG_LIST]
class Configuration(object):
def __init__(self, name, arg_name):
self.name = name
self.arg_name = arg_name
def is_applicable(self, command):
raise NotImplementedError("is_applicable")
def is_present(self, parsed_args):
raise NotImplementedError("is_present")
def add(self, command, parsed_args, value):
raise NotImplementedError("add")
def _check_arg(self, parsed_args, arg_name):
return getattr(parsed_args, arg_name, None)
class StringConfiguration(Configuration):
def __init__(self, name, arg_name, arg_value_key=None):
super(StringConfiguration, self).__init__(name, arg_name)
self.arg_value_key = arg_value_key
def is_applicable(self, command):
return command.supports_arg(self.arg_name.replace('_', '-'))
def is_present(self, parsed_args):
if (not self.arg_value_key):
return self._check_arg(parsed_args, self.arg_name)
else:
return self._check_arg(parsed_args, self.arg_name) \
and self.arg_value_key in getattr(parsed_args, self.arg_name)
def add(self, command, parsed_args, value):
if (not self.arg_value_key):
setattr(parsed_args, self.arg_name, value)
else:
if (not self._check_arg(parsed_args, self.arg_name)):
setattr(parsed_args, self.arg_name, {})
getattr(parsed_args, self.arg_name)[self.arg_value_key] = value
class BooleanConfiguration(Configuration):
def __init__(self, name):
super(BooleanConfiguration, self).__init__(name, name)
self.no_version_arg_name = "no_" + name
def is_applicable(self, command):
return command.supports_arg(self.arg_name.replace('_', '-')) and \
command.supports_arg(self.no_version_arg_name.replace('_', '-'))
def is_present(self, parsed_args):
return self._check_arg(parsed_args, self.arg_name) \
or self._check_arg(parsed_args, self.no_version_arg_name)
def add(self, command, parsed_args, value):
if (value.lower() == 'true'):
setattr(parsed_args, self.arg_name, True)
setattr(parsed_args, self.no_version_arg_name, False)
elif (value.lower() == 'false'):
setattr(parsed_args, self.arg_name, False)
setattr(parsed_args, self.no_version_arg_name, True)
else:
raise exceptions.InvalidBooleanConfigError(
config_value=value,
config_key=self.arg_name,
profile_var_name=configutils.get_current_profile_var_name(
command._session))
awscli-1.17.14/awscli/customizations/emr/addtags.py 0000644 0000000 0000000 00000002101 13620325554 022225 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.arguments import CustomArgument
from awscli.customizations.emr import helptext
from awscli.customizations.emr import emrutils
def modify_tags_argument(argument_table, **kwargs):
argument_table['tags'] = TagsArgument('tags', required=True,
help_text=helptext.TAGS, nargs='+')
class TagsArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
parameters['Tags'] = emrutils.parse_tags(value)
awscli-1.17.14/awscli/customizations/emr/sshutils.py 0000644 0000000 0000000 00000006432 13620325554 022507 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import constants
from botocore.exceptions import WaiterError
LOG = logging.getLogger(__name__)
def validate_and_find_master_dns(session, parsed_globals, cluster_id):
"""
Utility method for ssh, socks, put and get command.
Check if the cluster to be connected to is
terminated or being terminated.
Check if the cluster is running.
Find master instance public dns of a given cluster.
Return the latest created master instance public dns name.
Throw MasterDNSNotAvailableError or ClusterTerminatedError.
"""
cluster_state = emrutils.get_cluster_state(
session, parsed_globals, cluster_id)
if cluster_state in constants.TERMINATED_STATES:
raise exceptions.ClusterTerminatedError
emr = emrutils.get_client(session, parsed_globals)
try:
cluster_running_waiter = emr.get_waiter('cluster_running')
if cluster_state in constants.STARTING_STATES:
print("Waiting for the cluster to start.")
cluster_running_waiter.wait(ClusterId=cluster_id)
except WaiterError:
raise exceptions.MasterDNSNotAvailableError
return emrutils.find_master_dns(
session=session, cluster_id=cluster_id,
parsed_globals=parsed_globals)
def validate_ssh_with_key_file(key_file):
if (emrutils.which('putty.exe') or emrutils.which('ssh') or
emrutils.which('ssh.exe')) is None:
raise exceptions.SSHNotFoundError
else:
check_ssh_key_format(key_file)
def validate_scp_with_key_file(key_file):
if (emrutils.which('pscp.exe') or emrutils.which('scp') or
emrutils.which('scp.exe')) is None:
raise exceptions.SCPNotFoundError
else:
check_scp_key_format(key_file)
def check_scp_key_format(key_file):
# If only pscp is present and the file format is incorrect
if (emrutils.which('pscp.exe') is not None and
(emrutils.which('scp.exe') or emrutils.which('scp')) is None):
if check_command_key_format(key_file, ['ppk']) is False:
raise exceptions.WrongPuttyKeyError
else:
pass
def check_ssh_key_format(key_file):
# If only putty is present and the file format is incorrect
if (emrutils.which('putty.exe') is not None and
(emrutils.which('ssh.exe') or emrutils.which('ssh')) is None):
if check_command_key_format(key_file, ['ppk']) is False:
raise exceptions.WrongPuttyKeyError
else:
pass
def check_command_key_format(key_file, accepted_file_format=[]):
if any(key_file.endswith(i) for i in accepted_file_format):
return True
else:
return False
awscli-1.17.14/awscli/customizations/emr/ssh.py 0000644 0000000 0000000 00000017063 13620325554 021430 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
import subprocess
import tempfile
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import sshutils
from awscli.customizations.emr.command import Command
KEY_PAIR_FILE_HELP_TEXT = '\nA value for the variable Key Pair File ' \
'can be set in the AWS CLI config file using the ' \
'"aws configure set emr.key_pair_file " command.\n'
class Socks(Command):
NAME = 'socks'
DESCRIPTION = ('Create a socks tunnel on port 8157 from your machine '
'to the master.\n%s' % KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to ssh into'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
]
def _run_main_command(self, parsed_args, parsed_globals):
try:
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_ssh_with_key_file(key_file)
f = tempfile.NamedTemporaryFile(delete=False)
if (emrutils.which('ssh') or emrutils.which('ssh.exe')):
command = ['ssh', '-o', 'StrictHostKeyChecking=no', '-o',
'ServerAliveInterval=10', '-ND', '8157', '-i',
parsed_args.key_pair_file, constants.SSH_USER +
'@' + master_dns]
else:
command = ['putty', '-ssh', '-i', parsed_args.key_pair_file,
constants.SSH_USER + '@' + master_dns, '-N', '-D',
'8157']
print(' '.join(command))
rc = subprocess.call(command)
return rc
except KeyboardInterrupt:
print('Disabling Socks Tunnel.')
return 0
class SSH(Command):
NAME = 'ssh'
DESCRIPTION = ('SSH into master node of the cluster.\n%s' %
KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to ssh into'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
{'name': 'command', 'help_text': 'Command to execute on Master Node'}
]
def _run_main_command(self, parsed_args, parsed_globals):
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_ssh_with_key_file(key_file)
f = tempfile.NamedTemporaryFile(delete=False)
if (emrutils.which('ssh') or emrutils.which('ssh.exe')):
command = ['ssh', '-o', 'StrictHostKeyChecking=no', '-o',
'ServerAliveInterval=10', '-i',
parsed_args.key_pair_file, constants.SSH_USER +
'@' + master_dns, '-t']
if parsed_args.command:
command.append(parsed_args.command)
else:
command = ['putty', '-ssh', '-i', parsed_args.key_pair_file,
constants.SSH_USER + '@' + master_dns, '-t']
if parsed_args.command:
f.write(parsed_args.command)
f.write('\nread -n1 -r -p "Command completed. Press any key."')
command.append('-m')
command.append(f.name)
f.close()
print(' '.join(command))
rc = subprocess.call(command)
os.remove(f.name)
return rc
class Put(Command):
NAME = 'put'
DESCRIPTION = ('Put file onto the master node.\n%s' %
KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to put file onto'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
{'name': 'src', 'required': True,
'help_text': 'Source file path on local machine'},
{'name': 'dest', 'help_text': 'Destination file path on remote host'}
]
def _run_main_command(self, parsed_args, parsed_globals):
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_scp_with_key_file(key_file)
if (emrutils.which('scp') or emrutils.which('scp.exe')):
command = ['scp', '-r', '-o StrictHostKeyChecking=no',
'-i', parsed_args.key_pair_file, parsed_args.src,
constants.SSH_USER + '@' + master_dns]
else:
command = ['pscp', '-scp', '-r', '-i', parsed_args.key_pair_file,
parsed_args.src, constants.SSH_USER + '@' + master_dns]
# if the instance is not terminated
if parsed_args.dest:
command[-1] = command[-1] + ":" + parsed_args.dest
else:
command[-1] = command[-1] + ":" + parsed_args.src.split('/')[-1]
print(' '.join(command))
rc = subprocess.call(command)
return rc
class Get(Command):
NAME = 'get'
DESCRIPTION = ('Get file from master node.\n%s' % KEY_PAIR_FILE_HELP_TEXT)
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': 'Cluster Id of cluster you want to get file from'},
{'name': 'key-pair-file', 'required': True,
'help_text': 'Private key file to use for login'},
{'name': 'src', 'required': True,
'help_text': 'Source file path on remote host'},
{'name': 'dest', 'help_text': 'Destination file path on your machine'}
]
def _run_main_command(self, parsed_args, parsed_globals):
master_dns = sshutils.validate_and_find_master_dns(
session=self._session,
parsed_globals=parsed_globals,
cluster_id=parsed_args.cluster_id)
key_file = parsed_args.key_pair_file
sshutils.validate_scp_with_key_file(key_file)
if (emrutils.which('scp') or emrutils.which('scp.exe')):
command = ['scp', '-r', '-o StrictHostKeyChecking=no', '-i',
parsed_args.key_pair_file, constants.SSH_USER + '@' +
master_dns + ':' + parsed_args.src]
else:
command = ['pscp', '-scp', '-r', '-i', parsed_args.key_pair_file,
constants.SSH_USER + '@' + master_dns + ':' +
parsed_args.src]
if parsed_args.dest:
command.append(parsed_args.dest)
else:
command.append(parsed_args.src.split('/')[-1])
print(' '.join(command))
rc = subprocess.call(command)
return rc
awscli-1.17.14/awscli/customizations/emr/emr.py 0000644 0000000 0000000 00000006271 13620325554 021415 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import hbase
from awscli.customizations.emr import ssh
from awscli.customizations.emr.addsteps import AddSteps
from awscli.customizations.emr.createcluster import CreateCluster
from awscli.customizations.emr.addinstancegroups import AddInstanceGroups
from awscli.customizations.emr.createdefaultroles import CreateDefaultRoles
from awscli.customizations.emr.modifyclusterattributes import ModifyClusterAttr
from awscli.customizations.emr.installapplications import InstallApplications
from awscli.customizations.emr.describecluster import DescribeCluster
from awscli.customizations.emr.terminateclusters import TerminateClusters
from awscli.customizations.emr.addtags import modify_tags_argument
from awscli.customizations.emr.listclusters \
import modify_list_clusters_argument
from awscli.customizations.emr.command import override_args_required_option
def emr_initialize(cli):
"""
The entry point for EMR high level commands.
"""
cli.register('building-command-table.emr', register_commands)
cli.register('building-argument-table.emr.add-tags', modify_tags_argument)
cli.register(
'building-argument-table.emr.list-clusters',
modify_list_clusters_argument)
cli.register('before-building-argument-table-parser.emr.*',
override_args_required_option)
def register_commands(command_table, session, **kwargs):
"""
Called when the EMR command table is being built. Used to inject new
high level commands into the command list. These high level commands
must not collide with existing low-level API call names.
"""
command_table['terminate-clusters'] = TerminateClusters(session)
command_table['describe-cluster'] = DescribeCluster(session)
command_table['modify-cluster-attributes'] = ModifyClusterAttr(session)
command_table['install-applications'] = InstallApplications(session)
command_table['create-cluster'] = CreateCluster(session)
command_table['add-steps'] = AddSteps(session)
command_table['restore-from-hbase-backup'] = \
hbase.RestoreFromHBaseBackup(session)
command_table['create-hbase-backup'] = hbase.CreateHBaseBackup(session)
command_table['schedule-hbase-backup'] = hbase.ScheduleHBaseBackup(session)
command_table['disable-hbase-backups'] = \
hbase.DisableHBaseBackups(session)
command_table['create-default-roles'] = CreateDefaultRoles(session)
command_table['add-instance-groups'] = AddInstanceGroups(session)
command_table['ssh'] = ssh.SSH(session)
command_table['socks'] = ssh.Socks(session)
command_table['get'] = ssh.Get(session)
command_table['put'] = ssh.Put(session)
awscli-1.17.14/awscli/customizations/emr/emrfsutils.py 0000644 0000000 0000000 00000021622 13620325554 023024 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from botocore.compat import OrderedDict
CONSISTENT_OPTIONAL_KEYS = ['RetryCount', 'RetryPeriod']
CSE_KMS_REQUIRED_KEYS = ['KMSKeyId']
CSE_CUSTOM_REQUIRED_KEYS = ['CustomProviderLocation', 'CustomProviderClass']
CSE_PROVIDER_TYPES = [constants.EMRFS_KMS, constants.EMRFS_CUSTOM]
ENCRYPTION_TYPES = [constants.EMRFS_CLIENT_SIDE, constants.EMRFS_SERVER_SIDE]
CONSISTENT_OPTION_NAME = "--emrfs Consistent=true/false"
CSE_OPTION_NAME = '--emrfs Encryption=ClientSide'
CSE_KMS_OPTION_NAME = '--emrfs Encryption=ClientSide,ProviderType=KMS'
CSE_CUSTOM_OPTION_NAME = '--emrfs Encryption=ClientSide,ProviderType=Custom'
def build_bootstrap_action_configs(region, emrfs_args):
bootstrap_actions = []
_verify_emrfs_args(emrfs_args)
if _need_to_configure_cse(emrfs_args, 'CUSTOM'):
# Download custom encryption provider from Amazon S3 to EMR Cluster
bootstrap_actions.append(
emrutils.build_bootstrap_action(
path=constants.EMRFS_CSE_CUSTOM_S3_GET_BA_PATH,
name=constants.S3_GET_BA_NAME,
args=[constants.S3_GET_BA_SRC,
emrfs_args.get('CustomProviderLocation'),
constants.S3_GET_BA_DEST,
constants.EMRFS_CUSTOM_DEST_PATH,
constants.S3_GET_BA_FORCE]))
emrfs_setup_ba_args = _build_ba_args_to_setup_emrfs(emrfs_args)
bootstrap_actions.append(
emrutils.build_bootstrap_action(
path=emrutils.build_s3_link(
relative_path=constants.CONFIG_HADOOP_PATH,
region=region),
name=constants.EMRFS_BA_NAME,
args=emrfs_setup_ba_args))
return bootstrap_actions
def build_emrfs_confiuration(emrfs_args):
_verify_emrfs_args(emrfs_args)
emrfs_properties = _build_emrfs_properties(emrfs_args)
if _need_to_configure_cse(emrfs_args, 'CUSTOM'):
emrfs_properties[constants.EMRFS_CSE_CUSTOM_PROVIDER_URI_KEY] = \
emrfs_args.get('CustomProviderLocation')
emrfs_configuration = {
'Classification': constants.EMRFS_SITE,
'Properties': emrfs_properties}
return emrfs_configuration
def _verify_emrfs_args(emrfs_args):
# Encryption should have a valid value
if 'Encryption' in emrfs_args \
and emrfs_args['Encryption'].upper() not in ENCRYPTION_TYPES:
raise exceptions.UnknownEncryptionTypeError(
encryption=emrfs_args['Encryption'])
# Only one of SSE and Encryption should be configured
if 'SSE' in emrfs_args and 'Encryption' in emrfs_args:
raise exceptions.BothSseAndEncryptionConfiguredError(
sse=emrfs_args['SSE'], encryption=emrfs_args['Encryption'])
# CSE should be configured correctly
# ProviderType should be present and should have valid value
# Given the type, the required parameters should be present
if ('Encryption' in emrfs_args and
emrfs_args['Encryption'].upper() == constants.EMRFS_CLIENT_SIDE):
if 'ProviderType' not in emrfs_args:
raise exceptions.MissingParametersError(
object_name=CSE_OPTION_NAME, missing='ProviderType')
elif emrfs_args['ProviderType'].upper() not in CSE_PROVIDER_TYPES:
raise exceptions.UnknownCseProviderTypeError(
provider_type=emrfs_args['ProviderType'])
elif emrfs_args['ProviderType'].upper() == 'KMS':
_verify_required_args(emrfs_args.keys(), CSE_KMS_REQUIRED_KEYS,
CSE_KMS_OPTION_NAME)
elif emrfs_args['ProviderType'].upper() == 'CUSTOM':
_verify_required_args(emrfs_args.keys(), CSE_CUSTOM_REQUIRED_KEYS,
CSE_CUSTOM_OPTION_NAME)
# No child attributes should be present if the parent feature is not
# configured
if 'Consistent' not in emrfs_args:
_verify_child_args(emrfs_args.keys(), CONSISTENT_OPTIONAL_KEYS,
CONSISTENT_OPTION_NAME)
if not _need_to_configure_cse(emrfs_args, 'KMS'):
_verify_child_args(emrfs_args.keys(), CSE_KMS_REQUIRED_KEYS,
CSE_KMS_OPTION_NAME)
if not _need_to_configure_cse(emrfs_args, 'CUSTOM'):
_verify_child_args(emrfs_args.keys(), CSE_CUSTOM_REQUIRED_KEYS,
CSE_CUSTOM_OPTION_NAME)
def _verify_required_args(actual_keys, required_keys, object_name):
if any(x not in actual_keys for x in required_keys):
missing_keys = list(
sorted(set(required_keys).difference(set(actual_keys))))
raise exceptions.MissingParametersError(
object_name=object_name, missing=emrutils.join(missing_keys))
def _verify_child_args(actual_keys, child_keys, parent_object_name):
if any(x in actual_keys for x in child_keys):
invalid_keys = list(
sorted(set(child_keys).intersection(set(actual_keys))))
raise exceptions.InvalidEmrFsArgumentsError(
invalid=emrutils.join(invalid_keys),
parent_object_name=parent_object_name)
def _build_ba_args_to_setup_emrfs(emrfs_args):
emrfs_properties = _build_emrfs_properties(emrfs_args)
return _create_ba_args(emrfs_properties)
def _build_emrfs_properties(emrfs_args):
"""
Assumption: emrfs_args is valid i.e. all required attributes are present
"""
emrfs_properties = OrderedDict()
if _need_to_configure_consistent_view(emrfs_args):
_update_properties_for_consistent_view(emrfs_properties, emrfs_args)
if _need_to_configure_sse(emrfs_args):
_update_properties_for_sse(emrfs_properties, emrfs_args)
if _need_to_configure_cse(emrfs_args, 'KMS'):
_update_properties_for_cse(emrfs_properties, emrfs_args, 'KMS')
if _need_to_configure_cse(emrfs_args, 'CUSTOM'):
_update_properties_for_cse(emrfs_properties, emrfs_args, 'CUSTOM')
if 'Args' in emrfs_args:
for arg_value in emrfs_args.get('Args'):
key, value = emrutils.split_to_key_value(arg_value)
emrfs_properties[key] = value
return emrfs_properties
def _need_to_configure_consistent_view(emrfs_args):
return 'Consistent' in emrfs_args
def _need_to_configure_sse(emrfs_args):
return 'SSE' in emrfs_args \
or ('Encryption' in emrfs_args and
emrfs_args['Encryption'].upper() == constants.EMRFS_SERVER_SIDE)
def _need_to_configure_cse(emrfs_args, cse_type):
return ('Encryption' in emrfs_args and
emrfs_args['Encryption'].upper() == constants.EMRFS_CLIENT_SIDE and
'ProviderType' in emrfs_args and
emrfs_args['ProviderType'].upper() == cse_type)
def _update_properties_for_consistent_view(emrfs_properties, emrfs_args):
emrfs_properties[constants.EMRFS_CONSISTENT_KEY] = \
str(emrfs_args['Consistent']).lower()
if 'RetryCount' in emrfs_args:
emrfs_properties[constants.EMRFS_RETRY_COUNT_KEY] = \
str(emrfs_args['RetryCount'])
if 'RetryPeriod' in emrfs_args:
emrfs_properties[constants.EMRFS_RETRY_PERIOD_KEY] = \
str(emrfs_args['RetryPeriod'])
def _update_properties_for_sse(emrfs_properties, emrfs_args):
sse_value = emrfs_args['SSE'] if 'SSE' in emrfs_args else True
# if 'SSE' is not in emrfs_args then 'Encryption' must be 'ServerSide'
emrfs_properties[constants.EMRFS_SSE_KEY] = str(sse_value).lower()
def _update_properties_for_cse(emrfs_properties, emrfs_args, cse_type):
emrfs_properties[constants.EMRFS_CSE_KEY] = 'true'
if cse_type == 'KMS':
emrfs_properties[
constants.EMRFS_CSE_ENCRYPTION_MATERIALS_PROVIDER_KEY] = \
constants.EMRFS_CSE_KMS_PROVIDER_FULL_CLASS_NAME
emrfs_properties[constants.EMRFS_CSE_KMS_KEY_ID_KEY] =\
emrfs_args['KMSKeyId']
elif cse_type == 'CUSTOM':
emrfs_properties[
constants.EMRFS_CSE_ENCRYPTION_MATERIALS_PROVIDER_KEY] = \
emrfs_args['CustomProviderClass']
def _update_emrfs_ba_args(ba_args, key_value):
ba_args.append(constants.EMRFS_BA_ARG_KEY)
ba_args.append(key_value)
def _create_ba_args(emrfs_properties):
ba_args = []
for key, value in emrfs_properties.items():
key_value = key
if value:
key_value = key_value + "=" + value
_update_emrfs_ba_args(ba_args, key_value)
return ba_args
awscli-1.17.14/awscli/customizations/emr/listclusters.py 0000644 0000000 0000000 00000007277 13620325554 023401 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.arguments import CustomArgument
from awscli.customizations.emr import helptext
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import constants
def modify_list_clusters_argument(argument_table, **kwargs):
argument_table['cluster-states'] = \
ClusterStatesArgument(
name='cluster-states',
help_text=helptext.LIST_CLUSTERS_CLUSTER_STATES,
nargs='+')
argument_table['active'] = \
ActiveStateArgument(
name='active', help_text=helptext.LIST_CLUSTERS_STATE_FILTERS,
action='store_true', group_name='states_filter')
argument_table['terminated'] = \
TerminatedStateArgument(
name='terminated',
action='store_true', group_name='states_filter')
argument_table['failed'] = \
FailedStateArgument(
name='failed', action='store_true', group_name='states_filter')
argument_table['created-before'] = CreatedBefore(
name='created-before', help_text=helptext.LIST_CLUSTERS_CREATED_BEFORE,
cli_type_name='timestamp')
argument_table['created-after'] = CreatedAfter(
name='created-after', help_text=helptext.LIST_CLUSTERS_CREATED_AFTER,
cli_type_name='timestamp')
class ClusterStatesArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is not None:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = value
class ActiveStateArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = constants.LIST_CLUSTERS_ACTIVE_STATES
class TerminatedStateArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = \
constants.LIST_CLUSTERS_TERMINATED_STATES
class FailedStateArgument(CustomArgument):
def add_to_params(self, parameters, value):
if value is True:
if (parameters.get('ClusterStates') is not None and
len(parameters.get('ClusterStates')) > 0):
raise exceptions.ClusterStatesFilterValidationError()
parameters['ClusterStates'] = constants.LIST_CLUSTERS_FAILED_STATES
class CreatedBefore(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
parameters['CreatedBefore'] = value
class CreatedAfter(CustomArgument):
def add_to_params(self, parameters, value):
if value is None:
return
parameters['CreatedAfter'] = value
awscli-1.17.14/awscli/customizations/emr/instancefleetsutils.py 0000644 0000000 0000000 00000004657 13620325554 024730 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
def validate_and_build_instance_fleets(parsed_instance_fleets):
"""
Helper method that converts --instance-fleets option value in
create-cluster to Amazon Elastic MapReduce InstanceFleetConfig
data type.
"""
instance_fleets = []
for instance_fleet in parsed_instance_fleets:
instance_fleet_config = {}
keys = instance_fleet.keys()
if 'Name' in keys:
instance_fleet_config['Name'] = instance_fleet['Name']
else:
instance_fleet_config['Name'] = instance_fleet['InstanceFleetType']
instance_fleet_config['InstanceFleetType'] = instance_fleet['InstanceFleetType']
if 'TargetOnDemandCapacity' in keys:
instance_fleet_config['TargetOnDemandCapacity'] = instance_fleet['TargetOnDemandCapacity']
if 'TargetSpotCapacity' in keys:
instance_fleet_config['TargetSpotCapacity'] = instance_fleet['TargetSpotCapacity']
if 'InstanceTypeConfigs' in keys:
if 'TargetSpotCapacity' in keys:
for instance_type_config in instance_fleet['InstanceTypeConfigs']:
instance_type_config_keys = instance_type_config.keys()
instance_fleet_config['InstanceTypeConfigs'] = instance_fleet['InstanceTypeConfigs']
if 'LaunchSpecifications' in keys:
instanceFleetProvisioningSpecifications = instance_fleet['LaunchSpecifications']
instance_fleet_config['LaunchSpecifications'] = {}
if 'SpotSpecification' in instanceFleetProvisioningSpecifications:
instance_fleet_config['LaunchSpecifications']['SpotSpecification'] = \
instanceFleetProvisioningSpecifications['SpotSpecification']
instance_fleets.append(instance_fleet_config)
return instance_fleets
awscli-1.17.14/awscli/customizations/emr/hbaseutils.py 0000644 0000000 0000000 00000001671 13620325554 022774 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
def build_hbase_restore_from_backup_args(dir, backup_version=None):
args = [constants.HBASE_MAIN,
constants.HBASE_RESTORE,
constants.HBASE_BACKUP_DIR, dir]
if backup_version is not None:
args.append(constants.HBASE_BACKUP_VERSION_FOR_RESTORE)
args.append(backup_version)
return args
awscli-1.17.14/awscli/customizations/emr/constants.py 0000644 0000000 0000000 00000014741 13620325554 022647 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# Declare all the constants used by EMR in this file.
EC2_ROLE_NAME = "EMR_EC2_DefaultRole"
EMR_ROLE_NAME = "EMR_DefaultRole"
EMR_AUTOSCALING_ROLE_NAME = "EMR_AutoScaling_DefaultRole"
ROLE_ARN_PATTERN = "arn:{{region_suffix}}:iam::aws:policy/service-role/{{policy_name}}"
EC2_ROLE_POLICY_NAME = "AmazonElasticMapReduceforEC2Role"
EMR_ROLE_POLICY_NAME = "AmazonElasticMapReduceRole"
EMR_AUTOSCALING_ROLE_POLICY_NAME = "AmazonElasticMapReduceforAutoScalingRole"
# Action on failure
CONTINUE = 'CONTINUE'
CANCEL_AND_WAIT = 'CANCEL_AND_WAIT'
TERMINATE_CLUSTER = 'TERMINATE_CLUSTER'
DEFAULT_FAILURE_ACTION = CONTINUE
# Market type
SPOT = 'SPOT'
ON_DEMAND = 'ON_DEMAND'
SCRIPT_RUNNER_PATH = '/libs/script-runner/script-runner.jar'
COMMAND_RUNNER = 'command-runner.jar'
DEBUGGING_PATH = '/libs/state-pusher/0.1/fetch'
DEBUGGING_COMMAND = 'state-pusher-script'
DEBUGGING_NAME = 'Setup Hadoop Debugging'
CONFIG_HADOOP_PATH = '/bootstrap-actions/configure-hadoop'
# S3 copy bootstrap action
S3_GET_BA_NAME = 'S3 get'
S3_GET_BA_SRC = '-s'
S3_GET_BA_DEST = '-d'
S3_GET_BA_FORCE = '-f'
# EMRFS
EMRFS_BA_NAME = 'Setup EMRFS'
EMRFS_BA_ARG_KEY = '-e'
EMRFS_CONSISTENT_KEY = 'fs.s3.consistent'
EMRFS_SSE_KEY = 'fs.s3.enableServerSideEncryption'
EMRFS_RETRY_COUNT_KEY = 'fs.s3.consistent.retryCount'
EMRFS_RETRY_PERIOD_KEY = 'fs.s3.consistent.retryPeriodSeconds'
EMRFS_CSE_KEY = 'fs.s3.cse.enabled'
EMRFS_CSE_KMS_KEY_ID_KEY = 'fs.s3.cse.kms.keyId'
EMRFS_CSE_ENCRYPTION_MATERIALS_PROVIDER_KEY = \
'fs.s3.cse.encryptionMaterialsProvider'
EMRFS_CSE_CUSTOM_PROVIDER_URI_KEY = 'fs.s3.cse.encryptionMaterialsProvider.uri'
EMRFS_CSE_KMS_PROVIDER_FULL_CLASS_NAME = ('com.amazon.ws.emr.hadoop.fs.cse.'
'KMSEncryptionMaterialsProvider')
EMRFS_CSE_CUSTOM_S3_GET_BA_PATH = 'file:/usr/share/aws/emr/scripts/s3get'
EMRFS_CUSTOM_DEST_PATH = '/usr/share/aws/emr/auxlib'
EMRFS_SERVER_SIDE = 'SERVERSIDE'
EMRFS_CLIENT_SIDE = 'CLIENTSIDE'
EMRFS_KMS = 'KMS'
EMRFS_CUSTOM = 'CUSTOM'
EMRFS_SITE = 'emrfs-site'
MAX_BOOTSTRAP_ACTION_NUMBER = 16
BOOTSTRAP_ACTION_NAME = 'Bootstrap action'
HIVE_BASE_PATH = '/libs/hive'
HIVE_SCRIPT_PATH = '/libs/hive/hive-script'
HIVE_SCRIPT_COMMAND = 'hive-script'
PIG_BASE_PATH = '/libs/pig'
PIG_SCRIPT_PATH = '/libs/pig/pig-script'
PIG_SCRIPT_COMMAND = 'pig-script'
GANGLIA_INSTALL_BA_PATH = '/bootstrap-actions/install-ganglia'
# HBase
HBASE_INSTALL_BA_PATH = '/bootstrap-actions/setup-hbase'
HBASE_PATH_HADOOP1_INSTALL_JAR = '/home/hadoop/lib/hbase-0.92.0.jar'
HBASE_PATH_HADOOP2_INSTALL_JAR = '/home/hadoop/lib/hbase.jar'
HBASE_INSTALL_ARG = ['emr.hbase.backup.Main', '--start-master']
HBASE_JAR_PATH = '/home/hadoop/lib/hbase.jar'
HBASE_MAIN = 'emr.hbase.backup.Main'
# HBase commands
HBASE_RESTORE = '--restore'
HBASE_BACKUP_DIR_FOR_RESTORE = '--backup-dir-to-restore'
HBASE_BACKUP_VERSION_FOR_RESTORE = '--backup-version'
HBASE_BACKUP = '--backup'
HBASE_SCHEDULED_BACKUP = '--set-scheduled-backup'
HBASE_BACKUP_DIR = '--backup-dir'
HBASE_INCREMENTAL_BACKUP_INTERVAL = '--incremental-backup-time-interval'
HBASE_INCREMENTAL_BACKUP_INTERVAL_UNIT = '--incremental-backup-time-unit'
HBASE_FULL_BACKUP_INTERVAL = '--full-backup-time-interval'
HBASE_FULL_BACKUP_INTERVAL_UNIT = '--full-backup-time-unit'
HBASE_DISABLE_FULL_BACKUP = '--disable-full-backups'
HBASE_DISABLE_INCREMENTAL_BACKUP = '--disable-incremental-backups'
HBASE_BACKUP_STARTTIME = '--start-time'
HBASE_BACKUP_CONSISTENT = '--consistent'
HBASE_BACKUP_STEP_NAME = 'Backup HBase'
HBASE_RESTORE_STEP_NAME = 'Restore HBase'
HBASE_SCHEDULE_BACKUP_STEP_NAME = 'Modify Backup Schedule'
IMPALA_INSTALL_PATH = '/libs/impala/setup-impala'
# Step
HADOOP_STREAMING_PATH = '/home/hadoop/contrib/streaming/hadoop-streaming.jar'
HADOOP_STREAMING_COMMAND = 'hadoop-streaming'
CUSTOM_JAR = 'custom_jar'
HIVE = 'hive'
PIG = 'pig'
IMPALA = 'impala'
STREAMING = 'streaming'
GANGLIA = 'ganglia'
HBASE = 'hbase'
SPARK = 'spark'
DEFAULT_CUSTOM_JAR_STEP_NAME = 'Custom JAR'
DEFAULT_STREAMING_STEP_NAME = 'Streaming program'
DEFAULT_HIVE_STEP_NAME = 'Hive program'
DEFAULT_PIG_STEP_NAME = 'Pig program'
DEFAULT_IMPALA_STEP_NAME = 'Impala program'
DEFAULT_SPARK_STEP_NAME = 'Spark application'
ARGS = '--args'
RUN_HIVE_SCRIPT = '--run-hive-script'
HIVE_VERSIONS = '--hive-versions'
HIVE_STEP_CONFIG = 'HiveStepConfig'
RUN_PIG_SCRIPT = '--run-pig-script'
PIG_VERSIONS = '--pig-versions'
PIG_STEP_CONFIG = 'PigStepConfig'
RUN_IMPALA_SCRIPT = '--run-impala-script'
SPARK_SUBMIT_PATH = '/home/hadoop/spark/bin/spark-submit'
SPARK_SUBMIT_COMMAND = 'spark-submit'
IMPALA_STEP_CONFIG = 'ImpalaStepConfig'
SPARK_STEP_CONFIG = 'SparkStepConfig'
STREAMING_STEP_CONFIG = 'StreamingStepConfig'
CUSTOM_JAR_STEP_CONFIG = 'CustomJARStepConfig'
INSTALL_PIG_ARG = '--install-pig'
INSTALL_PIG_NAME = 'Install Pig'
INSTALL_HIVE_ARG = '--install-hive'
INSTALL_HIVE_NAME = 'Install Hive'
HIVE_SITE_KEY = '--hive-site'
INSTALL_HIVE_SITE_ARG = '--install-hive-site'
INSTALL_HIVE_SITE_NAME = 'Install Hive Site Configuration'
BASE_PATH_ARG = '--base-path'
INSTALL_GANGLIA_NAME = 'Install Ganglia'
INSTALL_HBASE_NAME = 'Install HBase'
START_HBASE_NAME = 'Start HBase'
INSTALL_IMPALA_NAME = 'Install Impala'
IMPALA_VERSION = '--impala-version'
IMPALA_CONF = '--impala-conf'
FULL = 'full'
INCREMENTAL = 'incremental'
MINUTES = 'minutes'
HOURS = 'hours'
DAYS = 'days'
NOW = 'now'
TRUE = 'true'
FALSE = 'false'
EC2 = 'ec2'
EMR = 'elasticmapreduce'
APPLICATION_AUTOSCALING = 'application-autoscaling'
LATEST = 'latest'
APPLICATIONS = ["HIVE", "PIG", "HBASE", "GANGLIA", "IMPALA", "SPARK", "MAPR",
"MAPR_M3", "MAPR_M5", "MAPR_M7"]
SSH_USER = 'hadoop'
STARTING_STATES = ['STARTING', 'BOOTSTRAPPING']
TERMINATED_STATES = ['TERMINATED', 'TERMINATING', 'TERMINATED_WITH_ERRORS']
# list-clusters
LIST_CLUSTERS_ACTIVE_STATES = ['STARTING', 'BOOTSTRAPPING', 'RUNNING',
'WAITING', 'TERMINATING']
LIST_CLUSTERS_TERMINATED_STATES = ['TERMINATED']
LIST_CLUSTERS_FAILED_STATES = ['TERMINATED_WITH_ERRORS']
INSTANCE_FLEET_TYPE = 'INSTANCE_FLEET'
awscli-1.17.14/awscli/customizations/emr/argumentschema.py 0000644 0000000 0000000 00000075342 13620325554 023642 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import helptext
from awscli.customizations.emr.createdefaultroles import EC2_ROLE_NAME
CONFIGURATIONS_PROPERTIES_SCHEMA = {
"type": "map",
"key": {
"type": "string",
"description": "Configuration key"
},
"value": {
"type": "string",
"description": "Configuration value"
},
"description": "Application configuration properties"
}
CONFIGURATIONS_CLASSIFICATION_SCHEMA = {
"type": "string",
"description": "Application configuration classification name",
}
INNER_CONFIGURATIONS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Classification": CONFIGURATIONS_CLASSIFICATION_SCHEMA,
"Properties": CONFIGURATIONS_PROPERTIES_SCHEMA
}
},
"description": "Instance group application configurations."
}
OUTER_CONFIGURATIONS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Classification": CONFIGURATIONS_CLASSIFICATION_SCHEMA,
"Properties": CONFIGURATIONS_PROPERTIES_SCHEMA,
"Configurations": INNER_CONFIGURATIONS_SCHEMA
}
},
"description": "Instance group application configurations."
}
INSTANCE_GROUPS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"description":
"Friendly name given to the instance group."
},
"InstanceGroupType": {
"type": "string",
"description":
"The type of the instance group in the cluster.",
"enum": ["MASTER", "CORE", "TASK"],
"required": True
},
"BidPrice": {
"type": "string",
"description":
"Bid price for each Amazon EC2 instance in the "
"instance group when launching nodes as Spot Instances, "
"expressed in USD."
},
"InstanceType": {
"type": "string",
"description":
"The Amazon EC2 instance type for all instances "
"in the instance group.",
"required": True
},
"InstanceCount": {
"type": "integer",
"description": "Target number of Amazon EC2 instances "
"for the instance group",
"required": True
},
"EbsConfiguration": {
"type": "object",
"description": "EBS configuration that will be associated with the instance group.",
"properties": {
"EbsOptimized": {
"type": "boolean",
"description": "Boolean flag used to tag EBS-optimized instances.",
},
"EbsBlockDeviceConfigs": {
"type": "array",
"items": {
"type": "object",
"properties": {
"VolumeSpecification" : {
"type": "object",
"description": "The EBS volume specification that will be created and attached to every instance in this instance group.",
"properties": {
"VolumeType": {
"type": "string",
"description": "The EBS volume type that is attached to all the instances in the instance group. Valid types are: gp2, io1, and standard.",
"required": True
},
"SizeInGB": {
"type": "integer",
"description": "The EBS volume size, in GB, that is attached to all the instances in the instance group.",
"required": True
},
"Iops": {
"type": "integer",
"description": "The IOPS of the EBS volume that is attached to all the instances in the instance group.",
}
}
},
"VolumesPerInstance": {
"type": "integer",
"description": "The number of EBS volumes that will be created and attached to each instance in the instance group.",
}
}
}
}
}
},
"AutoScalingPolicy": {
"type": "object",
"description": "Auto Scaling policy that will be associated with the instance group.",
"properties": {
"Constraints": {
"type": "object",
"description": "The Constraints that will be associated to an Auto Scaling policy.",
"properties": {
"MinCapacity": {
"type": "integer",
"description": "The minimum value for the instances to scale in"
" to in response to scaling activities."
},
"MaxCapacity": {
"type": "integer",
"description": "The maximum value for the instances to scale out to in response"
" to scaling activities"
}
}
},
"Rules": {
"type": "array",
"description": "The Rules associated to an Auto Scaling policy.",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"description": "Name of the Auto Scaling rule."
},
"Description": {
"type": "string",
"description": "Description of the Auto Scaling rule."
},
"Action": {
"type": "object",
"description": "The Action associated to an Auto Scaling rule.",
"properties": {
"Market": { # Required for Instance Fleets
"type": "string",
"description": "Market type of the Amazon EC2 instances used to create a "
"cluster node by Auto Scaling action.",
"enum": ["ON_DEMAND", "SPOT"]
},
"SimpleScalingPolicyConfiguration": {
"type": "object",
"description": "The Simple scaling configuration that will be associated"
"to Auto Scaling action.",
"properties": {
"AdjustmentType": {
"type": "string",
"description": "Specifies how the ScalingAdjustment parameter is "
"interpreted.",
"enum": ["CHANGE_IN_CAPACITY", "PERCENT_CHANGE_IN_CAPACITY",
"EXACT_CAPACITY"]
},
"ScalingAdjustment": {
"type": "integer",
"description": "The amount by which to scale, based on the "
"specified adjustment type."
},
"CoolDown": {
"type": "integer",
"description": "The amount of time, in seconds, after a scaling "
"activity completes and before the next scaling "
"activity can start."
}
}
}
}
},
"Trigger": {
"type": "object",
"description": "The Trigger associated to an Auto Scaling rule.",
"properties": {
"CloudWatchAlarmDefinition": {
"type": "object",
"description": "The Alarm to be registered with CloudWatch, to trigger"
" scaling activities.",
"properties": {
"ComparisonOperator": {
"type": "string",
"description": "The arithmetic operation to use when comparing the"
" specified Statistic and Threshold."
},
"EvaluationPeriods": {
"type": "integer",
"description": "The number of periods over which data is compared"
" to the specified threshold."
},
"MetricName": {
"type": "string",
"description": "The name for the alarm's associated metric."
},
"Namespace": {
"type": "string",
"description": "The namespace for the alarm's associated metric."
},
"Period": {
"type": "integer",
"description": "The period in seconds over which the specified "
"statistic is applied."
},
"Statistic": {
"type": "string",
"description": "The statistic to apply to the alarm's associated "
"metric."
},
"Threshold": {
"type": "double",
"description": "The value against which the specified statistic is "
"compared."
},
"Unit": {
"type": "string",
"description": "The statistic's unit of measure."
},
"Dimensions": {
"type": "array",
"description": "The dimensions for the alarm's associated metric.",
"items": {
"type": "object",
"properties": {
"Key": {
"type": "string",
"description": "Dimension Key."
},
"Value": {
"type": "string",
"description": "Dimension Value."
}
}
}
}
}
}
}
}
}
}
}
}
},
"Configurations": OUTER_CONFIGURATIONS_SCHEMA
}
}
}
INSTANCE_FLEETS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"description": "Friendly name given to the instance fleet."
},
"InstanceFleetType": {
"type": "string",
"description": "The type of the instance fleet in the cluster.",
"enum": ["MASTER", "CORE", "TASK"],
"required": True
},
"TargetOnDemandCapacity": {
"type": "integer",
"description": "Target on-demand capacity for the instance fleet."
},
"TargetSpotCapacity": {
"type": "integer",
"description": "Target spot capacity for the instance fleet."
},
"InstanceTypeConfigs": {
"type": "array",
"items": {
"type": "object",
"properties": {
"InstanceType": {
"type": "string",
"description": "The Amazon EC2 instance type for the instance fleet.",
"required": True
},
"WeightedCapacity": {
"type": "integer",
"description": "The weight assigned to an instance type, which will impact the overall fulfillment of the capacity."
},
"BidPrice": {
"type": "string",
"description": "Bid price for each Amazon EC2 instance in the "
"instance fleet when launching nodes as Spot Instances, "
"expressed in USD."
},
"BidPriceAsPercentageOfOnDemandPrice": {
"type": "double",
"description": "Bid price as percentage of on-demand price."
},
"EbsConfiguration": {
"type": "object",
"description": "EBS configuration that is associated with the instance group.",
"properties": {
"EbsOptimized": {
"type": "boolean",
"description": "Boolean flag used to tag EBS-optimized instances.",
},
"EbsBlockDeviceConfigs": {
"type": "array",
"items": {
"type": "object",
"properties": {
"VolumeSpecification" : {
"type": "object",
"description": "The EBS volume specification that is created "
"and attached to each instance in the instance group.",
"properties": {
"VolumeType": {
"type": "string",
"description": "The EBS volume type that is attached to all "
"the instances in the instance group. Valid types are: "
"gp2, io1, and standard.",
"required": True
},
"SizeInGB": {
"type": "integer",
"description": "The EBS volume size, in GB, that is attached "
"to all the instances in the instance group.",
"required": True
},
"Iops": {
"type": "integer",
"description": "The IOPS of the EBS volume that is attached to "
"all the instances in the instance group.",
}
}
},
"VolumesPerInstance": {
"type": "integer",
"description": "The number of EBS volumes that will be created and "
"attached to each instance in the instance group.",
}
}
}
}
}
},
"Configurations": OUTER_CONFIGURATIONS_SCHEMA
}
}
},
"LaunchSpecifications": {
"type": "object",
"properties" : {
"SpotSpecification": {
"type": "object",
"properties": {
"TimeoutDurationMinutes": {
"type": "integer",
"description": "The time, in minutes, after which the action specified in TimeoutAction field will be performed if requested resources are unavailable."
},
"TimeoutAction": {
"type": "string",
"description": "The action that is performed after TimeoutDurationMinutes.",
"enum": [
"TERMINATE_CLUSTER",
"SWITCH_TO_ONDEMAND"
]
},
"BlockDurationMinutes": {
"type": "integer",
"description": "Block duration in minutes."
}
}
}
}
}
}
}
}
EC2_ATTRIBUTES_SCHEMA = {
"type": "object",
"properties": {
"KeyName": {
"type": "string",
"description":
"The name of the Amazon EC2 key pair that can "
"be used to ssh to the master node as the user 'hadoop'."
},
"SubnetId": {
"type": "string",
"description":
"To launch the cluster in Amazon "
"Virtual Private Cloud (Amazon VPC), set this parameter to "
"the identifier of the Amazon VPC subnet where you want "
"the cluster to launch. If you do not specify this value, "
"the cluster is launched in the normal Amazon Web Services "
"cloud, outside of an Amazon VPC. "
},
"SubnetIds": {
"type": "array",
"description":
"List of SubnetIds.",
"items": {
"type": "string"
}
},
"AvailabilityZone": {
"type": "string",
"description": "The Availability Zone the cluster will run in."
},
"AvailabilityZones": {
"type": "array",
"description": "List of AvailabilityZones.",
"items": {
"type": "string"
}
},
"InstanceProfile": {
"type": "string",
"description":
"An IAM role for the cluster. The EC2 instances of the cluster"
" assume this role. The default role is " +
EC2_ROLE_NAME + ". In order to use the default"
" role, you must have already created it using the "
"create-default-roles
command. "
},
"EmrManagedMasterSecurityGroup": {
"type": "string",
"description": helptext.EMR_MANAGED_MASTER_SECURITY_GROUP
},
"EmrManagedSlaveSecurityGroup": {
"type": "string",
"description": helptext.EMR_MANAGED_SLAVE_SECURITY_GROUP
},
"ServiceAccessSecurityGroup": {
"type": "string",
"description": helptext.SERVICE_ACCESS_SECURITY_GROUP
},
"AdditionalMasterSecurityGroups": {
"type": "array",
"description": helptext.ADDITIONAL_MASTER_SECURITY_GROUPS,
"items": {
"type": "string"
}
},
"AdditionalSlaveSecurityGroups": {
"type": "array",
"description": helptext.ADDITIONAL_SLAVE_SECURITY_GROUPS,
"items": {
"type": "string"
}
}
}
}
APPLICATIONS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"description": "Application name.",
"enum": ["MapR", "HUE", "HIVE", "PIG", "HBASE",
"IMPALA", "GANGLIA", "HADOOP", "SPARK"],
"required": True
},
"Args": {
"type": "array",
"description":
"A list of arguments to pass to the application.",
"items": {
"type": "string"
}
}
}
}
}
BOOTSTRAP_ACTIONS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string",
"default": "Bootstrap Action"
},
"Path": {
"type": "string",
"description":
"Location of the script to run during a bootstrap action. "
"Can be either a location in Amazon S3 or "
"on a local file system.",
"required": True
},
"Args": {
"type": "array",
"description":
"A list of command line arguments to pass to "
"the bootstrap action script",
"items": {
"type": "string"
}
}
}
}
}
STEPS_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"Type": {
"type": "string",
"description":
"The type of a step to be added to the cluster.",
"default": "custom_jar",
"enum": ["CUSTOM_JAR", "STREAMING", "HIVE", "PIG", "IMPALA"],
},
"Name": {
"type": "string",
"description": "The name of the step. ",
},
"ActionOnFailure": {
"type": "string",
"description": "The action to take if the cluster step fails.",
"enum": ["TERMINATE_CLUSTER", "CANCEL_AND_WAIT", "CONTINUE"],
"default": "CONTINUE"
},
"Jar": {
"type": "string",
"description": "A path to a JAR file run during the step.",
},
"Args": {
"type": "array",
"description":
"A list of command line arguments to pass to the step.",
"items": {
"type": "string"
}
},
"MainClass": {
"type": "string",
"description":
"The name of the main class in the specified "
"Java file. If not specified, the JAR file should "
"specify a Main-Class in its manifest file."
},
"Properties": {
"type": "string",
"description":
"A list of Java properties that are set when the step "
"runs. You can use these properties to pass key value "
"pairs to your main function."
}
}
}
}
HBASE_RESTORE_FROM_BACKUP_SCHEMA = {
"type": "object",
"properties": {
"Dir": {
"type": "string",
"description": helptext.HBASE_BACKUP_DIR
},
"BackupVersion": {
"type": "string",
"description": helptext.HBASE_BACKUP_VERSION
}
}
}
EMR_FS_SCHEMA = {
"type": "object",
"properties": {
"Consistent": {
"type": "boolean",
"description": "Enable EMRFS consistent view."
},
"SSE": {
"type": "boolean",
"description": "Enable Amazon S3 server-side encryption on files "
"written to S3 by EMRFS."
},
"RetryCount": {
"type": "integer",
"description":
"The maximum number of times to retry upon S3 inconsistency."
},
"RetryPeriod": {
"type": "integer",
"description": "The amount of time (in seconds) until the first "
"retry. Subsequent retries use an exponential "
"back-off."
},
"Args": {
"type": "array",
"description": "A list of arguments to pass for additional "
"EMRFS configuration.",
"items": {
"type": "string"
}
},
"Encryption": {
"type": "string",
"description": "EMRFS encryption type.",
"enum": ["SERVERSIDE", "CLIENTSIDE"]
},
"ProviderType": {
"type": "string",
"description": "EMRFS client-side encryption provider type.",
"enum": ["KMS", "CUSTOM"]
},
"KMSKeyId": {
"type": "string",
"description": "AWS KMS's customer master key identifier",
},
"CustomProviderLocation": {
"type": "string",
"description": "Custom encryption provider JAR location."
},
"CustomProviderClass": {
"type": "string",
"description": "Custom encryption provider full class name."
}
}
}
TAGS_SCHEMA = {
"type": "array",
"items": {
"type": "string"
}
}
KERBEROS_ATTRIBUTES_SCHEMA = {
"type": "object",
"properties": {
"Realm": {
"type": "string",
"description": "The name of Kerberos realm."
},
"KdcAdminPassword": {
"type": "string",
"description": "The password of Kerberos administrator."
},
"CrossRealmTrustPrincipalPassword": {
"type": "string",
"description": "The password to establish cross-realm trusts."
},
"ADDomainJoinUser": {
"type": "string",
"description": "The name of the user with privileges to join instances to Active Directory."
},
"ADDomainJoinPassword": {
"type": "string",
"description": "The password of the user with privileges to join instances to Active Directory."
}
}
}
awscli-1.17.14/awscli/customizations/emr/exceptions.py 0000644 0000000 0000000 00000023547 13620325554 023020 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
class EmrError(Exception):
"""
The base exception class for Emr exceptions.
:ivar msg: The descriptive message associated with the error.
"""
fmt = 'An unspecified error occurred'
def __init__(self, **kwargs):
msg = self.fmt.format(**kwargs)
Exception.__init__(self, msg)
self.kwargs = kwargs
class MissingParametersError(EmrError):
"""
One or more required parameters were not supplied.
:ivar object_name: The object that has missing parameters.
This can be an operation or a parameter (in the
case of inner params). The str() of this object
will be used so it doesn't need to implement anything
other than str().
:ivar missing: The names of the missing parameters.
"""
fmt = ('aws: error: The following required parameters are missing for '
'{object_name}: {missing}.')
class EmptyListError(EmrError):
"""
The provided list is empty.
:ivar param: The provided list parameter
"""
fmt = ('aws: error: The prameter {param} cannot be an empty list.')
class MissingRequiredInstanceGroupsError(EmrError):
"""
In create-cluster command, none of --instance-group,
--instance-count nor --instance-type were not supplied.
"""
fmt = ('aws: error: Must specify either --instance-groups or '
'--instance-type with --instance-count(optional) to '
'configure instance groups.')
class InstanceGroupsValidationError(EmrError):
"""
--instance-type and --instance-count are shortcut option
for --instance-groups and they cannot be specified
together with --instance-groups
"""
fmt = ('aws: error: You may not specify --instance-type '
'or --instance-count with --instance-groups, '
'because --instance-type and --instance-count are '
'shortcut options for --instance-groups.')
class InvalidAmiVersionError(EmrError):
"""
The supplied ami-version is invalid.
:ivar ami_version: The provided ami_version.
"""
fmt = ('aws: error: The supplied AMI version "{ami_version}" is invalid.'
' Please see AMI Versions Supported in Amazon EMR in '
'Amazon Elastic MapReduce Developer Guide: '
'http://docs.aws.amazon.com/ElasticMapReduce/'
'latest/DeveloperGuide/ami-versions-supported.html')
class MissingBooleanOptionsError(EmrError):
"""
Required boolean options are not supplied.
:ivar true_option
:ivar false_option
"""
fmt = ('aws: error: Must specify one of the following boolean options: '
'{true_option}|{false_option}.')
class UnknownStepTypeError(EmrError):
"""
The provided step type is not supported.
:ivar step_type: the step_type provided.
"""
fmt = ('aws: error: The step type {step_type} is not supported.')
class UnknownIamEndpointError(EmrError):
"""
The IAM endpoint is not known for the specified region.
:ivar region: The region specified.
"""
fmt = 'IAM endpoint not known for region: {region}.' +\
' Specify the iam-endpoint using the --iam-endpoint option.'
class ResolveServicePrincipalError(EmrError):
"""
The service principal could not be resolved from the region or the
endpoint.
"""
fmt = 'Could not resolve the service principal from' +\
' the region or the endpoint.'
class LogUriError(EmrError):
"""
The LogUri is not specified and debugging is enabled for the cluster.
"""
fmt = ('aws: error: LogUri not specified. You must specify a logUri '
'if you enable debugging when creating a cluster.')
class MasterDNSNotAvailableError(EmrError):
"""
Cannot get dns of master node on the cluster.
"""
fmt = 'Cannot get DNS of master node on the cluster. '\
' Please try again after some time.'
class WrongPuttyKeyError(EmrError):
"""
A wrong key has been used with a compatible program.
"""
fmt = 'Key file file format is incorrect. Putty expects a ppk file. '\
'Please refer to documentation at http://docs.aws.amazon.com/'\
'ElasticMapReduce/latest/DeveloperGuide/EMR_SetUp_SSH.html. '
class SSHNotFoundError(EmrError):
"""
SSH or Putty not available.
"""
fmt = 'SSH or Putty not available. Please refer to the documentation '\
'at http://docs.aws.amazon.com/ElasticMapReduce/latest/'\
'DeveloperGuide/EMR_SetUp_SSH.html.'
class SCPNotFoundError(EmrError):
"""
SCP or Pscp not available.
"""
fmt = 'SCP or Pscp not available. Please refer to the documentation '\
'at http://docs.aws.amazon.com/ElasticMapReduce/latest/'\
'DeveloperGuide/EMR_SetUp_SSH.html. '
class SubnetAndAzValidationError(EmrError):
"""
SubnetId and AvailabilityZone are mutual exclusive in --ec2-attributes.
"""
fmt = ('aws: error: You may not specify both a SubnetId and an Availabili'
'tyZone (placement) because ec2SubnetId implies a placement.')
class RequiredOptionsError(EmrError):
"""
Either of option1 or option2 is required.
"""
fmt = ('aws: error: Either {option1} or {option2} is required.')
class MutualExclusiveOptionError(EmrError):
"""
The provided option1 and option2 are mutually exclusive.
:ivar option1
:ivar option2
:ivar message (optional)
"""
def __init__(self, **kwargs):
msg = ('aws: error: You cannot specify both ' +
kwargs.get('option1', '') + ' and ' +
kwargs.get('option2', '') + ' options together.' +
kwargs.get('message', ''))
Exception.__init__(self, msg)
class MissingApplicationsError(EmrError):
"""
The application required for a step is not installed when creating a
cluster.
:ivar applications
"""
def __init__(self, **kwargs):
msg = ('aws: error: Some of the steps require the following'
' applications to be installed: ' +
', '.join(kwargs['applications']) + '. Please install the'
' applications using --applications.')
Exception.__init__(self, msg)
class ClusterTerminatedError(EmrError):
"""
The cluster is terminating or has already terminated.
"""
fmt = 'aws: error: Cluster terminating or already terminated.'
class ClusterStatesFilterValidationError(EmrError):
"""
In the list-clusters command, customers can specify only one
of the following states filters:
--cluster-states, --active, --terminated, --failed
"""
fmt = ('aws: error: You can specify only one of the cluster state '
'filters: --cluster-states, --active, --terminated, --failed.')
class MissingClusterAttributesError(EmrError):
"""
In the modify-cluster-attributes command, customers need to provide
at least one of the following cluster attributes: --visible-to-all-users,
--no-visible-to-all-users, --termination-protected
and --no-termination-protected
"""
fmt = ('aws: error: Must specify one of the following boolean options: '
'--visible-to-all-users|--no-visible-to-all-users, '
'--termination-protected|--no-termination-protected.')
class InvalidEmrFsArgumentsError(EmrError):
"""
The provided EMRFS parameters are invalid as parent feature e.g.,
Consistent View, CSE, SSE is not configured
:ivar invalid: Invalid parameters
:ivar parent_object_name: Parent feature name
"""
fmt = ('aws: error: {parent_object_name} is not specified. Thus, '
' following parameters are invalid: {invalid}')
class DuplicateEmrFsConfigurationError(EmrError):
fmt = ('aws: error: EMRFS should be configured either using '
'--configuration or --emrfs but not both')
class UnknownCseProviderTypeError(EmrError):
"""
The provided EMRFS client-side encryption provider type is not supported.
:ivar provider_type: the provider_type provided.
"""
fmt = ('aws: error: The client side encryption type "{provider_type}" is '
'not supported. You must specify either KMS or Custom')
class UnknownEncryptionTypeError(EmrError):
"""
The provided encryption type is not supported.
:ivar provider_type: the provider_type provided.
"""
fmt = ('aws: error: The encryption type "{encryption}" is invalid. '
'You must specify either ServerSide or ClientSide')
class BothSseAndEncryptionConfiguredError(EmrError):
"""
Only one of SSE or Encryption can be configured.
:ivar sse: Value for SSE
:ivar encryption: Value for encryption
"""
fmt = ('aws: error: Both SSE={sse} and Encryption={encryption} are '
'configured for --emrfs. You must specify only one of the two.')
class InvalidBooleanConfigError(EmrError):
fmt = ("aws: error: {config_value} for {config_key} in the config file is "
"invalid. The value should be either 'True' or 'False'. Use "
"'aws configure set {profile_var_name}.emr.{config_key} ' "
"command to set a valid value.")
class UnsupportedCommandWithReleaseError(EmrError):
fmt = ("aws: error: {command} is not supported with "
"'{release_label}' release.")
class MissingAutoScalingRoleError(EmrError):
fmt = ("aws: error: Must specify --auto-scaling-role when configuring an "
"AutoScaling policy for an instance group.")
awscli-1.17.14/awscli/customizations/emr/addsteps.py 0000644 0000000 0000000 00000003633 13620325554 022440 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr import steputils
from awscli.customizations.emr.command import Command
class AddSteps(Command):
NAME = 'add-steps'
DESCRIPTION = ('Add a list of steps to a cluster.')
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID
},
{'name': 'steps',
'required': True,
'nargs': '+',
'schema': argumentschema.STEPS_SCHEMA,
'help_text': helptext.STEPS
}
]
def _run_main_command(self, parsed_args, parsed_globals):
parsed_steps = parsed_args.steps
release_label = emrutils.get_release_label(
parsed_args.cluster_id, self._session, self.region,
parsed_globals.endpoint_url, parsed_globals.verify_ssl)
step_list = steputils.build_step_config_list(
parsed_step_list=parsed_steps, region=self.region,
release_label=release_label)
parameters = {
'JobFlowId': parsed_args.cluster_id,
'Steps': step_list
}
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
awscli-1.17.14/awscli/customizations/emr/applicationutils.py 0000644 0000000 0000000 00000014626 13620325554 024221 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
def build_applications(region,
parsed_applications, ami_version=None):
app_list = []
step_list = []
ba_list = []
for app_config in parsed_applications:
app_name = app_config['Name'].lower()
if app_name == constants.HIVE:
hive_version = constants.LATEST
step_list.append(
_build_install_hive_step(region=region))
args = app_config.get('Args')
if args is not None:
hive_site_path = _find_matching_arg(
key=constants.HIVE_SITE_KEY, args_list=args)
if hive_site_path is not None:
step_list.append(
_build_install_hive_site_step(
region=region,
hive_site_path=hive_site_path))
elif app_name == constants.PIG:
pig_version = constants.LATEST
step_list.append(
_build_pig_install_step(
region=region))
elif app_name == constants.GANGLIA:
ba_list.append(
_build_ganglia_install_bootstrap_action(
region=region))
elif app_name == constants.HBASE:
ba_list.append(
_build_hbase_install_bootstrap_action(
region=region))
if ami_version >= '3.0':
step_list.append(
_build_hbase_install_step(
constants.HBASE_PATH_HADOOP2_INSTALL_JAR))
elif ami_version >= '2.1':
step_list.append(
_build_hbase_install_step(
constants.HBASE_PATH_HADOOP1_INSTALL_JAR))
else:
raise ValueError('aws: error: AMI version ' + ami_version +
'is not compatible with HBase.')
elif app_name == constants.IMPALA:
ba_list.append(
_build_impala_install_bootstrap_action(
region=region,
args=app_config.get('Args')))
else:
app_list.append(
_build_supported_product(
app_config['Name'], app_config.get('Args')))
return app_list, ba_list, step_list
def _build_supported_product(name, args):
if args is None:
args = []
config = {'Name': name.lower(), 'Args': args}
return config
def _build_ganglia_install_bootstrap_action(region):
return emrutils.build_bootstrap_action(
name=constants.INSTALL_GANGLIA_NAME,
path=emrutils.build_s3_link(
relative_path=constants.GANGLIA_INSTALL_BA_PATH,
region=region))
def _build_hbase_install_bootstrap_action(region):
return emrutils.build_bootstrap_action(
name=constants.INSTALL_HBASE_NAME,
path=emrutils.build_s3_link(
relative_path=constants.HBASE_INSTALL_BA_PATH,
region=region))
def _build_hbase_install_step(jar):
return emrutils.build_step(
jar=jar,
name=constants.START_HBASE_NAME,
action_on_failure=constants.TERMINATE_CLUSTER,
args=constants.HBASE_INSTALL_ARG)
def _build_impala_install_bootstrap_action(region, args=None):
args_list = [
constants.BASE_PATH_ARG,
emrutils.build_s3_link(region=region),
constants.IMPALA_VERSION,
constants.LATEST]
if args is not None:
args_list.append(constants.IMPALA_CONF)
args_list.append(','.join(args))
return emrutils.build_bootstrap_action(
name=constants.INSTALL_IMPALA_NAME,
path=emrutils.build_s3_link(
relative_path=constants.IMPALA_INSTALL_PATH,
region=region),
args=args_list)
def _build_install_hive_step(region,
action_on_failure=constants.TERMINATE_CLUSTER):
step_args = [
emrutils.build_s3_link(constants.HIVE_SCRIPT_PATH, region),
constants.INSTALL_HIVE_ARG,
constants.BASE_PATH_ARG,
emrutils.build_s3_link(constants.HIVE_BASE_PATH, region),
constants.HIVE_VERSIONS,
constants.LATEST]
step = emrutils.build_step(
name=constants.INSTALL_HIVE_NAME,
action_on_failure=action_on_failure,
jar=emrutils.build_s3_link(constants.SCRIPT_RUNNER_PATH, region),
args=step_args)
return step
def _build_install_hive_site_step(region, hive_site_path,
action_on_failure=constants.CANCEL_AND_WAIT):
step_args = [
emrutils.build_s3_link(constants.HIVE_SCRIPT_PATH, region),
constants.BASE_PATH_ARG,
emrutils.build_s3_link(constants.HIVE_BASE_PATH),
constants.INSTALL_HIVE_SITE_ARG,
hive_site_path,
constants.HIVE_VERSIONS,
constants.LATEST]
step = emrutils.build_step(
name=constants.INSTALL_HIVE_SITE_NAME,
action_on_failure=action_on_failure,
jar=emrutils.build_s3_link(constants.SCRIPT_RUNNER_PATH, region),
args=step_args)
return step
def _build_pig_install_step(region,
action_on_failure=constants.TERMINATE_CLUSTER):
step_args = [
emrutils.build_s3_link(constants.PIG_SCRIPT_PATH, region),
constants.INSTALL_PIG_ARG,
constants.BASE_PATH_ARG,
emrutils.build_s3_link(constants.PIG_BASE_PATH, region),
constants.PIG_VERSIONS,
constants.LATEST]
step = emrutils.build_step(
name=constants.INSTALL_PIG_NAME,
action_on_failure=action_on_failure,
jar=emrutils.build_s3_link(constants.SCRIPT_RUNNER_PATH, region),
args=step_args)
return step
def _find_matching_arg(key, args_list):
for arg in args_list:
if key in arg:
return arg
return None
awscli-1.17.14/awscli/customizations/emr/createdefaultroles.py 0000644 0000000 0000000 00000031741 13620325554 024507 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import re
import botocore.exceptions
from botocore import xform_name
from awscli.customizations.utils import get_policy_arn_suffix
from awscli.customizations.emr import configutils
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from awscli.customizations.emr.command import Command
from awscli.customizations.emr.constants import EC2
from awscli.customizations.emr.constants import EC2_ROLE_NAME
from awscli.customizations.emr.constants import ROLE_ARN_PATTERN
from awscli.customizations.emr.constants import EMR
from awscli.customizations.emr.constants import EMR_ROLE_NAME
from awscli.customizations.emr.constants import EMR_AUTOSCALING_ROLE_NAME
from awscli.customizations.emr.constants import APPLICATION_AUTOSCALING
from awscli.customizations.emr.constants import EC2_ROLE_POLICY_NAME
from awscli.customizations.emr.constants import EMR_ROLE_POLICY_NAME
from awscli.customizations.emr.constants import EMR_AUTOSCALING_ROLE_POLICY_NAME
from awscli.customizations.emr.exceptions import ResolveServicePrincipalError
LOG = logging.getLogger(__name__)
def assume_role_policy(serviceprincipal):
return {
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {"Service": serviceprincipal},
"Action": "sts:AssumeRole"
}
]
}
def get_role_policy_arn(region, policy_name):
region_suffix = get_policy_arn_suffix(region)
role_arn = ROLE_ARN_PATTERN.replace("{{region_suffix}}", region_suffix)
role_arn = role_arn.replace("{{policy_name}}", policy_name)
return role_arn
def get_service_principal(service, endpoint_host):
return service + '.' + _get_suffix(endpoint_host)
def _get_suffix(endpoint_host):
return _get_suffix_from_endpoint_host(endpoint_host)
def _get_suffix_from_endpoint_host(endpoint_host):
suffix_match = _get_regex_match_from_endpoint_host(endpoint_host)
if suffix_match is not None and suffix_match.lastindex >= 3:
suffix = suffix_match.group(3)
else:
raise ResolveServicePrincipalError
return suffix
def _get_regex_match_from_endpoint_host(endpoint_host):
if endpoint_host is None:
return None
regex_match = re.match("(https?://)([^.]+).elasticmapreduce.([^/]*)",
endpoint_host)
# Supports 'elasticmapreduce.{region}.' and '{region}.elasticmapreduce.'
if regex_match is None:
regex_match = re.match("(https?://elasticmapreduce).([^.]+).([^/]*)",
endpoint_host)
return regex_match
class CreateDefaultRoles(Command):
NAME = "create-default-roles"
DESCRIPTION = ('Creates the default IAM role ' +
EC2_ROLE_NAME + ' and ' +
EMR_ROLE_NAME + ' which can be used when creating the'
' cluster using the create-cluster command. The default'
' roles for EMR use managed policies, which are updated'
' automatically to support future EMR functionality.\n'
'\nIf you do not have a Service Role and Instance Profile '
'variable set for your create-cluster command in the AWS '
'CLI config file, create-default-roles will automatically '
'set the values for these variables with these default '
'roles. If you have already set a value for Service Role '
'or Instance Profile, create-default-roles will not '
'automatically set the defaults for these variables in the '
'AWS CLI config file. You can view settings for variables '
'in the config file using the "aws configure get" command.'
'\n')
ARG_TABLE = [
{'name': 'iam-endpoint',
'no_paramfile': True,
'help_text': 'The IAM endpoint to call for creating the roles.'
' This is optional and should only be specified when a'
' custom endpoint should be called for IAM operations'
'.
'}
]
def _run_main_command(self, parsed_args, parsed_globals):
self.iam_endpoint_url = parsed_args.iam_endpoint
self._check_for_iam_endpoint(self.region, self.iam_endpoint_url)
self.emr_endpoint_url = \
self._session.create_client(
'emr',
region_name=self.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl).meta.endpoint_url
LOG.debug('elasticmapreduce endpoint used for resolving'
' service principal: ' + self.emr_endpoint_url)
# Create default EC2 Role for EMR if it does not exist.
ec2_result, ec2_policy = self._create_role_if_not_exists(parsed_globals, EC2_ROLE_NAME,
EC2_ROLE_POLICY_NAME, [EC2])
# Create default EC2 Instance Profile for EMR if it does not exist.
instance_profile_name = EC2_ROLE_NAME
if self.check_if_instance_profile_exists(instance_profile_name,
parsed_globals):
LOG.debug('Instance Profile ' + instance_profile_name + ' exists.')
else:
LOG.debug('Instance Profile ' + instance_profile_name +
'does not exist. Creating default Instance Profile ' +
instance_profile_name)
self._create_instance_profile_with_role(instance_profile_name,
instance_profile_name,
parsed_globals)
# Create default EMR Role if it does not exist.
emr_result, emr_policy = self._create_role_if_not_exists(parsed_globals, EMR_ROLE_NAME,
EMR_ROLE_POLICY_NAME, [EMR])
# Create default EMR AutoScaling Role if it does not exist.
emr_autoscaling_result, emr_autoscaling_policy = \
self._create_role_if_not_exists(parsed_globals, EMR_AUTOSCALING_ROLE_NAME,
EMR_AUTOSCALING_ROLE_POLICY_NAME, [EMR, APPLICATION_AUTOSCALING])
configutils.update_roles(self._session)
emrutils.display_response(
self._session,
'create_role',
self._construct_result(ec2_result, ec2_policy,
emr_result, emr_policy,
emr_autoscaling_result, emr_autoscaling_policy),
parsed_globals)
return 0
def _create_role_if_not_exists(self, parsed_globals, role_name, policy_name, service_names):
result = None
policy = None
if self.check_if_role_exists(role_name, parsed_globals):
LOG.debug('Role ' + role_name + ' exists.')
else:
LOG.debug('Role ' + role_name + ' does not exist.'
' Creating default role: ' + role_name)
role_arn = get_role_policy_arn(self.region, policy_name)
result = self._create_role_with_role_policy(
role_name, service_names, role_arn, parsed_globals)
policy = self._get_role_policy(role_arn, parsed_globals)
return result, policy
def _check_for_iam_endpoint(self, region, iam_endpoint):
try:
self._session.create_client('emr', region)
except botocore.exceptions.UnknownEndpointError:
if iam_endpoint is None:
raise exceptions.UnknownIamEndpointError(region=region)
def _construct_result(self, ec2_response, ec2_policy,
emr_response, emr_policy,
emr_autoscaling_response, emr_autoscaling_policy):
result = []
self._construct_role_and_role_policy_structure(
result, ec2_response, ec2_policy)
self._construct_role_and_role_policy_structure(
result, emr_response, emr_policy)
self._construct_role_and_role_policy_structure(
result, emr_autoscaling_response, emr_autoscaling_policy)
return result
def _construct_role_and_role_policy_structure(
self, list, response, policy):
if response is not None and response['Role'] is not None:
list.append({'Role': response['Role'], 'RolePolicy': policy})
return list
def check_if_role_exists(self, role_name, parsed_globals):
parameters = {'RoleName': role_name}
try:
self._call_iam_operation('GetRole', parameters, parsed_globals)
except botocore.exceptions.ClientError as e:
role_not_found_code = "NoSuchEntity"
error_code = e.response.get('Error', {}).get('Code', '')
if role_not_found_code == error_code:
# No role error.
return False
else:
# Some other error. raise.
raise e
return True
def check_if_instance_profile_exists(self, instance_profile_name,
parsed_globals):
parameters = {'InstanceProfileName': instance_profile_name}
try:
self._call_iam_operation('GetInstanceProfile', parameters,
parsed_globals)
except botocore.exceptions.ClientError as e:
profile_not_found_code = 'NoSuchEntity'
error_code = e.response.get('Error', {}).get('Code')
if profile_not_found_code == error_code:
# No instance profile error.
return False
else:
# Some other error. raise.
raise e
return True
def _get_role_policy(self, arn, parsed_globals):
parameters = {}
parameters['PolicyArn'] = arn
policy_details = self._call_iam_operation('GetPolicy', parameters,
parsed_globals)
parameters["VersionId"] = policy_details["Policy"]["DefaultVersionId"]
policy_version_details = self._call_iam_operation('GetPolicyVersion',
parameters,
parsed_globals)
return policy_version_details["PolicyVersion"]["Document"]
def _create_role_with_role_policy(
self, role_name, service_names, role_arn, parsed_globals):
if len(service_names) == 1:
service_principal = get_service_principal(service_names[0], self.emr_endpoint_url)
else:
service_principal = []
for service in service_names:
service_principal.append(get_service_principal(service, self.emr_endpoint_url))
LOG.debug(service_principal)
parameters = {'RoleName': role_name}
_assume_role_policy = \
emrutils.dict_to_string(assume_role_policy(service_principal))
parameters['AssumeRolePolicyDocument'] = _assume_role_policy
create_role_response = self._call_iam_operation('CreateRole',
parameters,
parsed_globals)
parameters = {}
parameters['PolicyArn'] = role_arn
parameters['RoleName'] = role_name
self._call_iam_operation('AttachRolePolicy',
parameters, parsed_globals)
return create_role_response
def _create_instance_profile_with_role(self, instance_profile_name,
role_name, parsed_globals):
# Creating an Instance Profile
parameters = {'InstanceProfileName': instance_profile_name}
self._call_iam_operation('CreateInstanceProfile', parameters,
parsed_globals)
# Adding the role to the Instance Profile
parameters = {}
parameters['InstanceProfileName'] = instance_profile_name
parameters['RoleName'] = role_name
self._call_iam_operation('AddRoleToInstanceProfile', parameters,
parsed_globals)
def _call_iam_operation(self, operation_name, parameters, parsed_globals):
client = self._session.create_client(
'iam', region_name=self.region, endpoint_url=self.iam_endpoint_url,
verify=parsed_globals.verify_ssl)
return getattr(client, xform_name(operation_name))(**parameters)
awscli-1.17.14/awscli/customizations/emr/instancegroupsutils.py 0000644 0000000 0000000 00000006745 13620325554 024765 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
def build_instance_groups(parsed_instance_groups):
"""
Helper method that converts --instance-groups option value in
create-cluster and add-instance-groups to
Amazon Elastic MapReduce InstanceGroupConfig data type.
"""
instance_groups = []
for instance_group in parsed_instance_groups:
ig_config = {}
keys = instance_group.keys()
if 'Name' in keys:
ig_config['Name'] = instance_group['Name']
else:
ig_config['Name'] = instance_group['InstanceGroupType']
ig_config['InstanceType'] = instance_group['InstanceType']
ig_config['InstanceCount'] = instance_group['InstanceCount']
ig_config['InstanceRole'] = instance_group['InstanceGroupType'].upper()
if 'BidPrice' in keys:
if instance_group['BidPrice'] != 'OnDemandPrice':
ig_config['BidPrice'] = instance_group['BidPrice']
ig_config['Market'] = constants.SPOT
else:
ig_config['Market'] = constants.ON_DEMAND
if 'EbsConfiguration' in keys:
ig_config['EbsConfiguration'] = instance_group['EbsConfiguration']
if 'AutoScalingPolicy' in keys:
ig_config['AutoScalingPolicy'] = instance_group['AutoScalingPolicy']
if 'Configurations' in keys:
ig_config['Configurations'] = instance_group['Configurations']
instance_groups.append(ig_config)
return instance_groups
def _build_instance_group(
instance_type, instance_count, instance_group_type):
ig_config = {}
ig_config['InstanceType'] = instance_type
ig_config['InstanceCount'] = instance_count
ig_config['InstanceRole'] = instance_group_type.upper()
ig_config['Name'] = ig_config['InstanceRole']
ig_config['Market'] = constants.ON_DEMAND
return ig_config
def validate_and_build_instance_groups(
instance_groups, instance_type, instance_count):
if (instance_groups is None and instance_type is None):
raise exceptions.MissingRequiredInstanceGroupsError
if (instance_groups is not None and
(instance_type is not None or
instance_count is not None)):
raise exceptions.InstanceGroupsValidationError
if instance_groups is not None:
return build_instance_groups(instance_groups)
else:
instance_groups = []
master_ig = _build_instance_group(
instance_type=instance_type,
instance_count=1,
instance_group_type="MASTER")
instance_groups.append(master_ig)
if instance_count is not None and int(instance_count) > 1:
core_ig = _build_instance_group(
instance_type=instance_type,
instance_count=int(instance_count) - 1,
instance_group_type="CORE")
instance_groups.append(core_ig)
return instance_groups
awscli-1.17.14/awscli/customizations/emr/steputils.py 0000644 0000000 0000000 00000020442 13620325554 022662 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
def build_step_config_list(parsed_step_list, region, release_label):
step_config_list = []
for step in parsed_step_list:
step_type = step.get('Type')
if step_type is None:
step_type = constants.CUSTOM_JAR
step_type = step_type.lower()
step_config = {}
if step_type == constants.CUSTOM_JAR:
step_config = build_custom_jar_step(parsed_step=step)
elif step_type == constants.STREAMING:
step_config = build_streaming_step(
parsed_step=step, release_label=release_label)
elif step_type == constants.HIVE:
step_config = build_hive_step(
parsed_step=step, region=region,
release_label=release_label)
elif step_type == constants.PIG:
step_config = build_pig_step(
parsed_step=step, region=region,
release_label=release_label)
elif step_type == constants.IMPALA:
step_config = build_impala_step(
parsed_step=step, region=region,
release_label=release_label)
elif step_type == constants.SPARK:
step_config = build_spark_step(
parsed_step=step, region=region,
release_label=release_label)
else:
raise exceptions.UnknownStepTypeError(step_type=step_type)
step_config_list.append(step_config)
return step_config_list
def build_custom_jar_step(parsed_step):
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_CUSTOM_JAR_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
emrutils.check_required_field(
structure=constants.CUSTOM_JAR_STEP_CONFIG,
name='Jar',
value=parsed_step.get('Jar'))
return emrutils.build_step(
jar=parsed_step.get('Jar'),
args=parsed_step.get('Args'),
name=name,
action_on_failure=action_on_failure,
main_class=parsed_step.get('MainClass'),
properties=emrutils.parse_key_value_string(
parsed_step.get('Properties')))
def build_streaming_step(parsed_step, release_label):
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_STREAMING_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.STREAMING_STEP_CONFIG,
name='Args',
value=args)
emrutils.check_empty_string_list(name='Args', value=args)
args_list = []
if release_label:
jar = constants.COMMAND_RUNNER
args_list.append(constants.HADOOP_STREAMING_COMMAND)
else:
jar = constants.HADOOP_STREAMING_PATH
args_list += args
return emrutils.build_step(
jar=jar,
args=args_list,
name=name,
action_on_failure=action_on_failure)
def build_hive_step(parsed_step, release_label, region=None):
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.HIVE_STEP_CONFIG, name='Args', value=args)
emrutils.check_empty_string_list(name='Args', value=args)
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_HIVE_STEP_NAME)
action_on_failure = \
_apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
return emrutils.build_step(
jar=_get_runner_jar(release_label, region),
args=_build_hive_args(args, release_label, region),
name=name,
action_on_failure=action_on_failure)
def _build_hive_args(args, release_label, region):
args_list = []
if release_label:
args_list.append(constants.HIVE_SCRIPT_COMMAND)
else:
args_list.append(emrutils.build_s3_link(
relative_path=constants.HIVE_SCRIPT_PATH, region=region))
args_list.append(constants.RUN_HIVE_SCRIPT)
if not release_label:
args_list.append(constants.HIVE_VERSIONS)
args_list.append(constants.LATEST)
args_list.append(constants.ARGS)
args_list += args
return args_list
def build_pig_step(parsed_step, release_label, region=None):
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.PIG_STEP_CONFIG, name='Args', value=args)
emrutils.check_empty_string_list(name='Args', value=args)
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_PIG_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
return emrutils.build_step(
jar=_get_runner_jar(release_label, region),
args=_build_pig_args(args, release_label, region),
name=name,
action_on_failure=action_on_failure)
def _build_pig_args(args, release_label, region):
args_list = []
if release_label:
args_list.append(constants.PIG_SCRIPT_COMMAND)
else:
args_list.append(emrutils.build_s3_link(
relative_path=constants.PIG_SCRIPT_PATH, region=region))
args_list.append(constants.RUN_PIG_SCRIPT)
if not release_label:
args_list.append(constants.PIG_VERSIONS)
args_list.append(constants.LATEST)
args_list.append(constants.ARGS)
args_list += args
return args_list
def build_impala_step(parsed_step, release_label, region=None):
if release_label:
raise exceptions.UnknownStepTypeError(step_type=constants.IMPALA)
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_IMPALA_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
args_list = [
emrutils.build_s3_link(
relative_path=constants.IMPALA_INSTALL_PATH, region=region),
constants.RUN_IMPALA_SCRIPT]
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.IMPALA_STEP_CONFIG, name='Args', value=args)
args_list += args
return emrutils.build_step(
jar=emrutils.get_script_runner(region),
args=args_list,
name=name,
action_on_failure=action_on_failure)
def build_spark_step(parsed_step, release_label, region=None):
name = _apply_default_value(
arg=parsed_step.get('Name'),
value=constants.DEFAULT_SPARK_STEP_NAME)
action_on_failure = _apply_default_value(
arg=parsed_step.get('ActionOnFailure'),
value=constants.DEFAULT_FAILURE_ACTION)
args = parsed_step.get('Args')
emrutils.check_required_field(
structure=constants.SPARK_STEP_CONFIG, name='Args', value=args)
return emrutils.build_step(
jar=_get_runner_jar(release_label, region),
args=_build_spark_args(args, release_label, region),
name=name,
action_on_failure=action_on_failure)
def _build_spark_args(args, release_label, region):
args_list = []
if release_label:
args_list.append(constants.SPARK_SUBMIT_COMMAND)
else:
args_list.append(constants.SPARK_SUBMIT_PATH)
args_list += args
return args_list
def _apply_default_value(arg, value):
if arg is None:
arg = value
return arg
def _get_runner_jar(release_label, region):
return constants.COMMAND_RUNNER if release_label \
else emrutils.get_script_runner(region)
awscli-1.17.14/awscli/customizations/emr/addinstancegroups.py 0000644 0000000 0000000 00000005127 13620325554 024346 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr import instancegroupsutils
from awscli.customizations.emr.command import Command
class AddInstanceGroups(Command):
NAME = 'add-instance-groups'
DESCRIPTION = 'Adds an instance group to a running cluster.'
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'instance-groups', 'required': True,
'help_text': helptext.INSTANCE_GROUPS,
'schema': argumentschema.INSTANCE_GROUPS_SCHEMA}
]
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'JobFlowId': parsed_args.cluster_id}
parameters['InstanceGroups'] = \
instancegroupsutils.build_instance_groups(
parsed_args.instance_groups)
add_instance_groups_response = emrutils.call(
self._session, 'add_instance_groups', parameters,
self.region, parsed_globals.endpoint_url,
parsed_globals.verify_ssl)
constructed_result = self._construct_result(
add_instance_groups_response)
emrutils.display_response(self._session, 'add_instance_groups',
constructed_result, parsed_globals)
return 0
def _construct_result(self, add_instance_groups_result):
jobFlowId = None
instanceGroupIds = None
clusterArn = None
if add_instance_groups_result is not None:
jobFlowId = add_instance_groups_result.get('JobFlowId')
instanceGroupIds = add_instance_groups_result.get(
'InstanceGroupIds')
clusterArn = add_instance_groups_result.get('ClusterArn')
if jobFlowId is not None and instanceGroupIds is not None:
return {'ClusterId': jobFlowId,
'InstanceGroupIds': instanceGroupIds,
'ClusterArn': clusterArn}
else:
return {}
awscli-1.17.14/awscli/customizations/emr/emrutils.py 0000644 0000000 0000000 00000020155 13620325630 022466 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import json
import logging
import os
from awscli.clidriver import CLIOperationCaller
from awscli.customizations.emr import constants
from awscli.customizations.emr import exceptions
from botocore.exceptions import WaiterError, NoCredentialsError
from botocore import xform_name
LOG = logging.getLogger(__name__)
def parse_tags(raw_tags_list):
tags_dict_list = []
if raw_tags_list:
for tag in raw_tags_list:
if tag.find('=') == -1:
key, value = tag, ''
else:
key, value = tag.split('=', 1)
tags_dict_list.append({'Key': key, 'Value': value})
return tags_dict_list
def parse_key_value_string(key_value_string):
# raw_key_value_string is a list of key value pairs separated by comma.
# Examples: "k1=v1,k2='v 2',k3,k4"
key_value_list = []
if key_value_string is not None:
raw_key_value_list = key_value_string.split(',')
for kv in raw_key_value_list:
if kv.find('=') == -1:
key, value = kv, ''
else:
key, value = kv.split('=', 1)
key_value_list.append({'Key': key, 'Value': value})
return key_value_list
else:
return None
def apply_boolean_options(
true_option, true_option_name, false_option, false_option_name):
if true_option and false_option:
error_message = \
'aws: error: cannot use both ' + true_option_name + \
' and ' + false_option_name + ' options together.'
raise ValueError(error_message)
elif true_option:
return True
else:
return False
# Deprecate. Rename to apply_dict
def apply(params, key, value):
if value:
params[key] = value
return params
def apply_dict(params, key, value):
if value:
params[key] = value
return params
def apply_params(src_params, src_key, dest_params, dest_key):
if src_key in src_params.keys() and src_params[src_key]:
dest_params[dest_key] = src_params[src_key]
return dest_params
def build_step(
jar, name='Step',
action_on_failure=constants.DEFAULT_FAILURE_ACTION,
args=None,
main_class=None,
properties=None):
check_required_field(
structure='HadoopJarStep', name='Jar', value=jar)
step = {}
apply_dict(step, 'Name', name)
apply_dict(step, 'ActionOnFailure', action_on_failure)
jar_config = {}
jar_config['Jar'] = jar
apply_dict(jar_config, 'Args', args)
apply_dict(jar_config, 'MainClass', main_class)
apply_dict(jar_config, 'Properties', properties)
step['HadoopJarStep'] = jar_config
return step
def build_bootstrap_action(
path,
name='Bootstrap Action',
args=None):
if path is None:
raise exceptions.MissingParametersError(
object_name='ScriptBootstrapActionConfig', missing='Path')
ba_config = {}
apply_dict(ba_config, 'Name', name)
script_config = {}
apply_dict(script_config, 'Args', args)
script_config['Path'] = path
apply_dict(ba_config, 'ScriptBootstrapAction', script_config)
return ba_config
def build_s3_link(relative_path='', region='us-east-1'):
if region is None:
region = 'us-east-1'
return 's3://{0}.elasticmapreduce{1}'.format(region, relative_path)
def get_script_runner(region='us-east-1'):
if region is None:
region = 'us-east-1'
return build_s3_link(
relative_path=constants.SCRIPT_RUNNER_PATH, region=region)
def check_required_field(structure, name, value):
if not value:
raise exceptions.MissingParametersError(
object_name=structure, missing=name)
def check_empty_string_list(name, value):
if not value or (len(value) == 1 and value[0].strip() == ""):
raise exceptions.EmptyListError(param=name)
def call(session, operation_name, parameters, region_name=None,
endpoint_url=None, verify=None):
# We could get an error from get_endpoint() about not having
# a region configured. Before this happens we want to check
# for credentials so we can give a good error message.
if session.get_credentials() is None:
raise NoCredentialsError()
client = session.create_client(
'emr', region_name=region_name, endpoint_url=endpoint_url,
verify=verify)
LOG.debug('Calling ' + str(operation_name))
return getattr(client, operation_name)(**parameters)
def get_example_file(command):
return open('awscli/examples/emr/' + command + '.rst')
def dict_to_string(dict, indent=2):
return json.dumps(dict, indent=indent)
def get_client(session, parsed_globals):
return session.create_client(
'emr',
region_name=get_region(session, parsed_globals),
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
def get_cluster_state(session, parsed_globals, cluster_id):
client = get_client(session, parsed_globals)
data = client.describe_cluster(ClusterId=cluster_id)
return data['Cluster']['Status']['State']
def find_master_dns(session, parsed_globals, cluster_id):
"""
Returns the master_instance's 'PublicDnsName'.
"""
client = get_client(session, parsed_globals)
data = client.describe_cluster(ClusterId=cluster_id)
return data['Cluster']['MasterPublicDnsName']
def which(program):
for path in os.environ["PATH"].split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, program)
if os.path.isfile(exe_file) and os.access(exe_file, os.X_OK):
return exe_file
return None
def call_and_display_response(session, operation_name, parameters,
parsed_globals):
cli_operation_caller = CLIOperationCaller(session)
cli_operation_caller.invoke(
'emr', operation_name,
parameters, parsed_globals)
def display_response(session, operation_name, result, parsed_globals):
cli_operation_caller = CLIOperationCaller(session)
# Calling a private method. Should be changed after the functionality
# is moved outside CliOperationCaller.
cli_operation_caller._display_response(
operation_name, result, parsed_globals)
def get_region(session, parsed_globals):
region = parsed_globals.region
if region is None:
region = session.get_config_variable('region')
return region
def join(values, separator=',', lastSeparator='and'):
"""
Helper method to print a list of values
[1,2,3] -> '1, 2 and 3'
"""
values = [str(x) for x in values]
if len(values) < 1:
return ""
elif len(values) == 1:
return values[0]
else:
separator = '%s ' % separator
return ' '.join([separator.join(values[:-1]),
lastSeparator, values[-1]])
def split_to_key_value(string):
if string.find('=') == -1:
return string, ''
else:
return string.split('=', 1)
def get_cluster(cluster_id, session, region,
endpoint_url, verify_ssl):
describe_cluster_params = {'ClusterId': cluster_id}
describe_cluster_response = call(
session, 'describe_cluster', describe_cluster_params,
region, endpoint_url,
verify_ssl)
if describe_cluster_response is not None:
return describe_cluster_response.get('Cluster')
def get_release_label(cluster_id, session, region,
endpoint_url, verify_ssl):
cluster = get_cluster(cluster_id, session, region,
endpoint_url, verify_ssl)
if cluster is not None:
return cluster.get('ReleaseLabel')
awscli-1.17.14/awscli/customizations/emr/__init__.py 0000644 0000000 0000000 00000001065 13620325554 022365 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
awscli-1.17.14/awscli/customizations/emr/createcluster.py 0000644 0000000 0000000 00000057735 13620325630 023505 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import re
from awscli.customizations.commands import BasicCommand
from awscli.customizations.emr import applicationutils
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrfsutils
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import exceptions
from awscli.customizations.emr import hbaseutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr import instancegroupsutils
from awscli.customizations.emr import instancefleetsutils
from awscli.customizations.emr import steputils
from awscli.customizations.emr.command import Command
from awscli.customizations.emr.constants import EC2_ROLE_NAME
from awscli.customizations.emr.constants import EMR_ROLE_NAME
from botocore.compat import json
class CreateCluster(Command):
NAME = 'create-cluster'
DESCRIPTION = helptext.CREATE_CLUSTER_DESCRIPTION
ARG_TABLE = [
{'name': 'release-label',
'help_text': helptext.RELEASE_LABEL},
{'name': 'ami-version',
'help_text': helptext.AMI_VERSION},
{'name': 'instance-groups',
'schema': argumentschema.INSTANCE_GROUPS_SCHEMA,
'help_text': helptext.INSTANCE_GROUPS},
{'name': 'instance-type',
'help_text': helptext.INSTANCE_TYPE},
{'name': 'instance-count',
'help_text': helptext.INSTANCE_COUNT},
{'name': 'auto-terminate', 'action': 'store_true',
'group_name': 'auto_terminate',
'help_text': helptext.AUTO_TERMINATE},
{'name': 'no-auto-terminate', 'action': 'store_true',
'group_name': 'auto_terminate'},
{'name': 'instance-fleets',
'schema': argumentschema.INSTANCE_FLEETS_SCHEMA,
'help_text': helptext.INSTANCE_FLEETS},
{'name': 'name',
'default': 'Development Cluster',
'help_text': helptext.CLUSTER_NAME},
{'name': 'log-uri',
'help_text': helptext.LOG_URI},
{'name': 'service-role',
'help_text': helptext.SERVICE_ROLE},
{'name': 'auto-scaling-role',
'help_text': helptext.AUTOSCALING_ROLE},
{'name': 'use-default-roles', 'action': 'store_true',
'help_text': helptext.USE_DEFAULT_ROLES},
{'name': 'configurations',
'help_text': helptext.CONFIGURATIONS},
{'name': 'ec2-attributes',
'help_text': helptext.EC2_ATTRIBUTES,
'schema': argumentschema.EC2_ATTRIBUTES_SCHEMA},
{'name': 'termination-protected', 'action': 'store_true',
'group_name': 'termination_protected',
'help_text': helptext.TERMINATION_PROTECTED},
{'name': 'no-termination-protected', 'action': 'store_true',
'group_name': 'termination_protected'},
{'name': 'scale-down-behavior',
'help_text': helptext.SCALE_DOWN_BEHAVIOR},
{'name': 'visible-to-all-users', 'action': 'store_true',
'group_name': 'visibility',
'help_text': helptext.VISIBILITY},
{'name': 'no-visible-to-all-users', 'action': 'store_true',
'group_name': 'visibility'},
{'name': 'enable-debugging', 'action': 'store_true',
'group_name': 'debug',
'help_text': helptext.DEBUGGING},
{'name': 'no-enable-debugging', 'action': 'store_true',
'group_name': 'debug'},
{'name': 'tags', 'nargs': '+',
'help_text': helptext.TAGS,
'schema': argumentschema.TAGS_SCHEMA},
{'name': 'bootstrap-actions',
'help_text': helptext.BOOTSTRAP_ACTIONS,
'schema': argumentschema.BOOTSTRAP_ACTIONS_SCHEMA},
{'name': 'applications',
'help_text': helptext.APPLICATIONS,
'schema': argumentschema.APPLICATIONS_SCHEMA},
{'name': 'emrfs',
'help_text': helptext.EMR_FS,
'schema': argumentschema.EMR_FS_SCHEMA},
{'name': 'steps',
'schema': argumentschema.STEPS_SCHEMA,
'help_text': helptext.STEPS},
{'name': 'additional-info',
'help_text': helptext.ADDITIONAL_INFO},
{'name': 'restore-from-hbase-backup',
'schema': argumentschema.HBASE_RESTORE_FROM_BACKUP_SCHEMA,
'help_text': helptext.RESTORE_FROM_HBASE},
{'name': 'security-configuration',
'help_text': helptext.SECURITY_CONFIG},
{'name': 'custom-ami-id',
'help_text' : helptext.CUSTOM_AMI_ID},
{'name': 'ebs-root-volume-size',
'help_text' : helptext.EBS_ROOT_VOLUME_SIZE},
{'name': 'repo-upgrade-on-boot',
'help_text' : helptext.REPO_UPGRADE_ON_BOOT},
{'name': 'kerberos-attributes',
'schema': argumentschema.KERBEROS_ATTRIBUTES_SCHEMA,
'help_text': helptext.KERBEROS_ATTRIBUTES},
{'name': 'step-concurrency-level',
'cli_type_name': 'integer',
'help_text': helptext.STEP_CONCURRENCY_LEVEL}
]
SYNOPSIS = BasicCommand.FROM_FILE('emr', 'create-cluster-synopsis.txt')
EXAMPLES = BasicCommand.FROM_FILE('emr', 'create-cluster-examples.rst')
def _run_main_command(self, parsed_args, parsed_globals):
params = {}
params['Name'] = parsed_args.name
self._validate_release_label_ami_version(parsed_args)
service_role_validation_message = (
" Either choose --use-default-roles or use both --service-role "
" and --ec2-attributes InstanceProfile=.")
if parsed_args.use_default_roles is True and \
parsed_args.service_role is not None:
raise exceptions.MutualExclusiveOptionError(
option1="--use-default-roles",
option2="--service-role",
message=service_role_validation_message)
if parsed_args.use_default_roles is True and \
parsed_args.ec2_attributes is not None and \
'InstanceProfile' in parsed_args.ec2_attributes:
raise exceptions.MutualExclusiveOptionError(
option1="--use-default-roles",
option2="--ec2-attributes InstanceProfile",
message=service_role_validation_message)
if parsed_args.instance_groups is not None and \
parsed_args.instance_fleets is not None:
raise exceptions.MutualExclusiveOptionError(
option1="--instance-groups",
option2="--instance-fleets")
instances_config = {}
if parsed_args.instance_fleets is not None:
instances_config['InstanceFleets'] = \
instancefleetsutils.validate_and_build_instance_fleets(
parsed_args.instance_fleets)
else:
instances_config['InstanceGroups'] = \
instancegroupsutils.validate_and_build_instance_groups(
instance_groups=parsed_args.instance_groups,
instance_type=parsed_args.instance_type,
instance_count=parsed_args.instance_count)
if parsed_args.release_label is not None:
params["ReleaseLabel"] = parsed_args.release_label
if parsed_args.configurations is not None:
try:
params["Configurations"] = json.loads(
parsed_args.configurations)
except ValueError:
raise ValueError('aws: error: invalid json argument for '
'option --configurations')
if (parsed_args.release_label is None and
parsed_args.ami_version is not None):
is_valid_ami_version = re.match('\d?\..*', parsed_args.ami_version)
if is_valid_ami_version is None:
raise exceptions.InvalidAmiVersionError(
ami_version=parsed_args.ami_version)
params['AmiVersion'] = parsed_args.ami_version
emrutils.apply_dict(
params, 'AdditionalInfo', parsed_args.additional_info)
emrutils.apply_dict(params, 'LogUri', parsed_args.log_uri)
if parsed_args.use_default_roles is True:
parsed_args.service_role = EMR_ROLE_NAME
if parsed_args.ec2_attributes is None:
parsed_args.ec2_attributes = {}
parsed_args.ec2_attributes['InstanceProfile'] = EC2_ROLE_NAME
emrutils.apply_dict(params, 'ServiceRole', parsed_args.service_role)
if parsed_args.instance_groups is not None:
for instance_group in instances_config['InstanceGroups']:
if 'AutoScalingPolicy' in instance_group.keys():
if parsed_args.auto_scaling_role is None:
raise exceptions.MissingAutoScalingRoleError()
emrutils.apply_dict(params, 'AutoScalingRole', parsed_args.auto_scaling_role)
if parsed_args.scale_down_behavior is not None:
emrutils.apply_dict(params, 'ScaleDownBehavior', parsed_args.scale_down_behavior)
if (
parsed_args.no_auto_terminate is False and
parsed_args.auto_terminate is False):
parsed_args.no_auto_terminate = True
instances_config['KeepJobFlowAliveWhenNoSteps'] = \
emrutils.apply_boolean_options(
parsed_args.no_auto_terminate,
'--no-auto-terminate',
parsed_args.auto_terminate,
'--auto-terminate')
instances_config['TerminationProtected'] = \
emrutils.apply_boolean_options(
parsed_args.termination_protected,
'--termination-protected',
parsed_args.no_termination_protected,
'--no-termination-protected')
if (parsed_args.visible_to_all_users is False and
parsed_args.no_visible_to_all_users is False):
parsed_args.visible_to_all_users = True
params['VisibleToAllUsers'] = \
emrutils.apply_boolean_options(
parsed_args.visible_to_all_users,
'--visible-to-all-users',
parsed_args.no_visible_to_all_users,
'--no-visible-to-all-users')
params['Tags'] = emrutils.parse_tags(parsed_args.tags)
params['Instances'] = instances_config
if parsed_args.ec2_attributes is not None:
self._build_ec2_attributes(
cluster=params, parsed_attrs=parsed_args.ec2_attributes)
debugging_enabled = emrutils.apply_boolean_options(
parsed_args.enable_debugging,
'--enable-debugging',
parsed_args.no_enable_debugging,
'--no-enable-debugging')
if parsed_args.log_uri is None and debugging_enabled is True:
raise exceptions.LogUriError
if debugging_enabled is True:
self._update_cluster_dict(
cluster=params,
key='Steps',
value=[
self._build_enable_debugging(parsed_args, parsed_globals)])
if parsed_args.applications is not None:
if parsed_args.release_label is None:
app_list, ba_list, step_list = \
applicationutils.build_applications(
region=self.region,
parsed_applications=parsed_args.applications,
ami_version=params['AmiVersion'])
self._update_cluster_dict(
params, 'NewSupportedProducts', app_list)
self._update_cluster_dict(
params, 'BootstrapActions', ba_list)
self._update_cluster_dict(
params, 'Steps', step_list)
else:
params["Applications"] = []
for application in parsed_args.applications:
params["Applications"].append(application)
hbase_restore_config = parsed_args.restore_from_hbase_backup
if hbase_restore_config is not None:
args = hbaseutils.build_hbase_restore_from_backup_args(
dir=hbase_restore_config.get('Dir'),
backup_version=hbase_restore_config.get('BackupVersion'))
step_config = emrutils.build_step(
jar=constants.HBASE_JAR_PATH,
name=constants.HBASE_RESTORE_STEP_NAME,
action_on_failure=constants.CANCEL_AND_WAIT,
args=args)
self._update_cluster_dict(
params, 'Steps', [step_config])
if parsed_args.bootstrap_actions is not None:
self._build_bootstrap_actions(
cluster=params,
parsed_boostrap_actions=parsed_args.bootstrap_actions)
if parsed_args.emrfs is not None:
self._handle_emrfs_parameters(
cluster=params,
emrfs_args=parsed_args.emrfs,
release_label=parsed_args.release_label)
if parsed_args.steps is not None:
steps_list = steputils.build_step_config_list(
parsed_step_list=parsed_args.steps,
region=self.region,
release_label=parsed_args.release_label)
self._update_cluster_dict(
cluster=params, key='Steps', value=steps_list)
if parsed_args.security_configuration is not None:
emrutils.apply_dict(
params, 'SecurityConfiguration', parsed_args.security_configuration)
if parsed_args.custom_ami_id is not None:
emrutils.apply_dict(
params, 'CustomAmiId', parsed_args.custom_ami_id
)
if parsed_args.ebs_root_volume_size is not None:
emrutils.apply_dict(
params, 'EbsRootVolumeSize', int(parsed_args.ebs_root_volume_size)
)
if parsed_args.repo_upgrade_on_boot is not None:
emrutils.apply_dict(
params, 'RepoUpgradeOnBoot', parsed_args.repo_upgrade_on_boot
)
if parsed_args.kerberos_attributes is not None:
emrutils.apply_dict(
params, 'KerberosAttributes', parsed_args.kerberos_attributes)
if parsed_args.step_concurrency_level is not None:
params['StepConcurrencyLevel'] = parsed_args.step_concurrency_level
self._validate_required_applications(parsed_args)
run_job_flow_response = emrutils.call(
self._session, 'run_job_flow', params, self.region,
parsed_globals.endpoint_url, parsed_globals.verify_ssl)
constructed_result = self._construct_result(run_job_flow_response)
emrutils.display_response(self._session, 'run_job_flow',
constructed_result, parsed_globals)
return 0
def _construct_result(self, run_job_flow_result):
jobFlowId = None
clusterArn = None
if run_job_flow_result is not None:
jobFlowId = run_job_flow_result.get('JobFlowId')
clusterArn = run_job_flow_result.get('ClusterArn')
if jobFlowId is not None:
return {'ClusterId': jobFlowId,
'ClusterArn': clusterArn }
else:
return {}
def _build_ec2_attributes(self, cluster, parsed_attrs):
keys = parsed_attrs.keys()
instances = cluster['Instances']
if ('SubnetId' in keys and 'SubnetIds' in keys):
raise exceptions.MutualExclusiveOptionError(
option1="SubnetId",
option2="SubnetIds")
if ('AvailabilityZone' in keys and 'AvailabilityZones' in keys):
raise exceptions.MutualExclusiveOptionError(
option1="AvailabilityZone",
option2="AvailabilityZones")
if ('SubnetId' in keys or 'SubnetIds' in keys) \
and ('AvailabilityZone' in keys or 'AvailabilityZones' in keys):
raise exceptions.SubnetAndAzValidationError
emrutils.apply_params(
src_params=parsed_attrs, src_key='KeyName',
dest_params=instances, dest_key='Ec2KeyName')
emrutils.apply_params(
src_params=parsed_attrs, src_key='SubnetId',
dest_params=instances, dest_key='Ec2SubnetId')
emrutils.apply_params(
src_params=parsed_attrs, src_key='SubnetIds',
dest_params=instances, dest_key='Ec2SubnetIds')
if 'AvailabilityZone' in keys:
instances['Placement'] = dict()
emrutils.apply_params(
src_params=parsed_attrs, src_key='AvailabilityZone',
dest_params=instances['Placement'],
dest_key='AvailabilityZone')
if 'AvailabilityZones' in keys:
instances['Placement'] = dict()
emrutils.apply_params(
src_params=parsed_attrs, src_key='AvailabilityZones',
dest_params=instances['Placement'],
dest_key='AvailabilityZones')
emrutils.apply_params(
src_params=parsed_attrs, src_key='InstanceProfile',
dest_params=cluster, dest_key='JobFlowRole')
emrutils.apply_params(
src_params=parsed_attrs, src_key='EmrManagedMasterSecurityGroup',
dest_params=instances, dest_key='EmrManagedMasterSecurityGroup')
emrutils.apply_params(
src_params=parsed_attrs, src_key='EmrManagedSlaveSecurityGroup',
dest_params=instances, dest_key='EmrManagedSlaveSecurityGroup')
emrutils.apply_params(
src_params=parsed_attrs, src_key='ServiceAccessSecurityGroup',
dest_params=instances, dest_key='ServiceAccessSecurityGroup')
emrutils.apply_params(
src_params=parsed_attrs, src_key='AdditionalMasterSecurityGroups',
dest_params=instances, dest_key='AdditionalMasterSecurityGroups')
emrutils.apply_params(
src_params=parsed_attrs, src_key='AdditionalSlaveSecurityGroups',
dest_params=instances, dest_key='AdditionalSlaveSecurityGroups')
emrutils.apply(params=cluster, key='Instances', value=instances)
return cluster
def _build_bootstrap_actions(
self, cluster, parsed_boostrap_actions):
cluster_ba_list = cluster.get('BootstrapActions')
if cluster_ba_list is None:
cluster_ba_list = []
bootstrap_actions = []
if len(cluster_ba_list) + len(parsed_boostrap_actions) \
> constants.MAX_BOOTSTRAP_ACTION_NUMBER:
raise ValueError('aws: error: maximum number of '
'bootstrap actions for a cluster exceeded.')
for ba in parsed_boostrap_actions:
ba_config = {}
if ba.get('Name') is not None:
ba_config['Name'] = ba.get('Name')
else:
ba_config['Name'] = constants.BOOTSTRAP_ACTION_NAME
script_arg_config = {}
emrutils.apply_params(
src_params=ba, src_key='Path',
dest_params=script_arg_config, dest_key='Path')
emrutils.apply_params(
src_params=ba, src_key='Args',
dest_params=script_arg_config, dest_key='Args')
emrutils.apply(
params=ba_config,
key='ScriptBootstrapAction',
value=script_arg_config)
bootstrap_actions.append(ba_config)
result = cluster_ba_list + bootstrap_actions
if len(result) > 0:
cluster['BootstrapActions'] = result
return cluster
def _build_enable_debugging(self, parsed_args, parsed_globals):
if parsed_args.release_label:
jar = constants.COMMAND_RUNNER
args = [constants.DEBUGGING_COMMAND]
else:
jar = emrutils.get_script_runner(self.region)
args = [emrutils.build_s3_link(
relative_path=constants.DEBUGGING_PATH,
region=self.region)]
return emrutils.build_step(
name=constants.DEBUGGING_NAME,
action_on_failure=constants.TERMINATE_CLUSTER,
jar=jar,
args=args)
def _update_cluster_dict(self, cluster, key, value):
if key in cluster.keys():
cluster[key] += value
elif value is not None and len(value) > 0:
cluster[key] = value
return cluster
def _validate_release_label_ami_version(self, parsed_args):
if parsed_args.ami_version is not None and \
parsed_args.release_label is not None:
raise exceptions.MutualExclusiveOptionError(
option1="--ami-version",
option2="--release-label")
if parsed_args.ami_version is None and \
parsed_args.release_label is None:
raise exceptions.RequiredOptionsError(
option1="--ami-version",
option2="--release-label")
# Checks if the applications required by steps are specified
# using the --applications option.
def _validate_required_applications(self, parsed_args):
specified_apps = set([])
if parsed_args.applications is not None:
specified_apps = \
set([app['Name'].lower() for app in parsed_args.applications])
missing_apps = self._get_missing_applications_for_steps(specified_apps,
parsed_args)
# Check for HBase.
if parsed_args.restore_from_hbase_backup is not None:
if constants.HBASE not in specified_apps:
missing_apps.add(constants.HBASE.title())
if len(missing_apps) != 0:
raise exceptions.MissingApplicationsError(
applications=missing_apps)
def _get_missing_applications_for_steps(self, specified_apps, parsed_args):
allowed_app_steps = set([constants.HIVE, constants.PIG,
constants.IMPALA])
missing_apps = set([])
if parsed_args.steps is not None:
for step in parsed_args.steps:
if len(missing_apps) == len(allowed_app_steps):
break
step_type = step.get('Type')
if step_type is not None:
step_type = step_type.lower()
if step_type in allowed_app_steps and \
step_type not in specified_apps:
missing_apps.add(step['Type'].title())
return missing_apps
def _filter_configurations_in_special_cases(self, configurations,
parsed_args, parsed_configs):
if parsed_args.use_default_roles:
configurations = [x for x in configurations
if x.name != 'service_role' and
x.name != 'instance_profile']
return configurations
def _handle_emrfs_parameters(self, cluster, emrfs_args, release_label):
if release_label:
self.validate_no_emrfs_configuration(cluster)
emrfs_configuration = emrfsutils.build_emrfs_confiuration(
emrfs_args)
self._update_cluster_dict(
cluster=cluster, key='Configurations',
value=[emrfs_configuration])
else:
emrfs_ba_config_list = emrfsutils.build_bootstrap_action_configs(
self.region, emrfs_args)
self._update_cluster_dict(
cluster=cluster, key='BootstrapActions',
value=emrfs_ba_config_list)
def validate_no_emrfs_configuration(self, cluster):
if 'Configurations' in cluster:
for config in cluster['Configurations']:
if config is not None and \
config.get('Classification') == constants.EMRFS_SITE:
raise exceptions.DuplicateEmrFsConfigurationError
awscli-1.17.14/awscli/customizations/emr/installapplications.py 0000644 0000000 0000000 00000005477 13620325554 024716 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.emr import applicationutils
from awscli.customizations.emr import argumentschema
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
class InstallApplications(Command):
NAME = 'install-applications'
DESCRIPTION = ('Installs applications on a running cluster. Currently only'
' Hive and Pig can be installed using this command, and'
' this command is only supported by AMI versions'
' (3.x and 2.x).')
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID},
{'name': 'applications', 'required': True,
'help_text': helptext.INSTALL_APPLICATIONS,
'schema': argumentschema.APPLICATIONS_SCHEMA},
]
# Applications supported by the install-applications command.
supported_apps = ['HIVE', 'PIG']
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'JobFlowId': parsed_args.cluster_id}
self._check_for_supported_apps(parsed_args.applications)
parameters['Steps'] = applicationutils.build_applications(
self.region, parsed_args.applications)[2]
emrutils.call_and_display_response(self._session, 'AddJobFlowSteps',
parameters, parsed_globals)
return 0
def _check_for_supported_apps(self, parsed_applications):
for app_config in parsed_applications:
app_name = app_config['Name'].upper()
if app_name in constants.APPLICATIONS:
if app_name not in self.supported_apps:
raise ValueError(
"aws: error: " + app_config['Name'] + " cannot be"
" installed on a running cluster. 'Name' should be one"
" of the following: " +
', '.join(self.supported_apps))
else:
raise ValueError(
"aws: error: Unknown application: " + app_config['Name'] +
". 'Name' should be one of the following: " +
', '.join(constants.APPLICATIONS))
awscli-1.17.14/awscli/customizations/emr/configutils.py 0000644 0000000 0000000 00000004772 13620325554 023164 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
from awscli.customizations.configure.writer import ConfigFileWriter
from awscli.customizations.emr.constants import EC2_ROLE_NAME
from awscli.customizations.emr.constants import EMR_ROLE_NAME
LOG = logging.getLogger(__name__)
def get_configs(session):
return session.get_scoped_config().get('emr', {})
def get_current_profile_name(session):
profile_name = session.get_config_variable('profile')
return 'default' if profile_name is None else profile_name
def get_current_profile_var_name(session):
return _get_profile_str(session, '.')
def _get_profile_str(session, separator):
profile_name = session.get_config_variable('profile')
return 'default' if profile_name is None \
else 'profile%c%s' % (separator, profile_name)
def is_any_role_configured(session):
parsed_configs = get_configs(session)
return True if ('instance_profile' in parsed_configs or
'service_role' in parsed_configs) \
else False
def update_roles(session):
if is_any_role_configured(session):
LOG.debug("At least one of the roles is already associated with "
"your current profile ")
else:
config_writer = ConfigWriter(session)
config_writer.update_config('service_role', EMR_ROLE_NAME)
config_writer.update_config('instance_profile', EC2_ROLE_NAME)
LOG.debug("Associated default roles with your current profile")
class ConfigWriter(object):
def __init__(self, session):
self.session = session
self.section = _get_profile_str(session, ' ')
self.config_file_writer = ConfigFileWriter()
def update_config(self, key, value):
config_filename = \
os.path.expanduser(self.session.get_config_variable('config_file'))
updated_config = {'__section__': self.section,
'emr': {key: value}}
self.config_file_writer.update_config(updated_config, config_filename)
awscli-1.17.14/awscli/customizations/emr/describecluster.py 0000644 0000000 0000000 00000011170 13620325554 024006 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from awscli.customizations.commands import BasicCommand
from awscli.customizations.emr import constants
from awscli.customizations.emr import emrutils
from awscli.customizations.emr import helptext
from awscli.customizations.emr.command import Command
from botocore.exceptions import NoCredentialsError
class DescribeCluster(Command):
NAME = 'describe-cluster'
DESCRIPTION = ('Provides cluster-level details including status, hardware'
' and software configuration, VPC settings, bootstrap'
' actions, instance groups and so on. For information about'
' the cluster steps, see list-steps
.')
ARG_TABLE = [
{'name': 'cluster-id', 'required': True,
'help_text': helptext.CLUSTER_ID}
]
def _run_main_command(self, parsed_args, parsed_globals):
parameters = {'ClusterId': parsed_args.cluster_id}
list_instance_fleets_result = None
list_instance_groups_result = None
is_fleet_based_cluster = False
describe_cluster_result = self._call(
self._session, 'describe_cluster', parameters, parsed_globals)
if 'Cluster' in describe_cluster_result:
describe_cluster = describe_cluster_result['Cluster']
if describe_cluster.get('InstanceCollectionType') == constants.INSTANCE_FLEET_TYPE:
is_fleet_based_cluster = True
if 'Ec2InstanceAttributes' in describe_cluster:
ec2_instance_attr_keys = \
describe_cluster['Ec2InstanceAttributes'].keys()
ec2_instance_attr = \
describe_cluster['Ec2InstanceAttributes']
else:
ec2_instance_attr_keys = {}
if is_fleet_based_cluster:
list_instance_fleets_result = self._call(
self._session, 'list_instance_fleets', parameters,
parsed_globals)
else:
list_instance_groups_result = self._call(
self._session, 'list_instance_groups', parameters,
parsed_globals)
list_bootstrap_actions_result = self._call(
self._session, 'list_bootstrap_actions',
parameters, parsed_globals)
constructed_result = self._construct_result(
describe_cluster_result,
list_instance_fleets_result,
list_instance_groups_result,
list_bootstrap_actions_result)
emrutils.display_response(self._session, 'describe_cluster',
constructed_result, parsed_globals)
return 0
def _call(self, session, operation_name, parameters, parsed_globals):
return emrutils.call(
session, operation_name, parameters,
region_name=self.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
def _get_key_of_result(self, keys):
# Return the first key that is not "Marker"
for key in keys:
if key != "Marker":
return key
def _construct_result(
self, describe_cluster_result, list_instance_fleets_result,
list_instance_groups_result, list_bootstrap_actions_result):
result = describe_cluster_result
result['Cluster']['BootstrapActions'] = []
if (list_instance_fleets_result is not None and
list_instance_fleets_result.get('InstanceFleets') is not None):
result['Cluster']['InstanceFleets'] = \
list_instance_fleets_result.get('InstanceFleets')
if (list_instance_groups_result is not None and
list_instance_groups_result.get('InstanceGroups') is not None):
result['Cluster']['InstanceGroups'] = \
list_instance_groups_result.get('InstanceGroups')
if (list_bootstrap_actions_result is not None and
list_bootstrap_actions_result.get('BootstrapActions')
is not None):
result['Cluster']['BootstrapActions'] = \
list_bootstrap_actions_result['BootstrapActions']
return result
awscli-1.17.14/awscli/customizations/putmetricdata.py 0000644 0000000 0000000 00000015401 13620325554 022710 0 ustar root root 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This customization adds the following scalar parameters to the
cloudwatch put-metric-data operation:
* --metric-name
* --dimensions
* --timestamp
* --value
* --statistic-values
* --unit
* --storage-resolution
"""
import decimal
from awscli.arguments import CustomArgument
from awscli.utils import split_on_commas
from awscli.customizations.utils import validate_mutually_exclusive_handler
def register_put_metric_data(event_handler):
event_handler.register(
'building-argument-table.cloudwatch.put-metric-data', _promote_args)
event_handler.register(
'operation-args-parsed.cloudwatch.put-metric-data',
validate_mutually_exclusive_handler(
['metric_data'], ['metric_name', 'timestamp', 'unit', 'value',
'dimensions', 'statistic_values']))
def _promote_args(argument_table, operation_model, **kwargs):
# We're providing top level params for metric-data. This means
# that metric-data is now longer a required arg. We do need
# to check that either metric-data or the complex args we've added
# have been provided.
argument_table['metric-data'].required = False
argument_table['metric-name'] = PutMetricArgument(
'metric-name', help_text='The name of the metric.')
argument_table['timestamp'] = PutMetricArgument(
'timestamp', help_text='The time stamp used for the metric. '
'If not specified, the default value is '
'set to the time the metric data was '
'received.')
argument_table['unit'] = PutMetricArgument(
'unit', help_text='The unit of metric.')
argument_table['value'] = PutMetricArgument(
'value', help_text='The value for the metric. Although the --value '
'parameter accepts numbers of type Double, '
'Amazon CloudWatch truncates values with very '
'large exponents. Values with base-10 exponents '
'greater than 126 (1 x 10^126) are truncated. '
'Likewise, values with base-10 exponents less '
'than -130 (1 x 10^-130) are also truncated.')
argument_table['dimensions'] = PutMetricArgument(
'dimensions', help_text=(
'The --dimensions argument further expands '
'on the identity of a metric using a Name=Value '
'pair, separated by commas, for example: '
'--dimensions InstanceID=1-23456789,InstanceType=m1.small'
'
. Note that the --dimensions
argument has a '
'different format when used in get-metric-data
, '
'where for the same example you would use the format '
'--dimensions Name=InstanceID,Value=i-aaba32d4 '
'Name=InstanceType,value=m1.small
.'
)
)
argument_table['statistic-values'] = PutMetricArgument(
'statistic-values', help_text='A set of statistical values describing '
'the metric.')
metric_data = operation_model.input_shape.members['MetricData'].member
storage_resolution = metric_data.members['StorageResolution']
argument_table['storage-resolution'] = PutMetricArgument(
'storage-resolution', help_text=storage_resolution.documentation
)
def insert_first_element(name):
def _wrap_add_to_params(func):
def _add_to_params(self, parameters, value):
if value is None:
return
if name not in parameters:
# We're taking a shortcut here and assuming that the first
# element is a struct type, hence the default value of
# a dict. If this was going to be more general we'd need
# to have this paramterized, i.e. you pass in some sort of
# factory function that creates the initial starting value.
parameters[name] = [{}]
first_element = parameters[name][0]
return func(self, first_element, value)
return _add_to_params
return _wrap_add_to_params
class PutMetricArgument(CustomArgument):
def add_to_params(self, parameters, value):
method_name = '_add_param_%s' % self.name.replace('-', '_')
return getattr(self, method_name)(parameters, value)
@insert_first_element('MetricData')
def _add_param_metric_name(self, first_element, value):
first_element['MetricName'] = value
@insert_first_element('MetricData')
def _add_param_unit(self, first_element, value):
first_element['Unit'] = value
@insert_first_element('MetricData')
def _add_param_timestamp(self, first_element, value):
first_element['Timestamp'] = value
@insert_first_element('MetricData')
def _add_param_value(self, first_element, value):
# Use a Decimal to avoid loss in precision.
first_element['Value'] = decimal.Decimal(value)
@insert_first_element('MetricData')
def _add_param_dimensions(self, first_element, value):
# Dimensions needs a little more processing. We support
# the key=value,key2=value syntax so we need to parse
# that.
dimensions = []
for pair in split_on_commas(value):
key, value = pair.split('=')
dimensions.append({'Name': key, 'Value': value})
first_element['Dimensions'] = dimensions
@insert_first_element('MetricData')
def _add_param_statistic_values(self, first_element, value):
# StatisticValues is a struct type so we are parsing
# a csv keyval list into a dict.
statistics = {}
for pair in split_on_commas(value):
key, value = pair.split('=')
# There are four supported values: Maximum, Minimum, SampleCount,
# and Sum. All of them are documented as a type double so we can
# convert these to a decimal value to preserve precision.
statistics[key] = decimal.Decimal(value)
first_element['StatisticValues'] = statistics
@insert_first_element('MetricData')
def _add_param_storage_resolution(self, first_element, value):
first_element['StorageResolution'] = int(value)
awscli-1.17.14/awscli/customizations/s3events.py 0000644 0000000 0000000 00000006730 13620325554 021621 0 ustar root root 0000000 0000000 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Add S3 specific event streaming output arg."""
from awscli.arguments import CustomArgument
STREAM_HELP_TEXT = 'Filename where the records will be saved'
class DocSectionNotFoundError(Exception):
pass
def register_event_stream_arg(event_handlers):
event_handlers.register(
'building-argument-table.s3api.select-object-content',
add_event_stream_output_arg)
event_handlers.register_last(
'doc-output.s3api.select-object-content',
replace_event_stream_docs
)
def add_event_stream_output_arg(argument_table, operation_model,
session, **kwargs):
argument_table['outfile'] = S3SelectStreamOutputArgument(
name='outfile', help_text=STREAM_HELP_TEXT,
cli_type_name='string', positional_arg=True,
stream_key=operation_model.output_shape.serialization['payload'],
session=session)
def replace_event_stream_docs(help_command, **kwargs):
doc = help_command.doc
current = ''
while current != '======\nOutput\n======':
try:
current = doc.pop_write()
except IndexError:
# This should never happen, but in the rare case that it does
# we should be raising something with a helpful error message.
raise DocSectionNotFoundError(
'Could not find the "output" section for the command: %s'
% help_command)
doc.write('======\nOutput\n======\n')
doc.write("This command generates no output. The selected "
"object content is written to the specified outfile.\n")
class S3SelectStreamOutputArgument(CustomArgument):
_DOCUMENT_AS_REQUIRED = True
def __init__(self, stream_key, session, **kwargs):
super(S3SelectStreamOutputArgument, self).__init__(**kwargs)
# This is the key in the response body where we can find the
# streamed contents.
self._stream_key = stream_key
self._output_file = None
self._session = session
def add_to_params(self, parameters, value):
self._output_file = value
self._session.register('after-call.s3.SelectObjectContent',
self.save_file)
def save_file(self, parsed, **kwargs):
# This method is hooked into after-call which fires
# before the error checking happens in the client.
# Therefore if the stream_key is not in the parsed
# response we immediately return and let the default
# error handling happen.
if self._stream_key not in parsed:
return
event_stream = parsed[self._stream_key]
with open(self._output_file, 'wb') as fp:
for event in event_stream:
if 'Records' in event:
fp.write(event['Records']['Payload'])
# We don't want to include the streaming param in
# the returned response, it's not JSON serializable.
del parsed[self._stream_key]
awscli-1.17.14/awscli/argparser.py 0000644 0000000 0000000 00000016203 13620325554 016736 0 ustar root root 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import argparse
import sys
from awscli.compat import six
from difflib import get_close_matches
HELP_BLURB = (
"To see help text, you can run:\n"
"\n"
" aws help\n"
" aws help\n"
" aws help\n"
)
USAGE = (
"aws [options] [ ...] [parameters]\n"
"%s" % HELP_BLURB
)
class CommandAction(argparse.Action):
"""Custom action for CLI command arguments
Allows the choices for the argument to be mutable. The choices
are dynamically retrieved from the keys of the referenced command
table
"""
def __init__(self, option_strings, dest, command_table, **kwargs):
self.command_table = command_table
super(CommandAction, self).__init__(
option_strings, dest, choices=self.choices, **kwargs
)
def __call__(self, parser, namespace, values, option_string=None):
setattr(namespace, self.dest, values)
@property
def choices(self):
return list(self.command_table.keys())
@choices.setter
def choices(self, val):
# argparse.Action will always try to set this value upon
# instantiation, but this value should be dynamically
# generated from the command table keys. So make this a
# NOOP if argparse.Action tries to set this value.
pass
class CLIArgParser(argparse.ArgumentParser):
Formatter = argparse.RawTextHelpFormatter
# When displaying invalid choice error messages,
# this controls how many options to show per line.
ChoicesPerLine = 2
def _check_value(self, action, value):
"""
It's probably not a great idea to override a "hidden" method
but the default behavior is pretty ugly and there doesn't
seem to be any other way to change it.
"""
# converted value must be one of the choices (if specified)
if action.choices is not None and value not in action.choices:
msg = ['Invalid choice, valid choices are:\n']
for i in range(len(action.choices))[::self.ChoicesPerLine]:
current = []
for choice in action.choices[i:i+self.ChoicesPerLine]:
current.append('%-40s' % choice)
msg.append(' | '.join(current))
possible = get_close_matches(value, action.choices, cutoff=0.8)
if possible:
extra = ['\n\nInvalid choice: %r, maybe you meant:\n' % value]
for word in possible:
extra.append(' * %s' % word)
msg.extend(extra)
raise argparse.ArgumentError(action, '\n'.join(msg))
def parse_known_args(self, args, namespace=None):
parsed, remaining = super(CLIArgParser, self).parse_known_args(args, namespace)
terminal_encoding = getattr(sys.stdin, 'encoding', 'utf-8')
if terminal_encoding is None:
# In some cases, sys.stdin won't have an encoding set,
# (e.g if it's set to a StringIO). In this case we just
# default to utf-8.
terminal_encoding = 'utf-8'
for arg, value in vars(parsed).items():
if isinstance(value, six.binary_type):
setattr(parsed, arg, value.decode(terminal_encoding))
elif isinstance(value, list):
encoded = []
for v in value:
if isinstance(v, six.binary_type):
encoded.append(v.decode(terminal_encoding))
else:
encoded.append(v)
setattr(parsed, arg, encoded)
return parsed, remaining
class MainArgParser(CLIArgParser):
Formatter = argparse.RawTextHelpFormatter
def __init__(self, command_table, version_string,
description, argument_table, prog=None):
super(MainArgParser, self).__init__(
formatter_class=self.Formatter,
add_help=False,
conflict_handler='resolve',
description=description,
usage=USAGE,
prog=prog)
self._build(command_table, version_string, argument_table)
def _create_choice_help(self, choices):
help_str = ''
for choice in sorted(choices):
help_str += '* %s\n' % choice
return help_str
def _build(self, command_table, version_string, argument_table):
for argument_name in argument_table:
argument = argument_table[argument_name]
argument.add_to_parser(self)
self.add_argument('--version', action="version",
version=version_string,
help='Display the version of this tool')
self.add_argument('command', action=CommandAction,
command_table=command_table)
class ServiceArgParser(CLIArgParser):
def __init__(self, operations_table, service_name):
super(ServiceArgParser, self).__init__(
formatter_class=argparse.RawTextHelpFormatter,
add_help=False,
conflict_handler='resolve',
usage=USAGE)
self._build(operations_table)
self._service_name = service_name
def _build(self, operations_table):
self.add_argument('operation', action=CommandAction,
command_table=operations_table)
class ArgTableArgParser(CLIArgParser):
"""CLI arg parser based on an argument table."""
def __init__(self, argument_table, command_table=None):
# command_table is an optional subcommand_table. If it's passed
# in, then we'll update the argparse to parse a 'subcommand' argument
# and populate the choices field with the command table keys.
super(ArgTableArgParser, self).__init__(
formatter_class=self.Formatter,
add_help=False,
usage=USAGE,
conflict_handler='resolve')
if command_table is None:
command_table = {}
self._build(argument_table, command_table)
def _build(self, argument_table, command_table):
for arg_name in argument_table:
argument = argument_table[arg_name]
argument.add_to_parser(self)
if command_table:
self.add_argument('subcommand', action=CommandAction,
command_table=command_table, nargs='?')
def parse_known_args(self, args, namespace=None):
if len(args) == 1 and args[0] == 'help':
namespace = argparse.Namespace()
namespace.help = 'help'
return namespace, []
else:
return super(ArgTableArgParser, self).parse_known_args(
args, namespace)
awscli-1.17.14/awscli/errorhandler.py 0000644 0000000 0000000 00000005723 13620325554 017444 0 ustar root root 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
LOG = logging.getLogger(__name__)
class BaseOperationError(Exception):
MSG_TEMPLATE = ("A {error_type} error ({error_code}) occurred "
"when calling the {operation_name} operation: "
"{error_message}")
def __init__(self, error_code, error_message, error_type, operation_name,
http_status_code):
msg = self.MSG_TEMPLATE.format(
error_code=error_code, error_message=error_message,
error_type=error_type, operation_name=operation_name)
super(BaseOperationError, self).__init__(msg)
self.error_code = error_code
self.error_message = error_message
self.error_type = error_type
self.operation_name = operation_name
self.http_status_code = http_status_code
class ClientError(BaseOperationError):
pass
class ServerError(BaseOperationError):
pass
class ErrorHandler(object):
"""
This class is responsible for handling any HTTP errors that occur
when a service operation is called. It is registered for the
``after-call`` event and will have the opportunity to inspect
all operation calls. If the HTTP response contains an error
``status_code`` an appropriate error message will be printed and
the handler will short-circuit all further processing by exiting
with an appropriate error code.
"""
def __call__(self, http_response, parsed, model, **kwargs):
LOG.debug('HTTP Response Code: %d', http_response.status_code)
error_type = None
error_class = None
if http_response.status_code >= 500:
error_type = 'server'
error_class = ServerError
elif http_response.status_code >= 400 or http_response.status_code == 301:
error_type = 'client'
error_class = ClientError
if error_class is not None:
code, message = self._get_error_code_and_message(parsed)
raise error_class(
error_code=code, error_message=message,
error_type=error_type, operation_name=model.name,
http_status_code=http_response.status_code)
def _get_error_code_and_message(self, response):
code = 'Unknown'
message = 'Unknown'
if 'Error' in response:
error = response['Error']
return error.get('Code', code), error.get('Message', message)
return (code, message)
awscli-1.17.14/awscli/handlers.py 0000644 0000000 0000000 00000023142 13620325630 016543 0 ustar root root 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Builtin CLI extensions.
This is a collection of built in CLI extensions that can be automatically
registered with the event system.
"""
from awscli.argprocess import ParamShorthandParser
from awscli.paramfile import register_uri_param_handler
from awscli.customizations import datapipeline
from awscli.customizations.addexamples import add_examples
from awscli.customizations.argrename import register_arg_renames
from awscli.customizations.assumerole import register_assume_role_provider
from awscli.customizations.awslambda import register_lambda_create_function
from awscli.customizations.cliinputjson import register_cli_input_json
from awscli.customizations.cloudformation import initialize as cloudformation_init
from awscli.customizations.cloudfront import register as register_cloudfront
from awscli.customizations.cloudsearch import initialize as cloudsearch_init
from awscli.customizations.cloudsearchdomain import register_cloudsearchdomain
from awscli.customizations.cloudtrail import initialize as cloudtrail_init
from awscli.customizations.codecommit import initialize as codecommit_init
from awscli.customizations.codedeploy.codedeploy import initialize as \
codedeploy_init
from awscli.customizations.configservice.getstatus import register_get_status
from awscli.customizations.configservice.putconfigurationrecorder import \
register_modify_put_configuration_recorder
from awscli.customizations.configservice.rename_cmd import \
register_rename_config
from awscli.customizations.configservice.subscribe import register_subscribe
from awscli.customizations.configure.configure import register_configure_cmd
from awscli.customizations.history import register_history_mode
from awscli.customizations.history import register_history_commands
from awscli.customizations.ec2.addcount import register_count_events
from awscli.customizations.ec2.bundleinstance import register_bundleinstance
from awscli.customizations.ec2.decryptpassword import ec2_add_priv_launch_key
from awscli.customizations.ec2.protocolarg import register_protocol_args
from awscli.customizations.ec2.runinstances import register_runinstances
from awscli.customizations.ec2.secgroupsimplify import register_secgroup
from awscli.customizations.ec2.paginate import register_ec2_page_size_injector
from awscli.customizations.ecr import register_ecr_commands
from awscli.customizations.emr.emr import emr_initialize
from awscli.customizations.eks import initialize as eks_initialize
from awscli.customizations.ecs import initialize as ecs_initialize
from awscli.customizations.gamelift import register_gamelift_commands
from awscli.customizations.generatecliskeleton import \
register_generate_cli_skeleton
from awscli.customizations.globalargs import register_parse_global_args
from awscli.customizations.iamvirtmfa import IAMVMFAWrapper
from awscli.customizations.iot import register_create_keys_and_cert_arguments
from awscli.customizations.iot import register_create_keys_from_csr_arguments
from awscli.customizations.iot_data import register_custom_endpoint_note
from awscli.customizations.kms import register_fix_kms_create_grant_docs
from awscli.customizations.dlm.dlm import dlm_initialize
from awscli.customizations.opsworks import initialize as opsworks_init
from awscli.customizations.paginate import register_pagination
from awscli.customizations.preview import register_preview_commands
from awscli.customizations.putmetricdata import register_put_metric_data
from awscli.customizations.rds import register_rds_modify_split
from awscli.customizations.rds import register_add_generate_db_auth_token
from awscli.customizations.rekognition import register_rekognition_detect_labels
from awscli.customizations.removals import register_removals
from awscli.customizations.route53 import register_create_hosted_zone_doc_fix
from awscli.customizations.s3.s3 import s3_plugin_initialize
from awscli.customizations.s3endpoint import register_s3_endpoint
from awscli.customizations.s3errormsg import register_s3_error_msg
from awscli.customizations.scalarparse import register_scalar_parser
from awscli.customizations.sessendemail import register_ses_send_email
from awscli.customizations.streamingoutputarg import add_streaming_output_arg
from awscli.customizations.translate import register_translate_import_terminology
from awscli.customizations.toplevelbool import register_bool_params
from awscli.customizations.waiters import register_add_waiters
from awscli.customizations.opsworkscm import register_alias_opsworks_cm
from awscli.customizations.mturk import register_alias_mturk_command
from awscli.customizations.sagemaker import register_alias_sagemaker_runtime_command
from awscli.customizations.servicecatalog import register_servicecatalog_commands
from awscli.customizations.s3events import register_event_stream_arg
from awscli.customizations.sessionmanager import register_ssm_session
from awscli.customizations.sms_voice import register_sms_voice_hide
from awscli.customizations.dynamodb import register_dynamodb_paginator_fix
def awscli_initialize(event_handlers):
event_handlers.register('session-initialized', register_uri_param_handler)
param_shorthand = ParamShorthandParser()
event_handlers.register('process-cli-arg', param_shorthand)
# The s3 error mesage needs to registered before the
# generic error handler.
register_s3_error_msg(event_handlers)
# # The following will get fired for every option we are
# # documenting. It will attempt to add an example_fn on to
# # the parameter object if the parameter supports shorthand
# # syntax. The documentation event handlers will then use
# # the examplefn to generate the sample shorthand syntax
# # in the docs. Registering here should ensure that this
# # handler gets called first but it still feels a bit brittle.
# event_handlers.register('doc-option-example.*.*.*',
# param_shorthand.add_example_fn)
event_handlers.register('doc-examples.*.*',
add_examples)
register_cli_input_json(event_handlers)
event_handlers.register('building-argument-table.*',
add_streaming_output_arg)
register_count_events(event_handlers)
event_handlers.register('building-argument-table.ec2.get-password-data',
ec2_add_priv_launch_key)
register_parse_global_args(event_handlers)
register_pagination(event_handlers)
register_secgroup(event_handlers)
register_bundleinstance(event_handlers)
s3_plugin_initialize(event_handlers)
register_runinstances(event_handlers)
register_removals(event_handlers)
register_preview_commands(event_handlers)
register_rds_modify_split(event_handlers)
register_rekognition_detect_labels(event_handlers)
register_add_generate_db_auth_token(event_handlers)
register_put_metric_data(event_handlers)
register_ses_send_email(event_handlers)
IAMVMFAWrapper(event_handlers)
register_arg_renames(event_handlers)
register_configure_cmd(event_handlers)
cloudtrail_init(event_handlers)
register_ecr_commands(event_handlers)
register_bool_params(event_handlers)
register_protocol_args(event_handlers)
datapipeline.register_customizations(event_handlers)
cloudsearch_init(event_handlers)
emr_initialize(event_handlers)
eks_initialize(event_handlers)
ecs_initialize(event_handlers)
register_cloudsearchdomain(event_handlers)
register_s3_endpoint(event_handlers)
register_generate_cli_skeleton(event_handlers)
register_assume_role_provider(event_handlers)
register_add_waiters(event_handlers)
codedeploy_init(event_handlers)
register_subscribe(event_handlers)
register_get_status(event_handlers)
register_rename_config(event_handlers)
register_scalar_parser(event_handlers)
opsworks_init(event_handlers)
register_lambda_create_function(event_handlers)
register_fix_kms_create_grant_docs(event_handlers)
register_create_hosted_zone_doc_fix(event_handlers)
register_modify_put_configuration_recorder(event_handlers)
codecommit_init(event_handlers)
register_custom_endpoint_note(event_handlers)
event_handlers.register(
'building-argument-table.iot.create-keys-and-certificate',
register_create_keys_and_cert_arguments)
event_handlers.register(
'building-argument-table.iot.create-certificate-from-csr',
register_create_keys_from_csr_arguments)
register_cloudfront(event_handlers)
register_gamelift_commands(event_handlers)
register_ec2_page_size_injector(event_handlers)
cloudformation_init(event_handlers)
register_alias_opsworks_cm(event_handlers)
register_alias_mturk_command(event_handlers)
register_alias_sagemaker_runtime_command(event_handlers)
register_servicecatalog_commands(event_handlers)
register_translate_import_terminology(event_handlers)
register_history_mode(event_handlers)
register_history_commands(event_handlers)
register_event_stream_arg(event_handlers)
dlm_initialize(event_handlers)
register_ssm_session(event_handlers)
register_sms_voice_hide(event_handlers)
register_dynamodb_paginator_fix(event_handlers)
awscli-1.17.14/awscli/arguments.py 0000644 0000000 0000000 00000044602 13620325554 016761 0 ustar root root 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Abstractions for CLI arguments.
This module contains abstractions for representing CLI arguments.
This includes how the CLI argument parser is created, how arguments
are serialized, and how arguments are bound (if at all) to operation
arguments.
The BaseCLIArgument is the interface for all arguments. This is the interface
expected by objects that work with arguments. If you want to implement your
own argument subclass, make sure it implements everything in BaseCLIArgument.
Arguments generally fall into one of several categories:
* global argument. These arguments may influence what the CLI does,
but aren't part of the input parameters needed to make an API call. For
example, the ``--region`` argument specifies which region to send the request
to. The ``--output`` argument specifies how to display the response to the
user. The ``--query`` argument specifies how to select specific elements
from a response.
* operation argument. These are arguments that influence the parameters we
send to a service when making an API call. Some of these arguments are
automatically created directly from introspecting the JSON service model.
Sometimes customizations may provide a pseudo-argument that takes the
user input and maps the input value to several API parameters.
"""
import logging
from botocore import xform_name
from botocore.hooks import first_non_none_response
from awscli.argprocess import unpack_cli_arg
from awscli.schema import SchemaTransformer
from botocore import model
LOG = logging.getLogger('awscli.arguments')
class UnknownArgumentError(Exception):
pass
def create_argument_model_from_schema(schema):
# Given a JSON schems (described in schema.py), convert it
# to a shape object from `botocore.model.Shape` that can be
# used as the argument_model for the Argument classes below.
transformer = SchemaTransformer()
shapes_map = transformer.transform(schema)
shape_resolver = model.ShapeResolver(shapes_map)
# The SchemaTransformer guarantees that the top level shape
# will always be named 'InputShape'.
arg_shape = shape_resolver.get_shape_by_name('InputShape')
return arg_shape
class BaseCLIArgument(object):
"""Interface for CLI argument.
This class represents the interface used for representing CLI
arguments.
"""
def __init__(self, name):
self._name = name
def add_to_arg_table(self, argument_table):
"""Add this object to the argument_table.
The ``argument_table`` represents the argument for the operation.
This is called by the ``ServiceOperation`` object to create the
arguments associated with the operation.
:type argument_table: dict
:param argument_table: The argument table. The key is the argument
name, and the value is an object implementing this interface.
"""
argument_table[self.name] = self
def add_to_parser(self, parser):
"""Add this object to the parser instance.
This method is called by the associated ``ArgumentParser``
instance. This method should make the relevant calls
to ``add_argument`` to add itself to the argparser.
:type parser: ``argparse.ArgumentParser``.
:param parser: The argument parser associated with the operation.
"""
pass
def add_to_params(self, parameters, value):
"""Add this object to the parameters dict.
This method is responsible for taking the value specified
on the command line, and deciding how that corresponds to
parameters used by the service/operation.
:type parameters: dict
:param parameters: The parameters dictionary that will be
given to ``botocore``. This should match up to the
parameters associated with the particular operation.
:param value: The value associated with the CLI option.
"""
pass
@property
def name(self):
return self._name
@property
def cli_name(self):
return '--' + self._name
@property
def cli_type_name(self):
raise NotImplementedError("cli_type_name")
@property
def required(self):
raise NotImplementedError("required")
@property
def documentation(self):
raise NotImplementedError("documentation")
@property
def cli_type(self):
raise NotImplementedError("cli_type")
@property
def py_name(self):
return self._name.replace('-', '_')
@property
def choices(self):
"""List valid choices for argument value.
If this value is not None then this should return a list of valid
values for the argument.
"""
return None
@property
def synopsis(self):
return ''
@property
def positional_arg(self):
return False
@property
def nargs(self):
return None
@name.setter
def name(self, value):
self._name = value
@property
def group_name(self):
"""Get the group name associated with the argument.
An argument can be part of a group. This property will
return the name of that group.
This base class has no default behavior for groups, code
that consumes argument objects can use them for whatever
purposes they like (documentation, mutually exclusive group
validation, etc.).
"""
return None
class CustomArgument(BaseCLIArgument):
"""
Represents a CLI argument that is configured from a dictionary.
For example, the "top level" arguments used for the CLI
(--region, --output) can use a CustomArgument argument,
as these are described in the cli.json file as dictionaries.
This class is also useful for plugins/customizations that want to
add additional args.
"""
def __init__(self, name, help_text='', dest=None, default=None,
action=None, required=None, choices=None, nargs=None,
cli_type_name=None, group_name=None, positional_arg=False,
no_paramfile=False, argument_model=None, synopsis='',
const=None):
self._name = name
self._help = help_text
self._dest = dest
self._default = default
self._action = action
self._required = required
self._nargs = nargs
self._const = const
self._cli_type_name = cli_type_name
self._group_name = group_name
self._positional_arg = positional_arg
if choices is None:
choices = []
self._choices = choices
self._synopsis = synopsis
# These are public attributes that are ok to access from external
# objects.
self.no_paramfile = no_paramfile
self.argument_model = None
if argument_model is None:
argument_model = self._create_scalar_argument_model()
self.argument_model = argument_model
# If the top level element is a list then set nargs to
# accept multiple values seperated by a space.
if self.argument_model is not None and \
self.argument_model.type_name == 'list':
self._nargs = '+'
def _create_scalar_argument_model(self):
if self._nargs is not None:
# If nargs is not None then argparse will parse the value
# as a list, so we don't create an argument_object so we don't
# go through param validation.
return None
# If no argument model is provided, we create a basic
# shape argument.
type_name = self.cli_type_name
return create_argument_model_from_schema({'type': type_name})
@property
def cli_name(self):
if self._positional_arg:
return self._name
else:
return '--' + self._name
def add_to_parser(self, parser):
"""
See the ``BaseCLIArgument.add_to_parser`` docs for more information.
"""
cli_name = self.cli_name
kwargs = {}
if self._dest is not None:
kwargs['dest'] = self._dest
if self._action is not None:
kwargs['action'] = self._action
if self._default is not None:
kwargs['default'] = self._default
if self._choices:
kwargs['choices'] = self._choices
if self._required is not None:
kwargs['required'] = self._required
if self._nargs is not None:
kwargs['nargs'] = self._nargs
if self._const is not None:
kwargs['const'] = self._const
parser.add_argument(cli_name, **kwargs)
@property
def required(self):
if self._required is None:
return False
return self._required
@required.setter
def required(self, value):
self._required = value
@property
def documentation(self):
return self._help
@property
def cli_type_name(self):
if self._cli_type_name is not None:
return self._cli_type_name
elif self._action in ['store_true', 'store_false']:
return 'boolean'
elif self.argument_model is not None:
return self.argument_model.type_name
else:
# Default to 'string' type if we don't have any
# other info.
return 'string'
@property
def cli_type(self):
cli_type = str
if self._action in ['store_true', 'store_false']:
cli_type = bool
return cli_type
@property
def choices(self):
return self._choices
@property
def group_name(self):
return self._group_name
@property
def synopsis(self):
return self._synopsis
@property
def positional_arg(self):
return self._positional_arg
@property
def nargs(self):
return self._nargs
class CLIArgument(BaseCLIArgument):
"""Represents a CLI argument that maps to a service parameter.
"""
TYPE_MAP = {
'structure': str,
'map': str,
'timestamp': str,
'list': str,
'string': str,
'float': float,
'integer': str,
'long': int,
'boolean': bool,
'double': float,
'blob': str
}
def __init__(self, name, argument_model, operation_model,
event_emitter, is_required=False,
serialized_name=None):
"""
:type name: str
:param name: The name of the argument in "cli" form
(e.g. ``min-instances``).
:type argument_model: ``botocore.model.Shape``
:param argument_model: The shape object that models the argument.
:type argument_model: ``botocore.model.OperationModel``
:param argument_model: The object that models the associated operation.
:type event_emitter: ``botocore.hooks.BaseEventHooks``
:param event_emitter: The event emitter to use when emitting events.
This class will emit events during parts of the argument
parsing process. This event emitter is what is used to emit
such events.
:type is_required: boolean
:param is_required: Indicates if this parameter is required or not.
"""
self._name = name
# This is the name we need to use when constructing the parameters
# dict we send to botocore. While we can change the .name attribute
# which is the name exposed in the CLI, the serialized name we use
# for botocore is invariant and should not be changed.
if serialized_name is None:
serialized_name = name
self._serialized_name = serialized_name
self.argument_model = argument_model
self._required = is_required
self._operation_model = operation_model
self._event_emitter = event_emitter
self._documentation = argument_model.documentation
@property
def py_name(self):
return self._name.replace('-', '_')
@property
def required(self):
return self._required
@required.setter
def required(self, value):
self._required = value
@property
def documentation(self):
return self._documentation
@documentation.setter
def documentation(self, value):
self._documentation = value
@property
def cli_type_name(self):
return self.argument_model.type_name
@property
def cli_type(self):
return self.TYPE_MAP.get(self.argument_model.type_name, str)
def add_to_parser(self, parser):
"""
See the ``BaseCLIArgument.add_to_parser`` docs for more information.
"""
cli_name = self.cli_name
parser.add_argument(
cli_name,
help=self.documentation,
type=self.cli_type,
required=self.required)
def add_to_params(self, parameters, value):
if value is None:
return
else:
# This is a two step process. First is the process of converting
# the command line value into a python value. Normally this is
# handled by argparse directly, but there are cases where extra
# processing is needed. For example, "--foo name=value" the value
# can be converted from "name=value" to {"name": "value"}. This is
# referred to as the "unpacking" process. Once we've unpacked the
# argument value, we have to decide how this is converted into
# something that can be consumed by botocore. Many times this is
# just associating the key and value in the params dict as down
# below. Sometimes this can be more complicated, and subclasses
# can customize as they need.
unpacked = self._unpack_argument(value)
LOG.debug('Unpacked value of %r for parameter "%s": %r', value,
self.py_name, unpacked)
parameters[self._serialized_name] = unpacked
def _unpack_argument(self, value):
service_name = self._operation_model.service_model.service_name
operation_name = xform_name(self._operation_model.name, '-')
override = self._emit_first_response('process-cli-arg.%s.%s' % (
service_name, operation_name), param=self.argument_model,
cli_argument=self, value=value)
if override is not None:
# A plugin supplied an alternate conversion,
# use it instead.
return override
else:
# Fall back to the default arg processing.
return unpack_cli_arg(self, value)
def _emit(self, name, **kwargs):
return self._event_emitter.emit(name, **kwargs)
def _emit_first_response(self, name, **kwargs):
responses = self._emit(name, **kwargs)
return first_non_none_response(responses)
class ListArgument(CLIArgument):
def add_to_parser(self, parser):
cli_name = self.cli_name
parser.add_argument(cli_name,
nargs='*',
type=self.cli_type,
required=self.required)
class BooleanArgument(CLIArgument):
"""Represent a boolean CLI argument.
A boolean parameter is specified without a value::
aws foo bar --enabled
For cases wher the boolean parameter is required we need to add
two parameters::
aws foo bar --enabled
aws foo bar --no-enabled
We use the capabilities of the CLIArgument to help achieve this.
"""
def __init__(self, name, argument_model, operation_model,
event_emitter,
is_required=False, action='store_true', dest=None,
group_name=None, default=None,
serialized_name=None):
super(BooleanArgument, self).__init__(name,
argument_model,
operation_model,
event_emitter,
is_required,
serialized_name=serialized_name)
self._mutex_group = None
self._action = action
if dest is None:
self._destination = self.py_name
else:
self._destination = dest
if group_name is None:
self._group_name = self.name
else:
self._group_name = group_name
self._default = default
def add_to_params(self, parameters, value):
# If a value was explicitly specified (so value is True/False
# but *not* None) then we add it to the params dict.
# If the value was not explicitly set (value is None)
# we don't add it to the params dict.
if value is not None:
parameters[self._serialized_name] = value
def add_to_arg_table(self, argument_table):
# Boolean parameters are a bit tricky. For a single boolean parameter
# we actually want two CLI params, a --foo, and a --no-foo. To do this
# we need to add two entries to the argument table. So we can add
# ourself as the positive option (--no), and then create a clone of
# ourselves for the negative service. We then insert both into the
# arg table.
argument_table[self.name] = self
negative_name = 'no-%s' % self.name
negative_version = self.__class__(
negative_name, self.argument_model,
self._operation_model, self._event_emitter,
action='store_false', dest=self._destination,
group_name=self.group_name, serialized_name=self._serialized_name)
argument_table[negative_name] = negative_version
def add_to_parser(self, parser):
parser.add_argument(self.cli_name,
help=self.documentation,
action=self._action,
default=self._default,
dest=self._destination)
@property
def group_name(self):
return self._group_name
awscli-1.17.14/awscli/argprocess.py 0000644 0000000 0000000 00000050721 13620325554 017123 0 ustar root root 0000000 0000000 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Module for processing CLI args."""
import os
import logging
from awscli.compat import six
from botocore.compat import OrderedDict, json
from awscli import SCALAR_TYPES, COMPLEX_TYPES
from awscli import shorthand
from awscli.utils import find_service_and_method_in_event_name
from botocore.utils import is_json_value_header
LOG = logging.getLogger('awscli.argprocess')
class ParamError(Exception):
def __init__(self, cli_name, message):
"""
:type cli_name: string
:param cli_name: The complete cli argument name,
e.g. "--foo-bar". It should include the leading
hyphens if that's how a user would specify the name.
:type message: string
:param message: The error message to display to the user.
"""
full_message = ("Error parsing parameter '%s': %s" %
(cli_name, message))
super(ParamError, self).__init__(full_message)
self.cli_name = cli_name
self.message = message
class ParamSyntaxError(Exception):
pass
class ParamUnknownKeyError(Exception):
def __init__(self, key, valid_keys):
valid_keys = ', '.join(valid_keys)
full_message = (
"Unknown key '%s', valid choices "
"are: %s" % (key, valid_keys))
super(ParamUnknownKeyError, self).__init__(full_message)
class TooComplexError(Exception):
pass
def unpack_argument(session, service_name, operation_name, cli_argument, value):
"""
Unpack an argument's value from the commandline. This is part one of a two
step process in handling commandline arguments. Emits the load-cli-arg
event with service, operation, and parameter names. Example::
load-cli-arg.ec2.describe-instances.foo
"""
param_name = getattr(cli_argument, 'name', 'anonymous')
value_override = session.emit_first_non_none_response(
'load-cli-arg.%s.%s.%s' % (service_name,
operation_name,
param_name),
param=cli_argument, value=value, service_name=service_name,
operation_name=operation_name)
if value_override is not None:
value = value_override
return value
def detect_shape_structure(param):
stack = []
return _detect_shape_structure(param, stack)
def _detect_shape_structure(param, stack):
if param.name in stack:
return 'recursive'
else:
stack.append(param.name)
try:
if param.type_name in SCALAR_TYPES:
return 'scalar'
elif param.type_name == 'structure':
sub_types = [_detect_shape_structure(p, stack)
for p in param.members.values()]
# We're distinguishing between structure(scalar)
# and structure(scalars), because for the case of
# a single scalar in a structure we can simplify
# more than a structure(scalars).
if len(sub_types) == 1 and all(p == 'scalar' for p in sub_types):
return 'structure(scalar)'
elif len(sub_types) > 1 and all(p == 'scalar' for p in sub_types):
return 'structure(scalars)'
else:
return 'structure(%s)' % ', '.join(sorted(set(sub_types)))
elif param.type_name == 'list':
return 'list-%s' % _detect_shape_structure(param.member, stack)
elif param.type_name == 'map':
if param.value.type_name in SCALAR_TYPES:
return 'map-scalar'
else:
return 'map-%s' % _detect_shape_structure(param.value, stack)
finally:
stack.pop()
def unpack_cli_arg(cli_argument, value):
"""
Parses and unpacks the encoded string command line parameter
and returns native Python data structures that can be passed
to the Operation.
:type cli_argument: :class:`awscli.arguments.BaseCLIArgument`
:param cli_argument: The CLI argument object.
:param value: The value of the parameter. This can be a number of
different python types (str, list, etc). This is the value as
it's specified on the command line.
:return: The "unpacked" argument than can be sent to the `Operation`
object in python.
"""
return _unpack_cli_arg(cli_argument.argument_model, value,
cli_argument.cli_name)
def _special_type(model):
# check if model is jsonvalue header and that value is serializable
if model.serialization.get('jsonvalue') and \
model.serialization.get('location') == 'header' and \
model.type_name == 'string':
return True
return False
def _unpack_cli_arg(argument_model, value, cli_name):
if is_json_value_header(argument_model):
return _unpack_json_cli_arg(argument_model, value, cli_name)
elif argument_model.type_name in SCALAR_TYPES:
return unpack_scalar_cli_arg(
argument_model, value, cli_name)
elif argument_model.type_name in COMPLEX_TYPES:
return _unpack_complex_cli_arg(
argument_model, value, cli_name)
else:
return six.text_type(value)
def _unpack_json_cli_arg(argument_model, value, cli_name):
try:
return json.loads(value, object_pairs_hook=OrderedDict)
except ValueError as e:
raise ParamError(
cli_name, "Invalid JSON: %s\nJSON received: %s"
% (e, value))
def _unpack_complex_cli_arg(argument_model, value, cli_name):
type_name = argument_model.type_name
if type_name == 'structure' or type_name == 'map':
if value.lstrip()[0] == '{':
try:
return json.loads(value, object_pairs_hook=OrderedDict)
except ValueError as e:
raise ParamError(
cli_name, "Invalid JSON: %s\nJSON received: %s"
% (e, value))
raise ParamError(cli_name, "Invalid JSON:\n%s" % value)
elif type_name == 'list':
if isinstance(value, six.string_types):
if value.lstrip()[0] == '[':
return json.loads(value, object_pairs_hook=OrderedDict)
elif isinstance(value, list) and len(value) == 1:
single_value = value[0].strip()
if single_value and single_value[0] == '[':
return json.loads(value[0], object_pairs_hook=OrderedDict)
try:
# There's a couple of cases remaining here.
# 1. It's possible that this is just a list of strings, i.e
# --security-group-ids sg-1 sg-2 sg-3 => ['sg-1', 'sg-2', 'sg-3']
# 2. It's possible this is a list of json objects:
# --filters '{"Name": ..}' '{"Name": ...}'
member_shape_model = argument_model.member
return [_unpack_cli_arg(member_shape_model, v, cli_name)
for v in value]
except (ValueError, TypeError) as e:
# The list params don't have a name/cli_name attached to them
# so they will have bad error messages. We're going to
# attach the parent parameter to this error message to provide
# a more helpful error message.
raise ParamError(cli_name, value[0])
def unpack_scalar_cli_arg(argument_model, value, cli_name=''):
# Note the cli_name is used strictly for error reporting. It's
# not required to use unpack_scalar_cli_arg
if argument_model.type_name == 'integer' or argument_model.type_name == 'long':
return int(value)
elif argument_model.type_name == 'float' or argument_model.type_name == 'double':
# TODO: losing precision on double types
return float(value)
elif argument_model.type_name == 'blob' and \
argument_model.serialization.get('streaming'):
file_path = os.path.expandvars(value)
file_path = os.path.expanduser(file_path)
if not os.path.isfile(file_path):
msg = 'Blob values must be a path to a file.'
raise ParamError(cli_name, msg)
return open(file_path, 'rb')
elif argument_model.type_name == 'boolean':
if isinstance(value, six.string_types) and value.lower() == 'false':
return False
return bool(value)
else:
return value
def _is_complex_shape(model):
if model.type_name not in ['structure', 'list', 'map']:
return False
elif model.type_name == 'list':
if model.member.type_name not in ['structure', 'list', 'map']:
return False
return True
class ParamShorthand(object):
def _uses_old_list_case(self, service_id, operation_name, argument_name):
"""
Determines whether a given operation for a service needs to use the
deprecated shorthand parsing case for lists of structures that only have
a single member.
"""
cases = {
'firehose': {
'put-record-batch': ['records']
},
'workspaces': {
'reboot-workspaces': ['reboot-workspace-requests'],
'rebuild-workspaces': ['rebuild-workspace-requests'],
'terminate-workspaces': ['terminate-workspace-requests']
},
'elastic-load-balancing': {
'remove-tags': ['tags'],
'describe-instance-health': ['instances'],
'deregister-instances-from-load-balancer': ['instances'],
'register-instances-with-load-balancer': ['instances']
}
}
cases = cases.get(service_id, {}).get(operation_name, [])
return argument_name in cases
class ParamShorthandParser(ParamShorthand):
def __init__(self):
self._parser = shorthand.ShorthandParser()
self._visitor = shorthand.BackCompatVisitor()
def __call__(self, cli_argument, value, event_name, **kwargs):
"""Attempt to parse shorthand syntax for values.
This is intended to be hooked up as an event handler (hence the
**kwargs). Given ``param`` object and its string ``value``,
figure out if we can parse it. If we can parse it, we return
the parsed value (typically some sort of python dict).
:type cli_argument: :class:`awscli.arguments.BaseCLIArgument`
:param cli_argument: The CLI argument object.
:type param: :class:`botocore.parameters.Parameter`
:param param: The parameter object (includes various metadata
about the parameter).
:type value: str
:param value: The value for the parameter type on the command
line, e.g ``--foo this_value``, value would be ``"this_value"``.
:returns: If we can parse the value we return the parsed value.
If it looks like JSON, we return None (which tells the event
emitter to use the default ``unpack_cli_arg`` provided that
no other event handlers can parsed the value). If we
run into an error parsing the value, a ``ParamError`` will
be raised.
"""
if not self._should_parse_as_shorthand(cli_argument, value):
return
else:
service_id, operation_name = \
find_service_and_method_in_event_name(event_name)
return self._parse_as_shorthand(
cli_argument, value, service_id, operation_name)
def _parse_as_shorthand(self, cli_argument, value, service_id,
operation_name):
try:
LOG.debug("Parsing param %s as shorthand",
cli_argument.cli_name)
handled_value = self._handle_special_cases(
cli_argument, value, service_id, operation_name)
if handled_value is not None:
return handled_value
if isinstance(value, list):
# Because of how we're using argparse, list shapes
# are configured with nargs='+' which means the ``value``
# is given to us "conveniently" as a list. When
# this happens we need to parse each list element
# individually.
parsed = [self._parser.parse(v) for v in value]
self._visitor.visit(parsed, cli_argument.argument_model)
else:
# Otherwise value is just a string.
parsed = self._parser.parse(value)
self._visitor.visit(parsed, cli_argument.argument_model)
except shorthand.ShorthandParseError as e:
raise ParamError(cli_argument.cli_name, str(e))
except (ParamError, ParamUnknownKeyError) as e:
# The shorthand parse methods don't have the cli_name,
# so any ParamError won't have this value. To accommodate
# this, ParamErrors are caught and reraised with the cli_name
# injected.
raise ParamError(cli_argument.cli_name, str(e))
return parsed
def _handle_special_cases(self, cli_argument, value, service_id,
operation_name):
# We need to handle a few special cases that the previous
# parser handled in order to stay backwards compatible.
model = cli_argument.argument_model
if model.type_name == 'list' and \
model.member.type_name == 'structure' and \
len(model.member.members) == 1 and \
self._uses_old_list_case(service_id, operation_name, cli_argument.name):
# First special case is handling a list of structures
# of a single element such as:
#
# --instance-ids id-1 id-2 id-3
#
# gets parsed as:
#
# [{"InstanceId": "id-1"}, {"InstanceId": "id-2"},
# {"InstanceId": "id-3"}]
key_name = list(model.member.members.keys())[0]
new_values = [{key_name: v} for v in value]
return new_values
elif model.type_name == 'structure' and \
len(model.members) == 1 and \
'Value' in model.members and \
model.members['Value'].type_name == 'string' and \
'=' not in value:
# Second special case is where a structure of a single
# value whose member name is "Value" can be specified
# as:
# --instance-terminate-behavior shutdown
#
# gets parsed as:
# {"Value": "shutdown"}
return {'Value': value}
def _should_parse_as_shorthand(self, cli_argument, value):
# We first need to make sure this is a parameter that qualifies
# for simplification. The first short-circuit case is if it looks
# like json we immediately return.
if value and isinstance(value, list):
check_val = value[0]
else:
check_val = value
if isinstance(check_val, six.string_types) and check_val.strip().startswith(
('[', '{')):
LOG.debug("Param %s looks like JSON, not considered for "
"param shorthand.", cli_argument.py_name)
return False
model = cli_argument.argument_model
# The second case is to make sure the argument is sufficiently
# complex, that is, it's base type is a complex type *and*
# if it's a list, then it can't be a list of scalar types.
return _is_complex_shape(model)
class ParamShorthandDocGen(ParamShorthand):
"""Documentation generator for param shorthand syntax."""
_DONT_DOC = object()
_MAX_STACK = 3
def supports_shorthand(self, argument_model):
"""Checks if a CLI argument supports shorthand syntax."""
if argument_model is not None:
return _is_complex_shape(argument_model)
return False
def generate_shorthand_example(self, cli_argument, service_id,
operation_name):
"""Generate documentation for a CLI argument.
:type cli_argument: awscli.arguments.BaseCLIArgument
:param cli_argument: The CLI argument which to generate
documentation for.
:return: Returns either a string or ``None``. If a string
is returned, it is the generated shorthand example.
If a value of ``None`` is returned then this indicates
that no shorthand syntax is available for the provided
``argument_model``.
"""
docstring = self._handle_special_cases(
cli_argument, service_id, operation_name)
if docstring is self._DONT_DOC:
return None
elif docstring:
return docstring
# Otherwise we fall back to the normal docgen for shorthand
# syntax.
stack = []
try:
if cli_argument.argument_model.type_name == 'list':
argument_model = cli_argument.argument_model.member
return self._shorthand_docs(argument_model, stack) + ' ...'
else:
return self._shorthand_docs(cli_argument.argument_model, stack)
except TooComplexError:
return ''
def _handle_special_cases(self, cli_argument, service_id, operation_name):
model = cli_argument.argument_model
if model.type_name == 'list' and \
model.member.type_name == 'structure' and \
len(model.member.members) == 1 and \
self._uses_old_list_case(
service_id, operation_name, cli_argument.name):
member_name = list(model.member.members)[0]
# Handle special case where the min/max is exactly one.
metadata = model.metadata
if metadata.get('min') == 1 and metadata.get('max') == 1:
return '%s %s1' % (cli_argument.cli_name, member_name)
return '%s %s1 %s2 %s3' % (cli_argument.cli_name, member_name,
member_name, member_name)
elif model.type_name == 'structure' and \
len(model.members) == 1 and \
'Value' in model.members and \
model.members['Value'].type_name == 'string':
return self._DONT_DOC
return ''
def _shorthand_docs(self, argument_model, stack):
if len(stack) > self._MAX_STACK:
raise TooComplexError()
if argument_model.type_name == 'structure':
return self._structure_docs(argument_model, stack)
elif argument_model.type_name == 'list':
return self._list_docs(argument_model, stack)
elif argument_model.type_name == 'map':
return self._map_docs(argument_model, stack)
else:
return argument_model.type_name
def _list_docs(self, argument_model, stack):
list_member = argument_model.member
stack.append(list_member.name)
try:
element_docs = self._shorthand_docs(argument_model.member, stack)
finally:
stack.pop()
if list_member.type_name in COMPLEX_TYPES or len(stack) > 1:
return '[%s,%s]' % (element_docs, element_docs)
else:
return '%s,%s' % (element_docs, element_docs)
def _map_docs(self, argument_model, stack):
k = argument_model.key
value_docs = self._shorthand_docs(argument_model.value, stack)
start = 'KeyName1=%s,KeyName2=%s' % (value_docs, value_docs)
if k.enum and not stack:
start += '\n\nWhere valid key names are:\n'
for enum in k.enum:
start += ' %s\n' % enum
elif stack:
start = '{%s}' % start
return start
def _structure_docs(self, argument_model, stack):
parts = []
for name, member_shape in argument_model.members.items():
parts.append(self._member_docs(name, member_shape, stack))
inner_part = ','.join(parts)
if not stack:
return inner_part
return '{%s}' % inner_part
def _member_docs(self, name, shape, stack):
if stack.count(shape.name) > 0:
return '( ... recursive ... )'
stack.append(shape.name)
try:
value_doc = self._shorthand_docs(shape, stack)
finally:
stack.pop()
return '%s=%s' % (name, value_doc)
awscli-1.17.14/awscli/paramfile.py 0000644 0000000 0000000 00000023470 13620325630 016707 0 ustar root root 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
import copy
from botocore.awsrequest import AWSRequest
from botocore.httpsession import URLLib3Session
from botocore.exceptions import ProfileNotFound
from awscli.compat import six
from awscli.compat import compat_open
from awscli.argprocess import ParamError
logger = logging.getLogger(__name__)
# These are special cased arguments that do _not_ get the
# special param file processing. This is typically because it
# refers to an actual URI of some sort and we don't want to actually
# download the content (i.e TemplateURL in cloudformation).
PARAMFILE_DISABLED = set([
'api-gateway.put-integration.uri',
'api-gateway.create-integration.integration-uri',
'api-gateway.create-api.target',
'api-gateway.update-api.target',
'appstream.create-stack.redirect-url',
'appstream.create-stack.feedback-url',
'appstream.update-stack.redirect-url',
'appstream.update-stack.feedback-url',
'cloudformation.create-stack.template-url',
'cloudformation.update-stack.template-url',
'cloudformation.create-stack-set.template-url',
'cloudformation.update-stack-set.template-url',
'cloudformation.create-change-set.template-url',
'cloudformation.validate-template.template-url',
'cloudformation.estimate-template-cost.template-url',
'cloudformation.get-template-summary.template-url',
'cloudformation.create-stack.stack-policy-url',
'cloudformation.update-stack.stack-policy-url',
'cloudformation.set-stack-policy.stack-policy-url',
# aws cloudformation package --template-file
'custom.package.template-file',
# aws cloudformation deploy --template-file
'custom.deploy.template-file',
'cloudformation.update-stack.stack-policy-during-update-url',
# We will want to change the event name to ``s3`` as opposed to
# custom in the near future along with ``s3`` to ``s3api``.
'custom.cp.website-redirect',
'custom.mv.website-redirect',
'custom.sync.website-redirect',
'guardduty.create-ip-set.location',
'guardduty.update-ip-set.location',
'guardduty.create-threat-intel-set.location',
'guardduty.update-threat-intel-set.location',
'comprehend.detect-dominant-language.text',
'comprehend.batch-detect-dominant-language.text-list',
'comprehend.detect-entities.text',
'comprehend.batch-detect-entities.text-list',
'comprehend.detect-key-phrases.text',
'comprehend.batch-detect-key-phrases.text-list',
'comprehend.detect-sentiment.text',
'comprehend.batch-detect-sentiment.text-list',
'iam.create-open-id-connect-provider.url',
'machine-learning.predict.predict-endpoint',
'mediatailor.put-playback-configuration.ad-decision-server-url',
'mediatailor.put-playback-configuration.slate-ad-url',
'mediatailor.put-playback-configuration.video-content-source-url',
'rds.copy-db-cluster-snapshot.pre-signed-url',
'rds.create-db-cluster.pre-signed-url',
'rds.copy-db-snapshot.pre-signed-url',
'rds.create-db-instance-read-replica.pre-signed-url',
'sagemaker.create-notebook-instance.default-code-repository',
'sagemaker.create-notebook-instance.additional-code-repositories',
'sagemaker.update-notebook-instance.default-code-repository',
'sagemaker.update-notebook-instance.additional-code-repositories',
'serverlessapplicationrepository.create-application.home-page-url',
'serverlessapplicationrepository.create-application.license-url',
'serverlessapplicationrepository.create-application.readme-url',
'serverlessapplicationrepository.create-application.source-code-url',
'serverlessapplicationrepository.create-application.template-url',
'serverlessapplicationrepository.create-application-version.source-code-url',
'serverlessapplicationrepository.create-application-version.template-url',
'serverlessapplicationrepository.update-application.home-page-url',
'serverlessapplicationrepository.update-application.readme-url',
'service-catalog.create-product.support-url',
'service-catalog.update-product.support-url',
'sqs.add-permission.queue-url',
'sqs.change-message-visibility.queue-url',
'sqs.change-message-visibility-batch.queue-url',
'sqs.delete-message.queue-url',
'sqs.delete-message-batch.queue-url',
'sqs.delete-queue.queue-url',
'sqs.get-queue-attributes.queue-url',
'sqs.list-dead-letter-source-queues.queue-url',
'sqs.receive-message.queue-url',
'sqs.remove-permission.queue-url',
'sqs.send-message.queue-url',
'sqs.send-message-batch.queue-url',
'sqs.set-queue-attributes.queue-url',
'sqs.purge-queue.queue-url',
'sqs.list-queue-tags.queue-url',
'sqs.tag-queue.queue-url',
'sqs.untag-queue.queue-url',
's3.copy-object.website-redirect-location',
's3.create-multipart-upload.website-redirect-location',
's3.put-object.website-redirect-location',
# Double check that this has been renamed!
'sns.subscribe.notification-endpoint',
'iot.create-job.document-source',
'translate.translate-text.text',
'workdocs.create-notification-subscription.notification-endpoint'
])
class ResourceLoadingError(Exception):
pass
def register_uri_param_handler(session, **kwargs):
prefix_map = copy.deepcopy(LOCAL_PREFIX_MAP)
try:
fetch_url = session.get_scoped_config().get(
'cli_follow_urlparam', 'true') == 'true'
except ProfileNotFound:
# If a --profile is provided that does not exist, loading
# a value from get_scoped_config will crash the CLI.
# This function can be called as the first handler for
# the session-initialized event, which happens before a
# profile can be created, even if the command would have
# successfully created a profile. Instead of crashing here
# on a ProfileNotFound the CLI should just use 'none'.
fetch_url = True
if fetch_url:
prefix_map.update(REMOTE_PREFIX_MAP)
handler = URIArgumentHandler(prefix_map)
session.register('load-cli-arg', handler)
class URIArgumentHandler(object):
def __init__(self, prefixes=None):
if prefixes is None:
prefixes = copy.deepcopy(LOCAL_PREFIX_MAP)
prefixes.update(REMOTE_PREFIX_MAP)
self._prefixes = prefixes
def __call__(self, event_name, param, value, **kwargs):
"""Handler that supports param values from URIs."""
cli_argument = param
qualified_param_name = '.'.join(event_name.split('.')[1:])
if qualified_param_name in PARAMFILE_DISABLED or \
getattr(cli_argument, 'no_paramfile', None):
return
else:
return self._check_for_uri_param(cli_argument, value)
def _check_for_uri_param(self, param, value):
if isinstance(value, list) and len(value) == 1:
value = value[0]
try:
return get_paramfile(value, self._prefixes)
except ResourceLoadingError as e:
raise ParamError(param.cli_name, six.text_type(e))
def get_paramfile(path, cases):
"""Load parameter based on a resource URI.
It is possible to pass parameters to operations by referring
to files or URI's. If such a reference is detected, this
function attempts to retrieve the data from the file or URI
and returns it. If there are any errors or if the ``path``
does not appear to refer to a file or URI, a ``None`` is
returned.
:type path: str
:param path: The resource URI, e.g. file://foo.txt. This value
may also be a non resource URI, in which case ``None`` is returned.
:type cases: dict
:param cases: A dictionary of URI prefixes to function mappings
that a parameter is checked against.
:return: The loaded value associated with the resource URI.
If the provided ``path`` is not a resource URI, then a
value of ``None`` is returned.
"""
data = None
if isinstance(path, six.string_types):
for prefix, function_spec in cases.items():
if path.startswith(prefix):
function, kwargs = function_spec
data = function(prefix, path, **kwargs)
return data
def get_file(prefix, path, mode):
file_path = os.path.expandvars(os.path.expanduser(path[len(prefix):]))
try:
with compat_open(file_path, mode) as f:
return f.read()
except UnicodeDecodeError:
raise ResourceLoadingError(
'Unable to load paramfile (%s), text contents could '
'not be decoded. If this is a binary file, please use the '
'fileb:// prefix instead of the file:// prefix.' % file_path)
except (OSError, IOError) as e:
raise ResourceLoadingError('Unable to load paramfile %s: %s' % (
path, e))
def get_uri(prefix, uri):
try:
session = URLLib3Session()
r = session.send(AWSRequest('GET', uri).prepare())
if r.status_code == 200:
return r.text
else:
raise ResourceLoadingError(
"received non 200 status code of %s" % (
r.status_code))
except Exception as e:
raise ResourceLoadingError('Unable to retrieve %s: %s' % (uri, e))
LOCAL_PREFIX_MAP = {
'file://': (get_file, {'mode': 'r'}),
'fileb://': (get_file, {'mode': 'rb'}),
}
REMOTE_PREFIX_MAP = {
'http://': (get_uri, {}),
'https://': (get_uri, {}),
}
awscli-1.17.14/awscli/shorthand.py 0000644 0000000 0000000 00000034523 13620325556 016751 0 ustar root root 0000000 0000000 # Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Module for parsing shorthand syntax.
This module parses any CLI options that use a "shorthand"
syntax::
--foo A=b,C=d
|------|
|
Shorthand syntax
This module provides two main classes to do this.
First, there's a ``ShorthandParser`` class. This class works
on a purely syntactic level. It looks only at the string value
provided to it in order to figure out how the string should be parsed.
However, because there was a pre-existing shorthand parser, we need
to remain backwards compatible with the previous parser. One of the
things the previous parser did was use the associated JSON model to
control how the expression was parsed.
In order to accommodate this a post processing class is provided that
takes the parsed values from the ``ShorthandParser`` as well as the
corresponding JSON model for the CLI argument and makes any adjustments
necessary to maintain backwards compatibility. This is done in the
``BackCompatVisitor`` class.
"""
import re
import string
_EOF = object()
class _NamedRegex(object):
def __init__(self, name, regex_str):
self.name = name
self.regex = re.compile(regex_str, re.UNICODE)
def match(self, value):
return self.regex.match(value)
class ShorthandParseError(Exception):
def __init__(self, value, expected, actual, index):
self.value = value
self.expected = expected
self.actual = actual
self.index = index
msg = self._construct_msg()
super(ShorthandParseError, self).__init__(msg)
def _construct_msg(self):
consumed, remaining, num_spaces = self.value, '', self.index
if '\n' in self.value[:self.index]:
# If there's newlines in the consumed expression, we want
# to make sure we're only counting the spaces
# from the last newline:
# foo=bar,\n
# bar==baz
# ^
last_newline = self.value[:self.index].rindex('\n')
num_spaces = self.index - last_newline - 1
if '\n' in self.value[self.index:]:
# If there's newline in the remaining, divide value
# into consumed and remainig
# foo==bar,\n
# ^
# bar=baz
next_newline = self.index + self.value[self.index:].index('\n')
consumed = self.value[:next_newline]
remaining = self.value[next_newline:]
msg = (
"Expected: '%s', received: '%s' for input:\n"
"%s\n"
"%s"
"%s"
) % (self.expected, self.actual, consumed,
' ' * num_spaces + '^', remaining)
return msg
class ShorthandParser(object):
"""Parses shorthand syntax in the CLI.
Note that this parser does not rely on any JSON models to control
how to parse the shorthand syntax.
"""
_SINGLE_QUOTED = _NamedRegex('singled quoted', r'\'(?:\\\\|\\\'|[^\'])*\'')
_DOUBLE_QUOTED = _NamedRegex('double quoted', r'"(?:\\\\|\\"|[^"])*"')
_START_WORD = u'\!\#-&\(-\+\--\<\>-Z\\\\-z\u007c-\uffff'
_FIRST_FOLLOW_CHARS = u'\s\!\#-&\(-\+\--\\\\\^-\|~-\uffff'
_SECOND_FOLLOW_CHARS = u'\s\!\#-&\(-\+\--\<\>-\uffff'
_ESCAPED_COMMA = '(\\\\,)'
_FIRST_VALUE = _NamedRegex(
'first',
u'({escaped_comma}|[{start_word}])'
u'({escaped_comma}|[{follow_chars}])*'.format(
escaped_comma=_ESCAPED_COMMA,
start_word=_START_WORD,
follow_chars=_FIRST_FOLLOW_CHARS,
))
_SECOND_VALUE = _NamedRegex(
'second',
u'({escaped_comma}|[{start_word}])'
u'({escaped_comma}|[{follow_chars}])*'.format(
escaped_comma=_ESCAPED_COMMA,
start_word=_START_WORD,
follow_chars=_SECOND_FOLLOW_CHARS,
))
def __init__(self):
self._tokens = []
def parse(self, value):
"""Parse shorthand syntax.
For example::
parser = ShorthandParser()
parser.parse('a=b') # {'a': 'b'}
parser.parse('a=b,c') # {'a': ['b', 'c']}
:tpye value: str
:param value: Any value that needs to be parsed.
:return: Parsed value, which will be a dictionary.
"""
self._input_value = value
self._index = 0
return self._parameter()
def _parameter(self):
# parameter = keyval *("," keyval)
params = {}
params.update(self._keyval())
while self._index < len(self._input_value):
self._expect(',', consume_whitespace=True)
params.update(self._keyval())
return params
def _keyval(self):
# keyval = key "=" [values]
key = self._key()
self._expect('=', consume_whitespace=True)
values = self._values()
return {key: values}
def _key(self):
# key = 1*(alpha / %x30-39 / %x5f / %x2e / %x23) ; [a-zA-Z0-9\-_.#/]
valid_chars = string.ascii_letters + string.digits + '-_.#/:'
start = self._index
while not self._at_eof():
if self._current() not in valid_chars:
break
self._index += 1
return self._input_value[start:self._index]
def _values(self):
# values = csv-list / explicit-list / hash-literal
if self._at_eof():
return ''
elif self._current() == '[':
return self._explicit_list()
elif self._current() == '{':
return self._hash_literal()
else:
return self._csv_value()
def _csv_value(self):
# Supports either:
# foo=bar -> 'bar'
# ^
# foo=bar,baz -> ['bar', 'baz']
# ^
first_value = self._first_value()
self._consume_whitespace()
if self._at_eof() or self._input_value[self._index] != ',':
return first_value
self._expect(',', consume_whitespace=True)
csv_list = [first_value]
# Try to parse remaining list values.
# It's possible we don't parse anything:
# a=b,c=d
# ^-here
# In the case above, we'll hit the ShorthandParser,
# backtrack to the comma, and return a single scalar
# value 'b'.
while True:
try:
current = self._second_value()
self._consume_whitespace()
if self._at_eof():
csv_list.append(current)
break
self._expect(',', consume_whitespace=True)
csv_list.append(current)
except ShorthandParseError:
# Backtrack to the previous comma.
# This can happen when we reach this case:
# foo=a,b,c=d,e=f
# ^-start
# foo=a,b,c=d,e=f
# ^-error, "expected ',' received '='
# foo=a,b,c=d,e=f
# ^-backtrack to here.
if self._at_eof():
raise
self._backtrack_to(',')
break
if len(csv_list) == 1:
# Then this was a foo=bar case, so we expect
# this to parse to a scalar value 'bar', i.e
# {"foo": "bar"} instead of {"bar": ["bar"]}
return first_value
return csv_list
def _value(self):
result = self._FIRST_VALUE.match(self._input_value[self._index:])
if result is not None:
consumed = self._consume_matched_regex(result)
return consumed.replace('\\,', ',').rstrip()
return ''
def _explicit_list(self):
# explicit-list = "[" [value *(",' value)] "]"
self._expect('[', consume_whitespace=True)
values = []
while self._current() != ']':
val = self._explicit_values()
values.append(val)
self._consume_whitespace()
if self._current() != ']':
self._expect(',')
self._consume_whitespace()
self._expect(']')
return values
def _explicit_values(self):
# values = csv-list / explicit-list / hash-literal
if self._current() == '[':
return self._explicit_list()
elif self._current() == '{':
return self._hash_literal()
else:
return self._first_value()
def _hash_literal(self):
self._expect('{', consume_whitespace=True)
keyvals = {}
while self._current() != '}':
key = self._key()
self._expect('=', consume_whitespace=True)
v = self._explicit_values()
self._consume_whitespace()
if self._current() != '}':
self._expect(',')
self._consume_whitespace()
keyvals[key] = v
self._expect('}')
return keyvals
def _first_value(self):
# first-value = value / single-quoted-val / double-quoted-val
if self._current() == "'":
return self._single_quoted_value()
elif self._current() == '"':
return self._double_quoted_value()
return self._value()
def _single_quoted_value(self):
# single-quoted-value = %x27 *(val-escaped-single) %x27
# val-escaped-single = %x20-26 / %x28-7F / escaped-escape /
# (escape single-quote)
return self._consume_quoted(self._SINGLE_QUOTED, escaped_char="'")
def _consume_quoted(self, regex, escaped_char=None):
value = self._must_consume_regex(regex)[1:-1]
if escaped_char is not None:
value = value.replace("\\%s" % escaped_char, escaped_char)
value = value.replace("\\\\", "\\")
return value
def _double_quoted_value(self):
return self._consume_quoted(self._DOUBLE_QUOTED, escaped_char='"')
def _second_value(self):
if self._current() == "'":
return self._single_quoted_value()
elif self._current() == '"':
return self._double_quoted_value()
else:
consumed = self._must_consume_regex(self._SECOND_VALUE)
return consumed.replace('\\,', ',').rstrip()
def _expect(self, char, consume_whitespace=False):
if consume_whitespace:
self._consume_whitespace()
if self._index >= len(self._input_value):
raise ShorthandParseError(self._input_value, char,
'EOF', self._index)
actual = self._input_value[self._index]
if actual != char:
raise ShorthandParseError(self._input_value, char,
actual, self._index)
self._index += 1
if consume_whitespace:
self._consume_whitespace()
def _must_consume_regex(self, regex):
result = regex.match(self._input_value[self._index:])
if result is not None:
return self._consume_matched_regex(result)
raise ShorthandParseError(self._input_value, '<%s>' % regex.name,
'', self._index)
def _consume_matched_regex(self, result):
start, end = result.span()
v = self._input_value[self._index+start:self._index+end]
self._index += (end - start)
return v
def _current(self):
# If the index is at the end of the input value,
# then _EOF will be returned.
if self._index < len(self._input_value):
return self._input_value[self._index]
return _EOF
def _at_eof(self):
return self._index >= len(self._input_value)
def _backtrack_to(self, char):
while self._index >= 0 and self._input_value[self._index] != char:
self._index -= 1
def _consume_whitespace(self):
while self._current() != _EOF and self._current() in string.whitespace:
self._index += 1
class ModelVisitor(object):
def visit(self, params, model):
self._visit({}, model, '', params)
def _visit(self, parent, shape, name, value):
method = getattr(self, '_visit_%s' % shape.type_name,
self._visit_scalar)
method(parent, shape, name, value)
def _visit_structure(self, parent, shape, name, value):
if not isinstance(value, dict):
return
for member_name, member_shape in shape.members.items():
self._visit(value, member_shape, member_name,
value.get(member_name))
def _visit_list(self, parent, shape, name, value):
if not isinstance(value, list):
return
for i, element in enumerate(value):
self._visit(value, shape.member, i, element)
def _visit_map(self, parent, shape, name, value):
if not isinstance(value, dict):
return
value_shape = shape.value
for k, v in value.items():
self._visit(value, value_shape, k, v)
def _visit_scalar(self, parent, shape, name, value):
pass
class BackCompatVisitor(ModelVisitor):
def _visit_list(self, parent, shape, name, value):
if not isinstance(value, list):
# Convert a -> [a] because they specified
# "foo=bar", but "bar" should really be ["bar"].
if value is not None:
parent[name] = [value]
else:
return super(BackCompatVisitor, self)._visit_list(
parent, shape, name, value)
def _visit_scalar(self, parent, shape, name, value):
if value is None:
return
type_name = shape.type_name
if type_name in ['integer', 'long']:
parent[name] = int(value)
elif type_name in ['double', 'float']:
parent[name] = float(value)
elif type_name == 'boolean':
# We want to make sure we only set a value
# only if "true"/"false" is specified.
if value.lower() == 'true':
parent[name] = True
elif value.lower() == 'false':
parent[name] = False
awscli-1.17.14/awscli/clidriver.py 0000644 0000000 0000000 00000064463 13620325554 016746 0 ustar root root 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import sys
import signal
import logging
import botocore.session
from botocore import __version__ as botocore_version
from botocore.hooks import HierarchicalEmitter
from botocore import xform_name
from botocore.compat import copy_kwargs, OrderedDict
from botocore.exceptions import NoCredentialsError
from botocore.exceptions import NoRegionError
from botocore.history import get_global_history_recorder
from awscli import EnvironmentVariables, __version__
from awscli.compat import get_stderr_text_writer
from awscli.formatter import get_formatter
from awscli.plugin import load_plugins
from awscli.commands import CLICommand
from awscli.compat import six
from awscli.argparser import MainArgParser
from awscli.argparser import ServiceArgParser
from awscli.argparser import ArgTableArgParser
from awscli.argparser import USAGE
from awscli.help import ProviderHelpCommand
from awscli.help import ServiceHelpCommand
from awscli.help import OperationHelpCommand
from awscli.arguments import CustomArgument
from awscli.arguments import ListArgument
from awscli.arguments import BooleanArgument
from awscli.arguments import CLIArgument
from awscli.arguments import UnknownArgumentError
from awscli.argprocess import unpack_argument
from awscli.alias import AliasLoader
from awscli.alias import AliasCommandInjector
from awscli.utils import emit_top_level_args_parsed_event
from awscli.utils import write_exception
LOG = logging.getLogger('awscli.clidriver')
LOG_FORMAT = (
'%(asctime)s - %(threadName)s - %(name)s - %(levelname)s - %(message)s')
HISTORY_RECORDER = get_global_history_recorder()
# Don't remove this line. The idna encoding
# is used by getaddrinfo when dealing with unicode hostnames,
# and in some cases, there appears to be a race condition
# where threads will get a LookupError on getaddrinfo() saying
# that the encoding doesn't exist. Using the idna encoding before
# running any CLI code (and any threads it may create) ensures that
# the encodings.idna is imported and registered in the codecs registry,
# which will stop the LookupErrors from happening.
# See: https://bugs.python.org/issue29288
u''.encode('idna')
def main():
driver = create_clidriver()
rc = driver.main()
HISTORY_RECORDER.record('CLI_RC', rc, 'CLI')
return rc
def create_clidriver():
session = botocore.session.Session(EnvironmentVariables)
_set_user_agent_for_session(session)
load_plugins(session.full_config.get('plugins', {}),
event_hooks=session.get_component('event_emitter'))
driver = CLIDriver(session=session)
return driver
def _set_user_agent_for_session(session):
session.user_agent_name = 'aws-cli'
session.user_agent_version = __version__
session.user_agent_extra = 'botocore/%s' % botocore_version
class CLIDriver(object):
def __init__(self, session=None):
if session is None:
self.session = botocore.session.get_session(EnvironmentVariables)
_set_user_agent_for_session(self.session)
else:
self.session = session
self._cli_data = None
self._command_table = None
self._argument_table = None
self.alias_loader = AliasLoader()
def _get_cli_data(self):
# Not crazy about this but the data in here is needed in
# several places (e.g. MainArgParser, ProviderHelp) so
# we load it here once.
if self._cli_data is None:
self._cli_data = self.session.get_data('cli')
return self._cli_data
def _get_command_table(self):
if self._command_table is None:
self._command_table = self._build_command_table()
return self._command_table
def _get_argument_table(self):
if self._argument_table is None:
self._argument_table = self._build_argument_table()
return self._argument_table
def _build_command_table(self):
"""
Create the main parser to handle the global arguments.
:rtype: ``argparser.ArgumentParser``
:return: The parser object
"""
command_table = self._build_builtin_commands(self.session)
self.session.emit('building-command-table.main',
command_table=command_table,
session=self.session,
command_object=self)
return command_table
def _build_builtin_commands(self, session):
commands = OrderedDict()
services = session.get_available_services()
for service_name in services:
commands[service_name] = ServiceCommand(cli_name=service_name,
session=self.session,
service_name=service_name)
return commands
def _add_aliases(self, command_table, parser):
parser = self._create_parser(command_table)
injector = AliasCommandInjector(
self.session, self.alias_loader)
injector.inject_aliases(command_table, parser)
def _build_argument_table(self):
argument_table = OrderedDict()
cli_data = self._get_cli_data()
cli_arguments = cli_data.get('options', None)
for option in cli_arguments:
option_params = copy_kwargs(cli_arguments[option])
cli_argument = self._create_cli_argument(option, option_params)
cli_argument.add_to_arg_table(argument_table)
# Then the final step is to send out an event so handlers
# can add extra arguments or modify existing arguments.
self.session.emit('building-top-level-params',
argument_table=argument_table)
return argument_table
def _create_cli_argument(self, option_name, option_params):
return CustomArgument(
option_name, help_text=option_params.get('help', ''),
dest=option_params.get('dest'),
default=option_params.get('default'),
action=option_params.get('action'),
required=option_params.get('required'),
choices=option_params.get('choices'),
cli_type_name=option_params.get('type'))
def create_help_command(self):
cli_data = self._get_cli_data()
return ProviderHelpCommand(self.session, self._get_command_table(),
self._get_argument_table(),
cli_data.get('description', None),
cli_data.get('synopsis', None),
cli_data.get('help_usage', None))
def _create_parser(self, command_table):
# Also add a 'help' command.
command_table['help'] = self.create_help_command()
cli_data = self._get_cli_data()
parser = MainArgParser(
command_table, self.session.user_agent(),
cli_data.get('description', None),
self._get_argument_table(),
prog="aws")
return parser
def main(self, args=None):
"""
:param args: List of arguments, with the 'aws' removed. For example,
the command "aws s3 list-objects --bucket foo" will have an
args list of ``['s3', 'list-objects', '--bucket', 'foo']``.
"""
if args is None:
args = sys.argv[1:]
command_table = self._get_command_table()
parser = self._create_parser(command_table)
self._add_aliases(command_table, parser)
parsed_args, remaining = parser.parse_known_args(args)
try:
# Because _handle_top_level_args emits events, it's possible
# that exceptions can be raised, which should have the same
# general exception handling logic as calling into the
# command table. This is why it's in the try/except clause.
self._handle_top_level_args(parsed_args)
self._emit_session_event(parsed_args)
HISTORY_RECORDER.record(
'CLI_VERSION', self.session.user_agent(), 'CLI')
HISTORY_RECORDER.record('CLI_ARGUMENTS', args, 'CLI')
return command_table[parsed_args.command](remaining, parsed_args)
except UnknownArgumentError as e:
sys.stderr.write("usage: %s\n" % USAGE)
sys.stderr.write(str(e))
sys.stderr.write("\n")
return 255
except NoRegionError as e:
msg = ('%s You can also configure your region by running '
'"aws configure".' % e)
self._show_error(msg)
return 255
except NoCredentialsError as e:
msg = ('%s. You can configure credentials by running '
'"aws configure".' % e)
self._show_error(msg)
return 255
except KeyboardInterrupt:
# Shell standard for signals that terminate
# the process is to return 128 + signum, in this case
# SIGINT=2, so we'll have an RC of 130.
sys.stdout.write("\n")
return 128 + signal.SIGINT
except Exception as e:
LOG.debug("Exception caught in main()", exc_info=True)
LOG.debug("Exiting with rc 255")
write_exception(e, outfile=get_stderr_text_writer())
return 255
def _emit_session_event(self, parsed_args):
# This event is guaranteed to run after the session has been
# initialized and a profile has been set. This was previously
# problematic because if something in CLIDriver caused the
# session components to be reset (such as session.profile = foo)
# then all the prior registered components would be removed.
self.session.emit(
'session-initialized', session=self.session,
parsed_args=parsed_args)
def _show_error(self, msg):
LOG.debug(msg, exc_info=True)
sys.stderr.write(msg)
sys.stderr.write('\n')
def _handle_top_level_args(self, args):
emit_top_level_args_parsed_event(self.session, args)
if args.profile:
self.session.set_config_variable('profile', args.profile)
if args.region:
self.session.set_config_variable('region', args.region)
if args.debug:
# TODO:
# Unfortunately, by setting debug mode here, we miss out
# on all of the debug events prior to this such as the
# loading of plugins, etc.
self.session.set_stream_logger('botocore', logging.DEBUG,
format_string=LOG_FORMAT)
self.session.set_stream_logger('awscli', logging.DEBUG,
format_string=LOG_FORMAT)
self.session.set_stream_logger('s3transfer', logging.DEBUG,
format_string=LOG_FORMAT)
self.session.set_stream_logger('urllib3', logging.DEBUG,
format_string=LOG_FORMAT)
LOG.debug("CLI version: %s", self.session.user_agent())
LOG.debug("Arguments entered to CLI: %s", sys.argv[1:])
else:
self.session.set_stream_logger(logger_name='awscli',
log_level=logging.ERROR)
class ServiceCommand(CLICommand):
"""A service command for the CLI.
For example, ``aws ec2 ...`` we'd create a ServiceCommand
object that represents the ec2 service.
"""
def __init__(self, cli_name, session, service_name=None):
# The cli_name is the name the user types, the name we show
# in doc, etc.
# The service_name is the name we used internally with botocore.
# For example, we have the 's3api' as the cli_name for the service
# but this is actually bound to the 's3' service name in botocore,
# i.e. we load s3.json from the botocore data dir. Most of
# the time these are the same thing but in the case of renames,
# we want users/external things to be able to rename the cli name
# but *not* the service name, as this has to be exactly what
# botocore expects.
self._name = cli_name
self.session = session
self._command_table = None
if service_name is None:
# Then default to using the cli name.
self._service_name = cli_name
else:
self._service_name = service_name
self._lineage = [self]
self._service_model = None
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def service_model(self):
return self._get_service_model()
@property
def lineage(self):
return self._lineage
@lineage.setter
def lineage(self, value):
self._lineage = value
def _get_command_table(self):
if self._command_table is None:
self._command_table = self._create_command_table()
return self._command_table
def _get_service_model(self):
if self._service_model is None:
api_version = self.session.get_config_variable('api_versions').get(
self._service_name, None)
self._service_model = self.session.get_service_model(
self._service_name, api_version=api_version)
return self._service_model
def __call__(self, args, parsed_globals):
# Once we know we're trying to call a service for this operation
# we can go ahead and create the parser for it. We
# can also grab the Service object from botocore.
service_parser = self._create_parser()
parsed_args, remaining = service_parser.parse_known_args(args)
command_table = self._get_command_table()
return command_table[parsed_args.operation](remaining, parsed_globals)
def _create_command_table(self):
command_table = OrderedDict()
service_model = self._get_service_model()
for operation_name in service_model.operation_names:
cli_name = xform_name(operation_name, '-')
operation_model = service_model.operation_model(operation_name)
command_table[cli_name] = ServiceOperation(
name=cli_name,
parent_name=self._name,
session=self.session,
operation_model=operation_model,
operation_caller=CLIOperationCaller(self.session),
)
self.session.emit('building-command-table.%s' % self._name,
command_table=command_table,
session=self.session,
command_object=self)
self._add_lineage(command_table)
return command_table
def _add_lineage(self, command_table):
for command in command_table:
command_obj = command_table[command]
command_obj.lineage = self.lineage + [command_obj]
def create_help_command(self):
command_table = self._get_command_table()
return ServiceHelpCommand(session=self.session,
obj=self._get_service_model(),
command_table=command_table,
arg_table=None,
event_class='.'.join(self.lineage_names),
name=self._name)
def _create_parser(self):
command_table = self._get_command_table()
# Also add a 'help' command.
command_table['help'] = self.create_help_command()
return ServiceArgParser(
operations_table=command_table, service_name=self._name)
class ServiceOperation(object):
"""A single operation of a service.
This class represents a single operation for a service, for
example ``ec2.DescribeInstances``.
"""
ARG_TYPES = {
'list': ListArgument,
'boolean': BooleanArgument,
}
DEFAULT_ARG_CLASS = CLIArgument
def __init__(self, name, parent_name, operation_caller,
operation_model, session):
"""
:type name: str
:param name: The name of the operation/subcommand.
:type parent_name: str
:param parent_name: The name of the parent command.
:type operation_model: ``botocore.model.OperationModel``
:param operation_object: The operation model
associated with this subcommand.
:type operation_caller: ``CLIOperationCaller``
:param operation_caller: An object that can properly call the
operation.
:type session: ``botocore.session.Session``
:param session: The session object.
"""
self._arg_table = None
self._name = name
# These is used so we can figure out what the proper event
# name should be ..
self._parent_name = parent_name
self._operation_caller = operation_caller
self._lineage = [self]
self._operation_model = operation_model
self._session = session
if operation_model.deprecated:
self._UNDOCUMENTED = True
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def lineage(self):
return self._lineage
@lineage.setter
def lineage(self, value):
self._lineage = value
@property
def lineage_names(self):
# Represents the lineage of a command in terms of command ``name``
return [cmd.name for cmd in self.lineage]
@property
def arg_table(self):
if self._arg_table is None:
self._arg_table = self._create_argument_table()
return self._arg_table
def __call__(self, args, parsed_globals):
# Once we know we're trying to call a particular operation
# of a service we can go ahead and load the parameters.
event = 'before-building-argument-table-parser.%s.%s' % \
(self._parent_name, self._name)
self._emit(event, argument_table=self.arg_table, args=args,
session=self._session)
operation_parser = self._create_operation_parser(self.arg_table)
self._add_help(operation_parser)
parsed_args, remaining = operation_parser.parse_known_args(args)
if parsed_args.help == 'help':
op_help = self.create_help_command()
return op_help(remaining, parsed_globals)
elif parsed_args.help:
remaining.append(parsed_args.help)
if remaining:
raise UnknownArgumentError(
"Unknown options: %s" % ', '.join(remaining))
event = 'operation-args-parsed.%s.%s' % (self._parent_name,
self._name)
self._emit(event, parsed_args=parsed_args,
parsed_globals=parsed_globals)
call_parameters = self._build_call_parameters(
parsed_args, self.arg_table)
event = 'calling-command.%s.%s' % (self._parent_name,
self._name)
override = self._emit_first_non_none_response(
event,
call_parameters=call_parameters,
parsed_args=parsed_args,
parsed_globals=parsed_globals
)
# There are two possible values for override. It can be some type
# of exception that will be raised if detected or it can represent
# the desired return code. Note that a return code of 0 represents
# a success.
if override is not None:
if isinstance(override, Exception):
# If the override value provided back is an exception then
# raise the exception
raise override
else:
# This is the value usually returned by the ``invoke()``
# method of the operation caller. It represents the return
# code of the operation.
return override
else:
# No override value was supplied.
return self._operation_caller.invoke(
self._operation_model.service_model.service_name,
self._operation_model.name,
call_parameters, parsed_globals)
def create_help_command(self):
return OperationHelpCommand(
self._session,
operation_model=self._operation_model,
arg_table=self.arg_table,
name=self._name, event_class='.'.join(self.lineage_names))
def _add_help(self, parser):
# The 'help' output is processed a little differently from
# the operation help because the arg_table has
# CLIArguments for values.
parser.add_argument('help', nargs='?')
def _build_call_parameters(self, args, arg_table):
# We need to convert the args specified on the command
# line as valid **kwargs we can hand to botocore.
service_params = {}
# args is an argparse.Namespace object so we're using vars()
# so we can iterate over the parsed key/values.
parsed_args = vars(args)
for arg_object in arg_table.values():
py_name = arg_object.py_name
if py_name in parsed_args:
value = parsed_args[py_name]
value = self._unpack_arg(arg_object, value)
arg_object.add_to_params(service_params, value)
return service_params
def _unpack_arg(self, cli_argument, value):
# Unpacks a commandline argument into a Python value by firing the
# load-cli-arg.service-name.operation-name event.
session = self._session
service_name = self._operation_model.service_model.endpoint_prefix
operation_name = xform_name(self._name, '-')
return unpack_argument(session, service_name, operation_name,
cli_argument, value)
def _create_argument_table(self):
argument_table = OrderedDict()
input_shape = self._operation_model.input_shape
required_arguments = []
arg_dict = {}
if input_shape is not None:
required_arguments = input_shape.required_members
arg_dict = input_shape.members
for arg_name, arg_shape in arg_dict.items():
cli_arg_name = xform_name(arg_name, '-')
arg_class = self.ARG_TYPES.get(arg_shape.type_name,
self.DEFAULT_ARG_CLASS)
is_token = arg_shape.metadata.get('idempotencyToken', False)
is_required = arg_name in required_arguments and not is_token
event_emitter = self._session.get_component('event_emitter')
arg_object = arg_class(
name=cli_arg_name,
argument_model=arg_shape,
is_required=is_required,
operation_model=self._operation_model,
serialized_name=arg_name,
event_emitter=event_emitter)
arg_object.add_to_arg_table(argument_table)
LOG.debug(argument_table)
self._emit('building-argument-table.%s.%s' % (self._parent_name,
self._name),
operation_model=self._operation_model,
session=self._session,
command=self,
argument_table=argument_table)
return argument_table
def _emit(self, name, **kwargs):
return self._session.emit(name, **kwargs)
def _emit_first_non_none_response(self, name, **kwargs):
return self._session.emit_first_non_none_response(
name, **kwargs)
def _create_operation_parser(self, arg_table):
parser = ArgTableArgParser(arg_table)
return parser
class CLIOperationCaller(object):
"""Call an AWS operation and format the response."""
def __init__(self, session):
self._session = session
def invoke(self, service_name, operation_name, parameters, parsed_globals):
"""Invoke an operation and format the response.
:type service_name: str
:param service_name: The name of the service. Note this is the service name,
not the endpoint prefix (e.g. ``ses`` not ``email``).
:type operation_name: str
:param operation_name: The operation name of the service. The casing
of the operation name should match the exact casing used by the service,
e.g. ``DescribeInstances``, not ``describe-instances`` or
``describe_instances``.
:type parameters: dict
:param parameters: The parameters for the operation call. Again, these values
have the same casing used by the service.
:type parsed_globals: Namespace
:param parsed_globals: The parsed globals from the command line.
:return: None, the result is displayed through a formatter, but no
value is returned.
"""
client = self._session.create_client(
service_name, region_name=parsed_globals.region,
endpoint_url=parsed_globals.endpoint_url,
verify=parsed_globals.verify_ssl)
response = self._make_client_call(
client, operation_name, parameters, parsed_globals)
self._display_response(operation_name, response, parsed_globals)
return 0
def _make_client_call(self, client, operation_name, parameters,
parsed_globals):
py_operation_name = xform_name(operation_name)
if client.can_paginate(py_operation_name) and parsed_globals.paginate:
paginator = client.get_paginator(py_operation_name)
response = paginator.paginate(**parameters)
else:
response = getattr(client, xform_name(operation_name))(
**parameters)
return response
def _display_response(self, command_name, response,
parsed_globals):
output = parsed_globals.output
if output is None:
output = self._session.get_config_variable('output')
formatter = get_formatter(output, parsed_globals)
formatter(command_name, response)
awscli-1.17.14/awscli/clidocs.py 0000644 0000000 0000000 00000065161 13620325630 016372 0 ustar root root 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
from botocore import xform_name
from botocore.docs.bcdoc.docevents import DOC_EVENTS
from botocore.model import StringShape
from botocore.utils import is_json_value_header
from awscli import SCALAR_TYPES
from awscli.argprocess import ParamShorthandDocGen
from awscli.topictags import TopicTagDB
from awscli.utils import find_service_and_method_in_event_name
LOG = logging.getLogger(__name__)
class CLIDocumentEventHandler(object):
def __init__(self, help_command):
self.help_command = help_command
self.register(help_command.session, help_command.event_class)
self._arg_groups = self._build_arg_table_groups(help_command)
self._documented_arg_groups = []
def _build_arg_table_groups(self, help_command):
arg_groups = {}
for name, arg in help_command.arg_table.items():
if arg.group_name is not None:
arg_groups.setdefault(arg.group_name, []).append(arg)
return arg_groups
def _get_argument_type_name(self, shape, default):
if is_json_value_header(shape):
return 'JSON'
return default
def _map_handlers(self, session, event_class, mapfn):
for event in DOC_EVENTS:
event_handler_name = event.replace('-', '_')
if hasattr(self, event_handler_name):
event_handler = getattr(self, event_handler_name)
format_string = DOC_EVENTS[event]
num_args = len(format_string.split('.')) - 2
format_args = (event_class,) + ('*',) * num_args
event_string = event + format_string % format_args
unique_id = event_class + event_handler_name
mapfn(event_string, event_handler, unique_id)
def register(self, session, event_class):
"""
The default register iterates through all of the
available document events and looks for a corresponding
handler method defined in the object. If it's there, that
handler method will be registered for the all events of
that type for the specified ``event_class``.
"""
self._map_handlers(session, event_class, session.register)
def unregister(self):
"""
The default unregister iterates through all of the
available document events and looks for a corresponding
handler method defined in the object. If it's there, that
handler method will be unregistered for the all events of
that type for the specified ``event_class``.
"""
self._map_handlers(self.help_command.session,
self.help_command.event_class,
self.help_command.session.unregister)
# These are default doc handlers that apply in the general case.
def doc_breadcrumbs(self, help_command, **kwargs):
doc = help_command.doc
if doc.target != 'man':
cmd_names = help_command.event_class.split('.')
doc.write('[ ')
doc.write(':ref:`aws `')
full_cmd_list = ['aws']
for cmd in cmd_names[:-1]:
doc.write(' . ')
full_cmd_list.append(cmd)
full_cmd_name = ' '.join(full_cmd_list)
doc.write(':ref:`%s `' % (cmd, full_cmd_name))
doc.write(' ]')
def doc_title(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
reference = help_command.event_class.replace('.', ' ')
if reference != 'aws':
reference = 'aws ' + reference
doc.writeln('.. _cli:%s:' % reference)
doc.style.h1(help_command.name)
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Description')
doc.include_doc_string(help_command.description)
doc.style.new_paragraph()
def doc_synopsis_start(self, help_command, **kwargs):
self._documented_arg_groups = []
doc = help_command.doc
doc.style.h2('Synopsis')
doc.style.start_codeblock()
doc.writeln('%s' % help_command.name)
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
doc = help_command.doc
argument = help_command.arg_table[arg_name]
if argument.group_name in self._arg_groups:
if argument.group_name in self._documented_arg_groups:
# This arg is already documented so we can move on.
return
option_str = ' | '.join(
[a.cli_name for a in
self._arg_groups[argument.group_name]])
self._documented_arg_groups.append(argument.group_name)
else:
option_str = '%s ' % argument.cli_name
if not (argument.required
or getattr(argument, '_DOCUMENT_AS_REQUIRED', False)):
option_str = '[%s]' % option_str
doc.writeln('%s' % option_str)
def doc_synopsis_end(self, help_command, **kwargs):
doc = help_command.doc
doc.style.end_codeblock()
# Reset the documented arg groups for other sections
# that may document args (the detailed docs following
# the synopsis).
self._documented_arg_groups = []
def doc_options_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Options')
if not help_command.arg_table:
doc.write('*None*\n')
def doc_option(self, arg_name, help_command, **kwargs):
doc = help_command.doc
argument = help_command.arg_table[arg_name]
if argument.group_name in self._arg_groups:
if argument.group_name in self._documented_arg_groups:
# This arg is already documented so we can move on.
return
name = ' | '.join(
['``%s``' % a.cli_name for a in
self._arg_groups[argument.group_name]])
self._documented_arg_groups.append(argument.group_name)
else:
name = '``%s``' % argument.cli_name
doc.write('%s (%s)\n' % (name, self._get_argument_type_name(
argument.argument_model, argument.cli_type_name)))
doc.style.indent()
doc.include_doc_string(argument.documentation)
self._document_enums(argument, doc)
doc.style.dedent()
doc.style.new_paragraph()
def doc_relateditems_start(self, help_command, **kwargs):
if help_command.related_items:
doc = help_command.doc
doc.style.h2('See Also')
def doc_relateditem(self, help_command, related_item, **kwargs):
doc = help_command.doc
doc.write('* ')
doc.style.sphinx_reference_label(
label='cli:%s' % related_item,
text=related_item
)
doc.write('\n')
def _document_enums(self, argument, doc):
"""Documents top-level parameter enums"""
if hasattr(argument, 'argument_model'):
model = argument.argument_model
if isinstance(model, StringShape):
if model.enum:
doc.style.new_paragraph()
doc.write('Possible values:')
doc.style.start_ul()
for enum in model.enum:
doc.style.li('``%s``' % enum)
doc.style.end_ul()
class ProviderDocumentEventHandler(CLIDocumentEventHandler):
def doc_breadcrumbs(self, help_command, event_name, **kwargs):
pass
def doc_synopsis_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Synopsis')
doc.style.codeblock(help_command.synopsis)
doc.include_doc_string(help_command.help_usage)
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
def doc_options_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Options')
def doc_option(self, arg_name, help_command, **kwargs):
doc = help_command.doc
argument = help_command.arg_table[arg_name]
doc.writeln('``%s`` (%s)' % (argument.cli_name,
argument.cli_type_name))
doc.include_doc_string(argument.documentation)
if argument.choices:
doc.style.start_ul()
for choice in argument.choices:
doc.style.li(choice)
doc.style.end_ul()
def doc_subitems_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Available Services')
doc.style.toctree()
def doc_subitem(self, command_name, help_command, **kwargs):
doc = help_command.doc
file_name = '%s/index' % command_name
doc.style.tocitem(command_name, file_name=file_name)
class ServiceDocumentEventHandler(CLIDocumentEventHandler):
# A service document has no synopsis.
def doc_synopsis_start(self, help_command, **kwargs):
pass
def doc_synopsis_option(self, arg_name, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
pass
# A service document has no option section.
def doc_options_start(self, help_command, **kwargs):
pass
def doc_option(self, arg_name, help_command, **kwargs):
pass
def doc_option_example(self, arg_name, help_command, **kwargs):
pass
def doc_options_end(self, help_command, **kwargs):
pass
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
service_model = help_command.obj
doc.style.h2('Description')
# TODO: need a documentation attribute.
doc.include_doc_string(service_model.documentation)
def doc_subitems_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Available Commands')
doc.style.toctree()
def doc_subitem(self, command_name, help_command, **kwargs):
doc = help_command.doc
subcommand = help_command.command_table[command_name]
subcommand_table = getattr(subcommand, 'subcommand_table', {})
# If the subcommand table has commands in it,
# direct the subitem to the command's index because
# it has more subcommands to be documented.
if (len(subcommand_table) > 0):
file_name = '%s/index' % command_name
doc.style.tocitem(command_name, file_name=file_name)
else:
doc.style.tocitem(command_name)
class OperationDocumentEventHandler(CLIDocumentEventHandler):
AWS_DOC_BASE = 'https://docs.aws.amazon.com/goto/WebAPI'
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
operation_model = help_command.obj
doc.style.h2('Description')
doc.include_doc_string(operation_model.documentation)
self._add_webapi_crosslink(help_command)
self._add_top_level_args_reference(help_command)
def _add_top_level_args_reference(self, help_command):
help_command.doc.writeln('')
help_command.doc.write("See ")
help_command.doc.style.internal_link(
title="'aws help'",
page='/reference/index'
)
help_command.doc.writeln(' for descriptions of global parameters.')
def _add_webapi_crosslink(self, help_command):
doc = help_command.doc
operation_model = help_command.obj
service_model = operation_model.service_model
service_uid = service_model.metadata.get('uid')
if service_uid is None:
# If there's no service_uid in the model, we can't
# be certain if the generated cross link will work
# so we don't generate any crosslink info.
return
doc.style.new_paragraph()
doc.write("See also: ")
link = '%s/%s/%s' % (self.AWS_DOC_BASE, service_uid,
operation_model.name)
doc.style.external_link(title="AWS API Documentation", link=link)
doc.writeln('')
def _json_example_value_name(self, argument_model, include_enum_values=True):
# If include_enum_values is True, then the valid enum values
# are included as the sample JSON value.
if isinstance(argument_model, StringShape):
if argument_model.enum and include_enum_values:
choices = argument_model.enum
return '|'.join(['"%s"' % c for c in choices])
else:
return '"string"'
elif argument_model.type_name == 'boolean':
return 'true|false'
else:
return '%s' % argument_model.type_name
def _json_example(self, doc, argument_model, stack):
if argument_model.name in stack:
# Document the recursion once, otherwise just
# note the fact that it's recursive and return.
if stack.count(argument_model.name) > 1:
if argument_model.type_name == 'structure':
doc.write('{ ... recursive ... }')
return
stack.append(argument_model.name)
try:
self._do_json_example(doc, argument_model, stack)
finally:
stack.pop()
def _do_json_example(self, doc, argument_model, stack):
if argument_model.type_name == 'list':
doc.write('[')
if argument_model.member.type_name in SCALAR_TYPES:
doc.write('%s, ...' % self._json_example_value_name(argument_model.member))
else:
doc.style.indent()
doc.style.new_line()
self._json_example(doc, argument_model.member, stack)
doc.style.new_line()
doc.write('...')
doc.style.dedent()
doc.style.new_line()
doc.write(']')
elif argument_model.type_name == 'map':
doc.write('{')
doc.style.indent()
key_string = self._json_example_value_name(argument_model.key)
doc.write('%s: ' % key_string)
if argument_model.value.type_name in SCALAR_TYPES:
doc.write(self._json_example_value_name(argument_model.value))
else:
doc.style.indent()
self._json_example(doc, argument_model.value, stack)
doc.style.dedent()
doc.style.new_line()
doc.write('...')
doc.style.dedent()
doc.write('}')
elif argument_model.type_name == 'structure':
self._doc_input_structure_members(doc, argument_model, stack)
def _doc_input_structure_members(self, doc, argument_model, stack):
doc.write('{')
doc.style.indent()
doc.style.new_line()
members = argument_model.members
for i, member_name in enumerate(members):
member_model = members[member_name]
member_type_name = member_model.type_name
if member_type_name in SCALAR_TYPES:
doc.write('"%s": %s' % (member_name,
self._json_example_value_name(member_model)))
elif member_type_name == 'structure':
doc.write('"%s": ' % member_name)
self._json_example(doc, member_model, stack)
elif member_type_name == 'map':
doc.write('"%s": ' % member_name)
self._json_example(doc, member_model, stack)
elif member_type_name == 'list':
doc.write('"%s": ' % member_name)
self._json_example(doc, member_model, stack)
if i < len(members) - 1:
doc.write(',')
doc.style.new_line()
doc.style.dedent()
doc.style.new_line()
doc.write('}')
def doc_option_example(self, arg_name, help_command, event_name, **kwargs):
service_id, operation_name = \
find_service_and_method_in_event_name(event_name)
doc = help_command.doc
cli_argument = help_command.arg_table[arg_name]
if cli_argument.group_name in self._arg_groups:
if cli_argument.group_name in self._documented_arg_groups:
# Args with group_names (boolean args) don't
# need to generate example syntax.
return
argument_model = cli_argument.argument_model
docgen = ParamShorthandDocGen()
if docgen.supports_shorthand(cli_argument.argument_model):
example_shorthand_syntax = docgen.generate_shorthand_example(
cli_argument, service_id, operation_name)
if example_shorthand_syntax is None:
# If the shorthand syntax returns a value of None,
# this indicates to us that there is no example
# needed for this param so we can immediately
# return.
return
if example_shorthand_syntax:
doc.style.new_paragraph()
doc.write('Shorthand Syntax')
doc.style.start_codeblock()
for example_line in example_shorthand_syntax.splitlines():
doc.writeln(example_line)
doc.style.end_codeblock()
if argument_model is not None and argument_model.type_name == 'list' and \
argument_model.member.type_name in SCALAR_TYPES:
# A list of scalars is special. While you *can* use
# JSON ( ["foo", "bar", "baz"] ), you can also just
# use the argparse behavior of space separated lists.
# "foo" "bar" "baz". In fact we don't even want to
# document the JSON syntax in this case.
member = argument_model.member
doc.style.new_paragraph()
doc.write('Syntax')
doc.style.start_codeblock()
example_type = self._json_example_value_name(
member, include_enum_values=False)
doc.write('%s %s ...' % (example_type, example_type))
if isinstance(member, StringShape) and member.enum:
# If we have enum values, we can tell the user
# exactly what valid values they can provide.
self._write_valid_enums(doc, member.enum)
doc.style.end_codeblock()
doc.style.new_paragraph()
elif cli_argument.cli_type_name not in SCALAR_TYPES:
doc.style.new_paragraph()
doc.write('JSON Syntax')
doc.style.start_codeblock()
self._json_example(doc, argument_model, stack=[])
doc.style.end_codeblock()
doc.style.new_paragraph()
def _write_valid_enums(self, doc, enum_values):
doc.style.new_paragraph()
doc.write("Where valid values are:\n")
for value in enum_values:
doc.write(" %s\n" % value)
doc.write("\n")
def doc_output(self, help_command, event_name, **kwargs):
doc = help_command.doc
doc.style.h2('Output')
operation_model = help_command.obj
output_shape = operation_model.output_shape
if output_shape is None or not output_shape.members:
doc.write('None')
else:
for member_name, member_shape in output_shape.members.items():
self._doc_member_for_output(doc, member_name, member_shape, stack=[])
def _doc_member_for_output(self, doc, member_name, member_shape, stack):
if member_shape.name in stack:
# Document the recursion once, otherwise just
# note the fact that it's recursive and return.
if stack.count(member_shape.name) > 1:
if member_shape.type_name == 'structure':
doc.write('( ... recursive ... )')
return
stack.append(member_shape.name)
try:
self._do_doc_member_for_output(doc, member_name,
member_shape, stack)
finally:
stack.pop()
def _do_doc_member_for_output(self, doc, member_name, member_shape, stack):
docs = member_shape.documentation
if member_name:
doc.write('%s -> (%s)' % (member_name, self._get_argument_type_name(
member_shape, member_shape.type_name)))
else:
doc.write('(%s)' % member_shape.type_name)
doc.style.indent()
doc.style.new_paragraph()
doc.include_doc_string(docs)
doc.style.new_paragraph()
member_type_name = member_shape.type_name
if member_type_name == 'structure':
for sub_name, sub_shape in member_shape.members.items():
self._doc_member_for_output(doc, sub_name, sub_shape, stack)
elif member_type_name == 'map':
key_shape = member_shape.key
key_name = key_shape.serialization.get('name', 'key')
self._doc_member_for_output(doc, key_name, key_shape, stack)
value_shape = member_shape.value
value_name = value_shape.serialization.get('name', 'value')
self._doc_member_for_output(doc, value_name, value_shape, stack)
elif member_type_name == 'list':
self._doc_member_for_output(doc, '', member_shape.member, stack)
doc.style.dedent()
doc.style.new_paragraph()
def doc_options_end(self, help_command, **kwargs):
self._add_top_level_args_reference(help_command)
class TopicListerDocumentEventHandler(CLIDocumentEventHandler):
DESCRIPTION = (
'This is the AWS CLI Topic Guide. It gives access to a set '
'of topics that provide a deeper understanding of the CLI. To access '
'the list of topics from the command line, run ``aws help topics``. '
'To access a specific topic from the command line, run '
'``aws help [topicname]``, where ``topicname`` is the name of the '
'topic as it appears in the output from ``aws help topics``.')
def __init__(self, help_command):
self.help_command = help_command
self.register(help_command.session, help_command.event_class)
self._topic_tag_db = TopicTagDB()
self._topic_tag_db.load_json_index()
def doc_breadcrumbs(self, help_command, **kwargs):
doc = help_command.doc
if doc.target != 'man':
doc.write('[ ')
doc.style.sphinx_reference_label(label='cli:aws', text='aws')
doc.write(' ]')
def doc_title(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
doc.style.link_target_definition(
refname='cli:aws help %s' % self.help_command.name,
link='')
doc.style.h1('AWS CLI Topic Guide')
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Description')
doc.include_doc_string(self.DESCRIPTION)
doc.style.new_paragraph()
def doc_synopsis_start(self, help_command, **kwargs):
pass
def doc_synopsis_end(self, help_command, **kwargs):
pass
def doc_options_start(self, help_command, **kwargs):
pass
def doc_options_end(self, help_command, **kwargs):
pass
def doc_subitems_start(self, help_command, **kwargs):
doc = help_command.doc
doc.style.h2('Available Topics')
categories = self._topic_tag_db.query('category')
topic_names = self._topic_tag_db.get_all_topic_names()
# Sort the categories
category_names = sorted(categories.keys())
for category_name in category_names:
doc.style.h3(category_name)
doc.style.new_paragraph()
# Write out the topic and a description for each topic under
# each category.
for topic_name in sorted(categories[category_name]):
description = self._topic_tag_db.get_tag_single_value(
topic_name, 'description')
doc.write('* ')
doc.style.sphinx_reference_label(
label='cli:aws help %s' % topic_name,
text=topic_name
)
doc.write(': %s\n' % description)
# Add a hidden toctree to make sure everything is connected in
# the document.
doc.style.hidden_toctree()
for topic_name in topic_names:
doc.style.hidden_tocitem(topic_name)
class TopicDocumentEventHandler(TopicListerDocumentEventHandler):
def doc_breadcrumbs(self, help_command, **kwargs):
doc = help_command.doc
if doc.target != 'man':
doc.write('[ ')
doc.style.sphinx_reference_label(label='cli:aws', text='aws')
doc.write(' . ')
doc.style.sphinx_reference_label(
label='cli:aws help topics',
text='topics'
)
doc.write(' ]')
def doc_title(self, help_command, **kwargs):
doc = help_command.doc
doc.style.new_paragraph()
doc.style.link_target_definition(
refname='cli:aws help %s' % self.help_command.name,
link='')
title = self._topic_tag_db.get_tag_single_value(
help_command.name, 'title')
doc.style.h1(title)
def doc_description(self, help_command, **kwargs):
doc = help_command.doc
topic_filename = os.path.join(self._topic_tag_db.topic_dir,
help_command.name + '.rst')
contents = self._remove_tags_from_content(topic_filename)
doc.writeln(contents)
doc.style.new_paragraph()
def _remove_tags_from_content(self, filename):
with open(filename, 'r') as f:
lines = f.readlines()
content_begin_index = 0
for i, line in enumerate(lines):
# If a line is encountered that does not begin with the tag
# end the search for tags and mark where tags end.
if not self._line_has_tag(line):
content_begin_index = i
break
# Join all of the non-tagged lines back together.
return ''.join(lines[content_begin_index:])
def _line_has_tag(self, line):
for tag in self._topic_tag_db.valid_tags:
if line.startswith(':' + tag + ':'):
return True
return False
def doc_subitems_start(self, help_command, **kwargs):
pass
awscli-1.17.14/awscli/testutils.py 0000644 0000000 0000000 00000106313 13620325630 017005 0 ustar root root 0000000 0000000 # Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Test utilities for the AWS CLI.
This module includes various classes/functions that help in writing
CLI unit/integration tests. This module should not be imported by
any module **except** for test code. This is included in the CLI
package so that code that is not part of the CLI can still take
advantage of all the testing utilities we provide.
"""
import os
import sys
import copy
import shutil
import time
import json
import logging
import tempfile
import platform
import contextlib
import string
import binascii
from pprint import pformat
from subprocess import Popen, PIPE
from awscli.compat import StringIO
try:
import mock
except ImportError as e:
# In the off chance something imports this module
# that's not suppose to, we should not stop the CLI
# by raising an ImportError. Now if anything actually
# *uses* this module that isn't suppose to, that's a
# different story.
mock = None
from awscli.compat import six
from botocore.session import Session
from botocore.exceptions import ClientError
from botocore.exceptions import WaiterError
import botocore.loaders
from botocore.awsrequest import AWSResponse
import awscli.clidriver
from awscli.plugin import load_plugins
from awscli.clidriver import CLIDriver
from awscli import EnvironmentVariables
import unittest
# In python 3, order matters when calling assertEqual to
# compare lists and dictionaries with lists. Therefore,
# assertItemsEqual needs to be used but it is renamed to
# assertCountEqual in python 3.
if six.PY2:
unittest.TestCase.assertCountEqual = unittest.TestCase.assertItemsEqual
_LOADER = botocore.loaders.Loader()
INTEG_LOG = logging.getLogger('awscli.tests.integration')
AWS_CMD = None
def skip_if_windows(reason):
"""Decorator to skip tests that should not be run on windows.
Example usage:
@skip_if_windows("Not valid")
def test_some_non_windows_stuff(self):
self.assertEqual(...)
"""
def decorator(func):
return unittest.skipIf(
platform.system() not in ['Darwin', 'Linux'], reason)(func)
return decorator
def set_invalid_utime(path):
"""Helper function to set an invalid last modified time"""
try:
os.utime(path, (-1, -100000000000))
except (OSError, OverflowError):
# Some OS's such as Windows throws an error for trying to set a
# last modified time of that size. So if an error is thrown, set it
# to just a negative time which will trigger the warning as well for
# Windows.
os.utime(path, (-1, -1))
def create_clidriver():
driver = awscli.clidriver.create_clidriver()
session = driver.session
data_path = session.get_config_variable('data_path').split(os.pathsep)
if not data_path:
data_path = []
_LOADER.search_paths.extend(data_path)
session.register_component('data_loader', _LOADER)
return driver
def get_aws_cmd():
global AWS_CMD
import awscli
if AWS_CMD is None:
# Try /bin/aws
repo_root = os.path.dirname(os.path.abspath(awscli.__file__))
aws_cmd = os.path.join(repo_root, 'bin', 'aws')
if not os.path.isfile(aws_cmd):
aws_cmd = _search_path_for_cmd('aws')
if aws_cmd is None:
raise ValueError('Could not find "aws" executable. Either '
'make sure it is on your PATH, or you can '
'explicitly set this value using '
'"set_aws_cmd()"')
AWS_CMD = aws_cmd
return AWS_CMD
def _search_path_for_cmd(cmd_name):
for path in os.environ.get('PATH', '').split(os.pathsep):
full_cmd_path = os.path.join(path, cmd_name)
if os.path.isfile(full_cmd_path):
return full_cmd_path
return None
def set_aws_cmd(aws_cmd):
global AWS_CMD
AWS_CMD = aws_cmd
@contextlib.contextmanager
def temporary_file(mode):
"""This is a cross platform temporary file creation.
tempfile.NamedTemporary file on windows creates a secure temp file
that can't be read by other processes and can't be opened a second time.
For tests, we generally *want* them to be read multiple times.
The test fixture writes the temp file contents, the test reads the
temp file.
"""
temporary_directory = tempfile.mkdtemp()
basename = 'tmpfile-%s' % str(random_chars(8))
full_filename = os.path.join(temporary_directory, basename)
open(full_filename, 'w').close()
try:
with open(full_filename, mode) as f:
yield f
finally:
shutil.rmtree(temporary_directory)
def create_bucket(session, name=None, region=None):
"""
Creates a bucket
:returns: the name of the bucket created
"""
if not region:
region = 'us-west-2'
client = session.create_client('s3', region_name=region)
if name:
bucket_name = name
else:
bucket_name = random_bucket_name()
params = {'Bucket': bucket_name}
if region != 'us-east-1':
params['CreateBucketConfiguration'] = {'LocationConstraint': region}
try:
client.create_bucket(**params)
except ClientError as e:
if e.response['Error'].get('Code') == 'BucketAlreadyOwnedByYou':
# This can happen in the retried request, when the first one
# succeeded on S3 but somehow the response never comes back.
# We still got a bucket ready for test anyway.
pass
else:
raise
return bucket_name
def random_chars(num_chars):
"""Returns random hex characters.
Useful for creating resources with random names.
"""
return binascii.hexlify(os.urandom(int(num_chars / 2))).decode('ascii')
def random_bucket_name(prefix='awscli-s3integ-', num_random=15):
"""Generate a random S3 bucket name.
:param prefix: A prefix to use in the bucket name. Useful
for tracking resources. This default value makes it easy
to see which buckets were created from CLI integ tests.
:param num_random: Number of random chars to include in the bucket name.
:returns: The name of a randomly generated bucket name as a string.
"""
return prefix + random_chars(num_random)
class BaseCLIDriverTest(unittest.TestCase):
"""Base unittest that use clidriver.
This will load all the default plugins as well so it
will simulate the behavior the user will see.
"""
def setUp(self):
self.environ = {
'AWS_DATA_PATH': os.environ['AWS_DATA_PATH'],
'AWS_DEFAULT_REGION': 'us-east-1',
'AWS_ACCESS_KEY_ID': 'access_key',
'AWS_SECRET_ACCESS_KEY': 'secret_key',
'AWS_CONFIG_FILE': '',
}
self.environ_patch = mock.patch('os.environ', self.environ)
self.environ_patch.start()
self.driver = create_clidriver()
self.session = self.driver.session
def tearDown(self):
self.environ_patch.stop()
class BaseAWSHelpOutputTest(BaseCLIDriverTest):
def setUp(self):
super(BaseAWSHelpOutputTest, self).setUp()
self.renderer_patch = mock.patch('awscli.help.get_renderer')
self.renderer_mock = self.renderer_patch.start()
self.renderer = CapturedRenderer()
self.renderer_mock.return_value = self.renderer
def tearDown(self):
super(BaseAWSHelpOutputTest, self).tearDown()
self.renderer_patch.stop()
def assert_contains(self, contains):
if contains not in self.renderer.rendered_contents:
self.fail("The expected contents:\n%s\nwere not in the "
"actual rendered contents:\n%s" % (
contains, self.renderer.rendered_contents))
def assert_contains_with_count(self, contains, count):
r_count = self.renderer.rendered_contents.count(contains)
if r_count != count:
self.fail("The expected contents:\n%s\n, with the "
"count:\n%d\nwere not in the actual rendered "
" contents:\n%s\nwith count:\n%d" % (
contains, count, self.renderer.rendered_contents, r_count))
def assert_not_contains(self, contents):
if contents in self.renderer.rendered_contents:
self.fail("The contents:\n%s\nwere not suppose to be in the "
"actual rendered contents:\n%s" % (
contents, self.renderer.rendered_contents))
def assert_text_order(self, *args, **kwargs):
# First we need to find where the SYNOPSIS section starts.
starting_from = kwargs.pop('starting_from')
args = list(args)
contents = self.renderer.rendered_contents
self.assertIn(starting_from, contents)
start_index = contents.find(starting_from)
arg_indices = [contents.find(arg, start_index) for arg in args]
previous = arg_indices[0]
for i, index in enumerate(arg_indices[1:], 1):
if index == -1:
self.fail('The string %r was not found in the contents: %s'
% (args[index], contents))
if index < previous:
self.fail('The string %r came before %r, but was suppose to come '
'after it.\n%s' % (args[i], args[i - 1], contents))
previous = index
class CapturedRenderer(object):
def __init__(self):
self.rendered_contents = ''
def render(self, contents):
self.rendered_contents = contents.decode('utf-8')
class CapturedOutput(object):
def __init__(self, stdout, stderr):
self.stdout = stdout
self.stderr = stderr
@contextlib.contextmanager
def capture_output():
stderr = six.StringIO()
stdout = six.StringIO()
with mock.patch('sys.stderr', stderr):
with mock.patch('sys.stdout', stdout):
yield CapturedOutput(stdout, stderr)
@contextlib.contextmanager
def capture_input(input_bytes=b''):
input_data = six.BytesIO(input_bytes)
if six.PY3:
mock_object = mock.Mock()
mock_object.buffer = input_data
else:
mock_object = input_data
with mock.patch('sys.stdin', mock_object):
yield input_data
class BaseAWSCommandParamsTest(unittest.TestCase):
maxDiff = None
def setUp(self):
self.last_params = {}
self.last_kwargs = None
# awscli/__init__.py injects AWS_DATA_PATH at import time
# so that we can find cli.json. This might be fixed in the
# future, but for now we just grab that value out of the real
# os.environ so the patched os.environ has this data and
# the CLI works.
self.environ = {
'AWS_DATA_PATH': os.environ['AWS_DATA_PATH'],
'AWS_DEFAULT_REGION': 'us-east-1',
'AWS_ACCESS_KEY_ID': 'access_key',
'AWS_SECRET_ACCESS_KEY': 'secret_key',
'AWS_CONFIG_FILE': '',
'AWS_SHARED_CREDENTIALS_FILE': '',
}
self.environ_patch = mock.patch('os.environ', self.environ)
self.environ_patch.start()
self.http_response = AWSResponse(None, 200, {}, None)
self.parsed_response = {}
self.make_request_patch = mock.patch('botocore.endpoint.Endpoint.make_request')
self.make_request_is_patched = False
self.operations_called = []
self.parsed_responses = None
self.driver = create_clidriver()
def tearDown(self):
# This clears all the previous registrations.
self.environ_patch.stop()
if self.make_request_is_patched:
self.make_request_patch.stop()
self.make_request_is_patched = False
def before_call(self, params, **kwargs):
self._store_params(params)
def _store_params(self, params):
self.last_request_dict = params
self.last_params = params['body']
def patch_make_request(self):
# If you do not stop a previously started patch,
# it can never be stopped if you call start() again on the same
# patch again...
# So stop the current patch before calling start() on it again.
if self.make_request_is_patched:
self.make_request_patch.stop()
self.make_request_is_patched = False
make_request_patch = self.make_request_patch.start()
if self.parsed_responses is not None:
make_request_patch.side_effect = lambda *args, **kwargs: \
(self.http_response, self.parsed_responses.pop(0))
else:
make_request_patch.return_value = (self.http_response, self.parsed_response)
self.make_request_is_patched = True
def assert_params_for_cmd(self, cmd, params=None, expected_rc=0,
stderr_contains=None, ignore_params=None):
stdout, stderr, rc = self.run_cmd(cmd, expected_rc)
if stderr_contains is not None:
self.assertIn(stderr_contains, stderr)
if params is not None:
# The last kwargs of Operation.call() in botocore.
last_kwargs = copy.copy(self.last_kwargs)
if ignore_params is not None:
for key in ignore_params:
try:
del last_kwargs[key]
except KeyError:
pass
if params != last_kwargs:
self.fail("Actual params did not match expected params.\n"
"Expected:\n\n"
"%s\n"
"Actual:\n\n%s\n" % (
pformat(params), pformat(last_kwargs)))
return stdout, stderr, rc
def before_parameter_build(self, params, model, **kwargs):
self.last_kwargs = params
self.operations_called.append((model, params.copy()))
def run_cmd(self, cmd, expected_rc=0):
logging.debug("Calling cmd: %s", cmd)
self.patch_make_request()
event_emitter = self.driver.session.get_component('event_emitter')
event_emitter.register('before-call', self.before_call)
event_emitter.register_first(
'before-parameter-build.*.*', self.before_parameter_build)
if not isinstance(cmd, list):
cmdlist = cmd.split()
else:
cmdlist = cmd
with capture_output() as captured:
try:
rc = self.driver.main(cmdlist)
except SystemExit as e:
# We need to catch SystemExit so that we
# can get a proper rc and still present the
# stdout/stderr to the test runner so we can
# figure out what went wrong.
rc = e.code
stderr = captured.stderr.getvalue()
stdout = captured.stdout.getvalue()
self.assertEqual(
rc, expected_rc,
"Unexpected rc (expected: %s, actual: %s) for command: %s\n"
"stdout:\n%sstderr:\n%s" % (
expected_rc, rc, cmd, stdout, stderr))
return stdout, stderr, rc
class BaseAWSPreviewCommandParamsTest(BaseAWSCommandParamsTest):
def setUp(self):
self.preview_patch = mock.patch(
'awscli.customizations.preview.mark_as_preview')
self.preview_patch.start()
super(BaseAWSPreviewCommandParamsTest, self).setUp()
def tearDown(self):
self.preview_patch.stop()
super(BaseAWSPreviewCommandParamsTest, self).tearDown()
class BaseCLIWireResponseTest(unittest.TestCase):
def setUp(self):
self.environ = {
'AWS_DATA_PATH': os.environ['AWS_DATA_PATH'],
'AWS_DEFAULT_REGION': 'us-east-1',
'AWS_ACCESS_KEY_ID': 'access_key',
'AWS_SECRET_ACCESS_KEY': 'secret_key',
'AWS_CONFIG_FILE': ''
}
self.environ_patch = mock.patch('os.environ', self.environ)
self.environ_patch.start()
# TODO: fix this patch when we have a better way to stub out responses
self.send_patch = mock.patch('botocore.endpoint.Endpoint._send')
self.send_is_patched = False
self.driver = create_clidriver()
def tearDown(self):
self.environ_patch.stop()
if self.send_is_patched:
self.send_patch.stop()
self.send_is_patched = False
def patch_send(self, status_code=200, headers={}, content=b''):
if self.send_is_patched:
self.send_patch.stop()
self.send_is_patched = False
send_patch = self.send_patch.start()
send_patch.return_value = mock.Mock(status_code=status_code,
headers=headers,
content=content)
self.send_is_patched = True
def run_cmd(self, cmd, expected_rc=0):
if not isinstance(cmd, list):
cmdlist = cmd.split()
else:
cmdlist = cmd
with capture_output() as captured:
try:
rc = self.driver.main(cmdlist)
except SystemExit as e:
rc = e.code
stderr = captured.stderr.getvalue()
stdout = captured.stdout.getvalue()
self.assertEqual(
rc, expected_rc,
"Unexpected rc (expected: %s, actual: %s) for command: %s\n"
"stdout:\n%sstderr:\n%s" % (
expected_rc, rc, cmd, stdout, stderr))
return stdout, stderr, rc
class FileCreator(object):
def __init__(self):
self.rootdir = tempfile.mkdtemp()
def remove_all(self):
if os.path.exists(self.rootdir):
shutil.rmtree(self.rootdir)
def create_file(self, filename, contents, mtime=None, mode='w'):
"""Creates a file in a tmpdir
``filename`` should be a relative path, e.g. "foo/bar/baz.txt"
It will be translated into a full path in a tmp dir.
If the ``mtime`` argument is provided, then the file's
mtime will be set to the provided value (must be an epoch time).
Otherwise the mtime is left untouched.
``mode`` is the mode the file should be opened either as ``w`` or
`wb``.
Returns the full path to the file.
"""
full_path = os.path.join(self.rootdir, filename)
if not os.path.isdir(os.path.dirname(full_path)):
os.makedirs(os.path.dirname(full_path))
with open(full_path, mode) as f:
f.write(contents)
current_time = os.path.getmtime(full_path)
# Subtract a few years off the last modification date.
os.utime(full_path, (current_time, current_time - 100000000))
if mtime is not None:
os.utime(full_path, (mtime, mtime))
return full_path
def append_file(self, filename, contents):
"""Append contents to a file
``filename`` should be a relative path, e.g. "foo/bar/baz.txt"
It will be translated into a full path in a tmp dir.
Returns the full path to the file.
"""
full_path = os.path.join(self.rootdir, filename)
if not os.path.isdir(os.path.dirname(full_path)):
os.makedirs(os.path.dirname(full_path))
with open(full_path, 'a') as f:
f.write(contents)
return full_path
def full_path(self, filename):
"""Translate relative path to full path in temp dir.
f.full_path('foo/bar.txt') -> /tmp/asdfasd/foo/bar.txt
"""
return os.path.join(self.rootdir, filename)
class ProcessTerminatedError(Exception):
pass
class Result(object):
def __init__(self, rc, stdout, stderr, memory_usage=None):
self.rc = rc
self.stdout = stdout
self.stderr = stderr
INTEG_LOG.debug("rc: %s", rc)
INTEG_LOG.debug("stdout: %s", stdout)
INTEG_LOG.debug("stderr: %s", stderr)
if memory_usage is None:
memory_usage = []
self.memory_usage = memory_usage
@property
def json(self):
return json.loads(self.stdout)
def _escape_quotes(command):
# For windows we have different rules for escaping.
# First, double quotes must be escaped.
command = command.replace('"', '\\"')
# Second, single quotes do nothing, to quote a value we need
# to use double quotes.
command = command.replace("'", '"')
return command
def aws(command, collect_memory=False, env_vars=None,
wait_for_finish=True, input_data=None, input_file=None):
"""Run an aws command.
This help function abstracts the differences of running the "aws"
command on different platforms.
If collect_memory is ``True`` the the Result object will have a list
of memory usage taken at 2 second intervals. The memory usage
will be in bytes.
If env_vars is None, this will set the environment variables
to be used by the aws process.
If wait_for_finish is False, then the Process object is returned
to the caller. It is then the caller's responsibility to ensure
proper cleanup. This can be useful if you want to test timeout's
or how the CLI responds to various signals.
:type input_data: string
:param input_data: This string will be communicated to the process through
the stdin of the process. It essentially allows the user to
avoid having to use a file handle to pass information to the process.
Note that this string is not passed on creation of the process, but
rather communicated to the process.
:type input_file: a file handle
:param input_file: This is a file handle that will act as the
the stdin of the process immediately on creation. Essentially
any data written to the file will be read from stdin of the
process. This is needed if you plan to stream data into stdin while
collecting memory.
"""
if platform.system() == 'Windows':
command = _escape_quotes(command)
if 'AWS_TEST_COMMAND' in os.environ:
aws_command = os.environ['AWS_TEST_COMMAND']
else:
aws_command = 'python %s' % get_aws_cmd()
full_command = '%s %s' % (aws_command, command)
stdout_encoding = get_stdout_encoding()
if isinstance(full_command, six.text_type) and not six.PY3:
full_command = full_command.encode(stdout_encoding)
INTEG_LOG.debug("Running command: %s", full_command)
env = os.environ.copy()
if 'AWS_DEFAULT_REGION' not in env:
env['AWS_DEFAULT_REGION'] = "us-east-1"
if env_vars is not None:
env = env_vars
if input_file is None:
input_file = PIPE
process = Popen(full_command, stdout=PIPE, stderr=PIPE, stdin=input_file,
shell=True, env=env)
if not wait_for_finish:
return process
memory = None
if not collect_memory:
kwargs = {}
if input_data:
kwargs = {'input': input_data}
stdout, stderr = process.communicate(**kwargs)
else:
stdout, stderr, memory = _wait_and_collect_mem(process)
return Result(process.returncode,
stdout.decode(stdout_encoding),
stderr.decode(stdout_encoding),
memory)
def get_stdout_encoding():
encoding = getattr(sys.__stdout__, 'encoding', None)
if encoding is None:
encoding = 'utf-8'
return encoding
def _wait_and_collect_mem(process):
# We only know how to collect memory on mac/linux.
if platform.system() == 'Darwin':
get_memory = _get_memory_with_ps
elif platform.system() == 'Linux':
get_memory = _get_memory_with_ps
else:
raise ValueError(
"Can't collect memory for process on platform %s." %
platform.system())
memory = []
while process.poll() is None:
try:
current = get_memory(process.pid)
except ProcessTerminatedError:
# It's possible the process terminated between .poll()
# and get_memory().
break
memory.append(current)
stdout, stderr = process.communicate()
return stdout, stderr, memory
def _get_memory_with_ps(pid):
# It's probably possible to do with proc_pidinfo and ctypes on a Mac,
# but we'll do it the easy way with parsing ps output.
command_list = 'ps u -p'.split()
command_list.append(str(pid))
p = Popen(command_list, stdout=PIPE)
stdout = p.communicate()[0]
if not p.returncode == 0:
raise ProcessTerminatedError(str(pid))
else:
# Get the RSS from output that looks like this:
# USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
# user 47102 0.0 0.1 2437000 4496 s002 S+ 7:04PM 0:00.12 python2.6
return int(stdout.splitlines()[1].split()[5]) * 1024
class BaseS3CLICommand(unittest.TestCase):
"""Base class for aws s3 command.
This contains convenience functions to make writing these tests easier
and more streamlined.
"""
_PUT_HEAD_SHARED_EXTRAS = [
'SSECustomerAlgorithm',
'SSECustomerKey',
'SSECustomerKeyMD5',
'RequestPayer',
]
def setUp(self):
self.files = FileCreator()
self.session = botocore.session.get_session()
self.regions = {}
self.region = 'us-west-2'
self.client = self.session.create_client('s3', region_name=self.region)
self.extra_setup()
def extra_setup(self):
# Subclasses can use this to define extra setup steps.
pass
def tearDown(self):
self.files.remove_all()
self.extra_teardown()
def extra_teardown(self):
# Subclasses can use this to define extra teardown steps.
pass
def override_parser(self, **kwargs):
factory = self.session.get_component('response_parser_factory')
factory.set_parser_defaults(**kwargs)
def create_client_for_bucket(self, bucket_name):
region = self.regions.get(bucket_name, self.region)
client = self.session.create_client('s3', region_name=region)
return client
def assert_key_contents_equal(self, bucket, key, expected_contents):
self.wait_until_key_exists(bucket, key)
if isinstance(expected_contents, six.BytesIO):
expected_contents = expected_contents.getvalue().decode('utf-8')
actual_contents = self.get_key_contents(bucket, key)
# The contents can be huge so we try to give helpful error messages
# without necessarily printing the actual contents.
self.assertEqual(len(actual_contents), len(expected_contents))
if actual_contents != expected_contents:
self.fail("Contents for %s/%s do not match (but they "
"have the same length)" % (bucket, key))
def create_bucket(self, name=None, region=None):
if not region:
region = self.region
bucket_name = create_bucket(self.session, name, region)
self.regions[bucket_name] = region
self.addCleanup(self.delete_bucket, bucket_name)
# Wait for the bucket to exist before letting it be used.
self.wait_bucket_exists(bucket_name)
return bucket_name
def put_object(self, bucket_name, key_name, contents='', extra_args=None):
client = self.create_client_for_bucket(bucket_name)
call_args = {
'Bucket': bucket_name,
'Key': key_name, 'Body': contents
}
if extra_args is not None:
call_args.update(extra_args)
response = client.put_object(**call_args)
self.addCleanup(self.delete_key, bucket_name, key_name)
extra_head_params = {}
if extra_args:
extra_head_params = dict(
(k, v) for (k, v) in extra_args.items()
if k in self._PUT_HEAD_SHARED_EXTRAS
)
self.wait_until_key_exists(
bucket_name,
key_name,
extra_params=extra_head_params,
)
return response
def delete_bucket(self, bucket_name, attempts=5, delay=5):
self.remove_all_objects(bucket_name)
client = self.create_client_for_bucket(bucket_name)
# There's a chance that, even though the bucket has been used
# several times, the delete will fail due to eventual consistency
# issues.
attempts_remaining = attempts
while True:
attempts_remaining -= 1
try:
client.delete_bucket(Bucket=bucket_name)
break
except client.exceptions.NoSuchBucket:
if self.bucket_not_exists(bucket_name):
# Fast fail when the NoSuchBucket error is real.
break
if attempts_remaining <= 0:
raise
time.sleep(delay)
self.regions.pop(bucket_name, None)
def remove_all_objects(self, bucket_name):
client = self.create_client_for_bucket(bucket_name)
paginator = client.get_paginator('list_objects')
pages = paginator.paginate(Bucket=bucket_name)
key_names = []
for page in pages:
key_names += [obj['Key'] for obj in page.get('Contents', [])]
for key_name in key_names:
self.delete_key(bucket_name, key_name)
def delete_key(self, bucket_name, key_name):
client = self.create_client_for_bucket(bucket_name)
response = client.delete_object(Bucket=bucket_name, Key=key_name)
def get_key_contents(self, bucket_name, key_name):
self.wait_until_key_exists(bucket_name, key_name)
client = self.create_client_for_bucket(bucket_name)
response = client.get_object(Bucket=bucket_name, Key=key_name)
return response['Body'].read().decode('utf-8')
def wait_bucket_exists(self, bucket_name, min_successes=3):
client = self.create_client_for_bucket(bucket_name)
waiter = client.get_waiter('bucket_exists')
consistency_waiter = ConsistencyWaiter(
min_successes=min_successes, delay_initial_poll=True)
consistency_waiter.wait(
lambda: waiter.wait(Bucket=bucket_name) is None
)
def bucket_not_exists(self, bucket_name):
client = self.create_client_for_bucket(bucket_name)
try:
client.head_bucket(Bucket=bucket_name)
return True
except ClientError as error:
if error.response.get('Code') == '404':
return False
raise
def key_exists(self, bucket_name, key_name, min_successes=3):
try:
self.wait_until_key_exists(
bucket_name, key_name, min_successes=min_successes)
return True
except (ClientError, WaiterError):
return False
def key_not_exists(self, bucket_name, key_name, min_successes=3):
try:
self.wait_until_key_not_exists(
bucket_name, key_name, min_successes=min_successes)
return True
except (ClientError, WaiterError):
return False
def list_buckets(self):
response = self.client.list_buckets()
return response['Buckets']
def content_type_for_key(self, bucket_name, key_name):
parsed = self.head_object(bucket_name, key_name)
return parsed['ContentType']
def head_object(self, bucket_name, key_name):
client = self.create_client_for_bucket(bucket_name)
response = client.head_object(Bucket=bucket_name, Key=key_name)
return response
def wait_until_key_exists(self, bucket_name, key_name, extra_params=None,
min_successes=3):
self._wait_for_key(bucket_name, key_name, extra_params,
min_successes, exists=True)
def wait_until_key_not_exists(self, bucket_name, key_name, extra_params=None,
min_successes=3):
self._wait_for_key(bucket_name, key_name, extra_params,
min_successes, exists=False)
def _wait_for_key(self, bucket_name, key_name, extra_params=None,
min_successes=3, exists=True):
client = self.create_client_for_bucket(bucket_name)
if exists:
waiter = client.get_waiter('object_exists')
else:
waiter = client.get_waiter('object_not_exists')
params = {'Bucket': bucket_name, 'Key': key_name}
if extra_params is not None:
params.update(extra_params)
for _ in range(min_successes):
waiter.wait(**params)
def assert_no_errors(self, p):
self.assertEqual(
p.rc, 0,
"Non zero rc (%s) received: %s" % (p.rc, p.stdout + p.stderr))
self.assertNotIn("Error:", p.stderr)
self.assertNotIn("failed:", p.stderr)
self.assertNotIn("client error", p.stderr)
self.assertNotIn("server error", p.stderr)
class StringIOWithFileNo(StringIO):
def fileno(self):
return 0
class TestEventHandler(object):
def __init__(self, handler=None):
self._handler = handler
self._called = False
@property
def called(self):
return self._called
def handler(self, **kwargs):
self._called = True
if self._handler is not None:
self._handler(**kwargs)
class ConsistencyWaiterException(Exception):
pass
class ConsistencyWaiter(object):
"""
A waiter class for some check to reach a consistent state.
:type min_successes: int
:param min_successes: The minimum number of successful check calls to
treat the check as stable. Default of 1 success.
:type max_attempts: int
:param min_successes: The maximum number of times to attempt calling
the check. Default of 20 attempts.
:type delay: int
:param delay: The number of seconds to delay the next API call after a
failed check call. Default of 5 seconds.
"""
def __init__(self, min_successes=1, max_attempts=20, delay=5,
delay_initial_poll=False):
self.min_successes = min_successes
self.max_attempts = max_attempts
self.delay = delay
self.delay_initial_poll = delay_initial_poll
def wait(self, check, *args, **kwargs):
"""
Wait until the check succeeds the configured number of times
:type check: callable
:param check: A callable that returns True or False to indicate
if the check succeeded or failed.
:type args: list
:param args: Any ordered arguments to be passed to the check.
:type kwargs: dict
:param kwargs: Any keyword arguments to be passed to the check.
"""
attempts = 0
successes = 0
if self.delay_initial_poll:
time.sleep(self.delay)
while attempts < self.max_attempts:
attempts += 1
if check(*args, **kwargs):
successes += 1
if successes >= self.min_successes:
return
else:
time.sleep(self.delay)
fail_msg = self._fail_message(attempts, successes)
raise ConsistencyWaiterException(fail_msg)
def _fail_message(self, attempts, successes):
format_args = (attempts, successes)
return 'Failed after %s attempts, only had %s successes' % format_args
awscli-1.17.14/awscli/schema.py 0000644 0000000 0000000 00000014374 13620325556 016221 0 ustar root root 0000000 0000000 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from collections import defaultdict
class ParameterRequiredError(ValueError):
pass
class SchemaTransformer(object):
"""
Transforms a custom argument parameter schema into an internal
model representation so that it can be treated like a normal
service model. This includes shorthand JSON parsing and
automatic documentation generation. The format of the schema
follows JSON Schema, which can be found here:
http://json-schema.org/
Only a relevant subset of features is supported here:
* Types: `object`, `array`, `string`, `integer`, `boolean`
* Properties: `type`, `description`, `required`, `enum`
For example::
{
"type": "array",
"items": {
"type": "object",
"properties": {
"arg1": {
"type": "string",
"required": True,
"enum": [
"Value1",
"Value2",
"Value3"
]
},
"arg2": {
"type": "integer",
"description": "The number of calls"
}
}
}
}
Assuming the schema is applied to a service named `foo`, with an
operation named `bar` and that the parameter is called `baz`, you
could call it with the shorthand JSON like so::
$ aws foo bar --baz arg1=Value1,arg2=5 arg1=Value2
"""
JSON_SCHEMA_TO_AWS_TYPES = {
'object': 'structure',
'array': 'list',
}
def __init__(self):
self._shape_namer = ShapeNameGenerator()
def transform(self, schema):
"""Convert JSON schema to the format used internally by the AWS CLI.
:type schema: dict
:param schema: The JSON schema describing the argument model.
:rtype: dict
:return: The transformed model in a form that can be consumed
internally by the AWS CLI. The dictionary returned will
have a list of shapes, where the shape representing the
transformed schema is always named ``InputShape`` in the
returned dictionary.
"""
shapes = {}
self._transform(schema, shapes, 'InputShape')
return shapes
def _transform(self, schema, shapes, shape_name):
if 'type' not in schema:
raise ParameterRequiredError("Missing required key: 'type'")
if schema['type'] == 'object':
shapes[shape_name] = self._transform_structure(schema, shapes)
elif schema['type'] == 'array':
shapes[shape_name] = self._transform_list(schema, shapes)
elif schema['type'] == 'map':
shapes[shape_name] = self._transform_map(schema, shapes)
else:
shapes[shape_name] = self._transform_scalar(schema)
return shapes
def _transform_scalar(self, schema):
return self._populate_initial_shape(schema)
def _transform_structure(self, schema, shapes):
# Transforming a structure involves:
# 1. Generating the shape definition for the structure
# 2. Generating the shape definitions for its members
structure_shape = self._populate_initial_shape(schema)
members = {}
required_members = []
for key, value in schema['properties'].items():
current_type_name = self._json_schema_to_aws_type(value)
current_shape_name = self._shape_namer.new_shape_name(
current_type_name)
members[key] = {'shape': current_shape_name}
if value.get('required', False):
required_members.append(key)
self._transform(value, shapes, current_shape_name)
structure_shape['members'] = members
if required_members:
structure_shape['required'] = required_members
return structure_shape
def _transform_map(self, schema, shapes):
structure_shape = self._populate_initial_shape(schema)
for attribute in ['key', 'value']:
type_name = self._json_schema_to_aws_type(schema[attribute])
shape_name = self._shape_namer.new_shape_name(type_name)
structure_shape[attribute] = {'shape': shape_name}
self._transform(schema[attribute], shapes, shape_name)
return structure_shape
def _transform_list(self, schema, shapes):
# Transforming a structure involves:
# 1. Generating the shape definition for the structure
# 2. Generating the shape definitions for its 'items' member
list_shape = self._populate_initial_shape(schema)
member_type = self._json_schema_to_aws_type(schema['items'])
member_shape_name = self._shape_namer.new_shape_name(member_type)
list_shape['member'] = {'shape': member_shape_name}
self._transform(schema['items'], shapes, member_shape_name)
return list_shape
def _populate_initial_shape(self, schema):
shape = {'type': self._json_schema_to_aws_type(schema)}
if 'description' in schema:
shape['documentation'] = schema['description']
if 'enum' in schema:
shape['enum'] = schema['enum']
return shape
def _json_schema_to_aws_type(self, schema):
if 'type' not in schema:
raise ParameterRequiredError("Missing required key: 'type'")
type_name = schema['type']
return self.JSON_SCHEMA_TO_AWS_TYPES.get(type_name, type_name)
class ShapeNameGenerator(object):
def __init__(self):
self._name_cache = defaultdict(int)
def new_shape_name(self, type_name):
self._name_cache[type_name] += 1
current_index = self._name_cache[type_name]
return '%sType%s' % (type_name.capitalize(), current_index)
awscli-1.17.14/awscli/data/ 0000755 0000000 0000000 00000000000 13620325757 015312 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/data/cli.json 0000644 0000000 0000000 00000005637 13620325554 016762 0 ustar root root 0000000 0000000 {
"description": "The AWS Command Line Interface is a unified tool to manage your AWS services.",
"synopsis": "aws [options] [parameters]",
"help_usage": "Use *aws command help* for information on a specific command. Use *aws help topics* to view a list of available help topics. The synopsis for each command shows its parameters and their usage. Optional parameters are shown in square brackets.",
"options": {
"debug": {
"action": "store_true",
"help": "Turn on debug logging.
"
},
"endpoint-url": {
"help": "Override command's default URL with the given URL.
"
},
"no-verify-ssl": {
"action": "store_false",
"dest": "verify_ssl",
"help": "By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.
"
},
"no-paginate": {
"action": "store_false",
"help": "Disable automatic pagination.
",
"dest": "paginate"
},
"output": {
"choices": [
"json",
"text",
"table"
],
"help": "The formatting style for command output.
"
},
"query": {
"help": "A JMESPath query to use in filtering the response data.
"
},
"profile": {
"help": "Use a specific profile from your credential file.
"
},
"region": {
"help": "The region to use. Overrides config/env settings.
"
},
"version": {
"action": "version",
"help": "Display the version of this tool.
"
},
"color": {
"choices": ["on", "off", "auto"],
"default": "auto",
"help": "Turn on/off color output.
"
},
"no-sign-request": {
"action": "store_false",
"dest": "sign_request",
"help": "Do not sign requests. Credentials will not be loaded if this argument is provided.
"
},
"ca-bundle": {
"dest": "ca_bundle",
"help": "The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.
"
},
"cli-read-timeout": {
"dest": "read_timeout",
"type": "int",
"help": "The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and not timeout.
"
},
"cli-connect-timeout": {
"dest": "connect_timeout",
"type": "int",
"help": "The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout.
"
}
}
}
awscli-1.17.14/awscli/__init__.py 0000644 0000000 0000000 00000002672 13620325757 016521 0 ustar root root 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
AWSCLI
----
A Universal Command Line Environment for Amazon Web Services.
"""
import os
__version__ = '1.17.14'
#
# Get our data path to be added to botocore's search path
#
_awscli_data_path = []
if 'AWS_DATA_PATH' in os.environ:
for path in os.environ['AWS_DATA_PATH'].split(os.pathsep):
path = os.path.expandvars(path)
path = os.path.expanduser(path)
_awscli_data_path.append(path)
_awscli_data_path.append(
os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data')
)
os.environ['AWS_DATA_PATH'] = os.pathsep.join(_awscli_data_path)
EnvironmentVariables = {
'ca_bundle': ('ca_bundle', 'AWS_CA_BUNDLE', None, None),
'output': ('output', 'AWS_DEFAULT_OUTPUT', 'json', None),
}
SCALAR_TYPES = set([
'string', 'float', 'integer', 'long', 'boolean', 'double',
'blob', 'timestamp'
])
COMPLEX_TYPES = set(['structure', 'map', 'list'])
awscli-1.17.14/awscli/alias.py 0000644 0000000 0000000 00000025716 13620325554 016052 0 ustar root root 0000000 0000000 # Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import logging
import os
import shlex
import subprocess
from botocore.configloader import raw_config_parse
from awscli.compat import compat_shell_quote
from awscli.commands import CLICommand
from awscli.utils import emit_top_level_args_parsed_event
LOG = logging.getLogger(__name__)
class InvalidAliasException(Exception):
pass
class AliasLoader(object):
def __init__(self,
alias_filename=os.path.expanduser(
os.path.join('~', '.aws', 'cli', 'alias'))):
"""Interface for loading and interacting with alias file
:param alias_filename: The name of the file to load aliases from.
This file must be an INI file.
"""
self._filename = alias_filename
self._aliases = None
def _build_aliases(self):
self._aliases = self._load_aliases()
self._cleanup_alias_values(self._aliases.get('toplevel', {}))
def _load_aliases(self):
if os.path.exists(self._filename):
return raw_config_parse(
self._filename, parse_subsections=False)
return {'toplevel': {}}
def _cleanup_alias_values(self, aliases):
for alias in aliases:
# Beginning and end line separators should not be included
# in the internal representation of the alias value.
aliases[alias] = aliases[alias].strip()
def get_aliases(self):
if self._aliases is None:
self._build_aliases()
return self._aliases.get('toplevel', {})
class AliasCommandInjector(object):
def __init__(self, session, alias_loader):
"""Injects alias commands for a command table
:type session: botocore.session.Session
:param session: The botocore session
:type alias_loader: awscli.alias.AliasLoader
:param alias_loader: The alias loader to use
"""
self._session = session
self._alias_loader = alias_loader
def inject_aliases(self, command_table, parser):
for alias_name, alias_value in \
self._alias_loader.get_aliases().items():
if alias_value.startswith('!'):
alias_cmd = ExternalAliasCommand(alias_name, alias_value)
else:
service_alias_cmd_args = [
alias_name, alias_value, self._session, command_table,
parser
]
# If the alias name matches something already in the
# command table provide the command it is about
# to clobber as a possible reference that it will
# need to proxy to.
if alias_name in command_table:
service_alias_cmd_args.append(
command_table[alias_name])
alias_cmd = ServiceAliasCommand(*service_alias_cmd_args)
command_table[alias_name] = alias_cmd
class BaseAliasCommand(CLICommand):
_UNDOCUMENTED = True
def __init__(self, alias_name, alias_value):
"""Base class for alias command
:type alias_name: string
:param alias_name: The name of the alias
:type alias_value: string
:param alias_value: The parsed value of the alias. This can be
retrieved from `AliasLoader.get_aliases()[alias_name]`
"""
self._alias_name = alias_name
self._alias_value = alias_value
def __call__(self, args, parsed_args):
raise NotImplementedError('__call__')
@property
def name(self):
return self._alias_name
@name.setter
def name(self, value):
self._alias_name = value
class ServiceAliasCommand(BaseAliasCommand):
UNSUPPORTED_GLOBAL_PARAMETERS = [
'debug',
'profile'
]
def __init__(self, alias_name, alias_value, session, command_table,
parser, shadow_proxy_command=None):
"""Command for a `toplevel` subcommand alias
:type alias_name: string
:param alias_name: The name of the alias
:type alias_value: string
:param alias_value: The parsed value of the alias. This can be
retrieved from `AliasLoader.get_aliases()[alias_name]`
:type session: botocore.session.Session
:param session: The botocore session
:type command_table: dict
:param command_table: The command table containing all of the
possible service command objects that a particular alias could
redirect to.
:type parser: awscli.argparser.MainArgParser
:param parser: The parser to parse commands provided at the top level
of a CLI command which includes service commands and global
parameters. This is used to parse the service commmand and any
global parameters from the alias's value.
:type shadow_proxy_command: CLICommand
:param shadow_proxy_command: A built-in command that
potentially shadows the alias in name. If the alias
references this command in its value, the alias should proxy
to this command as oppposed to proxy to itself in the command
table
"""
super(ServiceAliasCommand, self).__init__(alias_name, alias_value)
self._session = session
self._command_table = command_table
self._parser = parser
self._shadow_proxy_command = shadow_proxy_command
def __call__(self, args, parsed_globals):
alias_args = self._get_alias_args()
parsed_alias_args, remaining = self._parser.parse_known_args(
alias_args)
self._update_parsed_globals(parsed_alias_args, parsed_globals)
# Take any of the remaining arguments that were not parsed out and
# prepend them to the remaining args provided to the alias.
remaining.extend(args)
LOG.debug(
'Alias %r passing on arguments: %r to %r command',
self._alias_name, remaining, parsed_alias_args.command)
# Pass the update remaing args and global args to the service command
# the alias proxied to.
command = self._command_table[parsed_alias_args.command]
if self._shadow_proxy_command:
shadow_name = self._shadow_proxy_command.name
# Use the shadow command only if the aliases value
# uses that command indicating it needs to proxy over to
# a built-in command.
if shadow_name == parsed_alias_args.command:
LOG.debug(
'Using shadowed command object: %s '
'for alias: %s', self._shadow_proxy_command,
self._alias_name
)
command = self._shadow_proxy_command
return command(remaining, parsed_globals)
def _get_alias_args(self):
try:
alias_args = shlex.split(self._alias_value)
except ValueError as e:
raise InvalidAliasException(
'Value of alias "%s" could not be parsed. '
'Received error: %s when parsing:\n%s' % (
self._alias_name, e, self._alias_value)
)
alias_args = [arg.strip(os.linesep) for arg in alias_args]
LOG.debug(
'Expanded subcommand alias %r with value: %r to: %r',
self._alias_name, self._alias_value, alias_args
)
return alias_args
def _update_parsed_globals(self, parsed_alias_args, parsed_globals):
global_params_to_update = self._get_global_parameters_to_update(
parsed_alias_args)
# Emit the top level args parsed event to ensure all possible
# customizations that typically get applied are applied to the
# global parameters provided in the alias before updating
# the original provided global parameter values
# and passing those onto subsequent commands.
emit_top_level_args_parsed_event(self._session, parsed_alias_args)
for param_name in global_params_to_update:
updated_param_value = getattr(parsed_alias_args, param_name)
setattr(parsed_globals, param_name, updated_param_value)
def _get_global_parameters_to_update(self, parsed_alias_args):
# Retrieve a list of global parameters that the newly parsed args
# from the alias will have to clobber from the originally provided
# parsed globals.
global_params_to_update = []
for parsed_param, value in vars(parsed_alias_args).items():
# To determine which parameters in the alias were global values
# compare the parsed alias parameters to the default as
# specified by the parser. If the parsed values from the alias
# differs from the default value in the parser,
# that global parameter must have been provided in the alias.
if self._parser.get_default(parsed_param) != value:
if parsed_param in self.UNSUPPORTED_GLOBAL_PARAMETERS:
raise InvalidAliasException(
'Global parameter "--%s" detected in alias "%s" '
'which is not support in subcommand aliases.' % (
parsed_param, self._alias_name))
else:
global_params_to_update.append(parsed_param)
return global_params_to_update
class ExternalAliasCommand(BaseAliasCommand):
def __init__(self, alias_name, alias_value, invoker=subprocess.call):
"""Command for external aliases
Executes command external of CLI as opposed to being a proxy
to another command.
:type alias_name: string
:param alias_name: The name of the alias
:type alias_value: string
:param alias_value: The parsed value of the alias. This can be
retrieved from `AliasLoader.get_aliases()[alias_name]`
:type invoker: callable
:param invoker: Callable to run arguments of external alias. The
signature should match that of ``subprocess.call``
"""
self._alias_name = alias_name
self._alias_value = alias_value
self._invoker = invoker
def __call__(self, args, parsed_globals):
command_components = [
self._alias_value[1:]
]
command_components.extend(compat_shell_quote(a) for a in args)
command = ' '.join(command_components)
LOG.debug(
'Using external alias %r with value: %r to run: %r',
self._alias_name, self._alias_value, command)
return self._invoker(command, shell=True)
awscli-1.17.14/awscli/topictags.py 0000644 0000000 0000000 00000030553 13620325556 016753 0 ustar root root 0000000 0000000 # Copyright (c) 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
import os
import json
import docutils.core
class TopicTagDB(object):
"""This class acts like a database for the tags of all available topics.
A tag is an element in a topic reStructured text file that contains
information about a topic. Information can range from titles to even
related CLI commands. Here are all of the currently supported tags:
Tag Meaning Required?
--- ------- ---------
:title: The title of the topic Yes
:description: Sentence description of topic Yes
:category: Category topic falls under Yes
:related topic: A related topic No
:related command: A related command No
To see examples of how to specify tags, look in the directory
awscli/topics. Note that tags can have multiple values by delimiting
values with commas. All tags must be on their own line in the file.
This class can load a JSON index represeting all topics and their tags,
scan all of the topics and store the values of their tags, retrieve the
tag value for a particular topic, query for all the topics with a specific
tag and/or value, and save the loaded data back out to a JSON index.
The structure of the database can be viewed as a python dictionary:
{'topic-name-1': {
'title': ['My First Topic Title'],
'description': ['This describes my first topic'],
'category': ['General Topics', 'S3'],
'related command': ['aws s3'],
'related topic': ['topic-name-2']
},
'topic-name-2': { .....
}
The keys of the dictionary are the CLI command names of the topics. These
names are based off the name of the reStructed text file that corresponds
to the topic. The value of these keys are dictionaries of tags, where the
tags are keys and their value is a list of values for that tag. Note
that all tag values for a specific tag of a specific topic are unique.
"""
VALID_TAGS = ['category', 'description', 'title', 'related topic',
'related command']
# The default directory to look for topics.
TOPIC_DIR = os.path.join(
os.path.dirname(
os.path.abspath(__file__)), 'topics')
# The default JSON index to load.
JSON_INDEX = os.path.join(TOPIC_DIR, 'topic-tags.json')
def __init__(self, tag_dictionary=None, index_file=JSON_INDEX,
topic_dir=TOPIC_DIR):
"""
:param index_file: The path to a specific JSON index to load.
If nothing is specified it will default to the default JSON
index at ``JSON_INDEX``.
:param topic_dir: The path to the directory where to retrieve
the topic source files. Note that if you store your index
in this directory, you must supply the full path to the json
index to the ``file_index`` argument as it may not be ignored when
listing topic source files. If nothing is specified it will
default to the default directory at ``TOPIC_DIR``.
"""
self._tag_dictionary = tag_dictionary
if self._tag_dictionary is None:
self._tag_dictionary = {}
self._index_file = index_file
self._topic_dir = topic_dir
@property
def index_file(self):
return self._index_file
@index_file.setter
def index_file(self, value):
self._index_file = value
@property
def topic_dir(self):
return self._topic_dir
@topic_dir.setter
def topic_dir(self, value):
self._topic_dir = value
@property
def valid_tags(self):
return self.VALID_TAGS
def load_json_index(self):
"""Loads a JSON file into the tag dictionary."""
with open(self.index_file, 'r') as f:
self._tag_dictionary = json.load(f)
def save_to_json_index(self):
"""Writes the loaded data back out to the JSON index."""
with open(self.index_file, 'w') as f:
f.write(json.dumps(self._tag_dictionary, indent=4, sort_keys=True))
def get_all_topic_names(self):
"""Retrieves all of the topic names of the loaded JSON index"""
return list(self._tag_dictionary)
def get_all_topic_src_files(self):
"""Retrieves the file paths of all the topics in directory"""
topic_full_paths = []
topic_names = os.listdir(self.topic_dir)
for topic_name in topic_names:
# Do not try to load hidden files.
if not topic_name.startswith('.'):
topic_full_path = os.path.join(self.topic_dir, topic_name)
# Ignore the JSON Index as it is stored with topic files.
if topic_full_path != self.index_file:
topic_full_paths.append(topic_full_path)
return topic_full_paths
def scan(self, topic_files):
"""Scan in the tags of a list of topics into memory.
Note that if there are existing values in an entry in the database
of tags, they will not be overwritten. Any new values will be
appended to original values.
:param topic_files: A list of paths to topics to scan into memory.
"""
for topic_file in topic_files:
with open(topic_file, 'r') as f:
# Parse out the name of the topic
topic_name = self._find_topic_name(topic_file)
# Add the topic to the dictionary if it does not exist
self._add_topic_name_to_dict(topic_name)
topic_content = f.read()
# Record the tags and the values
self._add_tag_and_values_from_content(
topic_name, topic_content)
def _find_topic_name(self, topic_src_file):
# Get the name of each of these files
topic_name_with_ext = os.path.basename(topic_src_file)
# Strip of the .rst extension from the files
return topic_name_with_ext[:-4]
def _add_tag_and_values_from_content(self, topic_name, content):
# Retrieves tags and values and adds from content of topic file
# to the dictionary.
doctree = docutils.core.publish_doctree(content).asdom()
fields = doctree.getElementsByTagName('field')
for field in fields:
field_name = field.getElementsByTagName('field_name')[0]
field_body = field.getElementsByTagName('field_body')[0]
# Get the tag.
tag = field_name.firstChild.nodeValue
if tag in self.VALID_TAGS:
# Get the value of the tag.
values = field_body.childNodes[0].firstChild.nodeValue
# Seperate values into a list by splitting at commas
tag_values = values.split(',')
# Strip the white space around each of these values.
for i in range(len(tag_values)):
tag_values[i] = tag_values[i].strip()
self._add_tag_to_dict(topic_name, tag, tag_values)
else:
raise ValueError(
"Tag %s found under topic %s is not supported."
% (tag, topic_name)
)
def _add_topic_name_to_dict(self, topic_name):
# This method adds a topic name to the dictionary if it does not
# already exist
# Check if the topic is in the topic tag dictionary
if self._tag_dictionary.get(topic_name, None) is None:
self._tag_dictionary[topic_name] = {}
def _add_tag_to_dict(self, topic_name, tag, values):
# This method adds a tag to the dictionary given its tag and value
# If there are existing values associated to the tag it will add
# only values that previously did not exist in the list.
# Add topic to the topic tag dictionary if needed.
self._add_topic_name_to_dict(topic_name)
# Get all of a topics tags
topic_tags = self._tag_dictionary[topic_name]
self._add_key_values(topic_tags, tag, values)
def _add_key_values(self, dictionary, key, values):
# This method adds a value to a dictionary given a key.
# If there are existing values associated to the key it will add
# only values that previously did not exist in the list. All values
# in the dictionary should be lists
if dictionary.get(key, None) is None:
dictionary[key] = []
for value in values:
if value not in dictionary[key]:
dictionary[key].append(value)
def query(self, tag, values=None):
"""Groups topics by a specific tag and/or tag value.
:param tag: The name of the tag to query for.
:param values: A list of tag values to only include in query.
If no value is provided, all possible tag values will be returned
:rtype: dictionary
:returns: A dictionary whose keys are all possible tag values and the
keys' values are all of the topic names that had that tag value
in its source file. For example, if ``topic-name-1`` had the tag
``:category: foo, bar`` and ``topic-name-2`` had the tag
``:category: foo`` and we queried based on ``:category:``,
the returned dictionary would be:
{
'foo': ['topic-name-1', 'topic-name-2'],
'bar': ['topic-name-1']
}
"""
query_dict = {}
for topic_name in self._tag_dictionary.keys():
# Get the tag values for a specified tag of the topic
if self._tag_dictionary[topic_name].get(tag, None) is not None:
tag_values = self._tag_dictionary[topic_name][tag]
for tag_value in tag_values:
# Add the values to dictionary to be returned if
# no value constraints are provided or if the tag value
# falls in the allowed tag values.
if values is None or tag_value in values:
self._add_key_values(query_dict,
key=tag_value,
values=[topic_name])
return query_dict
def get_tag_value(self, topic_name, tag, default_value=None):
"""Get a value of a tag for a topic
:param topic_name: The name of the topic
:param tag: The name of the tag to retrieve
:param default_value: The value to return if the topic and/or tag
does not exist.
"""
if topic_name in self._tag_dictionary:
return self._tag_dictionary[topic_name].get(tag, default_value)
return default_value
def get_tag_single_value(self, topic_name, tag):
"""Get the value of a tag for a topic (i.e. not wrapped in a list)
:param topic_name: The name of the topic
:param tag: The name of the tag to retrieve
:raises VauleError: Raised if there is not exactly one value
in the list value.
"""
value = self.get_tag_value(topic_name, tag)
if value is not None:
if len(value) != 1:
raise ValueError(
'Tag %s for topic %s has value %. Expected a single '
'element in list.' % (tag, topic_name, value)
)
value = value[0]
return value
awscli-1.17.14/awscli/completer.py 0000755 0000000 0000000 00000013363 13620325554 016751 0 ustar root root 0000000 0000000 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import awscli.clidriver
import sys
import logging
import copy
LOG = logging.getLogger(__name__)
class Completer(object):
def __init__(self, driver=None):
if driver is not None:
self.driver = driver
else:
self.driver = awscli.clidriver.create_clidriver()
self.main_help = self.driver.create_help_command()
self.main_options = self._get_documented_completions(
self.main_help.arg_table)
def complete(self, cmdline, point=None):
if point is None:
point = len(cmdline)
args = cmdline[0:point].split()
current_arg = args[-1]
cmd_args = [w for w in args if not w.startswith('-')]
opts = [w for w in args if w.startswith('-')]
cmd_name, cmd = self._get_command(self.main_help, cmd_args)
subcmd_name, subcmd = self._get_command(cmd, cmd_args)
if cmd_name is None:
# If we didn't find any command names in the cmdline
# lets try to complete provider options
return self._complete_provider(current_arg, opts)
elif subcmd_name is None:
return self._complete_command(cmd_name, cmd, current_arg, opts)
return self._complete_subcommand(subcmd_name, subcmd, current_arg, opts)
def _complete_command(self, command_name, command_help, current_arg, opts):
if current_arg == command_name:
if command_help:
return self._get_documented_completions(
command_help.command_table)
elif current_arg.startswith('-'):
return self._find_possible_options(current_arg, opts)
elif command_help is not None:
# See if they have entered a partial command name
return self._get_documented_completions(
command_help.command_table, current_arg)
return []
def _complete_subcommand(self, subcmd_name, subcmd_help, current_arg, opts):
if current_arg != subcmd_name and current_arg.startswith('-'):
return self._find_possible_options(current_arg, opts, subcmd_help)
return []
def _complete_option(self, option_name):
if option_name == '--endpoint-url':
return []
if option_name == '--output':
cli_data = self.driver.session.get_data('cli')
return cli_data['options']['output']['choices']
if option_name == '--profile':
return self.driver.session.available_profiles
return []
def _complete_provider(self, current_arg, opts):
if current_arg.startswith('-'):
return self._find_possible_options(current_arg, opts)
elif current_arg == 'aws':
return self._get_documented_completions(
self.main_help.command_table)
else:
# Otherwise, see if they have entered a partial command name
return self._get_documented_completions(
self.main_help.command_table, current_arg)
def _get_command(self, command_help, command_args):
if command_help is not None and command_help.command_table is not None:
for command_name in command_args:
if command_name in command_help.command_table:
cmd_obj = command_help.command_table[command_name]
return command_name, cmd_obj.create_help_command()
return None, None
def _get_documented_completions(self, table, startswith=None):
names = []
for key, command in table.items():
if getattr(command, '_UNDOCUMENTED', False):
# Don't tab complete undocumented commands/params
continue
if startswith is not None and not key.startswith(startswith):
continue
if getattr(command, 'positional_arg', False):
continue
names.append(key)
return names
def _find_possible_options(self, current_arg, opts, subcmd_help=None):
all_options = copy.copy(self.main_options)
if subcmd_help is not None:
all_options += self._get_documented_completions(
subcmd_help.arg_table)
for option in opts:
# Look through list of options on cmdline. If there are
# options that have already been specified and they are
# not the current word, remove them from list of possibles.
if option != current_arg:
stripped_opt = option.lstrip('-')
if stripped_opt in all_options:
all_options.remove(stripped_opt)
cw = current_arg.lstrip('-')
possibilities = ['--' + n for n in all_options if n.startswith(cw)]
if len(possibilities) == 1 and possibilities[0] == current_arg:
return self._complete_option(possibilities[0])
return possibilities
def complete(cmdline, point):
choices = Completer().complete(cmdline, point)
print(' \n'.join(choices))
if __name__ == '__main__':
if len(sys.argv) == 3:
cmdline = sys.argv[1]
point = int(sys.argv[2])
elif len(sys.argv) == 2:
cmdline = sys.argv[1]
else:
print('usage: %s ' % sys.argv[0])
sys.exit(1)
print(complete(cmdline, point))
awscli-1.17.14/awscli/topics/ 0000755 0000000 0000000 00000000000 13620325757 015702 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/topics/s3-faq.rst 0000644 0000000 0000000 00000005006 13620325556 017524 0 ustar root root 0000000 0000000 :title: AWS CLI S3 FAQ
:description: Frequented Asked Questions for Amazon S3 in the AWS CLI
:category: S3
:related command: s3 cp, s3 sync, s3 mv, s3 rm
S3 FAQ
======
Below are common questions regarding the use of Amazon S3 in the AWS CLI.
Q: Does the AWS CLI validate checksums?
---------------------------------------
The AWS CLI will perform checksum validation for uploading and downloading
files in specific scenarios.
Upload
~~~~~~
The AWS CLI will calculate and auto-populate the ``Content-MD5`` header for
both standard and multipart uploads. If the checksum that S3 calculates does
not match the ``Content-MD5`` provided, S3 will not store the object and
instead will return an error message back the AWS CLI. The AWS CLI will retry
this error up to 5 times before giving up. On the case that any files fail to
transfer successfully to S3, the AWS CLI will exit with a non zero RC.
See ``aws help return-codes`` for more information.
If the upload request is signed with Signature Version 4, then a
``Content-MD5`` is not calculated. Instead, the AWS CLI uses the
``x-amz-content-sha256`` header as a checksum instead of ``Content-MD5``.
The AWS CLI will use Signature Version 4 for S3 in several cases:
* You're using an AWS region that only supports Signature Version 4. This
includes ``eu-central-1`` and ``ap-northeast-2``.
* You explicitly opt in and set ``signature_version = s3v4`` in your
``~/.aws/config`` file.
Note that the AWS CLI will add a ``Content-MD5`` header for both
the high level ``aws s3`` commands that perform uploads
(``aws s3 cp``, ``aws s3 sync``) as well as the low level ``s3api``
commands including ``aws s3api put-object`` and ``aws s3api upload-part``.
Download
~~~~~~~~
The AWS CLI will attempt to verify the checksum of downloads when possible,
based on the ``ETag`` header returned from a ``GetObject`` request that's
performed whenever the AWS CLI downloads objects from S3. If the calculated
MD5 checksum does not match the expected checksum, the file is deleted
and the download is retried. This process is retried up to 3 times.
If a downloads fails, the AWS CLI will exit with a non zero RC.
See ``aws help return-codes`` for more information.
There are several conditions where the CLI is *not* able to verify
checksums on downloads:
* If the object was uploaded via multipart uploads
* If the object was uploaded using server side encryption with KMS
* If the object was uploaded using a customer provided encryption key
* If the object is downloaded using range ``GetObject`` requests
awscli-1.17.14/awscli/topics/topic-tags.json 0000644 0000000 0000000 00000003051 13620325556 020643 0 ustar root root 0000000 0000000 {
"config-vars": {
"category": [
"General"
],
"description": [
"Configuration Variables for the AWS CLI"
],
"related command": [
"configure",
"configure get",
"configure set"
],
"related topic": [
"s3-config"
],
"title": [
"AWS CLI Configuration Variables"
]
},
"return-codes": {
"category": [
"General"
],
"description": [
"Describes the various return codes of the AWS CLI"
],
"related command": [
"s3",
"s3 cp",
"s3 sync",
"s3 mv",
"s3 rm"
],
"title": [
"AWS CLI Return Codes"
]
},
"s3-config": {
"category": [
"S3"
],
"description": [
"Advanced configuration for AWS S3 Commands"
],
"related command": [
"s3 cp",
"s3 sync",
"s3 mv",
"s3 rm"
],
"title": [
"AWS CLI S3 Configuration"
]
},
"s3-faq": {
"category": [
"S3"
],
"description": [
"Frequented Asked Questions for Amazon S3 in the AWS CLI"
],
"related command": [
"s3 cp",
"s3 sync",
"s3 mv",
"s3 rm"
],
"title": [
"AWS CLI S3 FAQ"
]
}
} awscli-1.17.14/awscli/topics/config-vars.rst 0000644 0000000 0000000 00000050324 13620325556 020653 0 ustar root root 0000000 0000000 :title: AWS CLI Configuration Variables
:description: Configuration Variables for the AWS CLI
:category: General
:related command: configure, configure get, configure set
:related topic: s3-config
Configuration values for the AWS CLI can come from several sources:
* As a command line option
* As an environment variable
* As a value in the AWS CLI config file
* As a value in the AWS Shared Credential file
Some options are only available in the AWS CLI config. This topic guide covers
all the configuration variables available in the AWS CLI.
Note that if you are just looking to get the minimum required configuration to
run the AWS CLI, we recommend running ``aws configure``, which will prompt you
for the necessary configuration values.
Config File Format
==================
The AWS CLI config file, which defaults to ``~/.aws/config`` has the following
format::
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
The ``default`` section refers to the configuration values for the default
profile. You can create profiles, which represent logical groups of
configuration. Profiles that aren't the default profile are specified by
creating a section titled "profile profilename"::
[profile testing]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Nested Values
-------------
Some service specific configuration, discussed in more detail below, has a
single top level key, with nested sub values. These sub values are denoted by
indentation::
[profile testing]
aws_access_key_id = foo
aws_secret_access_key = bar
region = us-west-2
s3 =
max_concurrent_requests=10
max_queue_size=1000
General Options
===============
The AWS CLI has a few general options:
==================== =========== ===================== ===================== ============================
Variable Option Config Entry Environment Variable Description
==================== =========== ===================== ===================== ============================
profile --profile N/A AWS_PROFILE Default profile name
-------------------- ----------- --------------------- --------------------- ----------------------------
region --region region AWS_DEFAULT_REGION Default AWS Region
-------------------- ----------- --------------------- --------------------- ----------------------------
output --output output AWS_DEFAULT_OUTPUT Default output style
-------------------- ----------- --------------------- --------------------- ----------------------------
cli_timestamp_format N/A cli_timestamp_format N/A Output format of timestamps
-------------------- ----------- --------------------- --------------------- ----------------------------
cli_follow_urlparam N/A cli_follow_urlparam N/A Fetch URL url parameters
-------------------- ----------- --------------------- --------------------- ----------------------------
ca_bundle --ca-bundle ca_bundle AWS_CA_BUNDLE CA Certificate Bundle
-------------------- ----------- --------------------- --------------------- ----------------------------
parameter_validation N/A parameter_validation N/A Toggles parameter validation
-------------------- ----------- --------------------- --------------------- ----------------------------
tcp_keepalive N/A tcp_keepalive N/A Toggles TCP Keep-Alive
==================== =========== ===================== ===================== ============================
The third column, Config Entry, is the value you would specify in the AWS CLI
config file. By default, this location is ``~/.aws/config``. If you need to
change this value, you can set the ``AWS_CONFIG_FILE`` environment variable
to change this location.
The valid values of the ``output`` configuration variable are:
* json
* table
* text
``cli_timestamp_format`` controls the format of timestamps displayed by the AWS CLI.
The valid values of the ``cli_timestamp_format`` configuration variable are:
* none - Display the timestamp exactly as received from the HTTP response.
* iso8601 - Reformat timestamp using iso8601 in the UTC timezone.
``cli_follow_urlparam`` controls whether or not the CLI will attempt to follow
URL links in parameters that start with either prefix ``https://`` or
``http://``. The valid values of the ``cli_follow_urlparam`` configuration
variable are:
* true - This is the default value. With this configured the CLI will follow
any string parameters that start with ``https://`` or ``http://`` will be
fetched, and the downloaded content will be used as the parameter instead.
* false - The CLI will not treat strings prefixed with ``https://`` or
``http://`` any differently than normal string parameters.
``parameter_validation`` controls whether parameter validation should occur
when serializing requests. The default is True. You can disable parameter
validation for performance reasons. Otherwise, it's recommended to leave
parameter validation enabled.
When you specify a profile, either using ``--profile profile-name`` or by
setting a value for the ``AWS_PROFILE`` environment variable, profile
name you provide is used to find the corresponding section in the AWS CLI
config file. For example, specifying ``--profile development`` will instruct
the AWS CLI to look for a section in the AWS CLI config file of
``[profile development]``.
Precedence
----------
The above configuration values have the following precedence:
* Command line options
* Environment variables
* Configuration file
Credentials
===========
Credentials can be specified in several ways:
* Environment variables
* The AWS Shared Credential File
* The AWS CLI config file
============================= ============================= ================================= ==============================
Variable Creds/Config Entry Environment Variable Description
============================= ============================= ================================= ==============================
access_key aws_access_key_id AWS_ACCESS_KEY_ID AWS Access Key
----------------------------- ----------------------------- --------------------------------- ------------------------------
secret_key aws_secret_access_key AWS_SECRET_ACCESS_KEY AWS Secret Key
----------------------------- ----------------------------- --------------------------------- ------------------------------
token aws_session_token AWS_SESSION_TOKEN AWS Token (temp credentials)
----------------------------- ----------------------------- --------------------------------- ------------------------------
metadata_service_timeout metadata_service_timeout AWS_METADATA_SERVICE_TIMEOUT EC2 metadata creds timeout
----------------------------- ----------------------------- --------------------------------- ------------------------------
metadata_service_num_attempts metadata_service_num_attempts AWS_METADATA_SERVICE_NUM_ATTEMPTS EC2 metadata creds retry count
============================= ============================= ================================= ==============================
The second column specifies the name that you can specify in either the AWS CLI
config file or the AWS Shared credentials file (``~/.aws/credentials``).
The Shared Credentials File
---------------------------
The shared credentials file has a default location of
``~/.aws/credentials``. You can change the location of the shared
credentials file by setting the ``AWS_SHARED_CREDENTIALS_FILE``
environment variable.
This file is an INI formatted file with section names
corresponding to profiles. With each section, the three configuration
variables shown above can be specified: ``aws_access_key_id``,
``aws_secret_access_key``, ``aws_session_token``. **These are the only
supported values in the shared credential file.** Also note that the
section names are different than the AWS CLI config file (``~/.aws/config``).
In the AWS CLI config file, you create a new profile by creating a section of
``[profile profile-name]``, for example::
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
In the shared credentials file, profiles are not prefixed with ``profile``,
for example::
[development]
aws_access_key_id=foo
aws_secret_access_key=bar
Precedence
----------
Credentials from environment variables have precedence over credentials from
the shared credentials and AWS CLI config file. Credentials specified in the
shared credentials file have precedence over credentials in the AWS CLI config
file. If ``AWS_PROFILE`` environment variable is set and the
``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables are
set, then the credentials provided by ``AWS_ACCESS_KEY_ID`` and
``AWS_SECRET_ACCESS_KEY`` will override the credentials located in the
profile provided by ``AWS_PROFILE``.
Using AWS IAM Roles
-------------------
If you are on an Amazon EC2 instance that was launched with an IAM role, the
AWS CLI will automatically retrieve credentials for you. You do not need
to configure any credentials.
Additionally, you can specify a role for the AWS CLI to assume, and the AWS
CLI will automatically make the corresponding ``AssumeRole`` calls for you.
Note that configuration variables for using IAM roles can only be in the AWS
CLI config file.
You can specify the following configuration values for configuring an IAM role
in the AWS CLI config file:
* ``role_arn`` - The ARN of the role you want to assume.
* ``source_profile`` - The AWS CLI profile that contains credentials /
configuration the CLI should use for the initial ``assume-role`` call. This
profile may be another profile configured to use ``assume-role``, though
if static credentials are present in the profile they will take precedence.
This parameter cannot be provided alongside ``credential_source``.
* ``credential_source`` - The credential provider to use to get credentials for
the initial ``assume-role`` call. This parameter cannot be provided
alongside ``source_profile``. Valid values are:
* ``Environment`` to pull source credentials from environment variables.
* ``Ec2InstanceMetadata`` to use the EC2 instance role as source credentials.
* ``EcsContainer`` to use the ECS container credentials as the source
credentials.
* ``external_id`` - A unique identifier that is used by third parties to assume
a role in their customers' accounts. This maps to the ``ExternalId``
parameter in the ``AssumeRole`` operation. This is an optional parameter.
* ``mfa_serial`` - The identification number of the MFA device to use when
assuming a role. This is an optional parameter. Specify this value if the
trust policy of the role being assumed includes a condition that requires MFA
authentication. The value is either the serial number for a hardware device
(such as GAHT12345678) or an Amazon Resource Name (ARN) for a virtual device
(such as arn:aws:iam::123456789012:mfa/user).
* ``role_session_name`` - The name applied to this assume-role session. This
value affects the assumed role user ARN (such as
arn:aws:sts::123456789012:assumed-role/role_name/role_session_name). This
maps to the ``RoleSessionName`` parameter in the ``AssumeRole`` operation.
This is an optional parameter. If you do not provide this value, a
session name will be automatically generated.
* ``duration_seconds`` - The duration, in seconds, of the role session.
The value can range from 900 seconds (15 minutes) up to the maximum
session duration setting for the role. This is an optional parameter
and by default, the value is set to 3600 seconds.
If you do not have MFA authentication required, then you only need to specify a
``role_arn`` and either a ``source_profile`` or a ``credential_source``.
When you specify a profile that has IAM role configuration, the AWS CLI
will make an ``AssumeRole`` call to retrieve temporary credentials. These
credentials are then stored (in ``~/.aws/cli/cache``). Subsequent AWS CLI
commands will use the cached temporary credentials until they expire, in which
case the AWS CLI will automatically refresh credentials.
If you specify an ``mfa_serial``, then the first time an ``AssumeRole`` call is
made, you will be prompted to enter the MFA code. Subsequent commands will use
the cached temporary credentials. However, when the temporary credentials
expire, you will be re-prompted for another MFA code.
Example configuration using ``source_profile``::
# In ~/.aws/credentials:
[development]
aws_access_key_id=foo
aws_secret_access_key=bar
# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
source_profile=development
Example configuration using ``credential_source`` to use the instance role as
the source credentials for the assume role call::
# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
credential_source=Ec2InstanceMetadata
Assume Role With Web Identity
--------------------------------------
Within the ``~/.aws/config`` file, you can also configure a profile to indicate
that the AWS CLI should assume a role. When you do this, the AWS CLI will
automatically make the corresponding ``AssumeRoleWithWebIdentity`` calls to AWS
STS on your behalf.
When you specify a profile that has IAM role configuration, the AWS CLI will
make an ``AssumeRoleWithWebIdentity`` call to retrieve temporary credentials.
These credentials are then stored (in ``~/.aws/cli/cache``). Subsequent AWS
CLI commands will use the cached temporary credentials until they expire, in
which case the AWS CLI will automatically refresh credentials.
You can specify the following configuration values for configuring an
assume role with web identity profile in the shared config:
* ``role_arn`` - The ARN of the role you want to assume.
* ``web_identity_token_file`` - The path to a file which contains an OAuth 2.0
access token or OpenID Connect ID token that is provided by the identity
provider. The contents of this file will be loaded and passed as the
``WebIdentityToken`` argument to the ``AssumeRoleWithWebIdentity`` operation.
* ``role_session_name`` - The name applied to this assume-role session. This
value affects the assumed role user ARN (such as
arn:aws:sts::123456789012:assumed-role/role_name/role_session_name). This
maps to the ``RoleSessionName`` parameter in the
``AssumeRoleWithWebIdentity`` operation. This is an optional parameter. If
you do not provide this value, a session name will be automatically
generated.
Below is an example configuration for the minimal amount of configuration
needed to configure an assume role with web identity profile::
# In ~/.aws/config
[profile web-identity]
role_arn=arn:aws:iam:...
web_identity_token_file=/path/to/a/token
This provider can also be configured via the environment:
``AWS_ROLE_ARN``
The ARN of the role you want to assume.
``AWS_WEB_IDENTITY_TOKEN_FILE``
The path to the web identity token file.
``AWS_ROLE_SESSION_NAME``
The name applied to this assume-role session.
.. note::
These environment variables currently only apply to the assume role with
web identity provider and do not apply to the general assume role provider
configuration.
Sourcing Credentials From External Processes
--------------------------------------------
.. warning::
The following describes a method of sourcing credentials from an external
process. This can potentially be dangerous, so proceed with caution. Other
credential providers should be preferred if at all possible. If using
this option, you should make sure that the config file is as locked down
as possible using security best practices for your operating system.
Ensure that your custom credential tool does not write any secret
information to StdErr because the SDKs and CLI can capture and log such
information, potentially exposing it to unauthorized users.
If you have a method of sourcing credentials that isn't built in to the AWS
CLI, you can integrate it by using ``credential_process`` in the config file.
The AWS CLI will call that command exactly as given and then read json data
from stdout. The process must write credentials to stdout in the following
format::
{
"Version": 1,
"AccessKeyId": "",
"SecretAccessKey": "",
"SessionToken": "",
"Expiration": ""
}
The ``Version`` key must be set to ``1``. This value may be bumped over time
as the payload structure evolves.
The ``Expiration`` key is an ISO8601 formatted timestamp. If the ``Expiration``
key is not returned in stdout, the credentials are long term credentials that
do not refresh. Otherwise the credentials are considered refreshable
credentials and will be refreshed automatically. NOTE: Unlike with assume role
credentials, the AWS CLI will NOT cache process credentials. If caching is
needed, it must be implemented in the external process.
The process can return a non-zero RC to indicate that an error occurred while
retrieving credentials.
Some process providers may need additional information in order to retrieve the
appropriate credentials. This can be done via command line arguments. NOTE:
command line options may be visible to process running on the same machine.
Example configuration::
[profile dev]
credential_process = /opt/bin/awscreds-custom
Example configuration with parameters::
[profile dev]
credential_process = /opt/bin/awscreds-custom --username monty
Service Specific Configuration
==============================
API Versions
------------
The API version to use for a service can be set using the ``api_versions``
key. To specify an API version, set the API version to the name of the service
as a sub value for ``api_versions``.
Example configuration::
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
api_versions =
ec2 = 2015-03-01
cloudfront = 2015-09-17
By setting an API version for a service, it ensures that the interface for
that service's commands is representative of the specified API version.
In the example configuration, the ``ec2`` CLI commands will be representative
of Amazon EC2's ``2015-03-01`` API version and the ``cloudfront`` CLI commands
will be representative of Amazon CloudFront's ``2015-09-17`` API version.
AWS STS
-------
To set STS endpoint resolution logic, use the ``AWS_STS_REGIONAL_ENDPOINTS``
environment variable or ``sts_regional_endpoints`` configuration file option.
By default, this configuration option is set to ``legacy``. Valid values are:
* ``regional``
Uses the STS endpoint that corresponds to the configured region. For
example if the client is configured to use ``us-west-2``, all calls
to STS will be make to the ``sts.us-west-2.amazonaws.com`` regional
endpoint instead of the global ``sts.amazonaws.com`` endpoint.
* ``legacy``
Uses the global STS endpoint, ``sts.amazonaws.com``, for the following
configured regions:
* ``ap-northeast-1``
* ``ap-south-1``
* ``ap-southeast-1``
* ``ap-southeast-2``
* ``aws-global``
* ``ca-central-1``
* ``eu-central-1``
* ``eu-north-1``
* ``eu-west-1``
* ``eu-west-2``
* ``eu-west-3``
* ``sa-east-1``
* ``us-east-1``
* ``us-east-2``
* ``us-west-1``
* ``us-west-2``
All other regions will use their respective regional endpoint.
Amazon S3
---------
There are a number of configuration variables specific to the S3 commands. See
:doc:`s3-config` (``aws help topics s3-config``) for more details.
OS Specific Configuration
=========================
Locale
------
If you have data stored in AWS that uses a particular encoding, you should make
sure that your systems are configured to accept that encoding. For instance, if
you have unicode characters as part of a key on EC2 you will need to make sure
that your locale is set to a unicode-compatible locale. How you configure your
locale will depend on your operating system and your specific IT requirements.
One option for UNIX systems is the ``LC_ALL`` environment variable. Setting
``LC_ALL=en_US.UTF-8``, for instance, would give you a United States English
locale which is compatible with unicode.
awscli-1.17.14/awscli/topics/return-codes.rst 0000644 0000000 0000000 00000003657 13620325556 021056 0 ustar root root 0000000 0000000 :title: AWS CLI Return Codes
:description: Describes the various return codes of the AWS CLI
:category: General
:related command: s3, s3 cp, s3 sync, s3 mv, s3 rm
These are the following return codes returned at the end of execution
of a CLI command:
* ``0`` -- Command was successful. There were no errors thrown by either
the CLI or by the service the request was made to.
* ``1`` -- Limited to ``s3`` commands, at least one or more s3 transfers
failed for the command executed.
* ``2`` -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command
line failed to be parsed. Parsing failures can be caused by,
but are not limited to, missing any required subcommands or arguments
or using any unknown commands or arguments.
Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to ``s3`` commands.
It can mean at least one or more files marked
for transfer were skipped during the transfer process. However, all
other files marked for transfer were successfully transferred.
Files that are skipped during the transfer process include:
files that do not exist, files that are character special devices,
block special device, FIFO's, or sockets, and files that the user cannot
read from.
* ``130`` -- The process received a SIGINT (Ctrl-C).
* ``255`` -- Command failed. There were errors thrown by either the CLI or
by the service the request was made to.
To determine the return code of a command, run the following right after
running a CLI command. Note that this will work only on POSIX systems::
$ echo $?
Output (if successful)::
0
On Windows PowerShell, the return code can be determined by running::
> echo $lastexitcode
Output (if successful)::
0
On Windows Command Prompt, the return code can be determined by running::
> echo %errorlevel%
Output (if successful)::
0
awscli-1.17.14/awscli/topics/s3-config.rst 0000644 0000000 0000000 00000025667 13620325556 020241 0 ustar root root 0000000 0000000 :title: AWS CLI S3 Configuration
:description: Advanced configuration for AWS S3 Commands
:category: S3
:related command: s3 cp, s3 sync, s3 mv, s3 rm
The ``aws s3`` transfer commands, which include the ``cp``, ``sync``, ``mv``,
and ``rm`` commands, have additional configuration values you can use to
control S3 transfers. This topic guide discusses these parameters as well as
best practices and guidelines for setting these values.
Before discussing the specifics of these values, note that these values are
entirely optional. You should be able to use the ``aws s3`` transfer commands
without having to configure any of these values. These configuration values
are provided in the case where you need to modify one of these values, either
for performance reasons or to account for the specific environment where these
``aws s3`` commands are being run.
Configuration Values
====================
These are the configuration values you can set specifically for the ``aws s3``
command set:
* ``max_concurrent_requests`` - The maximum number of concurrent requests.
* ``max_queue_size`` - The maximum number of tasks in the task queue.
* ``multipart_threshold`` - The size threshold the CLI uses for multipart
transfers of individual files.
* ``multipart_chunksize`` - When using multipart transfers, this is the chunk
size that the CLI uses for multipart transfers of individual files.
* ``max_bandwidth`` - The maximum bandwidth that will be consumed for uploading
and downloading data to and from Amazon S3.
These are the configuration values that can be set for both ``aws s3``
and ``aws s3api``:
* ``use_accelerate_endpoint`` - Use the Amazon S3 Accelerate endpoint for
all ``s3`` and ``s3api`` commands. You **must** first enable S3 Accelerate
on your bucket before attempting to use the endpoint. This is mutually
exclusive with the ``use_dualstack_endpoint`` option.
* ``use_dualstack_endpoint`` - Use the Amazon S3 dual IPv4 / IPv6 endpoint for
all ``s3 `` and ``s3api`` commands. This is mutually exclusive with the
``use_accelerate_endpoint`` option.
* ``addressing_style`` - Specifies which addressing style to use. This controls
if the bucket name is in the hostname or part of the URL. Value values are:
``path``, ``virtual``, and ``auto``. The default value is ``auto``.
* ``payload_signing_enabled`` - Refers to whether or not to SHA256 sign sigv4
payloads. By default, this is disabled for streaming uploads (UploadPart
and PutObject) when using https.
These values must be set under the top level ``s3`` key in the AWS Config File,
which has a default location of ``~/.aws/config``. Below is an example
configuration::
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
s3 =
max_concurrent_requests = 20
max_queue_size = 10000
multipart_threshold = 64MB
multipart_chunksize = 16MB
max_bandwidth = 50MB/s
use_accelerate_endpoint = true
addressing_style = path
Note that all the S3 configuration values are indented and nested under the top
level ``s3`` key.
You can also set these values programmatically using the ``aws configure set``
command. For example, to set the above values for the default profile, you
could instead run these commands::
$ aws configure set default.s3.max_concurrent_requests 20
$ aws configure set default.s3.max_queue_size 10000
$ aws configure set default.s3.multipart_threshold 64MB
$ aws configure set default.s3.multipart_chunksize 16MB
$ aws configure set default.s3.max_bandwidth 50MB/s
$ aws configure set default.s3.use_accelerate_endpoint true
$ aws configure set default.s3.addressing_style path
max_concurrent_requests
-----------------------
**Default** - ``10``
The ``aws s3`` transfer commands are multithreaded. At any given time,
multiple requests to Amazon S3 are in flight. For example, if you are
uploading a directory via ``aws s3 cp localdir s3://bucket/ --recursive``, the
AWS CLI could be uploading the local files ``localdir/file1``,
``localdir/file2``, and ``localdir/file3`` in parallel. The
``max_concurrent_requests`` specifies the maximum number of transfer commands
that are allowed at any given time.
You may need to change this value for a few reasons:
* Decreasing this value - On some environments, the default of 10 concurrent
requests can overwhelm a system. This may cause connection timeouts or
slow the responsiveness of the system. Lowering this value will make the
S3 transfer commands less resource intensive. The tradeoff is that
S3 transfers may take longer to complete. Lowering this value may be
necessary if using a tool such as ``trickle`` to limit bandwidth.
* Increasing this value - In some scenarios, you may want the S3 transfers
to complete as quickly as possible, using as much network bandwidth
as necessary. In this scenario, the default number of concurrent requests
may not be sufficient to utilize all the network bandwidth available.
Increasing this value may improve the time it takes to complete an
S3 transfer.
max_queue_size
--------------
**Default** - ``1000``
The AWS CLI internally uses a producer consumer model, where we queue up S3
tasks that are then executed by consumers, which in this case utilize a bound
thread pool, controlled by ``max_concurrent_requests``. A task generally maps
to a single S3 operation. For example, as task could be a ``PutObjectTask``,
or a ``GetObjectTask``, or an ``UploadPartTask``. The enqueuing rate can be
much faster than the rate at which consumers are executing tasks. To avoid
unbounded growth, the task queue size is capped to a specific size. This
configuration value changes the value of that maximum number.
You generally will not need to change this value. This value also corresponds
to the number of tasks we are aware of that need to be executed. This means
that by default we can only see 1000 tasks ahead. Until the S3 command knows
the total number of tasks executed, the progress line will show a total of
``...``. Increasing this value means that we will be able to more quickly know
the total number of tasks needed, assuming that the enqueuing rate is quicker
than the rate of task consumption. The tradeoff is that a larger max queue
size will require more memory.
multipart_threshold
-------------------
**Default** - ``8MB``
When uploading, downloading, or copying a file, the S3 commands
will switch to multipart operations if the file reaches a given
size threshold. The ``multipart_threshold`` controls this value.
You can specify this value in one of two ways:
* The file size in bytes. For example, ``1048576``.
* The file size with a size suffix. You can use ``KB``, ``MB``, ``GB``,
``TB``. For example: ``10MB``, ``1GB``. Note that S3 imposes
constraints on valid values that can be used for multipart
operations.
multipart_chunksize
-------------------
**Default** - ``8MB``
**Minimum For Uploads** - ``5MB``
Once the S3 commands have decided to use multipart operations, the
file is divided into chunks. This configuration option specifies what
the chunk size (also referred to as the part size) should be. This
value can specified using the same semantics as ``multipart_threshold``,
that is either as the number of bytes as an integer, or using a size
suffix.
max_bandwidth
-------------
**Default** - None
This controls the maximum bandwidth that the S3 commands will
utilize when streaming content data to and from S3. Thus, this value only
applies for uploads and downloads. It does not apply to copies nor deletes
because those data transfers take place server side. The value is
in terms of **bytes** per second. The value can be specified as:
* An integer. For example, ``1048576`` would set the maximum bandwidth usage
to 1 MB per second.
* A rate suffix. You can specify rate suffixes using: ``KB/s``, ``MB/s``,
``GB/s``, etc. For example: ``300KB/s``, ``10MB/s``.
In general, it is recommended to first use ``max_concurrent_requests`` to lower
transfers to the desired bandwidth consumption. The ``max_bandwidth`` setting
should then be used to further limit bandwidth consumption if setting
``max_concurrent_requests`` is unable to lower bandwidth consumption to the
desired rate. This is recommended because ``max_concurrent_requests`` controls
how many threads are currently running. So if a high ``max_concurrent_requests``
value is set and a low ``max_bandwidth`` value is set, it may result in
threads having to wait unnecessarily which can lead to excess resource
consumption and connection timeouts.
use_accelerate_endpoint
-----------------------
**Default** - ``false``
If set to ``true``, will direct all Amazon S3 requests to the S3 Accelerate
endpoint: ``s3-accelerate.amazonaws.com``. To use this endpoint, your bucket
must be enabled to use S3 Accelerate. All request will be sent using the
virtual style of bucket addressing: ``my-bucket.s3-accelerate.amazonaws.com``.
Any ``ListBuckets``, ``CreateBucket``, and ``DeleteBucket`` requests will not
be sent to the Accelerate endpoint as the endpoint does not support those
operations. This behavior can also be set if ``--endpoint-url`` parameter
is set to ``https://s3-accelerate.amazonaws.com`` or
``http://s3-accelerate.amazonaws.com`` for any ``s3`` or ``s3api`` command. This
option is mutually exclusive with the ``use_dualstack_endpoint`` option.
use_dualstack_endpoint
----------------------
**Default** - ``false``
If set to ``true``, will direct all Amazon S3 requests to the dual IPv4 / IPv6
endpoint for the configured region. This option is mutually exclusive with
the ``use_accelerate_endpoint`` option.
addressing_style
----------------
**Default** - ``auto``
There's two styles of constructing an S3 endpoint. The first is with
the bucket included as part of the hostname. This corresponds to the
addressing style of ``virtual``. The second is with the bucket included
as part of the path of the URI, corresponding to the addressing style
of ``path``. The default value in the CLI is to use ``auto``, which
will attempt to use ``virtual`` where possible, but will fall back to
``path`` style if necessary. For example, if your bucket name is not
DNS compatible, the bucket name cannot be part of the hostname and
must be in the path. With ``auto``, the CLI will detect this condition
and automatically switch to ``path`` style for you. If you set the
addressing style to ``path``, you must ensure that the AWS region you
configured in the AWS CLI matches the same region of your bucket.
payload_signing_enabled
-----------------------
If set to ``true``, s3 payloads will receive additional content validation in
the form of a SHA256 checksum which will be calculated for you and included in
the request signature. If set to ``false``, the checksum will not be calculated.
Disabling this can be useful to save the performance overhead that the
checksum calculation would otherwise cause.
By default, this is disabled for streaming uploads (UploadPart and PutObject),
but only if a ContentMD5 is present (it is generated by default) and the
endpoint uses HTTPS.
awscli-1.17.14/awscli/examples/ 0000755 0000000 0000000 00000000000 13620325757 016217 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/iot-jobs-data/ 0000755 0000000 0000000 00000000000 13620325757 020654 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/iot-jobs-data/get-pending-job-executions.rst 0000644 0000000 0000000 00000001437 13620325556 026545 0 ustar root root 0000000 0000000 **To get a list of all jobs that are not in a terminal status for a thing**
The following ``get-pending-job-executions`` example displays a list of all jobs that aren't in a terminal state for the specified thing. ::
aws iot-jobs-data get-pending-job-executions \
--thing-name MotionSensor1
Output::
{
"inProgressJobs": [
],
"queuedJobs": [
{
"executionNumber": 2939653338,
"jobId": "SampleJob",
"lastUpdatedAt": 1567701875.743,
"queuedAt": 1567701902.444,
"versionNumber": 3
}
]
}
For more information, see `Devices and Jobs `__ in the *AWS IoT Developer Guide*.
awscli-1.17.14/awscli/examples/iot-jobs-data/start-next-pending-job-execution.rst 0000644 0000000 0000000 00000001657 13620325556 027720 0 ustar root root 0000000 0000000 **To get and start the next pending job execution for a thing**
The following ``start-next-pending-job-execution`` example retrieves and starts the next job execution whose status is `IN_PROGRESS` or `QUEUED` for the specified thing. ::
aws iot-jobs-data start-next-pending-job-execution \
--thing-name MotionSensor1
This command produces no output.
Output::
{
"execution": {
"approximateSecondsBeforeTimedOut": 88,
"executionNumber": 2939653338,
"jobId": "SampleJob",
"lastUpdatedAt": 1567714853.743,
"queuedAt": 1567701902.444,
"startedAt": 1567714871.690,
"status": "IN_PROGRESS",
"thingName": "MotionSensor1 ",
"versionNumber": 3
}
}
For more information, see `Devices and Jobs `__ in the *AWS IoT Developer Guide*.
awscli-1.17.14/awscli/examples/iot-jobs-data/describe-job-execution.rst 0000644 0000000 0000000 00000001462 13620325556 025737 0 ustar root root 0000000 0000000 **To get the details of a job execution**
The following ``describe-job-execution`` example retrieves the details of the latest execution of the specified job and thing. ::
aws iot-jobs-data describe-job-execution \
--job-id SampleJob \
--thing-name MotionSensor1
Output::
{
"execution": {
"approximateSecondsBeforeTimedOut": 88,
"executionNumber": 2939653338,
"jobId": "SampleJob",
"lastUpdatedAt": 1567701875.743,
"queuedAt": 1567701902.444,
"status": "QUEUED",
"thingName": "MotionSensor1 ",
"versionNumber": 3
}
}
For more information, see `Devices and Jobs `__ in the *AWS IoT Developer Guide*.
awscli-1.17.14/awscli/examples/iot-jobs-data/update-job-execution.rst 0000644 0000000 0000000 00000001127 13620325556 025437 0 ustar root root 0000000 0000000 **To update the status of a job execution**
The following ``update-job-execution`` example updates the status of the specified job and thing. ::
aws iot-jobs-data update-job-execution \
--job-id SampleJob \
--thing-name MotionSensor1 \
--status REMOVED
This command produces no output.
Output::
{
"executionState": {
"status": "REMOVED",
"versionNumber": 3
},
}
For more information, see `Devices and Jobs `__ in the *AWS IoT Developer Guide*.
awscli-1.17.14/awscli/examples/resource-groups/ 0000755 0000000 0000000 00000000000 13620325757 021363 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/resource-groups/update-group.rst 0000644 0000000 0000000 00000001033 13620325556 024523 0 ustar root root 0000000 0000000 **To update the description for a resource group**
The following ``update-group`` example updates the description for a group named ``WebServer3`` in the current region. ::
aws resource-groups update-group \
--group-name WebServer3 \
--description "Group of web server resources."
Output::
{
"Group": {
"GroupArn": "arn:aws:resource-groups:us-east-2:123456789012:group/WebServer3",
"Name": "WebServer3"
"Description": "Group of web server resources."
}
}
awscli-1.17.14/awscli/examples/resource-groups/update-group-query.rst 0000644 0000000 0000000 00000004150 13620325556 025671 0 ustar root root 0000000 0000000 **Example 1: To update the query for a tag-based resource group**
The following ``update-group-query`` example updates the query for a tag-based resource group of Amazon EC2 instances. To update the group query, you can change the values specified for ``ResourceTypeFilters`` or ``TagFilters``. ::
aws resource-groups update-group-query \
--group-name WebServer3 \
--resource-query '{"Type":"TAG_FILTERS_1_0", "Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Name\", \"Values\":[\"WebServers\"]}]}"}'
Output::
{
"Group": {
"GroupArn": "arn:aws:resource-groups:us-east-2:123456789012:group/WebServer3",
"Name": "WebServer3"
},
"ResourceQuery": {
"Type": "TAG_FILTERS_1_0",
"Query": "{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Name\", \"Values\":[\"WebServers\"]}]}"
}
}
**Example 2: To update the query for a CloudFormation stack-based resource group**
The following ``update-group-query`` example updates the query for an AWS CloudFormation stack-based resource group named ``sampleCFNstackgroup``. To update the group query, you can change the values specified for ``ResourceTypeFilters`` or ``StackIdentifier``. ::
aws resource-groups update-group-query \
--group-name sampleCFNstackgroup \
--resource-query '{"Type": "CLOUDFORMATION_STACK_1_0", "Query": "{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"StackIdentifier\":\"arn:aws:cloudformation:us-east-2:123456789012:stack/testcloudformationstack/1415z9z0-z39z-11z8-97z5-500z212zz6fz\"}"}'
Output::
{
"Group": {
"GroupArn": "arn:aws:resource-groups:us-east-2:123456789012:group/sampleCFNstackgroup",
"Name": "sampleCFNstackgroup"
},
"ResourceQuery": {
"Type": "CLOUDFORMATION_STACK_1_0",
"Query":'{\"CloudFormationStackArn\":\"arn:aws:cloudformation:us-east-2:123456789012:stack/testcloudformationstack/1415z9z0-z39z-11z8-97z5-500z212zz6fz\",\"ResourceTypeFilters\":[\"AWS::AllSupported\"]}"}'
}
}
awscli-1.17.14/awscli/examples/resource-groups/create-group.rst 0000644 0000000 0000000 00000004044 13620325556 024511 0 ustar root root 0000000 0000000 **Example 1: To create a tag-based resource group**
The following ``create-group`` example creates a tag-based resource group of Amazon EC2 instances in the current region that are tagged with a tag key of ``Name``, and a tag key value of ``WebServers``. The group name is ``WebServer3``. ::
aws resource-groups create-group \
--name WebServer3 \
--resource-query '{"Type":"TAG_FILTERS_1_0", "Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Name\", \"Values\":[\"WebServers\"]}]}"}'
Output::
{
"Group": {
"GroupArn": "arn:aws:resource-groups:us-east-2:000000000000:group/WebServer3",
"Name": "WebServer3"
},
"ResourceQuery": {
"Type": "TAG_FILTERS_1_0",
"Query": "{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Name\", \"Values\":[\"WebServers\"]}]}"
}
}
**Example 2: To create a CloudFormation stack-based resource group**
The following ``create-group`` example creates an AWS CloudFormation stack-based resource group named ``sampleCFNstackgroup``. The query allows all resources that are in the CloudFormation stack that are supported by AWS Resource Groups. ::
aws resource-groups create-group \
--name sampleCFNstackgroup \
--resource-query '{"Type": "CLOUDFORMATION_STACK_1_0", "Query": "{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"StackIdentifier\":\"arn:aws:cloudformation:us-east-2:123456789012:stack/testcloudformationstack/1415z9z0-z39z-11z8-97z5-500z212zz6fz\"}"}'
Output::
{
"Group": {
"GroupArn": "arn:aws:resource-groups:us-east-2:123456789012:group/sampleCFNstackgroup",
"Name": "sampleCFNstackgroup"
},
"ResourceQuery": {
"Type": "CLOUDFORMATION_STACK_1_0",
"Query":'{\"CloudFormationStackArn\":\"arn:aws:cloudformation:us-east-2:123456789012:stack/testcloudformationstack/1415z9z0-z39z-11z8-97z5-500z212zz6fz\",\"ResourceTypeFilters\":[\"AWS::AllSupported\"]}"}'
}
}
awscli-1.17.14/awscli/examples/cloudwatch/ 0000755 0000000 0000000 00000000000 13620325757 020354 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/cloudwatch/set-alarm-state.rst 0000644 0000000 0000000 00000000621 13620325554 024103 0 ustar root root 0000000 0000000 **To temporarily change the state of an alarm**
The following example uses the ``set-alarm-state`` command to temporarily change the state of an
Amazon CloudWatch alarm named "myalarm" and set it to the ALARM state for testing purposes::
aws cloudwatch set-alarm-state --alarm-name "myalarm" --state-value ALARM --state-reason "testing purposes"
This command returns to the prompt if successful.
awscli-1.17.14/awscli/examples/cloudwatch/disable-alarm-actions.rst 0000644 0000000 0000000 00000000416 13620325554 025235 0 ustar root root 0000000 0000000 **To disable actions for an alarm**
The following example uses the ``disable-alarm-actions`` command to disable all actions for the alarm named myalarm.::
aws cloudwatch disable-alarm-actions --alarm-names myalarm
This command returns to the prompt if successful.
awscli-1.17.14/awscli/examples/cloudwatch/describe-alarm-history.rst 0000644 0000000 0000000 00000003552 13620325554 025457 0 ustar root root 0000000 0000000 **To retrieve history for an alarm**
The following example uses the ``describe-alarm-history`` command to retrieve history for the Amazon
CloudWatch alarm named "myalarm"::
aws cloudwatch describe-alarm-history --alarm-name "myalarm" --history-item-type StateUpdate
Output::
{
"AlarmHistoryItems": [
{
"Timestamp": "2014-04-09T18:59:06.442Z",
"HistoryItemType": "StateUpdate",
"AlarmName": "myalarm",
"HistoryData": "{\"version\":\"1.0\",\"oldState\":{\"stateValue\":\"ALARM\",\"stateReason\":\"testing purposes\"},\"newState\":{\"stateValue\":\"OK\",\"stateReason\":\"Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.958, 40.292].\",\"stateReasonData\":{\"version\":\"1.0\",\"queryDate\":\"2014-04-09T18:59:06.419+0000\",\"startDate\":\"2014-04-09T18:44:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.958,40.292],\"threshold\":70.0}}}",
"HistorySummary": "Alarm updated from ALARM to OK"
},
{
"Timestamp": "2014-04-09T18:59:05.805Z",
"HistoryItemType": "StateUpdate",
"AlarmName": "myalarm",
"HistoryData": "{\"version\":\"1.0\",\"oldState\":{\"stateValue\":\"OK\",\"stateReason\":\"Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.839999999999996, 39.714].\",\"stateReasonData\":{\"version\":\"1.0\",\"queryDate\":\"2014-03-11T22:45:41.569+0000\",\"startDate\":\"2014-03-11T22:30:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.839999999999996,39.714],\"threshold\":70.0}},\"newState\":{\"stateValue\":\"ALARM\",\"stateReason\":\"testing purposes\"}}",
"HistorySummary": "Alarm updated from OK to ALARM"
}
]
}
awscli-1.17.14/awscli/examples/cloudwatch/enable-alarm-actions.rst 0000644 0000000 0000000 00000000416 13620325554 025060 0 ustar root root 0000000 0000000 **To enable all actions for an alarm**
The following example uses the ``enable-alarm-actions`` command to enable all actions for the alarm named myalarm.::
aws cloudwatch enable-alarm-actions --alarm-names myalarm
This command returns to the prompt if successful.
awscli-1.17.14/awscli/examples/cloudwatch/delete-alarms.rst 0000644 0000000 0000000 00000000376 13620325554 023626 0 ustar root root 0000000 0000000 **To delete an alarm**
The following example uses the ``delete-alarms`` command to delete the Amazon CloudWatch alarm
named "myalarm"::
aws cloudwatch delete-alarms --alarm-names myalarm
Output::
This command returns to the prompt if successful.
awscli-1.17.14/awscli/examples/cloudwatch/list-metrics.rst 0000644 0000000 0000000 00000005204 13620325630 023514 0 ustar root root 0000000 0000000 **To list the metrics for Amazon SNS**
The following ``list-metrics`` example displays the metrics for Amazon SNS. ::
aws cloudwatch list-metrics \
--namespace "AWS/SNS"
Output::
{
"Metrics": [
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "PublishSize"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "PublishSize"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "NumberOfNotificationsFailed"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "NumberOfNotificationsDelivered"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "NotifyMe"
}
],
"MetricName": "NumberOfMessagesPublished"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "NumberOfMessagesPublished"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "NumberOfNotificationsDelivered"
},
{
"Namespace": "AWS/SNS",
"Dimensions": [
{
"Name": "TopicName",
"Value": "CFO"
}
],
"MetricName": "NumberOfNotificationsFailed"
}
]
}
awscli-1.17.14/awscli/examples/cloudwatch/put-metric-alarm.rst 0000644 0000000 0000000 00000002604 13620325554 024266 0 ustar root root 0000000 0000000 **To send an Amazon Simple Notification Service email message when CPU utilization exceeds 70 percent**
The following example uses the ``put-metric-alarm`` command to send an Amazon Simple Notification Service email message when CPU utilization exceeds 70 percent::
aws cloudwatch put-metric-alarm --alarm-name cpu-mon --alarm-description "Alarm when CPU exceeds 70 percent" --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 300 --threshold 70 --comparison-operator GreaterThanThreshold --dimensions "Name=InstanceId,Value=i-12345678" --evaluation-periods 2 --alarm-actions arn:aws:sns:us-east-1:111122223333:MyTopic --unit Percent
This command returns to the prompt if successful. If an alarm with the same name already exists, it will be overwritten by the new alarm.
**To specify multiple dimensions**
The following example illustrates how to specify multiple dimensions. Each dimension is specified as a Name/Value pair, with a comma between the name and the value. Multiple dimensions are separated by a space::
aws cloudwatch put-metric-alarm --alarm-name "Default_Test_Alarm3" --alarm-description "The default example alarm" --namespace "CW EXAMPLE METRICS" --metric-name Default_Test --statistic Average --period 60 --evaluation-periods 3 --threshold 50 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=key1,Value=value1 Name=key2,Value=value2
awscli-1.17.14/awscli/examples/cloudwatch/get-metric-statistics.rst 0000644 0000000 0000000 00000011464 13620325554 025337 0 ustar root root 0000000 0000000 **To get the CPU utilization per EC2 instance**
The following example uses the ``get-metric-statistics`` command to get the CPU utilization for an EC2
instance with the ID i-abcdef.
.. __: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/US_GetStatistics.html
::
aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time 2014-04-08T23:18:00Z --end-time 2014-04-09T23:18:00Z --period 3600 --namespace AWS/EC2 --statistics Maximum --dimensions Name=InstanceId,Value=i-abcdef
Output::
{
"Datapoints": [
{
"Timestamp": "2014-04-09T11:18:00Z",
"Maximum": 44.79,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T20:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T19:18:00Z",
"Maximum": 50.85,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T09:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T03:18:00Z",
"Maximum": 76.84,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T21:18:00Z",
"Maximum": 48.96,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T14:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T08:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T16:18:00Z",
"Maximum": 45.55,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T06:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T13:18:00Z",
"Maximum": 45.08,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T05:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T18:18:00Z",
"Maximum": 46.88,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T17:18:00Z",
"Maximum": 52.08,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T07:18:00Z",
"Maximum": 47.92,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T02:18:00Z",
"Maximum": 51.23,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T12:18:00Z",
"Maximum": 47.67,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-08T23:18:00Z",
"Maximum": 46.88,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T10:18:00Z",
"Maximum": 51.91,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T04:18:00Z",
"Maximum": 47.13,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T15:18:00Z",
"Maximum": 48.96,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T00:18:00Z",
"Maximum": 48.16,
"Unit": "Percent"
},
{
"Timestamp": "2014-04-09T01:18:00Z",
"Maximum": 49.18,
"Unit": "Percent"
}
],
"Label": "CPUUtilization"
}
**Specifying multiple dimensions**
The following example illustrates how to specify multiple dimensions. Each dimension is specified as a Name/Value pair, with a comma between the name and the value. Multiple dimensions are separated by a space. If a single metric includes multiple dimensions, you must specify a value for every defined dimension.
For more examples using the ``get-metric-statistics`` command, see `Get Statistics for a Metric`__ in the *Amazon CloudWatch Developer Guide*.
.. __: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/US_GetStatistics.html
::
aws cloudwatch get-metric-statistics --metric-name Buffers --namespace MyNameSpace --dimensions Name=InstanceID,Value=i-abcdef Name=InstanceType,Value=m1.small --start-time 2016-10-15T04:00:00Z --end-time 2016-10-19T07:00:00Z --statistics Average --period 60
awscli-1.17.14/awscli/examples/cloudwatch/put-metric-data.rst 0000644 0000000 0000000 00000002146 13620325554 024104 0 ustar root root 0000000 0000000 **To publish a custom metric to Amazon CloudWatch**
The following example uses the ``put-metric-data`` command to publish a custom metric to Amazon CloudWatch::
aws cloudwatch put-metric-data --namespace "Usage Metrics" --metric-data file://metric.json
The values for the metric itself are stored in the JSON file, ``metric.json``.
Here are the contents of that file::
[
{
"MetricName": "New Posts",
"Timestamp": "Wednesday, June 12, 2013 8:28:20 PM",
"Value": 0.50,
"Unit": "Count"
}
]
For more information, see `Publishing Custom Metrics`_ in the *Amazon CloudWatch Developer Guide*.
.. _`Publishing Custom Metrics`: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html
**To specify multiple dimensions**
The following example illustrates how to specify multiple dimensions. Each dimension is specified as a Name=Value pair. Multiple dimensions are separated by a comma.::
aws cloudwatch put-metric-data --metric-name Buffers --namespace MyNameSpace --unit Bytes --value 231434333 --dimensions InstanceID=1-23456789,InstanceType=m1.small
awscli-1.17.14/awscli/examples/cloudwatch/describe-alarms-for-metric.rst 0000644 0000000 0000000 00000006752 13620325554 026215 0 ustar root root 0000000 0000000 **To display information about alarms associated with a metric**
The following example uses the ``describe-alarms-for-metric`` command to display information about
any alarms associated with the Amazon EC2 CPUUtilization metric and the instance with the ID i-0c986c72.::
aws cloudwatch describe-alarms-for-metric --metric-name CPUUtilization --namespace AWS/EC2 --dimensions Name=InstanceId,Value=i-0c986c72
Output::
{
"MetricAlarms": [
{
"EvaluationPeriods": 10,
"AlarmArn": "arn:aws:cloudwatch:us-east-1:111122223333:alarm:myHighCpuAlarm2",
"StateUpdatedTimestamp": "2013-10-30T03:03:51.479Z",
"AlarmConfigurationUpdatedTimestamp": "2013-10-30T03:03:50.865Z",
"ComparisonOperator": "GreaterThanOrEqualToThreshold",
"AlarmActions": [
"arn:aws:sns:us-east-1:111122223333:NotifyMe"
],
"Namespace": "AWS/EC2",
"AlarmDescription": "CPU usage exceeds 70 percent",
"StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2013-10-30T03:03:51.479+0000\",\"startDate\":\"2013-10-30T02:08:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[40.698,39.612,42.432,39.796,38.816,42.28,42.854,40.088,40.760000000000005,41.316],\"threshold\":70.0}",
"Period": 300,
"StateValue": "OK",
"Threshold": 70.0,
"AlarmName": "myHighCpuAlarm2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-0c986c72"
}
],
"Statistic": "Average",
"StateReason": "Threshold Crossed: 10 datapoints were not greater than or equal to the threshold (70.0). The most recent datapoints: [40.760000000000005, 41.316].",
"InsufficientDataActions": [],
"OKActions": [],
"ActionsEnabled": true,
"MetricName": "CPUUtilization"
},
{
"EvaluationPeriods": 2,
"AlarmArn": "arn:aws:cloudwatch:us-east-1:111122223333:alarm:myHighCpuAlarm",
"StateUpdatedTimestamp": "2014-04-09T18:59:06.442Z",
"AlarmConfigurationUpdatedTimestamp": "2014-04-09T22:26:05.958Z",
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
"arn:aws:sns:us-east-1:111122223333:HighCPUAlarm"
],
"Namespace": "AWS/EC2",
"AlarmDescription": "CPU usage exceeds 70 percent",
"StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2014-04-09T18:59:06.419+0000\",\"startDate\":\"2014-04-09T18:44:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.958,40.292],\"threshold\":70.0}",
"Period": 300,
"StateValue": "OK",
"Threshold": 70.0,
"AlarmName": "myHighCpuAlarm",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-0c986c72"
}
],
"Statistic": "Average",
"StateReason": "Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.958, 40.292].",
"InsufficientDataActions": [],
"OKActions": [],
"ActionsEnabled": false,
"MetricName": "CPUUtilization"
}
]
}
awscli-1.17.14/awscli/examples/cloudwatch/describe-alarms.rst 0000644 0000000 0000000 00000003327 13620325554 024143 0 ustar root root 0000000 0000000 **To list information about an alarm**
The following example uses the ``describe-alarms`` command to provide information about the alarm named "myalarm"::
aws cloudwatch describe-alarms --alarm-names "myalarm"
Output::
{
"MetricAlarms": [
{
"EvaluationPeriods": 2,
"AlarmArn": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:myalarm",
"StateUpdatedTimestamp": "2014-04-09T18:59:06.442Z",
"AlarmConfigurationUpdatedTimestamp": "2012-12-27T00:49:54.032Z",
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
"arn:aws:sns:us-east-1:123456789012:myHighCpuAlarm"
],
"Namespace": "AWS/EC2",
"AlarmDescription": "CPU usage exceeds 70 percent",
"StateReasonData": "{\"version\":\"1.0\",\"queryDate\":\"2014-04-09T18:59:06.419+0000\",\"startDate\":\"2014-04-09T18:44:00.000+0000\",\"statistic\":\"Average\",\"period\":300,\"recentDatapoints\":[38.958,40.292],\"threshold\":70.0}",
"Period": 300,
"StateValue": "OK",
"Threshold": 70.0,
"AlarmName": "myalarm",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-0c986c72"
}
],
"Statistic": "Average",
"StateReason": "Threshold Crossed: 2 datapoints were not greater than the threshold (70.0). The most recent datapoints: [38.958, 40.292].",
"InsufficientDataActions": [],
"OKActions": [],
"ActionsEnabled": true,
"MetricName": "CPUUtilization"
}
]
}
awscli-1.17.14/awscli/examples/apigatewayv2/ 0000755 0000000 0000000 00000000000 13620325757 020622 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/apigatewayv2/create-integration.rst 0000644 0000000 0000000 00000002054 13620325554 025134 0 ustar root root 0000000 0000000 **To create a WebSocket API Integration Request**
The following ``create-integration`` requests an integration for a WebSocket API. ::
aws apigatewayv2 create-integration \
--api-id aabbccddee \
--passthrough-behavior WHEN_NO_MATCH \
--timeout-in-millis 29000 \
--connection-type INTERNET
--request-templates "{\"application/json\": \"{\"statusCode\":200}\"}"
--integration-type MOCK
Output::
{
"PassthroughBehavior": "WHEN_NO_MATCH",
"TimeoutInMillis": 29000,
"ConnectionType": "INTERNET",
"IntegrationResponseSelectionExpression": "${response.statuscode}",
"RequestTemplates": {
"application/json": "{"statusCode":200}"
},
"IntegrationId": "0abcdef",
"IntegrationType": "MOCK"
}
For more information, see `Set up a WebSocket API Integration Request in API Gateway `_ in the *Amazon API Gateway Developer Guide*.
awscli-1.17.14/awscli/examples/apigatewayv2/create-route.rst 0000644 0000000 0000000 00000001126 13620325554 023746 0 ustar root root 0000000 0000000 **To create a route for a WebSocket API**
The following ``create-route`` example creates a route for a WebSocket API. ::
aws apigatewayv2 create-route \
--api-id aabbccddee \
--route-key $default
Output::
{
"ApiKeyRequired": false,
"AuthorizationType": "NONE",
"RouteKey": "$default",
"RouteId": "1122334"
}
For more information, see `Set up Routes for a WebSocket API in API Gateway `_ in the *Amazon API Gateway Developer Guide*
awscli-1.17.14/awscli/examples/apigatewayv2/create-api.rst 0000644 0000000 0000000 00000001450 13620325554 023361 0 ustar root root 0000000 0000000 **To create a WebSocket API**
The following ``create-api`` example creates a WebSocket API with the specified name. ::
aws apigatewayv2 create-api \
--name "myWebSocketApi" \
--protocol-type WEBSOCKET \
--route-selection-expression '$request.body.action'
Output::
{
"ApiKeySelectionExpression": "$request.header.x-api-key",
"Name": "myWebSocketApi",
"CreatedDate": "2018-11-15T06:23:51Z",
"ProtocolType": "WEBSOCKET",
"RouteSelectionExpression": "'$request.body.action'",
"ApiId": "aabbccddee"
}
For more information, see `Create a WebSocket API in API Gateway `_ in the *Amazon API Gateway Developer Guide*.
awscli-1.17.14/awscli/examples/es/ 0000755 0000000 0000000 00000000000 13620325757 016626 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/es/create-elasticsearch-domain.rst 0000644 0000000 0000000 00000004701 13620325556 024677 0 ustar root root 0000000 0000000 The following ``create-elasticsearch-domain`` command creates a new Amazon Elasticsearch Service domain within a VPC and restricts access to a single user. Amazon ES infers the VPC ID from the specified subnet and security group IDs::
aws es create-elasticsearch-domain --domain-name vpc-cli-example --elasticsearch-version 6.2 --elasticsearch-cluster-config InstanceType=m4.large.elasticsearch,InstanceCount=1 --ebs-options EBSEnabled=true,VolumeType=standard,VolumeSize=10 --access-policies '{"Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::123456789012:root" }, "Action":"es:*", "Resource": "arn:aws:es:us-west-1:123456789012:domain/vpc-cli-example/*" } ] }' --vpc-options SubnetIds=subnet-1a2a3a4a,SecurityGroupIds=sg-2a3a4a5a
Output::
{
"DomainStatus": {
"ElasticsearchClusterConfig": {
"DedicatedMasterEnabled": false,
"InstanceCount": 1,
"ZoneAwarenessEnabled": false,
"InstanceType": "m4.large.elasticsearch"
},
"DomainId": "123456789012/vpc-cli-example",
"CognitoOptions": {
"Enabled": false
},
"VPCOptions": {
"SubnetIds": [
"subnet-1a2a3a4a"
],
"VPCId": "vpc-3a4a5a6a",
"SecurityGroupIds": [
"sg-2a3a4a5a"
],
"AvailabilityZones": [
"us-west-1c"
]
},
"Created": true,
"Deleted": false,
"EBSOptions": {
"VolumeSize": 10,
"VolumeType": "standard",
"EBSEnabled": true
},
"Processing": true,
"DomainName": "vpc-cli-example",
"SnapshotOptions": {
"AutomatedSnapshotStartHour": 0
},
"ElasticsearchVersion": "6.2",
"AccessPolicies": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::123456789012:root\"},\"Action\":\"es:*\",\"Resource\":\"arn:aws:es:us-west-1:123456789012:domain/vpc-cli-example/*\"}]}",
"AdvancedOptions": {
"rest.action.multi.allow_explicit_index": "true"
},
"EncryptionAtRestOptions": {
"Enabled": false
},
"ARN": "arn:aws:es:us-west-1:123456789012:domain/vpc-cli-example"
}
}
awscli-1.17.14/awscli/examples/kinesis/ 0000755 0000000 0000000 00000000000 13620325757 017664 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/kinesis/deregister-stream-consumer.rst 0000644 0000000 0000000 00000001163 13620325630 025664 0 ustar root root 0000000 0000000 **To deregister a data stream consumer**
The following ``deregister-stream-consumer`` example deregisters the specified consumer from the specified data stream. ::
aws kinesis deregister-stream-consumer \
--stream-arn arn:aws:kinesis:us-west-2:123456789012:stream/samplestream \
--consumer-name KinesisConsumerApplication
This command produces no output.
For more information, see `Developing Consumers with Enhanced Fan-Out Using the Kinesis Data Streams API `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/describe-stream-summary.rst 0000644 0000000 0000000 00000001765 13620325630 025161 0 ustar root root 0000000 0000000 **To describe a data stream summary**
The following ``describe-stream-summary`` example provides a summarized description (without the shard list) of the specified data stream. ::
aws kinesis describe-stream-summary \
--stream-name samplestream
Output::
{
"StreamDescriptionSummary": {
"StreamName": "samplestream",
"StreamARN": "arn:aws:kinesis:us-west-2:123456789012:stream/samplestream",
"StreamStatus": "ACTIVE",
"RetentionPeriodHours": 48,
"StreamCreationTimestamp": 1572297168.0,
"EnhancedMonitoring": [
{
"ShardLevelMetrics": []
}
],
"EncryptionType": "NONE",
"OpenShardCount": 3,
"ConsumerCount": 0
}
}
For more information, see `Creating and Managing Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/start-stream-encryption.rst 0000644 0000000 0000000 00000001165 13620325630 025225 0 ustar root root 0000000 0000000 **To enable data stream encryption**
The following ``start-stream-encryption`` example enables server-side encryption for the specified stream, using the specified AWS KMS key. ::
aws kinesis start-stream-encryption \
--encryption-type KMS \
--key-id arn:aws:kms:us-west-2:012345678912:key/a3c4a7cd-728b-45dd-b334-4d3eb496e452 \
--stream-name samplestream
This command produces no output.
For more information, see `Data Protection in Amazon Kinesis Data Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/decrease-stream-retention-period.rst 0000644 0000000 0000000 00000001164 13620325630 026737 0 ustar root root 0000000 0000000 **To decrease data stream retention period**
The following ``decrease-stream-retention-period`` example decreases the retention period (the length of time data records are accessible after they are added to the stream) of a stream named samplestream to 48 hours. ::
aws kinesis decrease-stream-retention-period \
--stream-name samplestream \
--retention-period-hours 48
This command produces no output.
For more information, see `Changing the Data Retention Period `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/list-streams.rst 0000644 0000000 0000000 00000000724 13620325630 023036 0 ustar root root 0000000 0000000 **To list data streams**
The following ``list-streams`` example lists all active data streams in the current account and region. ::
aws kinesis list-streams
Output::
{
"StreamNames": [
"samplestream",
"samplestream1"
]
}
For more information, see `Listing Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/increase-stream-retention-period.rst 0000644 0000000 0000000 00000001156 13620325630 026756 0 ustar root root 0000000 0000000 **To increase data stream retention period**
The following ``increase-stream-retention-period`` example increases the retention period (the length of time data records are accessible after they are added to the stream) of the specified stream to 168 hours. ::
aws kinesis increase-stream-retention-period \
--stream-name samplestream \
--retention-period-hours 168
This command produces no output.
For more information, see `Changing the Data Retention Period `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/describe-stream-consumer.rst 0000644 0000000 0000000 00000001762 13620325630 025314 0 ustar root root 0000000 0000000 **To describe a data stream consumer**
The following ``describe-stream-consumer`` example returns the description of the specified consumer, registered with the specified data stream. ::
aws kinesis describe-stream-consumer \
--stream-arn arn:aws:kinesis:us-west-2:012345678912:stream/samplestream \
--consumer-name KinesisConsumerApplication
Output::
{
"ConsumerDescription": {
"ConsumerName": "KinesisConsumerApplication",
"ConsumerARN": "arn:aws:kinesis:us-west-2:123456789012:stream/samplestream/consumer/KinesisConsumerApplication:1572383852",
"ConsumerStatus": "ACTIVE",
"ConsumerCreationTimestamp": 1572383852.0,
"StreamARN": "arn:aws:kinesis:us-west-2:123456789012:stream/samplestream"
}
}
For more information, see `Reading Data from Amazon Kinesis Data Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/delete-stream.rst 0000644 0000000 0000000 00000000623 13620325630 023140 0 ustar root root 0000000 0000000 **To delete a data stream**
The following ``delete-stream`` example deletes the specified data stream. ::
aws kinesis delete-stream \
--stream-name samplestream
This command produces no output.
For more information, see `Deleting a Stream `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/remove-tags-from-stream.rst 0000644 0000000 0000000 00000000732 13620325630 025071 0 ustar root root 0000000 0000000 **To remove tags from a data stream**
The following ``remove-tags-from-stream`` example removes the tag with the specified key from the specified data stream. ::
aws kinesis remove-tags-from-stream \
--stream-name samplestream \
--tag-keys samplekey
This command produces no output.
For more information, see `Tagging Your Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/disable-enhanced-monitoring.rst 0000644 0000000 0000000 00000001611 13620325630 025734 0 ustar root root 0000000 0000000 **To disable enhanced monitoring for shard-level metrics**
The following ``disable-enhanced-monitoring`` example disables enhanced Kinesis data stream monitoring for shard-level metrics. ::
aws kinesis disable-enhanced-monitoring \
--stream-name samplestream --shard-level-metrics ALL
Output::
{
"StreamName": "samplestream",
"CurrentShardLevelMetrics": [
"IncomingBytes",
"OutgoingRecords",
"IteratorAgeMilliseconds",
"IncomingRecords",
"ReadProvisionedThroughputExceeded",
"WriteProvisionedThroughputExceeded",
"OutgoingBytes"
],
"DesiredShardLevelMetrics": []
}
For more information, see `Monitoring Streams in Amazon Kinesis Data Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/get-shard-iterator.rst 0000644 0000000 0000000 00000001655 13620325630 024120 0 ustar root root 0000000 0000000 **To obtain a shard iterator**
The following ``get-shard-iterator`` example uses the ``AT_SEQUENCE_NUMBER`` shard iterator type and generates a shard iterator to start reading data records exactly from the position denoted by the specified sequence number. ::
aws kinesis get-shard-iterator \
--stream-name samplestream \
--shard-id shardId-000000000001 \
--shard-iterator-type LATEST
Output::
{
"ShardIterator": "AAAAAAAAAAFEvJjIYI+3jw/4aqgH9FifJ+n48XWTh/IFIsbILP6o5eDueD39NXNBfpZ10WL5K6ADXk8w+5H+Qhd9cFA9k268CPXCz/kebq1TGYI7Vy+lUkA9BuN3xvATxMBGxRY3zYK05gqgvaIRn94O8SqeEqwhigwZxNWxID3Ej7YYYcxQi8Q/fIrCjGAy/n2r5Z9G864YpWDfN9upNNQAR/iiOWKs"
}
For more information, see `Developing Consumers Using the Kinesis Data Streams API with the AWS SDK for Java `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/update-shard-count.rst 0000644 0000000 0000000 00000001261 13620325630 024113 0 ustar root root 0000000 0000000 **To update the shard count in a data stream**
The following ``update-shard-count`` example updates the shard count of the specified data stream to 6. This example uses uniform scaling, which creates shards of equal size. ::
aws kinesis update-shard-count \
--stream-name samplestream \
--scaling-type UNIFORM_SCALING \
--target-shard-count 6
Output::
{
"StreamName": "samplestream",
"CurrentShardCount": 3,
"TargetShardCount": 6
}
For more information, see `Resharding a Stream `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/create-stream.rst 0000644 0000000 0000000 00000000701 13620325630 023136 0 ustar root root 0000000 0000000 **To create a data stream**
The following ``create-stream`` example creates a data stream named samplestream with 3 shards. ::
aws kinesis create-stream \
--stream-name samplestream \
--shard-count 3
This command produces no output.
For more information, see `Creating a Stream `__ in the *Amazon Kinesis Data Streams Developer Guide*. awscli-1.17.14/awscli/examples/kinesis/describe-limits.rst 0000644 0000000 0000000 00000000675 13620325630 023473 0 ustar root root 0000000 0000000 **To describe shard limits**
The following ``describe-limits`` example displays the shard limits and usage for the current AWS account. ::
aws kinesis describe-limits
Output::
{
"ShardLimit": 500,
"OpenShardCount": 29
}
For more information, see `Resharding a Stream `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/split-shard.rst 0000644 0000000 0000000 00000001020 13620325630 022627 0 ustar root root 0000000 0000000 **To split shards**
The following ``split-shard`` example splits the specified shard into two new shards using a new starting hash key of 10. ::
aws kinesis split-shard \
--stream-name samplestream \
--shard-to-split shardId-000000000000 \
--new-starting-hash-key 10
This command produces no output.
For more information, see `Splitting a Shard `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/add-tags-to-stream.rst 0000644 0000000 0000000 00000000740 13620325630 024002 0 ustar root root 0000000 0000000 **To add tags to a data stream**
The following ``add-tags-to-stream`` example assigns a tag with the key ``samplekey`` and value ``example`` to the specified stream. ::
aws kinesis add-tags-to-stream \
--stream-name samplestream \
--tags samplekey=example
This command produces no output.
For more information, see `Tagging Your Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/get-records.rst 0000644 0000000 0000000 00000001417 13620325630 022625 0 ustar root root 0000000 0000000 **To obtain records from a shard**
The following ``get-records`` example gets data records from a Kinesis data stream's shard using the specified shard iterator. ::
aws kinesis get-records \
--shard-iterator AAAAAAAAAAF7/0mWD7IuHj1yGv/TKuNgx2ukD5xipCY4cy4gU96orWwZwcSXh3K9tAmGYeOZyLZrvzzeOFVf9iN99hUPw/w/b0YWYeehfNvnf1DYt5XpDJghLKr3DzgznkTmMymDP3R+3wRKeuEw6/kdxY2yKJH0veaiekaVc4N2VwK/GvaGP2Hh9Fg7N++q0Adg6fIDQPt4p8RpavDbk+A4sL9SWGE1
Output::
{
"Records": [],
"MillisBehindLatest": 80742000
}
For more information, see `Developing Consumers Using the Kinesis Data Streams API with the AWS SDK for Java `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/put-record.rst 0000644 0000000 0000000 00000001372 13620325630 022473 0 ustar root root 0000000 0000000 **To write a record into a data stream**
The following ``put-record`` example writes a single data record into the specified data stream using the specified partition key. ::
aws kinesis put-record \
--stream-name samplestream \
--data sampledatarecord \
--partition-key samplepartitionkey
Output::
{
"ShardId": "shardId-000000000009",
"SequenceNumber": "49600902273357540915989931256901506243878407835297513618",
"EncryptionType": "KMS"
}
For more information, see `Developing Producers Using the Amazon Kinesis Data Streams API with the AWS SDK for Java `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/stop-stream-encryption.rst 0000644 0000000 0000000 00000001166 13620325630 025056 0 ustar root root 0000000 0000000 **To disable data stream encryption**
The following ``stop-stream-encryption`` example disables server-side encryption for the specified stream, using the specified AWS KMS key. ::
aws kinesis start-stream-encryption \
--encryption-type KMS \
--key-id arn:aws:kms:us-west-2:012345678912:key/a3c4a7cd-728b-45dd-b334-4d3eb496e452 \
--stream-name samplestream
This command produces no output.
For more information, see `Data Protection in Amazon Kinesis Data Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/list-tags-for-stream.rst 0000644 0000000 0000000 00000001063 13620325630 024370 0 ustar root root 0000000 0000000 **To list tags for a data stream**
The following ``list-tags-for-stream`` example lists the tags attached to the specified data stream. ::
aws kinesis list-tags-for-stream \
--stream-name samplestream
Output::
{
"Tags": [
{
"Key": "samplekey",
"Value": "example"
}
],
"HasMoreTags": false
}
For more information, see `Tagging Your Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/put-records.rst 0000644 0000000 0000000 00000002120 13620325630 022646 0 ustar root root 0000000 0000000 **To write multiple records into a data stream**
The following ``put-records`` example writes a data record using the specified partition key and another data record using a different partition key in a single call. ::
aws kinesis put-records \
--stream-name samplestream \
--records Data=blob1,PartitionKey=partitionkey1 Data=blob2,PartitionKey=partitionkey2
Output::
{
"FailedRecordCount": 0,
"Records": [
{
"SequenceNumber": "49600883331171471519674795588238531498465399900093808706",
"ShardId": "shardId-000000000004"
},
{
"SequenceNumber": "49600902273357540915989931256902715169698037101720764562",
"ShardId": "shardId-000000000009"
}
],
"EncryptionType": "KMS"
}
For more information, see `Developing Producers Using the Amazon Kinesis Data Streams API with the AWS SDK for Java `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/list-shards.rst 0000644 0000000 0000000 00000002764 13620325630 022652 0 ustar root root 0000000 0000000 **To list shards in a data stream**
The following ``list-shards`` example lists all shards in the specified stream starting with the shard whose ID immediately follows the specified ``exclusive-start-shard-id`` of ``shardId-000000000000``. ::
aws kinesis list-shards \
--stream-name samplestream \
--exclusive-start-shard-id shardId-000000000000
Output::
{
"Shards": [
{
"ShardId": "shardId-000000000001",
"HashKeyRange": {
"StartingHashKey": "113427455640312821154458202477256070485",
"EndingHashKey": "226854911280625642308916404954512140969"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49600871682979337187563555549332609155523708941634633746"
}
},
{
"ShardId": "shardId-000000000002",
"HashKeyRange": {
"StartingHashKey": "226854911280625642308916404954512140970",
"EndingHashKey": "340282366920938463463374607431768211455"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49600871683001637932762086172474144873796357303140614178"
}
}
]
}
For more information, see `Listing Shards `__ in the *Amazon Kinesis Data Streams Developer Guide*. awscli-1.17.14/awscli/examples/kinesis/enable-enhanced-monitoring.rst 0000644 0000000 0000000 00000001617 13620325630 025565 0 ustar root root 0000000 0000000 **To enable enhanced monitoring for shard-level metrics**
The following ``enable-enhanced-monitoring`` example enables enhanced Kinesis data stream monitoring for shard-level metrics. ::
aws kinesis enable-enhanced-monitoring \
--stream-name samplestream \
--shard-level-metrics ALL
Output::
{
"StreamName": "samplestream",
"CurrentShardLevelMetrics": [],
"DesiredShardLevelMetrics": [
"IncomingBytes",
"OutgoingRecords",
"IteratorAgeMilliseconds",
"IncomingRecords",
"ReadProvisionedThroughputExceeded",
"WriteProvisionedThroughputExceeded",
"OutgoingBytes"
]
}
For more information, see `Monitoring Streams in Amazon Kinesis Data Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/merge-shards.rst 0000644 0000000 0000000 00000001155 13620325630 022767 0 ustar root root 0000000 0000000 **To merge shards**
The following ``merge-shards`` example merges two adjacent shards with IDs of shardId-000000000000 and shardId-000000000001 in the specified data stream and combines them into a single shard. ::
aws kinesis merge-shards \
--stream-name samplestream \
--shard-to-merge shardId-000000000000 \
--adjacent-shard-to-merge shardId-000000000001
This command produces no output.
For more information, see `Merging Two Shards `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/register-stream-consumer.rst 0000644 0000000 0000000 00000001675 13620325630 025363 0 ustar root root 0000000 0000000 **To register a data stream consumer**
The following ``register-stream-consumer`` example registers a consumer called ``KinesisConsumerApplication`` with the specified data stream. ::
aws kinesis register-stream-consumer \
--stream-arn arn:aws:kinesis:us-west-2:012345678912:stream/samplestream \
--consumer-name KinesisConsumerApplication
Output::
{
"Consumer": {
"ConsumerName": "KinesisConsumerApplication",
"ConsumerARN": "arn:aws:kinesis:us-west-2: 123456789012:stream/samplestream/consumer/KinesisConsumerApplication:1572383852",
"ConsumerStatus": "CREATING",
"ConsumerCreationTimestamp": 1572383852.0
}
}
For more information, see `Developing Consumers with Enhanced Fan-Out Using the Kinesis Data Streams API `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/kinesis/describe-stream.rst 0000644 0000000 0000000 00000004541 13620325630 023461 0 ustar root root 0000000 0000000 **To describe a data stream**
The following ``describe-stream`` example returns the details of the specified data stream. ::
aws kinesis describe-stream \
--stream-name samplestream
Output::
{
"StreamDescription": {
"Shards": [
{
"ShardId": "shardId-000000000000",
"HashKeyRange": {
"StartingHashKey": "0",
"EndingHashKey": "113427455640312821154458202477256070484"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49600871682957036442365024926191073437251060580128653314"
}
},
{
"ShardId": "shardId-000000000001",
"HashKeyRange": {
"StartingHashKey": "113427455640312821154458202477256070485",
"EndingHashKey": "226854911280625642308916404954512140969"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49600871682979337187563555549332609155523708941634633746"
}
},
{
"ShardId": "shardId-000000000002",
"HashKeyRange": {
"StartingHashKey": "226854911280625642308916404954512140970",
"EndingHashKey": "340282366920938463463374607431768211455"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49600871683001637932762086172474144873796357303140614178"
}
}
],
"StreamARN": "arn:aws:kinesis:us-west-2:123456789012:stream/samplestream",
"StreamName": "samplestream",
"StreamStatus": "ACTIVE",
"RetentionPeriodHours": 24,
"EnhancedMonitoring": [
{
"ShardLevelMetrics": []
}
],
"EncryptionType": "NONE",
"KeyId": null,
"StreamCreationTimestamp": 1572297168.0
}
}
For more information, see `Creatinga and Managing Streams `__ in the *Amazon Kinesis Data Streams Developer Guide*.
awscli-1.17.14/awscli/examples/importexport/ 0000755 0000000 0000000 00000000000 13620325757 020773 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/importexport/update-job.rst 0000644 0000000 0000000 00000000742 13620325556 023557 0 ustar root root 0000000 0000000 The following command updates the specified job::
aws importexport update-job --job-id EX1ID --job-type import --manifest file://manifest.txt --no-validate-only
The output for the update-jobs command looks like the following::
True **** Device will be erased before being returned. ****
With this command, you can either modify the original manifest you submitted, or you can start over and create a new manifest file. In either case, the original manifest is discarded.
awscli-1.17.14/awscli/examples/importexport/get-shipping-label.rst 0000644 0000000 0000000 00000001650 13620325556 025177 0 ustar root root 0000000 0000000 The following command creates a pre-paid shipping label for the specified job::
aws importexport get-shipping-label --job-ids EX1ID --name "Jane Roe" --company "Example Corp." --phone-number "206-555-1111" --country "USA" --state-or-province "WA" --city "Anytown" --postal-code "91011-1111" --street-1 "123 Any Street"
The output for the get-shipping-label command looks like the following::
https://s3.amazonaws.com/myBucket/shipping-label-EX1ID.pdf
The link in the output contains the pre-paid shipping label generated in a PDF. It also contains shipping instructions with a unique bar code to identify and authenticate your device. For more information about using the pre-paid shipping label and shipping your device, see `Shipping Your Storage Device`_ in the *AWS Import/Export Developer Guide*.
.. _`Shipping Your Storage Device`: http://docs.aws.amazon.com/AWSImportExport/latest/DG/CHAP_ShippingYourStorageDevice.html
awscli-1.17.14/awscli/examples/importexport/get-status.rst 0000644 0000000 0000000 00000002272 13620325556 023625 0 ustar root root 0000000 0000000 The following command returns the status the specified job::
aws importexport get-status --job-id EX1ID
The output for the get-status command looks like the following::
2015-05-27T18:58:21Z manifestVersion:2.0
generator:Text editor
bucket:myBucket
deviceId:49382
eraseDevice:yes
notificationEmail:john.doe@example.com;jane.roe@example.com
trueCryptPassword:password123
acl:private
serviceLevel:standard
returnAddress:
name: Jane Roe
company: Example Corp.
street1: 123 Any Street
street2:
street3:
city: Anytown
stateOrProvince: WA
postalCode: 91011-1111
country:USA
phoneNumber:206-555-1111 0 EX1ID Import NotReceived AWS has not received your device. Pending The specified job has not started.
ktKDXpdbEXAMPLEyGFJmQO744UHw= version:2.0
signingMethod:HmacSHA1
jobId:EX1ID
signature:ktKDXpdbEXAMPLEyGFJmQO744UHw=
When you ship your device, it will be delivered to a sorting facility, and then forwarded on to an AWS data center. Note that when you send a get-status command, the status of your job will not show as ``At AWS`` until the shipment has been received at the AWS data center.
awscli-1.17.14/awscli/examples/importexport/list-jobs.rst 0000644 0000000 0000000 00000000630 13620325556 023427 0 ustar root root 0000000 0000000 The following command lists the jobs you've created::
aws importexport list-jobs
The output for the list-jobs command looks like the following::
JOBS 2015-05-27T18:58:21Z False EX1ID Import
You can only list jobs created by users under the AWS account you are currently using. Listing jobs returns useful information, like job IDs, which are necessary for other AWS Import/Export commands.
awscli-1.17.14/awscli/examples/importexport/create-job.rst 0000644 0000000 0000000 00000003003 13620325556 023531 0 ustar root root 0000000 0000000 The following command creates an import job from a manifest file::
aws importexport create-job --job-type import --manifest file://manifest --no-validate-only
The file ``manifest`` is a YAML formatted text file in the current directory with the following content::
manifestVersion: 2.0;
returnAddress:
name: Jane Roe
company: Example Corp.
street1: 123 Any Street
city: Anytown
stateOrProvince: WA
postalCode: 91011-1111
phoneNumber: 206-555-1111
country: USA
deviceId: 49382
eraseDevice: yes
notificationEmail: john.doe@example.com;jane.roe@example.com
bucket: myBucket
For more information on the manifest file format, see `Creating Import Manifests`_ in the *AWS Import/Export Developer Guide*.
.. _`Creating Import Manifests`: http://docs.aws.amazon.com/AWSImportExport/latest/DG/ImportManifestFile.html
You can also pass the manifest as a string in quotes::
aws importexport create-job --job-type import --manifest 'manifestVersion: 2.0;
returnAddress:
name: Jane Roe
company: Example Corp.
street1: 123 Any Street
city: Anytown
stateOrProvince: WA
postalCode: 91011-1111
phoneNumber: 206-555-1111
country: USA
deviceId: 49382
eraseDevice: yes
notificationEmail: john.doe@example.com;jane.roe@example.com
bucket: myBucket'
For information on quoting string arguments and using files, see `Specifying Parameter Values`_ in the *AWS CLI User Guide*.
.. _`Specifying Parameter Values`: http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html
awscli-1.17.14/awscli/examples/importexport/cancel-job.rst 0000644 0000000 0000000 00000000355 13620325556 023522 0 ustar root root 0000000 0000000 The following command cancels the specified job::
aws importexport cancel-job --job-id EX1ID
Only jobs that were created by the AWS account you're currently using can be canceled. Jobs that have already completed cannot be canceled.
awscli-1.17.14/awscli/examples/waf/ 0000755 0000000 0000000 00000000000 13620325757 016774 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/waf/update-sql-injection-match-set.rst 0000755 0000000 0000000 00000001276 13620325556 025456 0 ustar root root 0000000 0000000 **To update a SQL Injection Match Set**
The following ``update-sql-injection-match-set`` command deletes a SqlInjectionMatchTuple object (filters) in a SQL injection match set::
aws waf update-sql-injection-match-set --sql-injection-match-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="DELETE",SqlInjectionMatchTuple={FieldToMatch={Type="QUERY_STRING"},TextTransformation="URL_DECODE"}
For more information, see `Working with SQL Injection Match Conditions`_ in the *AWS WAF* developer guide.
.. _`Working with SQL Injection Match Conditions`: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-sql-conditions.html
awscli-1.17.14/awscli/examples/waf/update-size-constraint-set.rst 0000755 0000000 0000000 00000001254 13620325556 024735 0 ustar root root 0000000 0000000 **To update a size constraint set**
The following ``update-size-constraint-set`` command deletes a SizeConstraint object (filters) in a size constraint set::
aws waf update-size-constraint-set --size-constraint-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="DELETE",SizeConstraint={FieldToMatch={Type="QUERY_STRING"},TextTransformation="NONE",ComparisonOperator="GT",Size=0}
For more information, see `Working with Size Constraint Conditions`_ in the *AWS WAF* developer guide.
.. _`Working with Size Constraint Conditions`: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-size-conditions.html
awscli-1.17.14/awscli/examples/waf/update-ip-set.rst 0000755 0000000 0000000 00000002220 13620325556 022203 0 ustar root root 0000000 0000000 **To update an IP set**
The following ``update-ip-set`` command updates an IPSet with an IPv4 address and deletes an IPv6 address::
aws waf update-ip-set --ip-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="INSERT",IPSetDescriptor={Type="IPV4",Value="12.34.56.78/16"},Action="DELETE",IPSetDescriptor={Type="IPV6",Value="1111:0000:0000:0000:0000:0000:0000:0111/128"}
Alternatively you can use a JSON file to specify the input. For example::
aws waf update-ip-set --ip-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates file://change.json
Where content of the JSON file is::
[
{
"Action": "INSERT",
"IPSetDescriptor":
{
"Type": "IPV4",
"Value": "12.34.56.78/16"
}
},
{
"Action": "DELETE",
"IPSetDescriptor":
{
"Type": "IPV6",
"Value": "1111:0000:0000:0000:0000:0000:0000:0111/128"
}
}
]
For more information, see `Working with IP Match Conditions`_ in the *AWS WAF* developer guide.
.. _`Working with IP Match Conditions`: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-ip-conditions.html
awscli-1.17.14/awscli/examples/waf/update-xss-match-set.rst 0000755 0000000 0000000 00000001207 13620325556 023506 0 ustar root root 0000000 0000000 **To update an XSSMatchSet**
The following ``update-xss-match-set`` command deletes an XssMatchTuple object (filters) in an XssMatchSet::
aws waf update-xss-match-set --xss-match-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="DELETE",XssMatchTuple={FieldToMatch={Type="QUERY_STRING"},TextTransformation="URL_DECODE"}
For more information, see `Working with Cross-site Scripting Match Conditions`_ in the *AWS WAF* developer guide.
.. _`Working with Cross-site Scripting Match Conditions`: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-xss-conditions.html
awscli-1.17.14/awscli/examples/waf/update-web-acl.rst 0000755 0000000 0000000 00000001052 13620325556 022316 0 ustar root root 0000000 0000000 **To update a web ACL**
The following ``update-web-acl`` command deletes an ActivatedRule object in a WebACL.
aws waf update-web-acl --web-acl-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="DELETE",ActivatedRule={Priority=1,RuleId="WAFRule-1-Example",Action={Type="ALLOW"},Type="REGULAR"}
For more information, see `Working with Web ACLs`_ in the *AWS WAF* developer guide.
.. _`Working with Web ACLs`: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-working-with.html
awscli-1.17.14/awscli/examples/waf/update-rule.rst 0000755 0000000 0000000 00000000771 13620325556 021762 0 ustar root root 0000000 0000000 **To update a rule**
The following ``update-rule`` command deletes a Predicate object in a rule::
aws waf update-rule --rule-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="DELETE",Predicate={Negated=false,Type="ByteMatch",DataId="MyByteMatchSetID"}
For more information, see `Working with Rules`_ in the *AWS WAF* developer guide.
.. _`Working with Rules`:
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-rules.html
awscli-1.17.14/awscli/examples/waf/update-byte-match-set.rst 0000755 0000000 0000000 00000001257 13620325556 023641 0 ustar root root 0000000 0000000 **To update a byte match set**
The following ``update-byte-match-set`` command deletes a ByteMatchTuple object (filter) in a ByteMatchSet::
aws waf update-byte-match-set --byte-match-set-id a123fae4-b567-8e90-1234-5ab67ac8ca90 --change-token 12cs345-67cd-890b-1cd2-c3a4567d89f1 --updates Action="DELETE",ByteMatchTuple={FieldToMatch={Type="HEADER",Data="referer"},TargetString="badrefer1",TextTransformation="NONE",PositionalConstraint="CONTAINS"}
For more information, see `Working with String Match Conditions`_ in the *AWS WAF* developer guide.
.. _`Working with String Match Conditions`: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-string-conditions.html
awscli-1.17.14/awscli/examples/waf/put-logging-configuration.rst 0000644 0000000 0000000 00000001467 13620325556 024634 0 ustar root root 0000000 0000000 **To create a logging configuration for the web ACL ARN with the specified Kinesis Firehose stream ARN**
The following ``put-logging-configuration`` example displays logging configuration for WAF with CloudFront. ::
aws waf put-logging-configuration \
--logging-configuration ResourceArn=arn:aws:waf::123456789012:webacl/3bffd3ed-fa2e-445e-869f-a6a7cf153fd3,LogDestinationConfigs=arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-firehose-stream,RedactedFields=[]
Output::
{
"LoggingConfiguration": {
"ResourceArn": "arn:aws:waf::123456789012:webacl/3bffd3ed-fa2e-445e-869f-a6a7cf153fd3",
"LogDestinationConfigs": [
"arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-firehose-stream"
]
}
}
awscli-1.17.14/awscli/examples/organizations/ 0000755 0000000 0000000 00000000000 13620325757 021106 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/organizations/disable-policy-type.rst 0000755 0000000 0000000 00000001023 13620325556 025513 0 ustar root root 0000000 0000000 **To disable a policy type in a root**
The following example shows how to disable the service control policy (SCP) policy type in a root: ::
aws organizations disable-policy-type --root-id r-examplerootid111 --policy-type SERVICE_CONTROL_POLICY
The output shows that the PolicyTypes response element no longer includes SERVICE_CONTROL_POLICY: ::
{
"Root": {
"PolicyTypes": [],
"Name": "Root",
"Id": "r-examplerootid111",
"Arn": "arn:aws:organizations::111111111111:root/o-exampleorgid/r-examplerootid111"
}
} awscli-1.17.14/awscli/examples/organizations/move-account.rst 0000755 0000000 0000000 00000000466 13620325556 024246 0 ustar root root 0000000 0000000 **To move an account between roots or OUs**
The following example shows you how to move the master account in the organization from the root to an OU: ::
aws organizations move-account --account-id 333333333333 --source-parent-id r-examplerootid111 --destination-parent-id ou-examplerootid111-exampleouid111 awscli-1.17.14/awscli/examples/organizations/decline-handshake.rst 0000755 0000000 0000000 00000002564 13620325556 025176 0 ustar root root 0000000 0000000 **To decline a handshake sent from another account**
The following example shows that Susan, an admin who is the owner of account 222222222222, declines an invitation to join Bill's organization. The DeclineHandshake operation returns a handshake object, showing that the state is now DECLINED: ::
aws organizations decline-handshake --handshake-id h-examplehandshakeid111
The output includes a handshake object that shows the new state of ``DECLINED``: ::
{
"Handshake": {
"Id": "h-examplehandshakeid111",
"State": "DECLINED",
"Resources": [
{
"Type": "ORGANIZATION",
"Value": "o-exampleorgid",
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@example.com"
},
{
"Type": "MASTER_NAME",
"Value": "Master Account"
}
]
},
{
"Type": "EMAIL",
"Value": "susan@example.com"
},
{
"Type": "NOTES",
"Value": "This is an invitation to Susan's account to join the Bill's organization."
}
],
"Parties": [
{
"Type": "EMAIL",
"Id": "susan@example.com"
},
{
"Type": "ORGANIZATION",
"Id": "o-exampleorgid"
}
],
"Action": "INVITE",
"RequestedTimestamp": 1470684478.687,
"ExpirationTimestamp": 1471980478.687,
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111"
}
} awscli-1.17.14/awscli/examples/organizations/enable-policy-type.rst 0000755 0000000 0000000 00000001157 13620325556 025346 0 ustar root root 0000000 0000000 **To enable the use of a policy type in a root**
The following example shows how to enable the service control policy (SCP) policy type in a root: ::
aws organizations enable-policy-type --root-id r-examplerootid111 --policy-type SERVICE_CONTROL_POLICY
The output shows a root object with a policyTypes response element showing that SCPs are now enabled: ::
{
"Root": {
"PolicyTypes": [
{
"Status":"ENABLED",
"Type":"SERVICE_CONTROL_POLICY"
}
],
"Id": "r-examplerootid111",
"Name": "Root",
"Arn": "arn:aws:organizations::111111111111:root/o-exampleorgid/r-examplerootid111"
}
} awscli-1.17.14/awscli/examples/organizations/describe-account.rst 0000755 0000000 0000000 00000001035 13620325556 025051 0 ustar root root 0000000 0000000 **To get the details about an account**
The following example shows you how to request details about an account: ::
aws organizations describe-account --account-id 555555555555
The output shows an account object with the details about the account: ::
{
"Account": {
"Id": "555555555555",
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/555555555555",
"Name": "Beta account",
"Email": "anika@example.com",
"JoinedMethod": "INVITED",
"JoinedTimeStamp": 1481756563.134,
"Status": "ACTIVE"
}
} awscli-1.17.14/awscli/examples/organizations/leave-organization.rst 0000755 0000000 0000000 00000000340 13620325556 025433 0 ustar root root 0000000 0000000 **To leave an organization as a member account**
The following example shows the administrator of a member account requesting to leave the organization it is currently a member of: ::
aws organizations leave-organization
awscli-1.17.14/awscli/examples/organizations/list-targets-for-policy.rst 0000755 0000000 0000000 00000002013 13620325556 026337 0 ustar root root 0000000 0000000 **To retrieve a list of the roots, OUs, and accounts that a policy is attached to**
The following example shows how to get a list of the roots, OUs, and accounts that the specified policy is attached to: ::
aws organizations list-targets-for-policy --policy-id p-FullAWSAccess
The output includes a list of attachment objects with summary information about the roots, OUs, and accounts the policy is attached to: ::
{
"Targets": [
{
"Arn": "arn:aws:organizations::111111111111:root/o-exampleorgid/r-examplerootid111",
"Name": "Root",
"TargetId":"r-examplerootid111",
"Type":"ROOT"
},
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/333333333333;",
"Name": "Developer Test Account",
"TargetId": "333333333333",
"Type": "ACCOUNT"
},
{
"Arn":"arn:aws:organizations::111111111111:ou/o-exampleorgid/ou-examplerootid111-exampleouid111",
"Name":"Accounting",
"TargetId":"ou-examplerootid111-exampleouid111",
"Type":"ORGANIZATIONAL_UNIT"
}
]
} awscli-1.17.14/awscli/examples/organizations/attach-policy.rst 0000755 0000000 0000000 00000000744 13620325556 024406 0 ustar root root 0000000 0000000 **To attach a policy to a root, OU, or account**
**Example 1**
The following example shows how to attach a service control policy (SCP) to an OU: ::
aws organizations attach-policy
--policy-id p-examplepolicyid111
--target-id ou-examplerootid111-exampleouid111
**Example 2**
The following example shows how to attach a service control policy directly to an account: ::
aws organizations attach-policy
--policy-id p-examplepolicyid111
--target-id 333333333333
awscli-1.17.14/awscli/examples/organizations/list-accounts.rst 0000755 0000000 0000000 00000002573 13620325556 024437 0 ustar root root 0000000 0000000 **To retrieve a list of all of the accounts in an organization**
The following example shows you how to request a list of the accounts in an organization: ::
aws organizations list-accounts
The output includes a list of account summary objects. ::
{
"Accounts": [
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/111111111111",
"JoinedMethod": "INVITED",
"JoinedTimestamp": 1481830215.45,
"Id": "111111111111",
"Name": "Master Account",
"Email": "bill@example.com",
"Status": "ACTIVE"
},
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/222222222222",
"JoinedMethod": "INVITED",
"JoinedTimestamp": 1481835741.044,
"Id": "222222222222",
"Name": "Production Account",
"Email": "alice@example.com",
"Status": "ACTIVE"
},
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/333333333333",
"JoinedMethod": "INVITED",
"JoinedTimestamp": 1481835795.536,
"Id": "333333333333",
"Name": "Development Account",
"Email": "juan@example.com",
"Status": "ACTIVE"
},
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/444444444444",
"JoinedMethod": "INVITED",
"JoinedTimestamp": 1481835812.143,
"Id": "444444444444",
"Name": "Test Account",
"Email": "anika@example.com",
"Status": "ACTIVE"
}
]
} awscli-1.17.14/awscli/examples/organizations/list-accounts-for-parent.rst 0000755 0000000 0000000 00000001603 13620325556 026503 0 ustar root root 0000000 0000000 **To retrieve a list of all of the accounts in a specified parent root or OU**
The following example shows how to request a list of the accounts in an OU: ::
aws organizations list-accounts-for-parent --parent-id ou-examplerootid111-exampleouid111
The output includes a list of account summary objects. ::
{
"Accounts": [
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/333333333333",
"JoinedMethod": "INVITED",
"JoinedTimestamp": 1481835795.536,
"Id": "333333333333",
"Name": "Development Account",
"Email": "juan@example.com",
"Status": "ACTIVE"
},
{
"Arn": "arn:aws:organizations::111111111111:account/o-exampleorgid/444444444444",
"JoinedMethod": "INVITED",
"JoinedTimestamp": 1481835812.143,
"Id": "444444444444",
"Name": "Test Account",
"Email": "anika@example.com",
"Status": "ACTIVE"
}
]
} awscli-1.17.14/awscli/examples/organizations/cancel-handshake.rst 0000755 0000000 0000000 00000002666 13620325556 025023 0 ustar root root 0000000 0000000 **To cancel a handshake sent from another account**
Bill previously sent an invitation to Susan's account to join his organization. He changes his mind and decides to cancel the invitation before Susan accepts it. The following example shows Bill's cancellation: ::
aws organizations cancel-handshake --handshake-id h-examplehandshakeid111
The output includes a handshake object that shows that the state is now ``CANCELED``: ::
{
"Handshake": {
"Id": "h-examplehandshakeid111",
"State":"CANCELED",
"Action": "INVITE",
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
"Parties": [
{
"Id": "o-exampleorgid",
"Type": "ORGANIZATION"
},
{
"Id": "susan@example.com",
"Type": "EMAIL"
}
],
"Resources": [
{
"Type": "ORGANIZATION",
"Value": "o-exampleorgid",
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@example.com"
},
{
"Type": "MASTER_NAME",
"Value": "Master Account"
},
{
"Type": "ORGANIZATION_FEATURE_SET",
"Value": "CONSOLIDATED_BILLING"
}
]
},
{
"Type": "EMAIL",
"Value": "anika@example.com"
},
{
"Type": "NOTES",
"Value": "This is a request for Susan's account to join Bob's organization."
}
],
"RequestedTimestamp": 1.47008383521E9,
"ExpirationTimestamp": 1.47137983521E9
}
} awscli-1.17.14/awscli/examples/organizations/accept-handshake.rst 0000755 0000000 0000000 00000002353 13620325556 025026 0 ustar root root 0000000 0000000 **To accept a handshake from another account**
Bill, the owner of an organization, has previously invited Juan's account to join his organization. The following example shows Juan's account accepting the handshake and thus agreeing to the invitation. ::
aws organizations accept-handshake --handshake-id h-examplehandshakeid111
The output shows the following: ::
{
"Handshake": {
"Action": "INVITE",
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
"RequestedTimestamp": 1481656459.257,
"ExpirationTimestamp": 1482952459.257,
"Id": "h-examplehandshakeid111",
"Parties": [
{
"Id": "o-exampleorgid",
"Type": "ORGANIZATION"
},
{
"Id": "juan@example.com",
"Type": "EMAIL"
}
],
"Resources": [
{
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@amazon.com"
},
{
"Type": "MASTER_NAME",
"Value": "Org Master Account"
},
{
"Type": "ORGANIZATION_FEATURE_SET",
"Value": "ALL"
}
],
"Type": "ORGANIZATION",
"Value": "o-exampleorgid"
},
{
"Type": "EMAIL",
"Value": "juan@example.com"
}
],
"State": "ACCEPTED"
}
} awscli-1.17.14/awscli/examples/organizations/list-roots.rst 0000755 0000000 0000000 00000001011 13620325556 023750 0 ustar root root 0000000 0000000 **To retrieve a list of the roots in an organization**
This example shows you how to get the list of roots for an organization: ::
aws organizations list-roots
The output includes a list of root structures with summary information: ::
{
"Roots": [
{
"Name": "Root",
"Arn": "arn:aws:organizations::111111111111:root/o-exampleorgid/r-examplerootid111",
"Id": "r-examplerootid111",
"PolicyTypes": [
{
"Status":"ENABLED",
"Type":"SERVICE_CONTROL_POLICY"
}
]
}
]
} awscli-1.17.14/awscli/examples/organizations/list-parents.rst 0000755 0000000 0000000 00000000651 13620325556 024267 0 ustar root root 0000000 0000000 **To list the parent OUs or roots for an account or child OU**
The following example you how to list the root or parent OU that contains that account 444444444444: ::
aws organizations list-parents --child-id 444444444444
The output shows that the specified account is in the OU with specified ID: ::
{
"Parents": [
{
"Id": "ou-examplerootid111-exampleouid111",
"Type": "ORGANIZATIONAL_UNIT"
}
]
} awscli-1.17.14/awscli/examples/organizations/create-policy.rst 0000755 0000000 0000000 00000003507 13620325556 024405 0 ustar root root 0000000 0000000 **Example 1: To create a policy with a text source file for the JSON policy**
The following example shows you how to create an service control policy (SCP) named ``AllowAllS3Actions``. The policy contents are taken from a file on the local computer called ``policy.json``. ::
aws organizations create-policy --content file://policy.json --name AllowAllS3Actions, --type SERVICE_CONTROL_POLICY --description "Allows delegation of all S3 actions"
The output includes a policy object with details about the new policy: ::
{
"Policy": {
"Content": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:*\"],\"Resource\":[\"*\"]}]}",
"PolicySummary": {
"Arn": "arn:aws:organizations::o-exampleorgid:policy/service_control_policy/p-examplepolicyid111",
"Description": "Allows delegation of all S3 actions",
"Name": "AllowAllS3Actions",
"Type":"SERVICE_CONTROL_POLICY"
}
}
}
**Example 2: To create a policy with a JSON policy as a parameter**
The following example shows you how to create the same SCP, this time by embedding the policy contents as a JSON string in the parameter. The string must be escaped with backslashes before the double quotes to ensure that they are treated as literals in the parameter, which itself is surrounded by double quotes: ::
aws organizations create-policy --content "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:*\"],\"Resource\":[\"*\"]}]}" --name AllowAllS3Actions --type SERVICE_CONTROL_POLICY --description "Allows delegation of all S3 actions"
For more information about creating and using policies in your organization, see `Managing Organization Policies`_ in the *AWS Organizations User Guide*.
.. _`Managing Organization Policies`: http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html awscli-1.17.14/awscli/examples/organizations/list-children.rst 0000755 0000000 0000000 00000001062 13620325556 024400 0 ustar root root 0000000 0000000 **To retrieve the child accounts and OUs of a parent OU or root**
The following example you how to list the root or OU that contains that account 444444444444: ::
aws organizations list-children --child-type ORGANIZATIONAL_UNIT --parent-id ou-examplerootid111-exampleouid111
The output shows the two child OUs that are contained by the parent: ::
{
"Children": [
{
"Id": "ou-examplerootid111-exampleouid111",
"Type":"ORGANIZATIONAL_UNIT"
},
{
"Id":"ou-examplerootid111-exampleouid222",
"Type":"ORGANIZATIONAL_UNIT"
}
]
} awscli-1.17.14/awscli/examples/organizations/list-policies.rst 0000755 0000000 0000000 00000002503 13620325556 024420 0 ustar root root 0000000 0000000 **To retrieve a list of all policies in an organization of a certain type**
The following example shows you how to get a list of SCPs, as specified by the filter parameter: ::
aws organizations list-policies --filter SERVICE_CONTROL_POLICY
The output includes a list of policies with summary information: ::
{
"Policies": [
{
"Type": "SERVICE_CONTROL_POLICY",
"Name": "AllowAllS3Actions",
"AwsManaged": false,
"Id": "p-examplepolicyid111",
"Arn": "arn:aws:organizations::111111111111:policy/service_control_policy/p-examplepolicyid111",
"Description": "Enables account admins to delegate permissions for any S3 actions to users and roles in their accounts."
},
{
"Type": "SERVICE_CONTROL_POLICY",
"Name": "AllowAllEC2Actions",
"AwsManaged": false,
"Id": "p-examplepolicyid222",
"Arn": "arn:aws:organizations::111111111111:policy/service_control_policy/p-examplepolicyid222",
"Description": "Enables account admins to delegate permissions for any EC2 actions to users and roles in their accounts."
},
{
"AwsManaged": true,
"Description": "Allows access to every operation",
"Type": "SERVICE_CONTROL_POLICY",
"Id": "p-FullAWSAccess",
"Arn": "arn:aws:organizations::aws:policy/service_control_policy/p-FullAWSAccess",
"Name": "FullAWSAccess"
}
]
} awscli-1.17.14/awscli/examples/organizations/remove-account-from-organization.rst 0000755 0000000 0000000 00000000351 13620325556 030231 0 ustar root root 0000000 0000000 **To remove an account from an organization as the master account**
The following example shows you how to remove an account from an organization: ::
aws organizations remove-account-from-organization --account-id 333333333333
awscli-1.17.14/awscli/examples/organizations/update-policy.rst 0000755 0000000 0000000 00000004021 13620325556 024414 0 ustar root root 0000000 0000000 **Example 1: To rename a policy**
The following ``update-policy`` example renames a policy and gives it a new description. ::
aws organizations update-policy \
--policy-id p-examplepolicyid111 \
--name Renamed-Policy \
--description "This description replaces the original."
The output shows the new name and description. ::
{
"Policy": {
"Content": "{\n \"Version\":\"2012-10-17\",\n \"Statement\":{\n \"Effect\":\"Allow\",\n \"Action\":\"ec2:*\",\n \"Resource\":\"*\"\n }\n}\n",
"PolicySummary": {
"Id": "p-examplepolicyid111",
"AwsManaged": false,
"Arn":"arn:aws:organizations::111111111111:policy/o-exampleorgid/service_control_policy/p-examplepolicyid111",
"Description": "This description replaces the original.",
"Name": "Renamed-Policy",
"Type": "SERVICE_CONTROL_POLICY"
}
}
}
**Example 2: To replace a policy's JSON text content**
The following example shows you how to replace the JSON text of the SCP in the previous example with a new JSON policy text string that allows S3 instead of EC2: ::
aws organizations update-policy \
--policy-id p-examplepolicyid111 \
--content "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"*\"}}"
The output shows the new content::
{
"Policy": {
"Content": "{ \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"s3:*\", \"Resource\": \"*\" } }",
"PolicySummary": {
"Arn": "arn:aws:organizations::111111111111:policy/o-exampleorgid/service_control_policy/p-examplepolicyid111",
"AwsManaged": false;
"Description": "This description replaces the original.",
"Id": "p-examplepolicyid111",
"Name": "Renamed-Policy",
"Type": "SERVICE_CONTROL_POLICY"
}
}
}
awscli-1.17.14/awscli/examples/organizations/describe-handshake.rst 0000755 0000000 0000000 00000002375 13620325556 025353 0 ustar root root 0000000 0000000 **To get information about a handshake**
The following example shows you how to request details about a handshake. The handshake ID comes either from the original call to ``InviteAccountToOrganization``, or from a call to ``ListHandshakesForAccount`` or ``ListHandshakesForOrganization``: ::
aws organizations describe-handshake --handshake-id h-examplehandshakeid111
The output includes a handshake object that has all the details about the requested handshake: ::
{
"Handshake": {
"Id": "h-examplehandshakeid111",
"State": "OPEN",
"Resources": [
{
"Type": "ORGANIZATION",
"Value": "o-exampleorgid",
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@example.com"
},
{
"Type": "MASTER_NAME",
"Value": "Master Account"
}
]
},
{
"Type": "EMAIL",
"Value": "anika@example.com"
}
],
"Parties": [
{
"Type": "ORGANIZATION",
"Id": "o-exampleorgid"
},
{
"Type": "EMAIL",
"Id": "anika@example.com"
}
],
"Action": "INVITE",
"RequestedTimestamp": 1470158698.046,
"ExpirationTimestamp": 1471454698.046,
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111"
}
} awscli-1.17.14/awscli/examples/organizations/list-handshakes-for-organization.rst 0000755 0000000 0000000 00000004617 13620325556 030220 0 ustar root root 0000000 0000000 **To retrieve a list of the handshakes associated with an organization**
The following example shows how to get a list of handshakes that are associated with the current organization: ::
aws organizations list-handshakes-for-organization
The output shows two handshakes. The first one is an invitation to Juan's account and shows a state of OPEN. The second is an invitation to Anika's account and shows a state of ACCEPTED: ::
{
"Handshakes": [
{
"Action": "INVITE",
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
"ExpirationTimestamp": 1482952459.257,
"Id": "h-examplehandshakeid111",
"Parties": [
{
"Id": "o-exampleorgid",
"Type": "ORGANIZATION"
},
{
"Id": "juan@example.com",
"Type": "EMAIL"
}
],
"RequestedTimestamp": 1481656459.257,
"Resources": [
{
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@amazon.com"
},
{
"Type": "MASTER_NAME",
"Value": "Org Master Account"
},
{
"Type": "ORGANIZATION_FEATURE_SET",
"Value": "FULL"
}
],
"Type": "ORGANIZATION",
"Value": "o-exampleorgid"
},
{
"Type": "EMAIL",
"Value": "juan@example.com"
},
{
"Type":"NOTES",
"Value":"This is an invitation to Juan's account to join Bill's organization."
}
],
"State": "OPEN"
},
{
"Action": "INVITE",
"State":"ACCEPTED",
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
"ExpirationTimestamp": 1.471797437427E9,
"Id": "h-examplehandshakeid222",
"Parties": [
{
"Id": "o-exampleorgid",
"Type": "ORGANIZATION"
},
{
"Id": "anika@example.com",
"Type": "EMAIL"
}
],
"RequestedTimestamp": 1.469205437427E9,
"Resources": [
{
"Resources": [
{
"Type":"MASTER_EMAIL",
"Value":"bill@example.com"
},
{
"Type":"MASTER_NAME",
"Value":"Master Account"
}
],
"Type":"ORGANIZATION",
"Value":"o-exampleorgid"
},
{
"Type":"EMAIL",
"Value":"anika@example.com"
},
{
"Type":"NOTES",
"Value":"This is an invitation to Anika's account to join Bill's organization."
}
]
}
]
} awscli-1.17.14/awscli/examples/organizations/delete-organizational-unit.rst 0000755 0000000 0000000 00000000414 13620325556 027075 0 ustar root root 0000000 0000000 **To delete an OU**
The following example shows how to delete an OU. The example assumes that you previously removed all accounts and other OUs from the OU: ::
aws organizations delete-organizational-unit --organizational-unit-id ou-examplerootid111-exampleouid111
awscli-1.17.14/awscli/examples/organizations/create-organization.rst 0000755 0000000 0000000 00000003574 13620325556 025616 0 ustar root root 0000000 0000000 **Example 1: To create a new organization**
Bill wants to create an organization using credentials from account 111111111111. The following example shows that the account becomes the master account in the new organization. Because he does not specify a features set, the new organization defaults to all features enabled and service control policies are enabled on the root. ::
aws organizations create-organization
The output includes an organization object with details about the new organization: ::
{
"Organization": {
"AvailablePolicyTypes": [
{
"Status": "ENABLED",
"Type": "SERVICE_CONTROL_POLICY"
}
],
"MasterAccountId": "111111111111",
"MasterAccountArn": "arn:aws:organizations::111111111111:account/o-exampleorgid/111111111111",
"MasterAccountEmail": "bill@example.com",
"FeatureSet": "ALL",
"Id": "o-exampleorgid",
"Arn": "arn:aws:organizations::111111111111:organization/o-exampleorgid"
}
}
**Example 2: To create a new organization with only consolidated billing features enabled**
The following example creates an organization that supports only the consolidated billing features: ::
aws organizations create-organization --feature-set CONSOLIDATED_BILLING
The output includes an organization object with details about the new organization: ::
{
"Organization": {
"Arn": "arn:aws:organizations::111111111111:organization/o-exampleorgid",
"AvailablePolicyTypes": [],
"Id": "o-exampleorgid",
"MasterAccountArn": "arn:aws:organizations::111111111111:account/o-exampleorgid/111111111111",
"MasterAccountEmail": "bill@example.com",
"MasterAccountId": "111111111111",
"FeatureSet": "CONSOLIDATED_BILLING"
}
}
For more information, see `Creating an Organization`_ in the *AWS Organizations Users Guide*.
.. _`Creating an Organization`: http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_create.html
awscli-1.17.14/awscli/examples/organizations/enable-all-features.rst 0000755 0000000 0000000 00000002356 13620325556 025456 0 ustar root root 0000000 0000000 **To enable all features in an organization**
This example shows the administrator asking all the invited accounts in the organization to approve enabled all features in the organization. AWS Organizations sends an email to the address that is registered with every invited member account asking the owner to approve the change to all features by accepting the handshake that is sent. After all invited member accounts accept the handshake, the organization administrator can finalize the change to all features, and those with appropriate permissions can create policies and apply them to roots, OUs, and accounts: ::
aws organizations enable-all-features
The output is a handshake object that is sent to all invited member accounts for approval: ::
{
"Handshake": {
"Action": "ENABLE_ALL_FEATURES",
"Arn":"arn:aws:organizations::111111111111:handshake/o-exampleorgid/enable_all_features/h-examplehandshakeid111",
"ExpirationTimestamp":1.483127868609E9,
"Id":"h-examplehandshakeid111",
"Parties": [
{
"id":"o-exampleorgid",
"type":"ORGANIZATION"
}
],
"requestedTimestamp":1.481831868609E9,
"resources": [
{
"type":"ORGANIZATION",
"value":"o-exampleorgid"
}
],
"state":"REQUESTED"
}
} awscli-1.17.14/awscli/examples/organizations/list-handshakes-for-account.rst 0000755 0000000 0000000 00000002404 13620325556 027140 0 ustar root root 0000000 0000000 **To retrieve a list of the handshakes sent to an account**
The following example shows how to get a list of all handshakes that are associated with the account of the credentials that were used to call the operation: ::
aws organizations list-handshakes-for-account
The output includes a list of handshake structures with information about each handshake including its current state: ::
{
"Handshake": {
"Action": "INVITE",
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
"ExpirationTimestamp": 1482952459.257,
"Id": "h-examplehandshakeid111",
"Parties": [
{
"Id": "o-exampleorgid",
"Type": "ORGANIZATION"
},
{
"Id": "juan@example.com",
"Type": "EMAIL"
}
],
"RequestedTimestamp": 1481656459.257,
"Resources": [
{
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@amazon.com"
},
{
"Type": "MASTER_NAME",
"Value": "Org Master Account"
},
{
"Type": "ORGANIZATION_FEATURE_SET",
"Value": "FULL"
}
],
"Type": "ORGANIZATION",
"Value": "o-exampleorgid"
},
{
"Type": "EMAIL",
"Value": "juan@example.com"
}
],
"State": "OPEN"
}
} awscli-1.17.14/awscli/examples/organizations/list-create-account-status.rst 0000755 0000000 0000000 00000002342 13620325556 027030 0 ustar root root 0000000 0000000 **Example 1: To retrieve a list of the account creation requests made in the current organization**
The following example shows how to request a list of account creation requests for an organization that have completed successfully: ::
aws organizations list-create-account-status --states SUCCEEDED
The output includes an array of objects with information about each request. ::
{
"CreateAccountStatuses": [
{
"AccountId": "444444444444",
"AccountName": "Developer Test Account",
"CompletedTimeStamp": 1481835812.143,
"Id": "car-examplecreateaccountrequestid111",
"RequestedTimeStamp": 1481829432.531,
"State": "SUCCEEDED"
}
]
}
**Example 2: To retrieve a list of the in progress account creation requests made in the current organization**
The following example gets a list of in-progress account creation requests for an organization: ::
aws organizations list-create-account-status --states IN_PROGRESS
The output includes an array of objects with information about each request. ::
{
"CreateAccountStatuses": [
{
"State": "IN_PROGRESS",
"Id": "car-examplecreateaccountrequestid111",
"RequestedTimeStamp": 1481829432.531,
"AccountName": "Production Account"
}
]
} awscli-1.17.14/awscli/examples/organizations/detach-policy.rst 0000755 0000000 0000000 00000000352 13620325556 024365 0 ustar root root 0000000 0000000 **To detach a policy from a root, OU, or account**
The following example shows how to detach a policy from an OU: ::
aws organizations detach-policy --target-id ou-examplerootid111-exampleouid111 --policy-id p-examplepolicyid111
awscli-1.17.14/awscli/examples/organizations/delete-policy.rst 0000755 0000000 0000000 00000000365 13620325556 024403 0 ustar root root 0000000 0000000 **To delete a policy**
The following example shows how to delete a policy from an organization. The example assumes that you previously detached the policy from all entities: ::
aws organizations delete-policy --policy-id p-examplepolicyid111 awscli-1.17.14/awscli/examples/organizations/describe-organizational-unit.rst 0000755 0000000 0000000 00000001011 13620325556 027405 0 ustar root root 0000000 0000000 **To get information about an OU**
The following example shows how to request details about an OU: ::
aws organizations describe-organizational-unit --organizational-unit-id ou-examplerootid111-exampleouid111
The output includes an OrganizationUnit object that contains the details about the OU: ::
{
"OrganizationalUnit": {
"Name": "Accounting Group",
"Arn": "arn:aws:organizations::o-exampleorgid:ou/o-exampleorgid/ou-examplerootid111-exampleouid111",
"Id": "ou-examplerootid111-exampleouid111"
}
} awscli-1.17.14/awscli/examples/organizations/invite-account-to-organization.rst 0000755 0000000 0000000 00000002527 13620325556 027720 0 ustar root root 0000000 0000000 **To invite an account to join an organization**
The following example shows the master account owned by bill@example.com inviting the account owned by juan@example.com to join an organization: ::
aws organizations invite-account-to-organization --target '{"Type": "EMAIL", "Id": "juan@example.com"}' --notes "This is a request for Juan's account to join Bill's organization."
The output includes a handshake structure that shows what is sent to the invited account: ::
{
"Handshake": {
"Action": "INVITE",
"Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
"ExpirationTimestamp": 1482952459.257,
"Id": "h-examplehandshakeid111",
"Parties": [
{
"Id": "o-exampleorgid",
"Type": "ORGANIZATION"
},
{
"Id": "juan@example.com",
"Type": "EMAIL"
}
],
"RequestedTimestamp": 1481656459.257,
"Resources": [
{
"Resources": [
{
"Type": "MASTER_EMAIL",
"Value": "bill@amazon.com"
},
{
"Type": "MASTER_NAME",
"Value": "Org Master Account"
},
{
"Type": "ORGANIZATION_FEATURE_SET",
"Value": "FULL"
}
],
"Type": "ORGANIZATION",
"Value": "o-exampleorgid"
},
{
"Type": "EMAIL",
"Value": "juan@example.com"
}
],
"State": "OPEN"
}
} awscli-1.17.14/awscli/examples/organizations/create-account.rst 0000755 0000000 0000000 00000002660 13620325556 024541 0 ustar root root 0000000 0000000 **To create a member account that is automatically part of the organization**
The following example shows how to create a member account in an organization. The member account is configured with the name Production Account and the email address of susan@example.com. Organizations automatically creates an IAM role using the default name of OrganizationAccountAccessRole because the roleName parameter is not specified. Also, the setting that allows IAM users or roles with sufficient permissions to access account billing data is set to the default value of ALLOW because the IamUserAccessToBilling parameter is not specified. Organizations automatically sends Susan a "Welcome to AWS" email: ::
aws organizations create-account --email susan@example.com --account-name "Production Account"
The output includes a request object that shows that the status is now ``IN_PROGRESS``: ::
{
"CreateAccountStatus": {
"State": "IN_PROGRESS",
"Id": "car-examplecreateaccountrequestid111"
}
}
You can later query the current status of the request by providing the Id response value to the describe-create-account-status command as the value for the create-account-request-id parameter.
For more information, see `Creating an AWS Account in Your Organization`_ in the *AWS Organizations Users Guide*.
.. _`Creating an AWS Account in Your Organization`: http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html
awscli-1.17.14/awscli/examples/organizations/list-policies-for-target.rst 0000755 0000000 0000000 00000001674 13620325556 026500 0 ustar root root 0000000 0000000 **To retrieve a list of the SCPs attached directly to an account**
The following example shows how to get a list of all service control policies (SCPs), as specified by the Filter parameter, that are directly attached to an account: ::
aws organizations list-policies-for-target --filter SERVICE_CONTROL_POLICY --target-id 444444444444
The output includes a list of policy structures with summary information about the policies. The list does not include policies that apply to the account because of inheritance from its location in an OU hierarchy: ::
{
"Policies": [
{
"Type": "SERVICE_CONTROL_POLICY",
"Name": "AllowAllEC2Actions",
"AwsManaged", false,
"Id": "p-examplepolicyid222",
"Arn": "arn:aws:organizations::o-exampleorgid:policy/service_control_policy/p-examplepolicyid222",
"Description": "Enables account admins to delegate permissions for any EC2 actions to users and roles in their accounts."
}
]
} awscli-1.17.14/awscli/examples/organizations/list-organizational-units-for-parent.rst 0000755 0000000 0000000 00000001230 13620325556 031041 0 ustar root root 0000000 0000000 **To retrieve a list of the OUs in a parent OU or root**
The following example shows you how to get a list of OUs in a specified root: ::
aws organizations list-organizational-units-for-parent --parent-id r-examplerootid111
The output shows that the specified root contains two OUs and shows details of each: ::
{
"OrganizationalUnits": [
{
"Name": "AccountingDepartment",
"Arn": "arn:aws:organizations::o-exampleorgid:ou/r-examplerootid111/ou-examplerootid111-exampleouid111"
},
{
"Name": "ProductionDepartment",
"Arn": "arn:aws:organizations::o-exampleorgid:ou/r-examplerootid111/ou-examplerootid111-exampleouid222"
}
]
} awscli-1.17.14/awscli/examples/organizations/describe-create-account-status.rst 0000755 0000000 0000000 00000001477 13620325556 027645 0 ustar root root 0000000 0000000 **To get the latest status about a request to create an account**
The following example shows how to request the latest status for a previous request to create an account in an organization. The specified --request-id comes from the response of the original call to create-account. The account creation request shows by the status field that Organizations successfully completed the creation of the account.
Command::
aws organizations describe-create-account-status --create-account-request-id car-examplecreateaccountrequestid111
Output::
{
"CreateAccountStatus": {
"State": "SUCCEEDED",
"AccountId": "555555555555",
"AccountName": "Beta account",
"RequestedTimestamp": 1470684478.687,
"CompletedTimestamp": 1470684532.472,
"Id": "car-examplecreateaccountrequestid111"
}
}
awscli-1.17.14/awscli/examples/organizations/delete-organization.rst 0000755 0000000 0000000 00000000521 13620325556 025602 0 ustar root root 0000000 0000000 **To delete an organization**
The following example shows how to delete an organization. To perform this operation, you must be an admin of the master account in the organization. The example assumes that you previously removed all the member accounts, OUs, and policies from the organization: ::
aws organizations delete-organization awscli-1.17.14/awscli/examples/organizations/describe-organization.rst 0000755 0000000 0000000 00000001320 13620325556 026116 0 ustar root root 0000000 0000000 **To get information about the current organization**
The following example shows you how to request details about an organization: ::
aws organizations describe-organization
The output includes an organization object that has the details about the organization: ::
{
"Organization": {
"MasterAccountArn": "arn:aws:organizations::111111111111:account/o-exampleorgid/111111111111",
"MasterAccountEmail": "bill@example.com",
"MasterAccountId": "111111111111",
"Id": "o-exampleorgid",
"FeatureSet": "ALL",
"Arn": "arn:aws:organizations::111111111111:organization/o-exampleorgid",
"AvailablePolicyTypes": [
{
"Status": "ENABLED",
"Type": "SERVICE_CONTROL_POLICY"
}
]
}
} awscli-1.17.14/awscli/examples/organizations/create-organizational-unit.rst 0000755 0000000 0000000 00000001005 13620325556 027073 0 ustar root root 0000000 0000000 **To create an OU in a root or parent OU**
The following example shows how to create an OU that is named AccountingOU: ::
aws organizations create-organizational-unit --parent-id r-examplerootid111 --name AccountingOU
The output includes an organizationalUnit object with details about the new OU: ::
{
"OrganizationalUnit": {
"Id": "ou-examplerootid111-exampleouid111",
"Arn": "arn:aws:organizations::111111111111:ou/o-exampleorgid/ou-examplerootid111-exampleouid111",
"Name": "AccountingOU"
}
} awscli-1.17.14/awscli/examples/organizations/update-organizational-unit.rst 0000755 0000000 0000000 00000000755 13620325556 027125 0 ustar root root 0000000 0000000 **To rename an OU**
This example shows you how to rename an OU: In this example, the OU is renamed "AccountingOU": ::
aws organizations update-organizational-unit --organizational-unit-id ou-examplerootid111-exampleouid111 --name AccountingOU
The output shows the new name: ::
{
"OrganizationalUnit": {
"Id": "ou-examplerootid111-exampleouid111"
"Name": "AccountingOU",
"Arn": "arn:aws:organizations::111111111111:ou/o-exampleorgid/ou-examplerootid111-exampleouid111""
}
} awscli-1.17.14/awscli/examples/organizations/describe-policy.rst 0000755 0000000 0000000 00000001437 13620325556 024722 0 ustar root root 0000000 0000000 **To get information about a policy**
The following example shows how to request information about a policy: ::
aws organizations describe-policy --policy-id p-examplepolicyid111
The output includes a policy object that contains details about the policy: ::
{
"Policy": {
"Content": "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": \"*\",\n \"Resource\": \"*\"\n }\n ]\n}",
"PolicySummary": {
"Arn": "arn:aws:organizations::111111111111:policy/o-exampleorgid/service_control_policy/p-examplepolicyid111",
"Type": "SERVICE_CONTROL_POLICY",
"Id": "p-examplepolicyid111",
"AwsManaged": false,
"Name": "AllowAllS3Actions",
"Description": "Enables admins to delegate S3 permissions"
}
}
} awscli-1.17.14/awscli/examples/iotanalytics/ 0000755 0000000 0000000 00000000000 13620325757 020722 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/iotanalytics/cancel-pipeline-reprocessing.rst 0000644 0000000 0000000 00000001072 13620325556 027202 0 ustar root root 0000000 0000000 **To cancel the reprocessing of data through a pipeline**
The following ``cancel-pipeline-reprocessing`` example cancels the reprocessing of data through the specified pipeline. ::
aws iotanalytics cancel-pipeline-reprocessing \
--pipeline-name mypipeline \
--reprocessing-id "6ad2764f-fb13-4de3-b101-4e74af03b043"
This command produces no output.
For more information, see `CancelPipelineReprocessing `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/delete-dataset.rst 0000644 0000000 0000000 00000000722 13620325556 024337 0 ustar root root 0000000 0000000 **To delete a dataset**
The following ``delete-dataset`` example deletes the specified dataset. You don't have to delete the content of the dataset before you perform this operation. ::
aws iotanalytics delete-dataset \
--dataset-name mydataset
This command produces no output.
For more information, see `DeleteDataset `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/create-channel.rst 0000644 0000000 0000000 00000001772 13620325556 024331 0 ustar root root 0000000 0000000 **To create a channel**
The following ``create-channel`` example creates a channel with the specified configuration. A channel collects data from an MQTT topic and archives the raw, unprocessed messages before publishing the data to a pipeline. ::
aws iotanalytics create-channel \
--cli-input-json file://create-channel.json
Contents of ``create-channel.json``::
{
"channelName": "mychannel",
"retentionPeriod": {
"unlimited": true
},
"tags": [
{
"key": "Environment",
"value": "Production"
}
]
}
Output::
{
"channelArn": "arn:aws:iotanalytics:us-west-2:123456789012:channel/mychannel",
"channelName": "mychannel",
"retentionPeriod": {
"unlimited": true
}
}
For more information, see `CreateChannel `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/delete-channel.rst 0000644 0000000 0000000 00000000607 13620325556 024324 0 ustar root root 0000000 0000000 **Delete an IoT Analytics Channel**
The following ``delete-channel`` example deletes the specified channel. ::
aws iotanalytics delete-channel \
--channel-name mychannel
This command produces no output.
For more information, see `DeleteChannel `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/update-channel.rst 0000644 0000000 0000000 00000001076 13620325556 024345 0 ustar root root 0000000 0000000 **To modify a channel**
The following ``update-channel`` example modifies the settings for the specified channel. ::
aws iotanalytics update-channel \
--cli-input-json file://update-channel.json
Contents of ``update-channel.json``::
{
"channelName": "mychannel",
"retentionPeriod": {
"numberOfDays": 92
}
}
This command produces no output.
For more information, see `UpdateChannel `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/update-pipeline.rst 0000644 0000000 0000000 00000002540 13620325556 024537 0 ustar root root 0000000 0000000 **To update a pipeline**
The following ``update-pipeline`` example modifies the settings of the specified pipeline. You must specify both a channel and a data store activity and, optionally, as many as 23 additional activities, in the ``pipelineActivities`` array. ::
aws iotanalytics update-pipeline \
--cli-input-json file://update-pipeline.json
Contents of update-pipeline.json::
{
"pipelineName": "mypipeline",
"pipelineActivities": [
{
"channel": {
"name": "myChannelActivity",
"channelName": "mychannel",
"next": "myMathActivity"
}
},
{
"datastore": {
"name": "myDatastoreActivity",
"datastoreName": "mydatastore"
}
},
{
"math": {
"name": "myMathActivity",
"math": "(((temp - 32) * 5.0) / 9.0) + 273.15",
"attribute": "tempK",
"next": "myDatastoreActivity"
}
}
]
}
This command produces no output.
For more information, see `UpdatePipeline `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/sample-channel-data.rst 0000644 0000000 0000000 00000001164 13620325556 025251 0 ustar root root 0000000 0000000 **To retrieve sample messages from a channel**
The following ``sample-channel-data`` example retrieves a sample of messages from the specified channel ingested during the specified timeframe. You can retrieve up to 10 messages. ::
aws iotanalytics sample-channel-data \
--channel-name mychannel
Output::
{
"payloads": [
"eyAidGVtcGVyYXR1cmUiOiAyMCB9",
"eyAiZm9vIjogImJhciIgfQ=="
]
}
For more information, see `SampleChannelData `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/delete-datastore.rst 0000644 0000000 0000000 00000000616 13620325556 024702 0 ustar root root 0000000 0000000 **To delete a data store**
The following ``delete-datastore`` example deletes the specified data store. ::
aws iotanalytics delete-datastore \
--datastore-name mydatastore
This command produces no output.
For more information, see `DeleteDatastore `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/describe-channel.rst 0000644 0000000 0000000 00000001720 13620325556 024637 0 ustar root root 0000000 0000000 **To retrieve information about a channel**
The following ``describe-channel`` example displays details, including statistics, for the specified channel. ::
aws iotanalytics describe-channel \
--channel-name mychannel \
--include-statistics
Output::
{
"statistics": {
"size": {
"estimatedSizeInBytes": 402.0,
"estimatedOn": 1561504380.0
}
},
"channel": {
"status": "ACTIVE",
"name": "mychannel",
"lastUpdateTime": 1557860351.001,
"creationTime": 1557860351.001,
"retentionPeriod": {
"unlimited": true
},
"arn": "arn:aws:iotanalytics:us-west-2:123456789012:channel/mychannel"
}
}
For more information, see `DescribeChannel `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/list-datasets.rst 0000644 0000000 0000000 00000001502 13620325556 024230 0 ustar root root 0000000 0000000 **To retrieve information about datasets**
The following ``list-datasets`` example lists summary information about available datasets. ::
aws iotanalytics list-datasets
Output::
{
"datasetSummaries": [
{
"status": "ACTIVE",
"datasetName": "mydataset",
"lastUpdateTime": 1557859240.658,
"triggers": [],
"creationTime": 1557859240.658,
"actions": [
{
"actionName": "query_32",
"actionType": "QUERY"
}
]
}
]
}
For more information, see `ListDatasets `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/run-pipeline-activity.rst 0000644 0000000 0000000 00000001767 13620325556 025725 0 ustar root root 0000000 0000000 **To simulate a pipeline activity**
The following ``run-pipeline-activity`` example simulates the results of running a pipeline activity on a message payload. ::
aws iotanalytics run-pipeline-activity \
--pipeline-activity file://maths.json \
--payloads file://payloads.json
Contents of ``maths.json``::
{
"math": {
"name": "MyMathActivity",
"math": "((temp - 32) * 5.0) / 9.0",
"attribute": "tempC"
}
}
Contents of ``payloads.json``::
[
"{\"humidity\": 52, \"temp\": 68 }",
"{\"humidity\": 52, \"temp\": 32 }"
]
Output::
{
"logResult": "",
"payloads": [
"eyJodW1pZGl0eSI6NTIsInRlbXAiOjY4LCJ0ZW1wQyI6MjB9",
"eyJodW1pZGl0eSI6NTIsInRlbXAiOjMyLCJ0ZW1wQyI6MH0="
]
}
For more information, see `RunPipelineActivity `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/get-dataset-content.rst 0000644 0000000 0000000 00000001644 13620325556 025330 0 ustar root root 0000000 0000000 **To retrieve the contents of a dataset**
The following ``get-dataset-content`` example retrieves the contents of a dataset as presigned URIs. ::
aws iotanalytics get-dataset-content --dataset-name mydataset
Output::
{
"status": {
"state": "SUCCEEDED"
},
"timestamp": 1557863215.995,
"entries": [
{
"dataURI": "https://aws-radiant-dataset-12345678-1234-1234-1234-123456789012.s3.us-west-2.amazonaws.com/results/12345678-e8b3-46ba-b2dd-efe8d86cf385.csv?X-Amz-Security-Token=...-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20190628T173437Z&X-Amz-SignedHeaders=host&X-Amz-Expires=7200&X-Amz-Credential=...F20190628%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=..."
}
]
}
For more information, see `GetDatasetContent `__ in the *guide*.
awscli-1.17.14/awscli/examples/iotanalytics/create-datastore.rst 0000644 0000000 0000000 00000001677 13620325556 024713 0 ustar root root 0000000 0000000 **To create a data store**
The following ``create-datastore`` example creates a data store, which is a repository for messages. ::
aws iotanalytics create-datastore \
--cli-input-json file://create-datastore.json
Contents of ``create-datastore.json``::
{
"datastoreName": "mydatastore",
"retentionPeriod": {
"numberOfDays": 90
},
"tags": [
{
"key": "Environment",
"value": "Production"
}
]
}
Output::
{
"datastoreName": "mydatastore",
"datastoreArn": "arn:aws:iotanalytics:us-west-2:123456789012:datastore/mydatastore",
"retentionPeriod": {
"numberOfDays": 90,
"unlimited": false
}
}
For more information, see `CreateDatastore `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/batch-put-message.rst 0000644 0000000 0000000 00000001306 13620325556 024762 0 ustar root root 0000000 0000000 **To send a message to a channel**
The following ``batch-put-message`` example sends a message to the specified channel. ::
aws iotanalytics batch-put-message \
--cli-input-json file://batch-put-message.json
Contents of ``batch-put-message.json``::
{
"channelName": "mychannel",
"messages": [
{
"messageId": "0001",
"payload": "eyAidGVtcGVyYXR1cmUiOiAyMCB9"
}
]
}
Output::
{
"batchPutMessageErrorEntries": []
}
For more information, see `BatchPutMessage `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/describe-datastore.rst 0000644 0000000 0000000 00000001750 13620325556 025220 0 ustar root root 0000000 0000000 **To retrieve information about a data store**
The following ``describe-datastore`` example displays details, including statistics, for the specified data store. ::
aws iotanalytics describe-datastore \
--datastore-name mydatastore \
--include-statistics
Output::
{
"datastore": {
"status": "ACTIVE",
"name": "mydatastore",
"lastUpdateTime": 1557858971.02,
"creationTime": 1557858971.02,
"retentionPeriod": {
"unlimited": true
},
"arn": "arn:aws:iotanalytics:us-west-2:123456789012:datastore/mydatastore"
},
"statistics": {
"size": {
"estimatedSizeInBytes": 397.0,
"estimatedOn": 1561592040.0
}
}
}
For more information, see `DescribeDatastore `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/describe-logging-options.rst 0000644 0000000 0000000 00000001172 13620325556 026347 0 ustar root root 0000000 0000000 **To retrieve the current logging options**
The following ``describe-logging-options`` example displays the current AWS IoT Analytics logging options. ::
aws iotanalytics describe-logging-options
This command produces no output.
Output::
{
"loggingOptions": {
"roleArn": "arn:aws:iam::123456789012:role/service-role/myIoTAnalyticsRole",
"enabled": true,
"level": "ERROR"
}
}
For more information, see `DescribeLoggingOptions `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/untag-resource.rst 0000644 0000000 0000000 00000001022 13620325556 024407 0 ustar root root 0000000 0000000 **To remove tags from a resource**
The following ``untag-resource`` example removes the tags with the specified key names from the specified resource. ::
aws iotanalytics untag-resource \
--resource-arn "arn:aws:iotanalytics:us-west-2:123456789012:channel/mychannel" \
--tag-keys "[\"Environment\"]"
This command produces no output.
For more information, see `UntagResource `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/delete-pipeline.rst 0000644 0000000 0000000 00000000603 13620325556 024515 0 ustar root root 0000000 0000000 **To delete a pipeline**
The following ``delete-pipeline`` example deletes the specified pipeline. ::
aws iotanalytics delete-pipeline \
--pipeline-name mypipeline
This command produces no output.
For more information, see `DeletePipeline `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/list-dataset-contents.rst 0000644 0000000 0000000 00000001410 13620325556 025676 0 ustar root root 0000000 0000000 **To list information about dataset contents**
The following ``list-dataset-contents`` example lists information about dataset contents that have been created. ::
aws iotanalytics list-dataset-contents \
--dataset-name mydataset
Output::
{
"datasetContentSummaries": [
{
"status": {
"state": "SUCCEEDED"
},
"scheduleTime": 1557863215.995,
"version": "b10ea2a9-66c1-4d99-8d1f-518113b738d0",
"creationTime": 1557863215.995
}
]
}
For more information, see `ListDatasetContents `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/list-tags-for-resource.rst 0000644 0000000 0000000 00000001141 13620325556 025766 0 ustar root root 0000000 0000000 **To list tags for a resource**
The following ``list-tags-for-resource`` example Lists the tags that you have attached to the specified resource. ::
aws iotanalytics list-tags-for-resource \
--resource-arn "arn:aws:iotanalytics:us-west-2:123456789012:channel/mychannel"
Output::
{
"tags": [
{
"value": "bar",
"key": "foo"
}
]
}
For more information, see `ListTagsForResource `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/create-dataset-content.rst 0000644 0000000 0000000 00000001126 13620325556 026007 0 ustar root root 0000000 0000000 **To create the content of a dataset**
The following ``create-dataset-content`` example creates the content of the specified dataset by applying a ``queryAction`` (an SQL query) or a ``containerAction`` (executing a containerized application). ::
aws iotanalytics create-dataset-content \
--dataset-name mydataset
Output::
{
"versionId": "d494b416-9850-4670-b885-ca22f1e89d62"
}
For more information, see `CreateDatasetContent `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/create-pipeline.rst 0000644 0000000 0000000 00000003262 13620325556 024522 0 ustar root root 0000000 0000000 **Create an IoT Analytics Pipeline**
The following ``create-pipeline`` example creates a pipeline. A pipeline consumes messages from a channel and allows you to process the messages before storing them in a data store. You must specify both a channel and a data store activity and, optionally, as many as 23 additional activities in the ``pipelineActivities`` array. ::
aws iotanalytics create-pipeline \
--cli-input-json file://create-pipeline.json
Contents of ``create-pipeline.json``::
{
"pipelineName": "mypipeline",
"pipelineActivities": [
{
"channel": {
"name": "myChannelActivity",
"channelName": "mychannel",
"next": "myMathActivity"
}
},
{
"datastore": {
"name": "myDatastoreActivity",
"datastoreName": "mydatastore"
}
},
{
"math": {
"name": "myMathActivity",
"math": "((temp - 32) * 5.0) / 9.0",
"attribute": "tempC",
"next": "myDatastoreActivity"
}
}
],
"tags": [
{
"key": "Environment",
"value": "Beta"
}
]
}
Output::
{
"pipelineArn": "arn:aws:iotanalytics:us-west-2:123456789012:pipeline/mypipeline",
"pipelineName": "mypipeline"
}
For more information, see `CreatePipeline `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/start-pipeline-reprocessing.rst 0000644 0000000 0000000 00000001035 13620325556 027111 0 ustar root root 0000000 0000000 **To start pipeline reprocessing**
The following ``start-pipeline-reprocessing`` example starts the reprocessing of raw message data through the specified pipeline. ::
aws iotanalytics start-pipeline-reprocessing \
--pipeline-name mypipeline
Output::
{
"reprocessingId": "6ad2764f-fb13-4de3-b101-4e74af03b043"
}
For more information, see `StartPipelineReprocessing `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/describe-dataset.rst 0000644 0000000 0000000 00000002175 13620325556 024661 0 ustar root root 0000000 0000000 **To retrieve information about a dataset**
The following ``describe-dataset`` example displays details for the specified dataset. ::
aws iotanalytics describe-dataset \
--dataset-name mydataset
Output::
{
"dataset": {
"status": "ACTIVE",
"contentDeliveryRules": [],
"name": "mydataset",
"lastUpdateTime": 1557859240.658,
"triggers": [],
"creationTime": 1557859240.658,
"actions": [
{
"actionName": "query_32",
"queryAction": {
"sqlQuery": "SELECT * FROM mydatastore",
"filters": []
}
}
],
"retentionPeriod": {
"numberOfDays": 90,
"unlimited": false
},
"arn": "arn:aws:iotanalytics:us-west-2:123456789012:dataset/mydataset"
}
}
For more information, see `DescribeDataset `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/list-datastores.rst 0000644 0000000 0000000 00000001167 13620325556 024600 0 ustar root root 0000000 0000000 **To retrieve a list of data stores**
The following ``list-datastores`` example displays summary information about the available data stores. ::
aws iotanalytics list-datastores
Output::
{
"datastoreSummaries": [
{
"status": "ACTIVE",
"datastoreName": "mydatastore",
"creationTime": 1557858971.02,
"lastUpdateTime": 1557858971.02
}
]
}
For more information, see `ListDatastores `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/create-dataset.rst 0000644 0000000 0000000 00000002612 13620325556 024340 0 ustar root root 0000000 0000000 **To create a dataset**
The following ``create-dataset`` example creates a dataset. A dataset stores data retrieved from a data store by applying a ``queryAction`` (a SQL query) or a ``containerAction`` (executing a containerized application). This operation creates the skeleton of a dataset. You can populate the dataset manually by calling ``CreateDatasetContent`` or automatically according to a ``trigger`` you specify. ::
aws iotanalytics create-dataset \
--cli-input-json file://create-dataset.json
Contents of ``create-dataset.json``::
{
"datasetName": "mydataset",
"actions": [
{
"actionName": "myDatasetAction",
"queryAction": {
"sqlQuery": "SELECT * FROM mydatastore"
}
}
],
"retentionPeriod": {
"unlimited": true
},
"tags": [
{
"key": "Environment",
"value": "Production"
}
]
}
Output::
{
"datasetName": "mydataset",
"retentionPeriod": {
"unlimited": true
},
"datasetArn": "arn:aws:iotanalytics:us-west-2:123456789012:dataset/mydataset"
}
For more information, see `CreateDataset `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/update-datastore.rst 0000644 0000000 0000000 00000001117 13620325556 024717 0 ustar root root 0000000 0000000 **To update a data store**
The following ``update-datastore`` example modifies the settings of the specified data store. ::
aws iotanalytics update-datastore \
--cli-input-json file://update-datastore.json
Contents of update-datastore.json::
{
"datastoreName": "mydatastore",
"retentionPeriod": {
"numberOfDays": 93
}
}
This command produces no output.
For more information, see `UpdateDatastore `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/delete-dataset-content.rst 0000644 0000000 0000000 00000000656 13620325556 026015 0 ustar root root 0000000 0000000 **To delete dataset content**
The following ``delete-dataset-content`` example deletes the content of the specified dataset. ::
aws iotanalytics delete-dataset-content \
--dataset-name mydataset
This command produces no output.
For more information, see `DeleteDatasetContent `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/describe-pipeline.rst 0000644 0000000 0000000 00000002556 13620325556 025044 0 ustar root root 0000000 0000000 **To retrieve information about a pipeline**
The following ``describe-pipeline`` example displays details for the specified pipeline. ::
aws iotanalytics describe-pipeline \
--pipeline-name mypipeline
Output::
{
"pipeline": {
"activities": [
{
"channel": {
"channelName": "mychannel",
"name": "mychannel_28",
"next": "mydatastore_29"
}
},
{
"datastore": {
"datastoreName": "mydatastore",
"name": "mydatastore_29"
}
}
],
"name": "mypipeline",
"lastUpdateTime": 1561676362.515,
"creationTime": 1557859124.432,
"reprocessingSummaries": [
{
"status": "SUCCEEDED",
"creationTime": 1561676362.189,
"id": "6ad2764f-fb13-4de3-b101-4e74af03b043"
}
],
"arn": "arn:aws:iotanalytics:us-west-2:123456789012:pipeline/mypipeline"
}
}
For more information, see `DescribePipeline `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/list-channels.rst 0000644 0000000 0000000 00000001143 13620325556 024214 0 ustar root root 0000000 0000000 **To retrieve a list of channels**
The following ``list-channels`` example displays summary information for the available channels. ::
aws iotanalytics list-channels
Output::
{
"channelSummaries": [
{
"status": "ACTIVE",
"channelName": "mychannel",
"creationTime": 1557860351.001,
"lastUpdateTime": 1557860351.001
}
]
}
For more information, see `ListChannels `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/tag-resource.rst 0000644 0000000 0000000 00000001047 13620325556 024053 0 ustar root root 0000000 0000000 **To add or modify tags for a resource**
The following ``tag-resource`` example adds to or modifies the tags attached to the specified resource. ::
aws iotanalytics tag-resource \
--resource-arn "arn:aws:iotanalytics:us-west-2:123456789012:channel/mychannel" \
--tags "[{\"key\": \"Environment\", \"value\": \"Production\"}]"
This command produces no output.
For more information, see `TagResource `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/list-pipelines.rst 0000644 0000000 0000000 00000001143 13620325556 024411 0 ustar root root 0000000 0000000 **To retrieve a list of pipelines**
The following ``list-pipelines`` example displays a list of available pipelines. ::
aws iotanalytics list-pipelines
Output::
{
"pipelineSummaries": [
{
"pipelineName": "mypipeline",
"creationTime": 1557859124.432,
"lastUpdateTime": 1557859124.432,
"reprocessingSummaries": []
}
]
}
For more information, see `ListPipelines `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/put-logging-options.rst 0000644 0000000 0000000 00000001765 13620325556 025407 0 ustar root root 0000000 0000000 **To set or update logging options**
The following ``put-logging-options`` example sets or updates the AWS IoT Analytics logging options. If you update the value of any ``loggingOptions`` field, it can take up to one minute for the change to take effect. Also, if you change the policy attached to the role you specified in the "roleArn" field (for example, to correct an invalid policy) it can take up to five minutes for that change to take effect. ::
aws iotanalytics put-logging-options \
--cli-input-json file://put-logging-options.json
Contents of ``put-logging-options.json``::
{
"loggingOptions": {
"roleArn": "arn:aws:iam::123456789012:role/service-role/myIoTAnalyticsRole",
"level": "ERROR",
"enabled": true
}
}
This command produces no output.
For more information, see `PutLoggingOptions `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/iotanalytics/update-dataset.rst 0000644 0000000 0000000 00000001440 13620325556 024355 0 ustar root root 0000000 0000000 **To update a dataset**
The following ``update-dataset`` example modifies the settings of the specified dataset. ::
aws iotanalytics update-dataset \
--cli-input-json file://update-dataset.json
Contents of ``update-dataset.json``::
{
"datasetName": "mydataset",
"actions": [
{
"actionName": "myDatasetUpdateAction",
"queryAction": {
"sqlQuery": "SELECT * FROM mydatastore"
}
}
],
"retentionPeriod": {
"numberOfDays": 92
}
}
This command produces no output.
For more information, see `UpdateDataset `__ in the *AWS IoT Analytics API Reference*.
awscli-1.17.14/awscli/examples/pinpoint/ 0000755 0000000 0000000 00000000000 13620325757 020057 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/pinpoint/get-apps.rst 0000755 0000000 0000000 00000003404 13620325556 022332 0 ustar root root 0000000 0000000 **To retrieve information about all of your applications**
The following ``get-apps`` example retrieves information about all of your applications (projects). ::
aws pinpoint get-apps
Output::
{
"ApplicationsResponse": {
"Item": [
{
"Arn": "arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example",
"Id": "810c7aab86d42fb2b56c8c966example",
"Name": "ExampleCorp",
"tags": {
"Year": "2019",
"Stack": "Production"
}
},
{
"Arn": "arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/42d8c7eb0990a57ba1d5476a3example",
"Id": "42d8c7eb0990a57ba1d5476a3example",
"Name": "AnyCompany",
"tags": {}
},
{
"Arn": "arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/80f5c382b638ffe5ad12376bbexample",
"Id": "80f5c382b638ffe5ad12376bbexample",
"Name": "ExampleCorp_Test",
"tags": {
"Year": "2019",
"Stack": "Test"
}
}
],
"NextToken": "eyJDcmVhdGlvbkRhdGUiOiIyMDE5LTA3LTE2VDE0OjM4OjUzLjkwM1oiLCJBY2NvdW50SWQiOiI1MTIzOTcxODM4NzciLCJBcHBJZCI6Ijk1ZTM2MGRiMzBkMjQ1ZjRiYTYwYjhlMzllMzZlNjZhIn0"
}
}
The presence of the ``NextToken`` response value indicates that there is more output available. Call the command again and supply that value as the ``NextToken`` input parameter.
awscli-1.17.14/awscli/examples/pinpoint/create-app.rst 0000755 0000000 0000000 00000002226 13620325556 022634 0 ustar root root 0000000 0000000 **Example 1: To create an application**
The following ``create-app`` example creates a new application (project). ::
aws pinpoint create-app \
--create-application-request Name=ExampleCorp
Output::
{
"ApplicationResponse": {
"Arn": "arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example",
"Id": "810c7aab86d42fb2b56c8c966example",
"Name": "ExampleCorp",
"tags": {}
}
}
**Example 2: To create an application that is tagged**
The following ``create-app`` example creates a new application (project) and associates a tag (key and value) with the application. ::
aws pinpoint create-app \
--create-application-request Name=ExampleCorp,tags={"Stack"="Test"}
Output::
{
"ApplicationResponse": {
"Arn": "arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example",
"Id": "810c7aab86d42fb2b56c8c966example",
"Name": "ExampleCorp",
"tags": {
"Stack": "Test"
}
}
}
awscli-1.17.14/awscli/examples/pinpoint/untag-resource.rst 0000755 0000000 0000000 00000001706 13620325556 023560 0 ustar root root 0000000 0000000 **Example 1: To remove a tag from a resource**
The following ``untag-resource`` example removes the specified tag (key name and value) from a resource. ::
aws pinpoint untag-resource \
--resource-arn arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example \
--tag-keys Year
This command produces no output.
**Example 2: To remove multiple tags from a resource**
The following ``untag-resource`` example removes the specified tags (key names and values) from a resource. ::
aws pinpoint untag-resource \
--resource-arn arn:aws:mobiletargeting:us-east-1:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example \
--tag-keys Year Stack
This command produces no output.
For more information, see 'Tagging Amazon Pinpoint Resources '__ in the *Amazon Pinpoint Developer Guide*.
awscli-1.17.14/awscli/examples/pinpoint/list-tags-for-resource.rst 0000755 0000000 0000000 00000001324 13620325556 025131 0 ustar root root 0000000 0000000 **To retrieve a list of tags for a resource**
The following ``list-tags-for-resource`` example retrieves all the tags (key names and values) that are associated with the specified resource. ::
aws pinpoint list-tags-for-resource \
--resource-arn arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example
Output::
{
"TagsModel": {
"tags": {
"Year": "2019",
"Stack": "Production"
}
}
}
For more information, see 'Tagging Amazon Pinpoint Resources '__ in the *Amazon Pinpoint Developer Guide*. awscli-1.17.14/awscli/examples/pinpoint/delete-app.rst 0000755 0000000 0000000 00000000771 13620325556 022636 0 ustar root root 0000000 0000000 **To delete an application**
The following ``delete-app`` example deletes an application (project). ::
aws pinpoint delete-app \
--application-id 810c7aab86d42fb2b56c8c966example
Output::
{
"ApplicationResponse": {
"Arn": "arn:aws:mobiletargeting:us-west-2:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example",
"Id": "810c7aab86d42fb2b56c8c966example",
"Name": "ExampleCorp",
"tags": {}
}
}
awscli-1.17.14/awscli/examples/pinpoint/tag-resource.rst 0000755 0000000 0000000 00000001057 13620325556 023214 0 ustar root root 0000000 0000000 **To add tags to a resource**
The following example adds two tags (key names and values) to a resource. ::
aws pinpoint list-tags-for-resource \
--resource-arn arn:aws:mobiletargeting:us-east-1:AIDACKCEVSQ6C2EXAMPLE:apps/810c7aab86d42fb2b56c8c966example \
--tags-model tags={Stack=Production,Year=2019}
This command produces no output.
For more information, see 'Tagging Amazon Pinpoint Resources '__ in the *Amazon Pinpoint Developer Guide*. awscli-1.17.14/awscli/examples/opsworks/ 0000755 0000000 0000000 00000000000 13620325757 020106 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/opsworks/set-load-based-auto-scaling.rst 0000644 0000000 0000000 00000002506 13620325556 026010 0 ustar root root 0000000 0000000 **To set the load-based scaling configuration for a layer**
The following example enables load-based scaling for a specified layer and sets the configuration
for that layer.
You must use ``create-instance`` to add load-based instances to the layer. ::
aws opsworks --region us-east-1 set-load-based-auto-scaling --layer-id 523569ae-2faf-47ac-b39e-f4c4b381f36d --enable --up-scaling file://upscale.json --down-scaling file://downscale.json
The example puts the upscaling threshold settings in a separate file in the working directory named ``upscale.json``, which contains the following. ::
{
"InstanceCount": 2,
"ThresholdsWaitTime": 3,
"IgnoreMetricsTime": 3,
"CpuThreshold": 85,
"MemoryThreshold": 85,
"LoadThreshold": 85
}
The example puts the downscaling threshold settings in a separate file in the working directory named ``downscale.json``, which contains the following. ::
{
"InstanceCount": 2,
"ThresholdsWaitTime": 3,
"IgnoreMetricsTime": 3,
"CpuThreshold": 35,
"MemoryThreshold": 30,
"LoadThreshold": 30
}
*Output*: None.
**More Information**
For more information, see `Using Automatic Load-based Scaling`_ in the *AWS OpsWorks User Guide*.
.. _`Using Automatic Load-based Scaling`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-loadbased.html
awscli-1.17.14/awscli/examples/opsworks/unassign-volume.rst 0000644 0000000 0000000 00000001317 13620325556 023773 0 ustar root root 0000000 0000000 **To unassign a volume from its instance**
The following example unassigns a registered Amazon Elastic Block Store (Amazon EBS) volume from its instance.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigns when
you register the volume with a stack, not the Amazon Elastic Compute Cloud (Amazon EC2) volume ID. ::
aws opsworks --region us-east-1 unassign-volume --volume-id 8430177d-52b7-4948-9c62-e195af4703df
*Output*: None.
**More Information**
For more information, see `Unassigning Amazon EBS Volumes`_ in the *AWS OpsWorks User Guide*.
.. _`Unassigning Amazon EBS Volumes`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-detach.html#resources-detach-ebs
awscli-1.17.14/awscli/examples/opsworks/describe-timebased-auto-scaling.rst 0000644 0000000 0000000 00000002314 13620325556 026734 0 ustar root root 0000000 0000000 **To describe the time-based scaling configuration of an instance**
The following example describes a specified instance's time-based scaling configuration.
The instance is identified by its instance ID, which you can find on the instances's
details page or by running ``describe-instances``. ::
aws opsworks describe-time-based-auto-scaling --region us-east-1 --instance-ids 701f2ffe-5d8e-4187-b140-77b75f55de8d
*Output*: The example has a single time-based instance. ::
{
"TimeBasedAutoScalingConfigurations": [
{
"InstanceId": "701f2ffe-5d8e-4187-b140-77b75f55de8d",
"AutoScalingSchedule": {
"Monday": {
"11": "on",
"10": "on",
"13": "on",
"12": "on"
},
"Tuesday": {
"11": "on",
"10": "on",
"13": "on",
"12": "on"
}
}
}
]
}
**More Information**
For more information, see `How Automatic Time-based Scaling Works`_ in the *AWS OpsWorks User Guide*.
.. _`How Automatic Time-based Scaling Works`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html#workinginstances-autoscaling-timebased
awscli-1.17.14/awscli/examples/opsworks/register.rst 0000644 0000000 0000000 00000015552 13620325556 022471 0 ustar root root 0000000 0000000 **To register instances with a stack**
The following examples show a variety of ways to register instances with a stack that were created outside of AWS Opsworks.
You can run ``register`` from the instance to be registered, or from a separate workstation.
For more information, see `Registering Amazon EC2 and On-premises Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon EC2 and On-premises Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-register-registering.html
**Note**: For brevity, the examples omit the ``region`` argument.
*To register an Amazon EC2 instance*
To indicate that you are registering an EC2 instance, set the ``--infrastructure-class`` argument
to ``ec2``.
The following example registers an EC2 instance with the specified stack from a separate workstation.
The instance is identified by its EC2 ID, ``i-12345678``. The example uses the workstation's default SSH username and attempts
to log in to the instance using authentication techniques that do not require a password,
such as a default private SSH key. If that fails, ``register`` queries for the password. ::
aws opsworks register --infrastructure-class=ec2 --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb i-12345678
The following example registers an EC2 instance with the specifed stack from a separate workstation.
It uses the ``--ssh-username`` and ``--ssh-private-key`` arguments to explicitly
specify the SSH username and private key file that the command uses to log into the instance.
``ec2-user`` is the standard username for Amazon Linux instances. Use ``ubuntu`` for Ubuntu instances. ::
aws opsworks register --infrastructure-class=ec2 --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --ssh-username ec2-user --ssh-private-key ssh_private_key i-12345678
The following example registers the EC2 instance that is running the ``register`` command.
Log in to the instance with SSH and run ``register`` with the ``--local`` argument instead of an instance ID or hostname. ::
aws opsworks register --infrastructure-class ec2 --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --local
*To register an on-premises instance*
To indicate that you are registering an on-premises instance, set the ``--infrastructure-class`` argument
to ``on-premises``.
The following example registers an existing on-premises instance with a specified stack from a separate workstation.
The instance is identified by its IP address, ``192.0.2.3``. The example uses the workstation's default SSH username and attempts
to log in to the instance using authentication techniques that do not require a password,
such as a default private SSH key. If that fails, ``register`` queries for the password. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb 192.0.2.3
The following example registers an on-premises instance with a specified stack from a separate workstation.
The instance is identified by its hostname, ``host1``. The ``--override-...`` arguments direct AWS OpsWorks
to display ``webserver1`` as the host name and ``192.0.2.3`` and ``10.0.0.2`` as the instance's public and
private IP addresses, respectively. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --override-hostname webserver1 --override-public-ip 192.0.2.3 --override-private-ip 10.0.0.2 host1
The following example registers an on-premises instance with a specified stack from a separate workstation.
The instance is identified by its IP address. ``register`` logs into the instance using the specified SSH username and private key file. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --ssh-username admin --ssh-private-key ssh_private_key 192.0.2.3
The following example registers an existing on-premises instance with a specified stack from a separate workstation.
The command logs into the instance using a custom SSH command string that specifies
the SSH password and the instance's IP address. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --override-ssh "sshpass -p 'mypassword' ssh your-user@192.0.2.3"
The following example registers the on-premises instance that is running the ``register`` command.
Log in to the instance with SSH and run ``register`` with the ``--local`` argument instead of an instance ID or hostname. ::
aws opsworks register --infrastructure-class on-premises --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --local
*Output*: The following is typical output for registering an EC2 instance.
::
Warning: Permanently added '52.11.41.206' (ECDSA) to the list of known hosts.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6403k 100 6403k 0 0 2121k 0 0:00:03 0:00:03 --:--:-- 2121k
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Initializing AWS OpsWorks environment
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Running on Ubuntu
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Checking if OS is supported
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Running on supported OS
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Setup motd
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Executing: ln -sf --backup /etc/motd.opsworks-static /etc/motd
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Enabling multiverse repositories
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Customizing APT environment
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Installing system packages
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Executing: dpkg --configure -a
[Tue, 24 Feb 2015 20:48:37 +0000] opsworks-init: Executing with retry: apt-get update
[Tue, 24 Feb 2015 20:49:13 +0000] opsworks-init: Executing: apt-get install -y ruby ruby-dev libicu-dev libssl-dev libxslt-dev libxml2-dev libyaml-dev monit
[Tue, 24 Feb 2015 20:50:13 +0000] opsworks-init: Using assets bucket from environment: 'opsworks-instance-assets-us-east-1.s3.amazonaws.com'.
[Tue, 24 Feb 2015 20:50:13 +0000] opsworks-init: Installing Ruby for the agent
[Tue, 24 Feb 2015 20:50:13 +0000] opsworks-init: Executing: /tmp/opsworks-agent-installer.YgGq8wF3UUre6yDy/opsworks-agent-installer/opsworks-agent/bin/installer_wrapper.sh -r -R opsworks-instance-assets-us-east-1.s3.amazonaws.com
[Tue, 24 Feb 2015 20:50:44 +0000] opsworks-init: Starting the installer
Instance successfully registered. Instance ID: 4d6d1710-ded9-42a1-b08e-b043ad7af1e2
Connection to 52.11.41.206 closed.
**More Information**
For more information, see `Registering an Instance with an AWS OpsWorks Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering an Instance with an AWS OpsWorks Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-register.html
awscli-1.17.14/awscli/examples/opsworks/stop-stack.rst 0000644 0000000 0000000 00000001007 13620325556 022723 0 ustar root root 0000000 0000000 **To stop a stack's instances**
The following example stops all of a stack's 24/7 instances.
To stop a particular instance, use ``stop-instance``. ::
aws opsworks --region us-east-1 stop-stack --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
*Output*: No output.
**More Information**
For more information, see `Stopping an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Stopping an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-stop
awscli-1.17.14/awscli/examples/opsworks/describe-stacks.rst 0000644 0000000 0000000 00000004337 13620325556 023712 0 ustar root root 0000000 0000000 **To describe stacks**
The following ``describe-stacks`` command describes an account's stacks. ::
aws opsworks --region us-east-1 describe-stacks
*Output*::
{
"Stacks": [
{
"ServiceRoleArn": "arn:aws:iam::444455556666:role/aws-opsworks-service-role",
"StackId": "aeb7523e-7c8b-49d4-b866-03aae9d4fbcb",
"DefaultRootDeviceType": "instance-store",
"Name": "TomStack-sd",
"ConfigurationManager": {
"Version": "11.4",
"Name": "Chef"
},
"UseCustomCookbooks": true,
"CustomJson": "{\n \"tomcat\": {\n \"base_version\": 7,\n \"java_opts\": \"-Djava.awt.headless=true -Xmx256m\"\n },\n \"datasources\": {\n \"ROOT\": \"jdbc/mydb\"\n }\n}",
"Region": "us-east-1",
"DefaultInstanceProfileArn": "arn:aws:iam::444455556666:instance-profile/aws-opsworks-ec2-role",
"CustomCookbooksSource": {
"Url": "git://github.com/example-repo/tomcustom.git",
"Type": "git"
},
"DefaultAvailabilityZone": "us-east-1a",
"HostnameTheme": "Layer_Dependent",
"Attributes": {
"Color": "rgb(45, 114, 184)"
},
"DefaultOs": "Amazon Linux",
"CreatedAt": "2013-08-01T22:53:42+00:00"
},
{
"ServiceRoleArn": "arn:aws:iam::444455556666:role/aws-opsworks-service-role",
"StackId": "40738975-da59-4c5b-9789-3e422f2cf099",
"DefaultRootDeviceType": "instance-store",
"Name": "MyStack",
"ConfigurationManager": {
"Version": "11.4",
"Name": "Chef"
},
"UseCustomCookbooks": false,
"Region": "us-east-1",
"DefaultInstanceProfileArn": "arn:aws:iam::444455556666:instance-profile/aws-opsworks-ec2-role",
"CustomCookbooksSource": {},
"DefaultAvailabilityZone": "us-east-1a",
"HostnameTheme": "Layer_Dependent",
"Attributes": {
"Color": "rgb(45, 114, 184)"
},
"DefaultOs": "Amazon Linux",
"CreatedAt": "2013-10-25T19:24:30+00:00"
}
]
}
**More Information**
For more information, see `Stacks`_ in the *AWS OpsWorks User Guide*.
.. _`Stacks`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks.html
awscli-1.17.14/awscli/examples/opsworks/describe-stack-summary.rst 0000644 0000000 0000000 00000001402 13620325556 025210 0 ustar root root 0000000 0000000 **To describe a stack's configuration**
The following ``describe-stack-summary`` command returns a summary of the specified stack's configuration. ::
aws opsworks --region us-east-1 describe-stack-summary --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
*Output*::
{
"StackSummary": {
"StackId": "8c428b08-a1a1-46ce-a5f8-feddc43771b8",
"InstancesCount": {
"Booting": 1
},
"Name": "CLITest",
"AppsCount": 1,
"LayersCount": 1,
"Arn": "arn:aws:opsworks:us-west-2:123456789012:stack/8c428b08-a1a1-46ce-a5f8-feddc43771b8/"
}
}
**More Information**
For more information, see `Stacks`_ in the *AWS OpsWorks User Guide*.
.. _`Stacks`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks.html
awscli-1.17.14/awscli/examples/opsworks/update-volume.rst 0000644 0000000 0000000 00000001366 13620325556 023432 0 ustar root root 0000000 0000000 **To update a registered volume**
The following example updates a registered Amazon Elastic Block Store (Amazon EBS) volume's mount point.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigns to the volume when
you register it with a stack, not the Amazon Elastic Compute Cloud (Amazon EC2) volume ID.::
aws opsworks --region us-east-1 update-volume --volume-id 8430177d-52b7-4948-9c62-e195af4703df --mount-point /mnt/myvol
*Output*: None.
**More Information**
For more information, see `Assigning Amazon EBS Volumes to an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Assigning Amazon EBS Volumes to an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-attach.html#resources-attach-ebs
awscli-1.17.14/awscli/examples/opsworks/describe-layers.rst 0000644 0000000 0000000 00000013215 13620325556 023714 0 ustar root root 0000000 0000000 **To describe a stack's layers**
The following ``describe-layers`` commmand describes the layers in a specified stack::
aws opsworks --region us-east-1 describe-layers --stack-id 38ee91e2-abdc-4208-a107-0b7168b3cc7a
*Output*::
{
"Layers": [
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Type": "db-master",
"DefaultSecurityGroupNames": [
"AWS-OpsWorks-DB-Master-Server"
],
"Name": "MySQL",
"Packages": [],
"DefaultRecipes": {
"Undeploy": [],
"Setup": [
"opsworks_initial_setup",
"ssh_host_keys",
"ssh_users",
"mysql::client",
"dependencies",
"ebs",
"opsworks_ganglia::client",
"mysql::server",
"dependencies",
"deploy::mysql"
],
"Configure": [
"opsworks_ganglia::configure-client",
"ssh_users",
"agent_version",
"deploy::mysql"
],
"Shutdown": [
"opsworks_shutdown::default",
"mysql::stop"
],
"Deploy": [
"deploy::default",
"deploy::mysql"
]
},
"CustomRecipes": {
"Undeploy": [],
"Setup": [],
"Configure": [],
"Shutdown": [],
"Deploy": []
},
"EnableAutoHealing": false,
"LayerId": "41a20847-d594-4325-8447-171821916b73",
"Attributes": {
"MysqlRootPasswordUbiquitous": "true",
"RubygemsVersion": null,
"RailsStack": null,
"HaproxyHealthCheckMethod": null,
"RubyVersion": null,
"BundlerVersion": null,
"HaproxyStatsPassword": null,
"PassengerVersion": null,
"MemcachedMemory": null,
"EnableHaproxyStats": null,
"ManageBundler": null,
"NodejsVersion": null,
"HaproxyHealthCheckUrl": null,
"MysqlRootPassword": "*****FILTERED*****",
"GangliaPassword": null,
"GangliaUser": null,
"HaproxyStatsUrl": null,
"GangliaUrl": null,
"HaproxyStatsUser": null
},
"Shortname": "db-master",
"AutoAssignElasticIps": false,
"CustomSecurityGroupIds": [],
"CreatedAt": "2013-07-25T18:11:19+00:00",
"VolumeConfigurations": [
{
"MountPoint": "/vol/mysql",
"Size": 10,
"NumberOfDisks": 1
}
]
},
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Type": "custom",
"DefaultSecurityGroupNames": [
"AWS-OpsWorks-Custom-Server"
],
"Name": "TomCustom",
"Packages": [],
"DefaultRecipes": {
"Undeploy": [],
"Setup": [
"opsworks_initial_setup",
"ssh_host_keys",
"ssh_users",
"mysql::client",
"dependencies",
"ebs",
"opsworks_ganglia::client"
],
"Configure": [
"opsworks_ganglia::configure-client",
"ssh_users",
"agent_version"
],
"Shutdown": [
"opsworks_shutdown::default"
],
"Deploy": [
"deploy::default"
]
},
"CustomRecipes": {
"Undeploy": [],
"Setup": [
"tomcat::setup"
],
"Configure": [
"tomcat::configure"
],
"Shutdown": [],
"Deploy": [
"tomcat::deploy"
]
},
"EnableAutoHealing": true,
"LayerId": "e6cbcd29-d223-40fc-8243-2eb213377440",
"Attributes": {
"MysqlRootPasswordUbiquitous": null,
"RubygemsVersion": null,
"RailsStack": null,
"HaproxyHealthCheckMethod": null,
"RubyVersion": null,
"BundlerVersion": null,
"HaproxyStatsPassword": null,
"PassengerVersion": null,
"MemcachedMemory": null,
"EnableHaproxyStats": null,
"ManageBundler": null,
"NodejsVersion": null,
"HaproxyHealthCheckUrl": null,
"MysqlRootPassword": null,
"GangliaPassword": null,
"GangliaUser": null,
"HaproxyStatsUrl": null,
"GangliaUrl": null,
"HaproxyStatsUser": null
},
"Shortname": "tomcustom",
"AutoAssignElasticIps": false,
"CustomSecurityGroupIds": [],
"CreatedAt": "2013-07-25T18:12:53+00:00",
"VolumeConfigurations": []
}
]
}
**More Information**
For more information, see Layers_ in the *AWS OpsWorks User Guide*.
.. _Layers: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers.html
awscli-1.17.14/awscli/examples/opsworks/set-permission.rst 0000644 0000000 0000000 00000002364 13620325556 023623 0 ustar root root 0000000 0000000 **To grant per-stack AWS OpsWorks permission levels**
When you import an AWS Identity and Access Management (IAM) user into AWS OpsWorks by calling ``create-user-profile``, the user has only those
permissions that are granted by the attached IAM policies.
You can grant AWS OpsWorks permissions by modifying a user's policies.
However, it is often easier to import a user and then use the ``set-permission`` command to grant
the user one of the standard permission levels for each stack to which the user will need access.
The following example grants permission for the specified stack for a user, who
is identified by Amazon Resource Name (ARN). The example grants the user a Manage permissions level, with sudo and SSH privileges on the stack's
instances. ::
aws opsworks set-permission --region us-east-1 --stack-id 71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f --level manage --iam-user-arn arn:aws:iam::123456789102:user/cli-user-test --allow-ssh --allow-sudo
*Output*: None.
**More Information**
For more information, see `Granting AWS OpsWorks Users Per-Stack Permissions`_ in the *AWS OpsWorks User Guide*.
.. _`Granting AWS OpsWorks Users Per-Stack Permissions`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-console.html
awscli-1.17.14/awscli/examples/opsworks/describe-instances.rst 0000644 0000000 0000000 00000006514 13620325556 024410 0 ustar root root 0000000 0000000 **To describe instances**
The following ``describe-instances`` commmand describes the instances in a specified stack::
aws opsworks --region us-east-1 describe-instances --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
*Output*: The following output example is for a stack with two instances. The first is a registered
EC2 instance, and the second was created by AWS OpsWorks.
::
{
"Instances": [
{
"StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f",
"PrivateDns": "ip-10-31-39-66.us-west-2.compute.internal",
"LayerIds": [
"26cf1d32-6876-42fa-bbf1-9cadc0bff938"
],
"EbsOptimized": false,
"ReportedOs": {
"Version": "14.04",
"Name": "ubuntu",
"Family": "debian"
},
"Status": "online",
"InstanceId": "4d6d1710-ded9-42a1-b08e-b043ad7af1e2",
"SshKeyName": "US-West-2",
"InfrastructureClass": "ec2",
"RootDeviceVolumeId": "vol-d08ec6c1",
"SubnetId": "subnet-b8de0ddd",
"InstanceType": "t1.micro",
"CreatedAt": "2015-02-24T20:52:49+00:00",
"AmiId": "ami-35501205",
"Hostname": "ip-192-0-2-0",
"Ec2InstanceId": "i-5cd23551",
"PublicDns": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com",
"SecurityGroupIds": [
"sg-c4d3f0a1"
],
"Architecture": "x86_64",
"RootDeviceType": "ebs",
"InstallUpdatesOnBoot": true,
"Os": "Custom",
"VirtualizationType": "paravirtual",
"AvailabilityZone": "us-west-2a",
"PrivateIp": "10.31.39.66",
"PublicIp": "192.0.2.06",
"RegisteredBy": "arn:aws:iam::123456789102:user/AWS/OpsWorks/OpsWorks-EC2Register-i-5cd23551"
},
{
"StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f",
"PrivateDns": "ip-10-31-39-158.us-west-2.compute.internal",
"SshHostRsaKeyFingerprint": "69:6b:7b:8b:72:f3:ed:23:01:00:05:bc:9f:a4:60:c1",
"LayerIds": [
"26cf1d32-6876-42fa-bbf1-9cadc0bff938"
],
"EbsOptimized": false,
"ReportedOs": {},
"Status": "booting",
"InstanceId": "9b137a0d-2f5d-4cc0-9704-13da4b31fdcb",
"SshKeyName": "US-West-2",
"InfrastructureClass": "ec2",
"RootDeviceVolumeId": "vol-e09dd5f1",
"SubnetId": "subnet-b8de0ddd",
"InstanceProfileArn": "arn:aws:iam::123456789102:instance-profile/aws-opsworks-ec2-role",
"InstanceType": "c3.large",
"CreatedAt": "2015-02-24T21:29:33+00:00",
"AmiId": "ami-9fc29baf",
"SshHostDsaKeyFingerprint": "fc:87:95:c3:f5:e1:3b:9f:d2:06:6e:62:9a:35:27:e8",
"Ec2InstanceId": "i-8d2dca80",
"PublicDns": "ec2-192-0-2-1.us-west-2.compute.amazonaws.com",
"SecurityGroupIds": [
"sg-b022add5",
"sg-b122add4"
],
"Architecture": "x86_64",
"RootDeviceType": "ebs",
"InstallUpdatesOnBoot": true,
"Os": "Amazon Linux 2014.09",
"VirtualizationType": "paravirtual",
"AvailabilityZone": "us-west-2a",
"Hostname": "custom11",
"PrivateIp": "10.31.39.158",
"PublicIp": "192.0.2.0"
}
]
}
**More Information**
For more information, see `Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances.html
awscli-1.17.14/awscli/examples/opsworks/deregister-volume.rst 0000644 0000000 0000000 00000001176 13620325556 024304 0 ustar root root 0000000 0000000 **To deregister an Amazon EBS volume**
The following example deregisters an EBS volume from its stack.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigned when
you registered the volume with the stack, not the EC2 volume ID. ::
aws opsworks deregister-volume --region us-east-1 --volume-id 5c48ef52-3144-4bf5-beaa-fda4deb23d4d
*Output*: None.
**More Information**
For more information, see `Deregistering Amazon EBS Volumes`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering Amazon EBS Volumes`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-dereg.html#resources-dereg-ebs
awscli-1.17.14/awscli/examples/opsworks/describe-elastic-ips.rst 0000644 0000000 0000000 00000001140 13620325556 024624 0 ustar root root 0000000 0000000 **To describe Elastic IP instances**
The following ``describe-elastic-ips`` commmand describes the Elastic IP addresses in a specified instance. ::
aws opsworks --region us-east-1 describe-elastic-ips --instance-id b62f3e04-e9eb-436c-a91f-d9e9a396b7b0
*Output*::
{
"ElasticIps": [
{
"Ip": "192.0.2.0",
"Domain": "standard",
"Region": "us-west-2"
}
]
}
**More Information**
For more information, see Instances_ in the *AWS OpsWorks User Guide*.
.. _Instances: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances.html
awscli-1.17.14/awscli/examples/opsworks/describe-raid-arrays.rst 0000644 0000000 0000000 00000002061 13620325556 024630 0 ustar root root 0000000 0000000 **To describe RAID arrays**
The following example describes the RAID arrays attached to the instances in a specified stack. ::
aws opsworks --region us-east-1 describe-raid-arrays --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
*Output*: The following is the output for a stack with one RAID array. ::
{
"RaidArrays": [
{
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"AvailabilityZone": "us-west-2a",
"Name": "Created for php-app1",
"NumberOfDisks": 2,
"InstanceId": "9f14adbc-ced5-43b6-bf01-e7d0db6cf2f7",
"RaidLevel": 0,
"VolumeType": "standard",
"RaidArrayId": "f2d4e470-5972-4676-b1b8-bae41ec3e51c",
"Device": "/dev/md0",
"MountPoint": "/mnt/workspace",
"CreatedAt": "2015-02-26T23:53:09+00:00",
"Size": 100
}
]
}
For more information, see `EBS Volumes`_ in the *AWS OpsWorks User Guide*.
.. _`EBS Volumes`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html#workinglayers-basics-edit-ebs
awscli-1.17.14/awscli/examples/opsworks/describe-permissions.rst 0000644 0000000 0000000 00000001607 13620325556 024772 0 ustar root root 0000000 0000000 **To obtain a user's per-stack AWS OpsWorks permission level**
The following example shows how to to obtain an AWS Identity and Access Management (IAM) user's permission level on a specified stack. ::
aws opsworks --region us-east-1 describe-permissions --iam-user-arn arn:aws:iam::123456789012:user/cli-user-test --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
*Output*::
{
"Permissions": [
{
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"IamUserArn": "arn:aws:iam::123456789012:user/cli-user-test",
"Level": "manage",
"AllowSudo": true,
"AllowSsh": true
}
]
}
**More Information**
For more information, see `Granting Per-Stack Permissions Levels`_ in the *AWS OpsWorks User Guide*.
.. _`Granting Per-Stack Permissions Levels`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-console.html
awscli-1.17.14/awscli/examples/opsworks/create-stack.rst 0000644 0000000 0000000 00000002071 13620325556 023203 0 ustar root root 0000000 0000000 **To create a stack**
The following ``create-stack`` command creates a stack named CLI Stack. ::
aws opsworks create-stack --name "CLI Stack" --stack-region "us-east-1" --service-role-arn arn:aws:iam::123456789012:role/aws-opsworks-service-role --default-instance-profile-arn arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role --region us-east-1
The ``service-role-arn`` and ``default-instance-profile-arn`` parameters are required. You typically
use the ones that AWS OpsWorks
creates for you when you create your first stack. To get the Amazon Resource Names (ARNs) for your
account, go to the `IAM console`_, choose ``Roles`` in the navigation panel,
choose the role or profile, and choose the ``Summary`` tab.
.. _`IAM console`: https://console.aws.amazon.com/iam/home
*Output*::
{
"StackId": "f6673d70-32e6-4425-8999-265dd002fec7"
}
**More Information**
For more information, see `Create a New Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Create a New Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html
awscli-1.17.14/awscli/examples/opsworks/describe-rds-db-instances.rst 0000644 0000000 0000000 00000001765 13620325556 025564 0 ustar root root 0000000 0000000 **To describe a stack's registered Amazon RDS instances**
The following example describes the Amazon RDS instances registered with a specified stack. ::
aws opsworks --region us-east-1 describe-rds-db-instances --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06
*Output*: The following is the output for a stack with one registered RDS instance. ::
{
"RdsDbInstances": [
{
"Engine": "mysql",
"StackId": "d72553d4-8727-448c-9b00-f024f0ba1b06",
"MissingOnRds": false,
"Region": "us-west-2",
"RdsDbInstanceArn": "arn:aws:rds:us-west-2:123456789012:db:clitestdb",
"DbPassword": "*****FILTERED*****",
"Address": "clitestdb.cdlqlk5uwd0k.us-west-2.rds.amazonaws.com",
"DbUser": "cliuser",
"DbInstanceIdentifier": "clitestdb"
}
]
}
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.17.14/awscli/examples/opsworks/assign-volume.rst 0000644 0000000 0000000 00000001611 13620325556 023425 0 ustar root root 0000000 0000000 **To assign a registered volume to an instance**
The following example assigns a registered Amazon Elastic Block Store (Amazon EBS) volume to an instance.
The volume is identified by its volume ID, which is the GUID that AWS OpsWorks assigns when
you register the volume with a stack, not the Amazon Elastic Compute Cloud (Amazon EC2) volume ID.
Before you run ``assign-volume``, you must first run ``update-volume`` to assign a mount point to the volume. ::
aws opsworks --region us-east-1 assign-volume --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2 --volume-id 26cf1d32-6876-42fa-bbf1-9cadc0bff938
*Output*: None.
**More Information**
For more information, see `Assigning Amazon EBS Volumes to an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Assigning Amazon EBS Volumes to an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-attach.html#resources-attach-ebs
awscli-1.17.14/awscli/examples/opsworks/update-layer.rst 0000644 0000000 0000000 00000001007 13620325556 023227 0 ustar root root 0000000 0000000 **To update a layer**
The following example updates a specified layer to use Amazon EBS-optimized instances. ::
aws opsworks --region us-east-1 update-layer --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4 --use-ebs-optimized-instances
*Output*: None.
**More Information**
For more information, see `Editing an OpsWorks Layer's Configuration`_ in the *AWS OpsWorks User Guide*.
.. _`Editing an OpsWorks Layer's Configuration`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html
awscli-1.17.14/awscli/examples/opsworks/delete-layer.rst 0000644 0000000 0000000 00000001310 13620325556 023204 0 ustar root root 0000000 0000000 **To delete a layer**
The following example deletes a specified layer, which is identified by its layer ID.
You can obtain a layer ID by going to the layer's details page on the AWS OpsWorks console or by
running the ``describe-layers`` command.
**Note:** Before deleting a layer, you must use ``delete-instance`` to delete all of the layer's instances. ::
aws opsworks delete-layer --region us-east-1 --layer-id a919454e-b816-4598-b29a-5796afb498ed
*Output*: None.
**More Information**
For more information, see `Deleting AWS OpsWorks Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Deleting AWS OpsWorks Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-delete.html
awscli-1.17.14/awscli/examples/opsworks/register-elastic-ip.rst 0000644 0000000 0000000 00000001271 13620325556 024512 0 ustar root root 0000000 0000000 **To register an Elastic IP address with a stack**
The following example registers an Elastic IP address, identified by its IP address, with a specified stack.
**Note:** The Elastic IP address must be in the same region as the stack. ::
aws opsworks register-elastic-ip --region us-east-1 --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06 --elastic-ip 54.148.130.96
*Output* ::
{
"ElasticIp": "54.148.130.96"
}
**More Information**
For more information, see `Registering Elastic IP Addresses with a Stack`_ in the *OpsWorks User Guide*.
.. _`Registering Elastic IP Addresses with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-eip
awscli-1.17.14/awscli/examples/opsworks/create-deployment.rst 0000644 0000000 0000000 00000005061 13620325556 024260 0 ustar root root 0000000 0000000 **Example 1: To deploy apps and run stack commands**
The following examples show how to use the ``create-deployment`` command to deploy apps and run stack commands. Notice that the quote (``"``) characters in the JSON object that specifies the command are all preceded by escape characters (\\). Without the escape characters, the command might return an invalid JSON error.
The following ``create-deployment`` example deploys an app to a specified stack. ::
aws opsworks create-deployment \
--stack-id cfb7e082-ad1d-4599-8e81-de1c39ab45bf \
--app-id 307be5c8-d55d-47b5-bd6e-7bd417c6c7eb
--command "{\"Name\":\"deploy\"}"
Output::
{
"DeploymentId": "5746c781-df7f-4c87-84a7-65a119880560"
}
**Example 2: To deploy a Rails App and Migrate the Database**
The following ``create-deployment`` command deploys a Ruby on Rails app to a specified stack and migrates the database. ::
aws opsworks create-deployment \
--stack-id cfb7e082-ad1d-4599-8e81-de1c39ab45bf \
--app-id 307be5c8-d55d-47b5-bd6e-7bd417c6c7eb \
--command "{\"Name\":\"deploy\", \"Args\":{\"migrate\":[\"true\"]}}"
Output::
{
"DeploymentId": "5746c781-df7f-4c87-84a7-65a119880560"
}
For more information on deployment, see `Deploying Apps `__ in the *AWS OpsWorks User Guide*.
**Example 3: Run a Recipe**
The following ``create-deployment`` command runs a custom recipe, ``phpapp::appsetup``, on the instances in a specified stack. ::
aws opsworks create-deployment \
--stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb \
--command "{\"Name\":\"execute_recipes\", \"Args\":{\"recipes\":[\"phpapp::appsetup\"]}}"
Output::
{
"DeploymentId": "5cbaa7b9-4e09-4e53-aa1b-314fbd106038"
}
For more information, see `Run Stack Commands `__ in the *AWS OpsWorks User Guide*.
**Example 4: Install Dependencies**
The following ``create-deployment`` command installs dependencies, such as packages or Ruby gems, on the instances in a
specified stack. ::
aws opsworks create-deployment \
--stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb \
--command "{\"Name\":\"install_dependencies\"}"
Output::
{
"DeploymentId": "aef5b255-8604-4928-81b3-9b0187f962ff"
}
For more information, see `Run Stack Commands `__ in the *AWS OpsWorks User Guide*.
awscli-1.17.14/awscli/examples/opsworks/create-app.rst 0000644 0000000 0000000 00000003725 13620325556 022665 0 ustar root root 0000000 0000000 **To create an app**
The following example creates a PHP app named SimplePHPApp from code stored in a GitHub repository.
The command uses the shorthand form of the application source definition. ::
aws opsworks --region us-east-1 create-app --stack-id f6673d70-32e6-4425-8999-265dd002fec7 --name SimplePHPApp --type php --app-source Type=git,Url=git://github.com/amazonwebservices/opsworks-demo-php-simple-app.git,Revision=version1
*Output*::
{
"AppId": "6cf5163c-a951-444f-a8f7-3716be75f2a2"
}
**To create an app with an attached database**
The following example creates a JSP app from code stored in .zip archive in a public S3 bucket.
It attaches an RDS DB instance to serve as the app's data store. The application and database sources are defined in separate
JSON files that are in the directory from which you run the command. ::
aws opsworks --region us-east-1 create-app --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8 --name SimpleJSP --type java --app-source file://appsource.json --data-sources file://datasource.json
The application source information is in ``appsource.json`` and contains the following. ::
{
"Type": "archive",
"Url": "https://s3.amazonaws.com/jsp_example/simplejsp.zip"
}
The database source information is in ``datasource.json`` and contains the following. ::
[
{
"Type": "RdsDbInstance",
"Arn": "arn:aws:rds:us-west-2:123456789012:db:clitestdb",
"DatabaseName": "mydb"
}
]
**Note**: For an RDS DB instance, you must first use ``register-rds-db-instance`` to register the instance with the stack.
For MySQL App Server instances, set ``Type`` to ``OpsworksMysqlInstance``. These instances are
created by AWS OpsWorks,
so they do not have to be registered.
*Output*::
{
"AppId": "26a61ead-d201-47e3-b55c-2a7c666942f8"
}
For more information, see `Adding Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Adding Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html
awscli-1.17.14/awscli/examples/opsworks/describe-commands.rst 0000644 0000000 0000000 00000003704 13620325556 024220 0 ustar root root 0000000 0000000 **To describe commands**
The following ``describe-commands`` commmand describes the commands in a specified instance. ::
aws opsworks --region us-east-1 describe-commands --instance-id 8c2673b9-3fe5-420d-9cfa-78d875ee7687
*Output*::
{
"Commands": [
{
"Status": "successful",
"CompletedAt": "2013-07-25T18:57:47+00:00",
"InstanceId": "8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"DeploymentId": "6ed0df4c-9ef7-4812-8dac-d54a05be1029",
"AcknowledgedAt": "2013-07-25T18:57:41+00:00",
"LogUrl": "https://s3.amazonaws.com/prod_stage-log/logs/008c1a91-ec59-4d51-971d-3adff54b00cc?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE &Expires=1375394373&Signature=HkXil6UuNfxTCC37EPQAa462E1E%3D&response-cache-control=private&response-content-encoding=gzip&response-content- type=text%2Fplain",
"Type": "undeploy",
"CommandId": "008c1a91-ec59-4d51-971d-3adff54b00cc",
"CreatedAt": "2013-07-25T18:57:34+00:00",
"ExitCode": 0
},
{
"Status": "successful",
"CompletedAt": "2013-07-25T18:55:40+00:00",
"InstanceId": "8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"DeploymentId": "19d3121e-d949-4ff2-9f9d-94eac087862a",
"AcknowledgedAt": "2013-07-25T18:55:32+00:00",
"LogUrl": "https://s3.amazonaws.com/prod_stage-log/logs/899d3d64-0384-47b6-a586-33433aad117c?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE &Expires=1375394373&Signature=xMsJvtLuUqWmsr8s%2FAjVru0BtRs%3D&response-cache-control=private&response-content-encoding=gzip&response-conten t-type=text%2Fplain",
"Type": "deploy",
"CommandId": "899d3d64-0384-47b6-a586-33433aad117c",
"CreatedAt": "2013-07-25T18:55:29+00:00",
"ExitCode": 0
}
]
}
**More Information**
For more information, see `AWS OpsWorks Lifecycle Events`_ in the *AWS OpsWorks User Guide*.
.. _`AWS OpsWorks Lifecycle Events`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
awscli-1.17.14/awscli/examples/opsworks/set-time-based-auto-scaling.rst 0000644 0000000 0000000 00000002072 13620325556 026025 0 ustar root root 0000000 0000000 **To set the time-based scaling configuration for a layer**
The following example sets the time-based configuration for a specified instance.
You must first use ``create-instance`` to add the instance to the layer. ::
aws opsworks --region us-east-1 set-time-based-auto-scaling --instance-id 69b6237c-08c0-4edb-a6af-78f3d01cedf2 --auto-scaling-schedule file://schedule.json
The example puts the schedule in a separate file in the working directory named ``schedule.json``.
For this example, the instance is on for a few hours around midday UTC (Coordinated Universal Time) on Monday and Tuesday. ::
{
"Monday": {
"10": "on",
"11": "on",
"12": "on",
"13": "on"
},
"Tuesday": {
"10": "on",
"11": "on",
"12": "on",
"13": "on"
}
}
*Output*: None.
**More Information**
For more information, see `Using Automatic Time-based Scaling`_ in the *AWS OpsWorks User Guide*.
.. _`Using Automatic Time-based Scaling`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-timebased.html
awscli-1.17.14/awscli/examples/opsworks/attach-elastic-load-balancer.rst 0000644 0000000 0000000 00000001013 13620325556 026200 0 ustar root root 0000000 0000000 **To attach a load balancer to a layer**
The following example attaches a load balancer, identified by its name, to a specified layer. ::
aws opsworks --region us-east-1 attach-elastic-load-balancer --elastic-load-balancer-name Java-LB --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4
*Output*: None.
**More Information**
For more information, see `Elastic Load Balancing`_ in the *AWS OpsWorks User Guide*.
.. _`Elastic Load Balancing`: http://docs.aws.amazon.com/opsworks/latest/userguide/load-balancer-elb.html
awscli-1.17.14/awscli/examples/opsworks/update-rds-db-instance.rst 0000644 0000000 0000000 00000001366 13620325556 025100 0 ustar root root 0000000 0000000 **To update a registered Amazon RDS DB instance**
The following example updates an Amazon RDS instance's master password value.
Note that this command does not change the RDS instance's master password, just the password that
you provide to AWS OpsWorks.
If this password does not match the RDS instance's password,
your application will not be able to connect to the database. ::
aws opsworks --region us-east-1 update-rds-db-instance --db-password 123456789
*Output*: None.
**More Information**
For more information, see `Registering Amazon RDS Instances with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon RDS Instances with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-rds
awscli-1.17.14/awscli/examples/opsworks/delete-user-profile.rst 0000644 0000000 0000000 00000001375 13620325556 024517 0 ustar root root 0000000 0000000 **To delete a user profile and remove an IAM user from AWS OpsWorks**
The following example deletes the user profile for a specified AWS Identity and Access Management
(IAM) user, who
is identified by Amazon Resource Name (ARN). The operation removes the user from AWS OpsWorks, but
does not delete the IAM user. You must use the IAM console, CLI, or API for that task. ::
aws opsworks --region us-east-1 delete-user-profile --iam-user-arn arn:aws:iam::123456789102:user/cli-user-test
*Output*: None.
**More Information**
For more information, see `Importing Users into AWS OpsWorks`_ in the *AWS OpsWorks User Guide*.
.. _`Importing Users into AWS OpsWorks`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html
awscli-1.17.14/awscli/examples/opsworks/describe-elastic-load-balancers.rst 0000644 0000000 0000000 00000002176 13620325556 026712 0 ustar root root 0000000 0000000 **To describe a stack's elastic load balancers**
The following ``describe-elastic-load-balancers`` command describes a specified stack's load balancers. ::
aws opsworks --region us-west-2 describe-elastic-load-balancers --stack-id 6f4660e5-37a6-4e42-bfa0-1358ebd9c182
*Output*: This particular stack has one load balancer.
::
{
"ElasticLoadBalancers": [
{
"SubnetIds": [
"subnet-60e4ea04",
"subnet-66e1c110"
],
"Ec2InstanceIds": [],
"ElasticLoadBalancerName": "my-balancer",
"Region": "us-west-2",
"LayerId": "344973cb-bf2b-4cd0-8d93-51cd819bab04",
"AvailabilityZones": [
"us-west-2a",
"us-west-2b"
],
"VpcId": "vpc-b319f9d4",
"StackId": "6f4660e5-37a6-4e42-bfa0-1358ebd9c182",
"DnsName": "my-balancer-2094040179.us-west-2.elb.amazonaws.com"
}
]
}
**More Information**
For more information, see Apps_ in the *AWS OpsWorks User Guide*.
.. _Apps: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps.html
awscli-1.17.14/awscli/examples/opsworks/deregister-instance.rst 0000644 0000000 0000000 00000001023 13620325556 024570 0 ustar root root 0000000 0000000 **To deregister a registered instance from a stack**
The following ``deregister-instance`` command deregisters a registered instance from its stack. ::
aws opsworks --region us-east-1 deregister-instance --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2
*Output*: None.
**More Information**
For more information, see `Deregistering a Registered Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering a Registered Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-unassign.html
awscli-1.17.14/awscli/examples/opsworks/deregister-rds-db-instance.rst 0000644 0000000 0000000 00000001427 13620325556 025751 0 ustar root root 0000000 0000000 **To deregister an Amazon RDS DB instance from a stack**
The following example deregisters an RDS DB instance, identified by its ARN, from its stack. ::
aws opsworks deregister-rds-db-instance --region us-east-1 --rds-db-instance-arn arn:aws:rds:us-west-2:123456789012:db:clitestdb
*Output*: None.
**More Information**
For more information, see `Deregistering Amazon RDS Instances`_ in the *ASW OpsWorks User Guide*.
.. _`Deregistering Amazon RDS Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-dereg.html#resources-dereg-rds
.. instance ID: clitestdb
Master usernams: cliuser
Master PWD: some23!pwd
DB Name: mydb
aws opsworks deregister-rds-db-instance --region us-east-1 --rds-db-instance-arn arn:aws:rds:us-west-2:645732743964:db:clitestdb awscli-1.17.14/awscli/examples/opsworks/create-user-profile.rst 0000644 0000000 0000000 00000002225 13620325556 024513 0 ustar root root 0000000 0000000 **To create a user profile**
You import an AWS Identity and Access Manager (IAM) user into AWS OpsWorks by calling `create-user-profile` to create a user profile.
The following example creates a user profile for the cli-user-test IAM user, who
is identified by Amazon Resource Name (ARN). The example assigns the user an SSH username of ``myusername`` and enables self management,
which allows the user to specify an SSH public key. ::
aws opsworks --region us-east-1 create-user-profile --iam-user-arn arn:aws:iam::123456789102:user/cli-user-test --ssh-username myusername --allow-self-management
*Output*::
{
"IamUserArn": "arn:aws:iam::123456789102:user/cli-user-test"
}
**Tip**: This command imports an IAM user into AWS OpsWorks, but only with the permissions that are
granted by the attached policies. You can grant per-stack AWS OpsWorks permissions by using the ``set-permissions`` command.
**More Information**
For more information, see `Importing Users into AWS OpsWorks`_ in the *AWS OpsWorks User Guide*.
.. _`Importing Users into AWS OpsWorks`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html
awscli-1.17.14/awscli/examples/opsworks/get-hostname-suggestion.rst 0000644 0000000 0000000 00000001267 13620325556 025423 0 ustar root root 0000000 0000000 **To get the next hostname for a layer**
The following example gets the next generated hostname for a specified layer. The layer used for
this example is a Java Application Server layer with one instance. The stack's hostname theme is
the default, Layer_Dependent. ::
aws opsworks --region us-east-1 get-hostname-suggestion --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4
*Output*::
{
"Hostname": "java-app2",
"LayerId": "888c5645-09a5-4d0e-95a8-812ef1db76a4"
}
**More Information**
For more information, see `Create a New Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Create a New Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html
awscli-1.17.14/awscli/examples/opsworks/start-stack.rst 0000644 0000000 0000000 00000001010 13620325556 023065 0 ustar root root 0000000 0000000 **To start a stack's instances**
The following example starts all of a stack's 24/7 instances.
To start a particular instance, use ``start-instance``. ::
aws opsworks --region us-east-1 start-stack --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
*Output*: None.
**More Information**
For more information, see `Starting an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Starting an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-start
awscli-1.17.14/awscli/examples/opsworks/deregister-elastic-ip.rst 0000644 0000000 0000000 00000001010 13620325556 025012 0 ustar root root 0000000 0000000 **To deregister an Elastic IP address from a stack**
The following example deregisters an Elastic IP address, identified by its IP address, from its stack. ::
aws opsworks deregister-elastic-ip --region us-east-1 --elastic-ip 54.148.130.96
*Output*: None.
**More Information**
For more information, see `Deregistering Elastic IP Addresses`_ in the *AWS OpsWorks User Guide*.
.. _`Deregistering Elastic IP Addresses`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-dereg.html#resources-dereg-eip
awscli-1.17.14/awscli/examples/opsworks/stop-instance.rst 0000644 0000000 0000000 00000001372 13620325556 023427 0 ustar root root 0000000 0000000 **To stop an instance**
The following example stops a specified instance, which is identified by its instance ID.
You can obtain an instance ID by going to the instance's details page on the AWS OpsWorks console or by
running the ``describe-instances`` command. ::
aws opsworks stop-instance --region us-east-1 --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
You can restart a stopped instance by calling ``start-instance`` or by deleting the instance by calling
``delete-instance``.
*Output*: None.
**More Information**
For more information, see `Stopping an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Stopping an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-stop
awscli-1.17.14/awscli/examples/opsworks/register-volume.rst 0000644 0000000 0000000 00000001177 13620325556 023774 0 ustar root root 0000000 0000000 **To register an Amazon EBS volume with a stack**
The following example registers an Amazon EBS volume, identified by its volume ID, with a specified stack. ::
aws opsworks register-volume --region us-east-1 --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06 --ec-2-volume-id vol-295c1638
*Output*::
{
"VolumeId": "ee08039c-7cb7-469f-be10-40fb7f0c05e8"
}
**More Information**
For more information, see `Registering Amazon EBS Volumes with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon EBS Volumes with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-ebs
awscli-1.17.14/awscli/examples/opsworks/unassign-instance.rst 0000644 0000000 0000000 00000001014 13620325556 024262 0 ustar root root 0000000 0000000 **To unassign a registered instance from its layers**
The following ``unassign-instance`` command unassigns an instance from its attached layers. ::
aws opsworks --region us-east-1 unassign-instance --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2
**Output**: None.
**More Information**
For more information, see `Unassigning a Registered Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Unassigning a Registered Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-unassign.html
awscli-1.17.14/awscli/examples/opsworks/start-instance.rst 0000644 0000000 0000000 00000001523 13620325556 023575 0 ustar root root 0000000 0000000 **To start an instance**
The following ``start-instance`` command starts a specified 24/7 instance. ::
aws opsworks start-instance --instance-id f705ee48-9000-4890-8bd3-20eb05825aaf
*Output*: None. Use describe-instances_ to check the instance's status.
.. _describe-instances: http://docs.aws.amazon.com/cli/latest/reference/opsworks/describe-instances.html
**Tip** You can start every offline instance in a stack with one command by calling start-stack_.
.. _start-stack: http://docs.aws.amazon.com/cli/latest/reference/opsworks/start-stack.html
**More Information**
For more information, see `Manually Starting, Stopping, and Rebooting 24/7 Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Manually Starting, Stopping, and Rebooting 24/7 Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html
awscli-1.17.14/awscli/examples/opsworks/update-app.rst 0000644 0000000 0000000 00000000641 13620325556 022676 0 ustar root root 0000000 0000000 **To update an app**
The following example updates a specified app to change its name. ::
aws opsworks --region us-east-1 update-app --app-id 26a61ead-d201-47e3-b55c-2a7c666942f8 --name NewAppName
*Output*: None.
**More Information**
For more information, see `Editing Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Editing Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-editing.html
awscli-1.17.14/awscli/examples/opsworks/update-elastic-ip.rst 0000644 0000000 0000000 00000000664 13620325556 024155 0 ustar root root 0000000 0000000 **To update an Elastic IP address name**
The following example updates the name of a specified Elastic IP address. ::
aws opsworks --region us-east-1 update-elastic-ip --elastic-ip 54.148.130.96 --name NewIPName
*Output*: None.
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.17.14/awscli/examples/opsworks/reboot-instance.rst 0000644 0000000 0000000 00000000700 13620325556 023726 0 ustar root root 0000000 0000000 **To reboot an instance**
The following example reboots an instance. ::
aws opsworks --region us-east-1 reboot-instance --instance-id dfe18b02-5327-493d-91a4-c5c0c448927f
*Output*: None.
**More Information**
For more information, see `Rebooting an Instance`_ in the *AWS OpsWorks User Guide*.
.. _`Rebooting an Instance`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-starting.html#workinginstances-starting-reboot
awscli-1.17.14/awscli/examples/opsworks/associate-elastic-ip.rst 0000644 0000000 0000000 00000000760 13620325556 024643 0 ustar root root 0000000 0000000 **To associate an Elastic IP address with an instance**
The following example associates an Elastic IP address with a specified instance. ::
aws opsworks --region us-east-1 associate-elastic-ip --instance-id dfe18b02-5327-493d-91a4-c5c0c448927f --elastic-ip 54.148.130.96
*Output*: None.
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.17.14/awscli/examples/opsworks/delete-stack.rst 0000644 0000000 0000000 00000001335 13620325556 023204 0 ustar root root 0000000 0000000 **To delete a stack**
The following example deletes a specified stack, which is identified by its stack ID.
You can obtain a stack ID by clicking **Stack Settings** on the AWS OpsWorks console or by
running the ``describe-stacks`` command.
**Note:** Before deleting a layer, you must use ``delete-app``, ``delete-instance``, and ``delete-layer``
to delete all of the stack's apps, instances, and layers. ::
aws opsworks delete-stack --region us-east-1 --stack-id 154a9d89-7e9e-433b-8de8-617e53756c84
*Output*: None.
**More Information**
For more information, see `Shut Down a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Shut Down a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-shutting.html
awscli-1.17.14/awscli/examples/opsworks/register-rds-db-instance.rst 0000644 0000000 0000000 00000001626 13620325556 025441 0 ustar root root 0000000 0000000 **To register an Amazon RDS instance with a stack**
The following example registers an Amazon RDS DB instance, identified by its Amazon Resource Name (ARN), with a specified stack.
It also specifies the instance's master username and password. Note that AWS OpsWorks does not validate either of these
values. If either one is incorrect, your application will not be able to connect to the database. ::
aws opsworks register-rds-db-instance --region us-east-1 --stack-id d72553d4-8727-448c-9b00-f024f0ba1b06 --rds-db-instance-arn arn:aws:rds:us-west-2:123456789012:db:clitestdb --db-user cliuser --db-password some23!pwd
*Output*: None.
**More Information**
For more information, see `Registering Amazon RDS Instances with a Stack`_ in the *AWS OpsWorks User Guide*.
.. _`Registering Amazon RDS Instances with a Stack`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources-reg.html#resources-reg-rds
awscli-1.17.14/awscli/examples/opsworks/create-layer.rst 0000644 0000000 0000000 00000001112 13620325556 023205 0 ustar root root 0000000 0000000 **To create a layer**
The following ``create-layer`` command creates a PHP App Server layer named MyPHPLayer in a specified stack. ::
aws opsworks create-layer --region us-east-1 --stack-id f6673d70-32e6-4425-8999-265dd002fec7 --type php-app --name MyPHPLayer --shortname myphplayer
*Output*::
{
"LayerId": "0b212672-6b4b-40e4-8a34-5a943cf2e07a"
}
**More Information**
For more information, see `How to Create a Layer`_ in the *AWS OpsWorks User Guide*.
.. _`How to Create a Layer`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-create.html
awscli-1.17.14/awscli/examples/opsworks/delete-app.rst 0000644 0000000 0000000 00000001015 13620325556 022652 0 ustar root root 0000000 0000000 **To delete an app**
The following example deletes a specified app, which is identified by its app ID.
You can obtain an app ID by going to the app's details page on the AWS OpsWorks console or by
running the ``describe-apps`` command. ::
aws opsworks delete-app --region us-east-1 --app-id 577943b9-2ec1-4baf-a7bf-1d347601edc5
*Output*: None.
**More Information**
For more information, see `Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps.html
awscli-1.17.14/awscli/examples/opsworks/describe-user-profiles.rst 0000644 0000000 0000000 00000001632 13620325556 025214 0 ustar root root 0000000 0000000 **To describe user profiles**
The following ``describe-user-profiles`` command describes the account's user profiles. ::
aws opsworks --region us-east-1 describe-user-profiles
*Output*::
{
"UserProfiles": [
{
"IamUserArn": "arn:aws:iam::123456789012:user/someuser",
"SshPublicKey": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAkOuP7i80q3Cko...",
"AllowSelfManagement": true,
"Name": "someuser",
"SshUsername": "someuser"
},
{
"IamUserArn": "arn:aws:iam::123456789012:user/cli-user-test",
"AllowSelfManagement": true,
"Name": "cli-user-test",
"SshUsername": "myusername"
}
]
}
**More Information**
For more information, see `Managing AWS OpsWorks Users`_ in the *AWS OpsWorks User Guide*.
.. _`Managing AWS OpsWorks Users`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage.html
awscli-1.17.14/awscli/examples/opsworks/assign-instance.rst 0000644 0000000 0000000 00000001055 13620325556 023724 0 ustar root root 0000000 0000000 **To assign a registered instance to a layer**
The following example assigns a registered instance to a custom layer. ::
aws opsworks --region us-east-1 assign-instance --instance-id 4d6d1710-ded9-42a1-b08e-b043ad7af1e2 --layer-ids 26cf1d32-6876-42fa-bbf1-9cadc0bff938
*Output*: None.
**More Information**
For more information, see `Assigning a Registered Instance to a Layer`_ in the *AWS OpsWorks User Guide*.
.. _`Assigning a Registered Instance to a Layer`: http://docs.aws.amazon.com/opsworks/latest/userguide/registered-instances-assign.html
awscli-1.17.14/awscli/examples/opsworks/update-my-user-profile.rst 0000644 0000000 0000000 00000001323 13620325556 025153 0 ustar root root 0000000 0000000 **To update a user's profile**
The following example updates the ``development`` user's profile to use a specified SSH public key.
The user's AWS credentials are represented by the ``development`` profile in the ``credentials`` file
(``~\.aws\credentials``), and the key is in a ``.pem`` file in the working directory. ::
aws opsworks --region us-east-1 --profile development update-my-user-profile --ssh-public-key file://development_key.pem
*Output*: None.
**More Information**
For more information, see `Editing AWS OpsWorks User Settings`_ in the *AWS OpsWorks User Guide*.
.. _`Editing AWS OpsWorks User Settings`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-edit.html
awscli-1.17.14/awscli/examples/opsworks/delete-instance.rst 0000644 0000000 0000000 00000002104 13620325556 023676 0 ustar root root 0000000 0000000 **To delete an instance**
The following example deletes a specified instance, which is identified by its instance ID.
It also deletes any attached Amazon Elastic Block Store (Amazon EBS) volumes or Elastic IP addresses.
You can obtain an instance ID by going to the instance's details page on the AWS OpsWorks console or by
running the ``describe-instances`` command.
If the instance is online, you must first stop the instance by calling ``stop-instance``, and then
wait until the instance has stopped. You can use ``describe-instances`` to check the instance status. ::
aws opsworks delete-instance --region us-east-1 --instance-id 3a21cfac-4a1f-4ce2-a921-b2cfba6f7771
To retain the instance's Amazon EBS volumes or Elastic IP addresses,
use the ``--no-delete-volumes`` or ``--no-delete-elastic-ip`` arguments, respectively.
*Output*: None.
**More Information**
For more information, see `Deleting AWS OpsWorks Instances`_ in the *AWS OpsWorks User Guide*.
.. _`Deleting AWS OpsWorks Instances`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-delete.html
awscli-1.17.14/awscli/examples/opsworks/describe-volumes.rst 0000644 0000000 0000000 00000001503 13620325556 024104 0 ustar root root 0000000 0000000 **To describe a stack's volumes**
The following example describes a stack's EBS volumes. ::
aws opsworks --region us-east-1 describe-volumes --stack-id 8c428b08-a1a1-46ce-a5f8-feddc43771b8
*Output*::
{
"Volumes": [
{
"Status": "in-use",
"AvailabilityZone": "us-west-2a",
"Name": "CLITest",
"InstanceId": "dfe18b02-5327-493d-91a4-c5c0c448927f",
"VolumeType": "standard",
"VolumeId": "56b66fbd-e1a1-4aff-9227-70f77118d4c5",
"Device": "/dev/sdi",
"Ec2VolumeId": "vol-295c1638",
"MountPoint": "/mnt/myvolume",
"Size": 1
}
]
}
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.17.14/awscli/examples/opsworks/disassociate-elastic-ip.rst 0000644 0000000 0000000 00000000706 13620325556 025343 0 ustar root root 0000000 0000000 **To disassociate an Elastic IP address from an instance**
The following example disassociates an Elastic IP address from a specified instance. ::
aws opsworks --region us-east-1 disassociate-elastic-ip --elastic-ip 54.148.130.96
*Output*: None.
**More Information**
For more information, see `Resource Management`_ in the *AWS OpsWorks User Guide*.
.. _`Resource Management`: http://docs.aws.amazon.com/opsworks/latest/userguide/resources.html
awscli-1.17.14/awscli/examples/opsworks/describe-deployments.rst 0000644 0000000 0000000 00000003265 13620325556 024764 0 ustar root root 0000000 0000000 **To describe deployments**
The following ``describe-deployments`` commmand describes the deployments in a specified stack. ::
aws opsworks --region us-east-1 describe-deployments --stack-id 38ee91e2-abdc-4208-a107-0b7168b3cc7a
*Output*::
{
"Deployments": [
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Status": "successful",
"CompletedAt": "2013-07-25T18:57:49+00:00",
"DeploymentId": "6ed0df4c-9ef7-4812-8dac-d54a05be1029",
"Command": {
"Args": {},
"Name": "undeploy"
},
"CreatedAt": "2013-07-25T18:57:34+00:00",
"Duration": 15,
"InstanceIds": [
"8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"9e588a25-35b2-4804-bd43-488f85ebe5b7"
]
},
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"Status": "successful",
"CompletedAt": "2013-07-25T18:56:41+00:00",
"IamUserArn": "arn:aws:iam::123456789012:user/someuser",
"DeploymentId": "19d3121e-d949-4ff2-9f9d-94eac087862a",
"Command": {
"Args": {},
"Name": "deploy"
},
"InstanceIds": [
"8c2673b9-3fe5-420d-9cfa-78d875ee7687",
"9e588a25-35b2-4804-bd43-488f85ebe5b7"
],
"Duration": 72,
"CreatedAt": "2013-07-25T18:55:29+00:00"
}
]
}
**More Information**
For more information, see `Deploying Apps`_ in the *AWS OpsWorks User Guide*.
.. _`Deploying Apps`: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-deploying.html
awscli-1.17.14/awscli/examples/opsworks/detach-elastic-load-balancer.rst 0000644 0000000 0000000 00000001011 13620325556 026162 0 ustar root root 0000000 0000000 **To detach a load balancer from its layer**
The following example detaches a load balancer, identified by its name, from its layer. ::
aws opsworks --region us-east-1 detach-elastic-load-balancer --elastic-load-balancer-name Java-LB --layer-id 888c5645-09a5-4d0e-95a8-812ef1db76a4
*Output*: None.
**More Information**
For more information, see `Elastic Load Balancing`_ in the *AWS OpsWorks User Guide*.
.. _`Elastic Load Balancing`: http://docs.aws.amazon.com/opsworks/latest/userguide/load-balancer-elb.html
awscli-1.17.14/awscli/examples/opsworks/describe-load-based-auto-scaling.rst 0000644 0000000 0000000 00000002345 13620325556 026776 0 ustar root root 0000000 0000000 **To describe a layer's load-based scaling configuration**
The following example describes a specified layer's load-based scaling configuration.
The layer is identified by its layer ID, which you can find on the layer's
details page or by running ``describe-layers``. ::
aws opsworks describe-load-based-auto-scaling --region us-east-1 --layer-ids 6bec29c9-c866-41a0-aba5-fa3e374ce2a1
*Output*: The example layer has a single load-based instance. ::
{
"LoadBasedAutoScalingConfigurations": [
{
"DownScaling": {
"IgnoreMetricsTime": 10,
"ThresholdsWaitTime": 10,
"InstanceCount": 1,
"CpuThreshold": 30.0
},
"Enable": true,
"UpScaling": {
"IgnoreMetricsTime": 5,
"ThresholdsWaitTime": 5,
"InstanceCount": 1,
"CpuThreshold": 80.0
},
"LayerId": "6bec29c9-c866-41a0-aba5-fa3e374ce2a1"
}
]
}
**More Information**
For more information, see `How Automatic Load-based Scaling Works`_ in the *AWS OpsWorks User Guide*.
.. _`How Automatic Load-based Scaling Works`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling.html#workinginstances-autoscaling-loadbased
awscli-1.17.14/awscli/examples/opsworks/update-instance.rst 0000644 0000000 0000000 00000000745 13620325556 023727 0 ustar root root 0000000 0000000 **To update an instance**
The following example updates a specified instance's type. ::
aws opsworks --region us-east-1 update-instance --instance-id dfe18b02-5327-493d-91a4-c5c0c448927f --instance-type c3.xlarge
*Output*: None.
**More Information**
For more information, see `Editing the Instance Configuration`_ in the *AWS OpsWorks User Guide*.
.. _`Editing the Instance Configuration`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-properties.html
awscli-1.17.14/awscli/examples/opsworks/describe-my-user-profile.rst 0000644 0000000 0000000 00000001511 13620325556 025450 0 ustar root root 0000000 0000000 **To obtain a user's profile**
The following example shows how to obtain the profile
of the AWS Identity and Access Management (IAM) user that is running the command. ::
aws opsworks --region us-east-1 describe-my-user-profile
*Output*: For brevity, most of the user's SSH public key is replaced by an ellipsis (...). ::
{
"UserProfile": {
"IamUserArn": "arn:aws:iam::123456789012:user/myusername",
"SshPublicKey": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQ...3LQ4aX9jpxQw== rsa-key-20141104",
"Name": "myusername",
"SshUsername": "myusername"
}
}
**More Information**
For more information, see `Importing Users into AWS OpsWorks`_ in the *AWS OpsWorks User Guide*.
.. _`Importing Users into AWS OpsWorks`: http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html
awscli-1.17.14/awscli/examples/opsworks/describe-apps.rst 0000644 0000000 0000000 00000002052 13620325556 023355 0 ustar root root 0000000 0000000 **To describe apps**
The following ``describe-apps`` command describes the apps in a specified stack. ::
aws opsworks --region us-east-1 describe-apps --stack-id 38ee91e2-abdc-4208-a107-0b7168b3cc7a
*Output*: This particular stack has one app.
::
{
"Apps": [
{
"StackId": "38ee91e2-abdc-4208-a107-0b7168b3cc7a",
"AppSource": {
"Url": "https://s3-us-west-2.amazonaws.com/opsworks-tomcat/simplejsp.zip",
"Type": "archive"
},
"Name": "SimpleJSP",
"EnableSsl": false,
"SslConfiguration": {},
"AppId": "da1decc1-0dff-43ea-ad7c-bb667cd87c8b",
"Attributes": {
"RailsEnv": null,
"AutoBundleOnDeploy": "true",
"DocumentRoot": "ROOT"
},
"Shortname": "simplejsp",
"Type": "other",
"CreatedAt": "2013-08-01T21:46:54+00:00"
}
]
}
**More Information**
For more information, see Apps_ in the *AWS OpsWorks User Guide*.
.. _Apps: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps.html
awscli-1.17.14/awscli/examples/opsworks/create-instance.rst 0000644 0000000 0000000 00000002022 13620325556 023676 0 ustar root root 0000000 0000000 **To create an instance**
The following ``create-instance`` command creates an m1.large Amazon Linux instance named myinstance1 in a specified stack.
The instance is assigned to one layer. ::
aws opsworks --region us-east-1 create-instance --stack-id 935450cc-61e0-4b03-a3e0-160ac817d2bb --layer-ids 5c8c272a-f2d5-42e3-8245-5bf3927cb65b --hostname myinstance1 --instance-type m1.large --os "Amazon Linux"
To use an autogenerated name, call `get-hostname-suggestion`_, which generates
a hostname based on the theme that you specified when you created the stack.
Then pass that name to the `hostname` argument.
.. _get-hostname-suggestion: http://docs.aws.amazon.com/cli/latest/reference/opsworks/get-hostname-suggestion.html
*Output*::
{
"InstanceId": "5f9adeaa-c94c-42c6-aeef-28a5376002cd"
}
**More Information**
For more information, see `Adding an Instance to a Layer`_ in the *AWS OpsWorks User Guide*.
.. _`Adding an Instance to a Layer`: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html
awscli-1.17.14/awscli/examples/qldb/ 0000755 0000000 0000000 00000000000 13620325757 017141 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/qldb/create-ledger.rst 0000644 0000000 0000000 00000003050 13620325556 022371 0 ustar root root 0000000 0000000 **Example 1: To create a ledger with default properties**
The following ``create-ledger`` example creates a ledger with the name ``myExampleLedger`` and the permissions mode ``ALLOW_ALL``. The optional parameter for deletion protection is not specified, so it defaults to ``true``. ::
aws qldb create-ledger \
--name myExampleLedger \
--permissions-mode ALLOW_ALL
Output::
{
"State": "CREATING",
"Arn": "arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger",
"DeletionProtection": true,
"CreationDateTime": 1568839243.951,
"Name": "myExampleLedger"
}
**Example 2: To create a ledger with deletion protection disabled and with specified tags**
The following ``create-ledger`` example creates a ledger with the name ``myExampleLedger2`` and the permissions mode ``ALLOW_ALL``. The deletion protection feature is disabled, and the specified tags are attached to the resource. ::
aws qldb create-ledger \
--name myExampleLedger \
--no-deletion-protection \
--permissions-mode ALLOW_ALL \
--tags IsTest=true,Domain=Test
Output::
{
"Arn": "arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger2",
"DeletionProtection": false,
"CreationDateTime": 1568839543.557,
"State": "CREATING",
"Name": "myExampleLedger2"
}
For more information, see `Basic Operations for Amazon QLDB Ledgers `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/list-ledgers.rst 0000644 0000000 0000000 00000001357 13620325556 022274 0 ustar root root 0000000 0000000 **To list your available ledgers**
The following ``list-ledgers`` example lists all ledgers that are associated with the current AWS account and Region. ::
aws qldb list-ledgers
Output::
{
"Ledgers": [
{
"State": "ACTIVE",
"CreationDateTime": 1568839243.951,
"Name": "myExampleLedger"
},
{
"State": "ACTIVE",
"CreationDateTime": 1568839543.557,
"Name": "myExampleLedger2"
}
]
}
For more information, see `Basic Operations for Amazon QLDB Ledgers `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/list-journal-s3-exports.rst 0000644 0000000 0000000 00000003506 13620325556 024344 0 ustar root root 0000000 0000000 **To list journal export jobs**
The following ``list-journal-s3-exports`` example lists journal export jobs for all ledgers that are associated with the current AWS account and Region. ::
aws qldb list-journal-s3-exports
Output::
{
"JournalS3Exports": [
{
"Status": "IN_PROGRESS",
"LedgerName": "myExampleLedger",
"S3ExportConfiguration": {
"EncryptionConfiguration": {
"ObjectEncryptionType": "SSE_S3"
},
"Bucket": "awsExampleBucket",
"Prefix": "ledgerexport1/"
},
"RoleArn": "arn:aws:iam::123456789012:role/my-s3-export-role",
"ExportCreationTime": 1568847801.418,
"ExportId": "ADR2ONPKN5LINYGb4dp7yZ",
"InclusiveStartTime": 1568764800.0,
"ExclusiveEndTime": 1568847599.0
},
{
"Status": "COMPLETED",
"LedgerName": "myExampleLedger2",
"S3ExportConfiguration": {
"EncryptionConfiguration": {
"ObjectEncryptionType": "SSE_S3"
},
"Bucket": "awsExampleBucket",
"Prefix": "ledgerexport1/"
},
"RoleArn": "arn:aws:iam::123456789012:role/my-s3-export-role",
"ExportCreationTime": 1568846847.638,
"ExportId": "2pdvW8UQrjBAiYTMehEJDI",
"InclusiveStartTime": 1568592000.0,
"ExclusiveEndTime": 1568764800.0
}
]
}
For more information, see `Exporting Your Journal in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/export-journal-to-s3.rst 0000644 0000000 0000000 00000002056 13620325556 023627 0 ustar root root 0000000 0000000 **To export journal blocks to S3**
The following ``export-journal-to-s3`` example creates an export job for journal blocks within a specified date and time range from a ledger with the name ``myExampleLedger``. The export job writes the blocks into a specified Amazon S3 bucket. ::
aws qldb export-journal-to-s3 \
--name myExampleLedger \
--inclusive-start-time 2019-09-18T00:00:00Z \
--exclusive-end-time 2019-09-18T22:59:59Z \
--role-arn arn:aws:iam::123456789012:role/my-s3-export-role \
--s3-export-configuration file://my-s3-export-config.json
Contents of ``my-s3-export-config.json``::
{
"Bucket": "awsExampleBucket",
"Prefix": "ledgerexport1/",
"EncryptionConfiguration": {
"ObjectEncryptionType": "SSE_S3"
}
}
Output::
{
"ExportId": "ADR2ONPKN5LINYGb4dp7yZ"
}
For more information, see `Exporting Your Journal in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/update-ledger.rst 0000644 0000000 0000000 00000001262 13620325556 022413 0 ustar root root 0000000 0000000 **To update properties of a ledger**
The following ``update-ledger`` example updates the specified ledger to disable the deletion protection feature. ::
aws qldb update-ledger \
--name myExampleLedger \
--no-deletion-protection
Output::
{
"CreationDateTime": 1568839243.951,
"Arn": "arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger",
"DeletionProtection": false,
"Name": "myExampleLedger",
"State": "ACTIVE"
}
For more information, see `Basic Operations for Amazon QLDB Ledgers `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/describe-journal-s3-export.rst 0000644 0000000 0000000 00000002156 13620325556 024766 0 ustar root root 0000000 0000000 **To describe a journal export job**
The following ``describe-journal-s3-export`` example displays the details for the specified export job from a ledger. ::
aws qldb describe-journal-s3-export \
--name myExampleLedger \
--export-id ADR2ONPKN5LINYGb4dp7yZ
Output::
{
"ExportDescription": {
"S3ExportConfiguration": {
"Bucket": "awsExampleBucket",
"Prefix": "ledgerexport1/",
"EncryptionConfiguration": {
"ObjectEncryptionType": "SSE_S3"
}
},
"RoleArn": "arn:aws:iam::123456789012:role/my-s3-export-role",
"Status": "COMPLETED",
"ExportCreationTime": 1568847801.418,
"InclusiveStartTime": 1568764800.0,
"ExclusiveEndTime": 1568847599.0,
"LedgerName": "myExampleLedger",
"ExportId": "ADR2ONPKN5LINYGb4dp7yZ"
}
}
For more information, see `Exporting Your Journal in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/untag-resource.rst 0000644 0000000 0000000 00000000761 13620325556 022637 0 ustar root root 0000000 0000000 **To remove tags from a resource**
The following ``untag-resource`` example removes tags with the specified tag keys from a specified ledger. ::
aws qldb untag-resource \
--resource-arn arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger \
--tag-keys IsTest Domain
This command produces no output.
For more information, see `Tagging Amazon QLDB Resources `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/get-digest.rst 0000644 0000000 0000000 00000001143 13620325556 021723 0 ustar root root 0000000 0000000 **To get a digest for a ledger**
The following ``get-digest`` example requests a digest from the specified ledger at the latest committed block in the journal. ::
aws qldb get-digest \
--name vehicle-registration
Output::
{
"Digest": "6m6BMXobbJKpMhahwVthAEsN6awgnHK62Qq5McGP1Gk=",
"DigestTipAddress": {
"IonText": "{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:123}"
}
}
For more information, see `Data Verification in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/list-tags-for-resource.rst 0000644 0000000 0000000 00000001041 13620325556 024204 0 ustar root root 0000000 0000000 **To list the tags attached to a ledger**
The following ``list-tags-for-resource`` example lists all tags attached to the specified ledger. ::
aws qldb list-tags-for-resource \
--resource-arn arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger
Output::
{
"Tags": {
"IsTest": "true",
"Domain": "Test"
}
}
For more information, see `Tagging Amazon QLDB Resources `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/get-block.rst 0000644 0000000 0000000 00000005425 13620325556 021545 0 ustar root root 0000000 0000000 **To get a journal block and proof for verification**
The following ``get-block`` example requests a block data object and a proof from the specified ledger. The request is for a specified digest tip address and block address. ::
aws qldb get-block \
--name vehicle-registration \
--block-address file://myblockaddress.json \
--digest-tip-address file://mydigesttipaddress.json
Contents of ``myblockaddress.json``::
{
"IonText": "{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:100}"
}
Contents of ``mydigesttipaddress.json``::
{
"IonText": "{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:123}"
}
Output::
{
"Block": {
"IonText": "{blockAddress:{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:100},transactionId:\"FnQeJBAicTX0Ah32ZnVtSX\",blockTimestamp:2019-09-16T19:37:05.360Z,blockHash:{{NoChM92yKRuJAb/jeLd1VnYn4DHiWIf071ACfic9uHc=}},entriesHash:{{l05LOsiKV14SDbuaYnH7uwXzUvqzIwUiRLXGbTyj/nY=}},previousBlockHash:{{7kewBXhpdbClcZKxhVmpoMHpUGOJtWQD0iY2LPfZkYA=}},entriesHashList:[{{eRSwnmAM7WWANWDd5iGOyK+T4tDXyzUq6HZ/0fgLHos=}},{{mHVex/yjHAWjFPpwhBuH2GKXmKJjK2FBa9faqoUVNtg=}},{{y5cCBr7pOAIUfsVQ1j0TqtE97b4b4oo1R0vnYyE5wWM=}},{{TvTXygML1bMe6NvEZtGkX+KR+W/EJl4qD1mmV77KZQg=}}],transactionInfo:{statements:[{statement:\"FROM VehicleRegistration AS r \\nWHERE r.VIN = '1N4AL11D75C109151'\\nINSERT INTO r.Owners.SecondaryOwners\\n VALUE { 'PersonId' : 'CMVdR77XP8zAglmmFDGTvt' }\",startTime:2019-09-16T19:37:05.302Z,statementDigest:{{jcgPX2vsOJ0waum4qmDYtn1pCAT9xKNIzA+2k4R+mxA=}}}],documents:{JUJgkIcNbhS2goq8RqLuZ4:{tableName:\"VehicleRegistration\",tableId:\"BFJKdXgzt9oF4wjMbuxy4G\",statements:[0]}}},revisions:[{blockAddress:{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:100},hash:{{mHVex/yjHAWjFPpwhBuH2GKXmKJjK2FBa9faqoUVNtg=}},data:{VIN:\"1N4AL11D75C109151\",LicensePlateNumber:\"LEWISR261LL\",State:\"WA\",PendingPenaltyTicketAmount:90.25,ValidFromDate:2017-08-21,ValidToDate:2020-05-11,Owners:{PrimaryOwner:{PersonId:\"BFJKdXhnLRT27sXBnojNGW\"},SecondaryOwners:[{PersonId:\"CMVdR77XP8zAglmmFDGTvt\"}]},City:\"Everett\"},metadata:{id:\"JUJgkIcNbhS2goq8RqLuZ4\",version:3,txTime:2019-09-16T19:37:05.344Z,txId:\"FnQeJBAicTX0Ah32ZnVtSX\"}}]}"
},
"Proof": {
"IonText": "[{{l3+EXs69K1+rehlqyWLkt+oHDlw4Zi9pCLW/t/mgTPM=}},{{48CXG3ehPqsxCYd34EEa8Fso0ORpWWAO8010RJKf3Do=}},{{9UnwnKSQT0i3ge1JMVa+tMIqCEDaOPTkWxmyHSn8UPQ=}},{{3nW6Vryghk+7pd6wFCtLufgPM6qXHyTNeCb1sCwcDaI=}},{{Irb5fNhBrNEQ1VPhzlnGT/ZQPadSmgfdtMYcwkNOxoI=}},{{+3CWpYG/ytf/vq9GidpzSx6JJiLXt1hMQWNnqOy3jfY=}},{{NPx6cRhwsiy5m9UEWS5JTJrZoUdO2jBOAAOmyZAT+qE=}}]"
}
}
For more information, see `Data Verification in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/delete-ledger.rst 0000644 0000000 0000000 00000000605 13620325556 022373 0 ustar root root 0000000 0000000 **To delete a ledger**
The following ``delete-ledger`` example deletes the specified ledger. ::
aws qldb delete-ledger \
--name myExampleLedger
This command produces no output.
For more information, see `Basic Operations for Amazon QLDB Ledgers `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/get-revision.rst 0000644 0000000 0000000 00000004052 13620325556 022304 0 ustar root root 0000000 0000000 **To get a document revision and proof for verification**
The following ``get-revision`` example requests a revision data object and a proof from the specified ledger. The request is for a specified digest tip address, document ID, and block address of the revision. ::
aws qldb get-revision \
--name vehicle-registration \
--block-address file://myblockaddress.json \
--document-id JUJgkIcNbhS2goq8RqLuZ4 \
--digest-tip-address file://mydigesttipaddress.json
Contents of ``myblockaddress.json``::
{
"IonText": "{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:100}"
}
Contents of ``mydigesttipaddress.json``::
{
"IonText": "{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:123}"
}
Output::
{
"Revision": {
"IonText": "{blockAddress:{strandId:\"KmA3ZZca7vAIiJAK9S5Iwl\",sequenceNo:100},hash:{{mHVex/yjHAWjFPpwhBuH2GKXmKJjK2FBa9faqoUVNtg=}},data:{VIN:\"1N4AL11D75C109151\",LicensePlateNumber:\"LEWISR261LL\",State:\"WA\",PendingPenaltyTicketAmount:90.25,ValidFromDate:2017-08-21,ValidToDate:2020-05-11,Owners:{PrimaryOwner:{PersonId:\"BFJKdXhnLRT27sXBnojNGW\"},SecondaryOwners:[{PersonId:\"CMVdR77XP8zAglmmFDGTvt\"}]},City:\"Everett\"},metadata:{id:\"JUJgkIcNbhS2goq8RqLuZ4\",version:3,txTime:2019-09-16T19:37:05.344Z,txId:\"FnQeJBAicTX0Ah32ZnVtSX\"}}"
},
"Proof": {
"IonText": "[{{eRSwnmAM7WWANWDd5iGOyK+T4tDXyzUq6HZ/0fgLHos=}},{{VV1rdaNuf+yJZVGlmsM6gr2T52QvBO8Lg+KgpjcnWAU=}},{{7kewBXhpdbClcZKxhVmpoMHpUGOJtWQD0iY2LPfZkYA=}},{{l3+EXs69K1+rehlqyWLkt+oHDlw4Zi9pCLW/t/mgTPM=}},{{48CXG3ehPqsxCYd34EEa8Fso0ORpWWAO8010RJKf3Do=}},{{9UnwnKSQT0i3ge1JMVa+tMIqCEDaOPTkWxmyHSn8UPQ=}},{{3nW6Vryghk+7pd6wFCtLufgPM6qXHyTNeCb1sCwcDaI=}},{{Irb5fNhBrNEQ1VPhzlnGT/ZQPadSmgfdtMYcwkNOxoI=}},{{+3CWpYG/ytf/vq9GidpzSx6JJiLXt1hMQWNnqOy3jfY=}},{{NPx6cRhwsiy5m9UEWS5JTJrZoUdO2jBOAAOmyZAT+qE=}}]"
}
}
For more information, see `Data Verification in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/tag-resource.rst 0000644 0000000 0000000 00000000714 13620325556 022272 0 ustar root root 0000000 0000000 **To tag a ledger**
The following ``tag-resource`` example adds a set of tags to a specified ledger. ::
aws qldb tag-resource \
--resource-arn arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger \
--tags IsTest=true,Domain=Test
This command produces no output.
For more information, see `Tagging Amazon QLDB Resources `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/describe-ledger.rst 0000644 0000000 0000000 00000001153 13620325556 022710 0 ustar root root 0000000 0000000 **To describe a ledger**
The following ``describe-ledger`` example displays the details for the specified ledger. ::
aws qldb describe-ledger \
--name myExampleLedger
Output::
{
"CreationDateTime": 1568839243.951,
"Arn": "arn:aws:qldb:us-west-2:123456789012:ledger/myExampleLedger",
"State": "ACTIVE",
"Name": "myExampleLedger",
"DeletionProtection": true
}
For more information, see `Basic Operations for Amazon QLDB Ledgers `__ in the *Amazon QLDB Developer Guide*.
awscli-1.17.14/awscli/examples/qldb/list-journal-s3-exports-for-ledger.rst 0000644 0000000 0000000 00000002235 13620325556 026366 0 ustar root root 0000000 0000000 **To list journal export jobs for a ledger**
The following ``list-journal-s3-exports-for-ledger`` example lists journal export jobs for the specified ledger. ::
aws qldb list-journal-s3-exports-for-ledger \
--name myExampleLedger
Output::
{
"JournalS3Exports": [
{
"LedgerName": "myExampleLedger",
"ExclusiveEndTime": 1568847599.0,
"ExportCreationTime": 1568847801.418,
"S3ExportConfiguration": {
"Bucket": "awsExampleBucket",
"Prefix": "ledgerexport1/",
"EncryptionConfiguration": {
"ObjectEncryptionType": "SSE_S3"
}
},
"ExportId": "ADR2ONPKN5LINYGb4dp7yZ",
"RoleArn": "arn:aws:iam::123456789012:role/qldb-s3-export",
"InclusiveStartTime": 1568764800.0,
"Status": "IN_PROGRESS"
}
]
}
For more information, see `Exporting Your Journal in Amazon QLDB `__ in the *Amazon QLDB Developer Guide*. awscli-1.17.14/awscli/examples/sqs/ 0000755 0000000 0000000 00000000000 13620325757 017025 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/sqs/change-message-visibility.rst 0000644 0000000 0000000 00000000547 13620325556 024616 0 ustar root root 0000000 0000000 **To change a message's timeout visibility**
This example changes the specified message's timeout visibility to 10 hours (10 hours * 60 minutes * 60 seconds).
Command::
aws sqs change-message-visibility --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --receipt-handle AQEBTpyI...t6HyQg== --visibility-timeout 36000
Output::
None. awscli-1.17.14/awscli/examples/sqs/change-message-visibility-batch.rst 0000644 0000000 0000000 00000001431 13620325556 025666 0 ustar root root 0000000 0000000 **To change multiple messages' timeout visibilities as a batch**
This example changes the 2 specified messages' timeout visibilities to 10 hours (10 hours * 60 minutes * 60 seconds).
Command::
aws sqs change-message-visibility-batch --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --entries file://change-message-visibility-batch.json
Input file (change-message-visibility-batch.json)::
[
{
"Id": "FirstMessage",
"ReceiptHandle": "AQEBhz2q...Jf3kaw==",
"VisibilityTimeout": 36000
},
{
"Id": "SecondMessage",
"ReceiptHandle": "AQEBkTUH...HifSnw==",
"VisibilityTimeout": 36000
}
]
Output::
{
"Successful": [
{
"Id": "SecondMessage"
},
{
"Id": "FirstMessage"
}
]
}
awscli-1.17.14/awscli/examples/sqs/get-queue-url.rst 0000644 0000000 0000000 00000000323 13620325556 022253 0 ustar root root 0000000 0000000 **To get a queue URL**
This example gets the specified queue's URL.
Command::
aws sqs get-queue-url --queue-name MyQueue
Output::
{
"QueueUrl": "https://queue.amazonaws.com/80398EXAMPLE/MyQueue"
} awscli-1.17.14/awscli/examples/sqs/remove-permission.rst 0000644 0000000 0000000 00000000422 13620325556 023235 0 ustar root root 0000000 0000000 **To remove a permission**
This example removes the permission with the specified label from the specified queue.
Command::
aws sqs remove-permission --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --label SendMessagesFromMyQueue
Output::
None. awscli-1.17.14/awscli/examples/sqs/set-queue-attributes.rst 0000644 0000000 0000000 00000001735 13620325556 023663 0 ustar root root 0000000 0000000 **To set queue attributes**
This example sets the specified queue to a delivery delay of 10 seconds, a maximum message size of 128 KB (128 KB * 1,024 bytes), a message retention period of 3 days (3 days * 24 hours * 60 minutes * 60 seconds), a receive message wait time of 20 seconds, and a default visibility timeout of 60 seconds. This example also associates the specified dead letter queue with a maximum receive count of 1,000 messages.
Command::
aws sqs set-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue --attributes file://set-queue-attributes.json
Input file (set-queue-attributes.json)::
{
"DelaySeconds": "10",
"MaximumMessageSize": "131072",
"MessageRetentionPeriod": "259200",
"ReceiveMessageWaitTimeSeconds": "20",
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"VisibilityTimeout": "60"
}
Output::
None.
awscli-1.17.14/awscli/examples/sqs/send-message.rst 0000644 0000000 0000000 00000001533 13620325556 022131 0 ustar root root 0000000 0000000 **To send a message**
This example sends a message with the specified message body, delay period, and message attributes, to the specified queue.
Command::
aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --message-body "Information about the largest city in Any Region." --delay-seconds 10 --message-attributes file://send-message.json
Input file (send-message.json)::
{
"City": {
"DataType": "String",
"StringValue": "Any City"
},
"Greeting": {
"DataType": "Binary",
"BinaryValue": "Hello, World!"
},
"Population": {
"DataType": "Number",
"StringValue": "1250800"
}
}
Output::
{
"MD5OfMessageBody": "51b0a325...39163aa0",
"MD5OfMessageAttributes": "00484c68...59e48f06",
"MessageId": "da68f62c-0c07-4bee-bf5f-7e856EXAMPLE"
}
awscli-1.17.14/awscli/examples/sqs/untag-queue.rst 0000644 0000000 0000000 00000001027 13620325556 022014 0 ustar root root 0000000 0000000 **To remove cost allocation tags from a queue**
The following ``untag-queue`` example removes a cost allocation tag from the specified Amazon SQS queue. ::
aws sqs tag-queue \
--queue-url https://sqs.us-west-2.amazonaws.com/123456789012/MyQueue \
--tag-keys "Priority"
This command produces no output.
For more information, see `Adding Cost Allocation Tags `__ in the *Amazon Simple Queue Service Developer Guide*.
awscli-1.17.14/awscli/examples/sqs/create-queue.rst 0000644 0000000 0000000 00000001247 13620325556 022145 0 ustar root root 0000000 0000000 **To create a queue**
This example creates a queue with the specified name, sets the message retention period to 3 days (3 days * 24 hours * 60 minutes * 60 seconds), and sets the queue's dead letter queue to the specified queue with a maximum receive count of 1,000 messages.
Command::
aws sqs create-queue --queue-name MyQueue --attributes file://create-queue.json
Input file (create-queue.json)::
{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"MessageRetentionPeriod": "259200"
}
Output::
{
"QueueUrl": "https://queue.amazonaws.com/80398EXAMPLE/MyQueue"
}
awscli-1.17.14/awscli/examples/sqs/list-dead-letter-source-queues.rst 0000644 0000000 0000000 00000000655 13620325556 025530 0 ustar root root 0000000 0000000 **To list dead letter source queues**
This example lists the queues that are associated with the specified dead letter source queue.
Command::
aws sqs list-dead-letter-source-queues --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue
Output::
{
"queueUrls": [
"https://queue.amazonaws.com/80398EXAMPLE/MyQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyOtherQueue"
]
} awscli-1.17.14/awscli/examples/sqs/delete-message.rst 0000644 0000000 0000000 00000000346 13620325556 022443 0 ustar root root 0000000 0000000 **To delete a message**
This example deletes the specified message.
Command::
aws sqs delete-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --receipt-handle AQEBRXTo...q2doVA==
Output::
None. awscli-1.17.14/awscli/examples/sqs/add-permission.rst 0000644 0000000 0000000 00000000515 13620325556 022473 0 ustar root root 0000000 0000000 **To add a permission to a queue**
This example enables the specified AWS account to send messages to the specified queue.
Command::
aws sqs add-permission --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --label SendMessagesFromMyQueue --aws-account-ids 12345EXAMPLE --actions SendMessage
Output::
None. awscli-1.17.14/awscli/examples/sqs/delete-message-batch.rst 0000644 0000000 0000000 00000001122 13620325556 023513 0 ustar root root 0000000 0000000 **To delete multiple messages as a batch**
This example deletes the specified messages.
Command::
aws sqs delete-message-batch --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --entries file://delete-message-batch.json
Input file (delete-message-batch.json)::
[
{
"Id": "FirstMessage",
"ReceiptHandle": "AQEB1mgl...Z4GuLw=="
},
{
"Id": "SecondMessage",
"ReceiptHandle": "AQEBLsYM...VQubAA=="
}
]
Output::
{
"Successful": [
{
"Id": "FirstMessage"
},
{
"Id": "SecondMessage"
}
]
} awscli-1.17.14/awscli/examples/sqs/get-queue-attributes.rst 0000644 0000000 0000000 00000002366 13620325556 023650 0 ustar root root 0000000 0000000 **To get a queue's attributes**
This example gets all of the specified queue's attributes.
Command::
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names All
Output::
{
"Attributes": {
"ApproximateNumberOfMessagesNotVisible": "0",
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":1000}",
"MessageRetentionPeriod": "345600",
"ApproximateNumberOfMessagesDelayed": "0",
"MaximumMessageSize": "262144",
"CreatedTimestamp": "1442426968",
"ApproximateNumberOfMessages": "0",
"ReceiveMessageWaitTimeSeconds": "0",
"DelaySeconds": "0",
"VisibilityTimeout": "30",
"LastModifiedTimestamp": "1442426968",
"QueueArn": "arn:aws:sqs:us-east-1:80398EXAMPLE:MyNewQueue"
}
}
This example gets only the specified queue's maximum message size and visibility timeout attributes.
Command::
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue --attribute-names MaximumMessageSize VisibilityTimeout
Output::
{
"Attributes": {
"VisibilityTimeout": "30",
"MaximumMessageSize": "262144"
}
}
awscli-1.17.14/awscli/examples/sqs/list-queue-tags.rst 0000644 0000000 0000000 00000001073 13620325556 022606 0 ustar root root 0000000 0000000 **To list all cost allocation tags for a queue**
The following ``list-queue-tags`` example displays all of the cost allocation tags associated with the specified queue. ::
aws sqs list-queue-tags \
--queue-url https://sqs.us-west-2.amazonaws.com/123456789012/MyQueue
Output::
{
"Tags": {
"Team": "Alpha"
}
}
For more information, see `Listing Cost Allocation Tags `__ in the *Amazon Simple Queue Service Developer Guide*.
awscli-1.17.14/awscli/examples/sqs/list-queues.rst 0000644 0000000 0000000 00000001425 13620325556 022036 0 ustar root root 0000000 0000000 **To list queues**
This example lists all queues.
Command::
aws sqs list-queues
Output::
{
"QueueUrls": [
"https://queue.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyOtherQueue",
"https://queue.amazonaws.com/80398EXAMPLE/TestQueue1",
"https://queue.amazonaws.com/80398EXAMPLE/TestQueue2"
]
}
This example lists only queues that start with "My".
Command::
aws sqs list-queues --queue-name-prefix My
Output::
{
"QueueUrls": [
"https://queue.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyQueue",
"https://queue.amazonaws.com/80398EXAMPLE/MyOtherQueue"
]
} awscli-1.17.14/awscli/examples/sqs/receive-message.rst 0000644 0000000 0000000 00000003733 13620325556 022626 0 ustar root root 0000000 0000000 **To receive a message**
This example receives up to 10 available messages, returning all available attributes.
Command::
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names All --message-attribute-names All --max-number-of-messages 10
Output::
{
"Messages": [
{
"Body": "My first message.",
"ReceiptHandle": "AQEBzbVv...fqNzFw==",
"MD5OfBody": "1000f835...a35411fa",
"MD5OfMessageAttributes": "9424c491...26bc3ae7",
"MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE",
"Attributes": {
"ApproximateFirstReceiveTimestamp": "1442428276921",
"SenderId": "AIDAIAZKMSNQ7TEXAMPLE",
"ApproximateReceiveCount": "5",
"SentTimestamp": "1442428276921"
},
"MessageAttributes": {
"PostalCode": {
"DataType": "String",
"StringValue": "ABC123"
},
"City": {
"DataType": "String",
"StringValue": "Any City"
}
}
}
]
}
This example receives the next available message, returning only the SenderId and SentTimestamp attributes as well as the PostalCode message attribute.
Command::
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names SenderId SentTimestamp --message-attribute-names PostalCode
Output::
{
"Messages": [
{
"Body": "My first message.",
"ReceiptHandle": "AQEB6nR4...HzlvZQ==",
"MD5OfBody": "1000f835...a35411fa",
"MD5OfMessageAttributes": "b8e89563...e088e74f",
"MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE",
"Attributes": {
"SenderId": "AIDAIAZKMSNQ7TEXAMPLE",
"SentTimestamp": "1442428276921"
},
"MessageAttributes": {
"PostalCode": {
"DataType": "String",
"StringValue": "ABC123"
}
}
}
]
} awscli-1.17.14/awscli/examples/sqs/purge-queue.rst 0000644 0000000 0000000 00000000314 13620325556 022016 0 ustar root root 0000000 0000000 **To purge a queue**
This example deletes all messages in the specified queue.
Command::
aws sqs purge-queue --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue
Output::
None. awscli-1.17.14/awscli/examples/sqs/tag-queue.rst 0000644 0000000 0000000 00000001015 13620325556 021446 0 ustar root root 0000000 0000000 **To add cost allocation tags to a queue**
The following ``tag-queue`` example adds a cost allocation tag to the specified Amazon SQS queue. ::
aws sqs tag-queue \
--queue-url https://sqs.us-west-2.amazonaws.com/123456789012/MyQueue \
--tags Priority=Highest
This command produces no output.
For more information, see `Adding Cost Allocation Tags `__ in the *Amazon Simple Queue Service Developer Guide*.
awscli-1.17.14/awscli/examples/sqs/send-message-batch.rst 0000644 0000000 0000000 00000004211 13620325556 023204 0 ustar root root 0000000 0000000 **To send multiple messages as a batch**
This example sends 2 messages with the specified message bodies, delay periods, and message attributes, to the specified queue.
Command::
aws sqs send-message-batch --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --entries file://send-message-batch.json
Input file (send-message-batch.json)::
[
{
"Id": "FuelReport-0001-2015-09-16T140731Z",
"MessageBody": "Fuel report for account 0001 on 2015-09-16 at 02:07:31 PM.",
"DelaySeconds": 10,
"MessageAttributes": {
"SellerName": {
"DataType": "String",
"StringValue": "Example Store"
},
"City": {
"DataType": "String",
"StringValue": "Any City"
},
"Region": {
"DataType": "String",
"StringValue": "WA"
},
"PostalCode": {
"DataType": "String",
"StringValue": "99065"
},
"PricePerGallon": {
"DataType": "Number",
"StringValue": "1.99"
}
}
},
{
"Id": "FuelReport-0002-2015-09-16T140930Z",
"MessageBody": "Fuel report for account 0002 on 2015-09-16 at 02:09:30 PM.",
"DelaySeconds": 10,
"MessageAttributes": {
"SellerName": {
"DataType": "String",
"StringValue": "Example Fuels"
},
"City": {
"DataType": "String",
"StringValue": "North Town"
},
"Region": {
"DataType": "String",
"StringValue": "WA"
},
"PostalCode": {
"DataType": "String",
"StringValue": "99123"
},
"PricePerGallon": {
"DataType": "Number",
"StringValue": "1.87"
}
}
}
]
Output::
{
"Successful": [
{
"MD5OfMessageBody": "203c4a38...7943237e",
"MD5OfMessageAttributes": "10809b55...baf283ef",
"Id": "FuelReport-0001-2015-09-16T140731Z",
"MessageId": "d175070c-d6b8-4101-861d-adeb3EXAMPLE"
},
{
"MD5OfMessageBody": "2cf0159a...c1980595",
"MD5OfMessageAttributes": "55623928...ae354a25",
"Id": "FuelReport-0002-2015-09-16T140930Z",
"MessageId": "f9b7d55d-0570-413e-b9c5-a9264EXAMPLE"
}
]
}
awscli-1.17.14/awscli/examples/sqs/delete-queue.rst 0000644 0000000 0000000 00000000300 13620325556 022131 0 ustar root root 0000000 0000000 **To delete a queue**
This example deletes the specified queue.
Command::
aws sqs delete-queue --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewerQueue
Output::
None. awscli-1.17.14/awscli/examples/kafka/ 0000755 0000000 0000000 00000000000 13620325757 017274 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/kafka/create-configuration.rst 0000644 0000000 0000000 00000002557 13620325556 024144 0 ustar root root 0000000 0000000 **To create a custom Amazon MSK configuration**
The following ``create-configuration`` example creates a custom MSK configuration with the server properties that are specified in the input file. ::
aws kafka create-configuration \
--name "CustomConfiguration" \
--description "Topic autocreation enabled; Apache ZooKeeper timeout 2000 ms; Log rolling 604800000 ms." \
--kafka-versions "2.2.1" \
--server-properties file://configuration.txt
Contents of ``configuration.txt``::
auto.create.topics.enable = true
zookeeper.connection.timeout.ms = 2000
log.roll.ms = 604800000
This command produces no output.
Output::
{
"Arn": "arn:aws:kafka:us-west-2:123456789012:configuration/CustomConfiguration/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2",
"CreationTime": "2019-10-09T15:26:05.548Z",
"LatestRevision":
{
"CreationTime": "2019-10-09T15:26:05.548Z",
"Description": "Topic autocreation enabled; Apache ZooKeeper timeout 2000 ms; Log rolling 604800000 ms.",
"Revision": 1
},
"Name": "CustomConfiguration"
}
For more information, see `Amazon MSK Configuration Operations `__ in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.
awscli-1.17.14/awscli/examples/kafka/create-cluster.rst 0000644 0000000 0000000 00000002440 13620325556 022745 0 ustar root root 0000000 0000000 **To create an Amazon MSK cluster**
The following ``create-cluster`` example creates an MSK cluster named ``MessagingCluster`` with three broker nodes. A JSON file named ``brokernodegroupinfo.json`` specifies the three subnets over which you want Amazon MSK to distribute the broker nodes. This example doesn't specify the monitoring level, so the cluster gets the ``DEFAULT`` level. ::
aws kafka create-cluster \
--cluster-name "MessagingCluster" \
--broker-node-group-info file://brokernodegroupinfo.json \
--kafka-version "2.2.1" \
--number-of-broker-nodes 3
Contents of ``brokernodegroupinfo.json``::
{
"InstanceType": "kafka.m5.xlarge",
"BrokerAZDistribution": "DEFAULT",
"ClientSubnets": [
"subnet-0123456789111abcd",
"subnet-0123456789222abcd",
"subnet-0123456789333abcd"
]
}
Output::
{
"ClusterArn": "arn:aws:kafka:us-west-2:123456789012:cluster/MessagingCluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2",
"ClusterName": "MessagingCluster",
"State": "CREATING"
}
For more information, see `Create an Amazon MSK Cluster `__ in the *Amazon Managed Streaming for Apache Kafka*.
awscli-1.17.14/awscli/examples/kafka/update-cluster-configuration.rst 0000644 0000000 0000000 00000002651 13620325556 025635 0 ustar root root 0000000 0000000 **To update the configuration of an Amazon MSK cluster**
The following ``update-cluster-configuration`` example updates the configuration of the specified existing MSK cluster. It uses a custom MSK configuration. ::
aws kafka update-cluster-configuration \
--cluster-arn "arn:aws:kafka:us-west-2:123456789012:cluster/MessagingCluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2" \
--configuration-info file://configuration-info.json \
--current-version "K21V3IB1VIZYYH"
Contents of ``configuration-info.json``::
{
"Arn": "arn:aws:kafka:us-west-2:123456789012:configuration/CustomConfiguration/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2",
"Revision": 1
}
The output returns an ARN for this ``update-cluster-configuration`` operation. To determine if this operation is complete, use the ``describe-cluster-operation`` command with this ARN as input. ::
{
"ClusterArn": "arn:aws:kafka:us-west-2:123456789012:cluster/MessagingCluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2",
"ClusterOperationArn": "arn:aws:kafka:us-west-2:123456789012:cluster-operation/V123450123/a1b2c3d4-1234-abcd-cdef-22222EXAMPLE-2/a1b2c3d4-abcd-1234-bcde-33333EXAMPLE"
}
For more information, see `Update the Configuration of an Amazon MSK Cluster `__ in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.
awscli-1.17.14/awscli/examples/kafka/update-broker-storage.rst 0000644 0000000 0000000 00000002547 13620325556 024241 0 ustar root root 0000000 0000000 **To update the EBS storage for brokers**
The following ``update-broker-storage`` example updates the amount of EBS storage for all the brokers in the cluster. Amazon MSK sets the target storage amount for each broker to the amount specified in the example. You can get the current version of the cluster by describing the cluster or by listing all of the clusters. ::
aws kafka update-broker-storage \
--cluster-arn "arn:aws:kafka:us-west-2:123456789012:cluster/MessagingCluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2" \
--current-version "K21V3IB1VIZYYH" \
--target-broker-ebs-volume-info "KafkaBrokerNodeId=ALL,VolumeSizeGB=1100"
The output returns an ARN for this ``update-broker-storage`` operation. To determine if this operation is complete, use the ``describe-cluster-operation`` command with this ARN as input. ::
{
"ClusterArn": "arn:aws:kafka:us-west-2:123456789012:cluster/MessagingCluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-2",
"ClusterOperationArn": "arn:aws:kafka:us-west-2:123456789012:cluster-operation/V123450123/a1b2c3d4-1234-abcd-cdef-22222EXAMPLE-2/a1b2c3d4-abcd-1234-bcde-33333EXAMPLE"
}
For more information, see `Update the EBS Storage for Brokers `__ in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.
awscli-1.17.14/awscli/examples/cloud9/ 0000755 0000000 0000000 00000000000 13620325757 017416 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/cloud9/describe-environment-status.rst 0000644 0000000 0000000 00000000541 13620325554 025606 0 ustar root root 0000000 0000000 **To get status information for an AWS Cloud9 development environment**
This example gets status information for the specified AWS Cloud9 development environment.
Command::
aws cloud9 describe-environment-status --environment-id 685f892f431b45c2b28cb69eadcdb0EX
Output::
{
"status": "ready",
"message": "Environment is ready to use"
} awscli-1.17.14/awscli/examples/cloud9/list-environments.rst 0000644 0000000 0000000 00000000523 13620325554 023643 0 ustar root root 0000000 0000000 **To get a list of available AWS Cloud9 development environment identifiers**
This example gets a list of available AWS Cloud9 development environment identifiers.
Command::
aws cloud9 list-environments
Output::
{
"environmentIds": [
"685f892f431b45c2b28cb69eadcdb0EX",
"1980b80e5f584920801c09086667f0EX"
]
} awscli-1.17.14/awscli/examples/cloud9/create-environment-ec2.rst 0000644 0000000 0000000 00000001134 13620325554 024416 0 ustar root root 0000000 0000000 **To create an AWS Cloud9 EC2 development environment**
This example creates an AWS Cloud9 development environment with the specified settings, launches an Amazon Elastic Compute Cloud (Amazon EC2) instance, and then connects from the instance to the environment.
Command::
aws cloud9 create-environment-ec2 --name my-demo-env --description "My demonstration development environment." --instance-type t2.micro --subnet-id subnet-1fab8aEX --automatic-stop-time-minutes 60 --owner-arn arn:aws:iam::123456789012:user/MyDemoUser
Output::
{
"environmentId": "8a34f51ce1e04a08882f1e811bd706EX"
} awscli-1.17.14/awscli/examples/cloud9/update-environment-membership.rst 0000644 0000000 0000000 00000001234 13620325554 026120 0 ustar root root 0000000 0000000 **To change the settings of an existing environment member for an AWS Cloud9 development environment**
This example changes the settings of the specified existing environment member for the specified AWS Cloud9 development environment.
Command::
aws cloud9 update-environment-membership --environment-id 8a34f51ce1e04a08882f1e811bd706EX --user-arn arn:aws:iam::123456789012:user/AnotherDemoUser --permissions read-only
Output::
{
"membership": {
"environmentId": "8a34f51ce1e04a08882f1e811bd706EX",
"userId": "AIDAJ3LOROMOUXTBSU6EX",
"userArn": "arn:aws:iam::123456789012:user/AnotherDemoUser",
"permissions": "read-only"
}
} awscli-1.17.14/awscli/examples/cloud9/delete-environment-membership.rst 0000644 0000000 0000000 00000000562 13620325554 026103 0 ustar root root 0000000 0000000 **To delete an environment member from an AWS Cloud9 development environment**
This example deletes the specified environment member from the specified AWS Cloud9 development environment.
Command::
aws cloud9 delete-environment-membership --environment-id 8a34f51ce1e04a08882f1e811bd706EX --user-arn arn:aws:iam::123456789012:user/AnotherDemoUser
Output::
None. awscli-1.17.14/awscli/examples/cloud9/create-environment-membership.rst 0000644 0000000 0000000 00000001144 13620325554 026101 0 ustar root root 0000000 0000000 **To add an environment member to an AWS Cloud9 development environment**
This example adds the specified environment member to the specified AWS Cloud9 development environment.
Command::
aws cloud9 create-environment-membership --environment-id 8a34f51ce1e04a08882f1e811bd706EX --user-arn arn:aws:iam::123456789012:user/AnotherDemoUser --permissions read-write
Output::
{
"membership": {
"environmentId": "8a34f51ce1e04a08882f1e811bd706EX",
"userId": "AIDAJ3LOROMOUXTBSU6EX",
"userArn": "arn:aws:iam::123456789012:user/AnotherDemoUser",
"permissions": "read-write"
}
} awscli-1.17.14/awscli/examples/cloud9/delete-environment.rst 0000644 0000000 0000000 00000000505 13620325554 023747 0 ustar root root 0000000 0000000 **To delete an AWS Cloud9 development environment**
This example deletes the specified AWS Cloud9 development environment. If an Amazon EC2 instance is connected to the environment, also terminates the instance.
Command::
aws cloud9 delete-environment --environment-id 8a34f51ce1e04a08882f1e811bd706EX
Output::
None. awscli-1.17.14/awscli/examples/cloud9/describe-environments.rst 0000644 0000000 0000000 00000002130 13620325554 024444 0 ustar root root 0000000 0000000 **To get information about AWS Cloud9 development environments**
This example gets information about the specified AWS Cloud9 development environments.
Command::
aws cloud9 describe-environments --environment-ids 685f892f431b45c2b28cb69eadcdb0EX 349c86d4579e4e7298d500ff57a6b2EX
Output::
{
"environments": [
{
"id": "685f892f431b45c2b28cb69eadcdb0EX",
"name": "my-demo-ec2-env",
"description": "Created from CodeStar.",
"type": "ec2",
"arn": "arn:aws:cloud9:us-east-1:123456789012:environment:685f892f431b45c2b28cb69eadcdb0EX",
"ownerArn": "arn:aws:iam::123456789012:user/MyDemoUser",
"lifecycle": {
"status": "CREATED"
}
},
{
"id": "349c86d4579e4e7298d500ff57a6b2EX",
"name": my-demo-ssh-env",
"description": "",
"type": "ssh",
"arn": "arn:aws:cloud9:us-east-1:123456789012:environment:349c86d4579e4e7298d500ff57a6b2EX",
"ownerArn": "arn:aws:iam::123456789012:user/MyDemoUser",
"lifecycle": {
"status": "CREATED"
}
}
]
} awscli-1.17.14/awscli/examples/cloud9/update-environment.rst 0000644 0000000 0000000 00000000605 13620325554 023770 0 ustar root root 0000000 0000000 **To change the settings of an existing AWS Cloud9 development environment**
This example changes the specified settings of the specified existing AWS Cloud9 development environment.
Command::
aws cloud9 update-environment --environment-id 8a34f51ce1e04a08882f1e811bd706EX --name my-changed-demo-env --description "My changed demonstration development environment."
Output::
None. awscli-1.17.14/awscli/examples/cloud9/describe-environment-memberships.rst 0000644 0000000 0000000 00000004310 13620325554 026577 0 ustar root root 0000000 0000000 **To gets information about environment members for an AWS Cloud9 development environment**
This example gets information about environment members for the specified AWS Cloud9 development environment.
Command::
aws cloud9 describe-environment-memberships --environment-id 8a34f51ce1e04a08882f1e811bd706EX
Output::
{
"memberships": [
{
"environmentId": "8a34f51ce1e04a08882f1e811bd706EX",
"userId": "AIDAJ3LOROMOUXTBSU6EX",
"userArn": "arn:aws:iam::123456789012:user/AnotherDemoUser",
"permissions": "read-write"
},
{
"environmentId": "8a34f51ce1e04a08882f1e811bd706EX",
"userId": "AIDAJNUEDQAQWFELJDLEX",
"userArn": "arn:aws:iam::123456789012:user/MyDemoUser",
"permissions": "owner"
}
]
}
**To get information about the owner of an AWS Cloud9 development environment**
This example gets information about the owner of the specified AWS Cloud9 development environment.
Command::
aws cloud9 describe-environment-memberships --environment-id 8a34f51ce1e04a08882f1e811bd706EX --permissions owner
Output::
{
"memberships": [
{
"environmentId": "8a34f51ce1e04a08882f1e811bd706EX",
"userId": "AIDAJNUEDQAQWFELJDLEX",
"userArn": "arn:aws:iam::123456789012:user/MyDemoUser",
"permissions": "owner"
}
]
}
**To get information about an environment member for multiple AWS Cloud9 development environments**
This example gets information about the specified environment member for multiple AWS Cloud9 development environments.
Command::
aws cloud9 describe-environment-memberships --user-arn arn:aws:iam::123456789012:user/MyDemoUser
Output::
{
"memberships": [
{
"environmentId": "10a75714bd494714929e7f5ec4125aEX",
"lastAccess": 1516213427.0,
"userId": "AIDAJNUEDQAQWFELJDLEX",
"userArn": "arn:aws:iam::123456789012:user/MyDemoUser",
"permissions": "owner"
},
{
"environmentId": "1980b80e5f584920801c09086667f0EX",
"lastAccess": 1516144884.0,
"userId": "AIDAJNUEDQAQWFELJDLEX",
"userArn": "arn:aws:iam::123456789012:user/MyDemoUser",
"permissions": "owner"
}
]
} awscli-1.17.14/awscli/examples/iotthingsgraph/ 0000755 0000000 0000000 00000000000 13620325757 021251 5 ustar root root 0000000 0000000 awscli-1.17.14/awscli/examples/iotthingsgraph/delete-namespace.rst 0000644 0000000 0000000 00000001012 13620325556 025166 0 ustar root root 0000000 0000000 **To delete a namespace**
The following ``delete-namespace`` example deletes a namespace. ::
aws iotthingsgraph delete-namespace
Output::
{
"namespaceArn": "arn:aws:iotthingsgraph:us-west-2:123456789012",
"namespaceName": "us-west-2/123456789012/default"
}
For more information, see `Lifecycle Management for AWS IoT Things Graph Entities, Flows, Systems, and Deployments `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/create-flow-template.rst 0000644 0000000 0000000 00000001210 13620325556 026013 0 ustar root root 0000000 0000000 **To create a flow**
The following ``create-flow-template`` example creates a flow (workflow). The value of ``MyFlowDefinition`` is the GraphQL that models the flow. ::
aws iotthingsgraph create-flow-template \
--definition language=GRAPHQL,text="MyFlowDefinition"
Output::
{
"summary": {
"createdAt": 1559248067.545,
"id": "urn:tdm:us-west-2/123456789012/default:Workflow:MyFlow",
"revisionNumber": 1
}
}
For more information, see `Working with Flows `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/search-flow-executions.rst 0000644 0000000 0000000 00000001647 13620325556 026406 0 ustar root root 0000000 0000000 **To search for flow executions**
The following ``search-flow-executions`` example search for all executions of a flow in a specified system instance. ::
aws iotthingsgraph search-flow-executions \
--system-instance-id "urn:tdm:us-west-2/123456789012/default:Deployment:Room218"
Output::
{
"summaries": [
{
"createdAt": 1559247540.656,
"flowExecutionId": "f6294f1e-b109-4bbe-9073-f451a2dda2da",
"flowTemplateId": "urn:tdm:us-west-2/123456789012/default:Workflow:MyFlow",
"status": "RUNNING ",
"systemInstanceId": "urn:tdm:us-west-2/123456789012/default:System:MySystem",
"updatedAt": 1559247540.656
}
]
}
For more information, see `Working with Systems and Flow Configurations `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/list-flow-execution-messages.rst 0000644 0000000 0000000 00000001511 13620325556 027524 0 ustar root root 0000000 0000000 **To get information about events in a flow execution**
The following ``list-flow-execution-messages`` example gets information about events in a flow execution. ::
aws iotthingsgraph list-flow-execution-messages \
--flow-execution-id "urn:tdm:us-west-2/123456789012/default:Workflow:SecurityFlow_2019-05-11T19:39:55.317Z_MotionSensor_69b151ad-a611-42f5-ac21-fe537f9868ad"
Output::
{
"messages": [
{
"eventType": "EXECUTION_STARTED",
"messageId": "f6294f1e-b109-4bbe-9073-f451a2dda2da",
"payload": "Flow execution started",
"timestamp": 1559247540.656
}
]
}
For more information, see `Working with Flows `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/dissociate-entity-from-thing.rst 0000644 0000000 0000000 00000000731 13620325556 027512 0 ustar root root 0000000 0000000 **To dissociate a thing from a device**
The following ``dissociate-entity-from-thing`` example dissociates a thing from a device. ::
aws iotthingsgraph dissociate-entity-from-thing \
--thing-name "MotionSensorName" \
--entity-type "DEVICE"
This command produces no output.
For more information, see `Creating and Uploading Models `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/deploy-system-instance.rst 0000644 0000000 0000000 00000001425 13620325556 026422 0 ustar root root 0000000 0000000 **To deploy a system instance**
The following ``delete-system-template`` example deploys a system instance. ::
aws iotthingsgraph deploy-system-instance \
--id "urn:tdm:us-west-2/123456789012/default:Deployment:Room218"
Output::
{
"summary": {
"arn": "arn:aws:iotthingsgraph:us-west-2:123456789012:Deployment:Room218",
"createdAt": 1559249776.254,
"id": "urn:tdm:us-west-2/123456789012/default:Deployment:Room218",
"status": "DEPLOYED_IN_TARGET",
"target": "CLOUD",
"updatedAt": 1559249776.254
}
}
For more information, see `Working with Systems and Flow Configurations `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/search-flow-templates.rst 0000644 0000000 0000000 00000001613 13620325556 026207 0 ustar root root 0000000 0000000 **To search for flows (or workflows)**
The following ``search-flow-templates`` example searches for all flows (workflows) that contain the Camera device model. ::
aws iotthingsgraph search-flow-templates \
--filters name="DEVICE_MODEL_ID",value="urn:tdm:aws/examples:DeviceModel:Camera"
Output::
{
"summaries": [
{
"id": "urn:tdm:us-west-2/123456789012/default:Workflow:MyFlow",
"revisionNumber": 1,
"createdAt": 1559247540.292
},
{
"id": "urn:tdm:us-west-2/123456789012/default:Workflow:SecurityFlow",
"revisionNumber": 3,
"createdAt": 1548283099.27
}
]
}
For more information, see `Working with Flows `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/update-flow-template.rst 0000644 0000000 0000000 00000001320 13620325556 026034 0 ustar root root 0000000 0000000 **To update a flow**
The following ``update-flow-template`` example updates a flow (workflow). The value of ``MyFlowDefinition`` is the GraphQL that models the flow. ::
aws iotthingsgraph update-flow-template \
--id "urn:tdm:us-west-2/123456789012/default:Workflow:MyFlow" \
--definition language=GRAPHQL,text="MyFlowDefinition"
Output::
{
"summary": {
"createdAt": 1559248067.545,
"id": "urn:tdm:us-west-2/123456789012/default:Workflow:MyFlow",
"revisionNumber": 2
}
}
For more information, see `Working with Flows `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/get-entities.rst 0000644 0000000 0000000 00000001753 13620325556 024407 0 ustar root root 0000000 0000000 **To get definitions for entities**
The following ``get-entities`` example gets a definition for a device model. ::
aws iotthingsgraph get-entities \
--ids "urn:tdm:aws/examples:DeviceModel:MotionSensor"
Output::
{
"descriptions": [
{
"id": "urn:tdm:aws/examples:DeviceModel:MotionSensor",
"type": "DEVICE_MODEL",
"createdAt": 1559256190.599,
"definition": {
"language": "GRAPHQL",
"text": "##\n# Specification of motion sensor devices interface.\n##\ntype MotionSensor @deviceModel(id: \"urn:tdm:aws/examples:deviceModel:MotionSensor\",\n capability: \"urn:tdm:aws/examples:capability:MotionSensorCapability\") {ignore:void}"
}
}
]
}
For more information, see `Creating and Uploading Models `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/untag-resource.rst 0000644 0000000 0000000 00000000774 13620325556 024753 0 ustar root root 0000000 0000000 **To remove a tag for a resource**
The following ``untag-resource`` example removes a tag for the specified resource. ::
aws iotthingsgraph untag-resource \
--resource-arn "arn:aws:iotthingsgraph:us-west-2:123456789012:Deployment/default/Room218" \
--tag-keys "Type"
This command produces no output.
For more information, see `Tagging Your AWS IoT Things Graph Resources `__ in the *AWS IoT Things Graph User Guide*.
awscli-1.17.14/awscli/examples/iotthingsgraph/get-system-template-revisions.rst 0000644 0000000 0000000 00000001450 13620325556 027731 0 ustar root root 0000000 0000000 **To get revision information about a system**
The following ``get-system-template-revisions`` example gets revision information about a system. ::
aws iotthingsgraph get-system-template-revisions \
--id "urn:tdm:us-west-2/123456789012/default:System:MySystem"
Output::
{
"summaries": [
{
"id": "urn:tdm:us-west-2/123456789012/default:System:MySystem",
"arn": "arn:aws:iotthingsgraph:us-west-2:123456789012:System/default/MySystem",
"revisionNumber": 1,
"createdAt": 1559247540.656
}
]
}
For more information, see `Working with Systems and Flow Configurations